AI/ML
-
Sep 26, 2025
The Trust Paradox: When Your AI Gets Catfished
Learn how MCP prompt-injection exploits trusted tools—and how to defend with context isolation, AI behavior checks, and human-in-the-loop review.
Read now
-
Sep 25, 2025
Run, Test, and Evaluate Models and MCP Locally with Docker + Promptfoo
Learn how promptfoo and Docker help developers compare models, evaluate MCP servers, and even perform LLM red-teaming.
Read now
-
Sep 19, 2025
Beyond Containers: llama.cpp Now Pulls GGUF Models Directly from Docker Hub
Learn how llama.cpp is using Docker Hub as a powerful, versioned, and centralized repository for your AI models.
Read now
-
Sep 18, 2025
Build and Distribute AI Agents and Workflows with cagent
cagent is a new open-source project from Docker that makes it simple to build, run, and share AI agents without writing a single line of code. Instead of wrangling Python versions and dependencies when creating AI agents, you define your agent’s behavior, tools, and persona in a single YAML file, making it…
Read now
-
Sep 18, 2025
Docker Model Runner General Availability
Docker Model Runner offers a new way for developers to manage, run, and share local AI models with cutting-edge features and more on the way.
Read now
-
Sep 16, 2025
MCP Security: A Developer’s Guide
MCP security refers to the controls and risks that govern how agents discover, connect to, and execute tools exposed by MCP servers.
Read now
-
Sep 15, 2025
The Nine Rules of AI PoC Success: How to Build Demos That Actually Ship
Build AI PoCs that ship. Use remocal workflows, start small, design for production, track costs, and involve users to move from demo to dependable deployment.
Read now
-
Sep 10, 2025
From Hallucinations to Prompt Injection: Securing AI Workflows at Runtime
Stop LLM mishaps before production. Secure AI agents at runtime with Docker Desktop, Docker Scout, hardened images, and policies against prompt injection.
Read now