AI/ML
-
Dec 11, 2025
Docker Model Runner now supports vLLM on Windows
Run vLLM with GPU acceleration on Windows using Docker Model Runner and WSL2. Fast AI inference is here.
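Under the hood, Docker Model Runner exposes an OpenAI-compatible HTTP API, so any standard client works once a model is pulled. A minimal sketch in Python, assuming host-side TCP access is enabled on the default port 12434 and that the model tag shown is one you have pulled:

```python
import requests

# Docker Model Runner's OpenAI-compatible endpoint.
# Assumption: host-side TCP access is enabled on the default port 12434.
BASE_URL = "http://localhost:12434/engines/v1"

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "ai/smollm2",  # example tag; use any model you have pulled
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```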
-
Docker Captain · Dec 11, 2025
Breaking Free From AI Vendor Lock-in: Integrating GitHub Models with Docker cagent
See how Docker cagent integrates with GitHub Models to build and ship multi-agent apps without vendor lock-in.
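The lock-in point cuts both ways: because GitHub Models also speaks an OpenAI-compatible dialect, switching providers is mostly a base-URL change. A hedged sketch, where the endpoint and model ID are assumptions based on GitHub Models' published API and `GITHUB_TOKEN` is a personal access token:

```python
import os
import requests

# Same request shape as the local Model Runner call; only the base URL
# and auth change. Endpoint and model ID are assumptions based on
# GitHub Models' OpenAI-compatible API.
resp = requests.post(
    "https://models.github.ai/inference/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    json={
        "model": "openai/gpt-4o-mini",
        "messages": [{"role": "user", "content": "What is Docker cagent?"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```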
-
Dec 5, 2025
Docker, JetBrains, and Zed: Building a Common Language for Agents and IDEs
As agents become capable enough to write and refactor code, they should work natively inside the environments developers work in: editors. That’s why JetBrains and Zed are co-developing ACP, the Agent Client Protocol. ACP gives agents and editors a shared language, so any agent can read context, take actions, and respond intelligently without bespoke wiring…
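ACP runs JSON-RPC over the agent's stdio. The sketch below shows only the general shape of such a handshake; the agent binary name and the exact method and parameter names are illustrative assumptions, not the published ACP schema:

```python
import json
import subprocess

# Illustrative only: ACP is JSON-RPC over the agent's stdio.
# "my-acp-agent" is a hypothetical binary, and the method and parameter
# names are assumptions, not the published ACP schema.
agent = subprocess.Popen(
    ["my-acp-agent"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

request = {"jsonrpc": "2.0", "id": 1, "method": "initialize",
           "params": {"protocolVersion": 1}}
agent.stdin.write(json.dumps(request) + "\n")
agent.stdin.flush()
print(agent.stdout.readline())  # the agent's JSON-RPC reply
```

A shared wire format like this is what lets one agent plug into any editor that speaks the protocol, instead of maintaining one integration per IDE.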
-
Dec 5, 2025
Announcing vLLM v0.12.0, Ministral 3 and DeepSeek-V3.2 for Docker Model Runner
Run Ministral 3 and DeepSeek-V3.2 on Docker Model Runner with vLLM 0.12. Test-drive the latest open-weights models as soon as they’re released.
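Once new models are pulled, Model Runner's OpenAI-compatible `/models` endpoint reports what is available. A small check in Python, assuming the default host-side port 12434:

```python
import requests

# List the models Docker Model Runner currently serves.
# Assumption: default host-side TCP configuration on port 12434.
resp = requests.get("http://localhost:12434/engines/v1/models", timeout=30)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])
```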
-
Dec 3, 2025
Securing the Docker MCP Catalog: Commit Pinning, Agentic Auditing, and Publisher Trust Levels
Learn how we’re enhancing trust in the MCP ecosystem with commit pinning, AI-audited updates, and publisher trust levels within the Docker MCP catalog.
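Commit pinning ties a published MCP server to an immutable commit SHA instead of a mutable branch or tag. A conceptual sketch of the verification idea, where the repository path and SHA are hypothetical and the catalog's actual mechanism may differ:

```python
import subprocess

# Conceptual sketch of commit pinning: build from an immutable SHA,
# then verify the checkout matches it. Repo path and SHA are
# hypothetical; the catalog's actual mechanism may differ.
PINNED_SHA = "0123456789abcdef0123456789abcdef01234567"

head = subprocess.run(
    ["git", "-C", "/path/to/server-repo", "rev-parse", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

if head != PINNED_SHA:
    raise RuntimeError(f"HEAD {head} does not match pinned commit {PINNED_SHA}")
```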
-
Dec 1, 2025
Run Embedding Models and Unlock Semantic Search with Docker Model Runner
In this guide, we’ll cover how to use embedding models for semantic search and how to run them with Docker Model Runner.
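The core loop of that guide is simple: embed documents and a query through the OpenAI-compatible `/embeddings` endpoint, then rank by cosine similarity. A self-contained sketch, assuming the default port 12434 and an embedding model tag you have pulled (the tag below is an example):

```python
import math
import requests

# Semantic search against Docker Model Runner's OpenAI-compatible
# embeddings endpoint. Port, path, and the model tag are assumptions
# based on the default configuration.
BASE_URL = "http://localhost:12434/engines/v1"
docs = ["How to cache Docker layers", "Deploying with Compose", "GPU inference tips"]
query = "speed up docker builds"

def embed(texts):
    resp = requests.post(
        f"{BASE_URL}/embeddings",
        json={"model": "ai/mxbai-embed-large", "input": texts},
        timeout=60,
    )
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

doc_vecs = embed(docs)
[q_vec] = embed([query])
best = max(zip(docs, doc_vecs), key=lambda dv: cosine(q_vec, dv[1]))
print("Best match:", best[0])
```

For small corpora a linear scan like this is fine; a vector index only becomes worthwhile as the document count grows.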
-
Nov 25, 2025
A New Approach for Coding Agent Safety
Coding agents like Claude Code, Gemini CLI, Codex, Kiro, and OpenCode are changing how developers work. But as these agents become more autonomous with capabilities like deleting repos, modifying files, and accessing secrets, developers face a real problem: how do you give agents enough access to be useful without adding unnecessary risk to your local…
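One common least-privilege pattern, sketched here with the Docker SDK for Python rather than as the post's specific mechanism: run the agent in a container with no network, a read-only root filesystem, and only the project directory mounted. Image name and command are placeholders:

```python
import docker

# Generic least-privilege sketch, not necessarily the post's mechanism:
# no network, read-only root filesystem, and only the project mounted.
# The image name and command are placeholders.
client = docker.from_env()
logs = client.containers.run(
    image="my-coding-agent:latest",
    command=["agent", "--task", "refactor"],
    volumes={"/home/me/project": {"bind": "/workspace", "mode": "rw"}},
    working_dir="/workspace",
    network_mode="none",  # the agent cannot exfiltrate secrets
    read_only=True,       # the agent cannot modify the image
    remove=True,
)
print(logs.decode())
```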
-
Nov 20, 2025
Docker Model Runner Integrates vLLM for High-Throughput Inference
New: vLLM in Docker Model Runner brings high-throughput inference for safetensors models, with automatic engine routing on NVIDIA GPUs.
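The routing idea, as described: pick the inference engine from the model's packaging format, so safetensors checkpoints go to vLLM while GGUF files stay on the default llama.cpp engine. A purely conceptual sketch; the function and return values are illustrative, not Model Runner's internal API:

```python
# Conceptual sketch of format-based engine routing, as the post frames it.
# Names are illustrative; this is not Model Runner's internal API.
def pick_engine(model_file: str) -> str:
    if model_file.endswith(".safetensors"):
        return "vllm"       # high-throughput engine for safetensors
    if model_file.endswith(".gguf"):
        return "llama.cpp"  # default engine for GGUF quantizations
    raise ValueError(f"unrecognized model format: {model_file}")

print(pick_engine("model-00001-of-00002.safetensors"))  # -> vllm
```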