Docker Model Runner
-
Jan 16, 2026
Making (Very) Small LLMs Smarter
Run tiny LLMs locally and still get helpful code: use vector search to feed the model the right snippets with Docker Model Runner, LangChainJS, and Nova.
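The retrieval-augmented pattern described above boils down to putting the retrieved snippets into the prompt before asking a small local model. A minimal sketch of that step, assuming Docker Model Runner's OpenAI-compatible endpoint on the default host port 12434 and the ai/smollm2 model; both are illustrative assumptions, not details from the post.

```typescript
// Sketch: prepend retrieved code snippets to the prompt for a small local model.
// The endpoint and model name are assumptions; adjust them to your own setup.
const BASE_URL = "http://localhost:12434/engines/v1";

async function askWithContext(question: string, snippets: string[]): Promise<string> {
  const context = snippets.map((s, i) => `Snippet ${i + 1}:\n${s}`).join("\n\n");

  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "ai/smollm2",
      messages: [
        { role: "system", content: "Answer using only the provided snippets." },
        { role: "user", content: `${context}\n\nQuestion: ${question}` },
      ],
    }),
  });

  const data = await res.json();
  return data.choices[0].message.content as string;
}
```

The smaller the model, the more the quality of the retrieved snippets matters, which is where the vector search discussed in the post comes in.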
-
Jan 15, 2026
OpenCode with Docker Model Runner for Private AI Coding
Configure OpenCode to use Docker Model Runner for a private, cost-aware coding assistant. Run models locally via an OpenAI-compatible API with full control.
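In practice, "OpenAI-compatible" means any client that lets you override the base URL can talk to Docker Model Runner, which is what makes pointing a coding assistant like OpenCode at it possible. A hedged sketch using the openai npm package; the port, placeholder API key, and model name are assumptions for illustration, and OpenCode's own configuration is covered in the post.

```typescript
import OpenAI from "openai";

// Point a standard OpenAI client at Docker Model Runner instead of a cloud API.
// The base URL assumes host-side TCP access on the default port 12434; the API
// key is a placeholder since no real key is needed for the local runner.
const client = new OpenAI({
  baseURL: "http://localhost:12434/engines/v1",
  apiKey: "not-needed",
});

const completion = await client.chat.completions.create({
  model: "ai/qwen2.5-coder", // any coding model you have pulled locally
  messages: [
    { role: "user", content: "Write a TypeScript function that reverses a string." },
  ],
});

console.log(completion.choices[0].message.content);
```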
-
Docker Captain · Dec 16, 2025
Develop and deploy voice AI apps using Docker
Build real-time voice agents with Docker. Use EchoKit, Model Runner, and the MCP Toolkit to run ASR/LLM/TTS locally or in the cloud.
-
Dec 16, 2025
Docker Model Runner now included with the Universal Blue family
Docker Model Runner now ships with Universal Blue (Aurora, Bluefin), delivering an out-of-the-box, GPU-ready AI development environment.
-
Dec 11, 2025
Docker Model Runner now supports vLLM on Windows
Run vLLM with GPU acceleration on Windows using Docker Model Runner and WSL2, bringing fast local AI inference to Windows.
-
Dec 5, 2025
Announcing vLLM v0.12.0, Ministral 3 and DeepSeek-V3.2 for Docker Model Runner
Run Ministral 3 and DeepSeek-V3.2 on Docker Model Runner with vLLM v0.12.0. Test-drive the latest open-weights models as soon as they’re released.
-
Dec 1, 2025
Run Embedding Models and Unlock Semantic Search with Docker Model Runner
In this guide, we’ll cover how to use embedding models for semantic search and how to run them with Docker Model Runner.
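The shape of that workflow: embed the documents and the query through the OpenAI-compatible /embeddings endpoint, then rank by cosine similarity. A minimal sketch, assuming the default port 12434 and ai/mxbai-embed-large as the model name; both are illustrative, and the guide itself walks through the details.

```typescript
// Sketch: semantic search with an embedding model served by Docker Model Runner.
// Endpoint and model name are assumptions; adjust to whatever model you have pulled.
const BASE_URL = "http://localhost:12434/engines/v1";
const EMBED_MODEL = "ai/mxbai-embed-large";

async function embed(texts: string[]): Promise<number[][]> {
  const res = await fetch(`${BASE_URL}/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: EMBED_MODEL, input: texts }),
  });
  const data = await res.json();
  return data.data.map((d: { embedding: number[] }) => d.embedding);
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function search(query: string, docs: string[]): Promise<string> {
  // Embed the query and all documents in one call, then return the best match.
  const [queryVec, ...docVecs] = await embed([query, ...docs]);
  const scored = docs.map((doc, i) => ({ doc, score: cosine(queryVec, docVecs[i]) }));
  scored.sort((a, b) => b.score - a.score);
  return scored[0].doc;
}
```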
-
Nov 20, 2025
Docker Model Runner Integrates vLLM for High-Throughput Inference
New: vLLM in Docker Model Runner brings high-throughput inference for safetensors models, with automatic engine routing on NVIDIA GPUs.