Docker Model Runner
-
Powering Local AI Together: Docker Model Runner on Hugging Face
Developers can use Docker Model Runner as the local inference engine for models hosted on Hugging Face, and filter the Hub to find Model Runner-supported models!
-
AI-Powered Testing: Using Docker Model Runner with Microcks for Dynamic Mock APIs
Learn how to create AI-enhanced mock APIs for testing with Docker Model Runner and Microcks. Generate dynamic, realistic test data locally for faster dev cycles.
-
Build a GenAI App With Java Using Spring AI and Docker Model Runner
Build a GenAI app in Java using Spring AI, Docker Model Runner, and Testcontainers. Follow this step-by-step guide to get started.
-
Tool Calling with Local LLMs: A Practical Evaluation
Find the best local LLM for tool calling to use in your agentic applications with this carefully tested leaderboard from Docker.
-
Building an Easy Private AI Assistant with Goose and Docker Model Runner
Learn how to build your own AI assistant that’s private, scriptable, and capable of powering real developer workflows.
-
Why Docker Chose OCI Artifacts for AI Model Packaging
Explore why Docker chose OCI artifacts to package and share AI models. Standardized, flexible, and made for developers.
-
Behind the scenes: How we designed Docker Model Runner and what’s next
Get an inside look at Docker’s first major step into the AI development space and a preview of what’s ahead for Model Runner.
-
Publishing AI models to Docker Hub
Learn about the new commands for Model Runner and how they can help you publish and share your own AI models on Docker Hub.