AI/ML
-
Oct 14, 2025
Build a Multi-Agent System in 5 Minutes with cagent
Learn what a multi-agent system is and how to build one in minutes using Docker cagent.
Read now
-
Oct 14, 2025
Join Us in Revitalizing the Docker Model Runner Community!
Docker Model Runner is GA, now in all Docker versions with Vulkan support for nearly any GPU. Star, fork, and contribute in our unified repo.
Read now
-
Oct 13, 2025
Docker Model Runner on the new NVIDIA DGX Spark: a new paradigm for developing AI locally
We’re thrilled to bring NVIDIA DGX™ Spark support to Docker Model Runner. The new NVIDIA DGX Spark delivers incredible performance, and Docker Model Runner makes it accessible. With Model Runner, you can easily run and iterate on larger models right on your local machine, using the same intuitive Docker experience you already trust. In this…
Read now
-
Oct 9, 2025
LoRA Explained: Faster, More Efficient Fine-Tuning with Docker
LoRA is a fine-tuning method that freezes a pre-trained base model and adds small trainable adapters, teaching the model new behaviors without overwriting its existing knowledge.
Read now
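The idea in the excerpt above can be sketched in a few lines. This is a toy NumPy illustration of the LoRA mechanism, not code from the article: the frozen weight `W` stays fixed while only two low-rank matrices `A` and `B` would be trained, and the effective weight becomes `W + (alpha/r) * B @ A`. The dimensions and scaling factor here are arbitrary choices for the sketch.

```python
import numpy as np

# Toy LoRA sketch: W is the frozen pre-trained weight; only the
# low-rank adapters A and B would receive gradient updates.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4   # illustrative sizes, not from the article

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def forward(x):
    # The adapter's output is added to the frozen path. Because B starts
    # at zero, the adapted model initially behaves exactly like the base.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(forward(x), W @ x)  # at init: identical to the base model
```

Because `B` is zero-initialized, training starts from the base model's behavior and only the `2 * r * d` adapter parameters change, which is what makes the approach lightweight.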
-
Oct 8, 2025
Unlocking Local AI on Any GPU: Docker Model Runner Now with Vulkan Support
Run local LLMs on more GPUs with Docker Model Runner. New Vulkan support accelerates AMD, Intel, and integrated GPUs, with automatic hardware detection and CPU fallback.
Read now
-
Oct 6, 2025
IBM Granite 4.0 Models Now Available on Docker Hub
Developers can now discover and run IBM’s latest open-source Granite 4.0 language models from the Docker Hub model catalog, and start building in minutes with Docker Model Runner. Granite 4.0 pairs strong, enterprise-ready performance with a lightweight footprint, so you can prototype locally and scale confidently. The Granite 4.0 family is designed for speed, flexibility,…
Read now
-
Oct 6, 2025
Llama.cpp Gets an Upgrade: Resumable Model Downloads
New: resumable GGUF downloads in llama.cpp. Learn how Docker Model Runner makes models versioned, shareable, and OCI-native for a seamless dev-to-prod workflow.
Read now
-
Oct 2, 2025
Fine-Tuning Local Models with Docker Offload and Unsloth
Learn how to fine-tune models locally with Docker Offload and Unsloth, and how smaller models can become practical assistants for real-world problems.
Read now