AI/ML
-
Oct 8, 2025
Unlocking Local AI on Any GPU: Docker Model Runner Now with Vulkan Support
Run local LLMs on more GPUs with Docker Model Runner. New Vulkan support accelerates AMD, Intel, and integrated GPUs—auto-detects hardware with CPU fallback.
-
Oct 6, 2025
IBM Granite 4.0 Models Now Available on Docker Hub
Developers can now discover and run IBM’s latest open-source Granite 4.0 language models from the Docker Hub model catalog, and start building in minutes with Docker Model Runner. Granite 4.0 pairs strong, enterprise-ready performance with a lightweight footprint, so you can prototype locally and scale confidently. The Granite 4.0 family is designed for speed, flexibility,…
-
Oct 6, 2025
Llama.cpp Gets an Upgrade: Resumable Model Downloads
New: resumable GGUF downloads in llama.cpp. Learn how Docker Model Runner makes models versioned, shareable, and OCI-native for a seamless dev-to-prod workflow.
-
Oct 2, 2025
Fine-Tuning Local Models with Docker Offload and Unsloth
Learn how to fine-tune models locally with Docker Offload and Unsloth, and how smaller models can become practical assistants for real-world problems.
-
Sep 26, 2025
The Trust Paradox: When Your AI Gets Catfished
Learn how MCP prompt-injection exploits trusted tools—and how to defend with context isolation, AI behavior checks, and human-in-the-loop review.
-
Sep 25, 2025
Run, Test, and Evaluate Models and MCP Locally with Docker + Promptfoo
Learn how promptfoo and Docker help developers compare models, evaluate MCP servers, and even perform LLM red-teaming.
-
Sep 19, 2025
Beyond Containers: llama.cpp Now Pulls GGUF Models Directly from Docker Hub
Learn how llama.cpp is using Docker Hub as a powerful, versioned, and centralized repository for your AI models.
-
Sep 18, 2025
Build and Distribute AI Agents and Workflows with cagent
cagent is a new open-source project from Docker that makes it simple to build, run, and share AI agents without writing a single line of code. Instead of wrangling Python versions and dependencies when creating AI agents, you define your agent’s behavior, tools, and persona in a single YAML file, making it…