Eric Curtin
-
Unlocking Local AI on Any GPU: Docker Model Runner Now with Vulkan Support
Run local LLMs on more GPUs with Docker Model Runner. New Vulkan support accelerates AMD, Intel, and integrated GPUs, with automatic hardware detection and CPU fallback.
-
IBM Granite 4.0 Models Now Available on Docker Hub
Developers can now discover and run IBM’s latest open-source Granite 4.0 language models from the Docker Hub model catalog, and start building in minutes with Docker Model Runner. Granite 4.0 pairs strong, enterprise-ready performance with a lightweight footprint, so you can prototype locally and scale confidently. The Granite 4.0 family is designed for speed, flexibility,…
-
Llama.cpp Gets an Upgrade: Resumable Model Downloads
New: resumable GGUF downloads in llama.cpp. Learn how Docker Model Runner makes models versioned, shareable, and OCI-native for seamless dev-to-prod workflows.
-
Beyond Containers: llama.cpp Now Pulls GGUF Models Directly from Docker Hub
Learn how llama.cpp is using Docker Hub as a powerful, versioned, and centralized repository for your AI models.