Eric Curtin
Principal Software Engineer, Docker
Announcing vLLM v0.12.0, Ministral 3 and DeepSeek-V3.2 for Docker Model Runner
Run Ministral 3 and DeepSeek-V3.2 on Docker Model Runner with vLLM 0.12. Test-drive the latest open-weights models as soon as they’re released.
Read now
Level Up Your Local AI Workflows with Model Runner
Learn how to run multi-modal models on virtually any GPU, streamline model distribution, and simplify collaboration across teams with Docker Model Runner.
Watch video
Docker Model Runner Integrates vLLM for High-Throughput Inference
New: vLLM in Docker Model Runner. High-throughput inference for safetensors models with auto engine routing for NVIDIA GPUs using Docker.
Read now
Introducing a Richer “docker model run” Experience
New interactive prompt for docker model run: readline-style editing, history, multi-line input, and Ctrl+C to stop responses. Try it today!
Read now
Join Us in Revitalizing the Docker Model Runner Community!
Docker Model Runner is GA, now in all Docker versions with Vulkan support for nearly any GPU. Star, fork, and contribute in our unified repo.
Read now
Unlocking Local AI on Any GPU: Docker Model Runner Now with Vulkan Support
Run local LLMs on more GPUs with Docker Model Runner. New Vulkan support accelerates AMD, Intel, and integrated GPUs—auto-detects hardware with CPU fallback.
Read now
IBM Granite 4.0 Models Now Available on Docker Hub
Developers can now discover and run IBM’s latest open-source Granite 4.0 language models from the Docker Hub model catalog, and start building in minutes with Docker Model Runner. Granite 4.0 pairs strong, enterprise-ready performance with a lightweight footprint, so you can prototype locally and scale confidently. The Granite 4.0 family is designed for speed, flexibility,…
Read now
Llama.cpp Gets an Upgrade: Resumable Model Downloads
New: resumable GGUF downloads in llama.cpp. Learn how Docker Model Runner makes models versioned, shareable, and OCI-native for seamless dev-to-prod.
Read now
Beyond Containers: llama.cpp Now Pulls GGUF Models Directly from Docker Hub
Learn how llama.cpp is using Docker Hub as a powerful, versioned, and centralized repository for your AI models.
Read now
