Kevin Wittek
-
Powering Local AI Together: Docker Model Runner on Hugging Face
Developers can now use Docker Model Runner as the local inference engine for Hugging Face models and filter for Model Runner-supported models directly on Hugging Face!
Read now
-
Publishing AI models to Docker Hub
Learn about the new commands for Model Runner and how they can help you publish and share your own AI models on Docker Hub.
Read now
-
Run LLMs Locally with Docker: A Quickstart Guide to Model Runner
Learn how to easily pull and run LLMs locally on your machine with Model Runner. No infrastructure headaches, no complicated setup.
Read now
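The quickstart described above comes down to a few CLI commands. A minimal sketch, assuming Docker Desktop with Model Runner enabled; the model name `ai/smollm2` is illustrative and may differ in your setup:

```shell
# Download a model from Docker Hub's AI namespace to your machine
docker model pull ai/smollm2

# Confirm the model is available locally
docker model list

# Run a one-off prompt against the local model
docker model run ai/smollm2 "Give me a one-line fun fact about whales."
```

Running `docker model run` without a prompt typically drops you into an interactive chat session instead.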