Ignasi Lopez Luna
Sr. Software Engineer, Docker
More by Ignasi
How to Use Multimodal AI Models With Docker Model Runner
Run multimodal AI models that understand text, images, and audio with Docker Model Runner. Explore CLI and API examples, run Hugging Face models, and try a real-time webcam vision demo.
Read now
LoRA Explained: Faster, More Efficient Fine-Tuning with Docker
LoRA is a method that freezes a base model and adds trainable adapters to teach pre-trained models new behaviors, without overwriting their existing knowledge.
Read now
Fine-Tuning Local Models with Docker Offload and Unsloth
Learn how to fine-tune models locally with Docker Offload and Unsloth and how smaller models can become practical assistants for real-world problems.
Read now
Hybrid AI Isn’t the Future — It’s Here (and It Runs in Docker Using the MinionS Protocol)
Learn how to use Docker Compose, Model Runner, and the MinionS protocol to deploy hybrid models.
Read now
Tool Calling with Local LLMs: A Practical Evaluation
Find the best local LLM for tool calling in your agentic applications with this carefully tested leaderboard from Docker.
Read now
Run Gemma 3 with Docker Model Runner: Fully Local GenAI Developer Experience
Explore how to run Gemma 3 models locally using Docker Model Runner, alongside a Comment Processing System as a practical case study.
Read now
How to Run Hugging Face Models Programmatically Using Ollama and Testcontainers
Learn how you can programmatically consume and run AI models from Hugging Face with Testcontainers and Ollama.
Read now
A Promising Methodology for Testing GenAI Applications in Java
Testing applications that incorporate AI can be difficult. In this article, we share a promising new methodology for testing GenAI applications in Java.
Read now