Ignasi Lopez Luna
Senior Software Engineer, Docker
More by Ignasi
LoRA Explained: Faster, More Efficient Fine-Tuning with Docker
LoRA is a method that freezes a base model and adds trainable adapters to teach pre-trained models new behaviors, without overwriting their existing knowledge.
Fine-Tuning Local Models with Docker Offload and Unsloth
Learn how to fine-tune models locally with Docker Offload and Unsloth, and how smaller models can become practical assistants for real-world problems.
Hybrid AI Isn’t the Future — It’s Here (and It Runs in Docker using the Minions protocol)
Learn how to use Docker Compose, Model Runner, and the MinionS protocol to deploy hybrid models.
Tool Calling with Local LLMs: A Practical Evaluation
Find the best local LLM for tool calling to use in your agentic applications with this carefully tested leaderboard from Docker.
Run Gemma 3 with Docker Model Runner: Fully Local GenAI Developer Experience
Explore how to run Gemma 3 models locally using Docker Model Runner, alongside a Comment Processing System as a practical case study.