Ignasi Lopez Luna
- LoRA Explained: Faster, More Efficient Fine-Tuning with Docker. LoRA is a method that freezes a base model and adds trainable adapters to teach pre-trained models new behaviors without overwriting their existing knowledge.
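The core idea in the LoRA post above can be sketched in a few lines: the pre-trained weight matrix stays frozen, and only two small low-rank matrices are trained, so the adapted layer starts out identical to the base model. This is an illustrative sketch, not code from the article; all names (`W`, `A`, `B`, `forward`) and dimensions are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: an 8x8 frozen base weight and rank-2 adapters.
d_out, d_in, rank = 8, 8, 2

# Frozen pre-trained weight matrix (never updated during fine-tuning).
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank adapters. B starts at zero, so the initial
# low-rank correction B @ A is zero and the adapted layer reproduces
# the base model exactly before any training.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def forward(x):
    """Adapted layer: frozen base output plus the low-rank correction."""
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)

# At initialization the adapter changes nothing.
assert np.allclose(forward(x), W @ x)

# Only rank * (d_in + d_out) parameters are trained,
# instead of the full d_in * d_out.
print(rank * (d_in + d_out), "adapter params vs", d_in * d_out, "full")
```

Because only `A` and `B` receive gradients, the trainable parameter count scales with the rank rather than with the full weight matrix, which is where the speed and memory savings come from.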
- Fine-Tuning Local Models with Docker Offload and Unsloth. Learn how to fine-tune models locally with Docker Offload and Unsloth, and how smaller models can become practical assistants for real-world problems.
- Hybrid AI Isn’t the Future — It’s Here (and It Runs in Docker using the Minions protocol). Learn how to use Docker Compose, Model Runner, and the MinionS protocol to deploy hybrid models.
- Tool Calling with Local LLMs: A Practical Evaluation. Find the best local LLM for tool calling in your agentic applications with this carefully tested leaderboard from Docker.
- Run Gemma 3 with Docker Model Runner: Fully Local GenAI Developer Experience. Explore how to run Gemma 3 models locally using Docker Model Runner, alongside a Comment Processing System as a practical case study.
- How to Run Hugging Face Models Programmatically Using Ollama and Testcontainers. Learn how you can programmatically consume and run AI models from Hugging Face with Testcontainers and Ollama.
- A Promising Methodology for Testing GenAI Applications in Java. Testing applications that incorporate AI can be difficult. In this article, we share a promising new methodology for testing GenAI applications in Java.