The Docker Desktop 4.26 release delivers the latest Rosetta optimizations for Docker Desktop and boosts developer productivity by resolving common issues such as Node.js freezes and PHP segmentation faults.
Empowering Data-Driven Development: Docker’s collaboration with Snowflake and Docker AI Advancements
Learn how Docker, in collaboration with Snowflake, boosts developer productivity when you leverage the power of Docker Desktop, Docker AI, and Snowpark Container Services.
Announcing Builds View in Docker Desktop GA
Now generally available, the Builds view feature in Docker Desktop provides detailed insight into your build performance and usage.
Announcing the Docker AI/ML Hackathon 2023 Winners
Learn about the winners of the recent Docker AI/ML Hackathon, which encouraged participants to build innovative solutions with Docker technology.
Accelerating Developer Velocity with Microsoft Dev Box and Docker Desktop
We’re pleased to announce our partnership with the Microsoft Dev Box team to streamline developer onboarding, environment set-up, security, and administration with Docker Desktop.
The Livecycle Docker Extension: Instantly Share Changes and Get Feedback in Context
Livecycle’s Docker Extension makes it easy to share your work in progress and collaborate with your team. We provide step-by-step instructions for getting started with the extension.
How JW Player Secured 300 Repos in an Hour with Docker Scout
For companies like JW Player, whose core business revolves around streaming, content, and infrastructure, security must be a priority without slowing delivery or disrupting operations. Learn how JW Player uses Docker to meet these challenges, including how the team enabled more than 300 repositories for Docker Scout within just one hour.
Achieve Security and Compliance Goals with Policy Guardrails in Docker Scout
We show how Docker Scout policies enable teams to identify, prioritize, and fix their software quality issues at the point of creation.
LLM Everywhere: Docker for Local and Hugging Face Hosting
We show how to use the Hugging Face-hosted AI/ML Llama model in a Docker context, which makes it easier to deploy advanced language models for a variety of applications.
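Because that post is a how-to, here is a minimal, hypothetical Python sketch of the core idea: loading a Hugging Face-hosted Llama checkpoint with the transformers library, as you might do inside a container. The meta-llama/Llama-2-7b-chat-hf model ID, the HF_TOKEN environment variable, and the transformers/torch dependencies are illustrative assumptions, not the article's exact setup.

# Minimal sketch (assumptions noted above); run, for example, inside a python:3.11 container
# after installing the dependencies: pip install transformers torch
import os
from transformers import pipeline

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical checkpoint choice

# Build a text-generation pipeline; the token grants access to the gated Llama weights.
generator = pipeline(
    "text-generation",
    model=MODEL_ID,
    token=os.environ.get("HF_TOKEN"),
)

prompt = "Explain what a Docker image is in one sentence."
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])

Packaging a script like this together with its dependencies in an image is what makes the same model run unchanged on a laptop or in a hosted environment.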