Guest Blog: Deciding Between Docker Desktop and a DIY Solution

Guest author Ben Hall is the lead C# .NET developer at gov.uk (the United Kingdom’s public sector information website) and a .NET Foundation member. He worked for nine years as a school teacher, covering programming and computer science. Ben enjoys making complex topics accessible and practical for busy developers.

At the heart of the Docker experience is Docker Engine. Docker Desktop’s ready-to-use solution for building containerized applications includes Docker Engine and all the other tooling and setup you need to start developing right away.

Developers can create a “DIY” Docker implementation around Docker Engine manually. Some organizations may prefer the flexibility and control of doing it themselves. But opting for a DIY Docker Engine solution requires much more engineering, development, and setup. Docker and its Windows companion, WSL, are relatively complex, so the DIY approach isn’t for everyone.

In this article, we’ll help you decide which approach is right for you and your organization. To illustrate, we’ll draw comparisons between what Docker Desktop offers and a DIY Docker setup on Windows.

Setting Up Docker on Windows

This article on failingfast.io describes the main steps of a manual installation on Windows: creating a WSL 2 distro, setting up Docker’s package repository, and installing Docker Engine inside the distro. The process is a bit fragile, so prepare for some troubleshooting before you’re up and running. And it’s only a guide to getting started; most use cases will need further setup (sketched after the list below), including:

  • Configuring Docker to start on boot
  • Logging
  • Accepting connections to the Docker daemon from remote hosts
  • Configuring remote access
  • Fixing IP forwarding problems
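
For a flavor of the DIY route, here is a minimal sketch of the install plus two of the follow-up items above, assuming a Debian-based WSL 2 distro (commands follow Docker’s official install docs; adapt them to your distro):

    # Add Docker's apt repository and install the engine (Debian/Ubuntu assumed).
    sudo apt-get update
    sudo apt-get install -y ca-certificates curl gnupg lsb-release
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
      sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list
    sudo apt-get update
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io

    # Follow-up items such as logging and remote access live in /etc/docker/daemon.json.
    # Note: exposing a TCP socket without TLS is insecure; this is illustrative only.
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "log-driver": "json-file",
      "log-opts": { "max-size": "10m", "max-file": "3" },
      "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"]
    }
    EOF
    sudo service docker restart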

Setting up Docker Desktop is a very different experience. You simply download and run the latest Docker Desktop installer, which completes all the setup automatically. You’re up and running in a few minutes, ready to deploy containers.
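
If you prefer a scripted install, winget can fetch Docker Desktop too (the package ID below is an assumption; confirm it with winget search docker):

    # Install Docker Desktop from the winget repository (run in PowerShell).
    winget install --exact --id Docker.DockerDesktop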

Cutting Edge and Stable

Docker Desktop and the DIY implementation that we linked to share a common foundation: Windows Subsystem for Linux (WSL) 2, which enables developers to run a Linux environment directly on Windows.

WSL 2 significantly improved memory use, code execution, and compatibility. It achieved this through an architectural shift to a full Linux kernel, which supports Linux containers running natively, without emulation.
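
On the Windows side, you can confirm that a distro is running under WSL 2, and convert it if not, using wsl.exe (the distro name Ubuntu is just an example):

    # List installed distros and the WSL version each one uses (PowerShell or cmd).
    wsl -l -v
    # Convert an existing distro to WSL 2 so containers run against the full Linux kernel.
    wsl --set-version Ubuntu 2
    # Make WSL 2 the default for any distros installed later.
    wsl --set-default-version 2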

Working closely with Microsoft and the Windows Insider program, Docker was quick to adopt this emerging technology as the primary backend for Docker Desktop, releasing a technical preview well before WSL 2 reached general availability in Windows. Every effort was also made to maintain feature parity with the previous Hyper-V-based version.

We can add Docker Desktop to our developer tooling, confident that it will continue to support the latest technology while avoiding breaking changes to the experience we are accustomed to.

Software Updates

Docker Desktop manages everything, from setup through to future kernel patches. And because it’s a complete bundle, automatic software updates keep all the bundled tools up to date and secure, including Docker Engine itself. That’s one less machine image to manage in-house!


With a DIY Docker setup, it’s up to you to keep up with all security patches and other updates. A DIY solution will also generate plenty of ongoing problems to solve. So be sure to multiply those developer hours across a large organization when calculating the ROI of Docker Desktop.

Networking

Docker Desktop will automatically propagate configured HTTP/HTTPS proxy settings to Docker to use when pulling images.
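
With a DIY engine, you’d wire the proxies up yourself. One common mechanism is the Docker client config file, which passes proxy settings into builds and containers (a sketch; the proxy URL is a placeholder, and the daemon itself also needs HTTP_PROXY/HTTPS_PROXY in its environment to pull images through a proxy):

    # Configure proxy settings for the Docker CLI in ~/.docker/config.json.
    mkdir -p ~/.docker
    tee ~/.docker/config.json <<'EOF'
    {
      "proxies": {
        "default": {
          "httpProxy": "http://proxy.example.com:3128",
          "httpsProxy": "http://proxy.example.com:3128",
          "noProxy": "localhost,127.0.0.1"
        }
      }
    }
    EOF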

It will also function properly when attached to a VPN. It achieves this by intercepting traffic from containers and injecting it into Windows as if it originated from the Docker application itself.

Pause and Resume

This feature was requested by a user on the public roadmap for Docker Desktop. It’s not the biggest feature ever, but it’s another reminder that Docker Desktop is under active development, continually improved in response to user feedback through monthly releases.

Users can now pause a Docker Desktop session to reduce CPU usage and conserve battery life. When paused, the current state of all your containers is saved in memory and all processes are frozen.

Volume Management

Volumes are the standard approach to persisting any data that Docker containers work with, including files shared between containers. Unlike bind mounts, which work directly with host machine files, volumes are managed by Docker, offering several advantages.

You’ll face two big challenges when working with Docker volumes manually in the Docker CLI:

  • It can be difficult to identify which container each volume belongs to, so clearing up old volumes can be a slow process.
  • Transferring content in and out of volumes is more convoluted than it needs to be (see the sketch after this list).
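
To make those pain points concrete, here’s what each task typically looks like in the CLI (volume, container, and file names are illustrative):

    # Finding and clearing unused volumes: list dangling volumes, then prune them.
    docker volume ls --filter dangling=true
    docker volume prune

    # Copying a file out of a volume: there's no direct command, so the usual trick
    # is to mount the volume into a throwaway container and copy through it.
    docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine \
      cp /data/app.log /backup/app.log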

Docker Desktop addresses this with a Volumes view in the Dashboard. In this view, you can:

  • Easily identify which volumes are being used 
  • See which containers are using a volume
  • Create and delete volumes
  • Explore the files and folders in a volume, including file sizes
  • Download files from volumes
  • Search and sort by name, date, and size

Kubernetes Integration

Although there are too many features to explore in a single article, we should take a look at the Kubernetes integration in Docker Desktop.

Kubernetes has become a standard for container orchestration, with 83 percent of respondents to the 2020 CNCF Survey reporting that they use it in production.

Granted, we don’t need Kubernetes to get Docker’s benefits in local development, like the isolation from the host system. Plus, we can even use Docker Compose 2.0 to run multiple containers with some nifty networking features. But if you’re working on a project that will deploy to Kubernetes in production, using a similar environment locally is a wise choice.

In the past, a local Kubernetes instance was something else to set up, and the costs in developer time didn’t offer enough benefit to some. This is likely still the case for a DIY Docker solution.

Docker Desktop, in contrast, comes with a standalone Kubernetes server and client for local testing. It’s an uncomplicated, zero-configuration, single-node cluster. You can switch to it through the Docker Desktop UI or in the usual way with kubectl config use-context.
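
For example, once Kubernetes is enabled in Docker Desktop’s settings, switching to the built-in cluster and running a quick smoke test looks like this (docker-desktop is the default context name; nginx is just an example image):

    # Point kubectl at Docker Desktop's single-node cluster.
    kubectl config use-context docker-desktop
    # Confirm the node is ready, then run a test pod.
    kubectl get nodes
    kubectl run nginx --image=nginx
    kubectl get pods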


Native Apple Silicon Support

In 2021, a version of Docker Desktop for Mac that could fully leverage the latest M1 chip reached general availability. There are already over 145,000 ARM-based images on Docker Hub. This Apple Silicon version supports multi-platform images, which means you can build and run images for x86 and ARM architectures without complex cross-compilation environments.
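
With Buildx, a multi-platform build and push can be a single command (the image name and registry below are placeholders):

    # One-time: create and select a builder that supports multi-platform builds.
    docker buildx create --use
    # Build for both architectures and push a single multi-arch manifest.
    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      -t registry.example.com/myapp:latest \
      --push .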

This was very well received: Rosetta 2’s emulation offers acceptable functionality for many common applications, but it isn’t sufficient to run containers.

Costs and Scalability

The DIY alternative requires a great deal of engineering time to build and configure, with an ongoing maintenance commitment for updating, patching, and troubleshooting the container environment. Each developer in an organization will carry out most of this work individually every time they work in a fresh environment. 

This approach doesn’t scale well! It means developers won’t be spending time on activities directly benefiting the business, like new features. None of us enjoy a sprint review where we have to explain that we didn’t deliver a feature because of problems with, or time spent setting up, development environments.

Containerization should help facilitate product delivery. What Docker Desktop sets out to achieve is not new. We have always invested in programming IDEs and other tooling that bundle functionality in a single, user-friendly package to improve productivity.

To help you determine whether Docker Desktop is right for your organization from a cost perspective, Jeremy Castile has some guidance to help you assess the ROI.

Working with Multiple Environments

Developers widely accept that build artifacts must be immutable: the same built artifact must move through QA and on to production unchanged. The next level, if you like, is packaging an application and its dependencies together, which helps to further maintain consistency between development, testing, and production environments.
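
In Docker terms, that means building an image once and promoting the same artifact through environments by re-tagging, never rebuilding (registry and tags are illustrative):

    # Build and push once.
    docker build -t registry.example.com/myapp:1.4.2 .
    docker push registry.example.com/myapp:1.4.2
    # Promote the exact same image to production by re-tagging, not rebuilding.
    docker pull registry.example.com/myapp:1.4.2
    docker tag registry.example.com/myapp:1.4.2 registry.example.com/myapp:prod
    docker push registry.example.com/myapp:prod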

We risk not realizing this benefit if the process is too complicated. Organizations have introduced many great tools and processes to teams, only for these tools to gather dust because the entry bar for the required skills is too high.

This situation is more prominent in QA teams. Many testers are technical, but more typically, they have a particular set of skills geared towards testing. Since QA is one group set to benefit a great deal from consistent testing environments, consider what they are most likely to use.

Introducing Dev Environments

To improve the experience further for these scenarios, Docker Desktop has added a new collaborative development feature, currently in preview, called Dev Environments.


Switching git branches or environments usually requires lots of manual changes to configuration, dependencies, and other environment setup before it’s possible to run the code.

The new feature makes it easy to keep the environment details themselves in source control with the code. With a click of a button, a developer can share their work-in-progress and its dependencies via Docker Hub. This means developers can easily switch to fully functioning instances of each other’s work to, for example, complete a pull request without having to change from their local branch and make all those environment changes.
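
The general shape is a compose-style definition checked in alongside the code. The file below is a hypothetical sketch; the exact file name and schema Dev Environments expects are described in its preview docs:

    # A hypothetical environment definition kept in source control with the code
    # (file name and schema are assumptions; see the Dev Environments preview docs).
    mkdir -p .docker
    tee .docker/compose-dev.yaml <<'EOF'
    services:
      app:
        build: .
        volumes:
          - .:/workspace
        ports:
          - "8080:8080"
    EOF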

Get started with Dev Environments using the preview documentation.

Conclusion

Bret Fisher, an author who writes about Docker, summed up the need for Docker Desktop: “It’s really a testament to Docker Desktop that there isn’t a comparable tool for local Linux containers on macOS/Windows that solves 80% of what people typically need out of a container runtime locally.”

We’ve explored what Docker Desktop offers and, along the way, touched on cost and ROI, setup and maintenance, scalability, and onboarding. Although some will prefer the flexibility and control of DIY Docker, Docker Desktop requires less setup effort and maintenance, offering a gentler learning curve for everyone from development to QA.

Perhaps the greatest challenge of a DIY solution is its hidden cost to the business. Developers enjoy figuring these things out, so they won’t necessarily track how many hours a week they spend maintaining a DIY solution, and the business will have no visibility into the lost productivity.

If you’re still using a DIY solution for local development with Docker on Windows or macOS, learn more about Docker Desktop and download it to get started.
