Containers are not VMs

Mar 24 2016

I spend a good portion of my time at Docker talking to community members with varying degrees of familiarity with Docker and I sense a common theme: people’s natural response when first working with Docker is to try and frame it in terms of virtual machines. I can’t count the number of times I have heard Docker containers described as “lightweight VMs”.

I get it because I did the exact same thing when I first started working with Docker. It’s easy to connect those dots as both technologies share some characteristics. Both are designed to provide an isolated environment in which to run an application. Additionally, in both cases that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but to me these are the two biggies.

The key is that the underlying architecture is fundamentally different between the two. The analogy I use (because if you know me, you know I love analogies) is comparing houses (VMs) to apartment buildings (containers).

Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure – plumbing, heating, electrical, etc. Furthermore, in the vast majority of cases houses are all going to have at a minimum a bedroom, living area, bathroom, and kitchen. I’ve yet to ever find a “studio house” – even if I buy the smallest house I may end up buying more than I need because that’s just how houses are built.  (for the pedantic out there, yes I’m ignoring the new trend in micro houses because they break my analogy)

Apartments (the containers) also offer protection from unwanted guests, but they are built around shared infrastructure. The apartment building (Docker Host) shares plumbing, heating, electrical, etc. Additionally apartments are offered in all kinds of different sizes – studio to multi-bedroom penthouse. You’re only renting exactly what you need. Finally, just like houses, apartments have front doors that lock to keep out unwanted guests.

With containers, you share the underlying resources of the Docker host and build an image that is exactly what you need to run your application. You start with the basics and add what you need. VMs are built in the opposite direction: you start with a full operating system and, depending on your application, might strip out the things you don't need.
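To make "start with the basics and add what you need" concrete, here is a minimal sketch of a Dockerfile; the base image, file names, and app are purely illustrative:

```dockerfile
# Start from a small base image rather than a full OS install
FROM alpine:3.3

# Add only what this one service needs to run
RUN apk add --no-cache python py-pip
COPY requirements.txt app.py /app/
RUN pip install -r /app/requirements.txt

# The container exists to run a single process: the application
CMD ["python", "/app/app.py"]
```

Nothing else ships in the image: no init system, no SSH daemon, no unused OS packages.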

I’m sure many of you are saying “yeah, we get that. They’re different”. But even as we say this, we still try and adapt our current thoughts and processes around VMs and apply them to containers.

  • “How do I backup a container?”
  • “What’s my patch management strategy for my running containers?”
  • “Where does the application server run?”

To me the light bulb moment came when I realized that Docker is not a virtualization technology, it's an application delivery technology. In a VM-centered world, the unit of abstraction is a monolithic VM that stores not only application code, but often its stateful data. A VM takes everything that used to sit on a physical server and packs it into a single binary so it can be moved around. But it is still the same thing. With containers the unit of abstraction is the application; or more accurately, a service that helps make up the application.

With containers, typically many services (each represented as a single container) comprise an application. Applications are now able to be deconstructed into much smaller components which fundamentally changes the way they are managed in production.
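As a sketch of what that decomposition looks like in practice, a simple two-tier app might run as two cooperating containers; the images, names, and password here are only examples:

```shell
# One container per service: a database and a web front end
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name web --link db:mysql -p 80:80 php:7-apache
```

Each service can now be scaled, replaced, or updated independently of the other.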

So, how do you back up your container? You don't. Your data doesn't live in the container; it lives in a named volume that is shared between 1-N containers that you define. You back up the data volume and forget about the container. Ideally your containers are completely stateless and immutable.
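In practice that pattern looks something like this; the volume, image, and file names are illustrative:

```shell
# Data lives in a named volume, not in the container's writable layer
docker volume create --name app-data
docker run -d --name db -v app-data:/var/lib/mysql mysql:5.7

# Back up the volume, not the container: mount it read-only into a
# throwaway container and archive its contents to the host
docker run --rm -v app-data:/data:ro -v "$PWD":/backup alpine \
  tar czf /backup/app-data.tar.gz -C /data .
```

The db container is never touched by the backup; it can be stopped, destroyed, and recreated against the same volume at any time.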

Certainly patches will still be part of your world, but they aren't applied to running containers. In reality, if you patched a running container and then spun up new ones based on an unpatched image, you're gonna have a bad time. Ideally you would update your Docker image, stop your running containers, and fire up new ones. Because a container can be spun up in a fraction of a second, it's just much cheaper to go this route.
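Sketched as commands (the image tag and container name are hypothetical):

```shell
# Patches go into the image, never into running containers
docker build -t example/web:1.0.1 .

# Then replace the old container with one based on the patched image
docker stop web && docker rm web
docker run -d --name web -p 80:80 example/web:1.0.1
```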

Your application server translates into a service run inside of a container. Certainly there may be cases where your microservices-based application will need to connect to a non-containerized service, but for the most part standalone servers where you execute your code give way to one or more containers that provide the same functionality with much less overhead (and offer up much better horizontal scaling).

“But, VMs have traditionally been about lift and shift. What do I do with my existing apps?”

I often have people ask me how to run huge monolithic apps in a container. There are many valid strategies for migrating to a microservices architecture that start with moving an existing monolithic application from a VM into a container, but that should be thought of as the first step on a journey, not an end goal.

As you consider how your organization can leverage Docker, try and move away from a VM-focused mindset and realize that Docker is way more than just "a lightweight VM." It's an application-centric way to deliver high-performing, scalable applications on the infrastructure of your choosing.

Check out these resources to start learning more about Docker and containers:

Learn More about Docker

21 thoughts on "Containers are not VMs"

  1. Docker is a fancy way to run a process. 🙂

    • When you say the data doesn't live in the container but is externalized to a "named volume", what do you mean? Could you please help clarify a couple of questions I have? Thanks in advance.
      1) What is a "named volume"? Is it something like an in-memory data grid or a distributed cache?
      2) How should I treat embedded Tomcat/Undertow/Jetty containers packaged as libraries with Spring Boot? Are they similar to Docker?

  2. It took me some time to distinguish completely between docker and VM. So, now that I'm convinced of the advantages of docker over VMs, I'm confronted with docker machine. Isn't docker machine against the paradigm of docker?

    • Docker machine is a tool to provision a host (virtual or physical) that is running docker.

      Once docker is running somewhere, you can fire up containers.
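For example (the machine name and driver are just one possibility):

```shell
# Provision a VirtualBox VM with the Docker Engine installed
docker-machine create --driver virtualbox dev

# Point the local docker client at that machine, then use it as usual
eval "$(docker-machine env dev)"
docker run hello-world
```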

  3. A slightly different view on what Docker is and why it's not important to compare it to a VM is covered here

  4. The most flexible approach in my experience (I have been running docker in production since 0.6) is mixing VMs and containers. They're not mutually exclusive, and indeed not the same thing at all…

    Many images on the hub however make these mistakes, and I would like to see clear guidelines and best practices for good, clean images. Many have ssh (use docker exec if you really need to), a logger daemon (mount /dev/log in your container), cron (run externally please), database instances (run externally please), … built-in. Sure it makes it 'easy to fire up' – but we have docker-compose for this these days. At the moment however, it's still not as straightforward to get an application stack up & running. Sure we have docker-compose, but you have to manually hunt for them.

    There are multiple approaches to improve this. One would be to create separate ‘image’ and ‘application stack’ indexes. Another would be to add the ability to host a default docker-compose file with each image to spin up a working instance of the application stack.

    Doesn’t matter how it’s done, but adding a standard way to pull such a complete application stack in docker or docker-compose would be required in my opinion. A simple “docker-compose get myname/someimage:sometag” to fetch that image’s docker-compose file and pull all required images would make this easier to use.

    Distributing docker-compose with docker would make this even better, but the python dependency might make this a bit tricky if we’re talking about non-*nix platforms.

    • Raphael Bottino

      I totally agree with you. There should be a docker-compose-hub and it would be even better if docker-compose was, indeed, distributed with Docker.
      Don’t worry about the Python dependency, run compose as a container 😉
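For the curious, running Compose from a container looks roughly like this; it needs the Docker socket and the project directory mounted in, and the image tag shown is only an example:

```shell
# Run docker-compose itself as a container: it talks to the local
# engine through the mounted socket and reads docker-compose.yml
# from the mounted project directory
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":"$PWD" -w "$PWD" \
  docker/compose:1.6.2 up -d
```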

  5. Elijah Zupancic

    I think your premise is spot on. Containers are not VMs. This is becoming increasingly apparent in monitoring. You have to monitor the performance profile of a container in a different way than a VM. You can’t just stick performance monitor probes on a host and expect to get information that is directly relevant to the applications that are running inside of containers especially if you are using memory caps or cpu shares.

    However, I do think that some of your examples are a bit shaky:
    “Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure – plumbing, heating, electrical, etc.”

    This runs counter to Solomon’s keynote at DockerCon SF in 2015 where he mentions the legacy of Jails and Zones, right? In those models of OS Virtualization, you are getting all of the utilities and isolation as well (ELA4 DOD certification on Solaris Zones).

    The only reason you don't get that in Linux is that the Linux kernel didn't build in robust isolation or the ability to have a unique network per container. However, it sounds like the Windows Docker implementation will include many of those features, if I am not mistaken.

    Maybe it is best to remove that part of the analogy?

  6. Describing containers as 'lightweight VMs' makes it easy to explain to the less technical (mgmt). They can grasp the VM technology and know what it is by now. Then you have a basic understanding you can build your story on by explaining which advantages containers bring to the table compared to a traditional VM.

  7. I have a better analogy.
    Containers are not apartments. Containers are hotel rooms. You get one any time you need, configured the way you ask. Two beds – no problem. One bed – no problem. You bring all your crap with you in suitcases, and take it all out with you when you leave. If the room doesn’t suit your needs, you don’t just ask for another bed. You move out and move into another room. Just like containers. If you need an extra environment variable, or patch level, or software package, you don’t just add it. You ‘move out’ and take all your crap with you (cause you either backed up your volumes or had them mounted to host to start with) and then you move into another container with the desired configuration.

    • My thoughts exactly. Container = hotel room, VM = apartment, physical (non-virtualized) server = house. An apartment is still significantly more difficult to move out of, upsize, etc. than a hotel room. I have to change my address, sign a new lease, get out of my old lease, etc. A hotel room involves a call to the front desk. VMs and physical servers also tend to have a tight association with a given application ("server for application A"), just like an apartment or house has an association with a person ("my residence" or "my address"). Hotel rooms don't have that association or that longevity, just like a container environment.

      • Ranjit Padmanabhan

        I like the hotel room analogy, which exemplifies "nestedness", "sharedness", and the "customize vs discard" trade-offs.

        So then the metaphor for Function-as-a-Service (e.g. AWS Lambda) would be a closet?

  8. I’ve always seen Containers as VMs before. But when a Container ideally is stateless, where is data stored? If I have a PHP app that uses MySQL, do I run it within two containers? One PHP/Apache and one MySQL? But then the data is within the MySQL container. Furthermore, the Application server might save data as well on the filesystem (i.e., Session data).

    • To me containers are not that much different from VMs. From a purely technical perspective, a (Linux) container, much like a VM, is just a means to isolate one or more computational processes and run them on shared hardware. Even if we consider immutable/ephemeral containers (which are not a bad idea at all), what is actually stopping you from having an immutable/ephemeral VM? In a data center, the volumes of the VMs are typically mounted from a NAS anyway. I certainly get your point about the DB in a container. E.g., what is the point of having a postgres volume mounted outside of a container if you don't use the raw postgres files for backups anyway? Also, at the point in time when you put data storage software (e.g. a DB, Cassandra, etc.) in a container, it is not ephemeral anymore. Imagine that you have Postgres in an HA setup with a master and a slave, both in containers. What would happen if both of these are scheduled on the same blade and it dies? The same holds true for Cassandra, which even has the notion of a rack and a data center, so that HA works reliably. Unfortunately, these are questions that the docker community does not have complete answers for.
      Another thing I don't really like about the article is that the docker folks are somehow putting themselves into a position to define the general term container, which I think is not right. I think that when someone says "containers", people usually get this as "Linux containers". It was not Docker who invented and implemented Linux containers and therefore, I don't think they have the right to say what containers are once and for all. If the article was named "Docker containers are not VMs" then that would be more accurate and just.

    • Along with a PHP container, you would need a MySQL container (for the database process), and one or more data containers (volume containers, or data volume containers) for the database storage and other application data if any. Note: if you have some persistent data that you want to share between containers, put it in a data volume container and then mount the data from it.
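A sketch of the data-volume-container pattern described above (the names are illustrative):

```shell
# The data container owns the storage; it never needs to run
docker create -v /var/lib/mysql --name dbdata mysql:5.7 /bin/true

# The database container mounts its storage from the data container
docker run -d --name db --volumes-from dbdata mysql:5.7
```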

  9. That is my thought exactly.
    Container is not a VM.
    I like your analogy. 😉

  10. VMs are not the model for Docker containers. Docker containers are modeled on Unix processes (including daemons). This point should be belabored in the introductory Docker documentation but it is not, presently.

    If you and your team create your own Docker images, you need to take the time to understand the relationship between Docker containers, PID 1, and Unix signals to ensure your Docker containers fit into the intended architecture of Docker.
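One place this shows up is the ENTRYPOINT form in a Dockerfile; the binary path below is hypothetical:

```dockerfile
# Shell form wraps the app in /bin/sh, so the app is not PID 1 and the
# SIGTERM sent by `docker stop` never reaches it:
#   ENTRYPOINT /usr/local/bin/myapp
# Exec form runs the app directly as PID 1, so it receives signals:
ENTRYPOINT ["/usr/local/bin/myapp"]
```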

  11. Jens Bremmekamp

    "To me the light bulb moment came when I realized that Docker is not a virtualization technology, (…)"

    Of course it is, but not on the same level as a VM. It is OS level virtualization, not machine level. I've found when explaining this to people it helps to focus on the M in VM instead of the V. 😉

  12. Thanks for this write-up. I'm still learning, but so far, Docker containers remind me of something like a Mac OS X app bundle, rather than a VM.

  13. Jagadisha Gangulli

    Hi Mike,

    I have a fundamental question here. Though containers optimally utilize resources, when we go for deployment, say in AWS, we still need to first choose instances and their capacity. In this context, do I get any advantage from adopting containers? It sounds to me that, based on my capacity planning, if I require 'n' instances with 'x' GB of memory (RAM) and 'y' GB of disk, the same configuration is required for container-based implementations too. Could you please clarify this?

  14. Thanks for the write up. It really clarified the differences as I'm sitting here running docker and a VM for cross compiling and trying to understand the two.

    Oh, I've lived in a "studio house" – good sized, with 17-foot ceilings, where the only walls were for the bathrooms and the outer walls. Just finished talking with an architect about building a new one. Rare as a unicorn, but they do exist 😉
