Docker 1.12: Now with Built-in Orchestration!

Three years ago, Docker made an esoteric Linux kernel technology called containerization simple and accessible to everyone. Today, we are doing the same for container orchestration.

Container orchestration is what is needed to transition from deploying containers individually on a single host to deploying complex multi-container apps across many machines. It requires a distributed platform, independent of infrastructure, that stays online through the entire lifetime of your application, surviving hardware failures and software updates. Orchestration is at the same stage today as containerization was 3 years ago. There are two options: either you need an army of technology experts to cobble together a complex ad hoc system, or you have to rely on a company with a lot of experts to take care of everything for you, as long as you buy all your hardware, services, support, and software from them. There is a word for that: lock-in.

Docker users have been sharing with us that neither option is acceptable. Instead, you need a platform that makes orchestration usable by everyone, without locking you in. Container orchestration would be easier to implement, more portable, secure, resilient, and faster if it was built into the platform.

Starting with Docker 1.12, we have added features to the core Docker Engine to make multi-host and multi-container orchestration easy. We’ve added new API objects, like Service and Node, that will let you use the Docker API to deploy and manage apps on a group of Docker Engines called a swarm. With Docker 1.12, the best way to orchestrate Docker is Docker!


The Docker 1.12 design is based on four principles:

  • Simple Yet Powerful – Orchestration is a central part of modern distributed applications; it’s so central that we have seamlessly built it into our core Docker Engine. Our approach to orchestration follows our philosophy about containers: no setup, only a small number of simple concepts to learn, and an “it just works” user experience.
  • Resilient – Machines fail all the time. Modern systems should expect these failures to occur regularly and adapt without any application downtime. That’s why a zero single-point-of-failure design is a must.
  • Secure – Security should be the default. Barriers to strong security — certificate generation, having to understand PKI — should be removed. But advanced users should still be able to control and audit every aspect of certificate signing and issuance.
  • Optional Features and Backward Compatibility – With millions of users, preserving backward compatibility is a must for Docker Engine. All new features are optional, and you don’t incur any overhead (memory, CPU) if you don’t use them. Orchestration in Docker Engine aligns with our platform’s “batteries included but swappable” approach, allowing users to continue using any third-party orchestrator that is built on Docker Engine.

Let’s take a look at how the new features in Docker 1.12 work.


Creating Swarms with One Decentralized Building Block

It all starts with creating a swarm (a self-healing group of engines), which for the bootstrap node is as simple as:

docker swarm init

Under the hood this creates a Raft consensus group of one node. This first node has the role of manager, meaning it accepts commands and schedules tasks. As you join more nodes to the swarm, they will by default be workers, which simply execute containers dispatched by the manager. You can optionally add additional manager nodes, which will also be part of the Raft consensus group. We use an optimized Raft store in which reads are serviced directly from memory, which makes scheduling performance fast.
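As a sketch of how a swarm grows (the join token and manager address below are placeholders for the values your own `docker swarm init` prints):

```shell
# On the first machine: bootstrap the swarm; this prints a join command
docker swarm init

# On each additional machine: join as a worker
# (the SWMTKN token and 10.0.0.1:2377 are placeholder values)
docker swarm join --token SWMTKN-1-... 10.0.0.1:2377

# Back on a manager: list the nodes, then promote one to manager
# so it joins the Raft consensus group
docker node ls
docker node promote <node-name>
```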


Creating and Scaling Services

Just as you run a single container with docker run, you can now start a replicated, distributed, load balanced process on a swarm of Engines with docker service:

docker service create --name frontend --replicas 5 -p 80:80/tcp nginx:latest

This command declares a desired state on your swarm of 5 nginx containers, reachable as a single, internally load balanced service on port 80 of any node in your swarm. Internally, we make this work using Linux IPVS, an in-kernel Layer 4 multi-protocol load balancer that’s been in the Linux kernel for more than 15 years. With IPVS routing packets inside the kernel, swarm’s routing mesh delivers high performance container-aware load-balancing.

When you create services, you can choose between replicated and global services. A replicated service runs the number of container replicas you specify, spread across the available hosts. A global service, by contrast, schedules one instance of the same container on every host in the swarm.
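As an illustration of the two modes (the monitoring image is just an example of a workload that fits the global model):

```shell
# Replicated: the scheduler places exactly 5 tasks across the swarm
docker service create --name frontend --replicas 5 nginx:latest

# Global: one task on every node, including nodes that join later
docker service create --name monitor --mode global prom/node-exporter
```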

Let’s turn to how Docker provides resiliency. Swarm mode enabled engines are self-healing, meaning that they are aware of the application you defined and will continuously check and reconcile the environment when things go awry. For example, if you unplug one of the machines running an nginx instance, a new container will come up on another node. Unplug the network switch for half the machines in your swarm, and the other half will take over, redistributing the containers amongst themselves. For updates, you now have flexibility in how you re-deploy services once you make a change. You can set a rolling or parallel update of the containers on your swarm.
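A rolling update can be sketched like this (the image tag and the timing values are illustrative):

```shell
# Replace the frontend image two tasks at a time,
# pausing 10 seconds between batches
docker service update \
  --image nginx:1.11 \
  --update-parallelism 2 \
  --update-delay 10s \
  frontend
```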

Want to scale up to 100 instances? It’s as simple as:

docker service scale frontend=100

A typical two-tier (web+db) application would be created like this:

docker network create -d overlay mynet
docker service create --name frontend --replicas 5 -p 80:80/tcp \
--network mynet mywebapp
docker service create --name redis --network mynet redis:latest

This is the basic architecture of this application:
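Once the services are up, their state can be inspected from any manager node, for example:

```shell
# List services and their replica counts
docker service ls

# Show which node each frontend task landed on
docker service ps frontend

# The routing mesh answers on port 80 of every node,
# so <any-node-ip> can be any machine in the swarm
curl http://<any-node-ip>:80
```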



A core principle for Docker 1.12 is creating a zero-configuration, secure-by-default, out-of-the-box experience for the Docker platform. One of the major hurdles administrators often face when deploying applications to production is running them securely. Docker 1.12 allows an administrator to follow the exact same steps to set up a demo cluster that they would to set up a secure production cluster.

Security is not something you can bolt on after the fact. That is why Docker 1.12 comes with mutually authenticated TLS, providing authentication, authorization and encryption for the communications of every node participating in the swarm, out of the box.

When starting your first manager, Docker Engine will generate a new Certificate Authority (CA) and a set of initial certificates for you. After this initial step, every node joining the swarm will automatically be issued a new certificate with a randomly generated ID, and their current role in the swarm (manager or worker). These certificates will be used as their cryptographically secure node identity for the lifetime of their participation in this swarm, and will be used by the managers to ensure secure dissemination of tasks and other updates.


One of the biggest barriers to adoption of TLS has always been the difficulty of creating, configuring and maintaining the necessary Public Key Infrastructure (PKI). With Docker 1.12, not only does everything get set up and configured with safe defaults for you, but we have also automated one of the most painful parts of dealing with TLS certificates: certificate rotation.

Under the hood, every node participating in the swarm is constantly refreshing its certificates, ensuring that potentially leaked or compromised certificates are no longer valid. The frequency with which certificates are rotated can be configured by the user, and set as low as every 30 minutes.
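For example, the rotation interval can be tuned on a running swarm (the 30-minute value matches the minimum mentioned above; it is not a general recommendation):

```shell
# Rotate node certificates every 30 minutes
docker swarm update --cert-expiry 30m
```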

If you would like to use your own Certificate Authority, we also support an external-CA mode, where the managers in the swarm simply relay the Certificate Signing Requests of the nodes attempting to join the cluster to a remote URL.
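A sketch of bootstrapping in external-CA mode (the URL is a placeholder for your own CA's signing endpoint):

```shell
docker swarm init --external-ca protocol=cfssl,url=https://ca.example.com/api/v1/cfssl/sign
```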



Docker 1.12 introduces a new file format called a Distributed Application Bundle (experimental builds only). A bundle is a new abstraction on top of services, focused on the full-stack application.

A Docker Bundle file is a declarative specification of a set of services that specifies:

  • What specific image revision to run
  • What networks to create
  • How the containers in those services must be networked to run

Bundle files are fully portable and are perfect deployment artifacts for software delivery pipelines because they let you ship fully spec’ed and versioned multi-container Docker apps.

The bundle file spec is simple and open, and you can create bundles however you want. To get you started, Docker Compose has experimental support for creating bundle files and with Docker 1.12 and swarm mode enabled, you can deploy the bundle files.
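On an experimental build, the round trip might look like this (the stack name `vote` is illustrative):

```shell
# In a directory with a docker-compose.yml:
# produce a vote.dab bundle file
docker-compose bundle

# Deploy the bundle to a swarm-mode engine (experimental)
docker deploy vote
```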

Bundles are an efficient mechanism for moving multi-service apps from developer laptops through CI to production. It’s experimental, and we’re looking for feedback from the community.


Under the hood of Docker 1.12

When you take a look under the hood, Docker 1.12 uses a number of other interesting technologies. Inter-node communication is done using gRPC, which gives us HTTP/2 benefits like connection multiplexing and header compression. Our data structures are transmitted efficiently thanks to protobufs.

31 thoughts on “Docker 1.12: Now with Built-in Orchestration!”

  1. Awesome!

  2. The built-in load balancing: can it be customized? I saw the demo a few minutes back where they displayed the round-robin ability. Would we be able to customize the load-balancing algorithms to our needs?

  3. Removing the need for an external cluster manager seems like a good approach. However, from what you describe here (creating a cluster with the first node running init and the others joining it), you have created a single point of failure in the first node.
    This is bad.
    Cluster creation needs to be declarative (look at Consul, for example).

    Surprising the community with a new version that is not compatible with the old one is also not recommended, since we already have an up-and-running production Swarm cluster.
    I hope you keep backward compatibility, and keep the option to use external cluster managers for now (until we see that the new version is production ready).

    • Actually, you can promote several nodes to act as manager nodes. A single manager node is elected as the leader. If the leader becomes unavailable, another manager node can be elected to take the leader role, as long as a majority of the manager nodes remains available. So redundancy and automatic failover of the cluster-management role are supported.

      Docker swarm mode does not prevent users from continuing to use Docker Swarm or any other 3rd party cluster and orchestration technology. If you prefer what you already have, you can continue to use it. However, Docker swarm mode does make it easier for newcomers to achieve the same sort of functionality.

      Docker swarm mode also implements several new features right out of the box, such as load balancing and service discovery, so you do not have to cobble together additional technologies that may not be available by default within alternate orchestration packages.

      Finally, Docker swarm mode is tightly integrated, so you get to use a single API and CLI to manage everything, which provides some advantages over using an external orchestration package.

      To summarize and address your concerns: Manager failover is supported and backward compatibility is still there.

  4. Sounds really cool.
    This basically means that Consul/Zookeeper/etcd goes out of equation for a typical Swarm deployment… am I right?

    • Swarm mode is an optional feature which gets initialized when you run the "docker swarm init" command. Under swarm mode, you don't need an external KV store like Consul, ZooKeeper or etcd. If you want to stick with the traditional Docker Swarm that exists in your infrastructure, you might still need Consul, ZooKeeper or etcd.

  5. +1 for Ashwin's question. Example: Services with Write-Access to a database with multiple replicas. Can we make the load balancer persistent for each client, such that there can't be race conditions between the balancer and the DB replicator (and thus state inconsistencies towards the user)?

  6. When will this be available for Ubuntu?

    • You can use it in Ubuntu already. You just have to use the experimental PPA.

      sudo apt-key adv --keyserver hkp:// --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
      echo "deb ubuntu-xenial experimental" > /etc/apt/sources.list.d/docker.list
      sudo apt-get update
      sudo apt-get purge lxc-docker
      sudo apt-get install -y docker-engine
      sudo service docker start

  7. Where can we get that slick node dashboard thing on the left side of the demo?

  8. 
    Adriano Vieira

    I've done some simple tests with it and I'd like to report some of my findings.

    From my point of view, we have an "orchestration" problem (in rc3 as well).

    How (or where) could I report it?

  9. 
    Nathália Torezani

    Hi, Docker Team!

    My name is Nathália Torezani and I'm a journalist at portal iMasters, one of the largest developer portals in Brazil. My editor, Alex Lattaro, just read this article and became very interested in the content.

    Hence, we would like to republish your articles in our portal with all rights directed to you. Would you be interested in a partnership?

    Please, contact us at or

    Hope to hear from you soon!

  10. Hi, I can't find how to run an individual container in Swarm in this version. I've tried
    "docker -H DOCKER_MASTER:2377 run --memory="2048m" --name CONTAINER -P -d IMAGE_NAME" and it gives me:
    "* Are you trying to connect to a TLS-enabled daemon without TLS?"
    That surprised me, because I thought all this TLS setup was now automatic, as you mentioned in the article.
    Did I get it wrong? Could you point me to a source explaining how to run it correctly without using the "docker service" command?

  11. Hi, what is the best-practice Docker storage driver for a MongoDB cluster?

  12. So, I am trying to install Docker 1.12 from the download link here:

    After adding the Docker yum repo, I found that it only installs docker-engine version 1.11.2. The top of the page says that it is the 1.12 release, but all I can find is 1.11 in the yum repo below:

    What's the best way to download and install 1.12 on CentOS?

  13. 
    Justin Hansen

    Hello, I have done a lot of reading recently about orchestrating docker containers. It's great that swarm mode is now built into Docker as this makes a lot of sense. One point of confusion I have, however, is that Docker Cloud appears to have full support for rapid scaling, cluster management, and orchestration. Does Docker Cloud use Swarm Mode or what is the relationship? It appears that if I use Docker Cloud I will not need Swarm Mode.

    Also, I've been reading a lot of talk about an "API Gateway" when building a microservice architecture. How does this fit into the mix of all these technologies? Thanks.

  14. Would there still be a need for Interlock and nginx on the new Docker swarm cluster? For example, if one had multiple instances of the "voting" app and "results" app, would the new Docker features accomplish the same redirection to the correct apps that Interlock and nginx provided in the old examples?

  15. What about the existing Mesos integration, and what are the future plans? We don't want to run two conflicting resource managers that contend for resources…

  16. This is an awesome addition!!

  17. How is the current support for Docker Swarm mode in Windows Server 2016?

  18. All this sounds great, but I am facing an issue with swarm mode. An external consumer of services inside my swarm cluster can point to any of the managers or workers. But I have to give the consumer just one IP, say the worker01 IP. If worker01 goes down, I have to change the IP address in my consumer to point to worker02 or any manager, which from the consumer's perspective means service downtime.
    Nginx comes to mind as a service running outside the swarm cluster, just to hide the swarm cluster's IPs from my consumer: the consumer points to nginx (or an nginx cluster), and nginx distributes the requests to any of the swarm nodes.
    That sounds great, but I would have to manually reconfigure the nginx conf files every time I create, scale up or scale down services, which is still painful. I want to automate that, so it would make sense to consume the etcd services inside swarm mode and generate the nginx conf file automatically from them. The point is that I don't know how to consume the etcd service that Docker swarm mode supposedly has inside. Is there any way to discover it so I can set up my nginx automatically?
