CPU Management in Docker 1.13

Jan 20 2017

Resource management for containers is a huge requirement for production users. Being able to run multiple containers on a single host and ensure that one container does not starve the others of cpu, memory, io, or networking is a big part of why I like working with containers. However, cpu management for containers is still not as straightforward as I would like. There are many different options when it comes to restricting a container's cpu usage. With memory, it is easy for people to understand that --memory 512m gives the container up to 512MB. With CPU, it's hard for people to understand a container's limit with the current options.

In Docker 1.13 we added a --cpus flag, which is the best option yet for limiting the cpu usage of a container, with a sane UX that the majority of users can understand. Let's take a look at a couple of the options in 1.12 to show why this is necessary.

There are various ways to set a cpu limit for a container: cpu shares, cpuset, and cfs quota and period are the three most common. We can just go ahead and say that cpu shares are the most confusing and worst-behaved of all the options we have. The numbers don't make sense on their own. For example, is 5 a large number? Is 512 half of my system's resources if there is a max of 1024 shares? Is 100 shares significant when I only have one container, and if I add two more containers each with 100 shares, what does that mean? We could go in depth on cpu shares, but you have to remember that cpu shares are relative to everything else on the system.
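To make that relativity concrete, here is a hypothetical sketch (the share values would be set with `docker run --cpu-shares <n>`; the percentages only hold when every container is fully contending for cpu):

```shell
# Two hypothetical containers with 1024 and 512 shares. The numbers are
# relative weights, not absolute limits: under full contention each
# container's slice is its shares divided by the total.
a=1024
b=512
total=$((a + b))
echo "A: $((100 * a / total))%"   # prints "A: 66%"
echo "B: $((100 * b / total))%"   # prints "B: 33%"
```

Add a third container with 512 shares and both percentages change, which is exactly why the numbers are hard to reason about.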

Cpuset is a viable alternative, but it takes much more thought and planning to use it correctly and in the right circumstances. The cfs scheduler along with quota and period are some of the best options for limiting container cpu usage, but they come with bad user interfaces. Having to specify cpu time in microseconds makes it hard for a user to do simple things such as limiting a container to one core.
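For context, here is what "one core" looked like with the raw CFS flags before 1.13 (a sketch; the image name and the period value are just examples, and both flags take microseconds):

```shell
# Pre-1.13, limiting a container to one core meant supplying raw CFS
# values yourself (shown as a comment so this sketch runs anywhere):
#   docker run --cpu-period=100000 --cpu-quota=100000 ubuntu
# The user has to know the relationship themselves: cores = quota / period.
period=100000   # scheduling period, in microseconds (100ms)
quota=100000    # cpu time the container may use per period
echo "cores: $((quota / period))"   # prints "cores: 1"
```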

In 1.13 though, if you want a container to be limited to one cpu then you can just add --cpus 1.0 to your Docker run/create command line. If you would like two and a half cpus as the limit of the container then just add --cpus 2.5. In Docker we are using the CFS quota and period to limit the container’s cpu usage to what you want and doing the calculations for you.
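A sketch of the arithmetic the new flag does for you, assuming the default 100000-microsecond CFS period:

```shell
# `--cpus 2.5` still maps to CFS quota and period under the hood;
# Docker computes quota = cpus * period so you don't have to.
period=100000                  # default CFS period in microseconds
cpus_hundredths=250            # 2.5 cpus, scaled by 100 for integer math
quota=$((cpus_hundredths * period / 100))
echo "quota=$quota period=$period"   # prints "quota=250000 period=100000"
```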

If you are limiting cpu usage for your containers, look into using this new flag and API to handle your needs. This flag will work on both Linux and Windows when using Docker.  

For more information on the feature, take a look at the docs: https://docs.docker.com/engine/admin/resource_constraints/

For more information on Docker 1.13 in general, check out these links:


4 thoughts on “CPU Management in Docker 1.13”

  1. This looks to be a useful option.
    I think where shares played a role was to have a unit of compute power that can be used to specify the required compute resources independent of the CPU power of the underlying hardware.
    We can have the container move from one system to another with different physical compute resources and yet be assigned the same amount of compute power.
    Also, in case a single host has multiple CPUs of non-identical power, CPU shares help to have a common denominator to talk about computational power…

  2. Does --cpus apply to services in Docker Swarm Mode?

  3. Well, I understand cpu shares might be confusing and not always easy to deal with, but I think they're a pretty good option when you want to run a CPU-intensive computation at a low priority. I use them like the docker equivalent of the unix "nice" command. With the new cpu management commands it's quite hard to achieve this.

  4. cat /proc/cpuinfo

    continues to show all of my cores when I attempt to limit them with --cpus=1 and --cpuset-cpus="0"

    Is this normal ?
