At Docker, we strive to create tools of mass innovation. But what exactly is Docker? And how can it benefit both your developers looking to build applications quickly and your IT team looking to manage the IT environment?
As part of our mission to educate practitioners on Docker and revolutionize the way that they build, ship and run their applications, Technical Evangelist Mike Coleman and I presented a Docker 101 webinar.
In this webinar, we covered several introductory topics, including:
- The difference between containers and VMs
- Key Docker terminology that beginners should familiarize themselves with
- How Docker drives modern application initiatives taking place in the enterprise
- An overview of the Docker Containers-as-a-Service platform
- A live demo of deploying a website via Docker Cloud
At the end of Mike’s presentation, we answered a few questions from the attendees. You can watch the webinar replay and read the answers from the Q&A section below:
Q: What exactly is a container?
A: Containerization uses the kernel on the host operating system (Linux today, with Windows container support coming in Windows Server 2016) to run multiple root file systems. Each root file system is called a container. A container is a standard unit in which an application resides: it packages an application and everything it needs to run into one portable unit. Each container has its own processes, memory, devices and network stack. Containers are managed by the Docker Engine, which is responsible for creating and managing containers and can run in any physical, virtual or cloud environment.
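As a minimal sketch (assuming Docker Engine is installed and the host can reach a registry; `alpine` is just a small example image), launching a container from an image takes a single command:

```shell
# Pull a small base image from the registry.
docker pull alpine:latest

# Each `docker run` starts an isolated process with its own
# filesystem, process namespace and network stack; --rm cleans
# the container up when the process exits.
docker run --rm alpine:latest echo "hello from a container"

# List the containers currently managed by the Docker engine.
docker ps
```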
Q: How is a container different from a VM?
A: It’s important to realize that containers are not VMs. Containers leverage shared resources, are lighter weight, start faster, do not require a hypervisor and provide greater portability, which makes them ideal for microservices environments. Because of this, they can reduce costs for enterprises (no hypervisor licensing costs and potentially more efficient hardware utilization) while accelerating application development.
VMs use isolated resources, require a full OS, take several minutes to boot, are hypervisor based and are used for monolithic app architectures.
Q: What exactly is an image registry, and what are the Docker options?
A: A registry is where Docker images are stored and secured. A Docker image is a snapshot of an application and serves as the basis of a container. When you run an image via the docker run command, the Docker engine instantiates a new process based on that image and adds a read/write layer on top of it to create a container.
Here at Docker there are three registry options: the open source Docker Registry, Docker Hub and Docker Trusted Registry. Docker Hub is our hosted SaaS commercial registry, and Docker Trusted Registry (DTR) is our commercial on-premises registry.
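The typical workflow with any of these registries looks the same from the CLI. A rough sketch, assuming a Docker Hub account exists; `myuser` and `myapp` are placeholder names, not real repositories:

```shell
# Authenticate against Docker Hub (for DTR, log in to your DTR host instead).
docker login

# Tag a locally built image with the registry namespace;
# "myuser/myapp" is a placeholder repository name.
docker tag myapp:latest myuser/myapp:1.0

# Push the image so any other host can pull and run it.
docker push myuser/myapp:1.0
```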
Q: How can one container be aware of the other containers being run?
A: Containers themselves are isolated. With the help of the 200,000-person-strong Docker community, we created Docker Networking. At its most basic level, Docker Networking enables containers to talk to one another. It supports both host-only networks and multi-host networks, where containers can talk across hosts. You can learn more about Docker Networking here.
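A short sketch of how containers find each other on a user-defined network (container names `db` and network name `mynet` are placeholders chosen for illustration):

```shell
# Create a user-defined bridge network on a single host.
docker network create mynet

# Containers attached to the same network can reach each
# other by container name via built-in DNS.
docker run -d --name db --net mynet redis
docker run --rm --net mynet alpine ping -c 1 db

# For multi-host networking, an overlay network can be created
# (in this Docker release it requires an external key-value
# store such as Consul to be configured first):
# docker network create --driver overlay my-overlay
```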
Q: How can we deploy apps into cluster environments with Docker?
A: You can use Docker Swarm for orchestration of your dockerized applications. Docker Swarm is a scalable and production-ready Docker engine clustering and scheduling tool. It allows you to create clusters of Docker nodes, and deploy apps across various nodes within your environment. Swarm has built in high availability, so if a node goes down, containers can automatically failover to another node.
You can utilize different deployment strategies as well. For instance, the “spread” strategy evenly spreads containers across the nodes within your environment, the “random” strategy deploys them to random nodes, and the “binpack” strategy fully loads one node before deploying to another.
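Standing up a cluster with the Swarm of this era can be sketched roughly as follows; the token, IP address and port are placeholders, and the commands assume each node's Docker daemon is listening on TCP:

```shell
# Generate a cluster token using Docker Hub's hosted discovery service.
docker run --rm swarm create
# (prints a cluster token to use below as <cluster_id>)

# On each node, join the cluster; the address is a placeholder
# for that node's daemon endpoint.
docker run -d swarm join --advertise=192.168.1.10:2375 token://<cluster_id>

# Start a Swarm manager and pick a scheduling strategy
# (spread, binpack or random).
docker run -d -p 4000:2375 swarm manage --strategy spread token://<cluster_id>
```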
NOTE: Docker Swarm is embedded with Docker Universal Control Plane (UCP), giving UCP the ability to manage production nodes and leverage Docker APIs.
Docker Compose is another tool that you can leverage for orchestration of your applications. Compose allows you to deploy multi-container applications into your environment.
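As an illustration, a minimal Compose setup might look like this (service names and the port mapping are arbitrary examples, using the version 2 file format current at the time of this post):

```shell
# Define a two-service application in a Compose file.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  cache:
    image: redis
EOF

# Bring up both containers with a single command.
docker-compose up -d

# Tear the whole application down again.
docker-compose down
```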
Q: I’m joining a startup where we want to go all-in with Docker, and I as a person need to get everything done. How do I go from web developer to Docker DevOps hero?
A: Love this question. We are always happy to hear when companies are going all in on Docker. So first off, thank you. The good news is that you are already on your way. The best thing to do is come up to speed on the Docker technology. We have several Docker docs that can walk you through getting started with Docker.
Q: What is the difference between OSS and Docker’s commercial offerings?
A: It really comes down to your team. Our OSS provides the tools that “do it yourself” teams can use to build, ship and run their applications. We encourage users to leverage our OSS technology when looking to build a container-based platform themselves.
However, if your team is looking for an end-to-end Containers-as-a-Service (CaaS) platform that is built by Docker and that you can leverage rather than building one yourself, we have our commercial options. These offer key enterprise features (LDAP/AD integration, web UI, SLAs) and security features (e.g., role-based access controls, image security scanning, on-premises deployment), as well as support from the Docker team. Docker Cloud and Docker Datacenter are our two commercial offerings. Docker Cloud is built for smaller teams in need of a cloud-hosted CaaS platform, while Docker Datacenter is ideal for mid-size to enterprise teams that require an on-premises CaaS platform.
Q: How can devs log into a cluster of containers to access logs or change configuration?
A: You wouldn’t actually log into a container to access its logs. Instead, you push logs out to something like an ELK stack and look at them there, essentially the same way you would monitor an enterprise deployment today. Docker Universal Control Plane also has the ability to push logs out to external logging services.
As for configuration: a container’s configuration is defined by its Dockerfile. Containers are designed to be stateless, so rather than reconfiguring a running container, you kill it and start a new one. We recommend keeping your Dockerfiles in version control, such as GitHub.
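A quick sketch of the log-access commands themselves (the container name `web` is a placeholder for one of your running containers):

```shell
# Inspect the recent stdout/stderr of a single container.
docker logs --tail 50 web

# Follow a container's logs live, like tail -f.
docker logs -f web

# A logging driver can forward a container's logs to an external
# system instead (syslog shown here; an ELK stack would typically
# sit behind such a collector).
docker run -d --log-driver=syslog nginx
```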
Q: Can you speak more about load balancing?
A: Sure. For load balancing, Docker users can leverage a load balancer like HAProxy (which runs as a container) or NGINX, and you can use these in combination with Interlock. Interlock monitors Docker events and updates NGINX or HAProxy when something is added to the load balancer. If a container is killed, Interlock notifies the load balancer, which then removes that workload.
Q: Is there a way to learn about Docker from a professional standpoint?
A: Absolutely. We offer Docker Training, which will help teach you the skills you need to become a Docker professional and start helping your company reduce costs and accelerate the application development process.
As companies embrace DevOps, move to microservices and migrate to the cloud, we hope that they will consider Docker as their platform of choice.
Get started with Docker today by installing the tools.
Learn More about Docker
- New to Docker? Try our 10 min online tutorial
- Share images, automate builds, and more with a free Docker Hub account
- Read the Docker 1.11 Release Notes
- Subscribe to Docker Weekly
- Sign up for upcoming Docker Online Meetups
- Attend upcoming Docker Meetups
- Register for DockerCon 2016
- Watch DockerCon EU 2015 videos
- Start contributing to Docker