While we’ve made strides in diversity within tech, the 2019 Stack Overflow Developer Survey shows we have work to do. According to the survey, only 7.5 percent of professional developers are women worldwide (it’s 11 percent of all developers in the U.S.). That’s why Docker hosts Women in Tech events at our own conferences, and we’re pleased to participate in the Grace Hopper Celebration this year.
One of the core design principles of any containerized app must be portability. A well-designed application should treat configuration as an independent object, separate from the containers themselves, that is provisioned to them at runtime. That way, when you move your app from one environment to another, you don’t need to rewrite any of your containers or controllers; you simply provide a configuration object appropriate to the new environment, leaving everything else untouched.
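In Kubernetes, this pattern is commonly expressed with a ConfigMap that is injected into pods at runtime. Here is a minimal sketch; the object names, keys, and image are illustrative, not from the original post:

```yaml
# A ConfigMap holding environment-specific settings, provisioned
# separately from the containers that consume it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  DATABASE_HOST: "db.staging.internal"
  LOG_LEVEL: "debug"
---
# A pod that consumes the ConfigMap as environment variables.
# To move to production, swap in a different app-config object;
# the pod spec itself stays untouched.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: myorg/web:1.0      # hypothetical image
    envFrom:
    - configMapRef:
        name: app-config
```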
The Kubernetes networking model guarantees that, by default, any pod can reach any other pod at the target pod’s IP. But discovering those IPs by hand, and keeping the list current as pods are rescheduled and assigned entirely new IPs, would be tedious, fragile work. In this post, I’ll explain how to configure networking services in Kubernetes so that pods can communicate with each other reliably.
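The usual answer is a Service object, which gives a set of pods a stable virtual IP and DNS name, so other pods never have to track individual pod IPs. A minimal sketch, with hypothetical names and ports:

```yaml
# A Service fronting all pods labeled app=backend.
# Other pods can reach them at the stable DNS name
# "backend" (or backend.<namespace>.svc) regardless of
# how often the underlying pods are rescheduled.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend              # matches pods carrying this label
  ports:
  - port: 80                  # port clients connect to
    targetPort: 8080          # port the pod's container listens on
```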
I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series. In this post, I’ll explain how to use pods and controllers to create scalable processes for managing your applications. The heart of any application is its running processes, and in Kubernetes we fundamentally create processes as pods. Pods are a bit fancier than individual containers, in that they can schedule whole groups of containers, co-located on a single host.
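In practice you rarely create bare pods; a controller such as a Deployment manages a pod template for you. The sketch below shows a pod template with two co-located containers, as described above; the names, labels, and images are illustrative:

```yaml
# A Deployment that keeps three replicas of a two-container pod
# running. Both containers in each pod are scheduled together
# on the same host.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myorg/web:1.0              # hypothetical app image
      - name: log-forwarder               # sidecar co-located in the same pod
        image: myorg/log-forwarder:1.0    # hypothetical sidecar image
```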
Kubernetes’ complexity is overwhelming for a lot of people jumping in for the first time. In this blog series, I’m going to walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need.
Docker support for cross-platform applications is better than ever. At this month’s Docker Virtual Meetup, we featured Docker Architect Elton Stoneman showing how to build and run truly cross-platform apps – including Arm, Intel, Linux and Windows – using Docker’s buildx functionality.
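A typical buildx workflow looks like the following sketch; the builder name, image tag, and registry are placeholders, and the build assumes a Dockerfile in the current directory:

```shell
# One-time setup: create and select a builder that can
# target multiple platforms
docker buildx create --name multiarch --use

# Build for Intel and Arm Linux in a single command and push
# the resulting multi-architecture manifest to a registry
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myorg/myapp:latest \
  --push .
```

Pulling `myorg/myapp:latest` then resolves to the right architecture automatically on each platform.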
Already available as part of Docker Trusted Registry, personal access tokens can now be used as a substitute for your password in Docker Hub, especially for integrating your Hub account with other tools. You’ll be able to leverage these tokens for authenticating your Hub account from the Docker CLI.
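From the CLI, a token is supplied wherever a password would be. A minimal sketch, assuming the token is stored in an environment variable and the username is a placeholder:

```shell
# Authenticate to Docker Hub with a personal access token instead
# of a password; --password-stdin keeps the token out of shell history
echo "$DOCKER_HUB_TOKEN" | docker login --username mydockerid --password-stdin
```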
ey has been around for over 200 years. One of the really amazing things about being in an organization that’s been around that long is that you have to have a culture of innovation at your core. Technology like Docker has really empowered our business because it allows us to innovate, and it allows us to experiment. That’s critical because experimentation and being allowed to fail is what allows us to innovate and learn.