I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In this post, I’ll explain how to configure networking services in Kubernetes to allow pods to communicate reliably with each other.
Setting up Communication via Services
At this point, we’ve deployed our workloads as pods managed by controllers, but there’s no reliable, practical way for pods to communicate with each other, nor is there any way for us to reach a network-facing pod from outside the cluster. The Kubernetes networking model guarantees that any pod can reach any other pod at the target pod’s IP by default, but discovering those IPs by hand, and keeping the list current while pods are rescheduled (and assigned entirely new IPs), would be tedious, fragile work.
Instead, when we’re ready to build the networking part of our application, we need to think about Kubernetes services. A service provides a stable, simple networking endpoint that routes traffic to pods via label selectors matching the labels on those pods, rather than via their unstable IPs. For simple applications, two service types cover most use cases: ClusterIP and NodePort. This brings us to another decision point:
Decision #3: What kind of services should route to each controller?
For simple use cases, you’ll choose either a ClusterIP or a NodePort service. The simplest way to decide between them is to ask whether the target pods should be reachable from outside the cluster. In our example application, the web frontend should be reachable externally so users can access our web app.
In this case, we’d create a NodePort service, which routes traffic sent to a particular port on any host in your Kubernetes cluster onto our frontend pods (Swarm fans: this is functionally similar to the L4 routing mesh).
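As a minimal sketch, a NodePort service for the frontend might look like the following. The names, labels, and port numbers here are illustrative; the selector must match whatever labels your frontend controller actually puts on its pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend          # illustrative name
spec:
  type: NodePort
  selector:
    app: frontend         # must match the labels on your frontend pods
  ports:
    - port: 80            # port for reaching the service from inside the cluster
      targetPort: 80      # container port the frontend listens on
      nodePort: 30080     # port opened on every node; must fall in the default 30000-32767 range
```

With this applied, traffic sent to port 30080 on any node in the cluster is forwarded to one of the frontend pods.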
For our private API and database pods, we may want them reachable only from inside the cluster, for security and traffic-control purposes. In that case, a ClusterIP service is most appropriate: it provides a virtual IP and port, reachable only from within the cluster, that forwards traffic onto the backend pods.
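A ClusterIP service for the API backend could be sketched like this (again, the name, label, and port are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api               # illustrative name; becomes the DNS name other pods use
spec:
  type: ClusterIP         # the default service type; shown explicitly for clarity
  selector:
    app: api              # must match the labels on the backend pods
  ports:
    - port: 8080          # port other pods use to reach this service
      targetPort: 8080    # container port the backend listens on
```

Other pods in the cluster can then reach the backend via cluster DNS at `api:8080`, with no external exposure.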
Checkpoint #3: Write some YAML and verify routing
Write some Kubernetes YAML describing the services you’ve chosen for your application, and make sure traffic gets routed as you expect.
The simple routing and service discovery above will get pods talking to other pods and allow some simple ingress traffic, but there are many more advanced patterns you’ll want to learn for future applications:
- Headless services can be used to discover and route to specific pods; you’ll use them for stateful pods declared by a StatefulSet controller.
- Kubernetes Ingress objects, backed by an ingress controller, provide managed proxies for routing at layer 7 and implementing patterns like sticky sessions and path-based routing.
- Readiness probes work much like the healthchecks mentioned above, but instead of managing the health of containers and pods, they determine whether a pod is ready to accept network traffic, and keep it out of service endpoints until it is.
- NetworkPolicies let you segment the normally flat, open Kubernetes network by defining what ingress and egress communication is allowed for a pod, preventing access from or to unauthorized endpoints.
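To give a flavor of the readiness-probe pattern, here is a sketch of a container spec fragment from a pod template. The image name, path, and timings are assumptions; the probe mechanics are standard:

```yaml
containers:
  - name: api
    image: example/api:1.0        # illustrative image name
    readinessProbe:
      httpGet:
        path: /ready              # hypothetical endpoint that returns 200 when ready
        port: 8080
      initialDelaySeconds: 5      # wait before the first probe
      periodSeconds: 10           # probe interval
```

While the probe fails, the pod stays in the Running state but is removed from the endpoints of any service that selects it, so no traffic is routed to it.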
You can continue reading about Kubernetes application configuration in part 4.
For additional information on these topics, have a look at the Kubernetes documentation:
You can also check out Play with Kubernetes, powered by Docker.
We will also be offering training on Kubernetes starting in early 2020. In the training, we’ll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here: