Kubernetes Services, Ingress, Network Policies, DNS, and CNI plugins
What is Kubernetes Networking?
Kubernetes networking is the mechanism by which resources inside and outside your cluster communicate with each other. It covers communication between Pods, communication between Kubernetes Services, and the handling of external traffic entering the cluster.
Because Kubernetes is a distributed system, the network plane spans across your cluster’s physical Nodes. It uses a virtual overlay network that provides a flat structure for your cluster resources to connect to.
The Kubernetes networking implementation allocates IP addresses, assigns DNS names, and maps ports to your Pods and Services. This process is generally automatic: when using Kubernetes, you won’t normally have to manage these tasks on your network infrastructure or Node hosts.
The Kubernetes network model works by allocating each Pod a unique IP address that is routable within your cluster. Pods can then communicate using their IP addresses directly, without requiring NAT or any other special configuration.
Types of Kubernetes networking and examples
Kubernetes clusters need to handle several types of network access:
Pod-to-Pod communication
Service-to-Pod communication
External communication to services
Pod-to-Pod (same Node)
The Pod that initiates the network communication uses its default network interface to make a request to the target Pod’s IP address. The interface will be a virtual ethernet connection provided by Kubernetes, usually called eth0 on the Pod side and veth0 on the Node side. The second Pod on the Node will have veth1, the third Pod veth2, and so on:
Pod 0 – 10.244.0.1, veth0
Pod 1 – 10.244.0.2, veth1
Pod 2 – 10.244.0.3, veth2
The Node side of the connection acts as a network bridge. Upon receiving the request for the target Pod’s IP, the bridge checks whether any of the devices attached to it (the Pod network interfaces veth0, veth1, veth2, and so on) have the requested IP address:
Incoming request: 10.244.0.2
Devices on the bridge:
Pod 0 – 10.244.0.1, veth0
Pod 1 – 10.244.0.2, veth1
Pod 2 – 10.244.0.3, veth2
Matching device: veth1
If there’s a match, the data is forwarded to that network interface, which belongs to the correct Pod.
Pod-to-Pod (different Nodes)
Communication between Pods on different Nodes isn’t much more complex.
First, the previous Pod-to-Pod flow is initiated, but this will fail when the bridge finds none of its devices have the correct IP address. At this point, the resolution process will fall back to the default gateway on the Node, which will resolve to the cluster-level network layer.
Each Node in the cluster is assigned a unique IP address range for its Pods; this could look like the following:
Node 1 – all Pods have IP addresses in the range 10.244.0.x
Node 2 – all Pods have IP addresses in the range 10.244.1.x
Node 3 – all Pods have IP addresses in the range 10.244.2.x
Thanks to these known ranges, the cluster can establish which Node is running the target Pod and forward the network request to it. The destination Node then follows the rest of the Pod-to-Pod routing procedure to select the target Pod’s network interface:
(Node 1) Incoming request: 10.244.1.2
(Node 1) Devices on the bridge:
Pod 0 – 10.244.0.1, veth0
Pod 1 – 10.244.0.2, veth1
Pod 2 – 10.244.0.3, veth2
(Node 1) No matching interface; fall back to the cluster-level network layer
(Cluster) Node IP ranges:
Node 1 – 10.244.0.x
Node 2 – 10.244.1.x
Node 3 – 10.244.2.x
(Cluster) Matching Node: Node 2; forward request
(Node 2) Devices on the bridge:
Pod 0 – 10.244.1.1, veth0
Pod 1 – 10.244.1.2, veth1
Pod 2 – 10.244.1.3, veth2
(Node 2) Matching device: veth1
The network connection is established to the correct Pod network interface on the remote Node. Notably, no NAT, proxying, or direct opening of ports was required, because all Pods in the cluster ultimately share a flat IP address space.
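If you want to see this flat address space on your own cluster, two standard kubectl commands are enough. The output will naturally differ between clusters, and spec.podCIDR is only populated on clusters that allocate a CIDR block per Node:

# list Pods with their IP addresses and the Nodes they run on
kubectl get pods -o wide

# show the Pod CIDR range allocated to each Node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'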
Service-to-Pod
Service networking allows multiple Pods to handle requests made to one IP address or DNS name. The Service is assigned a virtual IP address that resolves to one of the available Pods.
Several different service types are supported, giving you options for a variety of use cases:
ClusterIP – exposes the service on an IP address that’s only accessible within the cluster. Use this type for internal components such as databases, where the service is exclusively used by other Pods.
NodePort – exposes the service on a specified port on each Node in the cluster. Only one service can use a given NodePort at a time.
LoadBalancer – exposes the service externally using a load balancer that’s provisioned in your cloud provider account. (This service type is discussed in more depth below.)
Requests to services are proxied to the available Pods. The proxying is implemented by kube-proxy, a network proxy process that runs on each Node. Three different proxy modes are supported to change how the request is forwarded:
iptables – forwarding is configured using iptables rules.
ipvs – Netlink is used to configure IPVS forwarding rules. Compared to iptables, this provides more traffic balancing options, such as selecting the Pod with the fewest connections or shortest queue.
kernelspace – used on Windows Nodes; it configures packet filtering rules for the Windows Virtual Filtering Platform (VFP), which is comparable to Linux’s iptables.
Once kube-proxy has forwarded the request, the network communication to the Pod proceeds as for a regular Pod-to-Pod request. The proxy step is only required to select a candidate Pod from those available in the Service.
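The proxy mode is selected in kube-proxy’s own configuration. As a minimal sketch (using the kubeproxy.config.k8s.io/v1alpha1 API; how this configuration file is supplied to kube-proxy depends on how your cluster was installed), switching to IPVS mode looks like this:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# valid modes include "iptables" and "ipvs" (and "kernelspace" on Windows)
mode: "ipvs"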
External-to-Service
External traffic to a Kubernetes cluster terminates at Services, within the cluster-level network layer. Direct external access to Pods isn’t possible by default.
Services are exposed by assigning them one or more externalIPs (IP addresses that are reachable from outside the cluster), or by using the LoadBalancer service type. The latter is the preferred approach; it uses your cloud provider’s API to provision a new load balancer infrastructure resource that routes external requests into your cluster.
When a load balancer is used, the load balancer’s IP address maps to the service that created it. When traffic enters the cluster, Kubernetes selects the matching service, then uses the Service-to-Pod flow described above to proxy the network request to a suitable Pod.
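As a hedged sketch of the externalIPs approach (the address 203.0.113.10 is a placeholder; it must be an IP that actually routes to one of your Nodes):

apiVersion: v1
kind: Service
metadata:
  name: my-external-service     # hypothetical name for illustration
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  externalIPs:
    - 203.0.113.10              # traffic arriving at this IP on port 80 is routed to the matching Pods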
Services in Kubernetes
In Kubernetes, a service is an entity that represents a set of pods running an application or functional component. The service holds access policies, and is responsible for enforcing these policies for incoming requests.
The need for services arises from the fact that pods in Kubernetes are short-lived and can be replaced at any time. Kubernetes guarantees that the desired number of replicas stays available, but not the lifetime of any individual pod.
This means that pods that need to communicate with another pod cannot rely on the IP address of the underlying single pod. Instead, they connect to the service, which relays them to a relevant, currently-running pod.
The service is assigned a virtual IP address, known as a clusterIP, which persists until the service is explicitly deleted. The service acts as a reliable endpoint for communication between components or applications.
How to Create a Kubernetes Service
A Kubernetes service can be configured using a YAML manifest. Here is an example of a service YAML:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Here are important aspects of the service manifest:
metadata:name – this is the logical name of the service, which will also become the DNS name of the service when it is created.
spec:selector – the selector identifies which pods should be included in the service. In this example, pods that have the label app: nginx will become part of the service.
spec:ports – a list of port configurations (there can be one or more). Each port configuration defines a network protocol and port number. Optionally, the port configuration can define a targetPort, which is the port on the pod that traffic should be forwarded to. A matching workload is sketched below.
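For that selector to match anything, there must be pods carrying the app: nginx label. As a hedged sketch (the Deployment name and image are illustrative, and we assume the container is configured to serve on port 8080, the service’s targetPort), a matching workload could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # hypothetical name for illustration
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx              # matches the service's spec:selector
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 8080   # assumed to match the service's targetPort

After applying both manifests with kubectl apply -f, other pods in the cluster can reach these replicas at http://my-service:80, and the service forwards the traffic to port 8080 on one of the matching pods.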
What are the Types of Kubernetes Services?
ClusterIP
ClusterIP is the default service type in Kubernetes. It receives a cluster-internal IP address, making its pods accessible only from within the cluster. If necessary, you can set a specific clusterIP in the service manifest, but it must be within the service IP range configured for the cluster.
Manifest example:
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  type: ClusterIP
  clusterIP: 10.10.5.10
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
NodePort
A NodePort service builds on top of the ClusterIP service, additionally exposing it on a port that is accessible from outside the cluster. If you do not specify a port number, Kubernetes automatically chooses a free port. The kube-proxy component on each node is responsible for listening on the node’s external ports and forwarding client traffic from the NodePort to the ClusterIP.
By default, all nodes in the cluster listen on the service’s NodePort, even if they are not running a pod that matches the service selector. If these nodes receive traffic intended for the service, it is handled by network address translation (NAT) and forwarded to the destination pod.
NodePort can be used to configure an external load balancer to forward network traffic from clients outside the cluster to a specific set of pods. For this to work, you must set a specific port number for the NodePort, and configure the external load balancer to forward traffic to that port on all cluster nodes. You also need to configure health checks in the external load balancer to determine whether a node is running healthy pods.
The nodePort field in the service manifest is optional; it lets you specify a custom port in the range 30000–32767.
Manifest example:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30000
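With this manifest applied, a client outside the cluster can reach the service on port 30000 of any node’s IP address (for example http://&lt;node-ip&gt;:30000), and the traffic is forwarded to port 8080 on one of the pods labelled app: nginx.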
LoadBalancer
A LoadBalancer service is based on the NodePort service, and adds the ability to configure external load balancers in public and private clouds. It exposes services running within the cluster by forwarding network traffic to cluster nodes.
The LoadBalancer service type lets you dynamically implement external load balancers. This typically requires an integration running inside the Kubernetes cluster, which performs a watch on LoadBalancer services.
Manifest example:
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  clusterIP: 10.0.160.135
  loadBalancerIP: 168.196.90.10
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
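After the manifest is applied, the cloud integration provisions the load balancer asynchronously; kubectl get service my-loadbalancer-service shows &lt;pending&gt; in the EXTERNAL-IP column until the provider assigns an address. Note that loadBalancerIP is honored only by some cloud providers, so check your provider’s documentation before relying on it.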
ExternalName
An ExternalName service maps the service to a DNS name instead of a selector. You define the name using the spec:externalName parameter. It returns a CNAME record matching the contents of the externalName field (for example, my.service.domain.com), without using a proxy.
This type of service can be used to create services in Kubernetes that represent external components such as databases running outside of Kubernetes. Another use case is allowing a pod in one namespace to communicate with a service in another namespace—the pod can access the ExternalName as a local service.
Manifest example:
apiVersion: v1
kind: Service
metadata:
  name: my-externalname-service
spec:
  type: ExternalName
  externalName: my.database.domain.com
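Inside the cluster, a pod can then simply connect to my-externalname-service (or its fully qualified name my-externalname-service.&lt;namespace&gt;.svc.cluster.local); the cluster DNS answers with a CNAME record for my.database.domain.com, and the connection is made directly to that external host without any proxying.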
Ingress in Kubernetes
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
In the simplest case, an Ingress sends all of its traffic to a single Service, as in the manifest example below.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
Manifest example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
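Beyond this minimal case, an Ingress can also route by host name. Here is a sketch of name-based virtual hosting (the hosts foo.example.com and bar.example.com and the services service-foo and service-bar are placeholders, not part of the example above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress   # hypothetical name for illustration
spec:
  ingressClassName: nginx-example
  rules:
    - host: foo.example.com          # requests for this host go to service-foo
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-foo
                port:
                  number: 80
    - host: bar.example.com          # requests for this host go to service-bar
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-bar
                port:
                  number: 80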
Network Policies in Kubernetes
If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols, then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
NetworkPolicies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network.
NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to other connections.
The entities that a Pod can communicate with are identified through a combination of the following three identifiers:
Other pods that are allowed (exception: a pod cannot block access to itself)
Namespaces that are allowed
IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector.
Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
Manifest example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
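A common companion to a targeted policy like this is a default-deny rule. The following sketch selects every pod in the default namespace and, because it lists no ingress rules, blocks all inbound traffic that isn’t explicitly allowed by another policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}       # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
  # no ingress rules are listed, so all inbound traffic to the selected pods is denied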
DNS in Kubernetes
Kubernetes DNS is a built-in service within the Kubernetes platform, designed to provide name resolution for services within a Kubernetes cluster. It simplifies the communication process between different services and pods within the cluster by allowing the use of hostnames instead of IP addresses.
This is essential because in a dynamic environment like Kubernetes, where pods are continuously created and destroyed, tracking and using IP addresses is very difficult.
The Kubernetes DNS service is automatically configured for each new Kubernetes cluster and assigns a DNS name to each service within the cluster.
This DNS name is then used to resolve to the service’s ClusterIP, the stable IP address assigned to the service within the cluster. This mechanism allows applications running within the cluster to easily discover and communicate with each other.
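For example, a service named my-service in the namespace my-namespace can be reached from anywhere in the cluster at my-service.my-namespace.svc.cluster.local (assuming the default cluster.local cluster domain), or simply as my-service from pods in the same namespace.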
What Is CoreDNS?
CoreDNS is the default DNS server for Kubernetes, replacing the previous default, kube-dns, as of Kubernetes 1.13. CoreDNS is a flexible, extensible authoritative DNS server that can serve as a cluster DNS.
It is a plugin-based DNS server, which means it can be easily extended with custom functionality. Furthermore, it supports multiple DNS protocols including UDP/TCP, TLS, DNS over HTTP/2, and DNS over QUIC.
CoreDNS is designed to be lightweight and easy to configure. It uses a minimal configuration file that can be modified to add or remove plugins, change the behavior of existing plugins, or even write new plugins.
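In a Kubernetes cluster this configuration (the Corefile) usually lives in a ConfigMap named coredns in the kube-system namespace. A simplified sketch of what it can look like (your cluster’s actual Corefile may enable additional plugins such as prometheus):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors                       # log errors to stdout
        health                       # liveness endpoint for the CoreDNS pods
        ready                        # readiness endpoint
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf   # forward non-cluster queries to the node's upstream resolvers
        cache 30                     # cache responses for 30 seconds
        loop
        reload                       # reload the Corefile when the ConfigMap changes
        loadbalance
    }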
CNI network plugins in Kubernetes
Container Network Interface (CNI) is a specification and library for configuring network interfaces in Linux containers. In Kubernetes, CNI is the standard way to provide networking to pods.
The main purpose of CNI is to allow different networking plugins to be used with container runtimes. This allows Kubernetes to be flexible and work with different networking solutions, such as Calico, Flannel, and Weave Net. CNI plugins are responsible for configuring network interfaces in pods, such as setting IP addresses, configuring routing, and managing network security policies.
In Kubernetes, the kubelet is responsible for setting up the network for a new Pod using the CNI plugin specified in the network configuration file located in the /etc/cni/net.d/ directory on the node. This configuration file contains the parameters needed to configure the network for the Pod. The CNI plugin binaries referenced by the configuration should be installed in the /opt/cni/bin directory, which is where Kubernetes looks for the plugins that manage network connectivity for Pods.
When a Pod is created, the kubelet reads the network configuration file and identifies the CNI plugin specified in it. The kubelet then loads the CNI plugin and invokes its “ADD” command with the Pod’s network configuration parameters.
The CNI plugin takes over and creates a network namespace, configures the network interface, and sets up routing and firewall rules based on the configuration parameters provided by the kubelet.
The kubelet saves the actual network configuration parameters used by the CNI plugin in a file in the Pod’s network namespace, located in the /var/run/netns/ directory on the node.
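For reference, here is a sketch of what a network configuration file in /etc/cni/net.d/ can look like, using the reference bridge plugin with host-local IPAM (the network name, bridge name, and subnet are illustrative):

{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}

The type and ipam.type fields name the plugin binaries that the kubelet expects to find in /opt/cni/bin, and the subnet is the per-node range from which the host-local IPAM plugin hands out Pod IP addresses.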
Thank you for reading. I hope you were able to understand and learn something new from my blog.
Happy Learning!