Kubernetes: Container Network Interface (CNI)
In Kubernetes, each Pod is assigned a unique IP address and can communicate with other Pods without requiring NAT. To provide networking to Pods, Kubernetes uses the Container Network Interface (CNI), a specification and library for configuring network interfaces in Linux containers. The kubelet is responsible for setting up the network for new Pods using the CNI plugin specified in the configuration file located in the /etc/cni/net.d/ directory on the node.
Pod Networking
Based on the Kubernetes network model, the key concepts for Pod networking in Kubernetes include:
- Each Pod has a unique cluster-wide IP address.
- Pods can communicate with all other Pods across nodes without NAT.
- Agents on a node can communicate with all Pods on that node.
Container Network Interface (CNI)
The Container Network Interface (CNI) is a specification and library for configuring network interfaces in Linux containers. In Kubernetes, CNI is the standard way to provide networking to Pods.
The main purpose of CNI is to decouple networking plugins from container runtimes. This keeps Kubernetes flexible and lets it work with different networking solutions, such as Calico, Flannel, and Weave Net. CNI plugins are responsible for configuring network interfaces in Pods: setting IP addresses, configuring routing, and managing network security policies.
Kubelet and CNI: Managing Networks for Pods
In Kubernetes, the kubelet is responsible for setting up the network for a new Pod using the CNI plugin specified in the network configuration file located in the /etc/cni/net.d/ directory on the node. This configuration file contains the parameters needed to configure the network for the Pod.
The CNI plugins referenced by the configuration must be installed in the /opt/cni/bin directory, which is where Kubernetes stores the CNI plugin binaries that manage network connectivity for Pods.
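When more than one configuration file exists in /etc/cni/net.d/, the files are considered in lexicographic order and the first valid one is used, which is why the files typically carry numeric prefixes such as 10-. A small sketch of that selection logic (the file list here is made up for illustration):

```python
# Hypothetical contents of /etc/cni/net.d/ on a node.
files = ["99-loopback.conf", "10-calico.conflist", "README"]

# Only CNI config files count; the lexicographically first one wins.
candidates = sorted(f for f in files
                    if f.endswith((".conf", ".conflist", ".json")))
chosen = candidates[0]
print(chosen)  # 10-calico.conflist
```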
When a Pod is created, the kubelet reads the network configuration file and identifies the CNI plugin specified in it. The kubelet then loads the CNI plugin and invokes its "ADD" command with the Pod's network configuration parameters. The CNI plugin takes over: it configures the network interface inside the Pod's network namespace and sets up routing and firewall rules based on the configuration parameters provided by the kubelet. The Pod's network namespace itself is exposed as a file under the /var/run/netns/ directory on the node.
Finally, the kubelet notifies the container runtime, such as Docker, that the network is ready for the Pod to start.
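The "ADD" call above follows the CNI exec protocol: the caller sets a handful of CNI_* environment variables (CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME, CNI_PATH) and pipes the network configuration JSON to the plugin's stdin; the plugin replies with a result JSON on stdout. The sketch below simulates this with a stand-in plugin script that just echoes a fixed result (the container ID, netns path, and IP are made up; real plugins such as bridge live in /opt/cni/bin):

```python
import json
import os
import subprocess
import tempfile
import textwrap

# A stand-in "CNI plugin": a shell script that consumes the network
# config from stdin and prints a CNI result JSON, as a real plugin would.
fake_plugin = textwrap.dedent("""\
    #!/bin/sh
    cat > /dev/null
    echo '{"cniVersion": "0.3.1", "ips": [{"version": "4", "address": "10.244.0.5/16"}]}'
""")

with tempfile.TemporaryDirectory() as cni_bin:
    plugin_path = os.path.join(cni_bin, "bridge")
    with open(plugin_path, "w") as f:
        f.write(fake_plugin)
    os.chmod(plugin_path, 0o755)

    net_conf = {"cniVersion": "0.3.1", "name": "mynet", "type": "bridge"}

    # Environment variables defined by the CNI spec for an ADD call.
    env = dict(os.environ,
               CNI_COMMAND="ADD",
               CNI_CONTAINERID="example-container-id",  # hypothetical ID
               CNI_NETNS="/var/run/netns/example",      # hypothetical netns path
               CNI_IFNAME="eth0",
               CNI_PATH=cni_bin)

    out = subprocess.run([plugin_path], input=json.dumps(net_conf),
                         env=env, capture_output=True, text=True).stdout
    result = json.loads(out)
    print(result["ips"][0]["address"])  # 10.244.0.5/16
```

The same protocol is used for the "DEL" command when the Pod is torn down, with the plugin releasing the addresses it allocated.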
Network configuration file
Example: 10-cni-plugin-example.conf
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "mybridge",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
- cniVersion: The version of the CNI specification that the configuration file adheres to.
- name: A name that uniquely identifies the network configuration.
- type: The type of network plugin to use.
- bridge: The name of the bridge device to create.
- isGateway: A boolean value that specifies whether the bridge device should act as the default gateway for containers.
- ipMasq: A boolean value that specifies whether to enable IP masquerading for traffic leaving the network.
- ipam: The IP address management plugin to use. In this example it is set to "host-local", which allocates IP addresses from a configured address range and keeps the allocation state locally on the node.
- subnet: The subnet from which to allocate IP addresses.
- routes: The routing table entries to add to the container's network namespace.
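As a quick sanity check, the example configuration above can be parsed and inspected with a few lines of Python (the checks are illustrative, not a full CNI schema validation):

```python
import ipaddress
import json

config_text = """
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "mybridge",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [{"dst": "0.0.0.0/0"}]
  }
}
"""

conf = json.loads(config_text)

# The subnet the host-local IPAM plugin will allocate Pod IPs from.
subnet = ipaddress.ip_network(conf["ipam"]["subnet"])
print(conf["type"], subnet.num_addresses)  # bridge 65536

# The default route injected into each Pod's network namespace.
assert conf["ipam"]["routes"][0]["dst"] == "0.0.0.0/0"
```

A /16 subnet gives 65,536 addresses, which bounds how many Pod IPs this network can hand out across the cluster.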
These are my personal notes for CKA exam preparation on Kubernetes. Please feel free to correct me if you notice any errors. 😊