Kubernetes: Node Selector and Node Affinity

Claire Lee
5 min read · Mar 9, 2023


Node selector and node affinity are two ways to specify which nodes a pod should be scheduled on in a Kubernetes cluster, based on criteria such as node labels. Node selector uses a basic key-value matching mechanism, while node affinity provides more advanced rule-based matching. Taints and tolerations complement these features: a taint marks a node so that it repels pods, and only pods with a matching toleration can be scheduled onto it, which makes it possible to reserve specific nodes for specific workloads.

In Kubernetes, Node Selector and Node Affinity are used to control the placement of pods onto specific nodes in a cluster. Both mechanisms use labels to specify constraints, but they differ in their flexibility and expressiveness.

Node Selector

Node Selector is a simple way to specify which nodes a pod can be scheduled onto based on a set of label key-value pairs. A pod can declare a set of node selector labels, and the Kubernetes scheduler will only consider nodes whose labels include every key-value pair in the selector. Node Selector is limited to exact matches; it cannot express logical operators or more complex matching rules.

In a restaurant, think of each sous chef as a node, and each dish they prepare as a pod. Sous chefs who specialize in Chinese cuisine should be assigned to cook dishes from the Chinese menu. This is where the concept of node selector comes in — it helps to identify which sous chef has the necessary skills to prepare a particular dish.
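Continuing the analogy, here is a minimal sketch of a nodeSelector in practice: the pod below can only be scheduled on nodes labeled cuisine=chinese. The label, pod name, and image used here are hypothetical; any label you have applied to a node works the same way.

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mapo-tofu                # hypothetical pod name
spec:
  containers:
    - name: main
      image: nginx               # placeholder image
  nodeSelector:
    cuisine: chinese             # only nodes labeled cuisine=chinese are considered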

Node Affinity

Node Affinity provides a more flexible and expressive way to specify constraints on node selection. It enables a conditional approach in the matching process, allowing users to specify more complex rules for selecting nodes. Node Affinity uses a combination of label keys, operators, and values to match nodes. The operators include In, NotIn, Exists, DoesNotExist, Gt, and Lt. Expressions listed under the same selector term must all match (a logical AND), while multiple selector terms are alternatives (a logical OR). This allows users to create rules based on node attributes exposed as labels, such as CPU architecture or region, as well as any custom labels.

In a restaurant, think of each sous chef as a node, and each dish they prepare as a pod. Some dishes may be more complex and require a sous chef with an even more specific set of skills. This is where the concept of node affinity comes in — it helps to identify which sous chef has the necessary skills to prepare a more complex dish.
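As a rough sketch (the label keys and values are hypothetical), the affinity block below only accepts nodes whose cuisine label is chinese or sichuan and that also carry a wok-station label; the requiredDuringSchedulingIgnoredDuringExecution type it uses is explained in the next section.

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cuisine
                operator: In          # label value must be one of the listed values
                values:
                  - chinese
                  - sichuan
              - key: wok-station
                operator: Exists      # label key must be present; its value does not matter

Both expressions sit in the same selector term, so a node must satisfy both of them (a logical AND).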

Node Affinity Types

  1. requiredDuringSchedulingIgnoredDuringExecution
    - Ensures that the pod is only scheduled on nodes that match the specified node selector rules.
    - If no nodes match the rules, the pod remains unscheduled.
  2. preferredDuringSchedulingIgnoredDuringExecution
    - The scheduler will attempt to schedule the pod on a node that matches one of the rules (a sketch of this type follows the list).
    - If no nodes match any of the rules, the pod may be scheduled on any node.
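A short sketch of the preferred type, which attaches a weight between 1 and 100 to each rule; the zone value below is hypothetical:

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 80                             # higher weight = stronger preference
          preference:
            matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - us-east-1a                   # hypothetical zone label value

If no node satisfies the preference, the pod is still scheduled on whatever node is otherwise suitable.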

Node Selector vs. Node Affinity

Node selector is simpler and easier to use but may not provide the granularity and control required for more complex use cases. Node affinity, on the other hand, offers more flexibility and control but requires more configuration and can be more challenging to troubleshoot if misconfigured. Ultimately, the choice between node selector and node affinity depends on the specific requirements and complexity of your Kubernetes environment.

Taints, Tolerations and Node Selector/Affinity

Taints and tolerations do not guarantee that a pod with matching tolerations will always be placed on a node with a matching taint. (Check Kubernetes: Taints and Tolerations for details)

By combining taints and tolerations with node selectors or affinity, you can create a more robust scheduling strategy for your pods in Kubernetes. Taints and tolerations keep pods off nodes they should not run on, while node selectors or affinity steer a pod toward the nodes it should run on. This combination gives you more control over pod scheduling and helps ensure that the pod lands on a node that meets the desired criteria.
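As an illustrative sketch (the taint, label, and names here are all hypothetical), the nodes reserved for the Chinese menu could be tainted and labeled, and the pod could both tolerate the taint and require the label:

$ kubectl taint nodes <node_name> dedicated=chinese-menu:NoSchedule
$ kubectl label nodes <node_name> cuisine=chinese

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kung-pao-chicken               # hypothetical pod name
spec:
  containers:
    - name: main
      image: nginx                     # placeholder image
  tolerations:
    - key: dedicated
      operator: Equal
      value: chinese-menu
      effect: NoSchedule               # lets this pod onto the tainted nodes
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cuisine
                operator: In
                values:
                  - chinese            # keeps this pod off every other node

The taint keeps other pods away from the reserved nodes, while the required affinity keeps this pod away from all the other nodes.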

Manifest File

Node selector and node affinity are both defined in the spec section of the Pod manifest.

Node selector

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: <pod_name>
  labels:
    <key1>: <value1>
    <key2>: <value2>
    ...
    <keyN>: <valueN>
spec:
  containers:
    - name: <container1_name>
      image: <image>
    - name: <container2_name>
      image: <image>
  nodeSelector:
    <key1>: <value1>
    <key2>: <value2>
    ...
    <keyM>: <valueM>

Node affinity

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: <pod_name>
  labels:
    <key1>: <value1>
    <key2>: <value2>
    ...
    <keyN>: <valueN>
spec:
  containers:
    - name: <container1_name>
      image: <image>
    - name: <container2_name>
      image: <image>
  affinity:
    nodeAffinity:
      <nodeAffinity_type>:
        nodeSelectorTerms:
          - matchExpressions:
              - key: <key1>
                operator: <operator>
                values:
                  - <value1>
                  - <value2>
                  ...
                  - <valueM>

Commands

  • Add or modify labels on a node
$ kubectl label nodes <node_name> <key>=<value>
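
Two related standard kubectl commands that are handy when verifying scheduling behavior:

  • Show the labels on all nodes
$ kubectl get nodes --show-labels

  • Remove a label from a node (note the trailing dash after the key)
$ kubectl label nodes <node_name> <key>-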

These are my personal notes for CKA exam preparation on Kubernetes. Please feel free to correct me if you notice any errors. 😊
