Kubernetes: Taints and Tolerations

Claire Lee
4 min read · Mar 8, 2023


Taints and tolerations in Kubernetes are used to restrict which pods can be scheduled on which nodes in a cluster. Taints are applied to nodes to indicate that certain pods should not be scheduled on them unless they tolerate the taint. Tolerations are set on pods to indicate that they are willing to be scheduled on nodes with the corresponding taints. There are three types of taint effect: NoSchedule, PreferNoSchedule, and NoExecute. Taints and tolerations have some limitations, as they do not guarantee that a pod with matching tolerations will always be placed on a node with a matching taint.


Taints and Tolerations

In Kubernetes, taints and tolerations are used to control which nodes are able to run specific pods.


Taints

A taint is a key-value property applied to a node to indicate that the node has a restriction or preference for the pods it runs. A taint effectively repels pods from being scheduled on that node unless a pod has a matching toleration.
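For example, the following command (the node name and key are hypothetical) taints a node so that only pods tolerating dedicated=gpu are scheduled onto it:

$ kubectl taint nodes node1 dedicated=gpu:NoSchedule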


Tolerations

A toleration is a property added to a pod that allows it to be scheduled on a node with a matching taint. Tolerations enable pods to be scheduled on nodes that would otherwise repel them due to a taint.
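A minimal sketch of the matching toleration in a pod spec, assuming the hypothetical dedicated=gpu:NoSchedule taint from above:

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "gpu"
  effect: "NoSchedule"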


Analogy

In a restaurant hosting a VIP feast, certain highly-skilled sous chefs are “tainted” for the event: they are reserved to cook only for the VIP guests, just as a tainted node repels ordinary pods. A dish marked as approved for the VIP menu is like a “toleration”: only dishes carrying that mark may be prepared by the reserved chefs, just as only pods with a matching toleration may be scheduled on a tainted node.

The Limitations of Taints and Tolerations: Matching ≠ Placement


Taints and tolerations provide a way to control which nodes can run certain pods based on restrictions or preferences. However, it’s important to note that they do not guarantee that a pod with matching tolerations will always be placed on a node with a matching taint.

This is because a toleration only allows a pod to be scheduled on a tainted node; it does not require it. The scheduler may still place the pod on any other suitable node, depending on factors such as resource constraints or other scheduling preferences specified by the user.

Therefore, while taints and tolerations can be useful for controlling pod placement based on certain restrictions or preferences, they should not be relied on as the sole method for ensuring pod placement on specific nodes. Other scheduling options, such as node affinity, should also be considered for more fine-grained control over pod placement.
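As an illustration, here is a sketch that combines a toleration with node affinity so the pod is both allowed onto and steered toward the dedicated nodes. Node affinity matches node labels rather than taints, so this assumes the tainted nodes also carry a hypothetical dedicated=gpu label:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values:
            - gpu
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"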

Taint Effect

The taint effect is the behavior that Kubernetes applies to Pods that do not tolerate the taint. There are three taint effects:

  1. NoSchedule
     Prevents Kubernetes from scheduling any Pod that does not tolerate the taint onto that node.
  2. PreferNoSchedule
     A “soft” version of NoSchedule: the scheduler tries to avoid placing Pods that do not tolerate the taint on the node, but this is not guaranteed.
  3. NoExecute
     Evicts any Pod already running on the node that does not tolerate the taint when the taint is added or updated, and prevents new non-tolerating Pods from being scheduled there. See the tolerationSeconds sketch after this list.
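For the NoExecute effect, a toleration can also specify tolerationSeconds, which lets the Pod stay bound to the node for a limited time after the taint is added before it is evicted. A minimal sketch, reusing the hypothetical key from above:

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "gpu"
  effect: "NoExecute"
  tolerationSeconds: 3600  # evicted about an hour after the taint is added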

Pod with YAML

A “toleration” is a specification that is added to a pod to indicate that the pod can tolerate a taint.

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: <pod_name>
  labels:
    <key1>: <value1>
    <key2>: <value2>
    # ...
    <keyN>: <valueN>
spec:
  containers:
  - name: <container1_name>
    image: <image>
  - name: <container2_name>
    image: <image>
  tolerations:
  ########################
  #   pod tolerations    #
  ########################

with value:

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "<taint_effect>"

without value:

tolerations:
- key: "key1"
  operator: "Exists"
  effect: "<taint_effect>"
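Putting the pieces together, here is an end-to-end sketch (node name, key, value, and image are hypothetical): taint a node, then create a pod that tolerates the taint.

$ kubectl taint nodes node1 app=blue:NoSchedule

# blue-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"

Keep in mind that this only allows blue-pod onto node1; it does not force it there. Pair the toleration with node affinity or a nodeSelector if the pod must land on that node.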

Commands

  1. Add a taint to a node

$ kubectl taint nodes <node_name> <key1>=<value1>:<taint_effect>

  2. Remove a taint from a node (the trailing hyphen removes the taint)

$ kubectl taint nodes <node_name> <key1>-
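For instance, with a hypothetical node name and key:

# Dedicate node1 to "blue" workloads
$ kubectl taint nodes node1 app=blue:NoSchedule

# Inspect the node's taints
$ kubectl describe node node1 | grep Taints

# Remove the taint again
$ kubectl taint nodes node1 app-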

These are my personal notes for CKA exam preparation on Kubernetes. Please feel free to correct me if you notice any errors. 😊
