Kubernetes has one of the most sophisticated schedulers, and it handles the placement strategy of pods. Based on the resource requirements declared in the pod specification, the Kubernetes scheduler automatically selects the most appropriate node to run the pod.
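For example, the scheduler only considers nodes with enough allocatable CPU and memory to satisfy a pod's requests. A minimal sketch (the pod name, container image, and request values here are arbitrary placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:
        cpu: "500m"     # the scheduler filters out nodes without 0.5 CPU free
        memory: "256Mi" # and nodes without 256Mi of allocatable memory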

But in many real-world situations we have to intervene in the scheduling process to match a pod with a specific node, or to match two specific pods with each other. For this, Kubernetes offers powerful mechanisms to manage and control the placement logic of pods.

This article explores the key features of Kubernetes that influence the default scheduling decisions.

Node Affinity/Anti-Affinity

Kubernetes has always relied on labels and selectors to group resources. For example, a service uses a selector to filter pods carrying specific labels; only those pods can receive traffic from the service. Labels and selectors allow rules to be evaluated with simple equality-based conditions (= and !=). Through the nodeSelector feature, which constrains a pod to be scheduled onto specific nodes, this technique is extended to nodes.
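For instance, a pod can be pinned to a subset of nodes with nodeSelector; a minimal sketch, assuming the target nodes were labeled disktype=ssd beforehand (e.g. with kubectl label nodes):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  containers:
  - name: app
    image: nginx
  nodeSelector:
    disktype: ssd  # only nodes carrying this exact label are considered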

In addition, labels and selectors support set-based queries, which bring advanced filtering with the in, notin, and exists operators. Combined with equality-based requirements, set-based requirements provide a sophisticated technique for filtering resources in Kubernetes.
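As an illustration, a set-based query can be issued directly from kubectl (the environment label is a hypothetical example):

# select pods whose environment label is either production or qa
kubectl get pods -l 'environment in (production, qa)'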

Node affinity/anti-affinity applies these set-based filtering expressions to node labels to define the placement logic of pods. Annotations, by contrast, carry metadata that is not exposed to selectors, which means annotation keys cannot be used to query and filter resources; affinity expressions therefore match on node labels rather than annotations. Anti-affinity ensures that a pod is not scheduled onto a node that matches the rules.

Beyond supporting complex logic in queries, node affinity/anti-affinity can impose hard and soft rules on the placement logic. A hard rule enforces a strict policy that may prevent a pod from being scheduled onto any node that fails the criteria. A soft rule first checks whether any node matches the specified conditions; if none does, the pod is placed using the default scheduling behavior. The expressions requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution implement hard and soft rules, respectively.

Here are examples of node affinity/anti-affinity under soft and hard rules. First, a soft rule:

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: In
          values: ["asia-south1-a"]

The above rule instructs the Kubernetes scheduler to try to place the pod on a node running in the asia-south1-a zone of a GKE cluster. If no such node is available, the scheduler falls back to the standard placement logic.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: NotIn
          values: ["asia-south1-a"]
The above rule uses the NotIn operator to enforce anti-affinity. This is a hard rule: it ensures that no pod is scheduled onto a GKE node running in the asia-south1-a zone.

Pod Affinity/Anti-Affinity

While node affinity/anti-affinity handles the relationship between pods and nodes, there are scenarios where we need to ensure that certain pods run together, or that two specific pods never share the same node. Pod affinity/anti-affinity helps us enforce this kind of granular placement logic for our applications.

Similar to the expressions used in node affinity/anti-affinity, pod affinity/anti-affinity enforces hard and soft rules through requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. Node affinity can also be mixed and matched with pod affinity to define complex placement logic, as in the sketch below.
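A minimal sketch of such a combination, assuming an application whose pods carry the label app=web: the pod must land in the asia-south1-a zone and should, if possible, share a node with a web pod:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:   # hard rule on the zone
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: In
          values: ["asia-south1-a"]
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:  # soft rule on co-location
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["web"]
        topologyKey: "kubernetes.io/hostname"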

To better understand the concept, imagine we have a web deployment and a cache deployment, each with three replicas running in a 3-node cluster. To ensure low latency between the web and cache pods, we want them to run on the same nodes. At the same time, we don't want more than one cache pod on any single node. This scenario calls for the following strategy: run one and only one cache pod on each node, together with a web pod.

First, we deploy the cache with an anti-affinity rule, which prevents more than one cache pod from running on a node:
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - redis
      topologyKey: "kubernetes.io/hostname"
The topologyKey field uses one of the default labels attached to nodes to dynamically filter nodes by name. Note that we use a podAntiAffinity expression with the In operator to apply the rule.
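The kubernetes.io/hostname label referenced above is one of the well-known labels the kubelet attaches to every node; you can inspect them with:

# list nodes together with their labels
kubectl get nodes --show-labels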

Assuming the three cache pods are up and running, we now want to deploy the web pods on the same nodes as the cache pods. We use podAffinity to implement this logic:
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - redis
    topologyKey: "kubernetes.io/hostname"
The above code tells the Kubernetes scheduler to find the nodes running cache pods and deploy the web pods there.

Apart from node and pod affinity/anti-affinity, we can also use taints and tolerations to define custom placement logic. Furthermore, we can write a custom scheduler that takes over the scheduling logic from the default scheduler.
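As a brief sketch of how taints and tolerations pair up: a taint repels pods from a node unless they explicitly tolerate it. The node name and the dedicated=cache key/value pair below are purely illustrative choices:

# taint a node so that only tolerating pods may be scheduled there
kubectl taint nodes node1 dedicated=cache:NoSchedule

A pod then opts in with a matching toleration in its spec:

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "cache"
  effect: "NoSchedule"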
