Node affinity is an attribute of pods that attracts them to a set of nodes (either as a preference or a mandatory requirement). Taints are the opposite: they allow a node to repel a set of pods.
Tolerations are applied to pods, and they allow (but do not require) the pods to be scheduled to nodes with matching taints.
Taints and tolerations work together to ensure that pods are not scheduled to inappropriate nodes. One or more taints are applied to a node, and the node should not accept any pods that do not tolerate the taints.
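For reference, and independent of the Fluid example that follows, a plain pod tolerating a node taint looks like the following minimal sketch (the key1=value1 taint and the pod name are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  tolerations:
    # Matches a node tainted with: kubectl taint nodes <node> key1=value1:NoSchedule
    - key: key1
      operator: Equal
      value: value1
      effect: NoSchedule
  containers:
    - name: demo
      image: nginx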
Because a Dataset in Fluid is also subject to scheduling, tolerations can be defined in the Dataset resource object as well. In this way, you can control where data caches are placed in a Kubernetes cluster, just as you schedule your pods.
mkdir <any-path>/tolerations
cd <any-path>/tolerations
Check all nodes
kubectl get no
NAME            STATUS   ROLES    AGE    VERSION
192.168.1.146   Ready    <none>   200d   v1.18.4-tke.13
Configure a taint for a node
kubectl taint nodes 192.168.1.146 hbase=true:NoSchedule
In the next step, we will check the taint configuration of this node.
Check the node again
kubectl get node 192.168.1.146 -o yaml | grep taints -A3
  taints:
  - effect: NoSchedule
    key: hbase
    value: "true"
Now the NoSchedule taint is configured on the node, and no dataset can be placed on this node by default.
Check the Dataset resource object to be created
apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: hbase
spec:
  mounts:
    - mountPoint: https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/stable/
      name: hbase
  tolerations:
    - key: hbase
      operator: Equal
      value: "true"
To facilitate testing, mountPoint is set to WebUFS in this example. If you want to mount COS, see Mounting COS (COSN) to GooseFS. In the spec attribute of the Dataset resource object, we define a tolerations sub-attribute to specify that data caches can be placed on the tainted node.
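The tolerations field here follows the standard Kubernetes toleration schema. As a variant of the manifest above (an assumption worth verifying against your Fluid version, not part of the original example), a toleration with operator: Exists would match the hbase taint regardless of its value:

  tolerations:
    # Matches any taint with key "hbase", whatever its value
    - key: hbase
      operator: Exists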
Create the Dataset resource object
kubectl create -f dataset.yaml
dataset.data.fluid.io/hbase created
Check the GooseFSRuntime resource object to be created
apiVersion: data.fluid.io/v1alpha1
kind: GooseFSRuntime
metadata:
  name: hbase
spec:
  replicas: 1
  tieredstore:
    levels:
      - mediumtype: SSD
        path: /mnt/disk1
        quota: 2G
        high: "0.95"
        low: "0.7"
The above configuration file contains a lot of GooseFS-related configuration, and Fluid will start a GooseFS instance based on it. The spec.replicas attribute is set to 1, indicating that Fluid will start a GooseFS instance containing 1 GooseFS master and 1 GooseFS worker.
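The tieredstore section above defines a single SSD cache tier. As a hedged sketch (the MEM medium type and the multi-level layout are assumptions based on Fluid's runtime conventions, not part of this example), a two-tier cache might look like:

  tieredstore:
    levels:
      # Hot data is cached in memory first...
      - mediumtype: MEM
        path: /dev/shm
        quota: 1G
        high: "0.95"
        low: "0.7"
      # ...and overflows to the SSD tier
      - mediumtype: SSD
        path: /mnt/disk1
        quota: 2G
        high: "0.95"
        low: "0.7"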
Create the GooseFSRuntime resource object and check its status
kubectl create -f runtime.yaml
goosefsruntime.data.fluid.io/hbase created

kubectl get pod -o wide
NAME                 READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
hbase-fuse-n4tnc     1/1     Running   0          63m   192.168.1.146   192.168.1.146   <none>           <none>
hbase-master-0       2/2     Running   0          85m   192.168.1.146   192.168.1.146   <none>           <none>
hbase-worker-qs26l   2/2     Running   0          63m   192.168.1.146   192.168.1.146   <none>           <none>
As shown above, the GooseFS worker has started and is running on the tainted node.
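This is expected: Fluid passes the tolerations declared on the Dataset through to the GooseFS master, worker, and FUSE pods it launches, so these components can land on a node that would otherwise repel them.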
Check the status of the GooseFSRuntime resource object
kubectl get goosefsruntime hbase -o wide
NAME    READY MASTERS   DESIRED MASTERS   MASTER PHASE   READY WORKERS   DESIRED WORKERS   WORKER PHASE   READY FUSES   DESIRED FUSES   FUSE PHASE   AGE
hbase   1               1                 Ready          1               1                 Ready          1             1               Ready        4m3s
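At this point, Fluid should also have bound the Dataset and created a PersistentVolumeClaim named after it, which the sample application below mounts. A quick check (the claim name follows Fluid's convention of reusing the Dataset name; output omitted here):

kubectl get pvc hbase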
Check the application to be created
A sample application is provided to demonstrate how data cache affinity scheduling is implemented in Fluid. First, you need to check the application:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  serviceName: "nginx"
  selector: # define how the deployment finds the pods it manages
    matchLabels:
      app: nginx
  template: # define the pods specifications
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
        - key: hbase
          operator: Equal
          value: "true"
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: /data
              name: hbase-vol
      volumes:
        - name: hbase-vol
          persistentVolumeClaim:
            claimName: hbase
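Note that the pod template declares the same hbase toleration as the Dataset; without it, the pod itself would be repelled by the tainted node even though the cache already lives there. The dataset is mounted at /data through the hbase-vol volume, which is backed by the PersistentVolumeClaim that Fluid created for the Dataset.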
Run the application
kubectl create -f app.yaml
statefulset.apps/nginx created
Check the application running status
kubectl get pod -o wide -l app=nginx
NAME      READY   STATUS    RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
nginx-0   1/1     Running   0          2m5s   192.168.1.146   192.168.1.146   <none>           <none>
As shown above, the Nginx pod has started successfully and is running on the tainted node.
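You can also check how much of the dataset has been cached as the application reads data; the exact columns shown depend on your Fluid version:

kubectl get dataset hbase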
Clean up the environment

kubectl delete -f .
kubectl taint nodes 192.168.1.146 hbase=true:NoSchedule-

The trailing - in the second command removes the hbase taint from the node.