In Tencent Kubernetes Engine (TKE), pod networking is implemented as a high-performance pod network based on VPC at the IaaS layer, and the service proxy is provided by kube-proxy in either ipvs or iptables mode. For network access control in TKE clusters, we recommend that you use only the Network Policy feature of kube-router.
Note:
In TKE, kube-router serves only as a supplement to the kube-proxy feature, so you cannot completely replace kube-proxy with kube-router.
A network policy is a resource provided by Kubernetes to define the pod-based network isolation policy. It specifies whether a group of pods can communicate with other groups of pods and other network endpoints.
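For example, a minimal policy (all names here are hypothetical, for illustration only) that denies all ingress traffic to pods labeled app: web in the production namespace looks like this:

```yaml
# Illustrative example only: the namespace and labels are hypothetical.
# With no ingress rules listed, all ingress to the selected pods is denied.
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: web-deny-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
```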
Kube-router is a turnkey Kubernetes networking solution designed to simplify operations and improve performance. The latest version is 0.2.0. It provides the following three features: a service proxy, a firewall (network policy), and a router.
For more information, go to the Kube-router official website or Kube-router project.
Based on the latest official image version v0.2.1, the Tencent Cloud PaaS team provides the ccr.ccs.tencentyun.com/library/kube-router:v1 image. During project development, the Tencent Cloud PaaS team has actively contributed to the community, adding features and fixing bugs. The PRs that we submitted and that were merged into the community are listed as follows:
The Tencent Cloud PaaS team will continue to contribute to the community and provide Tencent Cloud image version upgrades.
On a server that can access both the Internet and the API server of the TKE cluster, run the following commands in sequence to deploy kube-router:
Note:
- If a public IP address is configured for a cluster node, you can run the following commands on the node.
- If no public IP address is configured for a cluster node, manually download the YAML file, copy its content to the node, save it as kube-router-firewall-daemonset.yaml, and then run the kubectl create command.
wget https://ask.qcloudimg.com/draft/4495365/4srd9nlfla.zip
unzip 4srd9nlfla.zip
kubectl create -f kube-router-firewall-daemonset.yaml
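After the deployment completes, you can confirm that a kube-router pod is running on every node. The commands below are standard kubectl commands; the pod name in the last command is a placeholder:

```shell
# Check that the kube-router DaemonSet has the expected number of pods
kubectl get daemonset kube-router -n kube-system

# List the kube-router pods and the nodes they run on
kubectl get pods -n kube-system -l k8s-app=kube-router -o wide

# Inspect the logs of one kube-router pod (replace the placeholder with a real pod name)
kubectl logs -n kube-system <kube-router-pod-name>
```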
The content of the kube-router-firewall-daemonset.yaml file is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-router-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-router
data:
  cni-conf.json: |
    {
      "name":"kubernetes",
      "type":"bridge",
      "bridge":"kube-bridge",
      "isDefaultGateway":true,
      "ipam": {
        "type":"host-local"
      }
    }
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-router
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kube-router
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-router
  namespace: kube-system
  labels:
    k8s-app: kube-router
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-router
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kube-router
      containers:
      - name: kube-router
        image: ccr.ccs.tencentyun.com/library/kube-router:v1
        args: ["--run-router=false", "--run-firewall=true", "--run-service-proxy=false", "--iptables-sync-period=5m", "--cache-sync-timeout=3m"]
        securityContext:
          privileged: true
        imagePullPolicy: Always
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        livenessProbe:
          httpGet:
            path: /healthz
            port: 20244
          initialDelaySeconds: 10
          periodSeconds: 3
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
      initContainers:
      - name: install-cni
        image: busybox
        imagePullPolicy: Always
        command:
        - /bin/sh
        - -c
        - set -e -x;
          if [ ! -f /etc/cni/net.d/10-kuberouter.conf ]; then
            TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
            cp /etc/kube-router/cni-conf.json ${TMP};
            mv ${TMP} /etc/cni/net.d/10-kuberouter.conf;
          fi
        volumeMounts:
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kube-router-cfg
          mountPath: /etc/kube-router
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: cni-conf-dir
        hostPath:
          path: /etc/cni/net.d
      - name: kube-router-cfg
        configMap:
          name: kube-router-cfg
Description of the args parameters:
- --run-firewall=true: enables the firewall (network policy) feature.
- --run-router=false: disables the BGP router feature, which TKE does not need because pod routing is provided by the VPC network.
- --run-service-proxy=false: disables the service proxy feature, which kube-proxy already provides.
- --iptables-sync-period=5m: the interval at which iptables rules are synchronized with the network policies.
- --cache-sync-timeout=3m: the maximum time to wait for the controller caches to sync at startup.
Allow all pods in namespace nsa to receive ingress traffic only from other pods in nsa, so that pods in other namespaces are isolated:
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: npa
  namespace: nsa
spec:
  ingress:
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
Deny all ingress traffic to all pods in namespace nsa (no ingress rules are defined):
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: npa
  namespace: nsa
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Allow all pods in namespace nsa to receive ingress traffic on TCP port 6379 only from namespaces labeled app: nsb:
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: npa
  namespace: nsa
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          app: nsb
    ports:
    - protocol: TCP
      port: 6379
  podSelector: {}
  policyTypes:
  - Ingress
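For the namespaceSelector to match, the source namespace must carry the app: nsb label. A namespace manifest with that label (assuming the namespace itself is named nsb) would look like this:

```yaml
# Assumption: the source namespace is named nsb; only the label matters for the policy.
apiVersion: v1
kind: Namespace
metadata:
  name: nsb
  labels:
    app: nsb    # matched by the namespaceSelector in the policy
```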
Allow all pods in namespace nsa to send egress traffic only to the 14.215.0.0/16 CIDR block on TCP port 5978:
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: npa
  namespace: nsa
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 14.215.0.0/16
    ports:
    - protocol: TCP
      port: 5978
  podSelector: {}
  policyTypes:
  - Egress
Allow all pods in the default namespace to receive ingress traffic on TCP port 80 only from the 14.215.0.0/16 CIDR block:
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: npd
  namespace: default
spec:
  ingress:
  - from:
    - ipBlock:
        cidr: 14.215.0.0/16
    ports:
    - protocol: TCP
      port: 80
  podSelector: {}
  policyTypes:
  - Ingress
| Test Case | Test Result |
|---|---|
| Pods in different namespaces are isolated from one another, while pods in the same namespace can communicate with each other | Pass |
| Pods in both the same namespace and different namespaces are isolated from one another | Pass |
| Pods in different namespaces are isolated from one another, and namespace B can access namespace A as specified in the allowlist | Pass |
| A specified namespace can access the specified CIDR block outside the cluster, and all other external IP addresses are blocked | Pass |
| Pods in different namespaces are isolated from one another, and namespace B can access the specified pods and ports in namespace A as specified in the allowlist | Pass |
| In all the preceding test cases, isolation also takes effect when the source pod and the destination pod are on the same node | Pass |
For more information on the functional test cases, see the attached kube-router Test Case.xlsx.zip.
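As a sketch, an individual isolation case can be reproduced with a temporary client pod. The pod IP below is a placeholder; use the address reported by kubectl get pods -o wide in the target namespace:

```shell
# Start a temporary busybox pod in namespace nsb and try to reach a pod in nsa.
# <nsa-pod-ip> is a placeholder for the target pod's IP address.
kubectl run np-test --rm -it --image=busybox -n nsb -- \
  wget -qO- --timeout=3 http://<nsa-pod-ip>:80
# If the network policy blocks the traffic, the request times out.
```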
A large number of Nginx services are deployed in the Kubernetes cluster, and a fixed service is benchmarked with ApacheBench (ab). The QPS measured with kube-router enabled is compared with the QPS measured with kube-router disabled to quantify the performance overhead introduced by kube-router.
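For reference, a benchmark run with ab might look like the following. The service IP is a placeholder, and the request count and concurrency are illustrative values, not the ones used in the test above:

```shell
# ApacheBench: 10,000 requests at a concurrency of 100 against the fixed Nginx service.
# <service-ip> is a placeholder for the ClusterIP of the service under test.
ab -n 10000 -c 100 http://<service-ip>/
```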
The network policy applied in the test is as follows:
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: npd
  namespace: default
spec:
  ingress:
  - from:
    - ipBlock:
        cidr: 14.215.0.0/16
    ports:
    - protocol: TCP
      port: 9090
  - from:
    - ipBlock:
        cidr: 14.215.0.0/16
    ports:
    - protocol: TCP
      port: 8080
  - from:
    - ipBlock:
        cidr: 14.215.0.0/16
    ports:
    - protocol: TCP
      port: 80
  podSelector: {}
  policyTypes:
  - Ingress
As the number of pods increases from 2,000 to 8,000, the performance with Kube-router enabled is 10% to 20% lower than that with Kube-router disabled.