NGINX Ingress offers powerful features, excellent performance, and supports various deployment methods. This document describes how to deploy NGINX Ingress on Tencent Kubernetes Engine (TKE) using three solutions: Deployment + LB, DaemonSet + HostNetwork + LB, and Deployment + LB Direct Connection to Pods.
NGINX Ingress is an implementation method of Kubernetes ingress resources. It watches ingress resources in a Kubernetes cluster to convert ingress rules into NGINX configurations and enable NGINX to forward L7 traffic, as shown below.
NGINX Ingress supports the following two implementation methods. This document describes the implementation method provided by the Kubernetes open-source community.
The following describes three solutions for deploying NGINX Ingress on TKE.
The simplest way to deploy NGINX Ingress on TKE is to deploy the NGINX Ingress controller as a Deployment workload and create a LoadBalancer service for it. A CLB is automatically created for the service or the service is bound to an existing CLB. The CLB receives external traffic and forwards the traffic to NGINX Ingress, as shown below.
Currently, LoadBalancer services on TKE are implemented based on NodePort by default. The CLB binds the NodePorts of different nodes as its backend real servers (RS) and forwards traffic to these NodePorts. The nodes then use iptables or IPVS to forward the traffic to the backend pods of the service, that is, the pods where the NGINX Ingress controller runs. If nodes are added or deleted later, the CLB automatically updates its NodePort bindings.
To install NGINX Ingress, run the following command:
kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-deployment.yaml -n nginx-ingress
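The manifest above creates, among other resources, a Service of type LoadBalancer for the controller. The following is a minimal sketch of what such a service looks like; the names, labels, and ports are illustrative assumptions, and the actual manifest may differ.

```yaml
# Sketch of a LoadBalancer service for the NGINX Ingress controller.
# TKE provisions a CLB for this service and binds node NodePorts as RS.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # illustrative name
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
    component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

Once the CLB is provisioned, its address appears in the service's EXTERNAL-IP field (for example, via `kubectl get svc -n nginx-ingress`).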
In solution 1, traffic is forwarded through NodePorts, which causes the following issues:
Solution 2 addresses these issues as follows:
NGINX Ingress uses hostNetwork, and the CLB is bound directly to the IP addresses and ports (80 or 443) of the nodes instead of to NodePorts. With hostNetwork, the pods of NGINX Ingress cannot be scheduled to the same node, because they would conflict over the same listening ports. To avoid such conflicts, select certain nodes as edge nodes dedicated to NGINX Ingress, label them, and deploy NGINX Ingress on these nodes as a DaemonSet workload. The following figure shows the architecture.
To install NGINX Ingress, perform the following steps:
kubectl label node 10.0.0.3 nginx-ingress=true
kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-daemonset-hostnetwork.yaml -n nginx-ingress
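The key parts of this approach are `hostNetwork: true` and a `nodeSelector` matching the label applied in the first step. The following fragment is an illustrative sketch of such a DaemonSet, not the exact content of the manifest above; field values such as the image tag are assumptions.

```yaml
# Illustrative DaemonSet fragment: run the controller only on labeled
# edge nodes, listening on the node's own ports 80/443.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
  template:
    metadata:
      labels:
        app: nginx-ingress
        component: controller
    spec:
      hostNetwork: true                  # bind directly to node ports 80/443
      dnsPolicy: ClusterFirstWithHostNet # keep cluster DNS with hostNetwork
      nodeSelector:
        nginx-ingress: "true"            # matches the label set via kubectl label
      containers:
      - name: controller
        image: registry.example.com/nginx-ingress-controller:latest # placeholder image
```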
Solution 2 is superior to solution 1 but still has the following issues:
Solution 3 addresses these problems as follows:
kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-deployment-eni.yaml -n nginx-ingress
In Solution 2: Daemonset + HostNetwork + LB, the CLB is manually managed. When you create a CLB, you can select a public or private network. In Solution 1: Deployment + LB and Solution 3: Deployment + LB Direct Connection to Pods, a public CLB is created by default.
To provide NGINX Ingress for private network access, reconfigure the YAML file: add the annotation service.kubernetes.io/qcloud-loadbalancer-internal-subnetid to the service of the NGINX Ingress controller, with the subnet ID of the private CLB as its value. See the following code:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxxx # Replace the value with the ID of a subnet in the VPC where the cluster is located.
  labels:
    app: nginx-ingress
    component: controller
  name: nginx-ingress-controller
In Solution 1: Deployment + LB and Solution 3: Deployment + LB Direct Connection to Pods, a CLB is automatically created by default. The traffic entry address of NGINX Ingress depends on the IP address of the created CLB. If your businesses depend on the entry address, bind NGINX Ingress to an existing CLB.
To bind NGINX Ingress to an existing CLB, reconfigure the YAML file: add the annotation service.kubernetes.io/tke-existed-lbid to the service of the NGINX Ingress controller, with the CLB ID as its value. See the following code:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx # Replace the value with the CLB ID.
  labels:
    app: nginx-ingress
    component: controller
  name: nginx-ingress-controller
Tencent Cloud accounts are classified into bill-by-IP accounts and bill-by-CVM accounts.
To check your account type, see Checking Account Type.
If you deploy NGINX Ingress on TKE yourself, you need to use NGINX Ingress to manage ingress resources and cannot create ingresses in the TKE console. In this case, you can use the YAML file to create ingresses and specify an ingress class annotation for each ingress. See the following code:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx # This is the key.
spec:
  rules:
  - host: "*"
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-v1
          servicePort: 80
When NGINX Ingress is installed using the method in How can I create an ingress?, metrics ports are exposed and metrics can be collected using Prometheus. If a cluster has prometheus-operator installed, ServiceMonitor can be used to collect NGINX Ingress monitoring data. See the following code:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
  labels:
    app: nginx-ingress
    component: controller
spec:
  endpoints:
  - port: metrics
    interval: 10s
  namespaceSelector:
    matchNames:
    - nginx-ingress
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
For more information about the native Prometheus configuration, see the following code:
- job_name: nginx-ingress
  scrape_interval: 5s
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - nginx-ingress
  relabel_configs:
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_app
    - __meta_kubernetes_service_label_component
    regex: nginx-ingress;controller
  - action: keep
    source_labels:
    - __meta_kubernetes_endpoint_port_name
    regex: metrics
After monitoring data is collected, you can import the dashboards provided by the NGINX Ingress community into Grafana to display the data: copy the dashboard JSON file and import it into Grafana.
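Besides importing through the Grafana UI, dashboards can be pushed with Grafana's HTTP API. The sketch below is a hypothetical example: it wraps a dashboard definition in the payload shape the `/api/dashboards/db` endpoint expects, with `GRAFANA_URL` and `GRAFANA_API_KEY` as placeholders for your environment, and a stub dashboard standing in for the real nginx.json content.

```shell
# Build the import payload: the community dashboard JSON goes under the
# "dashboard" key (a stub title is used here in place of nginx.json).
cat > payload.json <<'EOF'
{
  "dashboard": { "title": "NGINX Ingress controller", "panels": [] },
  "overwrite": true
}
EOF

# Then POST it to Grafana (uncomment and fill in your own URL and API key):
# curl -s -X POST "$GRAFANA_URL/api/dashboards/db" \
#   -H "Authorization: Bearer $GRAFANA_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d @payload.json
```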
nginx.json displays the common monitoring dashboards of NGINX Ingress, as shown below.
request-handling-performance.json displays performance monitoring dashboards of NGINX Ingress, as shown below.