This document lists frequently asked questions about CLB on EKS, and describes the causes of and solutions to common Service/Ingress CLB issues.
Note:
There are many ways to manage K8s cluster resources. This document describes how to manage them through the Tencent Cloud console and the kubectl command line tool.
EKS will create a CLB instance for an Ingress that meets the following conditions:
Requirements for Ingress resources | Notes |
---|---|
annotations contains the following key-value pair: kubernetes.io/ingress.class: qcloud | If you do not want EKS to create a CLB instance for an Ingress (for example, because you want to use nginx-ingress instead), just make sure this key-value pair is not included in annotations. |
If EKS has successfully created a CLB instance for the Ingress, it will write the VIP of the CLB instance into the status.loadBalancer.ingress
field of the Ingress resource, and write the following key-value pair into annotations.
kubernetes.io/ingress.qcloud-loadbalance-id: CLB instance ID
To view the CLB instance created by EKS for Ingress:
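For example, a minimal sketch of checking this with kubectl (the Ingress name test-ingress and namespace default are placeholders):

```bash
# Read the CLB instance ID that EKS wrote into the Ingress annotations
kubectl get ingress test-ingress -n default \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/ingress\.qcloud-loadbalance-id}'

# The VIP appears in the status of the Ingress resource
kubectl get ingress test-ingress -n default -o yaml | grep -A 3 "loadBalancer"
```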
EKS will create a CLB instance for a Service that meets the following conditions:
K8s Version | Requirements for Service resources |
---|---|
All K8s versions supported by EKS | spec.type is LoadBalancer. |
The modified version of K8s (the Server GitVersion returned by kubectl version has an "eks.*" or "tke.*" suffix). | spec.type is ClusterIP, and the value of spec.clusterIP is not None (that is, a non-Headless ClusterIP Service). |
The non-modified version of K8s (the Server GitVersion returned by kubectl version does not have an "eks.*" or "tke.*" suffix). | spec.type is ClusterIP, and spec.clusterIP is specified as an empty string (""). |
Note:
If the CLB instance is successfully created, EKS will write the following key-value pair into the Service annotations:
service.kubernetes.io/loadbalance-id: CLB instance ID
If EKS has successfully created a CLB instance for the Service, it will write the VIP of the CLB instance into the status.loadBalancer.ingress
field of the Service resource, and write the following key-value pair into annotations.
service.kubernetes.io/loadbalance-id: CLB instance ID
To view the CLB instance created by EKS for Service:
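For example, a minimal sketch of checking this with kubectl (the Service name test-service and namespace default are placeholders):

```bash
# Read the CLB instance ID that EKS wrote into the Service annotations
kubectl get service test-service -n default \
  -o jsonpath='{.metadata.annotations.service\.kubernetes\.io/loadbalance-id}'

# For a LoadBalancer Service, the VIP also shows up in the EXTERNAL-IP column
kubectl get service test-service -n default -o wide
```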
For a Service whose spec.type is LoadBalancer, EKS currently does not allocate a ClusterIP by default, or the allocated ClusterIP is invalid (it cannot be accessed normally). If you need to access the Service through a ClusterIP, add the following key-value pair to annotations to indicate that EKS should implement the ClusterIP based on a private network CLB.
service.kubernetes.io/qcloud-clusterip-loadbalancer-subnetid: Service CIDR subnet ID
The Service CIDR subnet ID is specified when you create the cluster, and is a string in subnet-********
format. You can view the subnet ID on the basic information page of the cluster.
Note:
Only EKS clusters that use the modified version of K8s (the Server GitVersion returned by kubectl version has an "eks.*" or "tke.*" suffix) support this feature. For EKS clusters created earlier that use the non-modified version of K8s (the Server GitVersion returned by kubectl version does not have an "eks.*" or "tke.*" suffix), you need to upgrade the K8s version to use this feature.
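A minimal sketch of such a Service (the Service name, selector, and subnet ID below are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service            # placeholder name
  annotations:
    # Ask EKS to implement the ClusterIP based on a private network CLB;
    # replace the value with your cluster's Service CIDR subnet ID.
    service.kubernetes.io/qcloud-clusterip-loadbalancer-subnetid: subnet-xxxxxxxx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```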
You can specify the CLB instance type via the TKE console or the kubectl command line tool.
For an Ingress, select Public Network or Private Network for Network Type to specify the CLB instance type.
For a Service, set Service Access to specify the CLB instance type; Via VPC means a private network CLB instance is used.
Resource Type | Add the following key-value pairs in the annotation |
---|---|
Service | service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet ID |
Ingress | kubernetes.io/ingress.subnetId: subnet ID |
Note:
The subnet ID is a string in the form of subnet-****, and the subnet must be in the VPC specified for Cluster Network when the cluster was created. You can find the VPC information in the Basic Information page of the cluster in the TKE console.
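For example, a sketch of a Service that requests a private network CLB in a given subnet (names and IDs are placeholders); an Ingress works the same way, using the kubernetes.io/ingress.subnetId annotation instead:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service            # placeholder name
  annotations:
    # Create the CLB instance in this subnet (private network CLB);
    # the subnet must belong to the cluster's VPC.
    service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxxxxx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```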
You can specify an existing CLB instance via the TKE console or the kubectl command line tool.
When creating a Service or Ingress, you can select Use Existing to use an existing CLB instance. For a Service, you can also switch to Use Existing through Update Access Method after the Service is created.
When creating a Service/Ingress or modifying a Service, add the corresponding annotation to the Service or Ingress.
Resource Type | Add the following key-value pairs in the annotation |
---|---|
Service | service.kubernetes.io/tke-existed-lbid: CLB instance ID |
Ingress | kubernetes.io/ingress.existLbId: CLB instance ID |
Note:
The existing CLB instance cannot be one that EKS created for a Service or Ingress, and EKS does not support multiple Services/Ingresses sharing the same existing CLB instance.
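A minimal sketch for a Service (the Service name, selector, and CLB instance ID are placeholders); for an Ingress, use the kubernetes.io/ingress.existLbId annotation instead:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service            # placeholder name
  annotations:
    # Reuse an existing CLB instance instead of creating a new one;
    # it must not be an instance that EKS created for another Service/Ingress.
    service.kubernetes.io/tke-existed-lbid: lb-xxxxxxxx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```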
Only layer-7 CLB instances support configuring access logs, and the access log of a layer-7 CLB instance created by EKS for an Ingress is not enabled by default. You can enable the access log on the details page of the CLB instance, as shown below:
Please refer to Which Ingress can EKS create a CLB instance for? and Which Service can EKS create a CLB instance for? to confirm whether the corresponding resource meets the conditions for creating a CLB instance. If the conditions are met but the CLB instance is not created successfully, you can run the kubectl describe
command to view the related events of the resource.
Generally, EKS outputs related Warning events. In the following example, the output event indicates that there are no available IP resources in the subnet, so the CLB instance cannot be created.
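A sketch of checking events this way (the resource name and namespace are placeholders, and the event text in the comments is illustrative rather than verbatim EKS output):

```bash
# Look for Warning events on the resource that should own the CLB instance
kubectl describe ingress test-ingress -n default

# Illustrative event section, not verbatim output:
# Events:
#   Type     Reason        Message
#   ----     ------        -------
#   Warning  CreateFailed  Failed to create load balancer: no available IP in subnet-xxxxxxxx
```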
Please follow the steps below to analyze:
Note:
LoadBalancer systems commonly have loopback problems (see, for example, [Troubleshoot Azure Load Balancer](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-troubleshoot#cause-3-accessing-the-load-balancer-from-the-same-vm-and-network-interface)). Do not access the services provided by a workload through its own VIP (exposed via Service or Ingress) from the Pods that belong to that workload. That is, Pods should not access the services they provide through the VIP (whether private network or public network); otherwise, the access latency may increase, or access may be blocked entirely when there is only one RS/Pod under the rules corresponding to the VIP.
On the CLB management page, select the Listener Management tab to view the forwarding rules (layer-7 protocols) and the bound backend services (layer-4 protocols). The IP addresses are expected to be the IPs of the Pods. An example is as follows:
If you have correctly set the labels of the workload and the selectors of the Service resource, then after the Pods of the workload are running, you can run the kubectl get endpoints
command to confirm that K8s has added the Pods to the ready IP list of the Endpoints corresponding to the Service.
Pods that are created but in an abnormal state are added by K8s to the unready IP list of the Endpoints corresponding to the Service. An example is as follows:
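A sketch of what this can look like (the Service name, namespace, and IPs are placeholders); note that the plain kubectl get endpoints output only lists ready addresses, so use -o yaml to see the unready list as well:

```bash
# Plain output shows only the ready addresses
kubectl get endpoints nginx-service -n default
# NAME            ENDPOINTS                       AGE
# nginx-service   172.16.0.10:80,172.16.0.11:80   5m

# Use -o yaml to also see abnormal Pods under subsets[].notReadyAddresses
kubectl get endpoints nginx-service -n default -o yaml
```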
Note:
You can run the kubectl describe command to view the cause of the abnormal Pods. The command is as follows:
kubectl describe pod nginx-7c7c647ff7-4b8n5 -n demo
Even Pods in the Running state may fail to provide services normally due to exceptions such as the specified protocol + port not being listened on, incorrect internal logic in the Pod, or a blocked process. You can run the kubectl exec
command to log in to the Pod, and then run the telnet/wget/curl
command or use a custom client tool to directly access the Pod IP + port. If direct access fails inside the Pod, you need to further analyze why the Pod cannot provide services normally.
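For example (the Pod name, namespace, IP, and port are placeholders):

```bash
# Open a shell inside the Pod
kubectl exec -it nginx-7c7c647ff7-4b8n5 -n demo -- /bin/sh

# Inside the Pod, probe the service directly on the Pod IP + port
curl -v http://172.16.0.10:80
```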
The security group controls the network access policy of the Pods, just like iptables rules on a Linux server. Check the following based on your actual situation:
The interactive creation process requires you to specify a security group, and EKS will use this security group to control the Pods' network access policy. The specified security group is stored in the spec.template.metadata.annotations
of the workload, and is finally added to the annotations of the Pods. An example is as follows:
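A sketch of what this can look like in a Deployment. The annotation key below is an assumption based on EKS naming conventions; verify the exact key by inspecting a workload that the console actually created, and the security group ID is a placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                    # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        # Assumed annotation key; check what the EKS console writes for your cluster.
        eks.tke.cloud.tencent.com/security-group-id: sg-xxxxxxxx
    spec:
      containers:
        - name: nginx
          image: nginx
```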
If you create a workload through the kubectl command line and do not specify a security group for the Pods (by adding the annotation), EKS will use the default security group of the default project in the same region under the account. The directions are as follows:
If the problem persists, please submit a ticket to contact us.