Using Services with CLB-to-Pod Direct Access Mode

Last updated: 2021-10-13 16:23:24

    Overview

    For a Service in native LoadBalancer mode, a Cloud Load Balancer (CLB) can be created automatically. Traffic first reaches the cluster through a NodePort and is then forwarded again by iptables or IPVS. Services in this mode meet users' needs in most scenarios, but services in CLB-to-Pod direct access mode are recommended in the following scenarios:

    • The client source IP needs to be obtained (in non-direct access mode, Local forwarding, i.e. externalTrafficPolicy: Local, must be enabled to preserve it).
    • Higher forwarding performance is required (in non-direct access mode, traffic passes through two forwarding layers: from the CLB to a NodePort, and then from the NodePort to the Pod, so some performance loss is inevitable).
    • Complete health checks and session persistence are required at the Pod layer (with two forwarding layers in non-direct access mode, health checks and session persistence are difficult to configure for individual Pods).
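    For reference, preserving the client source IP in non-direct access mode relies on the standard Kubernetes Local forwarding setting, sketched below (the field names are upstream Kubernetes; the Service name and label are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service        # hypothetical name
spec:
  type: LoadBalancer
  # Route external traffic only to Pods on the receiving node, which
  # preserves the client source IP (upstream Kubernetes behavior).
  externalTrafficPolicy: Local
  selector:
    app: example-app           # hypothetical label
  ports:
    - port: 80
      targetPort: 80
```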
    Note:

    Currently, the CLB-to-Pod direct access mode is available for both GlobalRouter and VPC-CNI container network modes. Click the cluster ID in the cluster list to go to the cluster details page. On the Basic Information page, you can find the container network add-on used by the current cluster.

    VPC-CNI Mode

    Use limits

    • The Kubernetes version of the cluster must be 1.12 or later.
    • The VPC-CNI ENI mode must be enabled for the cluster network mode.
    • The workloads used by a service in direct access mode must adopt the VPC-CNI ENI mode.
    • Up to 200 workload replicas can be bound to the CLB backend by default. If you need to bind more replicas, please submit a ticket to increase the quota.
    • The feature limits of a CLB bound to an ENI must be satisfied. For more information, please see Binding an ENI.
    • When workloads in CLB-to-Pod direct access mode are updated, a rolling update is performed based on the health check status of the CLB, which will affect the update speed.
    • HostNetwork type workloads are not supported.

    Directions

    1. Log in to the TKE console.
    2. Refer to the steps in Creating a Service in the Console to open the "Create a Service" page, and set the service parameters as required.
      Some key parameters need to be set as follows:
      • Service Access Method: select Public Network CLB Access or Private Network CLB Access.
      • Network Mode: select Enable CLB-to-Pod Direct Access.
      • Workload Binding: select Reference Workload. In the displayed window, select the backend workload of the VPC-CNI mode.
    3. Click Create Service.
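    For reference, a Service created through the console in this mode corresponds roughly to a manifest like the following. The annotation key shown is assumed from TKE's Service CLB configuration and should be verified against your TKE version; the names are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service             # hypothetical name
  annotations:
    # Assumed annotation for enabling CLB-to-Pod direct access;
    # verify the key against your TKE version and console output.
    service.cloud.tencent.com/direct-access: "true"
spec:
  type: LoadBalancer           # public or private network CLB access
  selector:
    app: my-app                # must match a VPC-CNI ENI mode workload
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```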

    Notes

    Ensuring the availability during rolling update

    ReadinessGate is a feature provided by upstream Kubernetes for controlling Pod status, and it requires a cluster version later than 1.12. By default, a Pod has the conditions PodScheduled, Initialized, and ContainersReady, and the Pod is considered Ready only when all of them are True. In cloud-native scenarios, however, Pod status needs to be judged together with other factors. ReadinessGate provides a mechanism for adding an extra gate to the Pod readiness judgment: the gate condition is set and controlled by a third party, so the Pod's status becomes associated with that third party.
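    A minimal sketch of the upstream Kubernetes readinessGates field (the conditionType shown is a placeholder from the Kubernetes documentation; TKE's access layer component registers its own condition type):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # hypothetical name
spec:
  # Each entry adds an extra condition that must be set to True by a
  # third-party controller before the Pod can become Ready.
  readinessGates:
    - conditionType: "www.example.com/feature-1"   # placeholder type
  containers:
    - name: app
      image: nginx             # placeholder image
```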

    Changes in the rolling update of CLB-to-Pod direct access mode

    When a user starts the rolling update of an application, Kubernetes performs the rolling update according to the update policy. However, when judging whether a batch of Pods has started, it considers only the status of the Pods themselves, not whether the Pods have been configured with a health check on the CLB and have passed it. If the access layer component is under high load and cannot bind these Pods to the CLB in time, Pods that have completed the rolling update may not actually be serving external traffic yet, resulting in service interruption.
    To associate the CLB backend status with the rolling update, TKE's access layer component leverages ReadinessGate, a feature introduced in Kubernetes 1.12. Only after the access layer component confirms that the backend has been bound successfully and the health check has passed does it set the ReadinessGate condition, allowing the Pod to reach the Ready state so that the rolling update of the entire workload can proceed.
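    During such a rolling update, the Pod's status conditions might look like the following illustrative sketch before the access layer component sets the gate (the gate condition type is a placeholder, not TKE's actual type):

```yaml
# Pod .status.conditions during a rolling update (illustrative values).
# The Pod stays NotReady until the controller sets the gate condition
# to "True" after the CLB binding and health check succeed.
status:
  conditions:
    - type: Ready                          # gated on the entry below
      status: "False"
    - type: "www.example.com/feature-1"    # placeholder gate type
      status: "False"
    - type: ContainersReady
      status: "True"
```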

    Using ReadinessGate in a cluster

    Kubernetes clusters provide an admission webhook mechanism: you register your service with the cluster as a MutatingWebhookConfiguration resource, and when a Pod is created, the API server sends a callback to the configured path. At this point, a pre-creation operation can be performed on the Pod, namely adding the ReadinessGate to its spec. This callback must use HTTPS: the CA bundle used to verify the webhook server must be configured in the MutatingWebhookConfiguration, and the server must present a certificate issued by that CA.
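    A minimal sketch of such a registration, with hypothetical names throughout (the actual resource installed by TKE's access layer component will differ):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: readiness-gate-injector          # hypothetical name
webhooks:
  - name: pod.injector.example.com       # hypothetical name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]           # callback on Pod creation
        resources: ["pods"]
    clientConfig:
      service:
        namespace: kube-system
        name: injector-service           # hypothetical webhook server
        path: /mutate
      # CA that issued the webhook server's certificate (base64-encoded).
      caBundle: <base64-encoded CA certificate>
```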

    Disaster recovery of the ReadinessGate mechanism

    Although system component resources such as the webhook registration and certificates should not be modified or deleted by users, such problems inevitably occur due to users' exploration of the cluster or misoperations. Therefore, the access layer component checks the integrity of these resources when it starts and rebuilds them if the integrity is damaged, strengthening the robustness of the system. For more information, please see Pod readiness.

    GlobalRouter Mode

    Use limits

    • A workload can only run in one network mode. You can choose VPC-CNI ENI mode or GlobalRouter mode for the workloads used by a service in direct access mode.
    • It is only available for the bill-by-IP accounts.
    • Up to 200 workload replicas can be bound to the CLB backend by default. If you need to bind more replicas, please submit a ticket to increase the quota.
    • When the CLB-to-Pod direct access mode is used, network connectivity is restricted by the CVM security group. Confirm that the security group opens the corresponding protocol and port; in particular, the port used by the workload on the CVM must be opened.
    • After the CLB-to-Pod direct access mode is enabled, the ReadinessGate (readiness check) will be enabled by default. It will check whether the traffic from the load balancer is normal during the rolling update of Pod. You also need to configure the correct health check configuration for the application. For details, please see Service CLB Configuration.
    • The CLB-to-Pod direct access in GlobalRouter mode is in beta test. You can use it in either of the following two ways:
      • Via CCN (recommended). CCN can verify the bound IP address, preventing common IP binding problems such as binding errors and address loopback. The instructions are as follows:
        1. Create a CCN instance. For more information, please see Creating a CCN Instance.
        2. Add the VPC where the cluster is located to the created CCN instance.
        3. Register the container network CIDR block of the relevant cluster to the CCN: on the cluster's Basic Information page, enable the CCN.
      • Via a ticket (not recommended). You can submit a ticket to apply for the feature; in this method, the bound IP address is not verified.

    Directions

    1. Log in to the TKE console.
    2. Refer to the steps in Creating a Service in the Console to open the "Create a Service" page, and set the service parameters as required.
      Some key parameters need to be set as follows:
      • Service Access Method: select Public Network CLB Access or Private Network CLB Access.
      • Network Mode: select Enable CLB-to-Pod Direct Access.
      • Workload Binding: select Reference Workload. In the displayed window, select the backend workload.
    3. Click Create Service.