Creating a Node Pool

Last updated: 2020-10-28 16:53:46

    Introduction

    This document describes how to create a node pool in a cluster via the TKE console and covers node pool-related operations, such as viewing, managing, and deleting a node pool.

    Prerequisites

    • You have applied to beta test the node pool feature. To apply, Submit a Ticket.
    • You have created a cluster. For more information, see Creating a Cluster.

    Notes

    TKE allows you to convert an existing scaling group in a cluster to a node pool. If a scaling group has already been created in the cluster, you can create a node pool from it as follows:

    1. Log in to the Tencent Kubernetes Engine console and click Cluster in the left sidebar.
    2. On the “Cluster Management” page, click the desired cluster ID to open the “Deployment” page.
    3. In the left sidebar, choose Node Management -> Scaling group to open the “Scaling group list” page.
    4. In the Action column of the desired scaling group, choose More -> Create Node Pool. In the window that appears, click OK.

      After the node pool is created, you can view its details. For more information, see Viewing a Node Pool. Note that after the conversion, the original scaling group can no longer be viewed.
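
      As a quick check after the conversion, you can list the nodes that now belong to the node pool from the command line. This is a minimal sketch, assuming you have kubectl access to the cluster and that TKE labels node pool members with a node pool ID label; the label key tke.cloud.tencent.com/nodepool-id used below is an assumption, so confirm the actual key with --show-labels first.

        # Show all nodes with their labels to find the node pool label key and value.
        kubectl get nodes --show-labels

        # Filter the nodes of one node pool (replace the label key and value with the
        # ones shown for your cluster and node pool).
        kubectl get nodes -l tke.cloud.tencent.com/nodepool-id=<nodepool-id>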

    Directions

    1. On the “Cluster Management” page, click the desired cluster ID to open the “Deployment” page.

    2. In the left sidebar, choose Node Management -> Node pool to open the “Node pool list” page.

    3. Click Create Node Pool to open the “Create Node Pool” page. Specify the configurations according to the following descriptions.

      • Node pool name: you can customize the name of the node pool based on service requirements to facilitate subsequent resource management.

      • Billing Mode: valid values include Pay-as-you-go and Spot. You can select the value as required. For more information, see Payment Modes.

      • Supported network: the system provides IP addresses within the address range of the node network for servers in the cluster.

        This configuration item is specified at the cluster level and therefore cannot be modified after configuration.

      • Model Settings: click Select the model. On the “Model Settings” page, select the values as required according to the following descriptions:

        • Availability Zone: launch configurations do not contain availability zone information. This option is only used to filter available instance types in the availability zone.

        • Model: you can select the model by specifying the number of CPU cores, memory size, and instance type. For more information, see Instance Types.

        • System disk: stores the operating system that controls and schedules the operations of Cloud Virtual Machines (CVMs). You can view the system disk types available for the selected model and select one as required. For more information, see Cloud Disk Types.

        • Data disk: stores user data. Data disk settings vary by model; specify the values according to the following descriptions:

          • Standard, Memory Optimized, Computing, and GPU: no data disk option is selected by default. If you select one, you must specify the cloud disk settings and formatting settings.
          • High I/O and Big Data: the data disk options are selected by default and cannot be cleared. You can customize the formatting settings for the default local disks.
          • Batch-based: the data disk option is selected by default, but can be cleared. If it is selected, you can purchase only default local disks, and you can customize their formatting settings.
        • Add Data Disk (Optional): click Add Data Disk and specify the settings according to the preceding descriptions.

        • Public Bandwidth: Assign free public IP is selected by default. The system assigns a free public IP address. You can select Bill By Traffic or Bill by Bandwidth for the billing mode as required and customize the network speed. For more information, see Public Network Billing.

      • Login Methods: you can select any one of the following login methods as required:

        • SSH Key Pair: a key pair is a pair of parameters generated by using an algorithm. Using a key pair to log in to a CVM instance is more secure than using regular passwords. For more information, see SSH Key.
        • SSH Key: this configuration item is available only when SSH Key Pair is selected. You can select an existing key from the drop-down list. If you need to create an SSH key pair, see Creating an SSH Key Pair.
        • Random Password: the system sends an automatically generated password to you via an Internal Message.
        • Custom Password: set a password as prompted.
      • Security Groups: the default value is the security group specified when the cluster is created. You can replace the security group or add a security group as required.

      • Quantity: the desired capacity. You can specify this value as required.

        If auto scaling has been enabled for the node pool, this quantity will be automatically adjusted according to the loads of the cluster.

      • Node Quantity Range: the number of nodes is automatically adjusted within the specified range and will not exceed it.

      • Supported subnets: select an available subnet as required.

    4. (Optional) Click More Settings to view or configure more information.

      • CAM role: binds the same CAM role to all nodes in the node pool, granting the authorization policy of this role to the nodes. For more information, please see Managing instance role.
      • Container directory: select this option to specify the directories for storing containers and images, for example, /var/lib/docker. We recommend using a directory on a data disk.
      • Security Reinforcement: DDoS Protection, Web Application Firewall (WAF), and Cloud Workload Protection are activated by default. For more information, see Cloud Workload Protection.
      • Cloud Monitor: Tencent Cloud service monitoring, analysis, and alarms are activated by default, and components are installed to obtain CVM monitoring metrics. For more information, see Cloud Monitor.
      • Auto Scaling: Enable is selected by default.
      • Cordon Initial Node: if Cordon this node is selected, no new Pod can be scheduled to this node. To resume scheduling, manually uncordon the node or run the uncordon command in custom data (see the sketch after these directions).
      • Label: click New Label and customize the label settings. The labels specified here are automatically added to nodes created in the node pool, which helps you filter and manage nodes by label.
      • Taints: taints are node attributes and are usually used together with Tolerations. Taints set here are applied to all nodes in the node pool, so that Pods that do not tolerate them are not scheduled to these nodes and, depending on the effect, are evicted from them.

        A taint consists of key, value, and effect. Valid values of effect:

        • PreferNoSchedule: a soft constraint. The scheduler tries not to place a Pod that does not tolerate the taint on the node, but this is not guaranteed.
        • NoSchedule: a Pod that does not tolerate the taint is never scheduled to the node.
        • NoExecute: a Pod that does not tolerate the taint is not scheduled to the node, and any such Pods already running on the node are evicted.
        For example, you can set Taints to key1=value1:PreferNoSchedule in the TKE console (the equivalent kubectl command is sketched after these directions).
      • Retry Policy: select one of the following policies as required.
        • Try again: retry immediately. The system stops retrying after failing five times in a row.
        • Retry with incremental intervals: the retry interval increases with the number of consecutive failures, ranging from seconds to one day.
      • Scaling Mode: select one of the following two scaling modes as required.
        • Release Mode: if this mode is selected, the system automatically releases idle nodes (as determined by Cluster Autoscaler) during scale-in, and automatically creates and adds nodes to the scaling group during scale-out.
        • Shutdown Mode: if this mode is selected, during scale-out, the system preferentially starts nodes that have been shut down; if the number of nodes still falls short, it creates the required number of new nodes. During scale-in, the system shuts down idle nodes. Nodes that support the No Charges When Shut Down feature are not billed while shut down; other nodes are still billed. For more information, see No Charges When Shut down for Pay-as-You-Go Instances Details.
      • Custom data: specifies custom data used to configure the node; the configured script runs when the node starts. Make sure the script is reentrant and includes retry logic. The script and the log file it generates are stored in the /usr/local/qcloud/tke/userscript directory on the node (a minimal sketch of such a script follows these directions).
    5. Click Create Node Pool to create the node pool.
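
    The sketches below expand on three of the settings above. They are not part of the console workflow: node names and package names are placeholders, and the node pool applies taints and cordons automatically, so the kubectl commands are only for verification or ad hoc testing.

    For the Taints setting (for example, key1=value1:PreferNoSchedule), the equivalent kubectl operations are:

      # Reproduce the node pool's taint on a single node (testing only).
      kubectl taint nodes <node-name> key1=value1:PreferNoSchedule

      # Verify the taints currently set on the node.
      kubectl describe node <node-name> | grep -A 3 Taints

      # Remove the taint again (the trailing "-" removes a taint).
      kubectl taint nodes <node-name> key1=value1:PreferNoSchedule-

    For Cordon Initial Node, cordoning and uncordoning map to the standard kubectl commands:

      # Mark the node unschedulable (what "Cordon this node" does at creation).
      kubectl cordon <node-name>

      # Allow new Pods to be scheduled to the node again.
      kubectl uncordon <node-name>

    For Custom data, "reentrant" simply means the script must be safe to run more than once. The sketch below assumes a Bash environment on the node; the marker file path and the installed package are illustrative only.

      #!/bin/bash
      # Illustrative custom-data script: idempotent, with simple retry logic.
      set -euo pipefail

      MARKER=/var/run/custom-userscript.done      # illustrative marker path
      [ -f "$MARKER" ] && exit 0                  # already ran successfully: do nothing

      for i in 1 2 3; do
          if yum install -y nfs-utils; then       # replace with your own setup steps
              touch "$MARKER"
              exit 0
          fi
          sleep $((10 * i))                       # back off before retrying
      done
      exit 1

    After the node starts, you can check the script and its log under /usr/local/qcloud/tke/userscript on the node, as noted above.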

    Relevant Operations

    After a node pool is created, you can manage it by referring to the relevant node pool documents.
