Native Node Scaling

Last updated: 2023-05-05 11:05:32

    Note

    The auto-scaling of a native node is implemented by Tencent Kubernetes Engine (TKE). The auto-scaling of a normal node relies on Auto Scaling (AS).
    If auto-scaling is not enabled for a native node pool:
    - The number of initialized nodes is specified by the Nodes parameter in the console, or the replicas parameter in the YAML configuration file.
    - You can manually adjust the number of nodes as needed. However, the node count is limited by the maximum number of nodes (500 by default) and by the number of available IP addresses in the container subnet.
    If auto-scaling is enabled for a native node pool:
    - The number of initialized nodes is specified by the Nodes parameter in the console, or the replicas parameter in the YAML configuration file.
    - You must specify the Number of Nodes parameter in the console, or the minReplicas and maxReplicas parameters in the YAML configuration file, to set the range for the number of nodes. Cluster Autoscaler (CA) adjusts the number of nodes in the node pool within this range (see the fragment after this note).
    - You cannot manually adjust the number of nodes.
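
    To make the relationship between these fields concrete, the fragment below is a minimal sketch (the values are placeholders, not recommendations) showing how replicas sets the initial node count while scaling.minReplicas and scaling.maxReplicas bound the range that CA works within when auto-scaling is enabled:
    apiVersion: node.tke.cloud.tencent.com/v1beta1
    kind: MachineSet
    spec:
      type: Native
      replicas: 3          # initial number of nodes (Nodes in the console)
      scaling:
        minReplicas: 1     # lower bound used by CA (Number of Nodes range in the console)
        maxReplicas: 10    # upper bound used by CA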

    Enabling the Auto-scaling Feature for Nodes

    Parameter description

    Auto Scaling
    Parameter: spec.scaling
    Description: The auto-scaling feature is enabled by default. If auto-scaling is enabled for a node pool, Cluster Autoscaler (CA) automatically scales the node pool in or out.

    Number of Nodes
    Parameters: spec.scaling.minReplicas and spec.scaling.maxReplicas
    Valid values: customizable
    Description: The number of nodes in the node pool cannot exceed this range. If auto-scaling is enabled for the node pool, the number of native nodes is automatically adjusted within this range.

    Scaling policy
    Parameter: spec.scaling.createPolicy
    Values: Zone priority in the console (ZonePriority in the YAML configuration file), or Zone equality in the console (ZoneEquality in the YAML configuration file).
    Description: With Zone priority, scaling is performed in the preferred zone first; if the preferred zone cannot be scaled, other zones are used. With Zone equality, node instances are distributed evenly among the zones (subnets) specified in the scaling group; this policy takes effect only if multiple subnets are configured (see the sketch after this table).
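
    To illustrate the scaling policy parameter, the following is a hedged sketch (subnet IDs and values are placeholders) of a scaling configuration that uses ZoneEquality; because this policy spreads nodes evenly across zones, the node pool lists more than one subnet:
    spec:
      subnetIDs:               # ZoneEquality takes effect only with multiple subnets
      - subnet-xxxxxxxx        # subnet in one zone (placeholder)
      - subnet-yyyyyyyy        # subnet in another zone (placeholder)
      scaling:
        createPolicy: ZoneEquality
        minReplicas: 2
        maxReplicas: 20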

    Enabling the feature in the TKE console

    Method 1: Enabling auto-scaling on the node pool creation page

    1. Log in to the TKE console and create a node pool in the cluster. For more information, see Creating Native Nodes.
    2. On the Create node pool page, select Enable for Auto-scaling.

    Method 2: Enabling auto-scaling on the details page of a node pool

    1. Log in to the TKE console and select Cluster in the left sidebar.
    2. On the cluster list page, click the ID of the target cluster to go to the details page.
    3. Choose Node management > Node pool in the left sidebar to go to the Node pool list page.
    4. Click the ID of the target node pool to go to the details page of the node pool.
    5. On the node pool details page, click Edit on the right of the Ops information section.
    6. Select Enable for Auto-scaling and click OK.

    Enabling the feature by using YAML

    Specify the scaling parameter in the YAML configuration file for a node pool.
    apiVersion: node.tke.cloud.tencent.com/v1beta1
    kind: MachineSet
    spec:
      type: Native
      displayName: mstest
      replicas: 2
      autoRepair: true
      deletePolicy: Random
      healthCheckPolicyName: test-all
      instanceTypes:
      - C3.LARGE8
      subnetIDs:
      - subnet-xxxxxxxx
      - subnet-yyyyyyyy
      scaling:
        createPolicy: ZonePriority
        minReplicas: 10
        maxReplicas: 100
      template:
        spec:
          displayName: mtest
          runtimeRootDir: /var/lib/containerd
          unschedulable: false
          ......
    
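    In the example above, replicas sets the initial node count, scaling.minReplicas and scaling.maxReplicas define the range within which CA adjusts the node pool, and createPolicy: ZonePriority scales the preferred zone first before falling back to the other subnets listed under subnetIDs, as described in the parameter table above.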

    Viewing the scaling records

    1. Log in to the TKE console and select Cluster in the left sidebar.
    2. On the cluster list page, click the ID of the target cluster to go to the details page.
    3. Choose Node management > Node pool in the left sidebar to go to the Node pool list page.
    4. Click the ID of the target node pool to go to the details page of the node pool.
    5. View the scaling records on the Ops records page.