Notes on Pod Scheduled to Supernodes

Last updated: 2022-06-23 15:53:27

    Billing Mode

    Pods scheduled to supernodes support two billing modes: pay-as-you-go and spot. For details, see Billing Overview, Product Pricing, and Purchase Limits.

    Kubernetes Version

    Supernodes are supported in clusters of Kubernetes v1.16 and later.

    Default Quota

    By default, each cluster can schedule up to 100 Pods to supernodes. If the number of required Pods exceeds this quota, you can submit a ticket to apply for a higher quota. Tencent Cloud will assess your actual needs and increase the quota as appropriate.

    Applying for a higher quota

    1. Submit a ticket. On the Submit a ticket page, select the product name and the issue type (Others), and then complete the ticket information.
    2. In the Problem description field, enter a description such as "I want to apply for a higher Pod quota for the cluster supernode", along with the region where your cluster is located and the desired quota. Then enter your mobile number and other information as instructed.
    3. Click to contact customer service.

    Pod Configurations

    Pod specification configuration

    The Pod specification determines the resources and services available while the container is running, and serves as the basis for billing. For the resource specifications of supernode Pods and how to specify them, see Resource Specifications and Specifying Resource Specifications.

    Pod temporary storage

    Each Pod scheduled to a supernode is allocated up to 20 GiB of temporary image storage when it is created.

    Note:

    • Temporary image storage is deleted when the Pod lifecycle ends, so do not store important data in it.
    • The actual available storage is less than 20 GiB because container images are stored there.
    • We recommend mounting important data and large files to a Volume for persistent storage.
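    As a minimal sketch (the names `demo`, `data-pvc`, and the mount path are hypothetical), persisting data to a Volume instead of the temporary image storage looks like this:

```yaml
# Hypothetical Pod that writes important data to a PersistentVolumeClaim
# instead of the Pod's temporary image storage (up to 20 GiB).
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /app/data   # data written here survives the Pod lifecycle
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc      # hypothetical, pre-created PVC
```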

    Pod network

    Pods scheduled to supernodes are on the same VPC network plane as Tencent Cloud services such as CVM and TencentDB. Each Pod occupies an IP address in the VPC subnet.

    A Pod can connect to other Pods or Tencent Cloud services in the same VPC without performance loss.

    Pod isolation

    Pods scheduled to supernodes have the same security isolation as CVM instances. Pods are created on Tencent Cloud's underlying physical servers, and virtualization technology guarantees resource isolation between Pods.

    Other special configurations

    You can define annotations in the Pod template of a YAML file to implement capabilities such as binding security groups, allocating resources, and allocating EIPs for Pods scheduled to supernodes. For the configuration methods, see the following table:

    Note:

    • If no security group is specified, the Pod is bound to the security group specified for the node pool by default. Make sure the network policy of the security group does not affect the normal operation of the Pod. For example, port 80 must be opened if the Pods provide services via port 80.
    • To allocate CPU resources, you must specify both the cpu and mem annotations and make sure their values meet the CPU specifications in Resource Specifications. You can also choose between Intel and AMD CPUs by specifying cpu-type. AMD CPUs are more cost-effective. For more information, see Product Pricing.
    • To allocate GPU resources through annotations, you must specify both the gpu-type and gpu-count annotations and make sure their values meet the GPU specifications in Resource Specifications.
    eks.tke.cloud.tencent.com/security-group-id
    Description: The default security group bound to the workload. Specify the security group ID.
    • You can specify multiple security group IDs, separated by commas (,), for example, sg-id1,sg-id2.
    • Network policies take effect in the order of the security groups.
    Required: No. Make sure the security group ID exists in the region of the workload. If it is not specified, the workload is bound to the security group specified in the node pool by default.

    eks.tke.cloud.tencent.com/cpu
    Description: Number of CPU cores required by a Pod. See Resource Specifications.
    Required: No. Make sure the entered specification is supported and that both cpu and mem are specified.

    eks.tke.cloud.tencent.com/mem
    Description: Memory required by a Pod. See Resource Specifications. The unit must be included in the value, for example, `512Mi`, `0.5Gi`, and `1Gi`.
    Required: No. Make sure the entered specification is supported and that both cpu and mem are specified.

    eks.tke.cloud.tencent.com/cpu-type
    Description: Model of the CPU resources required by a Pod. The supported values include:
    • intel
    • amd
    • A specific model, such as `S4` or `S3`. See Resource Specifications.
    Required: No. If it is not specified, the system automatically chooses the best-suited specification; see Specifying Resource Specifications. If the matched specifications are supported by both Intel and AMD, Intel CPUs are preferred.

    eks.tke.cloud.tencent.com/gpu-type
    Description: Model of the GPU resources required by a Pod. The supported models include:
    • V100
    • 1/4*T4
    • 1/2*T4
    • T4
    You can specify models by priority. For example, "T4,V100" means T4 resource Pods are created first; if T4 resources in the selected region are insufficient, V100 resource Pods are created instead. For more information, see Resource Specifications.
    Required: Yes, if GPUs are required. Make sure the specified GPU model is supported; otherwise, an error is reported.

    eks.tke.cloud.tencent.com/gpu-count
    Description: Number of GPU cards required by a Pod. For more information, see Resource Specifications.
    Required: No. Make sure the entered specification is supported.

    eks.tke.cloud.tencent.com/retain-ip
    Description: Whether to retain the IP after the Pod is deleted. "true": retain the IP after the Pod is deleted (for 24 hours by default). If the Pod is rebuilt within the retention period, the IP can be retrieved. It is only valid for `statefulset` and `rawpod`.
    Required: No

    eks.tke.cloud.tencent.com/retain-ip-hours
    Description: Retention period of the Pod IP, in hours. It can be up to 8760 hours (one year). It is only valid for `statefulset` and `rawpod`.
    Required: No

    eks.tke.cloud.tencent.com/eip-attributes
    Description: Attributes of the EIP associated with the Pods of the workload. The value `""` indicates that the default EIP configuration is used. You can enter the EIP API parameters as JSON within the quotes to customize the configuration. For example, the annotation value '{"InternetMaxBandwidthOut":2}' sets the bandwidth to 2 Mbps. It is only applicable to bill-by-IP accounts.
    Required: No

    eks.tke.cloud.tencent.com/eip-claim-delete-policy
    Description: Whether to release the EIP when the Pod is deleted. `Never`: do not release. This parameter takes effect only when eks.tke.cloud.tencent.com/eip-attributes is specified. It is only applicable to bill-by-IP accounts.
    Required: No

    eks.tke.cloud.tencent.com/eip-id-list
    Description: For a StatefulSet workload, you can specify multiple existing EIPs (such as "eip-xx1,eip-xx2"). The number of StatefulSet Pods cannot exceed the number of EIPs specified in the annotation; otherwise, the Pods without EIPs go Pending. It is only applicable to bill-by-IP accounts.
    Required: No

    For samples, see Annotation.
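    As an illustrative sketch (the workload name and annotation values are hypothetical), the annotations above are set in the Pod template of a workload:

```yaml
# Hypothetical Deployment requesting a 2-core/4Gi Intel specification
# and binding a security group for Pods scheduled to a supernode.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
      annotations:
        eks.tke.cloud.tencent.com/cpu: "2"          # cpu and mem must be specified together
        eks.tke.cloud.tencent.com/mem: "4Gi"        # the unit must be included
        eks.tke.cloud.tencent.com/cpu-type: "intel"
        eks.tke.cloud.tencent.com/security-group-id: "sg-id1"  # hypothetical ID; must exist in the region
    spec:
      containers:
        - name: app
          image: nginx
```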

    Pod Limits

    Workload limits

    Pods of DaemonSet workloads are not scheduled to supernodes.

    Service limits

    For Services in clusters using GlobalRouter mode, if `externalTrafficPolicy` is set to `Local`, traffic is not forwarded to Pods scheduled to supernodes.
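    For reference, this is the Service field in question (a generic Kubernetes sketch; the name, selector, and ports are hypothetical):

```yaml
# With externalTrafficPolicy: Local, traffic for this Service is not
# forwarded to Pods scheduled to supernodes in GlobalRouter mode clusters.
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 80
```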

    Volume limits

    Pods that mount hostPath volumes are not scheduled to supernodes.

    Other limits

    • The supernode feature is not available in clusters without any server nodes.
    • Pods with Static IP Address enabled cannot be scheduled to supernodes.
    • Pods with hostPort specified are not scheduled to supernodes.
    • For Pods with hostIP specified, the Pod IP is used as the hostIP by default.
    • If anti-affinity is enabled, only one Pod of the same workload is created on a supernode.
    • If container logs are stored in a file on a specified node and log collection is performed through node files, the logs of Pods on supernodes cannot be collected.