Every cloud user wants their migration to the cloud to be efficient, stable, and highly available, which depends on system availability, data reliability, and OPS stability. This document lists the items to check when deploying containerized applications to the cloud, describing each item's category, type, impact, and reference, to help you migrate to Tencent Kubernetes Engine (TKE) smoothly and efficiently.
| Category | Item | Type | Impact | Reference |
| --- | --- | --- | --- | --- |
| Cluster | Before creating a cluster, plan the node network and container network to suit your application scenario so that capacity scaling is not restricted later. | Network planning | If the subnets or container IP ranges are too small, the cluster may support fewer nodes than your application actually needs. | Network Planning |
| Cluster | Before creating a cluster, review your plans for Direct Connect, peering connections, container IP ranges, and subnet IP ranges to prevent IP range conflicts that affect your applications. | Network planning | For simple networking scenarios, follow the instructions on the page to configure the cluster IP ranges and avoid conflicts. For complex networking scenarios, such as peering connections, Direct Connect, and VPN, improper network planning can disrupt normal communication within your application. | VPC Connections |
| Cluster | When you create a cluster, a new security group is automatically bound to it. You can also set custom security group rules to meet the needs of your application. | Deployment | Security groups are an important means of security isolation. Improper security policy configuration may lead to security risks, service connectivity issues, and other problems. | Configuring TKE Security Groups |
| Cluster | Containerd and Docker, the runtime components currently supported by TKE, suit different scenarios. When creating a cluster, select the container runtime component that matches your application scenarios. | Deployment | Once the cluster is created, the container runtime cannot be changed unless the cluster is recreated. | How to Choose Between Containerd and Docker |
| Cluster | By default, kube-proxy uses iptables to balance the load between a Service and its Pods. When creating a cluster, you can enable IPVS for traffic distribution and load balancing instead. | Deployment | You can enable IPVS when creating a cluster. It takes effect for the entire cluster and cannot be disabled afterwards. | Enabling IPVS for a Cluster |
| Cluster | When creating a cluster, choose the independent cluster mode or the managed cluster mode as needed. | Deployment | The Master and etcd of a managed cluster are not user resources; they are managed and maintained by Tencent Cloud's technical team. You cannot modify the deployment scale or service parameters of the Master and etcd. If you need to modify them, choose the independent deployment mode. | Cluster Overview, Introduction to Cluster Hosting Modes |
| Workload | When creating a workload, set CPU and memory limits to improve the robustness of your application (see the sample manifest after the checklist tables). | Deployment | When multiple applications are deployed on one node, an application without upper and lower resource limits that leaks resources can starve the other applications on the same node, causing exceptions in them and errors in the reported monitoring data. | Setting Resource Limits for Workloads |
| Workload | When creating a workload, configure container health checks, namely a liveness check and a readiness check (see the sample manifest after the checklist tables). | Reliability | If container health checks are not configured, application exceptions cannot be detected and the application cannot be restarted automatically for recovery. The Pod may appear normal while the application inside it behaves abnormally. | Configuring Health Checks for Workloads |
| Workload | When creating a Service, choose the appropriate access method as needed. Four access methods are currently supported: via Internet, intra-cluster, via VPC, and NodePort (see the Service example after the checklist tables). | Deployment | An improper access method may cause confused access logic and waste resources both inside and outside the cluster. | Service Management |
| Workload | When creating a workload, do not set the number of Pod replicas to 1, and set a node scheduling policy based on the needs of your application (see the replica and scheduling example after the checklist tables). | Reliability | With only 1 Pod replica, any node or Pod exception causes a service exception. To ensure that Pods can be scheduled successfully, make sure nodes still have resources available for container scheduling after the scheduling rules are set. | Adjusting the Pod Quantity, Setting Scheduling Rules for Workloads |
| Category | Item | Type | Impact | Reference |
| --- | --- | --- | --- | --- |
| Engineering | Check whether the quotas of resources such as CVMs, VPCs, subnets, and CBS disks meet your needs. | Deployment | Insufficient quotas will cause resource creation to fail. If you have enabled auto scaling, ensure that you have sufficient quotas for your Tencent Cloud services. | Quota Limits for Cluster Purchase, Quota Limits |
| Engineering | We recommend that you do not modify the kernel parameters, system configurations, versions of cluster core components, security groups, or LB parameters of the nodes in your cluster. | Deployment | Such modifications may cause TKE cluster features or the Kubernetes components installed on the node to fail, making the node unavailable for application deployment. | High-risk Operations in TKE |
| Proactive OPS | In addition to the basic resource monitoring provided by Cloud Monitor, TKE offers multidimensional monitoring and alarm features with more fine-grained metrics. Configuring monitoring and alarms helps you receive alarms promptly and locate faults when exceptions occur. | Monitoring | If monitoring and alarms are not configured, no performance baseline can be established for the container cluster, and alarms are not received promptly when an exception occurs, so you have to inspect your environment manually. | Setting Alarms, Viewing Monitoring Data, List of Monitoring and Alarm Metrics |
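The following is a minimal sketch of a workload manifest that sets resource requests/limits and configures liveness and readiness checks, as recommended in the Workload checks above. The names, image, port, and probe settings (demo-app, nginx:1.25, port 80, path /) are illustrative assumptions, not TKE defaults; adjust them to your application.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # assumed name for illustration
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25    # assumed image for illustration
        ports:
        - containerPort: 80
        resources:
          requests:          # lower bound, used for scheduling decisions
            cpu: 250m
            memory: 256Mi
          limits:            # upper bound, enforced at runtime
            cpu: 500m
            memory: 512Mi
        livenessProbe:       # restarts the container if it stops responding
          httpGet:
            path: /          # real applications usually expose a dedicated health endpoint
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
        readinessProbe:      # removes the Pod from Service endpoints until it is ready
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
```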
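As a rough illustration of the Service access methods, the snippets below show an intra-cluster (ClusterIP) Service and a public (LoadBalancer) Service for the assumed demo-app workload. NodePort access uses type NodePort, and VPC-internal access uses a LoadBalancer Service with a TKE-specific internal CLB annotation; see the Service Management reference for the exact configuration.

```yaml
# Intra-cluster access: reachable only from within the cluster.
apiVersion: v1
kind: Service
metadata:
  name: demo-app-internal
spec:
  type: ClusterIP
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 80
---
# Access via Internet: a LoadBalancer Service, which TKE backs with a CLB instance.
apiVersion: v1
kind: Service
metadata:
  name: demo-app-public
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 80
```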
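Finally, a sketch of running more than one replica with a soft pod anti-affinity rule so that replicas prefer to land on different nodes, in line with the replica-count and scheduling checks above. The labels and image are again assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                 # at least 2, so a single node or Pod failure does not stop the service
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      affinity:
        podAntiAffinity:      # prefer spreading replicas across different nodes
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: demo-app
      containers:
      - name: demo-app
        image: nginx:1.25     # assumed image for illustration
        ports:
        - containerPort: 80
```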