EKS follows the Kubernetes community, supporting the latest Kubernetes version and native Kubernetes cluster management. Tencent Cloud products for storage, networking, load balancing, and more are available out of the box as plugins.
EKS is a fully managed Kubernetes service, eliminating the need for you to manage any compute nodes. It delivers computing resources using pods. You can purchase, return, and manage cloud resources in native Kubernetes mode.
EKS is built on Tencent Cloud's well-developed virtualization technology and network architecture, providing 99.95% service availability. Tencent Cloud ensures virtualization- and network-level isolation of EKS clusters between users. You can configure network policies for specific products using security groups, network ACLs, etc.
EKS leverages the lightweight virtualization technology independently developed by Tencent Cloud to improve efficiency, allowing you to create or delete an EKS workload within seconds. By configuring Kubernetes' native HorizontalPodAutoscaler (HPA), EKS automatically scales services based on actual load.
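The native HPA mentioned above is configured with a standard Kubernetes manifest. The sketch below is illustrative: the Deployment name (`web`), replica bounds, and the 60% CPU target are placeholder values, not EKS defaults.

```yaml
# Illustrative HPA: scales the "web" Deployment between 2 and 10 replicas,
# targeting 60% average CPU utilization (autoscaling/v2 API).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```

On EKS, scaling out simply creates more pods (and the corresponding billed resources); scaling in releases them.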
The serverless framework of EKS ensures higher resource utilization and lower OPS costs. Flexible and efficient auto scaling ensures that EKS only consumes the amount of resources required by the current load.
EKS provides solutions that meet different business needs and can be integrated with most Tencent Cloud services, such as CBS, CFS, COS, TencentDB, VPC, and more.
Website, app backend, etc.
Using EKS to run microservices frees you from operating and maintaining compute nodes. The services can be automatically scaled based on actual loads, which can optimize resource usage and reduce costs.
AI training, supercomputing, etc.
To run offline computing tasks with EKS, you only need to prepare a container image to quickly deploy workloads. EKS is billed based on the computing resources actually consumed during task execution. When the task ends, pods are automatically released and billing stops.
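An offline task of this kind maps naturally to a standard Kubernetes Job. The manifest below is a sketch: the image name, completion counts, and resource requests are placeholders. Once the Job finishes, its pods terminate, which on EKS is the point at which billing for them stops.

```yaml
# Illustrative batch Job: runs 4 task completions, 2 at a time.
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-train
spec:
  completions: 4
  parallelism: 2
  ttlSecondsAfterFinished: 120   # clean up finished pods automatically
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: myrepo/train:latest   # placeholder image
        resources:
          requests:
            cpu: "4"
            memory: 8Gi
```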
Image processing, online translation, etc.
EKS can run online inference services using CPU, GPU, and vGPU. It features diverse resource specifications and auto scaling to provide you with high performance and cost efficiency.
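As an illustration, a pod can request GPU resources through standard Kubernetes resource limits. The image name and GPU count below are placeholders, and any EKS-specific annotations for selecting a particular GPU or vGPU model are omitted here.

```yaml
# Illustrative inference pod requesting one GPU via resource limits.
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  containers:
  - name: inference
    image: myrepo/inference:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1   # schedules the pod onto GPU-backed resources
```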