This document describes how to quickly create an Nginx service in a container cluster.
Creating Nginx service
- Log in to the TKE console and select Clusters in the left sidebar.
- On the Cluster Management page, click the ID of the cluster for which the service is to be created to go to the cluster's workload Deployment page, and then click Create. See the figure below.
- On the Create Workload page, specify the basic information of the workload as instructed. See the figure below.
- Workload Name: enter the name of the workload to be created. Here, nginx is used as an example.
- Description: enter information describing the workload.
- Tag: a key-value pair used to identify the workload. In this example, the tag is set to k8s-app = nginx by default.
- Namespace: select a namespace based on your requirements.
- Type: select a type based on your requirements.
- Volume: set up the workload volumes mounted based on your requirements. For more information, see Volume Management.
- Configure "Containers in the pod" as instructed. See the figure below.
Main parameters include:
- Name: enter the name of the container in the pod. Here, “test” is used as an example.
- Image: click Select an image, select DockerHub Image -> nginx in the pop-up window, and then click OK.
- Image Tag: use the default value.
- Image Pull Policy: select one of the three available policies as needed. In this example, the default policy is applied.
If no image pull policy is set, the Always policy is used when the image tag is empty or latest; otherwise, the IfNotPresent policy is used.
- Always: the image is always pulled remotely.
- IfNotPresent: a local image is used by default. If no local image is available, the image is pulled remotely.
- Never: a local image is used. If no local image is available, an exception is reported.
- In the "Number of Pods" section, set the number of pods for the service as instructed. See the figure below.
- Manual adjustment: set the number of pods. The number of pods in this example is set to 1. You can click "+" or "-" to change the number of pods.
- Auto Adjustment: the number of pods is automatically adjusted if any of the setting conditions are met. For more information, see Service Auto Scaling.
- Set up the workload access according to the following instructions. See the figure below:
- Service: check Enable.
- Service Access: select Via Internet.
- Load Balancer: select according to your requirements.
- Port Mapping: select TCP protocol, and set both the container port and service port to 80.
The security group of the cluster where the service resides must allow Internet access to the node network and container network, and must open ports 30000 to 32768 to the Internet. Otherwise, TKE may become unusable. For more information, see TKE Security Group Settings.
- Click Create Workload to complete the creation of the Nginx service.
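The console settings above roughly correspond to the following Kubernetes manifests. This is a sketch that mirrors the example values in this document (workload name nginx, tag k8s-app = nginx, container name test, one pod, TCP port 80 exposed through a public load balancer), not the exact manifest the console generates; provider-specific load balancer annotations are omitted.

```yaml
# Deployment sketch mirroring the console settings above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default            # use the namespace you selected in the console
  labels:
    k8s-app: nginx
spec:
  replicas: 1                   # "Number of Pods" (manual adjustment)
  selector:
    matchLabels:
      k8s-app: nginx
  template:
    metadata:
      labels:
        k8s-app: nginx
    spec:
      containers:
      - name: test
        image: nginx:latest     # DockerHub nginx image, default tag
        imagePullPolicy: Always # default policy when the tag is empty or latest
---
# Service exposing the workload via a public load balancer,
# mapping service port 80 to container port 80 over TCP.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    k8s-app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```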
Accessing Nginx service
The Nginx service can be accessed using either of the following two methods.
Accessing Nginx service using Cloud Load Balancer IP
- In the left sidebar, click Clusters to go to the Cluster Management page.
- Click on the Nginx service’s cluster ID and select Service -> Service.
- Copy the Nginx service’s cloud load balancer IP from the service management page. See the figure below:
- Enter the cloud load balancer IP in the browser’s address bar and press Enter to access the service.
Accessing Nginx service using service name
Other services or containers in the cluster can access this service directly via the service name.
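As an illustration, a one-off Pod like the following could reach the service by its in-cluster DNS name. This is a hypothetical example: the Pod and container names are placeholders, the curlimages/curl image is an assumption (any image that includes curl works), and default is assumed to be the service's namespace.

```yaml
# Hypothetical Pod that contacts the Nginx service by its DNS name.
apiVersion: v1
kind: Pod
metadata:
  name: curl-test               # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl      # assumption: any curl-capable image works
    # "nginx" is the service name; the fully qualified form is
    # nginx.<namespace>.svc.cluster.local
    args: ["curl", "-s", "http://nginx.default.svc.cluster.local"]
```

Within the same namespace, the short form http://nginx also resolves.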
Verifying Nginx service
When the service is successfully created, accessing it displays the Nginx server welcome page. See the figure below:
More Nginx settings
- See Building a Simple Web Service with Tencent Cloud TKE.
- If the container fails to be created, see the Event FAQs.