The EKS log collection feature can send the logs of services within a cluster to CLS, CKafka, or self-built Kafka. This feature is suitable for users who need to store and analyze service logs in EKS clusters. This document describes how to use the cluster log collection feature provided by EKS.
To use the EKS log collection feature, you must manually enable it for each elastic cluster when creating a workload. You can enable it by performing the following operations:
After the EKS log collection feature is enabled, the log collection agent will send the collected logs in JSON format to the consumer that you have specified based on your configuration of the collection path and log consumer. The details of the collection path and consumer are as follows:
The EKS log collection feature collects log information and outputs it to the specified consumer in JSON format with Kubernetes metadata attached, including the labels and annotations of the pod to which the container belongs. The specific directions are as follows:
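For illustration, a collected log line with metadata attached might look like the following. This is a sketch only: the exact field names and nesting depend on the collection agent version and are assumptions here, not a guaranteed schema.

```json
{
  "message": "hello world",
  "kubernetes": {
    "namespace_name": "default",
    "pod_name": "kafka-5f8d9c7b6-xxxxx",
    "container_name": "while",
    "labels": { "k8s-app": "kafka", "qcloud-app": "kafka" },
    "annotations": { "deployment.kubernetes.io/revision": "1" }
  }
}
```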
Note:
- All containers in the same pod must use the same authorization method, and the last modification prevails. For example, if you select key authorization for the first container and role authorization for the second container, both containers will end up using role authorization.
- All containers in the same pod must be authorized with the same role.
Note:
The user corresponding to the API key must have permission to access CLS. If you do not have an API key, create one first. For more information, see Access Key.
The log collection feature supports setting a self-built Kafka cluster or a log topic specified by CLS as the consumer of log content. The log collection agent sends the collected logs to the specified Kafka topic or CLS log topic.
If you select Kafka as the log consumer, we recommend CKafka. Its consumption and production modes work the same as native Kafka, and it supports alarm configuration.
Specify the Kafka broker address and topic in the container configuration, and ensure that all resources in the cluster can access the specified Kafka topic.
Note:
Set `cleanup.policy` to "delete" in the Kafka topic configuration. If it is set to "compact", log reporting to Kafka will fail, resulting in data loss.
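If you manage the topic with Kafka's bundled CLI tools, you can check and set the policy explicitly. The commands below are a sketch that assumes a broker reachable at 10.0.16.42:9092 and the topic name `eks` used in this document:

```shell
# Check the current cleanup.policy of the topic
kafka-configs.sh --bootstrap-server 10.0.16.42:9092 \
  --entity-type topics --entity-name eks --describe

# Explicitly set cleanup.policy to "delete"
kafka-configs.sh --bootstrap-server 10.0.16.42:9092 \
  --entity-type topics --entity-name eks \
  --alter --add-config cleanup.policy=delete
```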
This document provides three collection methods for your choice: collecting logs to Kafka, collecting logs to CLS via a Secret, and collecting logs to CLS via a role.
Note:
If both key and role authorization are configured in the YAML, the pod actually uses role authorization.
Enable log collection by adding environment variables.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    k8s-app: kafka
    qcloud-app: kafka
  name: kafka
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kafka
      qcloud-app: kafka
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/cpu: "0.25"
        eks.tke.cloud.tencent.com/mem: "0.5Gi"
      labels:
        k8s-app: kafka
        qcloud-app: kafka
    spec:
      containers:
      - env:
        - name: EKS_LOGS_OUTPUT_TYPE
          value: kafka
        - name: EKS_LOGS_KAFKA_BROKERS
          value: 10.0.16.42:9092
        - name: EKS_LOGS_KAFKA_TOPIC
          value: eks
        - name: EKS_LOGS_METADATA_ON
          value: "true"
        - name: EKS_LOGS_LOG_PATHS
          value: stdout,/tmp/busy*.log
        image: busybox:latest
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo hello world; date; echo hello >> /tmp/busy.log; sleep 1; done"]
        imagePullPolicy: Always
        name: while
        resources:
          requests:
            cpu: 250m
            memory: 512Mi
```
Field description:
Field Name | Meaning |
---|---|
EKS_LOGS_OUTPUT_TYPE | Log consumer type. Valid values: `kafka`, `cls`. The presence of this key indicates that log collection is enabled. |
EKS_LOGS_LOG_PATHS | Log path. Supports `stdout` (collects standard output) and absolute paths. Wildcard (`*`) is supported. Separate multiple paths with `,`. |
EKS_LOGS_METADATA_ON | Whether to attach Kubernetes metadata. Valid values: `true`, `false`. Default value: `true`. |
EKS_LOGS_KAFKA_TOPIC | Kafka topic to which logs are sent |
EKS_LOGS_KAFKA_BROKERS | Kafka broker addresses in the format `ip1:port1,ip1:port2,ip2:port2`, separated by `,`. Use this environment variable externally; EKS_LOGS_KAFKA_HOST is no longer exposed to external users. |
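Before putting a broker list in the manifest, its format can be sanity-checked locally. This is a small illustrative snippet, not part of the EKS agent:

```shell
# Split an EKS_LOGS_KAFKA_BROKERS value on "," and check that
# each entry has the host:port shape.
BROKERS="10.0.16.42:9092,10.0.16.43:9092"
echo "$BROKERS" | tr ',' '\n' | while read -r b; do
  case "$b" in
    *:*) echo "ok: $b" ;;
    *)   echo "bad: $b" ;;
  esac
done
```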
Creating a secret
Note:
The following sample manually creates a Secret through YAML. If you create a Secret through the console, you do not need to perform Base64 encoding. For more information, see Secret Management.
Run the following commands to Base64-encode the secretid and secretkey. Replace secretid and secretkey with the actual values that you use.
```shell
$ echo -n 'secretid' | base64
c2VjcmV0aWQ=
$ echo -n 'secretkey' | base64
c2VjcmV0a2V5
```
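You can verify that the encoded values round-trip correctly before pasting them into the Secret manifest:

```shell
# Decode the Base64 string to confirm it matches the original value
encoded=$(echo -n 'secretid' | base64)
decoded=$(echo "$encoded" | base64 -d)
echo "$decoded"   # prints: secretid
```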
Manually create a Secret via YAML. Set secretid and secretkey to the Base64 values obtained in the previous step.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secretidkey
data:
  secretid: c2VjcmV0aWQ=
  secretkey: c2VjcmV0a2V5
```
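Alternatively, kubectl can create an equivalent Secret directly from literal values and handle the Base64 encoding for you. This sketch reuses the Secret name and key names from the manifest above; replace the literal values with your own:

```shell
# kubectl Base64-encodes --from-literal values automatically
kubectl create secret generic secretidkey \
  --from-literal=secretid='secretid' \
  --from-literal=secretkey='secretkey'
```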
Creating a deployment
Enable log collection by adding environment variables.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    k8s-app: cls
    qcloud-app: cls
  name: cls
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: cls
      qcloud-app: cls
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/cpu: "0.25"
        eks.tke.cloud.tencent.com/mem: "0.5Gi"
      labels:
        k8s-app: cls
        qcloud-app: cls
    spec:
      containers:
      - env:
        - name: EKS_LOGS_OUTPUT_TYPE
          value: cls
        - name: EKS_LOGS_LOGSET_NAME
          value: eks
        - name: EKS_LOGS_TOPIC_ID
          value: 617c8270-e8c8-46e2-a90b-d94c4bebe519
        - name: EKS_LOGS_SECRET_ID
          valueFrom:
            secretKeyRef:
              name: secretidkey
              key: secretid
        - name: EKS_LOGS_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: secretidkey
              key: secretkey
        - name: EKS_LOGS_LOG_PATHS
          value: stdout,/tmp/busy*.log
        - name: EKS_LOGS_METADATA_ON
          value: "true"
        image: busybox:latest
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo hello world; date; echo hello >> /tmp/busy.log; sleep 1; done"]
        imagePullPolicy: Always
        name: hello
      - env:
        - name: EKS_LOGS_OUTPUT_TYPE
          value: cls
        - name: EKS_LOGS_LOGSET_NAME
          value: eks
        - name: EKS_LOGS_TOPIC_ID
          value: 617c8270-e8c8-46e2-a90b-d94c4bebe519
        - name: EKS_LOGS_SECRET_ID
          valueFrom:
            secretKeyRef:
              name: secretidkey
              key: secretid
        - name: EKS_LOGS_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: secretidkey
              key: secretkey
        - name: EKS_LOGS_LOG_PATHS
          value: stdout,/tmp/busy*.log
        - name: EKS_LOGS_METADATA_ON
          value: "true"
        image: busybox:latest
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo hello world; date; echo hello >> /tmp/busy.log; sleep 1; done"]
        imagePullPolicy: Always
        name: world
```
Field description:
Field Name | Meaning |
---|---|
EKS_LOGS_OUTPUT_TYPE | Log consumer type. Valid values: `kafka`, `cls`. The presence of this key indicates that log collection is enabled. |
EKS_LOGS_LOG_PATHS | Log path. Supports `stdout` (collects standard output) and absolute paths. Wildcard (`*`) is supported. Separate multiple paths with `,`. |
EKS_LOGS_METADATA_ON | Whether to attach Kubernetes metadata. Valid values: `true`, `false`. Default value: `true`. |
EKS_LOGS_LOGSET_NAME | CLS logset name |
EKS_LOGS_TOPIC_ID | CLS log topic ID |
EKS_LOGS_SECRET_ID | SecretId of the API key |
EKS_LOGS_SECRET_KEY | SecretKey of the API key |
Creating a role
On the CAM console, create a role. When creating the role, select Tencent Cloud Product Service as the role entity, bind the role with CVM, and select the QcloudCLSAccessForApiGateWayRole policy. For more information, see Creating Roles.
In the pod template, add an annotation specifying the role name so that the pod obtains the permission policy of the role.
```yaml
template:
  metadata:
    annotations:
      eks.tke.cloud.tencent.com/role-name: "eks-pushlog"
```
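For an existing workload, the same annotation can be added with `kubectl patch`. This is a sketch that assumes a Deployment named `cls`, as in the examples in this document:

```shell
# Add the role-name annotation to the pod template of an existing Deployment
kubectl patch deployment cls --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"eks.tke.cloud.tencent.com/role-name":"eks-pushlog"}}}}}'
```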
Creating a deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    k8s-app: cls
    qcloud-app: cls
  name: cls
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: cls
      qcloud-app: cls
  template:
    metadata:
      annotations:
        eks.tke.cloud.tencent.com/cpu: "0.25"
        eks.tke.cloud.tencent.com/mem: "0.5Gi"
        eks.tke.cloud.tencent.com/role-name: "eks-pushlog"
      labels:
        k8s-app: cls
        qcloud-app: cls
    spec:
      containers:
      - env:
        - name: EKS_LOGS_OUTPUT_TYPE
          value: cls
        - name: EKS_LOGS_LOGSET_NAME
          value: eks
        - name: EKS_LOGS_TOPIC_ID
          value: 617c8270-e8c8-46e2-a90b-d94c4bebe519
        - name: EKS_LOGS_LOG_PATHS
          value: stdout,/tmp/busy*.log
        - name: EKS_LOGS_METADATA_ON
          value: "true"
        image: busybox:latest
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo hello world; date; echo hello >> /tmp/busy.log; sleep 1; done"]
        imagePullPolicy: Always
        name: hello
      - env:
        - name: EKS_LOGS_OUTPUT_TYPE
          value: cls
        - name: EKS_LOGS_LOGSET_NAME
          value: eks
        - name: EKS_LOGS_TOPIC_ID
          value: 617c8270-e8c8-46e2-a90b-d94c4bebe519
        - name: EKS_LOGS_LOG_PATHS
          value: stdout,/tmp/busy*.log
        - name: EKS_LOGS_METADATA_ON
          value: "true"
        image: busybox:latest
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo hello world; date; echo hello >> /tmp/busy.log; sleep 1; done"]
        imagePullPolicy: Always
        name: world
```
Field description:
Field Name | Meaning |
---|---|
EKS_LOGS_OUTPUT_TYPE | Log consumer type. Valid values: `kafka`, `cls`. The presence of this key indicates that log collection is enabled. |
EKS_LOGS_LOG_PATHS | Log path. Supports `stdout` (collects standard output) and absolute paths. Wildcard (`*`) is supported. Separate multiple paths with `,`. |
EKS_LOGS_METADATA_ON | Whether to attach Kubernetes metadata. Valid values: `true`, `false`. Default value: `true`. |
EKS_LOGS_LOGSET_NAME | CLS logset name |
EKS_LOGS_TOPIC_ID | CLS log topic ID |
You can update the log collection configuration via the console or via YAML. Please refer to the following directions:
Find the YAML corresponding to the workload for which you want to update log collection, and modify the values of the relevant environment variables. You can view the meanings of the variable names in Configuring log collection.
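When updating via YAML, the environment variables can also be changed in place with `kubectl set env`. The commands below are a sketch assuming the Deployment `cls` from the examples above:

```shell
# Change the collection paths of an existing Deployment
kubectl set env deployment/cls EKS_LOGS_LOG_PATHS=stdout

# Turn off metadata attachment
kubectl set env deployment/cls EKS_LOGS_METADATA_ON=false

# Remove a variable by appending "-" to its name
kubectl set env deployment/cls EKS_LOGS_METADATA_ON-
```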