TKE's log collection feature allows you to collect logs in a cluster and send logs from specific paths of cluster services or nodes to Kafka, Elasticsearch, or Tencent Cloud Log Service (CLS). Log collection applies to users who need to store and analyze service logs in Kubernetes clusters.
Log collection must be manually enabled for each cluster. After log collection is enabled for a cluster, the log collection agent runs as a DaemonSet in the cluster, collects logs from the configured collection sources, and sends them to the configured consumers according to the log collection rules. Log collection supports the following operations:

- Collecting the standard output logs of containers
- Collecting file logs in pods (container file paths)
- Collecting file logs in specified node paths

Note the following:

- After log collection is enabled, the log collection agent occupies cluster resources: its CPU request is set to 1 core and its limit to 2 cores, and its memory request is set to 1 GB and its limit to 1.5 GB.
- A container's standard output logs are stored on its node as `/var/lib/docker/containers/<container-id>/<container-id>-json.log`, so you can specify the log collection path as `/var/lib/docker/containers/*/*-json.log`.
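Each line of such a file is a JSON object written by Docker's json-file logging driver. For reference, a sample line (the log content and timestamp are illustrative):

```json
{"log": "GET /healthz HTTP/1.1 200\n", "stream": "stdout", "time": "2021-01-01T00:00:00.000000000Z"}
```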
The log collection feature allows you to collect standard output logs of a specified container in Kubernetes clusters. You can configure collection rules flexibly based on your needs.
The collected logs are sent to the specified consumer in JSON format with Kubernetes metadata, including the labels and annotations of the pod to which the container belongs.
When you select container standard output as the collection type, the metadata below is added for each log by default, where `log` indicates the raw log information. This log source type allows you to select workloads of multiple namespaces at the same time.

| Field | Description |
| --- | --- |
| docker.container_id | ID of the container to which logs belong |
| kubernetes.annotations | Annotations of the pod to which logs belong |
| kubernetes.container_name | Name of the container to which logs belong |
| kubernetes.host | Host IP address of the pod to which logs belong |
| kubernetes.labels | Labels of the pod to which logs belong |
| kubernetes.namespace_name | Namespace of the pod to which logs belong |
| kubernetes.pod_id | ID of the pod to which logs belong |
| kubernetes.pod_name | Name of the pod to which logs belong |
| log | Raw log information |
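Putting the fields together, a collected standard output log delivered to the consumer could look like the sample below. All values are hypothetical, and the dotted field names from the table are shown here in nested JSON form:

```json
{
  "docker": {
    "container_id": "2f3c7d8a9b1e"
  },
  "kubernetes": {
    "annotations": {
      "deployment.kubernetes.io/revision": "1"
    },
    "container_name": "nginx",
    "host": "10.0.0.8",
    "labels": {
      "app": "nginx"
    },
    "namespace_name": "default",
    "pod_id": "0c7e2b1d-3a4f-41ea-8d7c-525400f5a3c6",
    "pod_name": "nginx-58d9f9d5c7-abcde"
  },
  "log": "GET / HTTP/1.1 200\n"
}
```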
The log collection feature also allows you to collect file logs of a specified pod in a cluster.
The collected logs are sent to the specified consumer in JSON format with Kubernetes metadata, including the labels and annotations of the pod to which the container belongs.
Currently, you can only collect log files stored in volumes. You must mount a volume such as emptyDir or hostPath when creating the workload and save the log files to the specified volume; a minimal example follows.
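A minimal sketch of such a workload, assuming a busybox container that appends to `/var/log/app.log` on an emptyDir volume (all names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-demo
  template:
    metadata:
      labels:
        app: log-demo
    spec:
      containers:
        - name: app
          image: busybox:1.35
          command: ["sh", "-c", "while true; do echo \"$(date) hello\" >> /var/log/app.log; sleep 5; done"]
          volumeMounts:
            - name: log-volume        # log files written here can be collected
              mountPath: /var/log
      volumes:
        - name: log-volume
          emptyDir: {}                # the volume that holds the log files
```

A collection rule that targets `/var/log/app.log` (or `/var/log/*.log`) in this pod can then pick up the file.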
You can specify a path or use wildcards, for example, `/var/log/nginx.log` or `/var/lib/docker/containers/*/*.log`, to collect log files in the corresponding paths of the pod.
When you select container file path as the collection type, the metadata below is added for each log by default, where `message` indicates the raw log information. This log source type does not support selecting workloads of multiple namespaces.

| Field | Description |
| --- | --- |
| docker.container_id | ID of the container to which logs belong |
| kubernetes.annotations | Annotations of the pod to which logs belong |
| kubernetes.container_name | Name of the container to which logs belong |
| kubernetes.host | Host IP address of the pod to which logs belong |
| kubernetes.labels | Labels of the pod to which logs belong |
| kubernetes.namespace_name | Namespace of the pod to which logs belong |
| kubernetes.pod_id | ID of the pod to which logs belong |
| kubernetes.pod_name | Name of the pod to which logs belong |
| file | Source log file |
| message | Raw log information |
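The payload mirrors the standard output case, except that the raw line arrives under `message` and the source file is recorded under `file`. A hypothetical sample (labels and annotations omitted for brevity):

```json
{
  "docker": {
    "container_id": "2f3c7d8a9b1e"
  },
  "kubernetes": {
    "container_name": "app",
    "host": "10.0.0.8",
    "namespace_name": "default",
    "pod_name": "log-demo-58d9f9d5c7-abcde"
  },
  "file": "/var/log/app.log",
  "message": "Mon Jan  1 00:00:00 UTC 2021 hello"
}
```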
Click Done to complete the creation.
The log collection feature allows you to collect logs in specified node paths of all nodes in a cluster. You can configure the required paths flexibly based on your needs. The log collection agent collects, from every node in the cluster, the file logs whose paths match the specified path rules. The collected logs are sent to the specified consumer in JSON format with the specified metadata, including the source file path and custom metadata.
You can specify a path or use wildcards, for example, `/var/log/nginx.log` or `/var/lib/docker/containers/*/*.log`, to collect log files in the corresponding paths of all nodes in the cluster.
You can add custom metadata as needed: the key-value pairs that you specify are attached to the collected logs as metadata tags and added to each log in JSON format.
For example, without specified metadata attached, the collected logs appear as below:
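A hypothetical sample, assuming a source file `/var/log/nginx.log` (the message content is illustrative):

```json
{
  "path": "/var/log/nginx.log",
  "message": "10.0.0.2 - - [01/Jan/2021:00:00:00 +0000] \"GET / HTTP/1.1\" 200 612"
}
```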
With specified metadata attached, the collected logs appear as below:
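The same hypothetical log with the custom key-value pair `service: nginx` attached:

```json
{
  "path": "/var/log/nginx.log",
  "message": "10.0.0.2 - - [01/Jan/2021:00:00:00 +0000] \"GET / HTTP/1.1\" 200 612",
  "service": "nginx"
}
```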
Compared with logs without specified metadata attached, JSON logs with metadata attached have an additional key, `service`.
Log metadata is defined as follows:

| Field | Description |
| --- | --- |
| path | Source file of logs |
| message | Raw log information |
| Custom key | Custom value |
The log collection feature allows you to set a topic of a self-built Kafka instance, a topic of a Tencent Cloud CKafka instance, a log topic of Tencent Cloud Log Service (CLS), or an Elasticsearch service as the consumer of logs. The log collection agent sends the collected logs to the specified Kafka topic, CLS log topic, or Elasticsearch index.

Only Kafka instances without access authentication are supported, and all nodes in the cluster must be able to access the specified Kafka topic. If you use the CKafka service provided by Tencent Cloud, select a CKafka instance; otherwise, enter the Kafka access address and topic.
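To verify that logs are arriving at the topic, you can run a quick consumer from any machine that can reach the brokers. This is a sketch, not part of the product: it assumes the `kafka-python` package and a placeholder broker address `10.0.0.100:9092` and topic `tke-logs`:

```python
from kafka import KafkaConsumer  # pip install kafka-python
import json

# Placeholder address and topic: replace with the access address and
# topic configured in the log collection rule.
consumer = KafkaConsumer(
    "tke-logs",
    bootstrap_servers=["10.0.0.100:9092"],
    auto_offset_reset="earliest",
)

for record in consumer:
    # Each record value is a JSON log entry produced by the collection agent.
    print(json.loads(record.value))
```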
Currently, CLS supports log collection and reporting only for container clusters in the same region.
Only Elasticsearch services without access authentication are supported, and all nodes in the cluster must be able to access the specified Elasticsearch service. Enter the Elasticsearch service's access address and storage index.
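Similarly, a quick way to confirm that logs are being indexed is to query the index from a node. A sketch assuming the `elasticsearch` Python package and a placeholder address `10.0.0.200:9200` and index `tke-logs`:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Placeholder address and index: replace with the values configured
# in the log collection rule.
es = Elasticsearch(["http://10.0.0.200:9200"])

# Fetch a few recently indexed log entries.
resp = es.search(index="tke-logs", size=5)
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```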