The Log Collection feature lets you collect logs in a cluster, including logs stored in files inside containers or on cluster nodes, and ship them to Tencent Cloud CLS and CKafka.
You need to manually enable log collection for each cluster and configure the collection rules. After log collection is enabled for a cluster, the log collection agent runs as a DaemonSet in the cluster, collects logs based on the collection source, CLS log topic, and log parsing method configured in the log collection rules, and sends the collected logs to the consumer. Log collection supports the following operations:
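To confirm that the agent is deployed after you enable the feature, you can query its DaemonSet through the Kubernetes API. Below is a minimal sketch using the official Python client; the namespace and label selector are assumptions and may differ in your cluster.

```python
# Minimal sketch: verify the log collection agent DaemonSet is running.
# The namespace ("kube-system") and label selector ("k8s-app=tke-log-agent")
# are assumptions; adjust them to match your cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a Pod
apps = client.AppsV1Api()

daemon_sets = apps.list_namespaced_daemon_set(
    namespace="kube-system",
    label_selector="k8s-app=tke-log-agent",  # assumed label; verify in your cluster
)
for ds in daemon_sets.items:
    status = ds.status
    print(f"{ds.metadata.name}: {status.number_ready}/{status.desired_number_scheduled} ready")
```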
Log in to the TKE console and click Operation Management > Log Rules in the left sidebar.
At the top of the “Log Rules” page, select the region and the cluster where you want to configure log collection rules, and then click Create.
On the Create Log Collecting Policy page, select the collection type and configure the log source. Currently, the following collection types are supported: Container Standard Output, Container File Path, and Node File Path.
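Besides the console, TKE clusters with log collection enabled typically expose collection rules as a LogConfig custom resource. The sketch below creates a Container Standard Output rule through the Kubernetes Python client; the CRD group/version, plural, and spec field names are assumptions based on common TKE setups, so verify them against your cluster's CRDs before use.

```python
# Hedged sketch: creating a log collection rule as a LogConfig custom resource.
# The group/version ("cls.cloud.tencent.com/v1"), plural ("logconfigs"), and
# spec fields are assumptions; confirm them with `kubectl get crd` first.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

log_config = {
    "apiVersion": "cls.cloud.tencent.com/v1",
    "kind": "LogConfig",
    "metadata": {"name": "nginx-stdout"},
    "spec": {
        # Consumer side: the CLS topic to ship to (placeholder ID).
        "clsDetail": {
            "topicId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
            "logType": "minimalist_log",  # "full text in a single line"
        },
        # Collection source: standard output of all containers in a namespace.
        "inputDetail": {
            "type": "container_stdout",
            "containerStdout": {
                "namespace": "default",
                "allContainers": True,
            },
        },
    },
}

custom.create_cluster_custom_object(
    group="cls.cloud.tencent.com",
    version="v1",
    plural="logconfigs",  # assumed plural for the LogConfig CRD
    body=log_config,
)
```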
Select Container Standard Output as the collection type and configure the log source as needed. This log source type allows you to select workloads from multiple namespaces at a time.
Note: For container standard output and container files (not mounted via hostPath), in addition to the original log content, metadata related to the container or Kubernetes (such as the ID of the container that generated the logs) is also reported to CLS. Therefore, when viewing logs, you can trace the log source or search by container identifier or characteristics (such as container name and labels).
The metadata related to the container or Kubernetes is shown in the table below:
| Field Name | Description |
|---|---|
| container_id | ID of the container to which logs belong |
| container_name | Name of the container to which logs belong |
| image_name | Image name of the container to which logs belong |
| namespace | Namespace of the Pod to which logs belong |
| pod_uid | UID of the Pod to which logs belong |
| pod_name | Name of the Pod to which logs belong |
| pod_label_{label name} | Labels of the Pod to which logs belong (for example, if a Pod has the labels app=nginx and env=prod, the reported log carries two metadata entries: pod_label_app:nginx and pod_label_env:prod) |
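For illustration, a collected stdout log enriched with this metadata could look like the entry below. The structure is a hypothetical sketch; only the metadata field names come from the table above, and all values are made up.

```python
import json

# Hypothetical example of a collected stdout log after the agent attaches
# container/Kubernetes metadata. Only the field names come from the table
# above; the values and overall shape are illustrative.
log_entry = {
    "CONTENT": '10.0.0.1 - - [08/May/2024:12:00:00 +0000] "GET / HTTP/1.1" 200',
    "container_id": "4d3e6f1a9c2b...",
    "container_name": "nginx",
    "image_name": "nginx:1.25",
    "namespace": "default",
    "pod_uid": "8f7a2c1e-0000-0000-0000-000000000000",
    "pod_name": "nginx-7d9c8b5b6-abcde",
    "pod_label_app": "nginx",
    "pod_label_env": "prod",
}
print(json.dumps(log_entry, indent=2))
```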
Configure the consumer of logs.
Select a logset and the corresponding log topic. You can create a log topic or select an existing one (a sketch of creating one programmatically follows the note below).
Note:
- CLS only supports log collection and reporting for container clusters in the same region.
- If a logset already contains 500 log topics, no more log topics can be created in it.
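If you prefer to create the log topic programmatically rather than in the console, CLS provides a CreateTopic API in the Tencent Cloud SDK. The sketch below assumes the standard tencentcloud-sdk-python call pattern; all credentials and IDs are placeholders.

```python
# Hedged sketch: creating a CLS log topic with tencentcloud-sdk-python.
# Credentials, logset ID, and region are placeholders; the region must match
# the cluster's region (see the note above).
from tencentcloud.common import credential
from tencentcloud.cls.v20201016 import cls_client, models

cred = credential.Credential("SECRET_ID", "SECRET_KEY")  # placeholder credentials
client = cls_client.ClsClient(cred, "ap-guangzhou")      # cluster's region

req = models.CreateTopicRequest()
req.LogsetId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"    # target logset (placeholder)
req.TopicName = "tke-nginx-stdout"

resp = client.CreateTopic(req)
print(resp.TopicId)  # use this topic ID in the collection rule
```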
Click Next and choose a log extraction mode. The supported parsing modes are described in the table below.
Note:
- One log topic supports only one collection configuration. Ensure that all container logs that use the log topic can accept the log parsing method you choose. If you create different collection configurations under the same log topic, the earlier configurations will be overwritten.
- The log parsing method can be configured only when you ship logs to CLS.
| Parsing Mode | Description | Related Document |
|---|---|---|
| Full text in a single line | Each log contains only one line of content, and a line break `\n` marks the end of a log. Each log is parsed into a complete string with `CONTENT` as the key. When log index is enabled, you can search log content via full-text search. The time attribute of a log is determined by the collection time. | Full Text in a Single Line |
| Full text in multi lines | A log spans multiple lines, and a first-line regular expression is used for matching. A line that matches the preset regular expression is considered the beginning of a log, and the next matching line marks the start of the next log. A default key, `CONTENT`, is set as well. The time attribute of a log is determined by the collection time. The regular expression can be generated automatically. | Full Text in Multi Lines |
| Single line - full regex | A log parsing mode in which multiple key-value pairs are extracted from a complete single-line log. When configuring this mode, you need to enter a sample log first and then customize your regular expression. After the configuration is completed, the system extracts the corresponding key-value pairs according to the capture groups in the regular expression. The regular expression can be generated automatically. | Full Regular Format (Single-Line) |
| Multiple lines - full regex | A log parsing mode in which multiple key-value pairs are extracted, based on a regular expression, from a complete log that spans multiple lines in a log text file (such as Java program logs). When configuring this mode, you need to enter a sample log first and then customize your regular expression. After the configuration is completed, the system extracts the corresponding key-value pairs according to the capture groups in the regular expression. The regular expression can be generated automatically. | Full Regular Format (Multi-Line) |
| JSON | A JSON log automatically extracts the keys at the first layer as field names and the values at the first layer as field values to structure the entire log. Each complete log ends with a line break `\n`. | JSON Format |
| Separator | Structures the data in a log with the specified separator; each complete log ends with a line break `\n`. Define a unique key for each separated field, and leave a field blank if you don't need to collect it. At least one field is required. | Separator Format |
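To make the regex-based modes concrete, the sketch below mimics, in plain Python, how a first-line regular expression splits multi-line logs and how capture groups yield key-value pairs. The log format and field names are invented for illustration; the agent's actual implementation is internal to CLS.

```python
import json
import re

# Illustrative only: mimics the "full text in multi lines" and "full regex"
# behaviors conceptually. The sample format and field names are invented.
raw = """\
2024-05-08 12:00:00 ERROR Something failed
    at com.example.Main.run(Main.java:42)
2024-05-08 12:00:01 INFO Recovered
"""

# First-line regex: a line matching this pattern starts a new log;
# non-matching lines belong to the current log.
first_line = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

logs, current = [], []
for line in raw.splitlines():
    if first_line.match(line) and current:
        logs.append("\n".join(current))
        current = []
    current.append(line)
if current:
    logs.append("\n".join(current))

# Full regex with named capture groups: each group becomes an extracted
# key-value pair, analogous to the "full regex" parsing modes.
full = re.compile(r"^(?P<time>\S+ \S+) (?P<level>\w+) (?P<message>.*)", re.S)
for log in logs:
    m = full.match(log)
    if m:
        print(json.dumps(m.groupdict()))
```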
Enable the filter and configure rules as needed, and then click Done.
Note: The logset and log topic cannot be modified later.