Using the Console to Configure Log Collection

Last updated: 2021-07-15 14:18:35

    Overview

    TKE's log collection feature allows you to collect logs in a cluster and send logs from specific paths of cluster services or nodes to Tencent Cloud Log Service (CLS). Log collection is suited to users who need to store and analyze service logs in Kubernetes clusters.

    You need to manually enable log collection for each cluster and configure the collection rules. After log collection is enabled for a cluster, the log collection agent runs as a DaemonSet in the cluster, collects logs based on the collection source, CLS log topic, and log parsing method configured in the log collection rules, and sends the collected logs to CLS for storage.
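
    After log collection is enabled, you can verify from the command line that the agent DaemonSet is running. Below is a minimal check; the DaemonSet name `tke-log-agent` used here is an assumption and may differ by TKE version, so use whatever name your cluster actually shows.

      # List DaemonSets in kube-system and look for the log collection agent.
      kubectl get daemonset -n kube-system

      # Confirm that one agent pod is running per node
      # ("tke-log-agent" is an assumed name; adjust it to your cluster).
      kubectl get pods -n kube-system -o wide | grep tke-log-agent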

    Prerequisites

    • Before enabling log collection, ensure that there are sufficient resources on cluster nodes, as enabling log collection occupies some cluster resources (a quick resource check follows this list).
      • CPU resources occupied: 0.11 to 1.1 cores by default. You can increase the CPU resources as needed if the log volume is large.
      • Memory resources occupied: 24 to 560 MB by default. You can increase the memory resources as needed if the log volume is large.
      • Maximum log length: 512 KB per log. A log is truncated if it exceeds this limit.
    • To use log collection, confirm that nodes in the Kubernetes cluster can access CLS. Only Kubernetes clusters of version 1.10 or later support the log collection features described below.
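
    A quick way to check whether your nodes have headroom for the agent is to use standard kubectl commands, as sketched below; `kubectl top nodes` requires metrics-server to be installed in the cluster.

      # Show allocatable CPU and memory on each node.
      kubectl describe nodes | grep -A 5 "Allocatable:"

      # Show current CPU and memory usage per node (requires metrics-server).
      kubectl top nodes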

    Concepts

    • Log Collection Agent: the agent that TKE uses to collect logs. It is based on LogListener and runs in the cluster as a DaemonSet.
    • Log Rules: users can use log rules to specify the log collection source, log topic, and log parsing method, and to configure filters (see the sketch after this list).
      • The log collection agent monitors changes in the log collection rules, and rule changes take effect within 10 seconds.
      • Multiple log collection rules do not create multiple DaemonSets, but too many log collection rules cause the log collection agent to occupy more resources.
    • Log Source: log sources include specified container standard output, files in containers, and node files.
      • When collecting container standard output logs, users can select, as the log collection source, the logs of all containers in the cluster, of specified workloads, or of Pods with specified labels.
      • When collecting container file path logs, users can specify, as the collection source, file paths inside the containers of specified workloads or of Pods with specified labels.
      • When collecting node file path logs, users can set the node file path as the log collection source.
    • Consumer: users can select a CLS logset and log topic as the consumer end.
    • Extraction mode: the log collection agent supports the delivery of collected logs to the user-specified log topic in the format of single-line text, JSON, separator-based text, multi-line text, or full regex.
    • Filter: after the filter is enabled, only logs matching the specified rules are collected. The key supports full matching, and the rule supports regex matching. For example, you can choose to collect only logs that contain "ErrorCode = 404".
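
    In the cluster, TKE represents log collection rules as LogConfig custom resources, and a rule created in the console maps to such an object. The sketch below shows roughly what a rule that collects standard output from all containers in a namespace might look like; the apiVersion, field names, and placeholder values are assumptions that may differ across TKE versions, and the console manages these objects for you, so this is for reference only.

      # Rough, hypothetical sketch of a LogConfig object; field names are assumptions.
      apiVersion: cls.cloud.tencent.com/v1            # assumed API group/version
      kind: LogConfig
      metadata:
        name: sample-stdout-rule                      # hypothetical rule name
      spec:
        clsDetail:
          topicId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   # target CLS log topic (placeholder)
        inputDetail:
          type: container_stdout                      # collection type: container standard output
          containerStdout:
            namespace: default
            allContainers: true                       # collect stdout of all containers in the namespace

    If you want to see the rules that already exist in your cluster, you can list them with kubectl (for example, `kubectl get logconfigs`, assuming that is the resource name registered by the CRD in your cluster).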

    Directions

    Enabling log collection

    1. Log in to the TKE console and choose Cluster OPS > Feature Management in the left sidebar.
    2. At the top of the Feature Management page, select the region. On the right side of the cluster for which you want to enable log collection, click Set, as shown in the figure below:
    3. On the "Configure Features" page, click Edit for log collection, enable log collection, and confirm this operation, as shown in the figure below:

    Configuring the log rules

    1. Log in to the TKE console and choose Cluster OPS > Log Collection Rules in the left sidebar.
    2. At the top of the “Log Rules” page, select the region and the cluster where you want to configure the log collection rules and click Create, as shown in the figure below:
    3. On the Create Log Collecting Policy page, select the collection type and configure the log source. Currently, the following collection types are supported: Container Standard Output, Container File Path, and Node File Path.

      Select Container Standard Output as the collection type and configure the log source as needed. This type of log source allows you to select the workloads of multiple namespaces at a time, as shown in the figure below:

    4. Configure the CLS as the consumer end. Select the desired logset and log topic. You can select new or existing log topics, as shown in the figure below:
      Note:

      • CLS currently only supports log collection and reporting for intra-region container clusters.
      • If there are already 10 log topics, you cannot create a new one.
    5. Click Next and choose a log extraction mode, as shown below:

      Note:

      Currently, one log topic supports only one collection configuration. Ensure that all container logs that adopt the log topic can accept the log parsing method that you choose. If you create different collection configurations under the same log topic, the earlier collection configurations will be overwritten.

      The supported parsing modes are described below; see the corresponding CLS documentation for details.

      • Full text in a single line: a log contains only one line of content, and the line break `\n` marks the end of a log. Each log is parsed into a complete string with CONTENT as the key. When log index is enabled, you can search for log content via full-text search. The time attribute of a log is determined by the collection time. See Full Text in a Single Line.
      • Full text in multi lines: a log spans multiple lines, and a first-line regular expression is used for matching. When a line matches the preset regular expression, it is considered the beginning of a log, and the next line that matches the expression marks the start of a new log and thus the end of the current one. A default key, CONTENT, is set as well. The time attribute of a log is determined by the collection time. The regular expression can be generated automatically. See Full Text in Multi Lines.
      • Single line - full regex: a log parsing mode in which multiple key-value pairs can be extracted from a complete log. When configuring this mode, you need to enter a sample log first and then customize your regular expression. After the configuration is completed, the system extracts the corresponding key-value pairs according to the capture groups in the regular expression. The regular expression can be generated automatically. See Full Regular Format (Single-Line).
      • Multiple lines - full regex: a log parsing mode in which multiple key-value pairs can be extracted, based on a regular expression, from a complete piece of log data that spans multiple lines in a log text file (such as Java program logs). When configuring this mode, you need to enter a sample log first and then customize your regular expression. After the configuration is completed, the system extracts the corresponding key-value pairs according to the capture groups in the regular expression. The regular expression can be generated automatically. See Full Regular Format (Multi-Line).
      • JSON: a JSON log automatically extracts the keys at the first layer as field names and the values at the first layer as field values to structure the entire log. Each complete log ends with a line break `\n`. See JSON Format.
      • Separator: the entire log is structured according to the specified separator, and each complete log ends with a line break `\n`. When CLS processes separator logs, you need to define a unique key for each separated field. Invalid fields, i.e., fields that do not need to be collected, can be left blank, but you cannot leave all fields blank. See Separator Format.
    6. Enable the filter and configure rules as needed, then click Complete, as shown in the figure below. A worked example of JSON parsing and filtering follows these steps.
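
    To make the extraction and filter settings concrete, below is a small worked example based on the JSON parsing mode and the filter described above; the log content, field names, and values are purely illustrative.

      # Raw log line written by a container (one JSON object per line, ending with \n):
      #   {"ErrorCode": 404, "path": "/online/sample", "remote_ip": "10.135.46.111"}
      #
      # With the JSON extraction mode, the first-layer keys become field names and
      # the first-layer values become field values, so the log is structured as:
      #   ErrorCode = 404, path = /online/sample, remote_ip = 10.135.46.111
      #
      # With a filter whose key is "ErrorCode" and whose rule is the regex "404"
      # (the key is matched fully and the rule is matched as a regex), only log
      # lines whose ErrorCode matches 404 are delivered to the log topic.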

    Updating the log rules

    1. Log in to the TKE console and choose Cluster OPS > Log Collection Rules in the left sidebar.
    2. At the top of the “Log Rules” page, select the region and the cluster where you want to update the log collection rules, and click Edit Rule on the right, as shown in the figure below:
    3. Update the configuration as needed and click Done.
      Note:

      The logset and log topic cannot be updated.