CLS allows you to collect, store, search, and ship logs, visualize them in charts, and configure alarms. With CLS, you can collect logs for centralized management, search, and analysis. You can also set alarms for log topics and ship the collected logs to COS or other products for further analysis.
To get you started, this document describes how to use the following CLS features:
First, you need to activate CLS on Tencent Cloud.
LogListener is a client that collects log data and sends it to CLS in a fast and non-intrusive way. You can install it as follows:
To install LogListener, the source server must be able to connect to the CLS region. Tencent Cloud Virtual Machine (CVM) instances access CLS over the private network by default.
You can run the following command to check network connectivity, where `<region>` is the abbreviation for the CLS region. For more information about regions, please see Available Regions.
`telnet <region>.cls.tencentyun.com 80`
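If telnet is not installed on the server, the same TCP check can be scripted. Below is a minimal sketch in Python; the hostname follows the endpoint pattern above, and the region abbreviation is a placeholder you must substitute:

```python
import socket

def check_cls_connectivity(region: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the CLS private endpoint succeeds."""
    # Endpoint pattern from the telnet command above.
    host = f"{region}.cls.tencentyun.com"
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refused connection, or timeout
        return False

# Example (replace with your actual region abbreviation):
# check_cls_connectivity("ap-guangzhou")
```

The function returns `False` for any failure (DNS, refusal, timeout), which mirrors what a failed telnet attempt tells you.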
Log in to the CAM console, view (or create) a key pair, and make sure that the key is enabled.
In this example, the CLS service runs in the CVM CentOS 7.2 (64-bit) environment. To download and install LogListener, see LogListener Installation Guide.
CLS is available in multiple regions. To reduce network latency, please create log resources in the region closest to your business. For supported regions, please see Available Regions. Log topic management covers logsets and log topics: a logset represents a project, while a log topic represents a type of service. A single logset may contain multiple log topics.
Log in to the CLS console.
In the left sidebar, click Log Topic to go to the management page.
Select a region and click Create Log Topic.
Configure as needed on the page that is displayed.
Logs in a logset can be retained for 3–90 days. To retain them for a longer period, please submit a ticket.
A created log topic will be displayed in the log topic list.
You can click Manage Logset to view the created logset.
CLS uses a server group to manage a list of log source servers.
The following describes how to collect logs using LogListener. For more information, please see Collection Methods.
The collection path must match the absolute path of the log file on the server. It consists of two parameters, the directory prefix and the file name, in the format `[directory prefix expression]/**/[file name expression]`. LogListener matches all paths with common prefixes that satisfy the `[directory prefix expression]`, and monitors all log files under these directories (including subdirectories) that satisfy the `[file name expression]`. The parameters are described below:
| Parameter | Description |
|---|---|
| Directory prefix | The directory prefix of the log files; only the wildcard characters `*` and `?` are supported |
| `/**/` | The current directory and all its subdirectories |
| File name | The log file name; only the wildcard characters `*` and `?` are supported |
For example, if the absolute path of the file to be collected is `/cls/logs/access.log`, enter `/cls/logs` as the directory prefix and `access.log` as the file name.
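The matching rule above can be illustrated with a short sketch. The paths are hypothetical, and Python's `fnmatch` is used here only to emulate the wildcard semantics; LogListener performs the real matching itself:

```python
import fnmatch
import os

def matches_collection_path(path: str, dir_prefix: str, file_pattern: str) -> bool:
    """Emulate the [directory prefix]/**/[file name] rule: the file may sit in
    the prefix directory itself or in any of its subdirectories."""
    directory, filename = os.path.split(path)
    # The directory must be the prefix itself or a subdirectory under it.
    in_scope = directory == dir_prefix or directory.startswith(dir_prefix + "/")
    return in_scope and fnmatch.fnmatch(filename, file_pattern)

# The example from the text: prefix /cls/logs, file name access.log
print(matches_collection_path("/cls/logs/access.log", "/cls/logs", "access.log"))       # True
print(matches_collection_path("/cls/logs/2019/access.log", "/cls/logs", "access.log"))  # True (subdirectory)
print(matches_collection_path("/var/log/access.log", "/cls/logs", "access.log"))        # False
```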
Select an existing server group, and associate it with the current log topic. Then, LogListener will monitor the log files in this server group according to the rules you set. You may bind a log topic to multiple server groups, but a log file will only be collected into one log topic.
CLS supports various log parsing modes such as full text in a single line, separator, JSON, and full regex. The following log sample uses the separator mode (for more information, please see Separator Format).
Tue Jan 22 14:49:45 2019;download;success;194;a31f28ad59434528660c9076517dc23b
With `;` as the separator, this log is parsed into 5 fields: `Tue Jan 22 14:49:45 2019`, `download`, `success`, `194`, and `a31f28ad59434528660c9076517dc23b`. The keys defined for these 5 fields are `time`, `action`, `status`, `size`, and `hashcode` respectively. LogListener will then use this defined structure to collect data.
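The separator extraction can be sketched as follows. LogListener performs this parsing itself, so this is only an illustration; `hashcode` comes from the example, and the other key names are illustrative:

```python
SAMPLE_LOG = "Tue Jan 22 14:49:45 2019;download;success;194;a31f28ad59434528660c9076517dc23b"
KEYS = ["time", "action", "status", "size", "hashcode"]  # one key per extracted field

def parse_separator_log(line: str, sep: str = ";") -> dict:
    """Split a log line on the separator and pair each field with its key."""
    values = line.split(sep)
    return dict(zip(KEYS, values))

parsed = parse_separator_log(SAMPLE_LOG)
print(parsed["action"])    # download
print(parsed["hashcode"])  # a31f28ad59434528660c9076517dc23b
```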
CLS offers a log search and analysis feature based on segment indexing. We currently offer two index types, full-text index and key-value index. They can be managed on the index configuration tab in the log topic management page. Both indexes can be enabled at the same time.
| Index Type | Description |
|---|---|
| Full-text index | Breaks a full log into segments by delimiter and executes keyword queries against the segments |
| Key-value index | Breaks a full log into key-value pairs according to the collection configuration and executes field queries against those pairs |
Here we use the key-value index as an example to describe how to configure indexes. On the log topic management page, go to the Index Configuration tab, click Edit, and toggle on Index Status. Then toggle on Key-Value Index and click Add to add keys, selecting a field type for each key; `text`, `long`, and `double` are currently supported. The `text` type allows you to specify delimiters, which separate a character string into segments. Continuing the above example, enter `hashcode` as a key-value index and set its field type to `text`.
Once the index rule is enabled, indexes will be created for new input data accordingly, and stored over a specified period of time depending on your configured storage cycle. Only logs for which indexes have been created can be queried for analysis. Therefore, modifying an index rule or disabling an index only affects new input data. Unexpired legacy data will still be searchable.
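The difference between the two index types can be sketched conceptually. Delimiters and field names below are illustrative, and CLS builds the real indexes server-side:

```python
import re

LOG = "Tue Jan 22 14:49:45 2019;download;success;194;a31f28ad59434528660c9076517dc23b"

def fulltext_segments(log: str, delimiters: str = ";: ") -> set:
    """Full-text index: break the whole log into segments by delimiters,
    then answer keyword queries against the segment set."""
    return {seg for seg in re.split(f"[{re.escape(delimiters)}]+", log) if seg}

def keyvalue_fields(log: str) -> dict:
    """Key-value index: parse the log into named fields first,
    then answer field queries such as status:success."""
    keys = ["time", "action", "status", "size", "hashcode"]
    return dict(zip(keys, log.split(";")))

print("success" in fulltext_segments(LOG))          # keyword query on any segment
print(keyvalue_fields(LOG)["status"] == "success")  # field query: status equals "success"
```

A keyword query matches the term wherever it appears in the log, while a field query only matches when the term is the value of that specific key.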
With CLS, you can ship data to COS or CKafka to store logs for a longer period at a lower cost. Moreover, you can analyze big data logs offline.
To ship logs to COS, you can perform the following steps:
After a shipping task is created, CLS asynchronously ships data to the destination bucket. To view the shipping status, click the desired log topic and then open the Ship to COS tab. Alternatively, click Shipping Task in the left sidebar of the console.
Only logs generated after the shipping task is configured can be shipped.
To ship logs to CKafka, you can perform the following steps:
Currently, CLS supports shipping original logs and JSON-formatted logs. To view the shipping status, you can click the consumed log topic and then select the Ship to CKafka tab.
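The two shipping formats can be sketched for the sample log above. The field names are the ones used in the earlier separator example, and the exact envelope CKafka receives is defined by CLS, so this only illustrates the shape of each format:

```python
import json

# Structured fields from the earlier separator example.
fields = {
    "time": "Tue Jan 22 14:49:45 2019",
    "action": "download",
    "status": "success",
    "size": "194",
    "hashcode": "a31f28ad59434528660c9076517dc23b",
}

# Original log: the raw line exactly as collected.
original = ";".join(fields.values())

# JSON-formatted log: the structured fields serialized as one JSON object.
json_line = json.dumps(fields)

print(original)
print(json_line)
```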