CLS

Last updated: 2022-06-07 14:44:13
This document is currently invalid. Please refer to the documentation page of the product.

    Overview

    DataHub provides data distribution capabilities: you can distribute CKafka data to CLS (Cloud Log Service) for troubleshooting business problems, monitoring metrics, and performing security audits.

    Prerequisites

    Currently, this feature depends on the SCF and CLS services, which must be activated before you proceed.

    Directions

    1. Log in to the CKafka console.
    2. Click Data Distribution on the left sidebar, select the region, and click Create Task.
    3. Select EventBridge as the Target Type and click Next.
    Note:

    Before using SCF and EventBridge to process data, you need to read and agree to the SCF Description and Billing Overview.

    4. On the Configure Task page, enter the task details.
      • Task Name: It can only contain letters, digits, underscores, or the symbols "-" and ".".
      • CKafka Instance: Select the source CKafka instance.
      • Source Topic: Select the source topic.
      • Delivery Target: Select CLS.
      • Starting Position: Select the topic offset from which historical messages are consumed when the dump starts.
      • Logset: Select a logset. A logset is a project management unit in CLS and is used to distinguish between logs from different projects.
      • Log Topic: You can select Auto-create log topic or Select from existing log topics. One logset can contain multiple log topics, and one log topic corresponds to one type of application or service. We recommend you collect similar logs from different machines into the same log topic.
      • Role Authorization: You need to grant a third-party role permission to access EventBridge.
    5. Click Submit.
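    The task-name rule above can be checked locally before you submit the task. A minimal sketch in Python (the exact server-side validation may differ; the pattern below only encodes the stated rule of letters, digits, underscores, hyphens, and periods):

    ```python
    import re

    # Characters permitted in a task name per the rule above:
    # letters, digits, underscores, hyphens, and periods.
    TASK_NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]+$")

    def is_valid_task_name(name: str) -> bool:
        """Return True if the name uses only the permitted characters."""
        return bool(TASK_NAME_PATTERN.match(name))

    print(is_valid_task_name("ckafka-to-cls_v1.0"))  # True
    print(is_valid_task_name("bad name!"))           # False (space and "!")
    ```

    Running this check in a deployment script avoids a round trip to the console when a generated task name contains an illegal character.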

    Viewing monitoring data

    1. Log in to the CKafka console.
    2. Click Data Distribution on the left sidebar and click the ID of the target task to enter its basic information page.
    3. At the top of the task details page, click Monitoring, select the resource to be viewed, and set the time range to view the corresponding monitoring data.

    Restrictions and Billing

    • The dump speed is subject to the peak bandwidth limit of the CKafka instance. If consumption is too slow, check the peak bandwidth settings or increase the number of CKafka partitions.
    • The dump speed is also subject to a single-file size limit: a file exceeding 500 MB will be automatically split and uploaded in multiple parts.
    • This feature is provided based on the SCF service, which offers a free tier. For usage beyond the free tier, see the billing rules of SCF.
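    The 500 MB split limit above implies a simple part count for any given file. A quick sketch of the arithmetic (the part limit is the documented 500 MB; the file sizes are hypothetical examples):

    ```python
    import math

    PART_LIMIT_MB = 500  # documented single-file limit before a multipart split

    def part_count(file_size_mb: float) -> int:
        """Number of parts a file is split into under the 500 MB limit."""
        return max(1, math.ceil(file_size_mb / PART_LIMIT_MB))

    print(part_count(1200))  # a 1.2 GB file is uploaded in 3 parts
    print(part_count(300))   # under the limit: uploaded as a single file
    ```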