Over the Kafka protocol, you can consume the data collected by CLS in downstream big data components or data warehouses, such as self-built Kafka clusters, open-source ClickHouse, Hive, and Flink, as well as Tencent Cloud EMR and Oceanus.
Log in to the CLS console.
Click Log Topic on the left sidebar.
Click the ID/name of the log topic that needs to be consumed over the Kafka protocol to enter the log topic management page.
Click the Consumption over Kafka tab.
Click Edit on the right.
Set Current Status to On and click OK.
Construct the consumer based on the topic information provided by CLS.
Example: Construct Consumer.py in Python using the kafka-python library
import uuid
from kafka import KafkaConsumer, TopicPartition, OffsetAndMetadata

consumer = KafkaConsumer(
    # Enter the topic name shown on the consumption over Kafka page; this is the topic you will consume.
    'out-633a268c-XXXX-4a4c-XXXX-7a9a1a7baXXXX',
    group_id=uuid.uuid4().hex,
    auto_offset_reset='earliest',
    # Kafka protocol service address. Enter the service access information shown on the page.
    # For consumption over the public network, enter the public network service domain name + port;
    # for consumption over the private network, enter the private network service domain name + port.
    # This example uses the private network service.
    bootstrap_servers=['kafkaconsumer-ap-guangzhou.cls.tencentyun.com:9096'],
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism='PLAIN',
    # SASL username. Enter the logset ID of the log topic here.
    sasl_plain_username="ca5cXXXX-dd2e-4ac0-af12-92d4b677d2c6",
    # SASL password. Enter your SecretId#SecretKey string. Be sure to keep it confidential.
    sasl_plain_password="AKIDWrwkHYYHjvqhz1mHVS8YhXXXX#XXXXuXtymIXT0Lac",
    api_version=(0, 10, 0)
)
print('begin')
for message in consumer:
    print("Topic:[%s] Partition:[%d] Offset:[%d] Value:[%s]" % (message.topic, message.partition, message.offset, message.value))
print('end')
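The example above subscribes through a consumer group and lets the broker assign partitions. If you need finer control, the kafka-python client also supports assigning partitions and positioning offsets manually via TopicPartition (imported above but unused). The following is a minimal sketch that reuses the same placeholder topic, endpoint, and credentials as Consumer.py; these are not real values and must be replaced with your own. The file name ManualConsumer.py is hypothetical.

Example: Construct ManualConsumer.py in Python

import uuid
from kafka import KafkaConsumer, TopicPartition

# Same placeholder topic as above; replace with the topic name given by CLS.
TOPIC = 'out-633a268c-XXXX-4a4c-XXXX-7a9a1a7baXXXX'

consumer = KafkaConsumer(
    group_id=uuid.uuid4().hex,
    # Same placeholder private network service address and SASL credentials as above.
    bootstrap_servers=['kafkaconsumer-ap-guangzhou.cls.tencentyun.com:9096'],
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism='PLAIN',
    sasl_plain_username="ca5cXXXX-dd2e-4ac0-af12-92d4b677d2c6",
    sasl_plain_password="AKIDWrwkHYYHjvqhz1mHVS8YhXXXX#XXXXuXtymIXT0Lac",
    api_version=(0, 10, 0)
)

# Discover the topic's partitions, assign them explicitly, and start from the earliest offset.
partitions = [TopicPartition(TOPIC, p) for p in consumer.partitions_for_topic(TOPIC)]
consumer.assign(partitions)
consumer.seek_to_beginning(*partitions)

# poll() returns a dict that maps each assigned TopicPartition to a list of records.
records = consumer.poll(timeout_ms=5000)
for tp, messages in records.items():
    for message in messages:
        print("Partition:[%d] Offset:[%d] Value:[%s]" % (tp.partition, message.offset, message.value))
consumer.close()

Manual assignment bypasses consumer group rebalancing, which can be useful for one-off backfills or replaying a fixed offset range; for continuous consumption, the consumer group loop in Consumer.py is simpler. Both scripts require the kafka-python library (pip install kafka-python).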