Last updated: 2020-02-17 16:05:02
- How do I choose the number of replicas for a CKafka topic?
- How do I troubleshoot production and consumption errors when a client first connects to the message queue?
- If the message retention time is set to 1 minute, will accumulated messages be deleted immediately after 1 minute?
- Does CKafka support automatic topic creation (auto.create.topics.enable)?
- What should I do when a large number of messages accumulate in CKafka?
How do I choose the number of replicas for a CKafka topic?
We recommend selecting two or three replicas when creating a topic to ensure data reliability. CKafka currently prohibits the creation of single-replica topics. If your account still has a single-replica topic, we recommend migrating it as follows:
- Create a new topic with the same number of partitions and two replicas.
- Send newly produced messages to the new topic while continuing to consume the existing single-replica topic.
- Once the old topic has been fully consumed, update the consumer configuration to subscribe to the new topic.
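The migration order above can be sketched with a toy in-memory model. The topic names, partition count, and cluster representation here are all hypothetical illustrations, not CKafka APIs:

```python
# Toy in-memory model of the migration steps; "orders" and "orders-v2"
# are hypothetical topic names.
cluster = {"orders": {"partitions": 3, "replicas": 1}}  # existing single-replica topic

def create_replacement(cluster, old_topic, new_topic):
    """Step 1: create the new topic with the same partition count and two replicas."""
    cluster[new_topic] = {"partitions": cluster[old_topic]["partitions"], "replicas": 2}
    return cluster

create_replacement(cluster, "orders", "orders-v2")
# Steps 2-3: point producers at "orders-v2" right away, and move consumers
# over only after "orders" is fully drained, so no messages are lost.
```

The key design point is the ordering: producers switch first, consumers last, so the old topic is drained before anyone stops reading it.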
How do I troubleshoot production and consumption errors when a client first connects to the message queue?
- Check network connectivity with telnet (rule out network problems; confirm the client and the Kafka instance are in the same network environment).
- Confirm that the access point's VIP and port are configured correctly.
- Check whether the topic's IP whitelist is enabled; if it is, add the correct client IP to the whitelist.
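As a concrete form of the first check, a minimal TCP reachability probe (the standard-library equivalent of `telnet host port`) might look like the sketch below; the address in the comment is a placeholder, not a real endpoint:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout.
    A quick stand-in for a `telnet host port` reachability check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe your instance's access point (placeholder host and port):
# can_connect("your-ckafka-vip", 9092)
```

If this returns False, fix the network path (VPC, security group, routing) before looking at client configuration.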
If the message retention time is set to 1 minute, will accumulated messages be deleted immediately after 1 minute?
Not necessarily. Whether messages are deleted depends not only on the retention time setting but also on the volume of data produced.
The smallest unit CKafka deletes is a partition-level file shard. The current shard size is 1 GB, and a shard is not deleted until it is full and has rolled over. For example, with 10 partitions, if less than 10 GB in total is produced within the minute, no shard rolls over, so nothing is deleted.
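Under this shard model, whether retention can delete anything reduces to whether any 1 GB shard has filled and rolled. A rough sketch, assuming production is spread evenly across partitions:

```python
SEGMENT_BYTES = 1 << 30  # 1 GB file shard, as described above

def deletable_segments(produced_bytes, partitions):
    """Full (rolled) shards per partition that retention is allowed to delete.
    The active, still-filling shard is never deleted."""
    per_partition = produced_bytes // partitions
    return per_partition // SEGMENT_BYTES

# 10 partitions, 5 GB produced in total: no shard has rolled, so even a
# 1-minute retention setting deletes nothing yet.
```

This is why a very short retention time does not guarantee prompt deletion on low-throughput topics.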
Does CKafka support automatic topic creation (auto.create.topics.enable)?
Currently, CKafka does not expose the open-source switch for automatically creating topics. We recommend using the standard CreateTopic API to create topics instead.
- Topics created through the CreateTopic API also count against your instance's topic and partition quotas, so pay attention to those limits.
- A topic created this way inherits the partition and replica counts you configured for the instance.
- To prevent accidental creation of too many topics, this API is rate-limited.
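A minimal sketch of assembling a CreateTopic request body is shown below. The field names (InstanceId, TopicName, PartitionNum, ReplicaNum) are assumptions based on the CreateTopic action and should be verified against the current API reference before use:

```python
def build_create_topic_request(instance_id, topic_name, partition_num, replica_num=2):
    """Assemble a CreateTopic request body (field names are assumptions;
    check the CreateTopic API reference). Rejects single-replica topics,
    which CKafka no longer allows."""
    if replica_num < 2:
        raise ValueError("single-replica topics are not allowed")
    return {
        "InstanceId": instance_id,
        "TopicName": topic_name,
        "PartitionNum": partition_num,
        "ReplicaNum": replica_num,
    }
```

The request would then be sent through the standard cloud API signing flow or an SDK; that part is omitted here.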
What should I do when a large number of messages accumulate in CKafka?
CKafka uses exactly the same mechanisms and principles as open-source Kafka. You can troubleshoot by following these steps:
- Determine how many consumers your business is running.
- If consumer throughput is the bottleneck, simply add more consumers.
- If the number of consumers has already reached the useful maximum (≥ the number of partitions), increase the topic's partition count. You can submit a ticket to apply for the whitelist, and our backend team will review it and help increase the number of partitions.
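The decision in the last two steps can be summarized as follows. This is a simplification: it only compares counts and ignores per-consumer throughput tuning:

```python
def scaling_advice(consumers, partitions):
    """Consumers beyond the partition count sit idle, so once the consumer
    count reaches the partition count, growing partitions is the only way
    to add parallelism."""
    if consumers < partitions:
        return "add consumers"
    return "request more partitions"
```

For example, with 6 partitions and 2 consumers the first lever is simply adding consumers; with 6 consumers already assigned, only more partitions help.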