By default, no ACLs are set for a topic, and any client in the same VPC can access it without restriction. If you want to control access within the VPC, you can configure an ACL as instructed in Configuring ACL Policy.
When you add an ACL policy to a topic, the policy rejects all requests that do not match it, including those initiated by other Tencent Cloud services connected to CKafka (e.g., log shipping in CLS, message dumping in SCF, and component consumption in EMR).
From a business perspective, this is the intended behavior: once an ACL is set, clients that do not meet its requirements must not be able to access Kafka data, so the rejection is by design.
Therefore, before adding an ACL policy to a topic, check the service and monitoring information in the console to determine whether the topic is also used by other scenarios; otherwise, the linked features may stop working.
If you must use an ACL policy in such cases, we recommend producing messages to a new topic and granting permissions on that topic, rather than reusing the original one.
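For reference, in open-source Kafka an equivalent restriction can be granted with the `kafka-acls.sh` tool. The sketch below uses a hypothetical broker address, user, and topic name; in CKafka itself, ACLs are configured through the console as described above.

```shell
# Sketch only: broker address, principal, and topic are hypothetical placeholders.
# Allow only User:app-producer to write to the new topic "orders-acl";
# once any ACL exists for the topic, all non-matching requests are rejected.
kafka-acls.sh --bootstrap-server ckafka-broker:9092 \
  --add --allow-principal User:app-producer \
  --operation Write --topic orders-acl
```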
We recommend you select two or three replicas for data storage when creating a topic to ensure data reliability. A new topic has two replicas by default; if your business requires higher availability, select three. If you need more replicas, you can submit a ticket for assistance. You can set the number of replicas when creating the topic.
To improve data security, CKafka currently prohibits the creation of single-replica topics. If your account has a single-replica topic, migrate it as follows:
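For reference, when creating a topic with the open-source Kafka CLI, the replica count is set with `--replication-factor`. The broker address and topic name below are hypothetical placeholders.

```shell
# Sketch only: create a topic with 6 partitions and 2 replicas.
# Use --replication-factor 3 if your business requires higher availability.
kafka-topics.sh --bootstrap-server ckafka-broker:9092 \
  --create --topic my-topic \
  --partitions 6 --replication-factor 2
```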
Kafka supports two message timestamp types: CreateTime and LogAppendTime. The former is the time when the producer creates the message, and the latter is the time when the broker receives it. If a client produces messages with invalid timestamps, time-based data deletion on the broker will be affected.
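If client-side timestamps cannot be trusted, one option in open-source Kafka is to switch the topic to LogAppendTime so that the broker stamps each message on arrival. The broker address and topic name below are hypothetical placeholders.

```shell
# Sketch only: have the broker assign timestamps on receipt,
# so retention-based deletion is not skewed by bad producer clocks.
kafka-configs.sh --bootstrap-server ckafka-broker:9092 \
  --alter --entity-type topics --entity-name my-topic \
  --add-config message.timestamp.type=LogAppendTime
```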
A high number of topics (that is, a large total number of partitions) in Kafka compromises cluster performance and stability.
The number of partitions a server can sustain is limited, so hosting more partitions requires adding nodes, which incurs higher fees. Kafka's storage and coordination mechanisms work at the partition level: with too many partitions, storage fragmentation becomes severe, random writes on individual servers increase, leader switches across the cluster become less efficient, and controller failover slows down, all of which lower overall cluster performance and stability.
CKafka uses the partition as its allocation unit.
Total number of partitions = (partitions of topic A × replicas of topic A) + (partitions of topic B × replicas of topic B) + ... + (partitions of topic N × replicas of topic N).
Replicas are counted toward the total number of partitions. For example, if you create 1 topic with 6 partitions and 2 replicas per partition, you have a total of 12 partitions (6 × 2).
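The formula can be checked with simple shell arithmetic; the two topics below are hypothetical examples.

```shell
# Hypothetical cluster: topic-a has 6 partitions with 2 replicas each,
# topic-b has 3 partitions with 3 replicas each.
total=$(( 6 * 2 + 3 * 3 ))
echo "total partitions: $total"   # total partitions: 21
```

With the document's own example (one topic, 6 partitions, 2 replicas), the same arithmetic gives 6 × 2 = 12.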