TDMQ for CKafka

Selection Suggestion

Last updated: 2026-01-21 09:23:13

Selection Overview

Before purchasing a TDMQ for CKafka (CKafka) instance, weigh key factors such as price, performance, expected load, and business scenario to select the most suitable instance specification. CKafka instance types differ in specifications, performance capabilities, and how resources are calculated. This document describes the specification characteristics and calculation methods of each edition to help you choose quickly.

Product Form Selection

Product Forms

CKafka offers two product series, Serverful and Serverless, to meet the needs of different business scenarios.
For more information on product forms and detailed feature differences, see Capability Comparison.

Product Form Selection Recommendations

You can select from Serverful editions (Advanced Edition and Pro Edition) and Serverless Edition based on factors such as specification range, feature differences, and applicable scenarios.
Serverful - Advanced Edition
- Specification range:
  - Bandwidth: 20 MB/s–360 MB/s.
  - Partitions: 400–1,800; can be scaled out independently.
- Differentiated features:
  - Basic monitoring
  - Instance-level traffic throttling
  - Two availability zones (AZs)
  - No AZ migration support
  - Public network access (bandwidth not scalable)
- Applicable scenarios: small to medium business traffic, cost control, short-term testing and development.

Serverful - Pro Edition
- Specification range:
  - Bandwidth: 20 MB/s–100,000 MB/s.
  - Partitions: 400–6,000; can be scaled out independently.
- Differentiated features:
  - Basic and advanced monitoring
  - Prometheus monitoring
  - Instance-level and topic-level traffic throttling
  - Intelligent Ops
  - Deployment across up to 4 AZs
  - AZ migration support
  - Public network access (scalable bandwidth)
- Applicable scenarios: high business traffic, high stability requirements, fine-grained Ops management, stringent high availability requirements.

Serverless Edition
- Specification range:
  - Reserved bandwidth: production 90 MB/s–1,020 MB/s; consumption 30 MB/s–1,020 MB/s. The consumption-to-production ratio is capped at 3:1.
  - Actual available bandwidth: production 180 MB/s–2,040 MB/s; consumption 60 MB/s–2,040 MB/s. Read and write bandwidth can be configured independently.
  - Partitions (including replicas): 3,000. If you require more, contact us to request an increase.
- Differentiated features:
  - Elastic storage
  - Cross-AZ disaster recovery
  - Standard monitoring
  - One-click diagnosis
  - Instance-level and topic-level traffic throttling
  - Not currently supported: public network access, Event Center, transactions, idempotence, and compact topics
- Applicable scenarios: small to medium business traffic, high business flexibility, strong demand for auto scaling.
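The Serverless Edition limits above (reserved bandwidth ranges and the 3:1 consumption-to-production cap) can be sanity-checked before purchase. The following is a minimal sketch based only on the limits listed in this document; the function name is illustrative and is not part of any CKafka SDK.

```python
def validate_serverless_bandwidth(production_mbs: int, consumption_mbs: int) -> list:
    """Check a desired Serverless Edition reserved-bandwidth configuration
    against the published limits: production 90-1020 MB/s, consumption
    30-1020 MB/s, and a consumption-to-production ratio capped at 3:1."""
    problems = []
    if not 90 <= production_mbs <= 1020:
        problems.append("reserved production bandwidth must be 90-1020 MB/s")
    if not 30 <= consumption_mbs <= 1020:
        problems.append("reserved consumption bandwidth must be 30-1020 MB/s")
    if consumption_mbs > 3 * production_mbs:
        problems.append("consumption-to-production ratio exceeds 3:1")
    return problems

# 90 MB/s production allows at most 270 MB/s consumption under the 3:1 cap,
# so requesting 300 MB/s of consumption is rejected.
print(validate_serverless_bandwidth(90, 300))
```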

Specification Selection

After selecting an edition, you can select the appropriate specifications based on your business usage. The following are the main factors that affect the specification selection:
Bandwidth
Bandwidth is divided into write bandwidth and read bandwidth. The number of replicas must be included when calculating write bandwidth, but not read bandwidth.
For example, suppose your business generates 50 MB/s of total write traffic with three replicas and 200 MB/s of read traffic. The instance must then provide at least 150 MB/s of write bandwidth (50 x 3) and at least 200 MB/s of read bandwidth, so you should select a specification with a minimum bandwidth of 200 MB/s.

Storage
Consider the data write rate, the message retention period, and the number of replicas.
For example, with an average write rate of 50 MB/s, a retention period of 72 hours (3 days), and 3 replicas, the required storage is approximately 38,880 GB (50 MB/s x 3,600 s x 72 h x 3).

Number of partitions
Consider the impact of replicas.
For example, if your instance has 10 topics, each with 20 partitions and 3 replicas, the actual number of partitions used is 10 x 20 x 3 = 600.

Number of replicas
The replica count feeds into the bandwidth, storage, and partition-count assessments above. In addition, an excessively large number of partitions within a single cluster may cause cluster instability.

Number of topics
The number of topics you can create depends on the number of partitions purchased for the cluster. To create more topics, increase the number of partitions.

Cluster load
If you frequently use features that consume cluster CPU, such as transactional messages and compression, closely watch the cluster load metric in advanced monitoring. For details, see Use Cases of Cluster Capacity Planning.
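The three worked examples above (bandwidth, storage, and partition count) all follow the same pattern: multiply the raw business figure by the replica count. A minimal sketch of those calculations, using the numbers from this section; the function names are illustrative only.

```python
def required_write_bandwidth_mbs(write_traffic_mbs: float, replicas: int) -> float:
    # Write bandwidth must cover every replica of each message.
    return write_traffic_mbs * replicas

def required_storage_gb(write_mbs: float, retention_hours: int, replicas: int) -> float:
    # Storage = write rate x retention (in seconds) x replicas, converted MB -> GB.
    return write_mbs * 3600 * retention_hours * replicas / 1000

def used_partitions(topics: int, partitions_per_topic: int, replicas: int) -> int:
    # Every replica of every partition counts toward the instance's partition quota.
    return topics * partitions_per_topic * replicas

print(required_write_bandwidth_mbs(50, 3))   # 150.0 MB/s of write bandwidth
print(required_storage_gb(50, 72, 3))        # 38880.0 GB, roughly 38 TB
print(used_partitions(10, 20, 3))            # 600 partitions
```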

Must-Knows

Although CKafka supports scaling out bandwidth, storage, and the number of partitions, cluster stability is influenced by more than these dimensions. The following factors may also impact cluster stability; consider them when selecting specifications, and choose the specification that best matches your actual usage scenario:

High cluster TPS
Excessively high cluster TPS significantly consumes underlying server resources and greatly impacts cluster stability. Closely monitor cluster TPS and load status to determine whether a higher-specification cluster is required.

Whether compression is enabled
Enabling compression consumes server-side CPU for message verification, and different compression algorithms have different CPU costs. For details on compression algorithms, see Compressing Data.

Whether transactional messages are enabled
Enabling transactional messages impacts cluster CPU, memory, and network bandwidth because of the transaction coordinator. In addition, high-concurrency transactions can cause lock contention and delay message writes; cluster throughput may drop by approximately 30% to 50%. For more information, see Configuring Transactional Messages.

Whether message idempotence is enabled
Enabling message idempotence consumes cluster resources (CPU, memory, and network) as the cost of strong consistency for single-partition writes. It does not cover cross-session or cross-partition scenarios, and it increases overall cluster load.

Frequent consumption offset commits
Committing consumer offsets very frequently trades throughput for reliability and places significant pressure on the cluster's brokers. Evaluate overall cluster load to determine whether a higher-specification cluster is needed.
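Several of the factors above (compression, idempotence, and transactions) map directly to standard Apache Kafka producer configuration keys. The sketch below shows those settings as a plain configuration dict using librdkafka-style property names; the broker address and transactional ID are placeholders, and whether each setting is appropriate depends on your workload and the edition limits described earlier.

```python
# Standard Apache Kafka producer settings corresponding to the load factors
# above. Enabling these features increases CPU, memory, and network load on
# the cluster, so factor them into specification selection.
producer_config = {
    "bootstrap.servers": "ckafka-xxxx.example:9092",  # placeholder address
    "compression.type": "lz4",          # compression trades broker CPU for bandwidth
    "enable.idempotence": True,         # exactly-once per partition; extra broker state
    "transactional.id": "order-sync",   # placeholder ID; enables the transaction coordinator
    "acks": "all",                      # required when idempotence is enabled
}

# Quick consistency check: idempotence requires acks=all.
assert not producer_config["enable.idempotence"] or producer_config["acks"] == "all"
print(sorted(producer_config))
```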

