TDSQL-C PostgreSQL

Last updated: 2024-01-09 14:54:11

    Overview

    CKafka Connector offers data distribution capabilities. You can distribute CKafka data to TDSQL-C for PostgreSQL for further storage, query, and analysis.

    Prerequisites

    This feature currently relies on the TDSQL-C for PostgreSQL service, which must be activated first.

    Directions

    1. Log in to the CKafka console.
    2. Click Connector > Task Management > Task List on the left sidebar, select the region, and click Create Task.
    3. Enter the task name, select Data Distribution as the Task Type, select TDSQL-C for PostgreSQL as the Data Target Type, and click Next.
    4. Configure the data source information.
    Source Topic: Select the data source topic (a sketch of producing a compatible message follows this step).
    Elastic Topic: Select the created elastic topic. For more information, see Topic Management.
    CKafka Instance Topic: Select the created CKafka instance and topic. If the instance is configured with ACL policies, ensure that the selected topic has read/write permissions. For more information, see Creating Topic.
    Start Offset: Select the offset from which historical messages in the topic will be consumed.
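    For reference, the following is a minimal sketch of producing a compatible source message with the kafka-python client. The endpoint, topic name, and message fields are placeholders for illustration, not values from this document:

        import json

        from kafka import KafkaProducer  # assumes the kafka-python package is installed

        # Placeholder endpoint and topic name; replace with your own values.
        producer = KafkaProducer(
            bootstrap_servers="<your-ckafka-endpoint>:9092",
            value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        )
        # The message body is a JSON string in single-level (flat) format,
        # as required by the parsing step below.
        producer.send("source-topic", {"order_id": 1001, "status": "paid", "amount": 99.9})
        producer.flush()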
    5. Click Next and then click Preview Data; the first message in the specified Source Topic will be fetched and parsed.
    Note:
    Currently, message parsing must meet the following requirements:
    The message is a JSON string.
    The source data must be in single-level JSON format. To convert nested JSON into single-level JSON, see Data Processing Rule Description.
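    The exact conversion is defined by the data processing rules; the following is only a minimal Python sketch of the idea, assuming nested keys are joined with dots (an illustrative convention, not necessarily the connector's exact naming):

        import json

        def flatten(obj, parent_key="", sep="."):
            # Recursively flatten nested JSON objects into a single level;
            # nested keys are joined with `sep` (illustrative convention).
            items = {}
            for key, value in obj.items():
                new_key = f"{parent_key}{sep}{key}" if parent_key else key
                if isinstance(value, dict):
                    items.update(flatten(value, new_key, sep=sep))
                else:
                    items[new_key] = value
            return items

        nested = json.loads('{"id": 1, "user": {"name": "alice", "age": 30}}')
        print(json.dumps(flatten(nested)))
        # {"id": 1, "user.name": "alice", "user.age": 30}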
    6. (Optional) Enable Data Processing Rule. For more information, see Data Parsing.
    7. Click Next to configure the data target.
    Data Target: Select the created TDSQL-C for PostgreSQL connection.
    Database: Select the target database.
    Table: Select the target table.
    Database Sync Mode
    Default field match: This option applies only when both of the following are true:
    The source topic data is the binlog/row-level change data (insert, delete, or update) of a single table that CKafka Connector subscribed to from MySQL/PostgreSQL.
    The source topic data has a primary key and contains a schema.
    Field match one by one:
    Source Data: Click to pull the source topic data, then select the matching target table field for each message field one by one.
    Insertion Mode: INSERT and UPSERT are supported. If you select UPSERT, you must select a primary key; when an inserted row conflicts on it, the task updates all columns of the conflicting row except the primary key (see the sketch after this list).
    Upstream Data Format: JSON and Debezium are supported (a trimmed Debezium example also follows this list).
    Note:
    When the table structure of the upstream MySQL binlog/PostgreSQL row-level change data changes, the changes can be synced to the downstream PostgreSQL database.
    Handle Failed Message: Specify how failed messages are handled. You can select Retain, Discard, or Deliver to CLS (for this option, you must specify the target logset and log topic and grant access to CLS).
    Retain: Suitable for test environments. When a task fails, it is terminated without retries, and the cause of the failure is recorded in the event center.
    Discard: Suitable for production environments. When a task fails, the failed message is ignored. We recommend testing with the Retain mode first and then switching the task to the Discard mode for production.
    Deliver to CLS: Suitable for strict production environments. When a task fails, the failed message, its metadata, and the cause of the failure are uploaded to the specified CLS topic.
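    For reference, the UPSERT insertion mode behaves like PostgreSQL's INSERT ... ON CONFLICT ... DO UPDATE. The following is a minimal Python sketch with the psycopg2 driver; the connection string, the orders table, and its primary key id are hypothetical:

        import psycopg2  # assumes the psycopg2 driver is installed

        # Hypothetical connection details for illustration only.
        conn = psycopg2.connect("host=<tdsql-c-endpoint> port=5432 dbname=test user=postgres password=<password>")
        with conn, conn.cursor() as cur:
            # On a primary-key conflict, update every column of the
            # conflicting row except the primary key, mirroring UPSERT mode.
            cur.execute(
                """
                INSERT INTO orders (id, status, amount)
                VALUES (%s, %s, %s)
                ON CONFLICT (id)
                DO UPDATE SET status = EXCLUDED.status, amount = EXCLUDED.amount;
                """,
                (1001, "paid", 99.9),
            )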
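    For the Debezium upstream format, each record arrives wrapped in a schema/payload envelope. The trimmed insert event below is illustrative; the field names and values are hypothetical, and real events carry fuller schema and source sections:

        import json

        # Trimmed Debezium-style insert event ("op": "c") for illustration.
        event = {
            "schema": {"type": "struct", "name": "server1.public.orders.Envelope"},
            "payload": {
                "before": None,  # no prior row image for an insert
                "after": {"id": 1001, "status": "paid", "amount": 99.9},
                "op": "c",       # c = create, u = update, d = delete
                "ts_ms": 1704758051000,
            },
        }
        print(json.dumps(event))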
    8. Click Submit. The created task appears in the Task List, where you can check its creation progress in the status bar.