Scheme 2: Single-Producer Single-Consumer Migration

Last updated: 2021-09-17 11:24:33

    Overview

    This document describes how to use the single-producer single-consumer scheme to migrate data from a self-built Kafka cluster to a CKafka cluster.

    Prerequisites

    The prerequisite for guaranteeing message ordering is that only one consumer consumes the data at any given time. Therefore, the timing of the switchover is critical.

    Directions

    The single-producer single-consumer scheme is simple, clear, and easy to implement. However, between the time production is switched to the new cluster and the time the consumer is switched over, a certain amount of data will heap up in the new cluster.

    The migration steps are as follows:

    1. Switch the production flow so that the producer produces data to the CKafka instance.
      Set the IP in --broker-list to the access address of the CKafka instance, which you can copy from the Network column in the Access Mode section on the Instance Details page in the console, and change topicName to the name of the topic in the CKafka instance.
      ./kafka-console-producer.sh --broker-list xxx.xxx.xxx.xxx:9092 --topic topicName
      
    2. The original consumer requires no configuration changes; it can continue to consume the data in your self-built Kafka cluster until all remaining data is consumed.

    3. When the original consumer has consumed all the data, switch it to the new CKafka cluster for consumption with the following configuration (keep only one consumer consuming the data to guarantee message ordering). If you add a new consumer, set the IP in --bootstrap-server to the access address of the CKafka instance.

      Note:

      If the original consumer is a CVM instance, it can continue to consume the data.

      ./kafka-console-consumer.sh --bootstrap-server xxx.xxx.xxx.xxx:9092 --from-beginning --topic topicName --consumer.config ../config/consumer.properties
      
    4. After the switch, the consumer continues to consume the data in the CKafka cluster, and the migration is complete.
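    The ordering of the steps above can be sketched with in-memory queues standing in for the two clusters (a hypothetical illustration of the scheme's timing, not the Kafka API): production switches to the new cluster first, where messages heap up; the single consumer drains the old cluster, and only then moves to the new one, so overall message ordering is preserved.

```python
from queue import Queue

# Hypothetical stand-ins for a topic in each cluster.
old_cluster = Queue()  # self-built Kafka cluster
new_cluster = Queue()  # CKafka instance

# Before the switch: messages 1-3 were produced to the self-built cluster.
for i in (1, 2, 3):
    old_cluster.put(i)

# Step 1: production is switched; new messages go to the CKafka instance.
# They heap up there because no consumer reads them yet.
for i in (4, 5, 6):
    new_cluster.put(i)

consumed = []

# Step 2: the single consumer first drains the self-built cluster.
while not old_cluster.empty():
    consumed.append(old_cluster.get())

# Steps 3-4: only after the old cluster is empty does the same consumer
# switch to the CKafka instance and read the heaped-up messages.
while not new_cluster.empty():
    consumed.append(new_cluster.get())

print(consumed)  # ordering preserved: [1, 2, 3, 4, 5, 6]
```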
    Note:

    The above commands are for testing only. In actual business operations, simply change the broker address configured in the corresponding application and then restart the application.
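    In practice, changing the broker address often means rewriting the bootstrap.servers entry in a client configuration file such as consumer.properties. A minimal sketch (the file contents and addresses below are hypothetical examples, not values from your instance):

```python
import re

def switch_bootstrap_servers(config_text, new_address):
    """Replace the bootstrap.servers value in a properties-format config.

    config_text: contents of e.g. consumer.properties (hypothetical example).
    new_address: the CKafka instance's access address, in host:port form.
    """
    return re.sub(
        r"(?m)^(bootstrap\.servers\s*=\s*).*$",
        lambda m: m.group(1) + new_address,  # keep the key, swap the value
        config_text,
    )

# Hypothetical original config pointing at the self-built cluster.
old_config = "group.id=test-group\nbootstrap.servers=10.0.0.1:9092\n"

# Point the client at the CKafka instance, then restart the application.
new_config = switch_bootstrap_servers(old_config, "xxx.xxx.xxx.xxx:9092")
print(new_config)
```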