qyfftf opened a new pull request, #3906:
URL: https://github.com/apache/flink-cdc/pull/3906

   I've implemented support for reading data from MySQL once and writing it to multiple sink data sources, such as Paimon, Kafka, and StarRocks, which avoids collecting the same data from the source repeatedly.
   Pipeline YAML:
   ```yaml
   source:
     type: mysql
     name: MySQL Source
     hostname: 127.0.0.1
     port: 3306
     username: root
     password: 123456
     tables: test.order
     server-id: 5401-5404
     jdbc.properties.useSSL: false
   
   sink:
      - type: paimon
        name: Paimon Sink
        catalog.properties.metastore: filesystem
        catalog.properties.warehouse: /tmp/path/warehouse
      - type: kafka
        name: Kafka Sink
        properties.bootstrap.servers: PLAINTEXT://localhost:9092
      - type: starrocks
        jdbc-url: jdbc:mysql://127.0.0.1:9030
        load-url: 127.0.0.1:8030
        username: root
        password: ""
        table.create.properties.replication_num: 1
   
   route:
     - source-table: test.order
       sink-table: test.order
   
   pipeline:
     name: MySQL to Paimon Pipeline
     parallelism: 2
   ```
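   For context on the fan-out itself, here is a conceptual sketch (not the code in this PR): in the Flink DataStream API, attaching more than one sink to the same stream lets the source be read once while every target receives the records. The class name, element type, and `print` sinks below are illustrative placeholders standing in for the real CDC event stream and connector sinks.
   ```java
   import org.apache.flink.streaming.api.datastream.DataStream;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

   public class FanOutSketch {
       public static void main(String[] args) throws Exception {
           StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

           // Stand-in for the change-event stream produced by the MySQL source;
           // in the real pipeline this would be the CDC event stream.
           DataStream<String> events = env.fromElements("insert test.order#1", "update test.order#1");

           // Attaching several sinks to one DataStream means the records are
           // produced once and delivered to every target, instead of running
           // a separate source (and binlog reader) per sink.
           events.print("paimon-like-sink");
           events.print("kafka-like-sink");
           events.print("starrocks-like-sink");

           env.execute("fan-out sketch");
       }
   }
   ```
   In the YAML above, the same idea surfaces as a list under `sink:`, so the single `source` block feeds all three sink entries.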
   
   
   <img width="1611" alt="image" src="https://github.com/user-attachments/assets/00ff0811-d682-4cfe-b21e-18ca56bbf53f" />
   
   

