lvyanquan commented on code in PR #3605: URL: https://github.com/apache/flink-cdc/pull/3605#discussion_r1905042739
########## README.md: ##########

```diff
@@ -28,14 +28,25 @@ and elegance of data integration via YAML to describe the data movement and tran
 The Flink CDC prioritizes efficient end-to-end data integration and offers enhanced functionalities such as full database synchronization, sharding table synchronization, schema evolution and data transformation.
-
+
+### Quickstart Guide
+Flink CDC provides a CdcUp CLI utility to start a playground environment and run Flink CDC jobs.
+You will need to have a working Docker and Docker Compose environment to use it.
+
+1. Run `git clone https://github.com/apache/flink-cdc.git --depth=1` to retrieve a copy of the Flink CDC source code.
+2. Run `cd tools/cdcup/ && ./cdcup.sh init` to use the CdcUp tool to start a playground environment.
+3. Run `./cdcup.sh up` to initialize Docker containers, and `./cdcup.sh pipeline <YAML pipeline file>` to submit a pipeline job.
```

Review Comment:

Maybe we can split this into two or three steps, since users need to prepare their data before submitting a pipeline job, and it would be better to provide some guidance to help users enter the container and execute SQL, e.g.

```
docker-compose exec mysql mysql -uroot
```

like https://nightlies.apache.org/flink/flink-cdc-docs-release-3.2/docs/connectors/flink-sources/tutorials/build-real-time-data-lake-tutorial/#preparing-data-in-databases does.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
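For context, the `./cdcup.sh pipeline <YAML pipeline file>` step in the quoted quickstart takes a pipeline definition in Flink CDC's YAML format. A minimal sketch of such a file, assuming a MySQL source and a Doris sink; the hostnames, credentials, and table pattern are hypothetical and depend on the actual playground setup:

```yaml
# Hypothetical Flink CDC pipeline definition (sketch only).
# Container hostnames, credentials, and table patterns must match
# the environment that `cdcup.sh up` actually starts.
source:
  type: mysql
  hostname: mysql
  port: 3306
  username: root
  password: ""
  tables: app_db.\.*

sink:
  type: doris
  fenodes: doris:8030
  username: root
  password: ""

pipeline:
  name: Sync MySQL Database to Doris
  parallelism: 2
```

The reviewer's point is that the tables matched by `tables:` must already exist and contain data before the job is submitted, which is why a data-preparation step (e.g. entering the MySQL container and running SQL) belongs between `./cdcup.sh up` and `./cdcup.sh pipeline`.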
