Thanks leo65535!
It's a good idea. Using SQL to configure data synchronization jobs is indeed a good approach. Can you explain it in more detail? Do you plan to add SQL support to SeaTunnel, so that the user configures the synchronization job by writing SQL, and SeaTunnel is then responsible for parsing the SQL, generating a job, and selecting Flink or Spark to execute it?

leo65535 <[email protected]> wrote on Thu, May 19, 2022 at 10:15:

> Hi, everyone
>
> We know that there are many data transmission products, like Apache Flume,
> Apache Sqoop, Alibaba DataX, DTStack FlinkX, etc. We can see that more and
> more products support creating data transmission tasks through SQL
> configuration. So I want to raise a topic: let SeaTunnel focus on SQL. We
> can get a lot of benefits from it, and this will be more in line with the
> goals of the project, `Next-generation high-performance, distributed,
> massive data integration framework`.
>
> SQL is a language-integrated query that allows the composition of queries
> from relational operators such as selection, filter, and join in a very
> intuitive way. We can use catalog management to manage these SQL queries,
> instead of maintaining the API configuration.
>
> So I suggest that we create a new branch which focuses on SQL, like the
> api-draft branch; many features need to be developed quickly, like CDC,
> breakpoint continuation, metrics, catalog management, web UI, etc. The
> goal of the branch is `Data Transmission based on SQL`.
>
> If anyone is interested, looking forward to your ideas, thanks.
>
> Best,
> Leo65535

--
Best Regards
------------
Apache DolphinScheduler PMC
Jun Gao
[email protected]
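To make the proposal concrete, here is a minimal sketch of what a SQL-configured synchronization job might look like, written in the Flink-SQL style that products like FlinkX use. Everything here is hypothetical: the connector names, options, and table names are illustrative assumptions, not an existing SeaTunnel API.

```sql
-- Hypothetical sketch only: connector names ('jdbc', 'console') and all
-- options below are illustrative assumptions, not a defined SeaTunnel API.

-- Declare the source table the job reads from.
CREATE TABLE source_user (
  id   BIGINT,
  name STRING
) WITH (
  'connector'  = 'jdbc',
  'url'        = 'jdbc:mysql://localhost:3306/demo',
  'table-name' = 'user'
);

-- Declare the sink table the job writes to.
CREATE TABLE sink_user (
  id   BIGINT,
  name STRING
) WITH (
  'connector' = 'console'
);

-- The synchronization job itself: the engine (Flink or Spark) would parse
-- this statement and generate the data transmission pipeline.
INSERT INTO sink_user
SELECT id, name
FROM source_user
WHERE id > 0;
```

Under this model, SeaTunnel would parse the three statements, resolve the source and sink through catalog management, and submit the resulting job to the selected engine.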
