Hi @Leo65535, I totally agree with the SQL-oriented proposal, and I'm also working on enriching Flink SQL support in SeaTunnel.
Building a SQL system is a long-term effort that involves many things, and the most important question is how to run it. If we run it as a Flink/Spark job, we need SQL conversion between SeaTunnel SQL and Flink/Spark SQL. I think we can rely on the Flink/Spark SQL engines to do that work, and extend the SQL grammar if needed. We can also support more connectors and UDFs for the SQL, which might be the core work in SeaTunnel; a rough sketch of what such a SQL-configured task could look like follows the quoted proposal below.

Thanks,
Kelu

On Thu, May 19, 2022 at 10:15 AM leo65535 <[email protected]> wrote:

> Hi, everyone
>
> We know that there are many data transmission products, like Apache Flume, Apache Sqoop, Alibaba DataX, DTStack FlinkX, etc., and we can see that more and more of them support creating data transmission tasks through SQL configuration. So I want to raise the topic of letting SeaTunnel focus on SQL; we can get a lot of benefits from it, and this would be more in line with the goals of the project: `Next-generation high-performance, distributed, massive data integration framework`.
>
> SQL is a language-integrated query model that allows composing queries from relational operators such as selection, filter, and join in a very intuitive way. We can use catalog management to manage these SQL statements instead of maintaining API configuration.
>
> So, I suggest we create a new branch that focuses on SQL, like the api-draft branch; many features need to be developed quickly, such as CDC, resumable transfer, metrics, catalog management, web UI, etc. The goal of the branch is `Data Transmission based on SQL`.
>
> If anyone is interested, I'm looking forward to your ideas, thanks.
>
> Best,
> Leo65535

--
Hello, Find me here: www.legendtkl.com
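
To make the idea a bit more concrete, here is a minimal sketch of the kind of SQL-configured transmission task being discussed, written in Flink SQL style. The table names, connector identifiers, and connection options below are only illustrative placeholders, not a proposed SeaTunnel SQL syntax:

    -- hypothetical source table; connector and options are placeholders
    CREATE TABLE source_orders (
        id BIGINT,
        amount DECIMAL(10, 2),
        order_time TIMESTAMP(3)
    ) WITH (
        'connector' = 'mysql-cdc',
        'hostname' = 'localhost',
        'port' = '3306',
        'username' = 'reader',
        'password' = 'secret',
        'database-name' = 'shop',
        'table-name' = 'orders'
    );

    -- hypothetical sink table; connector and options are placeholders
    CREATE TABLE sink_orders (
        id BIGINT,
        amount DECIMAL(10, 2),
        order_time TIMESTAMP(3)
    ) WITH (
        'connector' = 'elasticsearch-7',
        'hosts' = 'http://localhost:9200',
        'index' = 'orders'
    );

    -- the transmission task itself is just an INSERT ... SELECT
    INSERT INTO sink_orders
    SELECT id, amount, order_time
    FROM source_orders;

In this shape, the catalog would manage the table definitions, and the per-job configuration reduces to the INSERT statement, which is roughly the direction the proposal describes.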
