[ https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100552#comment-17100552 ]
Gyula Fora commented on BAHIR-228:
----------------------------------

Hi [~lsy],

We have opened a PR with the Table/SQL support features: [https://github.com/apache/bahir-flink/pull/78]

You can build it locally and try it out already!

> Flink SQL supports kudu sink
> ----------------------------
>
>                 Key: BAHIR-228
>                 URL: https://issues.apache.org/jira/browse/BAHIR-228
>             Project: Bahir
>          Issue Type: New Feature
>          Components: Flink Streaming Connectors
>            Reporter: dalongliu
>            Priority: Major
>
> Currently, with Flink 1.10.0, we can use the catalog to store our stream
> table sinks. There should be a Kudu table sink that we can register in the
> catalog, so that Kudu can be used as a table in the SQL environment.
>
> A Kudu table sink could be used like this:
> {code:java}
> KuduOptions options = KuduOptions.builder()
>     .setKuduMaster(kuduMaster)
>     .setTableName(kuduTable)
>     .build();
> KuduWriterOptions writerOptions = KuduWriterOptions.builder()
>     .setWriteMode(KuduWriterMode.UPSERT)
>     .setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND)
>     .build();
> KuduTableSink tableSink = KuduTableSink.builder()
>     .setOptions(options)
>     .setWriterOptions(writerOptions)
>     .setTableSchema(schema)
>     .build();
> tEnv.registerTableSink("kudu", tableSink);
> tEnv.sqlUpdate("insert into kudu select * from source");
> {code}
>
> I have used this Kudu table sink to sync data in my company's production
> environment; in upsert mode it reaches a write speed of around 50,000
> records per second ("5w/s" in the original).

--
This message was sent by Atlassian Jira (v8.3.4#803005)
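The configuration style proposed in the quoted snippet is a fluent builder: each `set...` call returns the builder so calls can be chained, and `build()` produces an immutable options object. The following is a minimal, self-contained sketch of that pattern; the class and method names (`setKuduMaster`, `setTableName`, etc.) are modeled on the snippet above and are assumptions for illustration, not the actual Bahir Kudu connector API.

```java
// Hypothetical sketch of the builder-style options object shown in the
// issue description. Names mirror the snippet; this is NOT the Bahir API.
public final class KuduOptionsSketch {

    private final String kuduMaster;
    private final String tableName;

    private KuduOptionsSketch(String kuduMaster, String tableName) {
        this.kuduMaster = kuduMaster;
        this.tableName = tableName;
    }

    public String kuduMaster() { return kuduMaster; }
    public String tableName() { return tableName; }

    public static Builder builder() { return new Builder(); }

    public static final class Builder {
        private String kuduMaster;
        private String tableName;

        public Builder setKuduMaster(String kuduMaster) {
            this.kuduMaster = kuduMaster;
            return this; // returning `this` enables the fluent chain
        }

        public Builder setTableName(String tableName) {
            this.tableName = tableName;
            return this;
        }

        public KuduOptionsSketch build() {
            // Validate required fields once, at build time
            if (kuduMaster == null || tableName == null) {
                throw new IllegalStateException(
                        "kuduMaster and tableName are required");
            }
            return new KuduOptionsSketch(kuduMaster, tableName);
        }
    }

    public static void main(String[] args) {
        KuduOptionsSketch options = KuduOptionsSketch.builder()
                .setKuduMaster("localhost:7051")   // assumed example address
                .setTableName("impala::default.t") // assumed example table
                .build();
        System.out.println(options.kuduMaster() + " -> " + options.tableName());
    }
}
```

The design choice worth noting is that validation happens in `build()` rather than in each setter, so a partially configured builder can be passed around and completed later, while the finished options object is immutable and always valid.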