Gyula Fora commented on BAHIR-228:

cc [~mbalassi] 

Thanks for opening this Jira ticket. We have been working on complete Table/SQL 
API support for the Kudu connector, including some refactorings and other 
improvements. We have already started a discussion on the mailing list, and a 
PR should follow in the next couple of days :)

> Flink SQL supports kudu sink
> ----------------------------
>                 Key: BAHIR-228
>                 URL: https://issues.apache.org/jira/browse/BAHIR-228
>             Project: Bahir
>          Issue Type: New Feature
>          Components: Flink Streaming Connectors
>            Reporter: dalongliu
>            Priority: Major
> Currently, with Flink 1.10.0, we can use the catalog to store our stream table 
> sinks. There should be a Kudu table sink so that we can register it in the 
> catalog and use Kudu as a table in the SQL environment.
> We could use the Kudu table sink like this:
> {code:java}
> KuduOptions options = KuduOptions.builder() 
>             .setKuduMaster(kuduMaster) 
>             .setTableName(kuduTable) 
>             .build(); 
> KuduWriterOptions writerOptions = KuduWriterOptions.builder() 
>             .setWriteMode(KuduWriterMode.UPSERT) 
>             .setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND) 
>             .build(); 
> KuduTableSink tableSink = KuduTableSink.builder() 
>             .setOptions(options) 
>             .setWriterOptions(writerOptions) 
>             .setTableSchema(schema) 
>             .build(); 
> tEnv.registerTableSink("kudu", tableSink);  
> tEnv.sqlUpdate("insert into kudu select * from source");
> {code}
> I have used this Kudu table sink to sync data in my company's production 
> environment; the write throughput reached about 50,000 records/s in upsert mode.

This message was sent by Atlassian Jira
