[jira] [Commented] (BAHIR-228) Flink SQL supports kudu sink

2020-07-28 Thread Jira


[ https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17166625#comment-17166625 ]

João Boto commented on BAHIR-228:
-

Closing this as it's already merged.

> Flink SQL supports kudu sink
> 
>
> Key: BAHIR-228
> URL: https://issues.apache.org/jira/browse/BAHIR-228
> Project: Bahir
>  Issue Type: New Feature
>  Components: Flink Streaming Connectors
>Reporter: dalongliu
>Assignee: dalongliu
>Priority: Major
> Fix For: Flink-Next
>
>
> Currently, with Flink 1.10.0, we can use the catalog to store our stream table 
> sink for Kudu. There should be a Kudu table sink so that we can register it in 
> the catalog and use Kudu as a table in the SQL environment.
> We can use the Kudu table sink like this:
> {code:java}
> KuduOptions options = KuduOptions.builder()
>     .setKuduMaster(kuduMaster)
>     .setTableName(kuduTable)
>     .build();
> KuduWriterOptions writerOptions = KuduWriterOptions.builder()
>     .setWriteMode(KuduWriterMode.UPSERT)
>     .setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND)
>     .build();
> KuduTableSink tableSink = KuduTableSink.builder()
>     .setOptions(options)
>     .setWriterOptions(writerOptions)
>     .setTableSchema(schema)
>     .build();
> tEnv.registerTableSink("kudu", tableSink);
> tEnv.sqlUpdate("insert into kudu select * from source");
> {code}
> I have used the Kudu table sink to sync data in my company's production 
> environment; the write throughput is about 50,000 records/s in upsert mode.
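For readers skimming the archive, the builder-style configuration quoted above can be illustrated with a self-contained sketch. The classes below are simplified placeholders that only mirror the names used in the snippet (KuduOptions, KuduWriterOptions, the UPSERT/AUTO_FLUSH_BACKGROUND enum values); they are NOT the real bahir-flink connector API, and the master address and table name are hypothetical.

```java
// Simplified stand-ins mirroring the builder pattern in the snippet above.
// NOT the actual bahir-flink connector classes -- illustrative only.

enum KuduWriterMode { INSERT, UPSERT }
enum FlushMode { AUTO_FLUSH_SYNC, AUTO_FLUSH_BACKGROUND }

// Connection-level options: Kudu master address and target table name.
final class KuduOptions {
    final String kuduMaster;
    final String tableName;

    private KuduOptions(String kuduMaster, String tableName) {
        this.kuduMaster = kuduMaster;
        this.tableName = tableName;
    }

    static Builder builder() { return new Builder(); }

    static final class Builder {
        private String kuduMaster;
        private String tableName;
        Builder setKuduMaster(String m) { this.kuduMaster = m; return this; }
        Builder setTableName(String t) { this.tableName = t; return this; }
        KuduOptions build() { return new KuduOptions(kuduMaster, tableName); }
    }
}

// Writer-level options: upsert vs. insert semantics and session flush mode.
final class KuduWriterOptions {
    final KuduWriterMode writeMode;
    final FlushMode flushMode;

    private KuduWriterOptions(KuduWriterMode writeMode, FlushMode flushMode) {
        this.writeMode = writeMode;
        this.flushMode = flushMode;
    }

    static Builder builder() { return new Builder(); }

    static final class Builder {
        private KuduWriterMode writeMode;
        private FlushMode flushMode;
        Builder setWriteMode(KuduWriterMode m) { this.writeMode = m; return this; }
        Builder setFlushMode(FlushMode f) { this.flushMode = f; return this; }
        KuduWriterOptions build() { return new KuduWriterOptions(writeMode, flushMode); }
    }
}

public class KuduSinkSketch {
    public static void main(String[] args) {
        // Mirrors the configuration steps from the issue description,
        // with hypothetical connection values.
        KuduOptions options = KuduOptions.builder()
                .setKuduMaster("kudu-master:7051")
                .setTableName("events")
                .build();
        KuduWriterOptions writerOptions = KuduWriterOptions.builder()
                .setWriteMode(KuduWriterMode.UPSERT)
                .setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND)
                .build();
        System.out.println(options.tableName + " -> " + writerOptions.writeMode);
    }
}
```

The two-level split (connection options vs. writer options) matches the shape of the quoted snippet: connection details stay reusable while write semantics can vary per sink.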



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (BAHIR-228) Flink SQL supports kudu sink

2020-05-06 Thread Gyula Fora (Jira)


[ https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100552#comment-17100552 ]

Gyula Fora commented on BAHIR-228:
--

Hi [~lsy]

We have opened a PR with the Table/SQL support features: 
[https://github.com/apache/bahir-flink/pull/78]

You can build it locally and try it out already!
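A typical way to try the PR locally looks like the following. The PR number (78) comes from the link above; the local branch name is arbitrary, and the standard Maven build is assumed:

```shell
# Fetch the Table/SQL support PR branch from the bahir-flink repository
# and install the connector into the local Maven repository.
git clone https://github.com/apache/bahir-flink.git
cd bahir-flink
# Check out PR #78 under an arbitrary local branch name.
git fetch origin pull/78/head:table-sql-support
git checkout table-sql-support
# Build and install locally; tests skipped for a quick first build.
mvn clean install -DskipTests
```

After the install, the snapshot artifacts can be referenced from a local Flink job for testing.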



[jira] [Commented] (BAHIR-228) Flink SQL supports kudu sink

2020-05-05 Thread dalongliu (Jira)


[ https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100443#comment-17100443 ]

dalongliu commented on BAHIR-228:
-

OK, thanks. How soon can we see it?



[jira] [Commented] (BAHIR-228) Flink SQL supports kudu sink

2020-04-10 Thread Gyula Fora (Jira)


[ https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17080315#comment-17080315 ]

Gyula Fora commented on BAHIR-228:
--

Hi!
cc [~mbalassi] 

Thanks for opening this Jira ticket. We have been working on complete Table/SQL 
API support for the Kudu connector, including some refactorings and other 
improvements. We have started a discussion on the mailing list already, and a PR 
should follow in the next couple of days :)
