[jira] [Commented] (BAHIR-228) Flink SQL supports kudu sink

2020-05-05 Thread dalongliu (Jira)


[ 
https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100443#comment-17100443
 ] 

dalongliu commented on BAHIR-228:
-

OK, thanks. How soon can we expect this to be available?

> Flink SQL supports kudu sink
> 
>
> Key: BAHIR-228
> URL: https://issues.apache.org/jira/browse/BAHIR-228
> Project: Bahir
>  Issue Type: New Feature
>  Components: Flink Streaming Connectors
>Reporter: dalongliu
>Priority: Major
>
> Currently, with Flink 1.10.0, we can use the catalog to store our stream table 
> sinks. There should be a Kudu table sink so that we can register it in the 
> catalog and use Kudu as a table in the SQL environment.
> We could use the Kudu table sink like this:
> {code:java}
> KuduOptions options = KuduOptions.builder() 
> .setKuduMaster(kuduMaster) 
> .setTableName(kuduTable) 
> .build(); 
> KuduWriterOptions writerOptions = KuduWriterOptions.builder()  
> .setWriteMode(KuduWriterMode.UPSERT) 
> .setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND) 
> .build(); 
> KuduTableSink tableSink = KuduTableSink.builder() 
> .setOptions(options) 
> .setWriterOptions(writerOptions) 
> .setTableSchema(schema) 
> .build(); 
> tEnv.registerTableSink("kudu", tableSink);  
> tEnv.sqlUpdate("insert into kudu select * from source");
> {code}
> I have used a Kudu table sink to sync data in my company's production 
> environment; it reaches a write speed of about 50,000 records/s in upsert mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (BAHIR-228) Flink SQL supports kudu sink

2020-03-21 Thread dalongliu (Jira)
dalongliu created BAHIR-228:
---

 Summary: Flink SQL supports kudu sink
 Key: BAHIR-228
 URL: https://issues.apache.org/jira/browse/BAHIR-228
 Project: Bahir
  Issue Type: New Feature
  Components: Flink Streaming Connectors
Reporter: dalongliu


Currently, with Flink 1.10.0, we can use the catalog to store our stream table 
sinks. There should be a Kudu table sink so that we can register it in the 
catalog and use Kudu as a table in the SQL environment.

We could use the Kudu table sink like this:
{code:java}
KuduOptions options = KuduOptions.builder() .setKuduMaster(kuduMaster) 
.setTableName(kuduTable) .build(); KuduWriterOptions writerOptions = 
KuduWriterOptions.builder() .setWriteMode(KuduWriterMode.UPSERT) 
.setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND) .build(); KuduTableSink 
tableSink = KuduTableSink.builder() .setOptions(options) 
.setWriterOptions(writerOptions) .setTableSchema(schema) .build(); 
tEnv.registerTableSink("kudu", tableSink);  
tEnv.sqlUpdate("insert into kudu select * from source");
{code}
I have used a Kudu table sink to sync data in my company's production 
environment; it reaches a write speed of about 50,000 records/s in upsert mode.
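The options objects in the proposal follow a standard fluent builder pattern. As a minimal, self-contained sketch of that shape (the class and method names mirror the proposal, but this carries no Flink or Kudu dependency and is illustrative only):

```java
// Illustrative sketch of the proposed KuduOptions builder shape.
// Names are hypothetical; this is not the actual connector code.
class KuduOptions {
    private final String kuduMaster;
    private final String tableName;

    private KuduOptions(String kuduMaster, String tableName) {
        this.kuduMaster = kuduMaster;
        this.tableName = tableName;
    }

    public String getKuduMaster() { return kuduMaster; }
    public String getTableName() { return tableName; }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private String kuduMaster;
        private String tableName;

        public Builder setKuduMaster(String kuduMaster) {
            this.kuduMaster = kuduMaster;
            return this;  // return this so calls can be chained
        }

        public Builder setTableName(String tableName) {
            this.tableName = tableName;
            return this;
        }

        public KuduOptions build() {
            // Fail fast if a required option was never set
            if (kuduMaster == null || tableName == null) {
                throw new IllegalStateException("kuduMaster and tableName are required");
            }
            return new KuduOptions(kuduMaster, tableName);
        }
    }

    public static void main(String[] args) {
        KuduOptions options = KuduOptions.builder()
                .setKuduMaster("localhost:7051")
                .setTableName("metrics")
                .build();
        System.out.println(options.getKuduMaster() + "/" + options.getTableName());
    }
}
```

Making `build()` validate required fields keeps misconfiguration errors at construction time rather than at job submission.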





[jira] [Updated] (BAHIR-228) Flink SQL supports kudu sink

2020-03-21 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu updated BAHIR-228:

Description: 
Currently, with Flink 1.10.0, we can use the catalog to store our stream table 
sinks. There should be a Kudu table sink so that we can register it in the 
catalog and use Kudu as a table in the SQL environment.

We could use the Kudu table sink like this:
{code:java}
KuduOptions options = KuduOptions.builder() 
.setKuduMaster(kuduMaster) 
.setTableName(kuduTable) 
.build(); 
KuduWriterOptions writerOptions = KuduWriterOptions.builder()  
.setWriteMode(KuduWriterMode.UPSERT) 
.setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND) 
.build(); 
KuduTableSink tableSink = KuduTableSink.builder() 
.setOptions(options) 
.setWriterOptions(writerOptions) 
.setTableSchema(schema) 
.build(); 
tEnv.registerTableSink("kudu", tableSink);  
tEnv.sqlUpdate("insert into kudu select * from source");
{code}
I have used a Kudu table sink to sync data in my company's production 
environment; it reaches a write speed of about 50,000 records/s in upsert mode.








[jira] [Updated] (BAHIR-228) Flink SQL supports kudu sink

2020-03-21 Thread dalongliu (Jira)


 [ 
https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dalongliu updated BAHIR-228:

Description: 
Currently, with Flink 1.10.0, we can use the catalog to store our stream table 
sinks. There should be a Kudu table sink so that we can register it in the 
catalog and use Kudu as a table in the SQL environment.

We could use the Kudu table sink like this:
{code:java}
KuduOptions options = KuduOptions.builder() 
.setKuduMaster(kuduMaster) 
.setTableName(kuduTable) 
.build(); 
KuduWriterOptions writerOptions = KuduWriterOptions.builder()  
.setWriteMode(KuduWriterMode.UPSERT) 
.setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND) 
.build(); 
KuduTableSink tableSink = KuduTableSink.builder() 
.setOptions(options) 
.setWriterOptions(writerOptions) 
.setTableSchema(schema) 
.build(); 
tEnv.registerTableSink("kudu", tableSink);  
tEnv.sqlUpdate("insert into kudu select * from source");
{code}
I have used a Kudu table sink to sync data in my company's production 
environment; it reaches a write speed of about 50,000 records/s in upsert mode.








[jira] [Created] (BAHIR-246) Kudu Flink Table API connector support DynamicTableSource

2020-09-14 Thread dalongliu (Jira)
dalongliu created BAHIR-246:
---

 Summary: Kudu Flink Table API connector support 
DynamicTableSource
 Key: BAHIR-246
 URL: https://issues.apache.org/jira/browse/BAHIR-246
 Project: Bahir
  Issue Type: Improvement
  Components: Flink Streaming Connectors
Reporter: dalongliu


In Flink 1.11, the community refactored the Table API & SQL connector stack and 
introduced the new DynamicTableSource & DynamicTableSink interfaces, which fit 
Flink's dynamic table concept better; the old interfaces will be deprecated in 
the future. In [BAHIR-241|https://issues.apache.org/jira/browse/BAHIR-241], we are 
upgrading Flink to 1.11.1; after that, I think we can support the new 
DynamicTableSource & DynamicTableSink interfaces in the Kudu SQL connector.
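With a DynamicTableSink-based connector, the sink would be discovered through a factory identifier and declared entirely in DDL rather than registered programmatically. A sketch of how that usage could look (the connector identifier and option keys below are illustrative placeholders, not a committed API):

```sql
-- Hypothetical DDL once the Kudu connector implements DynamicTableSink;
-- option names are placeholders, not the final connector options.
CREATE TABLE kudu_sink (
  id BIGINT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'kudu',
  'kudu.masters' = 'localhost:7051',
  'kudu.table' = 'metrics'
);

INSERT INTO kudu_sink SELECT id, name FROM source;
```

The declared primary key would let the planner drive upsert semantics, which matches the UPSERT write mode the current builder-based sink exposes.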


