[ https://issues.apache.org/jira/browse/FLINK-14868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995265#comment-16995265 ]

ShenDa edited comment on FLINK-14868 at 12/13/19 1:32 AM:
----------------------------------------------------------

[~rmetzger] Thanks for your reply, it's a nice idea to implement a custom 
sink that wraps a series of sinks to write data together. But if I want to 
use the Table API & SQL to sink data into different targets in order, is 
there a good way to achieve this?
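
For reference, here is a rough sketch of that wrapping idea on the DataStream 
side, assuming a hypothetical SerialSink that delegates to an ordered list of 
sinks and stops forwarding a record as soon as one delegate throws (the class 
name and structure are only illustrative, not an existing Flink API):

{code:java}
import java.util.List;

import org.apache.flink.api.common.functions.RichFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

/**
 * Illustrative only: forwards each record to the wrapped sinks in order.
 * If one sink throws, the remaining sinks are not invoked for that record
 * and the exception fails the task as usual.
 */
public class SerialSink<T> extends RichSinkFunction<T> {

    private final List<SinkFunction<T>> delegates;

    public SerialSink(List<SinkFunction<T>> delegates) {
        this.delegates = delegates;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // Propagate the runtime context and open any rich delegates.
        for (SinkFunction<T> sink : delegates) {
            if (sink instanceof RichFunction) {
                ((RichFunction) sink).setRuntimeContext(getRuntimeContext());
                ((RichFunction) sink).open(parameters);
            }
        }
    }

    @Override
    public void invoke(T value, Context context) throws Exception {
        // Serial semantics: the next sink only sees the record after the
        // previous sink has returned without throwing.
        for (SinkFunction<T> sink : delegates) {
            sink.invoke(value, context);
        }
    }

    @Override
    public void close() throws Exception {
        for (SinkFunction<T> sink : delegates) {
            if (sink instanceof RichFunction) {
                ((RichFunction) sink).close();
            }
        }
    }
}
{code}

Usage would then look something like 
stream.addSink(new SerialSink<>(Arrays.asList(hbaseSink, elasticsearchSink))). 
Note that this only gives per-record ordering inside a single operator; it does 
not forward CheckpointedFunction/CheckpointListener callbacks that real 
connector sinks implement, and it does not make the writes atomic across 
systems, so after a failure and restart the earlier sinks may see the same 
record again and would still need to be idempotent or transactional.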



> Provides the ability for multiple sinks to write data serially
> --------------------------------------------------------------
>
>                 Key: FLINK-14868
>                 URL: https://issues.apache.org/jira/browse/FLINK-14868
>             Project: Flink
>          Issue Type: Wish
>          Components: API / DataStream, Table SQL / Runtime
>    Affects Versions: 1.9.1
>            Reporter: ShenDa
>            Priority: Major
>
> At present, Flink can use multiple sinks to write data into different data 
> sources such as HBase, Kafka, Elasticsearch, etc., and this process is 
> concurrent; in other words, one record is written to all data sources 
> simultaneously.
> But there is no way to sink data serially. We really wish Flink could 
> provide this kind of ability, so that a sink writes data into its target 
> database only after the previous sink has transferred the data successfully, 
> and if the previous sink encounters any exception, the next sink does not run.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
