[ https://issues.apache.org/jira/browse/FLINK-14868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ShenDa resolved FLINK-14868.
----------------------------
    Resolution: Resolved

We implemented a TableSink that wraps the other sinks which need to be 
written to serially.
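
A minimal sketch of the general idea (not the actual class from this resolution): a wrapper sink that holds an ordered list of delegate sinks and writes each record to them one after another, so a later sink only receives the record once every earlier sink has returned without throwing. The class name `SerialChainSink` and its constructor are hypothetical.

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.flink.streaming.api.functions.sink.SinkFunction;

/**
 * Hypothetical sketch: wraps an ordered list of sinks and writes each
 * record to them serially. If one delegate throws, the exception
 * propagates and the remaining delegates are skipped for that record.
 */
public class SerialChainSink<T> implements SinkFunction<T> {

    private final List<SinkFunction<T>> delegates;

    @SafeVarargs
    public SerialChainSink(SinkFunction<T>... sinks) {
        this.delegates = Arrays.asList(sinks);
    }

    @Override
    public void invoke(T value, Context context) throws Exception {
        // Write serially: a later sink runs only after the earlier
        // ones have succeeded for this record.
        for (SinkFunction<T> sink : delegates) {
            sink.invoke(value, context);
        }
    }
}
{code}

A real implementation would also have to forward open()/close() and checkpointing calls to rich delegates such as the HBase, Kafka, or Elasticsearch sinks; this sketch only shows the per-record ordering.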

> Provides the ability for multiple sinks to write data serially
> --------------------------------------------------------------
>
>                 Key: FLINK-14868
>                 URL: https://issues.apache.org/jira/browse/FLINK-14868
>             Project: Flink
>          Issue Type: Wish
>          Components: API / DataStream, Table SQL / Runtime
>    Affects Versions: 1.9.1
>            Reporter: ShenDa
>            Priority: Major
>
> At present, Flink can use multiple sinks to write data into different data 
> sources such as HBase, Kafka, Elasticsearch, etc. This process is concurrent; 
> in other words, one record is written to all data sources simultaneously.
> But there is no approach for writing data serially. We would really like Flink 
> to provide the ability for a sink to write data into its target 
> database only after the previous sink has transferred the data successfully, and if the 
> previous sink encounters any exception, the next sink should not run.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
