ShenDa created FLINK-14868:
------------------------------

             Summary: Provides the ability for multiple sinks to write data 
serially
                 Key: FLINK-14868
                 URL: https://issues.apache.org/jira/browse/FLINK-14868
             Project: Flink
          Issue Type: Wish
          Components: API / DataStream, Table SQL / Runtime
    Affects Versions: 1.9.1
            Reporter: ShenDa
             Fix For: 1.9.2


At present, Flink can use multiple sinks to write data into different data 
sources such as HBase, Kafka, Elasticsearch, etc. This process is concurrent; in 
other words, one record is written to all data sources simultaneously.

But there is no approach for sinking data serially. We really wish Flink 
could provide this kind of ability: a sink writes data into its target 
database only after the previous sink has transferred the data successfully, and if the 
previous sink encounters any exception, the next sink does not run.
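To illustrate the requested semantics, here is a minimal sketch in plain Java. It is not an existing Flink API: the `Sink` interface stands in for Flink's `SinkFunction`, and the composite `SerialSink` class is hypothetical. It invokes its inner sinks one after another for each record, so a later sink runs only if every earlier sink succeeded.

```java
import java.util.List;

// Hypothetical sketch (not an existing Flink API): a composite sink that
// invokes its inner sinks one after another, aborting on the first failure.
public class SerialSinkSketch {

    // Minimal stand-in for a sink; real Flink sinks implement SinkFunction<T>.
    interface Sink<T> {
        void invoke(T value) throws Exception;
    }

    // Writes each record to the sinks in order; if one sink throws, the
    // remaining sinks are skipped and the exception propagates.
    static class SerialSink<T> implements Sink<T> {
        private final List<Sink<T>> sinks;

        SerialSink(List<Sink<T>> sinks) {
            this.sinks = sinks;
        }

        @Override
        public void invoke(T value) throws Exception {
            for (Sink<T> sink : sinks) {
                sink.invoke(value); // chain stops at the first exception
            }
        }
    }
}
```

For example, `new SerialSink<>(List.of(hbaseSink, kafkaSink))` would write a record to Kafka only after the HBase write succeeded; an exception from the HBase sink would skip the Kafka write entirely.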



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
