[
https://issues.apache.org/jira/browse/FLINK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yuan Mei closed FLINK-17916.
----------------------------
Resolution: Won't Do
> Provide API to separate KafkaShuffle's Producer and Consumer to different jobs
> ------------------------------------------------------------------------------
>
> Key: FLINK-17916
> URL: https://issues.apache.org/jira/browse/FLINK-17916
> Project: Flink
> Issue Type: Improvement
> Components: API / DataStream, Connectors / Kafka
> Affects Versions: 1.11.0
> Reporter: Yuan Mei
> Priority: Minor
> Labels: auto-deprioritized-major, auto-unassigned,
> pull-request-available
>
> Follow-up of FLINK-15670.
> *Separate the sink (producer) and the source (consumer) into different jobs*
> * Within a single job, the sink and the source are recovered independently under
> regional failover; however, they share the same checkpoint coordinator and,
> correspondingly, the same global checkpoint snapshot.
> * This means that if the consumer fails, the producer cannot commit the written
> data because of the two-phase commit set-up: the producer needs a
> checkpoint-complete signal to finish the second phase (see the sketch below).
> * The same applies when the producer fails.
>
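> A minimal sketch of the requested two-job setup, assuming the
> FlinkKafkaShuffle#writeKeyBy / #readKeyBy entry points added by FLINK-15670; the
> topic name, partition count, key selector, and Kafka properties below are
> illustrative only, and further shuffle-specific properties may be needed in practice:
> {code:java}
> import java.util.Properties;
>
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.api.java.functions.KeySelector;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.streaming.api.datastream.KeyedStream;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle;
>
> public class SeparateKafkaShuffleJobs {
>
>     private static final String TOPIC = "shuffle-topic";   // illustrative topic name
>     private static final int NUM_PARTITIONS = 4;           // illustrative partition count
>
>     private static final KeySelector<Long, Long> KEY_SELECTOR =
>             value -> value % NUM_PARTITIONS;
>
>     /** Producer-only job: writes the keyed stream into the Kafka shuffle topic. */
>     public static void producerJob(Properties kafkaProps) throws Exception {
>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>         env.enableCheckpointing(10_000);  // 2PC commits now follow this job's checkpoints only
>
>         DataStream<Long> source = env.generateSequence(0, 1_000_000);
>         FlinkKafkaShuffle.writeKeyBy(source, TOPIC, kafkaProps, NUM_PARTITIONS, KEY_SELECTOR);
>
>         env.execute("kafka-shuffle-producer");
>     }
>
>     /** Consumer-only job: reads the keyed stream back from the same topic. */
>     public static void consumerJob(Properties kafkaProps) throws Exception {
>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>         env.enableCheckpointing(10_000);  // independent checkpoint coordinator per job
>
>         KeyedStream<Long, Long> keyed = FlinkKafkaShuffle.readKeyBy(
>                 TOPIC, env, TypeInformation.of(Long.class), kafkaProps, KEY_SELECTOR);
>         keyed.sum(0).print();
>
>         env.execute("kafka-shuffle-consumer");
>     }
> }
> {code}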
--
This message was sent by Atlassian Jira
(v8.20.10#820010)