[
https://issues.apache.org/jira/browse/FLINK-15670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17040874#comment-17040874
]
Yuan Mei commented on FLINK-15670:
----------------------------------
Some updates:
I wrapped the shuffle producer and consumer into a `KafkaShuffle` prototype,
version 2:
[https://github.com/apache/flink/compare/master...curcur:kafka_shuffle?expand=1]
In this version, the shuffle can be used end to end as follows:
Produce data:
{code:java}
DataStream<...> source = ...;
KafkaShuffle.persistentKeyBy(source, topic, numberOfPartitions,
producerProperties, keyByFields);
{code}
Consume data:
{code:java}
KafkaShuffle.readKeyBy(environment, topic, readSchema, numberOfPartitions,
consumerProperties);
{code}
A complete usage example can be found in `KafkaSimpleITCase`. Any comments
on the wrapped API?
BTW, watermarks are not yet supported in this version.
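To illustrate the alignment idea behind this API: the producer side can pick the Kafka partition with the same key-group arithmetic Flink uses for keyBy, so the subtask consuming partition i receives exactly the keys it would own after a shuffle. The sketch below is not the prototype's code; the class and method names are illustrative, and a plain `hashCode` stands in for the murmur hash Flink applies in `KeyGroupRangeAssignment`.
{code:java}
// Illustrative sketch (assumed names, simplified hash): mirror Flink's
// key-group assignment when choosing a Kafka partition, so Kafka
// partitions line up one-to-one with downstream key-group ranges.
public class KeyAlignedPartitioner {

    // Map a key to a key group. Flink computes
    // murmurHash(key.hashCode()) % maxParallelism; a plain floorMod
    // over hashCode is used here to keep the sketch self-contained.
    static int assignToKeyGroup(Object key, int maxParallelism) {
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    // Map a key group to a subtask / Kafka partition index, following
    // the proportional formula Flink uses for operator index.
    static int keyGroupToPartition(int keyGroup, int maxParallelism,
                                   int numPartitions) {
        return keyGroup * numPartitions / maxParallelism;
    }

    // Producer-side choice: write each record to the partition owned by
    // the subtask that would receive this key after a keyBy.
    static int partitionFor(Object key, int maxParallelism,
                            int numPartitions) {
        int keyGroup = assignToKeyGroup(key, maxParallelism);
        return keyGroupToPartition(keyGroup, maxParallelism, numPartitions);
    }

    public static void main(String[] args) {
        int maxParallelism = 128, numPartitions = 4;
        // Every key must land in a valid partition [0, numPartitions).
        for (int k = 0; k < 1000; k++) {
            int p = partitionFor("key-" + k, maxParallelism, numPartitions);
            if (p < 0 || p >= numPartitions) {
                throw new AssertionError("bad partition " + p);
            }
        }
        System.out.println("all keys mapped into " + numPartitions
                + " partitions");
    }
}
{code}
Because both sides use the same deterministic mapping, the consumer can restore keyed state per partition without re-shuffling records.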
> Provide a Kafka Source/Sink pair that aligns Kafka's Partitions and Flink's
> KeyGroups
> -------------------------------------------------------------------------------------
>
> Key: FLINK-15670
> URL: https://issues.apache.org/jira/browse/FLINK-15670
> Project: Flink
> Issue Type: New Feature
> Components: API / DataStream, Connectors / Kafka
> Reporter: Stephan Ewen
> Priority: Major
> Labels: usability
> Fix For: 1.11.0
>
>
> This Source/Sink pair would serve two purposes:
> 1. You can read topics that are already partitioned by key and process them
> without partitioning them again (avoid shuffles)
> 2. You can use this to shuffle through Kafka, thereby decomposing the job
> into smaller jobs and independent pipelined regions that fail over
> independently.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)