[
https://issues.apache.org/jira/browse/FLINK-16481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17054914#comment-17054914
]
Aljoscha Krettek commented on FLINK-16481:
------------------------------------------
Do you have any suggestions for how to fix this? I think this comes mostly
from how Flink works with Kafka and is not easily addressable.
> Improved FlinkKafkaConnector support for dynamically increasing capacity
> ------------------------------------------------------------------------
>
> Key: FLINK-16481
> URL: https://issues.apache.org/jira/browse/FLINK-16481
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Kafka
> Reporter: likang
> Priority: Minor
> Attachments: Flink-Kafka-Connetor的改进.docx
>
>
>
> The source and sink of the community version of the Flink Kafka connector
> cannot sense Kafka metadata changes when a topic is dynamically scaled up
> or down (partitions added or removed).
> 1. When FlinkKafkaProducer writes data, the topic metadata map is not
> refreshed periodically; it is only populated the first time a task sends
> data, so partitions created later are never used (see the producer sketch
> below).
> 2. In FlinkKafkaConsumer, the current AbstractPartitionDiscoverer is buggy:
> the data-consuming thread and the discoverer thread use two separate
> KafkaConsumer objects, and a KafkaConsumer only updates its metadata when
> poll() is triggered, so the source cannot perceive metadata changes either
> (see the consumer sketch below).
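For reference, a minimal sketch of a partial producer-side mitigation, assuming the universal connector's FlinkKafkaProducer(String, SerializationSchema, Properties) constructor; the topic name and broker address are placeholders. The underlying KafkaProducer refreshes its cluster metadata at most every metadata.max.age.ms (default 300000 ms), so lowering that interval makes newly created partitions visible to the client sooner. Note this does not help when a Flink-side partitioner has already cached the partition list at open() time, which is the caching behavior described in point 1.

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class ProducerMetadataRefreshSketch {

    public static FlinkKafkaProducer<String> buildProducer() {
        Properties props = new Properties();
        // Placeholder broker address.
        props.setProperty("bootstrap.servers", "broker:9092");
        // The Kafka client refreshes cluster metadata at most every
        // metadata.max.age.ms (default: 300000 ms / 5 minutes). Lowering it
        // lets the client notice newly added partitions sooner, but it does
        // not refresh any partition list cached by a Flink-side partitioner.
        props.setProperty("metadata.max.age.ms", "60000");
        // "output-topic" is a placeholder topic name.
        return new FlinkKafkaProducer<>(
                "output-topic", new SimpleStringSchema(), props);
    }
}
{code}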
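On the consumer side, partition discovery already exists but is disabled by default: setting flink.partition-discovery.interval-millis in the consumer properties starts a discoverer thread that periodically re-reads the partition list. If I read the code correctly, KafkaPartitionDiscoverer asks the broker via KafkaConsumer#partitionsFor rather than relying on poll() to refresh metadata. A minimal sketch, where the topic, group id, and broker address are placeholders:

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class ConsumerPartitionDiscoverySketch {

    public static FlinkKafkaConsumer<String> buildConsumer() {
        Properties props = new Properties();
        // Placeholder broker address and consumer group.
        props.setProperty("bootstrap.servers", "broker:9092");
        props.setProperty("group.id", "demo-group");
        // Enables periodic partition discovery (off by default). Every 30 s a
        // separate discoverer thread fetches the current partition list and
        // assigns newly found partitions to the running source.
        props.setProperty("flink.partition-discovery.interval-millis", "30000");
        // "input-topic" is a placeholder topic name.
        return new FlinkKafkaConsumer<>(
                "input-topic", new SimpleStringSchema(), props);
    }
}
{code}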