[ https://issues.apache.org/jira/browse/FLINK-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15986328#comment-15986328 ]
ASF GitHub Bot commented on FLINK-6288:
---------------------------------------
Github user gyfora commented on the issue:
https://github.com/apache/flink/pull/3766
I liked the proposed API and I agree that it's probably best to keep the
old behaviour for the deprecated API.
I don't think fetching the Kafka partition info should be a huge problem, as
it shouldn't happen too often and Kafka should be able to return the info if you
can write to it. We of course need some timeout/retry mechanism so we don't fail
unnecessarily.
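For illustration, a minimal sketch (a hypothetical helper, not part of the PR) of what such a bounded retry around Kafka's partitionsFor() metadata call could look like; the method name and parameters are assumptions for the example:

    import java.util.List;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.PartitionInfo;

    // Hypothetical helper: fetch partition metadata for a topic with a bounded
    // number of retries so a transient metadata hiccup does not fail the job.
    class PartitionMetadataFetcher {

        static int[] fetchWithRetry(KafkaProducer<?, ?> producer, String topic,
                                    int maxRetries, long backoffMillis) throws Exception {
            Exception lastError = null;
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                try {
                    List<PartitionInfo> infos = producer.partitionsFor(topic);
                    if (infos != null && !infos.isEmpty()) {
                        int[] partitions = new int[infos.size()];
                        for (int i = 0; i < infos.size(); i++) {
                            partitions[i] = infos.get(i).partition();
                        }
                        return partitions;
                    }
                } catch (Exception e) {
                    lastError = e;   // remember the last failure and retry
                }
                Thread.sleep(backoffMillis);
            }
            throw new Exception("Could not fetch partitions for topic " + topic, lastError);
        }
    }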
The producer itself is not very resilient to errors in its current state: it
can't really handle the async errors, it just rethrows them and fails.
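As a rough sketch of the async-error pattern being referred to (an illustrative example, not the actual FlinkKafkaProducerBase code): the send callback remembers the first failure and it is surfaced on the next write rather than being handled or retried:

    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    // Sketch only: remember the first async failure from the send callback and
    // rethrow it on the next write, which makes the job fail on that error.
    class AsyncErrorTrackingWriter<K, V> {
        private final KafkaProducer<K, V> producer;
        private volatile Exception asyncException;

        AsyncErrorTrackingWriter(KafkaProducer<K, V> producer) {
            this.producer = producer;
        }

        void write(ProducerRecord<K, V> record) throws Exception {
            if (asyncException != null) {
                throw asyncException;   // fail fast on a previously seen async error
            }
            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception != null && asyncException == null) {
                        asyncException = exception;   // keep the first async failure
                    }
                }
            });
        }
    }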
> FlinkKafkaProducer's custom Partitioner is always invoked with number of
> partitions of default topic
> ----------------------------------------------------------------------------------------------------
>
> Key: FLINK-6288
> URL: https://issues.apache.org/jira/browse/FLINK-6288
> Project: Flink
> Issue Type: Improvement
> Components: Kafka Connector
> Reporter: Tzu-Li (Gordon) Tai
> Assignee: Fang Yong
>
> The {{FlinkKafkaProducerBase}} supports routing records to topics besides the
> default topic, but the custom {{Partitioner}} interface does not follow this
> semantic.
> The partitioner's {{partition}} method is always invoked with the number of
> partitions of the default topic, and not the number of partitions of the
> current {{targetTopic}}.
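To make the quoted description concrete, a minimal sketch (not the actual change in the PR; class and method names are assumptions) of looking up the partitions of the current targetTopic instead of always reusing the default topic's partition count:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.PartitionInfo;

    // Illustrative sketch: cache the partition list per target topic so a custom
    // partitioner can be called with the partitions of the topic the record is
    // actually routed to, not those of the default topic.
    class PerTopicPartitionCache {
        private final KafkaProducer<?, ?> producer;
        private final Map<String, int[]> partitionsByTopic = new HashMap<>();

        PerTopicPartitionCache(KafkaProducer<?, ?> producer) {
            this.producer = producer;
        }

        int[] partitionsFor(String targetTopic) {
            return partitionsByTopic.computeIfAbsent(targetTopic, topic -> {
                List<PartitionInfo> infos = producer.partitionsFor(topic);
                int[] partitions = new int[infos.size()];
                for (int i = 0; i < infos.size(); i++) {
                    partitions[i] = infos.get(i).partition();
                }
                return partitions;
            });
        }
    }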