[ 
https://issues.apache.org/jira/browse/FLINK-36817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17903569#comment-17903569
 ] 

Levani Kokhreidze commented on FLINK-36817:
-------------------------------------------

Hi [~davidradl] 

Thanks for getting back to me.

The reason I have not added it to _KafkaPartitionDiscoverer_ and 
_KafkaConsumerThread_ is that they are based on the Source V1 API, which is 
deprecated, so I didn't think it was worth adding the new feature there.

Of course I can add it there too if it's needed.

> Give users ability to provide their own KafkaConsumer when using 
> flink-connector-kafka
> --------------------------------------------------------------------------------------
>
>                 Key: FLINK-36817
>                 URL: https://issues.apache.org/jira/browse/FLINK-36817
>             Project: Flink
>          Issue Type: New Feature
>          Components: Connectors / Kafka
>            Reporter: Levani Kokhreidze
>            Priority: Major
>              Labels: pull-request-available
>
> In certain scenarios users of the `KafkaSource` in the flink-connector-kafka 
> might want to provide their own KafkaConsumer. Right now this is not possible, 
> as the consumer is created in the 
> [KafkaPartitionSplitReader|https://github.com/apache/flink-connector-kafka/blob/main/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/reader/KafkaPartitionSplitReader.java#L97]
>  which makes customisation impossible.
> Proposal is to let users pass `KafkaConsumerFactory` when building the 
> KafkaSource.
> {code:java}
> public interface KafkaConsumerFactory {
>   KafkaConsumer<byte[], byte[]> get(Properties properties);
> }{code}
> The builder will have a default implementation which creates the KafkaConsumer 
> the same way it is done today.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
