[ https://issues.apache.org/jira/browse/KAFKA-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manikumar resolved KAFKA-2022.
------------------------------
    Resolution: Won't Fix

I think we need to catch the exception and retry with a new leader. Please reopen 
if you think the issue still exists.
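A rough sketch of that pattern, building on the variables from the snippet quoted 
below (findNewLeader() is a hypothetical helper that re-queries the surviving 
brokers for the partition's current leader; a retry limit and back-off would also 
be sensible in practice):

    FetchResponse fetchResponse = null;
    while (fetchResponse == null) {
        try {
            fetchResponse = simpleconsumer.fetch(req);
        } catch (Exception e) {
            // e.g. ClosedChannelException when the old leader goes away:
            // drop the dead connection, look up the new leader, and retry.
            simpleconsumer.close();
            leaderAddress = findNewLeader(topic, partition);   // hypothetical helper
            simpleconsumer = new SimpleConsumer(leaderAddress.getHostName(),
                    leaderAddress.getPort(), consumerTimeout, consumerBufferSize, consumerId);
        }
    }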


> simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException: 
> null exception when the original leader fails instead of being trapped in the 
> fetchResponse api while consuming messages
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-2022
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2022
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 0.8.2.1
>         Environment: 3 Linux nodes with both ZooKeeper & brokers running under 
> respective users on each.
>            Reporter: Muqeet Mohammed Ali
>            Assignee: Neha Narkhede
>
> simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException: 
> null exception when the original leader fails, instead of the failure being 
> reported through the fetchResponse API while consuming messages. My 
> understanding was that any fetch failure can be detected via the 
> fetchResponse.hasError() call and then handled by fetching the new leader in 
> this case. Below is the relevant code snippet from the simple consumer, with a 
> comment marking the line causing the exception. Can you please comment on this?
> if (simpleconsumer == null) {
>     simpleconsumer = new SimpleConsumer(leaderAddress.getHostName(),
>             leaderAddress.getPort(), consumerTimeout, consumerBufferSize, consumerId);
> }
> FetchRequest req = new FetchRequestBuilder().clientId(getConsumerId())
>         .addFetch(topic, partition, offsetManager.getTempOffset(), consumerBufferSize)
>         // Note: the fetchSize might need to be increased
>         // if large batches are written to Kafka
>         .build();
> // exception is thrown at the line below
> FetchResponse fetchResponse = simpleconsumer.fetch(req);
> if (fetchResponse.hasError()) {
>     numErrors++;
>     etc...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)