Thanks for the response, Ewen!

On Tue, May 26, 2015 at 10:52 PM, Ewen Cheslack-Postava <e...@confluent.io>
wrote:

> It's not being switched in this case because the broker hasn't failed. It
> can still connect to all the other brokers and zookeeper. The only failure
> is of the link between a client and the broker.
>
> Another way to think of this is to extend the scenario with more producers.
> If I have 100 other producers and they can all still connect, would you
> still consider this a failure and expect the leader to change? Since
> network partitions (or periods of high latency, or long GC pauses, etc) can
> happen arbitrarily and clients might be spread far and wide, you can't rely
> on their connectivity as an indicator of the health of the Kafka broker.
>
> Of course, there's also a pretty big practical issue: since the client
> can't connect to the broker, how would it even report that it has a
> connectivity issue?
>
> -Ewen
>
> On Mon, May 25, 2015 at 10:05 PM, Kamal C <kamaltar...@gmail.com> wrote:
>
> > Hi,
> >
> >     I have a cluster of 3 Kafka brokers and a remote producer. Producer
> > started to send messages to *SampleTopic*. Then I blocked the network
> > connectivity between the Producer and the leader node for the topic
> > *SampleTopic* but network connectivity is healthy between the cluster and
> > producer is able to reach the other two nodes.
> >
> > *With Script*
> >
> > sh kafka-topics.sh --zookeeper localhost --describe
> > Topic:SampleTopic    PartitionCount:1    ReplicationFactor:3    Configs:
> >     Topic: SampleTopic    Partition: 0    Leader: 1    Replicas: 1,2,0    Isr: 1,2,0
> >
> >
> > The producer retries forever to reach the leader node, throwing a
> > connection-refused exception. I understand that when a node fails, the
> > leader gets switched. Why isn't the leader switched in this scenario?
> >
> > --
> > Kamal C
> >
>
>
>
> --
> Thanks,
> Ewen
>
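To make Ewen's point concrete: the controller's view of a broker's health comes from that broker's ZooKeeper session and its links to the rest of the cluster, never from producer connections. Here is a minimal, purely illustrative sketch of that decision rule. The names (`broker_is_alive`, `pick_leader`) are hypothetical and do not correspond to Kafka's actual implementation.

```python
# Illustrative sketch only -- NOT Kafka source code. It shows why a
# broken producer-to-leader link cannot trigger a leader change: client
# connectivity is simply not an input to the decision.

def broker_is_alive(zk_sessions, broker_id):
    """A broker counts as alive while its ZooKeeper session is intact."""
    return zk_sessions.get(broker_id, False)

def pick_leader(replicas, isr, zk_sessions):
    """Hypothetical rule: the first in-sync replica that is alive leads.
    Producer links never appear in the inputs."""
    for b in replicas:
        if b in isr and broker_is_alive(zk_sessions, b):
            return b
    return None

# The scenario from the thread: broker 1 leads partition 0, all three
# brokers hold ZooKeeper sessions, and only the producer's link to
# broker 1 is blocked -- so the leader stays 1.
zk_sessions = {0: True, 1: True, 2: True}
assert pick_leader([1, 2, 0], {1, 2, 0}, zk_sessions) == 1

# Leadership moves only when broker 1 itself loses its session.
zk_sessions[1] = False
assert pick_leader([1, 2, 0], {1, 2, 0}, zk_sessions) == 2
```

With 100 producers, some partitioned from broker 1 and some not, the inputs above are unchanged, which is exactly why the leader does not switch in the described scenario.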