[
https://issues.apache.org/jira/browse/KAFKA-532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Neha Narkhede updated KAFKA-532:
--------------------------------
Attachment: kafka-532-v4.patch
31. Partition.updateIsr(): I am thinking about what controllerEpoch the leader
should use when updating the leaderAndIsr path. There is probably nothing wrong
with using the controllerEpoch in replicaManager. However, it seems to make more
sense to use the controllerEpoch in the leaderAndIsr path itself, since this
update is actually not made by the controller.
You make a good point; I agree that it probably makes more sense to keep the
decision maker's controller epoch while changing the ISR. Fixed it.
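For illustration, a minimal sketch of what that means (the case class and method
names below are made up for the example, not the actual Partition.updateIsr()
code): when the leader rewrites the leaderAndIsr path, only the ISR and the ZK
version change, and the controller epoch already stored in the path is carried
over.

    // Sketch only: models the contents of the leaderAndIsr ZK path.
    case class LeaderIsrAndEpoch(leader: Int, leaderEpoch: Int, isr: List[Int],
                                 controllerEpoch: Int, zkVersion: Int)

    // The leader shrinks or expands the ISR but keeps the controller epoch of the
    // controller that made the original leader/ISR decision, instead of the epoch
    // cached in ReplicaManager.
    def updatedLeaderAndIsr(current: LeaderIsrAndEpoch, newIsr: List[Int]): LeaderIsrAndEpoch =
      current.copy(isr = newIsr, zkVersion = current.zkVersion + 1)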
32. ReplicaManager.controllerEpoch: Since this variable can be accessed from
different threads, it needs to be volatile. Also, we only need to update
controllerEpoch if the one from the request is larger (but not equal). It
probably should be initialized to 0 or -1?
Good catch, fixed it.
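As a reference for 32, here is a small sketch of the check (class and field names
are illustrative, not the actual ReplicaManager code): the cached epoch is
volatile, starts at 0 so the first controller's epoch 1 is accepted, is bumped
only when the incoming epoch is strictly larger, and any request with a smaller
epoch is rejected as stale.

    class ControllerEpochCache {
      // volatile so that request handler threads always see the latest value
      @volatile var controllerEpoch: Int = 0  // first elected controller is expected to use epoch 1

      // Returns true if a request carrying requestEpoch should be processed.
      def validate(requestEpoch: Int): Boolean = synchronized {
        if (requestEpoch < controllerEpoch)
          false                                // stale controller, reject the request
        else {
          if (requestEpoch > controllerEpoch)  // update only on a strictly larger epoch
            controllerEpoch = requestEpoch
          true
        }
      }
    }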
33. LeaderElectionTest.testLeaderElectionWithStaleControllerEpoch(): I wonder
if we really need to start a new broker. Can we just send a stale controller
epoch using the controllerChannelManager in the current controller?
I just thought it would be simpler to understand the logic if there were another
broker acting as the new controller, but you are right; I could have just
hijacked the old controller's channel manager.
34. KafkaController: There seems to be a tricky issue with incrementing the
controller epoch. We increment the epoch in onControllerFailover() after the
broker becomes the controller. What could happen is that broker 1 becomes the
controller and goes into a GC pause before we increment the epoch. Broker 2
becomes the new controller and increments the epoch. Broker 1 comes back from
the GC pause and increments the epoch again. Now, broker 1's controller epoch is
actually larger. I am not sure what the best way to address this is. One thought
is that immediately after the controller epoch is incremented in
onControllerFailover(), we check if this broker is still the controller (by
reading the controller path in ZK). If not, we throw an exception. Also, the
epoch probably should be initialized to 0 if we want the first controller to
have epoch 1.
Implemented the fix I described earlier for this.
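For reference, a rough sketch of that fix (the ZooKeeper helper below is a
stand-in with made-up method names, not the real ZkUtils API): immediately after
bumping the epoch in onControllerFailover(), re-read the controller path and
abort the failover if this broker is no longer the elected controller.

    // Stand-in for the ZooKeeper operations the controller needs; hypothetical API.
    trait ControllerZk {
      def incrementControllerEpoch(): Int  // bumps and returns the controller epoch stored in ZK
      def currentControllerId(): Int       // broker id currently registered under the controller path
    }

    class ControllerEpochIncrement(zk: ControllerZk, brokerId: Int) {
      // Called from onControllerFailover(): bump the epoch, then verify we are still the
      // elected controller. A broker that paused in GC and lost the election in the
      // meantime sees another broker id in the controller path and aborts the failover.
      def incrementAndVerify(): Int = {
        val newEpoch = zk.incrementControllerEpoch()
        if (zk.currentControllerId() != brokerId)
          throw new IllegalStateException(
            "Broker %d is no longer the elected controller, aborting failover".format(brokerId))
        newEpoch
      }
    }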
> Multiple controllers can co-exist during soft failures
> ------------------------------------------------------
>
> Key: KAFKA-532
> URL: https://issues.apache.org/jira/browse/KAFKA-532
> Project: Kafka
> Issue Type: Bug
> Affects Versions: 0.8
> Reporter: Neha Narkhede
> Assignee: Neha Narkhede
> Priority: Blocker
> Labels: bugs
> Attachments: kafka-532-v1.patch, kafka-532-v2.patch,
> kafka-532-v3.patch, kafka-532-v4.patch
>
> Original Estimate: 48h
> Remaining Estimate: 48h
>
> If the current controller experiences an intermittent soft failure (GC pause)
> in the middle of leader election or partition reassignment, a new controller
> might get elected and start communicating new state change decisions to the
> brokers. After recovering from the soft failure, the old controller might
> continue sending some stale state change decisions to the brokers, resulting
> in unexpected failures. We need to introduce a controller generation id that
> increments with controller election. The brokers should reject any state
> change requests by a controller with an older generation id.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira