cmccabe opened a new pull request, #16672:
URL: https://github.com/apache/kafka/pull/16672

   In MetadataVersion 3.7-IV2 and above, the broker's AssignmentsManager sends 
an RPC to the controller informing it of the directory we have chosen for each 
new replica. Unfortunately, the code does not check whether the topic still 
exists in the MetadataImage before sending the RPC, and it retries 
indefinitely. As a result, after a topic is created and deleted in rapid 
succession, we can get stuck including the now-defunct replica in our 
subsequent AssignReplicasToDirsRequests forever.
   
   To prevent this, the AssignmentsManager should check whether a topic still 
exists (and is still present on the broker in question) before sending the 
RPC. To avoid log spam, we should not log any error messages until several 
minutes have passed without success. Finally, rather than creating a new 
EventQueue event for each assignment request, we should simply modify a shared 
data structure and schedule a deferred event to send the accumulated RPCs, 
which improves efficiency.
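   The batching-and-filtering idea above can be sketched roughly as follows. This is a hypothetical, simplified illustration, not the actual Kafka code: `AssignmentBatcher`, the string-keyed `pending` map, and `drainValid` are invented names, and the real AssignmentsManager works with topic IDs and the MetadataImage rather than plain strings.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: accumulate directory assignments in a shared structure
// and send them in one deferred batch, dropping entries for defunct topics.
public final class AssignmentBatcher {
    // Shared structure: pending directory assignments, keyed by partition.
    private final Map<String, String> pending = new HashMap<>();
    private boolean sendScheduled = false;

    // Record a new assignment. Returns true only when the caller should
    // schedule a deferred send event; later assignments piggyback on it,
    // instead of creating a new EventQueue event per request.
    public synchronized boolean assign(String topicIdPartition, String directory) {
        pending.put(topicIdPartition, directory);
        if (sendScheduled) return false;  // a batch send is already scheduled
        sendScheduled = true;
        return true;
    }

    // Build the RPC payload, keeping only partitions whose topic still exists
    // in the current metadata image. Defunct entries are discarded rather
    // than retried forever. (Parsing the topic out of "topic-partition" is a
    // simplification for this sketch.)
    public synchronized Map<String, String> drainValid(Set<String> topicsInImage) {
        Map<String, String> toSend = new HashMap<>();
        for (Map.Entry<String, String> e : pending.entrySet()) {
            String topic = e.getKey().substring(0, e.getKey().lastIndexOf('-'));
            if (topicsInImage.contains(topic)) {
                toSend.put(e.getKey(), e.getValue());
            }
        }
        pending.clear();
        sendScheduled = false;
        return toSend;
    }
}
```

   For example, if "foo" and "bar" are both assigned but "bar" has since been deleted from the image, the drained batch contains only the "foo" partition, and nothing is left behind to retry.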
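   The "don't log until several minutes have passed without success" behavior amounts to a quiet period on error reporting. A minimal sketch of that idea, with invented names (`QuietPeriodLogger`, `onFailure`) that are not the actual Kafka API:

```java
// Hypothetical sketch: suppress error logging until failures have persisted
// past a grace period, resetting whenever a send finally succeeds.
public final class QuietPeriodLogger {
    private final long gracePeriodMs;
    private long firstFailureMs = -1;  // -1 means no ongoing failure streak

    public QuietPeriodLogger(long gracePeriodMs) {
        this.gracePeriodMs = gracePeriodMs;
    }

    // Called on each failed send; returns true if this failure should be
    // logged at error level (the streak has outlived the grace period).
    public boolean onFailure(long nowMs) {
        if (firstFailureMs < 0) firstFailureMs = nowMs;
        return nowMs - firstFailureMs >= gracePeriodMs;
    }

    // A successful send ends the failure streak and re-arms the quiet period.
    public void onSuccess() {
        firstFailureMs = -1;
    }
}
```

   With a five-minute grace period, transient failures during a controller failover stay silent, while a genuinely stuck assignment eventually surfaces in the log.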


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
