virajjasani commented on pull request #3675:
URL: https://github.com/apache/hadoop/pull/3675#issuecomment-998059075


   Sorry, I could not get to this PR last week. I will review it later this week, 
but I don't mean to block this work. If I find something odd, or something that 
could be improved, we can always clarify it on the PR/Jira or create an addendum 
PR later.
   Thanks for your work @KevinWikant; this could be really helpful going 
forward.
   
   With a quick glance, just one question for now: overall, the goal seems to be 
to prioritize decommissioning healthy nodes over unhealthy ones (by removing 
unhealthy entries from the tracked set and re-queueing them). So if a few nodes 
are in a really bad state (hardware/network issues), the plan is to keep 
re-queueing them as long as more nodes are being decommissioned than the max 
tracked nodes, right? Since decommissioning an unhealthy node might require some 
sort of retry anyway, shall we re-queue them even when that condition is not met 
(i.e. total number of decommissions in progress < max tracked nodes)? I am just 
thinking at a high level; I have yet to catch up with the PR.
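   To make the question concrete, here is a minimal, hypothetical sketch of the 
re-queueing behavior as I understand it (this is not the actual 
DatanodeAdminManager code; the class, field, and method names are invented for 
illustration): unhealthy tracked nodes are pushed back to the pending queue only 
when the tracked set is at capacity, which is exactly the condition I am asking 
about.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of conditional re-queueing during decommission.
// Names (DecommissionRequeueSketch, tick, maxTrackedNodes) are invented
// for illustration and do not exist in Hadoop.
public class DecommissionRequeueSketch {
    static class Node {
        final String name;
        final boolean healthy;
        Node(String name, boolean healthy) {
            this.name = name;
            this.healthy = healthy;
        }
    }

    final int maxTrackedNodes;
    final List<Node> tracked = new ArrayList<>();
    final Deque<Node> pending = new ArrayDeque<>();

    DecommissionRequeueSketch(int maxTrackedNodes) {
        this.maxTrackedNodes = maxTrackedNodes;
    }

    void enqueue(Node n) {
        pending.addLast(n);
    }

    // One monitor pass: unhealthy tracked nodes are re-queued ONLY when the
    // tracked set is full (decomm in progress >= max tracked nodes); free
    // tracking slots are then refilled from the pending queue, letting
    // healthy nodes make progress ahead of persistently unhealthy ones.
    void tick() {
        if (tracked.size() >= maxTrackedNodes) {
            List<Node> unhealthy = new ArrayList<>();
            for (Node n : tracked) {
                if (!n.healthy) {
                    unhealthy.add(n);
                }
            }
            tracked.removeAll(unhealthy);
            // Re-queue for a later retry instead of blocking a slot.
            unhealthy.forEach(pending::addLast);
        }
        while (tracked.size() < maxTrackedNodes && !pending.isEmpty()) {
            tracked.add(pending.pollFirst());
        }
    }
}
```

   In this sketch, when total decommissions in progress < maxTrackedNodes, an 
unhealthy node simply stays tracked and never gets retried via the queue, which 
is what prompted the question above.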


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


