virajjasani commented on pull request #3280:
URL: https://github.com/apache/hadoop/pull/3280#issuecomment-898192874


   FYI @ferhui @amahussein filed the Jira.
   
   How is the flakiness resolved?
   
   The number of under-replicated blocks on DN2 can be either 3 or 4, depending on 
the actual blocks available in DataNode storage. Hence, in order to make sure that 
once both DN1 and DN2 are decommissioned we have 4 under-replicated blocks, 
we first need to wait for a total of 8 blocks (including replicas) to be reported 
by both DNs together. This is the additional check. Once we ensure this, 
we won't run into flaky test failures where, because 1 replica has not yet been 
reported before we start decommissioning, we cannot assert that all 4 blocks 
are under-replicated.
   Hence, I have added this additional validation before we start decommissioning 
DN1.
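   The pre-decommission wait described above can be sketched as a polling check, in the spirit of Hadoop's GenericTestUtils.waitFor. This is a minimal self-contained sketch, not the actual patch: the waitFor helper here is a stand-in for the Hadoop test utility, and the reportedBlocks counter simulates the block count reported by DN1 and DN2 together.

   ```java
   import java.util.concurrent.TimeoutException;
   import java.util.concurrent.atomic.AtomicInteger;
   import java.util.function.Supplier;

   public class WaitForBlockReports {
       // Stand-in for GenericTestUtils.waitFor: poll until the condition
       // holds or the timeout elapses.
       static void waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs)
               throws InterruptedException, TimeoutException {
           long deadline = System.currentTimeMillis() + timeoutMs;
           while (!check.get()) {
               if (System.currentTimeMillis() > deadline) {
                   throw new TimeoutException("condition not met within " + timeoutMs + " ms");
               }
               Thread.sleep(intervalMs);
           }
       }

       public static void main(String[] args) throws Exception {
           // Simulated count of blocks (including replicas) reported by both DNs.
           AtomicInteger reportedBlocks = new AtomicInteger(0);

           // Background thread standing in for DataNode block reports arriving over time.
           Thread reporter = new Thread(() -> {
               for (int i = 0; i < 8; i++) {
                   reportedBlocks.incrementAndGet();
                   try { Thread.sleep(10); } catch (InterruptedException ignored) { return; }
               }
           });
           reporter.start();

           // The additional validation: do not start decommissioning DN1 until all
           // 8 blocks (4 blocks x 2 replicas) have been reported by both DNs.
           waitFor(() -> reportedBlocks.get() >= 8, 20, 5000);
           reporter.join();
           System.out.println("reported=" + reportedBlocks.get());
           // Only now is it safe to assert that all 4 blocks become under-replicated.
       }
   }
   ```

   The point of the design is that the assertion on under-replicated blocks is only meaningful once every replica has actually been reported; polling for the full count removes the race.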
   
   After the recent changes, I haven't seen the test fail across multiple runs. Could 
you please take a look?
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


