zhengchenyu commented on PR #4138:
URL: https://github.com/apache/hadoop/pull/4138#issuecomment-2203171701

   @tasanuma Sorry for missing your comment. 
   
   In the case of a 6+3 EC policy, if 4 blocks are unavailable because their nodes are busy, the size of srcNodes is 5. If one of these 5 blocks is on a decommissioning DataNode, I think a block copy for the decommissioning block should be triggered. However, this simple block copy cannot be triggered now.
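   To make the arithmetic concrete, here is a minimal, self-contained Java sketch of the reasoning above. It is an illustration only, not BlockManager code; the class and helper names are hypothetical:

   ```java
   // Illustrative sketch only, not the actual BlockManager logic.
   // It models the scenario above: an RS-6-3 block group, 4 internal
   // blocks whose nodes are busy, 5 usable sources, and one of those
   // sources on a decommissioning DataNode.
   public class EcCopySketch {

     // Decoding a missing internal block needs at least `dataBlocks`
     // usable sources (6 for a 6+3 policy).
     static boolean canScheduleEcReconstruction(int dataBlocks, int usableSources) {
       return usableSources >= dataBlocks;
     }

     // Copying a decommissioning internal block only needs that block
     // itself, so it should be schedulable even when decoding is not.
     static boolean shouldScheduleSimpleCopy(boolean hasDecommissioningSource) {
       return hasDecommissioningSource;
     }

     public static void main(String[] args) {
       int dataBlocks = 6, parityBlocks = 3, busySources = 4;
       int usableSources = dataBlocks + parityBlocks - busySources;                  // 5
       System.out.println(canScheduleEcReconstruction(dataBlocks, usableSources));   // false
       System.out.println(shouldScheduleSimpleCopy(true));                           // true
     }
   }
   ```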
   
   I have created PR [HDFS-17542](https://github.com/apache/hadoop/pull/6915). The test [Case 1.7](https://github.com/zhengchenyu/hadoop/blob/f1ac383027accd5fea05f1a2fa9c07f89ff0d961/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java#L2447) demonstrates the problem I described: on current trunk, scheduleReconstruction returns null, while after [HDFS-17542](https://github.com/apache/hadoop/pull/6915) it returns a work item for the copy. 
   
   [HDFS-17542](https://github.com/apache/hadoop/pull/6915) reorganizes the code structure. Would you be interested in taking a look at it?
   



