[ https://issues.apache.org/jira/browse/HDFS-16479?focusedWorklogId=752845&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-752845 ]
ASF GitHub Bot logged work on HDFS-16479:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 05/Apr/22 12:37
            Start Date: 05/Apr/22 12:37
    Worklog Time Spent: 10m

Work Description: ayushtkn commented on code in PR #4138:
URL: https://github.com/apache/hadoop/pull/4138#discussion_r842732623

##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##########
@@ -2163,6 +2163,15 @@ BlockReconstructionWork scheduleReconstruction(BlockInfo block,
       return null;
     }

+    // skip if source datanodes for reconstructing ec block are not enough
+    if (block.isStriped()) {
+      BlockInfoStriped stripedBlock = (BlockInfoStriped) block;
+      if (stripedBlock.getDataBlockNum() > srcNodes.length) {

Review Comment:
   Had a very quick look. Just thinking about a scenario with, say, RS-6-3-1024k, where we write only 1 MB: in that case the blocks available will be 1 data block + 3 parity blocks, so the block group itself will have 4 blocks in total. Will this code start returning null? Not sure if `getRealDataBlockNum` helps here or not, if it is actually a problem.
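The small-file scenario in the review comment can be sketched with self-contained code. The class and method names below are illustrative stand-ins, not the actual HDFS `BlockManager`/`BlockInfoStriped` API; the point is only that a guard written against the schema-level data-block count (6 for RS-6-3) can skip reconstruction for a 1 MB block group whose real, size-aware data-block count is 1:

```java
// Hypothetical sketch (NOT the HDFS API): contrasts a schema-level guard
// with a size-aware guard for scheduling EC reconstruction.
public class EcSourceCheckSketch {
  static final int DATA_BLOCKS = 6;            // RS-6-3: 6 data units per group
  static final int PARITY_BLOCKS = 3;          // 3 parity units per group
  static final long CELL_SIZE = 1024 * 1024L;  // 1024k striping cell

  // Size-aware data-block count: small files fill fewer than DATA_BLOCKS
  // cells, analogous to what getRealDataBlockNum() is meant to capture.
  static int realDataBlockNum(long blockGroupBytes) {
    int cells = (int) ((blockGroupBytes + CELL_SIZE - 1) / CELL_SIZE);
    return Math.min(DATA_BLOCKS, Math.max(cells, 1));
  }

  // The guard as written in the PR hunk: schema-level count vs. live sources.
  static boolean skipBySchemaCount(int liveSourceCount) {
    return DATA_BLOCKS > liveSourceCount;
  }

  // The same guard using the size-aware count instead.
  static boolean skipByRealCount(long blockGroupBytes, int liveSourceCount) {
    return realDataBlockNum(blockGroupBytes) > liveSourceCount;
  }

  public static void main(String[] args) {
    long oneMb = 1024 * 1024L;
    // 1 MB file: the group holds 1 data + 3 parity blocks; suppose one
    // replica was lost, leaving 3 live sources.
    int liveSources = 3;
    System.out.println("realDataBlockNum(1MB)   = " + realDataBlockNum(oneMb));
    System.out.println("schema guard skips work = " + skipBySchemaCount(liveSources));
    System.out.println("real guard skips work   = " + skipByRealCount(oneMb, liveSources));
  }
}
```

Under these assumed parameters, the schema-level guard returns true (reconstruction skipped) while the size-aware guard returns false, which is the concern the comment raises.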
Issue Time Tracking
-------------------
    Worklog Id:     (was: 752845)
    Time Spent: 40m  (was: 0.5h)

> EC: NameNode should not send a reconstruction work when the source datanodes are insufficient
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16479
>                 URL: https://issues.apache.org/jira/browse/HDFS-16479
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ec, erasure-coding
>            Reporter: Yuanbo Liu
>            Priority: Critical
>              Labels: pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> We got this exception from DataNodes:
> java.lang.IllegalArgumentException: No enough live striped blocks.
>         at com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
>         at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReader.<init>(StripedReader.java:128)
>         at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReconstructor.<init>(StripedReconstructor.java:135)
>         at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.<init>(StripedBlockReconstructor.java:41)
>         at org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker.processErasureCodingTasks(ErasureCodingWorker.java:133)
>         at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:796)
>         at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:680)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processCommand(BPServiceActor.java:1314)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.lambda$enqueue$2(BPServiceActor.java:1360)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1287)
>
> After going through the code of ErasureCodingWork.java, we found:
> {code:java}
> targets[0].getDatanodeDescriptor().addBlockToBeErasureCoded(
>     new ExtendedBlock(blockPoolId, stripedBlk), getSrcNodes(), targets,
>     getLiveBlockIndicies(), stripedBlk.getErasureCodingPolicy());
> {code}
> The busy-but-live block indices (liveBusyBlockIndicies) are not included in liveBlockIndicies, so erasure-coding reconstruction on the DataNode sometimes fails with 'No enough live striped blocks'.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
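The failure mode described in the issue can be sketched with self-contained code. The names below are hypothetical stand-ins, not the real `ErasureCodingWork`/`StripedReader` API; the sketch only illustrates why dropping busy-but-live indices from the live-index array can trip a DataNode-side "enough live blocks" precondition that the merged array would satisfy:

```java
// Hypothetical sketch (NOT the HDFS API): merging busy-but-live indices
// into the live-index array before dispatching reconstruction work.
import java.util.BitSet;

public class LiveIndicesSketch {
  // Assumed helper: union of idle live indices and busy (but still live)
  // indices, mirroring the idea of also counting liveBusyBlockIndicies.
  static byte[] mergeLiveIndices(byte[] live, byte[] busyButLive) {
    BitSet set = new BitSet();
    for (byte b : live) set.set(b);
    for (byte b : busyButLive) set.set(b);
    byte[] merged = new byte[set.cardinality()];
    int i = 0;
    for (int idx = set.nextSetBit(0); idx >= 0; idx = set.nextSetBit(idx + 1)) {
      merged[i++] = (byte) idx;
    }
    return merged;
  }

  // Simplified DataNode-side precondition: reconstruction needs at least
  // dataBlkNum distinct live indices.
  static boolean enoughLiveBlocks(byte[] liveIndices, int dataBlkNum) {
    return liveIndices.length >= dataBlkNum;
  }

  public static void main(String[] args) {
    byte[] live = {0, 1, 2, 3, 4}; // 5 idle live sources
    byte[] busy = {5};             // 1 live-but-busy source
    int dataBlkNum = 6;            // RS-6-3: need at least 6 live indices
    System.out.println("without busy: " + enoughLiveBlocks(live, dataBlkNum));
    System.out.println("with busy:    "
        + enoughLiveBlocks(mergeLiveIndices(live, busy), dataBlkNum));
  }
}
```

With only the idle indices the precondition fails (5 < 6) and would raise the reported IllegalArgumentException; counting the busy-but-live index as well satisfies it.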