[ https://issues.apache.org/jira/browse/HDFS-16479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853043#comment-17853043 ]
ASF GitHub Bot commented on HDFS-16479:
---------------------------------------

zhengchenyu commented on PR #4138:
URL: https://github.com/apache/hadoop/pull/4138#issuecomment-2154215569

@tasanuma @ayushtkn I think that after this PR, the simple copy for a decommissioning EC block will be ignored. For example, suppose we have 6 + 3 storages. If one storage is decommissioning and the other storages are busy, the simple copy from the decommissioning storage will be ignored.

> EC: NameNode should not send a reconstruction work when the source datanodes
> are insufficient
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16479
>                 URL: https://issues.apache.org/jira/browse/HDFS-16479
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ec, erasure-coding
>            Reporter: Yuanbo Liu
>            Assignee: Takanobu Asanuma
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.2.4, 3.3.5
>
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> We got this exception from DataNodes:
> java.lang.IllegalArgumentException: No enough live striped blocks.
>     at com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
>     at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReader.<init>(StripedReader.java:128)
>     at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedReconstructor.<init>(StripedReconstructor.java:135)
>     at org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockReconstructor.<init>(StripedBlockReconstructor.java:41)
>     at org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker.processErasureCodingTasks(ErasureCodingWorker.java:133)
>     at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:796)
>     at org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:680)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processCommand(BPServiceActor.java:1314)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.lambda$enqueue$2(BPServiceActor.java:1360)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1287)
>
> After going through the code of ErasureCodingWork.java, we found:
> {code:java}
> targets[0].getDatanodeDescriptor().addBlockToBeErasureCoded(
>     new ExtendedBlock(blockPoolId, stripedBlk), getSrcNodes(), targets,
>     getLiveBlockIndicies(), stripedBlk.getErasureCodingPolicy());
> {code}
> The liveBusyBlockIndicies are not included in liveBlockIndicies, so erasure
> coding reconstruction sometimes fails with 'No enough live striped blocks'.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
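The failure mode discussed above can be illustrated with a minimal, self-contained sketch. This is not Hadoop's actual implementation; the class and method names are hypothetical. For an RS(6, 3) block group, a DataNode needs at least 6 readable sources to reconstruct a missing block. A busy node still keeps its internal block alive, but it cannot be read from right now, so if the NameNode counts busy indices as usable sources, the scheduled work fails on the DataNode with "No enough live striped blocks":

```java
import java.util.Set;

// Hypothetical sketch of the scheduling check the issue asks for:
// the NameNode should only send EC reconstruction work when enough
// non-busy source datanodes exist.
public class EcReconstructionCheck {

    /**
     * For an RS(d, p) policy, reconstruction needs at least d readable
     * sources. Indices that are live but busy must be excluded, because
     * the DataNode cannot read from them when executing the work.
     */
    public static boolean canScheduleReconstruction(int dataBlkNum,
                                                    Set<Integer> liveIndices,
                                                    Set<Integer> busyIndices) {
        long readableSources = liveIndices.stream()
                .filter(i -> !busyIndices.contains(i))
                .count();
        return readableSources >= dataBlkNum;
    }

    public static void main(String[] args) {
        // RS(6, 3): indices 0..5 are data units, 6..8 are parity units.
        Set<Integer> live = Set.of(0, 1, 2, 3, 4, 5, 6); // 7 live indices
        Set<Integer> busy = Set.of(3, 4);                // 2 of them busy

        // Only 5 readable sources remain, so scheduling should be skipped.
        System.out.println(canScheduleReconstruction(6, live, busy));

        // With no busy sources, 7 >= 6 readers are available.
        System.out.println(canScheduleReconstruction(6, live, Set.of()));
    }
}
```

Under this sketch, treating busy indices the same as live ones (as the quoted `addBlockToBeErasureCoded` call effectively does) would report enough sources at the NameNode while the DataNode's precondition check still fails.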