[ https://issues.apache.org/jira/browse/HDFS-13994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655987#comment-16655987 ]
Íñigo Goiri edited comment on HDFS-13994 at 10/18/18 10:56 PM:
---------------------------------------------------------------
The failed unit tests are usual suspects and:
* HDFS-14002 fixes TestLayoutVersion
* HDFS-14004 covers TestLeaseRecovery2

[^HDFS-13994.5.patch] looks like it keeps the old functionality and makes it more efficient.

was (Author: elgoiri):
The failed unit tests are usual suspects and:
* HDFS-14002 fixes TestLayoutVersion
* HDFS-14004 covers TestLeaseRecovery2

[^HDFS-13994.5.patch] looks like it keeps the old functionality and makes it more efficient.
+1

> DataNode BlockSender waitForMinLength
> -------------------------------------
>
>                 Key: HDFS-13994
>                 URL: https://issues.apache.org/jira/browse/HDFS-13994
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.2.0
>            Reporter: BELUGA BEHR
>            Assignee: BELUGA BEHR
>            Priority: Minor
>         Attachments: HDFS-13994.1.patch, HDFS-13994.2.patch,
>                      HDFS-13994.3.patch, HDFS-13994.4.patch,
>                      HDFS-13994.5.patch
>
>
> {code:java|title=BlockSender.java}
> private static void waitForMinLength(ReplicaInPipeline rbw, long len)
>     throws IOException {
>   // Wait for 3 seconds for rbw replica to reach the minimum length
>   for (int i = 0; i < 30 && rbw.getBytesOnDisk() < len; i++) {
>     try {
>       Thread.sleep(100);
>     } catch (InterruptedException ie) {
>       throw new IOException(ie);
>     }
>   }
>   long bytesOnDisk = rbw.getBytesOnDisk();
>   if (bytesOnDisk < len) {
>     throw new IOException(
>         String.format("Need %d bytes, but only %d bytes available", len,
>             bytesOnDisk));
>   }
> }
> {code}
> It is not very efficient to poll for status in this way. Instead, use
> {{notifyAll}} within the {{ReplicaInPipeline}} to notify the caller when the
> replica has reached a certain size.
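The monitor-based replacement suggested in the description could look roughly like the sketch below: the writer calls {{notifyAll}} whenever the on-disk byte count advances, and the reader blocks in {{wait}} instead of sleeping in 100 ms increments. This is only an illustration, not the actual patch; the class name {{MiniReplica}}, the {{setBytesOnDisk}} method, and the explicit timeout parameter are hypothetical stand-ins for {{ReplicaInPipeline}}'s internals.

```java
import java.io.IOException;

public class WaitDemo {
  // Hypothetical stand-in for ReplicaInPipeline; not the real HDFS class.
  static class MiniReplica {
    private long bytesOnDisk = 0;

    synchronized void setBytesOnDisk(long n) {
      bytesOnDisk = n;
      notifyAll(); // wake any reader blocked in waitForMinLength
    }

    synchronized long getBytesOnDisk() {
      return bytesOnDisk;
    }

    // Block until at least len bytes are on disk, or timeoutMs elapses.
    // Same failure message as the original polling version.
    synchronized void waitForMinLength(long len, long timeoutMs)
        throws IOException {
      long deadline = System.currentTimeMillis() + timeoutMs;
      while (bytesOnDisk < len) {
        long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
          throw new IOException(String.format(
              "Need %d bytes, but only %d bytes available", len, bytesOnDisk));
        }
        try {
          wait(remaining); // released lock is re-acquired before returning
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException(ie);
        }
      }
    }
  }

  public static void main(String[] args) throws Exception {
    MiniReplica replica = new MiniReplica();
    // Simulate the pipeline writer landing bytes after a short delay.
    Thread writer = new Thread(() -> {
      try {
        Thread.sleep(50);
      } catch (InterruptedException ignored) {
      }
      replica.setBytesOnDisk(1024);
    });
    writer.start();
    replica.waitForMinLength(1024, 3000); // returns as soon as notifyAll fires
    writer.join();
    System.out.println("bytesOnDisk=" + replica.getBytesOnDisk());
  }
}
```

Note the {{while}} loop around {{wait}}: it guards against spurious wakeups and re-checks the deadline, so the reader returns the moment the length is reached rather than at the next 100 ms poll boundary.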
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org