bbeaudreault commented on a change in pull request #3527:
URL: https://github.com/apache/hadoop/pull/3527#discussion_r736825878
##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
##########
@@ -228,53 +222,57 @@ boolean deadNodesContain(DatanodeInfo nodeInfo) {
     return deadNodes.containsKey(nodeInfo);
   }
 
-  @VisibleForTesting
-  void setReadTimeStampsForTesting(long timeStamp) {
-    setLocatedBlocksTimeStamp(timeStamp);
-  }
-
-  private void setLocatedBlocksTimeStamp() {
-    setLocatedBlocksTimeStamp(Time.monotonicNow());
-  }
-
-  private void setLocatedBlocksTimeStamp(long timeStamp) {
-    this.locatedBlocksTimeStamp = timeStamp;
-  }
-
   /**
    * Grab the open-file info from namenode
    * @param refreshLocatedBlocks whether to re-fetch locatedblocks
    */
   void openInfo(boolean refreshLocatedBlocks) throws IOException {
     final DfsClientConf conf = dfsClient.getConf();
     synchronized(infoLock) {
-      lastBlockBeingWrittenLength =
-          fetchLocatedBlocksAndGetLastBlockLength(refreshLocatedBlocks);
       int retriesForLastBlockLength = conf.getRetryTimesForGetLastBlockLength();
-      while (retriesForLastBlockLength > 0) {
+
+      while (true) {

Review comment:
       This `openInfo` rewrite is functionally identical to the previous implementation, but it is cleaner and makes it possible to share code between this method and the new `refreshBlockLocations` method below. I didn't want to simply reuse this method for the background refresh, because it makes multiple RPC calls while holding `infoLock`. The background `refreshBlockLocations` is meant to have minimal impact on the hot path of reads, so it performs those RPC calls outside of any locks and then quickly swaps the results in under the lock.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
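The locking strategy described in the review comment can be sketched as follows. This is an illustrative standalone example, not the actual DFSInputStream code: the class and member names (`LocatedBlocksCache`, `fetchFromNameNode`, the `String` stand-in for located blocks) are invented for the sketch. The point is that the slow fetch happens with no lock held, and only the cheap reference swap takes the lock.

```java
// Hedged sketch of "fetch outside the lock, swap in under the lock".
// Names here are hypothetical; they do not match the HDFS source.
public class LocatedBlocksCache {
    private final Object infoLock = new Object();
    private String locatedBlocks = "stale"; // stands in for LocatedBlocks state

    // Hot path: readers hold infoLock only long enough to read the field.
    public String getLocatedBlocks() {
        synchronized (infoLock) {
            return locatedBlocks;
        }
    }

    // Background refresh: the expensive "RPC" runs with no lock held;
    // only the quick swap of the result is done under infoLock.
    public void refreshBlockLocations() {
        String fresh = fetchFromNameNode(); // simulated slow RPC, lock-free
        synchronized (infoLock) {
            locatedBlocks = fresh;
        }
    }

    // Placeholder for the namenode RPC that would really fetch block locations.
    private String fetchFromNameNode() {
        return "fresh";
    }

    public static void main(String[] args) {
        LocatedBlocksCache cache = new LocatedBlocksCache();
        cache.refreshBlockLocations();
        System.out.println(cache.getLocatedBlocks());
    }
}
```

Contrast this with doing the RPC inside the `synchronized` block: readers on the hot path would then stall behind every refresh for the full duration of the network call.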
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org