virajjasani commented on code in PR #5754:
URL: https://github.com/apache/hadoop/pull/5754#discussion_r1240622243


##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##########
@@ -299,9 +395,62 @@ public void put(int blockNumber, ByteBuffer buffer, Configuration conf,
     // Update stream_read_blocks_in_cache stats only after blocks map is updated with new file
     // entry to avoid any discrepancy related to the value of stream_read_blocks_in_cache.
     // If stream_read_blocks_in_cache is updated before updating the blocks map here, closing of
-    // the input stream can lead to the removal of the cache file even before blocks is added with
-    // the new cache file, leading to incorrect value of stream_read_blocks_in_cache.
+    // the input stream can lead to the removal of the cache file even before blocks is added
+    // with the new cache file, leading to incorrect value of stream_read_blocks_in_cache.
     prefetchingStatistics.blockAddedToFileCache();
+    addToLinkedListAndEvictIfRequired(entry);
+  }
+
+  /**
+   * Add the given entry to the head of the linked list and if the LRU cache size
+   * exceeds the max limit, evict tail of the LRU linked list.
+   *
+   * @param entry Block entry to add.
+   */
+  private void addToLinkedListAndEvictIfRequired(Entry entry) {
+    addToHeadOfLinkedList(entry);
+    blocksLock.writeLock().lock();

Review Comment:
   Yeah, that should still be okay IMO, given that eviction would take place anyway, either before or after another element is added to the head (by a cache read or by writing a new block entry).
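   As an aside, the head-insert/tail-evict pattern under discussion can be sketched roughly as below. This is a minimal illustrative sketch, not the actual `SingleFilePerBlockCache` code: the class name `LruSketch`, the use of `ArrayDeque`, and the `int` return convention are all assumptions made for the example; the real implementation maintains its own doubly linked list of `Entry` objects guarded by `blocksLock`.

   ```java
   import java.util.ArrayDeque;
   import java.util.Deque;

   // Hypothetical sketch of LRU head-insert with tail eviction.
   // New entries go to the head; when the max size is exceeded,
   // the tail (least recently used) is evicted.
   final class LruSketch {
       private final Deque<Integer> blocks = new ArrayDeque<>();
       private final int maxBlocks;

       LruSketch(int maxBlocks) {
           this.maxBlocks = maxBlocks;
       }

       /**
        * Adds the block to the head and evicts the tail if the size
        * limit is exceeded.
        *
        * @return the evicted block number, or -1 if nothing was evicted.
        */
       synchronized int addToHeadAndEvictIfRequired(int blockNumber) {
           blocks.addFirst(blockNumber);
           if (blocks.size() > maxBlocks) {
               // Evict the least-recently-used entry at the tail.
               return blocks.removeLast();
           }
           return -1;
       }
   }
   ```

   The point above holds in this sketch too: whether eviction runs just before or just after another head insertion, the size bound is restored either way, so the ordering is not a correctness concern.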



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

