[ https://issues.apache.org/jira/browse/HADOOP-18291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738889#comment-17738889 ]

ASF GitHub Bot commented on HADOOP-18291:
-----------------------------------------

mehakmeet commented on code in PR #5754:
URL: https://github.com/apache/hadoop/pull/5754#discussion_r1247458556


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java:
##########
@@ -77,13 +78,14 @@ public ITestS3APrefetchingInputStream() {
 
   private static final int TIMEOUT_MILLIS = 5000;
   private static final int INTERVAL_MILLIS = 500;
-
+  private static final int PREFETCH_MAX_NUM_BLOCKS = 3;
 
   @Override
   public Configuration createConfiguration() {
     Configuration conf = super.createConfiguration();
     S3ATestUtils.removeBaseAndBucketOverrides(conf, PREFETCH_ENABLED_KEY);
     conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+    conf.setInt(FS_PREFETCH_MAX_BLOCKS_COUNT, PREFETCH_MAX_NUM_BLOCKS);

Review Comment:
   nit: Also remove the base and bucket overrides for this property at L86, so 
that the test behaves consistently across different environments.
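
A minimal sketch of the suggested change to createConfiguration() in this test class, adding the new property to the removeBaseAndBucketOverrides() call already shown in the diff above (assumes the varargs overload of that helper):

```java
  @Override
  public Configuration createConfiguration() {
    Configuration conf = super.createConfiguration();
    // clear base and per-bucket overrides for both properties the test sets,
    // so the values below take effect in every test environment
    S3ATestUtils.removeBaseAndBucketOverrides(conf,
        PREFETCH_ENABLED_KEY,
        FS_PREFETCH_MAX_BLOCKS_COUNT);
    conf.setBoolean(PREFETCH_ENABLED_KEY, true);
    conf.setInt(FS_PREFETCH_MAX_BLOCKS_COUNT, PREFETCH_MAX_NUM_BLOCKS);
    return conf;
  }
```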



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java:
##########
@@ -301,4 +303,56 @@ public void testStatusProbesAfterClosingStream() throws Throwable {
 
   }
 
+  @Test
+  public void testSeeksWithLruEviction() throws Throwable {
+    IOStatistics ioStats;
+    openFS();
+
+    try (FSDataInputStream in = largeFileFS.open(largeFile)) {
+      ioStats = in.getIOStatistics();
+
+      byte[] buffer = new byte[blockSize];
+

Review Comment:
   Add a comment here explaining at a high level what the test is doing, so 
that the flow of the LRU eviction is easier to follow.
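
One possible shape for such a comment (wording is illustrative only), placed at the top of the test body:

```java
      // High-level flow: FS_PREFETCH_MAX_BLOCKS_COUNT is set to
      // PREFETCH_MAX_NUM_BLOCKS (3), so the prefetch cache can hold at most
      // three blocks. The test reads a small amount from blocks 0..5 in
      // turn, forcing the cache to evict the least-recently-used blocks, and
      // finally seeks back into block 0 to show it must be fetched again
      // because it was evicted.
```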



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java:
##########
@@ -301,4 +303,56 @@ public void testStatusProbesAfterClosingStream() throws Throwable {
 
   }
 
+  @Test
+  public void testSeeksWithLruEviction() throws Throwable {
+    IOStatistics ioStats;
+    openFS();
+
+    try (FSDataInputStream in = largeFileFS.open(largeFile)) {
+      ioStats = in.getIOStatistics();
+
+      byte[] buffer = new byte[blockSize];
+
+      // Don't read block 0 completely
+      in.read(buffer, 0, blockSize - S_1K * 10);
+
+      // Seek to block 1 and don't read completely
+      in.seek(blockSize);
+      in.read(buffer, 0, 2 * S_1K);
+
+      // Seek to block 2 and don't read completely
+      in.seek(blockSize * 2L);
+      in.read(buffer, 0, 2 * S_1K);
+
+      // Seek to block 3 and don't read completely
+      in.seek(blockSize * 3L);
+      in.read(buffer, 0, 2 * S_1K);
+

Review Comment:
   Add a seek to the boundary between two blocks (e.g. 4*blockSize - 10KB) and 
read enough bytes (>10KB) for the read to span both blocks, then assert that 
the head of the list is the correct block.
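
A sketch of the suggested read across a block boundary, using the reviewer's example numbers; the head-of-list assertion is left out here since it depends on what the cache makes visible to the test:

```java
      // Seek to 10KB before the block 3 / block 4 boundary and read 20KB,
      // so the read spans both blocks and both become most recently used
      in.seek(blockSize * 4L - S_1K * 10);
      in.read(buffer, 0, S_1K * 20);
```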



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java:
##########
@@ -301,4 +303,56 @@ public void testStatusProbesAfterClosingStream() throws Throwable {
 
   }
 
+  @Test
+  public void testSeeksWithLruEviction() throws Throwable {
+    IOStatistics ioStats;
+    openFS();
+
+    try (FSDataInputStream in = largeFileFS.open(largeFile)) {
+      ioStats = in.getIOStatistics();
+
+      byte[] buffer = new byte[blockSize];
+
+      // Don't read block 0 completely
+      in.read(buffer, 0, blockSize - S_1K * 10);
+
+      // Seek to block 1 and don't read completely
+      in.seek(blockSize);
+      in.read(buffer, 0, 2 * S_1K);
+
+      // Seek to block 2 and don't read completely
+      in.seek(blockSize * 2L);
+      in.read(buffer, 0, 2 * S_1K);
+
+      // Seek to block 3 and don't read completely
+      in.seek(blockSize * 3L);
+      in.read(buffer, 0, 2 * S_1K);
+
+      // Seek to block 4 and don't read completely
+      in.seek(blockSize * 4L);
+      in.read(buffer, 0, 2 * S_1K);
+
+      // Seek to block 5 and don't read completely
+      in.seek(blockSize * 5L);
+      in.read(buffer, 0, 2 * S_1K);
+
+      // backward seek, can't use block 0 as it is evicted
+      in.seek(S_1K * 5);
+      in.read();
+

Review Comment:
   Assert the number of evictions performed, or the number of blocks removed 
from the list. At certain points, also check the size of the list to make sure 
it stays within the configured capacity.
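
A sketch of one way to check the list size, reusing the eventually()/verifyStatisticGaugeValue() pattern and the STREAM_READ_BLOCKS_IN_FILE_CACHE gauge this test class already uses elsewhere; the expected value (the configured maximum) is an assumption about where the gauge settles:

```java
      // After touching blocks 0..5 the cache should have evicted down to the
      // configured maximum; poll, since prefetching and eviction are async.
      LambdaTestUtils.eventually(TIMEOUT_MILLIS, INTERVAL_MILLIS, () -> {
        LOG.info("IO stats: {}", ioStats);
        verifyStatisticGaugeValue(ioStats, STREAM_READ_BLOCKS_IN_FILE_CACHE,
            PREFETCH_MAX_NUM_BLOCKS);
      });
```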





> S3A prefetch - Implement LRU cache for SingleFilePerBlockCache
> --------------------------------------------------------------
>
>                 Key: HADOOP-18291
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18291
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> Currently there is no limit on the size of the disk cache. This means we 
> could end up with a large number of files on disk, especially for access 
> patterns that are very random and do not always read the block fully. 
>  
> eg:
> in.seek(5);
> in.read(); 
> in.seek(blockSize + 10) // block 0 gets saved to disk as it's not fully read
> in.read();
> in.seek(2 * blockSize + 10) // block 1 gets saved to disk
> .. and so on
>  
> The in-memory cache is bounded, and by default has a limit of 72MB (9 
> blocks). When a block is fully read and a seek is issued, it is released 
> [here|https://github.com/apache/hadoop/blob/feature-HADOOP-18028-s3a-prefetch/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/read/S3CachingInputStream.java#L109].
>  We could also delete the on-disk file for the block at that point, if it exists. 
>  
> We could also add an upper limit on disk space and, when that limit is 
> reached, delete the file storing the data of the block furthest from the 
> current block (similar to the in-memory cache). 
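
To make the proposal concrete (a toy illustration only, not the actual SingleFilePerBlockCache code), a bounded LRU map of block number to on-disk cache file could look roughly like this, deleting the backing file of the least recently used block once the limit is exceeded:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy sketch: block number -> cache file, evicted in LRU order. */
class LruDiskBlockMap extends LinkedHashMap<Integer, Path> {
  private final int maxBlocks;

  LruDiskBlockMap(int maxBlocks) {
    // accessOrder = true makes iteration order least-recently-used first
    super(16, 0.75f, true);
    this.maxBlocks = maxBlocks;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<Integer, Path> eldest) {
    if (size() > maxBlocks) {
      try {
        // drop the on-disk file backing the evicted block (best effort)
        Files.deleteIfExists(eldest.getValue());
      } catch (IOException ignored) {
        // real code would log and count this failure
      }
      return true;
    }
    return false;
  }
}
```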


