mehakmeet commented on code in PR #5754:
URL: https://github.com/apache/hadoop/pull/5754#discussion_r1247486332
##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java:
##########
@@ -301,4 +303,56 @@ public void testStatusProbesAfterClosingStream() throws Throwable {
 }
+ @Test
+ public void testSeeksWithLruEviction() throws Throwable {
+ IOStatistics ioStats;
+ openFS();
+
+ try (FSDataInputStream in = largeFileFS.open(largeFile)) {
+ ioStats = in.getIOStatistics();
+
+ byte[] buffer = new byte[blockSize];
+
+ // Don't read block 0 completely
+ in.read(buffer, 0, blockSize - S_1K * 10);
+
+ // Seek to block 1 and don't read completely
+ in.seek(blockSize);
+ in.read(buffer, 0, 2 * S_1K);
+
+ // Seek to block 2 and don't read completely
+ in.seek(blockSize * 2L);
+ in.read(buffer, 0, 2 * S_1K);
+
+ // Seek to block 3 and don't read completely
+ in.seek(blockSize * 3L);
+ in.read(buffer, 0, 2 * S_1K);
+
+ // Seek to block 4 and don't read completely
+ in.seek(blockSize * 4L);
+ in.read(buffer, 0, 2 * S_1K);
+
+ // Seek to block 5 and don't read completely
+ in.seek(blockSize * 5L);
+ in.read(buffer, 0, 2 * S_1K);
+
+ // backward seek, can't use block 0 as it is evicted
+ in.seek(S_1K * 5);
+ in.read();
+
Review Comment:
Yeah, why not; it'll be good for debugging purposes. If there's any difference
between them, we'd know there's an issue with the proper deletion of the files
from the cache. A bit of overkill, but it never hurts 😄
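For context on why the backward seek above cannot be served from block 0: the prefetch cache evicts the least recently used block once capacity is reached. A minimal sketch of that behaviour, using a plain access-ordered `LinkedHashMap` rather than the actual S3A cache classes (the capacity of 5 blocks and all names here are illustrative assumptions, not the PR's configuration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch, NOT the S3A implementation: an access-ordered
// LinkedHashMap models the LRU block cache the test exercises.
public class LruEvictionSketch {
  // Assumed capacity for illustration only.
  static final int MAX_BLOCKS = 5;

  static Map<Integer, String> newCache() {
    // accessOrder=true makes iteration order reflect recency of access,
    // and removeEldestEntry drops the least recently used block.
    return new LinkedHashMap<Integer, String>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
        return size() > MAX_BLOCKS;
      }
    };
  }

  public static void main(String[] args) {
    Map<Integer, String> cache = newCache();
    // Touch blocks 0..5 in order, mirroring the seeks in the test.
    for (int block = 0; block <= 5; block++) {
      cache.put(block, "block-" + block);
    }
    // Block 0 was the least recently used, so it has been evicted;
    // a backward seek into block 0 must re-fetch it from the store.
    System.out.println(cache.containsKey(0)); // false
    System.out.println(cache.containsKey(5)); // true
  }
}
```

With six blocks touched and a capacity of five, block 0 is gone, which is exactly why the test's final `in.seek(S_1K * 5)` cannot reuse the cached block 0.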
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]