steveloughran commented on code in PR #5851:
URL: https://github.com/apache/hadoop/pull/5851#discussion_r1267749128


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java:
##########
@@ -94,17 +97,16 @@ public void teardown() throws Exception {
     largeFileFS = null;
   }
 
-  private void openFS() throws Exception {
+  private void openFS(String fileName) throws Exception {

Review Comment:
   change to using the existing fs, and using methodPath() to generate a per-test-case path (no test uses more than one file, does it?)
   ```java
   
   void createLargeFile() throws IOException {
     Configuration conf = getConfiguration();
     byte[] data = ContractTestUtils.dataset(S_1K * 72, 'x', 26);
     largeFile = methodPath();
     largeFileFS = getFileSystem();
     ContractTestUtils.writeDataset(largeFileFS, largeFile, data, data.length, 16, true);
     FileStatus fileStatus = largeFileFS.getFileStatus(largeFile);
     largeFileSize = fileStatus.getLen();
     numBlocks = calculateNumBlocks(largeFileSize, BLOCK_SIZE);
   }
   ```
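   The sketch calls a `calculateNumBlocks` helper that this hunk doesn't show; a minimal version, assuming it is just a ceiling division of the file length by the prefetch block size, could be:
   ```java
   public class NumBlocks {
     // assumed helper (not shown in the diff): number of prefetch blocks
     // needed to cover the file, i.e. ceiling(fileSize / blockSize)
     static int calculateNumBlocks(long fileSize, int blockSize) {
       return (int) ((fileSize + blockSize - 1) / blockSize);
     }

     public static void main(String[] args) {
       // a 72 KiB file with 10 KiB blocks needs 8 blocks (7 full + 1 partial)
       System.out.println(calculateNumBlocks(72 * 1024, 10 * 1024));
     }
   }
   ```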
   



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingInputStream.java:
##########
@@ -64,26 +63,30 @@ public ITestS3APrefetchingInputStream() {
   private static final Logger LOG =
       LoggerFactory.getLogger(ITestS3APrefetchingInputStream.class);
 
-  private static final int S_1K = 1024;
+  private static final int S_500 = 512;
+  private static final int S_1K = S_500 * 2;
   private static final int S_1M = S_1K * S_1K;
-  // Path for file which should have length > block size so S3ACachingInputStream is used
+  private int numBlocks;
   private Path largeFile;
   private FileSystem largeFileFS;
-  private int numBlocks;
-  private int blockSize;
+
+  // Size should be > block size so S3ACachingInputStream is used
   private long largeFileSize;
+
   // Size should be < block size so S3AInMemoryInputStream is used
-  private static final int SMALL_FILE_SIZE = S_1K * 16;
+  private static final int SMALL_FILE_SIZE = S_1K * 9;
 
   private static final int TIMEOUT_MILLIS = 5000;
   private static final int INTERVAL_MILLIS = 500;
-
+  private static final int BLOCK_SIZE = S_1K * 10;
 
   @Override
   public Configuration createConfiguration() {
     Configuration conf = super.createConfiguration();
     S3ATestUtils.removeBaseAndBucketOverrides(conf, PREFETCH_ENABLED_KEY);
+    S3ATestUtils.removeBaseAndBucketOverrides(conf, PREFETCH_BLOCK_SIZE_KEY);
     conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+    conf.setInt(PREFETCH_BLOCK_SIZE_KEY, BLOCK_SIZE);

Review Comment:
   because the size is now being set in this method, the filesystem 
automatically created/destroyed is already set up for the tests; we can replace 
openFS() entirely.
   
   



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingLruEviction.java:
##########
@@ -112,7 +111,6 @@ private void openFS() throws Exception {
     byte[] data = ContractTestUtils.dataset(SMALL_FILE_SIZE, 'x', 26);
     smallFile = path("iTestS3APrefetchingLruEviction");
     ContractTestUtils.writeDataset(getFileSystem(), smallFile, data, data.length, 16, true);
-    blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE);
     smallFileFS = new S3AFileSystem();

Review Comment:
   can use getFileSystem() here



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

