steveloughran commented on code in PR #5851:
URL: https://github.com/apache/hadoop/pull/5851#discussion_r1266680532
##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3APrefetchingLruEviction.java:
##########
@@ -78,39 +77,44 @@ public ITestS3APrefetchingLruEviction(final String maxBlocks) {
LoggerFactory.getLogger(ITestS3APrefetchingLruEviction.class);
private static final int S_1K = 1024;
+ private static final int S_500 = 512;
+ private static final int SMALL_FILE_SIZE = S_1K * 56;
+
// Path for file which should have length > block size so S3ACachingInputStream is used
- private Path largeFile;
- private FileSystem largeFileFS;
+ private Path smallFile;
+ private FileSystem smallFileFS;
private int blockSize;
- private static final int TIMEOUT_MILLIS = 5000;
+ private static final int TIMEOUT_MILLIS = 3000;
private static final int INTERVAL_MILLIS = 500;
@Override
public Configuration createConfiguration() {
Configuration conf = super.createConfiguration();
S3ATestUtils.removeBaseAndBucketOverrides(conf, PREFETCH_ENABLED_KEY);
S3ATestUtils.removeBaseAndBucketOverrides(conf, PREFETCH_MAX_BLOCKS_COUNT);
+ S3ATestUtils.removeBaseAndBucketOverrides(conf, PREFETCH_BLOCK_SIZE_KEY);
conf.setBoolean(PREFETCH_ENABLED_KEY, true);
conf.setInt(PREFETCH_MAX_BLOCKS_COUNT, Integer.parseInt(maxBlocks));
+ conf.setInt(PREFETCH_BLOCK_SIZE_KEY, S_1K * 10);
Review Comment:
Given the block size is now fixed here, why is it re-read on L115?
Proposed: make the block size a constant `BLOCK_SIZE` and use it wherever the value is needed.
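A minimal sketch of the proposal, with a plain `Map` standing in for the Hadoop `Configuration` so it runs standalone (the class name and the use of a `Map` are illustrative assumptions, not the test's actual code): define the block size once as a constant, write it into the configuration, and have later code use the constant directly instead of re-reading it.

```java
import java.util.HashMap;
import java.util.Map;

public class BlockSizeConstantSketch {
  private static final int S_1K = 1024;
  // Single source of truth for the prefetch block size
  // (value taken from the diff: S_1K * 10).
  private static final int BLOCK_SIZE = S_1K * 10;
  // Stand-in for the PREFETCH_BLOCK_SIZE_KEY constant.
  private static final String PREFETCH_BLOCK_SIZE_KEY = "fs.s3a.prefetch.block.size";

  // Mirrors createConfiguration(): set the block size from the constant.
  static Map<String, Integer> createConfiguration() {
    Map<String, Integer> conf = new HashMap<>();
    conf.put(PREFETCH_BLOCK_SIZE_KEY, BLOCK_SIZE);
    return conf;
  }

  public static void main(String[] args) {
    Map<String, Integer> conf = createConfiguration();
    // Later code (e.g. size/assertion logic) uses BLOCK_SIZE directly;
    // there is no need to read the value back out of the configuration.
    if (conf.get(PREFETCH_BLOCK_SIZE_KEY) != BLOCK_SIZE) {
      throw new AssertionError("block size mismatch");
    }
    System.out.println(BLOCK_SIZE);
  }
}
```

With the constant in place, any re-read of the block size from the configuration (such as the one flagged on L115) can be replaced by a direct reference to `BLOCK_SIZE`.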
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]