steveloughran commented on PR #5832:
URL: https://github.com/apache/hadoop/pull/5832#issuecomment-1636071189
HADOOP-18184. S3A prefetch unbuffer
* Lots of statistics collection, used in the tests.
* The S3A prefetch tests have all moved to the prefetch package
* and are split into caching-stream and large-file test classes.
* The large-file and LRU tests are scale tests.
* testRandomReadLargeFile uses a small block size to reduce read overhead.
* New hadoop-common org.apache.hadoop.test.Sizes class with predefined
  sizes (taken from the azure module; existing code has not been moved to it yet) -- see the sketch after this list.
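
As a rough illustration only, a constants class of predefined sizes might look like the sketch below; the field names (S_1K, S_1M, ...) are assumptions for illustration and may not match the actual org.apache.hadoop.test.Sizes fields.

```java
package org.apache.hadoop.test;

/**
 * Minimal sketch of a shared sizes class with predefined byte counts,
 * usable across filesystem tests (field names are illustrative).
 */
public final class Sizes {

  public static final int S_1K = 1024;          // 1 KiB
  public static final int S_4K = 4 * S_1K;
  public static final int S_1M = S_1K * S_1K;   // 1 MiB
  public static final int S_10M = 10 * S_1M;

  private Sizes() {
  }
}
```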
Overall, prefetch reads of the large files are slow; while it's critical
to test multi-block files, we don't need to do that against the landsat CSV
file. Better: one of the huge-file tests uses it, with a small block size of
1 MB to force lots of work.
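
A hedged sketch (not the actual test code) of the idea: shrink the prefetch block size so even a modest read spans multiple blocks, exercise unbuffer(), and dump the stream IOStatistics. The configuration keys "fs.s3a.prefetch.enabled" and "fs.s3a.prefetch.block.size", and the bucket/path used here, are assumptions for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.statistics.IOStatisticsLogging;

public class PrefetchUnbufferExample {

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // enable prefetching and shrink the block size to 1 MB so that a
    // multi-block read happens even on a modest file
    conf.setBoolean("fs.s3a.prefetch.enabled", true);
    conf.setInt("fs.s3a.prefetch.block.size", 1024 * 1024);

    // placeholder path; any multi-megabyte object will do
    Path path = new Path("s3a://example-bucket/large-file.csv");

    try (FileSystem fs = path.getFileSystem(conf);
         FSDataInputStream in = fs.open(path)) {
      byte[] buffer = new byte[8192];
      in.readFully(0, buffer);   // trigger prefetching of the first block(s)
      in.unbuffer();             // release buffers/cache between reads
      in.readFully(0, buffer);   // read again after unbuffer

      // print the stream statistics; tests would assert on specific counters
      System.out.println(
          IOStatisticsLogging.ioStatisticsToPrettyString(in.getIOStatistics()));
    }
  }
}
```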
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]