steveloughran opened a new pull request, #7281: URL: https://github.com/apache/hadoop/pull/7281
HADOOP-19229

* The default min/max merge sizes are now 16K and 1M.
* s3a and abfs use 128K as the minimum size, 2M for the maximum.

These values are based on those reported in Facebook's Velox paper: 20K for SSD, 500K for cloud storage.

Also adds a new file `org.apache.hadoop.io.Sizes`, which provides constants for various binary sizes, based on a file in the hadoop-azure test source. This is NOT used anywhere else in the source other than in the new vector ranges, though it MAY/SHOULD be used in future.

We should be aware that with a larger range, the possibility of failure of merged reads may increase; #7105 (HADOOP-19105. _Improve resilience in vector reads._) is intended to address this. This change was on that commit chain, but has now been pulled out for independent review.

### How was this patch tested?

Existing tests were rerun with assertions modified to cope with the changed defaults. This did find one unrelated regression in the abfs test suites, filed as HADOOP-19382.

No performance tests were done; any numbers here would be very insightful.

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
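To illustrate what the two tunables control, here is a minimal, self-contained sketch of threshold-based range merging. This is NOT the Hadoop vector IO implementation (the class and method names here are hypothetical); it only shows the idea being tuned: gaps smaller than the minimum seek size are bridged by merging adjacent ranges, and merged ranges never grow past the maximum merged size.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the Hadoop implementation) of how vector
 * read range merging depends on two tunables:
 *   minSeek   - gaps smaller than this are bridged by merging
 *   maxMerged - merged ranges never grow beyond this size
 */
public class RangeMergeSketch {

    /** A byte range [offset, offset + length). */
    public record Range(long offset, long length) {
        long end() { return offset + length; }
    }

    /** Merge sorted, non-overlapping ranges under the two thresholds. */
    public static List<Range> mergeRanges(List<Range> sorted,
                                          long minSeek, long maxMerged) {
        List<Range> out = new ArrayList<>();
        Range current = null;
        for (Range r : sorted) {
            if (current == null) {
                current = r;
                continue;
            }
            long gap = r.offset() - current.end();
            long combined = r.end() - current.offset();
            if (gap < minSeek && combined <= maxMerged) {
                // bridge the gap: one larger read replaces two small ones
                current = new Range(current.offset(), combined);
            } else {
                out.add(current);
                current = r;
            }
        }
        if (current != null) {
            out.add(current);
        }
        return out;
    }

    public static void main(String[] args) {
        // With a 16K min seek and 1M max merged size, two 4K reads
        // sitting 8K apart are coalesced into a single 16K read.
        List<Range> merged = mergeRanges(
            List.of(new Range(0, 4096), new Range(12288, 4096)),
            16 * 1024, 1024 * 1024);
        System.out.println(merged.size() + " range(s), first length = "
            + merged.get(0).length());
    }
}
```

Raising the thresholds (as this PR does for s3a and abfs) coalesces more small reads into fewer, larger requests, which suits the high per-request latency of cloud storage but also discards more data from the bridged gaps.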
