[
https://issues.apache.org/jira/browse/HADOOP-18391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17583190#comment-17583190
]
ASF GitHub Bot commented on HADOOP-18391:
-----------------------------------------
mukund-thakur opened a new pull request, #4787:
URL: https://github.com/apache/hadoop/pull/4787
Part of HADOOP-18103.
### Description of PR
VectoredReadUtils.readInDirectBuffer should allocate a buffer of at most a
fixed maximum size (e.g. 4 MB), then do repeated reads and copies; this
ensures that many threads doing ranged requests don't trigger an OOM.
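Roughly, the bounded-buffer loop looks like this (a minimal, self-contained
sketch rather than the actual patch; `PositionedRead`, `MAX_CHUNK` and
`readInChunks` are illustrative names, not Hadoop API):
```java
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;

public class ChunkedDirectRead {

  /** Hypothetical callback mirroring a positioned read: (position, buf, off, len) -> bytes read. */
  @FunctionalInterface
  interface PositionedRead {
    int read(long position, byte[] buffer, int offset, int length) throws IOException;
  }

  /** Upper bound on the intermediate heap buffer, e.g. 4 MB. */
  private static final int MAX_CHUNK = 4 * 1024 * 1024;

  /**
   * Fill {@code target} (a direct buffer with {@code length} bytes of space)
   * by repeatedly reading at most MAX_CHUNK bytes into a reusable heap array
   * and copying each chunk into the direct buffer, instead of allocating one
   * large transient buffer per range.
   */
  static void readInChunks(PositionedRead reader, long offset, int length,
      ByteBuffer target) throws IOException {
    byte[] chunk = new byte[Math.min(length, MAX_CHUNK)];
    int remaining = length;
    long position = offset;
    while (remaining > 0) {
      int toRead = Math.min(remaining, chunk.length);
      int read = reader.read(position, chunk, 0, toRead);
      if (read < 0) {
        throw new EOFException("EOF at position " + position);
      }
      target.put(chunk, 0, read);   // copy the chunk into the direct buffer
      position += read;
      remaining -= read;
    }
    target.flip();                  // ready for the caller to consume
  }
}
```
With a cap like this, the peak extra memory per reading thread is bounded by
MAX_CHUNK rather than by the largest requested range.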
### How was this patch tested?
Ran the existing tests. Also added a unit test for zero-byte file ranges.
### For code changes:
- [ ] Does the title of this PR start with the corresponding JIRA issue id
(e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the
endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies
licensed in a way that is compatible for inclusion under [ASF
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`,
`NOTICE-binary` files?
> Improve VectoredReadUtils
> -------------------------
>
> Key: HADOOP-18391
> URL: https://issues.apache.org/jira/browse/HADOOP-18391
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs
> Affects Versions: 3.3.9
> Reporter: Steve Loughran
> Assignee: Mukund Thakur
> Priority: Major
>
> Harden the VectoredReadUtils methods for consistent and more robust use,
> especially in those filesystems which don't have the API.
> VectoredReadUtils.readInDirectBuffer should allocate a buffer of at most a
> fixed maximum size (e.g. 4 MB), then do repeated reads and copies; this
> ensures that many threads doing ranged requests don't trigger an OOM. Other
> libraries do this.
> readVectored should call validateNonOverlappingAndReturnSortedRanges before
> iterating; this ensures the abfs/s3a requirements are always met, and,
> because ranges will be read in order, prefetching by other clients will keep
> their performance good.
> readVectored should add special handling for zero-byte ranges.
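For illustration, a minimal self-contained sketch of that validate-then-iterate
flow with the zero-byte short-circuit; `SimpleRange`, `validateAndSort` and
`completeZeroByteRanges` are hypothetical names, not the actual Hadoop API:
```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class RangeValidation {

  /** Hypothetical stand-in for a vectored-read file range. */
  static final class SimpleRange {
    final long offset;
    final int length;
    final CompletableFuture<ByteBuffer> data = new CompletableFuture<>();
    SimpleRange(long offset, int length) {
      this.offset = offset;
      this.length = length;
    }
  }

  /** Return the ranges sorted by offset, failing fast if any two overlap. */
  static List<SimpleRange> validateAndSort(List<SimpleRange> input) {
    List<SimpleRange> sorted = new ArrayList<>(input);
    sorted.sort(Comparator.comparingLong(r -> r.offset));
    for (int i = 1; i < sorted.size(); i++) {
      SimpleRange prev = sorted.get(i - 1);
      SimpleRange cur = sorted.get(i);
      if (prev.offset + prev.length > cur.offset) {
        throw new IllegalArgumentException(
            "Overlapping ranges at offsets " + prev.offset + " and " + cur.offset);
      }
    }
    return sorted;
  }

  /** Complete zero-byte ranges without any IO; return only the ranges that still need reads. */
  static List<SimpleRange> completeZeroByteRanges(List<SimpleRange> sorted) {
    List<SimpleRange> toRead = new ArrayList<>();
    for (SimpleRange range : sorted) {
      if (range.length == 0) {
        range.data.complete(ByteBuffer.allocate(0));  // nothing to read
      } else {
        toRead.add(range);
      }
    }
    return toRead;
  }
}
```
Sorting before iterating also means the underlying stream issues its range
reads in offset order, which is what keeps client-side prefetching effective.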