[
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=756104&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756104
]
ASF GitHub Bot logged work on HADOOP-16202:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 13/Apr/22 00:22
Start Date: 13/Apr/22 00:22
Worklog Time Spent: 10m
Work Description: mukund-thakur commented on PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#issuecomment-1097421960
I reviewed the s3a code as well. I think this is good to go once we have
yetus running well.
I am running the tests as well.
Although I think this will introduce many conflicts during the vectored
IO merge, we can figure that out when the time comes.
Issue Time Tracking
-------------------
Worklog Id: (was: 756104)
Time Spent: 18h 50m (was: 18h 40m)
> Enhance openFile() for better read performance against object stores
> ---------------------------------------------------------------------
>
> Key: HADOOP-16202
> URL: https://issues.apache.org/jira/browse/HADOOP-16202
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs, fs/s3, tools/distcp
> Affects Versions: 3.3.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Time Spent: 18h 50m
> Remaining Estimate: 0h
>
> The {{openFile()}} builder API lets us add new options when reading a file.
> Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows
> the length of the file to be declared. If set, *no check for the existence of
> the file is issued when opening the file*.
> Also: change withFileStatus() to take any FileStatus implementation, rather
> than only S3AFileStatus, and to not check that the path matches the path being
> opened. This is needed to support viewFS-style wrapping and mounting.
> Also: adopt this where appropriate, so that clusters with S3A reads switched
> to random IO do not have download/localization killed in:
> * fs shell copyToLocal
> * distcp
> * IOUtils.copy
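The description above can be sketched as follows. This is a minimal, hedged illustration of the builder usage the issue describes: the option key {{"fs.s3a.open.option.length"}} and the {{withFileStatus()}} call are taken from the issue text, but the exact option names and builder signatures in the merged code may differ.

```java
// Sketch of the openFile() builder pattern described in HADOOP-16202.
// Assumes Hadoop 3.3+ on the classpath; option key taken from the issue
// text and may differ from what was finally merged.
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenFileSketch {

  /**
   * Open a file whose length is already known, so the store can skip
   * the existence/size probe (a HEAD request against S3) when opening.
   */
  public static FSDataInputStream openWithKnownLength(
      FileSystem fs, Path path, long knownLength) throws Exception {
    return fs.openFile(path)
        // Declare the length up front; passed as a string via the
        // generic opt(String, String) overload.
        .opt("fs.s3a.open.option.length", Long.toString(knownLength))
        // build() returns a CompletableFuture<FSDataInputStream>;
        // get() blocks until the stream is ready.
        .build()
        .get();
  }

  /**
   * Variant passing a FileStatus (any implementation, per the issue,
   * not just S3AFileStatus), e.g. one obtained through a viewFS mount.
   */
  public static FSDataInputStream openWithStatus(
      FileSystem fs, Path path, FileStatus status) throws Exception {
    return fs.openFile(path)
        .withFileStatus(status)
        .build()
        .get();
  }
}
```

With a known length or a cached FileStatus, callers such as distcp or the fs shell can avoid one round trip per file opened, which is where the read-performance gain against object stores comes from.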
--
This message was sent by Atlassian Jira
(v8.20.1#820001)