[
https://issues.apache.org/jira/browse/HADOOP-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran updated HADOOP-12444:
------------------------------------
Attachment: HADOOP-12444-007.patch
This is patch 007: patch 006 rebased against the latest (006) revision of the
positioned-readable tests.
h3. in {{read(byte[] buf, int off, int len)}}
* when reading len==0 bytes from a zero-byte file at offset 0, the return
value must be zero
* the position update code had been pulled inside the try/catch clause; the
{{pos}} counters would have become invalid whenever an exception was raised
but the re-open and retried read() succeeded. The update now happens outside
the try/catch.
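The two {{read()}} fixes above could look roughly like this. This is a sketch against a plain {{InputStream}}, and the names here ({{reopen()}}, the exact retry flow) are assumptions rather than lines from the patch:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadContractSketch {
  private InputStream wrappedStream;
  private final byte[] data;
  long pos;

  ReadContractSketch(byte[] data) {
    this.data = data;
    this.wrappedStream = new ByteArrayInputStream(data);
  }

  // stand-in for the S3 re-open: rebuild the stream and skip to pos
  private void reopen(long targetPos) throws IOException {
    wrappedStream = new ByteArrayInputStream(data);
    wrappedStream.skip(targetPos);
  }

  public synchronized int read(byte[] buf, int off, int len) throws IOException {
    if (len == 0) {
      // a zero-length read returns 0, even at offset 0 of a zero-byte file
      return 0;
    }
    int bytesRead;
    try {
      bytesRead = wrappedStream.read(buf, off, len);
    } catch (IOException e) {
      reopen(pos);  // recover: re-open at the current position and retry once
      bytesRead = wrappedStream.read(buf, off, len);
    }
    // the position update sits outside the try/catch, so the counter is
    // advanced exactly once, after whichever attempt actually succeeded
    if (bytesRead > 0) {
      pos += bytesRead;
    }
    return bytesRead;
  }

  public static void main(String[] args) throws IOException {
    ReadContractSketch empty = new ReadContractSketch(new byte[0]);
    System.out.println(empty.read(new byte[8], 0, 0));  // prints 0
  }
}
```

Note that a raw {{ByteArrayInputStream}} would return -1 for a len==0 read at EOF, which is exactly the contract quirk the first bullet guards against.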
h3. other
# use a multi-catch clause for the socket and socket timeout exceptions;
simpler codepath.
# in the refactored {{closeStream()}} operation, implemented HADOOP-11874: if
an IOE is thrown in {{wrappedStream.close()}}, it is caught and the operation
is converted to an abort().
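The multi-catch shape in item 1 is just the Java 7 construct; which exceptions the patch actually names is an assumption here, and {{flakyRead()}} is a stand-in for the wrapped-stream read:

```java
import java.io.IOException;
import java.net.SocketException;
import java.net.SocketTimeoutException;

public class MultiCatchSketch {
  // simulate a transient network failure on the first attempt
  static int attempts = 0;

  static int flakyRead() throws IOException {
    attempts++;
    if (attempts == 1) {
      throw new SocketTimeoutException("read timed out");
    }
    return 42;
  }

  static int readWithRetry() throws IOException {
    try {
      return flakyRead();
    } catch (SocketTimeoutException | SocketException e) {
      // one handler for both failure modes: simpler codepath than two
      // separate catch blocks performing the same recovery
      return flakyRead();  // stand-in for re-open + retry
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(readWithRetry());  // prints 42 after one retried timeout
  }
}
```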
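The {{closeStream()}} behaviour in item 2 can be sketched roughly as follows; the {{abort()}} body and field names are assumptions about the patch, not copied from it:

```java
import java.io.IOException;
import java.io.InputStream;

public class CloseStreamSketch {
  InputStream wrappedStream;
  boolean aborted = false;

  CloseStreamSketch(InputStream in) {
    wrappedStream = in;
  }

  // stand-in for the HTTP-level abort: drop the connection without
  // draining the remaining bytes of the GET
  void abort() {
    aborted = true;
    wrappedStream = null;
  }

  // HADOOP-11874 behaviour: if the graceful close() fails, fall back to
  // abort() instead of propagating the IOException to the caller
  void closeStream() {
    if (wrappedStream == null) {
      return;
    }
    try {
      wrappedStream.close();
      wrappedStream = null;
    } catch (IOException e) {
      abort();  // close failed mid-stream: convert the operation to an abort
    }
  }
}
```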
Beyond getting all the tests passing, I spent time reviewing the exception
handling code, to see if a problem was lurking which the tests weren't
catching. Remember: object stores are often used long-haul, so they do fail;
recoverability matters.
I'm happy with the code as it is now: LGTM
> Consider implementing lazy seek in S3AInputStream
> -------------------------------------------------
>
> Key: HADOOP-12444
> URL: https://issues.apache.org/jira/browse/HADOOP-12444
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.7.1
> Reporter: Rajesh Balamohan
> Assignee: Rajesh Balamohan
> Attachments: HADOOP-12444-004.patch, HADOOP-12444-005.patch,
> HADOOP-12444-006.patch, HADOOP-12444-007.patch, HADOOP-12444.1.patch,
> HADOOP-12444.2.patch, HADOOP-12444.3.patch, HADOOP-12444.WIP.patch,
> hadoop-aws-test-reports.tar.gz
>
>
> - Currently, "read(long position, byte[] buffer, int offset, int length)" is
> not implemented in S3AInputStream (unlike DFSInputStream). So,
> "readFully(long position, byte[] buffer, int offset, int length)" in
> S3AInputStream goes through the default implementation of seek(), read(),
> seek() in FSInputStream.
> - However, seek() in S3AInputStream involves re-opening the connection to S3
> every time
> (https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L115).
>
> - It would be good to consider a lazy seek implementation to reduce
> connection overheads to S3. (e.g. Presto implements lazy seek:
> https://github.com/facebook/presto/blob/master/presto-hive/src/main/java/com/facebook/presto/hive/PrestoS3FileSystem.java#L623)
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)