[
https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=627494&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627494
]
ASF GitHub Bot logged work on HADOOP-17812:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 26/Jul/21 02:00
Start Date: 26/Jul/21 02:00
Worklog Time Spent: 10m
Work Description: wbo4958 commented on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-886314881
Hi @steveloughran
I modified the unit tests so that they cover the NPE described in the JIRA, and
I ran the integration tests; some of them failed. I don't know whether those
failures are expected. I posted the test results at
https://issues.apache.org/jira/browse/HADOOP-17812?focusedCommentId=17386993&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17386993
I still think throwing an exception when wrappedStream is null may be
better. If we use
``` java
int b;
try {
  if (wrappedStream == null) {
    // The stream was closed by an earlier failure; attempt to
    // re-open it before reading (there is no exception yet, so
    // pass null).
    onReadFailure(null, 1, false);
  }
  b = wrappedStream.read();
} catch (EOFException e) {
  // End of stream.
  return -1;
} catch (SocketTimeoutException e) {
  // Force-abort the connection, then rethrow so the retry
  // mechanism can re-invoke read().
  onReadFailure(e, 1, true);
  throw e;
} catch (IOException e) {
  // Close and re-open the stream, then rethrow for the retry.
  onReadFailure(e, 1, false);
  throw e;
}
```
then if the onReadFailure() call made on detecting wrappedStream == null itself
fails, onReadFailure() will be called again inside the `catch (IOException e) {`
block. My intention is to let the retry mechanism handle the re-open.
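For illustration, here is a sketch of the variant I mean, mirroring the block
above (the exception message is just a placeholder, not the exact code I'd
propose):
``` java
int b;
try {
  if (wrappedStream == null) {
    // Throw instead of re-opening inline; the catch below calls
    // onReadFailure() once, and the surrounding retry policy
    // re-invokes read() after the stream has been re-opened.
    throw new IOException("read() on a stream closed by an earlier failure");
  }
  b = wrappedStream.read();
} catch (EOFException e) {
  return -1;
} catch (SocketTimeoutException e) {
  onReadFailure(e, 1, true);
  throw e;
} catch (IOException e) {
  // Single place that closes/re-opens the stream and rethrows,
  // so onReadFailure() cannot be invoked twice for one failure.
  onReadFailure(e, 1, false);
  throw e;
}
```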
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 627494)
Time Spent: 2h (was: 1h 50m)
> NPE in S3AInputStream read() after failure to reconnect to store
> ----------------------------------------------------------------
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.2, 3.3.1
> Reporter: Bobby Wang
> Priority: Major
> Labels: pull-request-available
> Attachments: s3a-test.tar.gz
>
> Time Spent: 2h
> Remaining Estimate: 0h
>
> When [reading from S3A
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
> an SSLException (which extends IOException) can occur, which triggers
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original
> *wrappedStream*, sets *wrappedStream = null*, and then tries to
> [re-get
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> But if the preceding code [obtaining the
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
> throws an exception, *wrappedStream* remains null.
> The
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
> mechanism may then re-execute
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
> and cause an NPE.
>
> For more details, please refer to
> [https://github.com/NVIDIA/spark-rapids/issues/2915]
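>
> A minimal, self-contained sketch of that failure sequence (illustrative
> only, not the actual S3AInputStream code; all names here are simplified):
> ``` java
> import java.io.ByteArrayInputStream;
> import java.io.IOException;
> import java.io.InputStream;
>
> // Mimics how reopen() nulls the stream before re-fetching it, so a
> // failed re-fetch leaves it null and a retried read() hits the NPE.
> public class NpeRepro {
>   private InputStream wrappedStream = new ByteArrayInputStream(new byte[]{1});
>
>   private void reopen() throws IOException {
>     wrappedStream = null;  // close first, before re-fetching the object
>     // simulate the S3 GET failing before wrappedStream is re-assigned
>     throw new IOException("simulated failure re-opening the object");
>   }
>
>   public int read() throws IOException {
>     return wrappedStream.read();  // NPE once wrappedStream is null
>   }
>
>   public static void main(String[] args) throws IOException {
>     NpeRepro in = new NpeRepro();
>     try {
>       in.reopen();  // a read failure triggers a reopen, which fails
>     } catch (IOException e) {
>       // a retry policy would swallow this and try again ...
>     }
>     in.read();  // ... and the retried read() throws NullPointerException
>   }
> }
> ```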
--
This message was sent by Atlassian Jira
(v8.3.4#803005)