[
https://issues.apache.org/jira/browse/HDFS-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16673168#comment-16673168
]
Erik Krogen commented on HDFS-14048:
------------------------------------
The fix is simple: when resetting the error state in
{{DataStreamer#createBlockOutputStream()}}, also call {{lastException.clear()}}.
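Roughly, the reset block quoted in the description below would become something
like this (just a sketch of the idea; the actual change is in the attached patch):
{code}
// In DataStreamer#createBlockOutputStream(), after the new block output stream
// has been set up successfully:
errorState.resetInternalError();
lastException.clear(); // also drop the stale "A datanode is restarting" exception
// remove all restarting nodes from failed nodes list
failed.removeAll(restartingNodes);
restartingNodes.clear();
{code}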
[~szetszwo], any interest in helping to review, given your involvement with the
related past JIRAs?
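For context, here is a caller pattern that ends up calling {{close()}} twice
(hypothetical names like {{fs}} and {{path}}, not taken from the report, but it
matches what the description below walks through, assuming a DataNode restarted
during the write):
{code}
FileSystem fs = FileSystem.get(new Configuration());
Path path = new Path("/tmp/out");
try (FSDataOutputStream out = fs.create(path)) {
  out.writeBytes("data");
  out.close();  // first close: succeeds once the DataNode restart has been handled
}               // try-with-resources closes again: isClosed() is true, and the stale
                // "A datanode is restarting" lastException is (incorrectly) re-thrown
{code}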
> DFSOutputStream close() throws exception on subsequent call after DataNode
> restart
> ----------------------------------------------------------------------------------
>
> Key: HDFS-14048
> URL: https://issues.apache.org/jira/browse/HDFS-14048
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Affects Versions: 2.3.0
> Reporter: Erik Krogen
> Assignee: Erik Krogen
> Priority: Major
> Attachments: HDFS-14048.000.patch
>
>
> We recently discovered an issue in which, during a rolling upgrade, some jobs
> were failing with exceptions like (sadly this is the whole stack trace):
> {code}
> java.io.IOException: A datanode is restarting: DatanodeInfoWithStorage[1.1.1.1:71,BP-XXXX,DISK]
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:877)
> {code}
> with an earlier statement in the log like:
> {code}
> INFO [main] org.apache.hadoop.hdfs.DFSClient: A datanode is restarting: DatanodeInfoWithStorage[1.1.1.1:71,BP-XXXX,DISK]
> {code}
> Strangely we did not see any other logs about the {{DFSOutputStream}} failing
> after waiting for the DataNode restart. We eventually realized that in some
> cases {{DFSOutputStream#close()}} may be called more than once, and that if
> so, the {{IOException}} above is thrown on the _second_ call to {{close()}}
> (this is the behavior even with HDFS-5335; before that change it would have
> been thrown on every call to {{close()}} after the first).
> The problem is that in {{DataStreamer#createBlockOutputStream()}}, after the
> new output stream is created, it resets the error states:
> {code}
> errorState.resetInternalError();
> // remove all restarting nodes from failed nodes list
> failed.removeAll(restartingNodes);
> restartingNodes.clear();
> {code}
> But it forgets to clear {{lastException}}. When
> {{DFSOutputStream#closeImpl()}} is called a second time, this block is
> triggered:
> {code}
> if (isClosed()) {
>   LOG.debug("Closing an already closed stream. [Stream:{}, streamer:{}]",
>       closed, getStreamer().streamerClosed());
>   try {
>     getStreamer().getLastException().check(true);
> {code}
> The second time, {{isClosed()}} is true, so the exception checking occurs and
> the "Datanode is restarting" exception is thrown even though the stream has
> already been successfully closed.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)