[
https://issues.apache.org/jira/browse/HDFS-12881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284047#comment-16284047
]
Jason Lowe commented on HDFS-12881:
-----------------------------------
Thanks for the patch!
The patch updates the handling of input streams, but this bug only applies to
output streams. For an input stream, once the code has read the data it needs,
errors that happen on close() are uninteresting: the data is already in hand,
so a close() failure should not fail the overall operation. For output
streams, however, close() must complete successfully, otherwise previously
written data could be lost (e.g., because it was still sitting in a buffer
that close() failed to flush).
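To illustrate the distinction, here is a minimal sketch (class, method, and
variable names are illustrative, not from the patch; it assumes a FileSystem,
a Path, and an slf4j logger):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StreamCloseSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(StreamCloseSketch.class);

  // Input side: once readFully() has returned, the data is in buf, so a
  // failure in close() is uninteresting and may be logged and suppressed.
  static void readSide(FileSystem fs, Path path, byte[] buf)
      throws IOException {
    FSDataInputStream in = fs.open(path);
    try {
      in.readFully(buf);
    } finally {
      IOUtils.cleanupWithLogger(LOG, in); // OK: swallow close() errors here
    }
  }

  // Output side: close() is what flushes buffered bytes, so a failure there
  // can mean lost data; try-with-resources lets the IOException propagate.
  static void writeSide(FileSystem fs, Path path, byte[] buf)
      throws IOException {
    try (FSDataOutputStream out = fs.create(path)) {
      out.write(buf);
    } // an IOException from close() surfaces to the caller here
  }
}
{code}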
> Output streams closed with IOUtils suppressing write errors
> -----------------------------------------------------------
>
> Key: HDFS-12881
> URL: https://issues.apache.org/jira/browse/HDFS-12881
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Jason Lowe
> Assignee: Ajay Kumar
> Attachments: HDFS-12881.001.patch
>
>
> There are a few places in HDFS code that are closing an output stream with
> IOUtils.cleanupWithLogger like this:
> {code}
> try {
>   ...write to outStream...
> } finally {
>   IOUtils.cleanupWithLogger(LOG, outStream);
> }
> {code}
> This suppresses any IOException thrown during the close() method, which
> could lead to partial/corrupted output without a corresponding exception.
> The code should either use try-with-resources or explicitly close the
> stream within the try block so that an exception thrown during close()
> propagates just as exceptions during write operations do.
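For reference, the second fix suggested in the description (closing
explicitly inside the try block so close() failures propagate, while still
guarding against a leak) might look roughly like this; outStream, fs, path,
data, and LOG are placeholders rather than code from the patch:
{code}
FSDataOutputStream outStream = fs.create(path);
try {
  outStream.write(data);
  outStream.close();   // inside the try: an IOException here propagates
  outStream = null;    // mark as closed so the finally block skips it
} finally {
  // No-op if close() above succeeded (nulls are skipped); otherwise this
  // releases the stream while logging, not masking, the earlier error.
  IOUtils.cleanupWithLogger(LOG, outStream);
}
{code}
Either variant ensures a write-path close() failure reaches the caller
instead of only the log.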