[ https://issues.apache.org/jira/browse/HADOOP-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12574668#action_12574668 ]
Raghu Angadi commented on HADOOP-2926:
--------------------------------------
> Shouldn't we try to make this idiom work well with HDFS?
I am not sure why this idiom would not work with HDFS now.
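For what it's worth, the close-before-finally pattern maps directly onto HDFS streams. A minimal sketch (assuming the usual FileSystem/FSDataOutputStream API and an illustrative writeAndClose helper, not code from any patch here):
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class CloseExample {
  // Write bytes to an HDFS file so that an error detected in close()
  // propagates to the caller instead of being swallowed by a finally block.
  static void writeAndClose(FileSystem fs, Path path, byte[] data) throws IOException {
    FSDataOutputStream out = fs.create(path);
    try {
      out.write(data);
      out.close();              // a failure here is reported to the caller
    } catch (IOException ie) {
      IOUtils.closeStream(out); // best-effort cleanup only after the failure
      throw ie;
    }
  }
}
{code}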
> Ignoring IOExceptions on close
> ------------------------------
>
> Key: HADOOP-2926
> URL: https://issues.apache.org/jira/browse/HADOOP-2926
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Owen O'Malley
> Assignee: dhruba borthakur
> Priority: Critical
> Fix For: 0.16.1
>
>
> Currently in HDFS there are a lot of calls to IOUtils.closeStream made from
> finally blocks. I'm worried that this can lead to data corruption in the
> file system. Take the first instance in DataNode.copyBlock: it writes the
> block and then calls closeStream on the output stream. If there is an error
> at the end of the file that is detected in the close, it will be *completely*
> ignored. Note that logging the error is not enough; the error must be
> thrown so that the client knows the failure happened.
> {code}
> try {
>   file1.write(...);
>   file2.write(...);
> } finally {
>   IOUtils.closeStream(file1);
>   IOUtils.closeStream(file2);
> }
> {code}
> is *bad*. It must be rewritten as:
> {code}
> try {
>   file1.write(...);
>   file2.write(...);
>   file1.close();
>   file2.close();
> } catch (IOException ie) {
>   IOUtils.closeStream(file1);
>   IOUtils.closeStream(file2);
>   throw ie;
> }
> {code}
> I also think that IOUtils.closeStream should be renamed
> IOUtils.cleanupFailedStream or something to make it clear it can only be used
> after the write operation has failed and is being cleaned up.
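Purely to illustrate that naming suggestion (a hypothetical helper, not something proposed as a patch here), a cleanup-after-failure method along those lines might look like:
{code}
// Hypothetical: same behavior as today's IOUtils.closeStream (close quietly,
// ignore IOException), but named so it reads as failure-path cleanup only.
public static void cleanupFailedStream(java.io.Closeable stream) {
  if (stream != null) {
    try {
      stream.close();
    } catch (java.io.IOException ignored) {
      // deliberately ignored: the original write failure is what gets reported
    }
  }
}
{code}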