[ 
https://issues.apache.org/jira/browse/HDFS-43?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-43.
----------------------------------

    Resolution: Not a Problem

Closing this as not a problem.

> Ignoring IOExceptions on close
> ------------------------------
>
>                 Key: HDFS-43
>                 URL: https://issues.apache.org/jira/browse/HDFS-43
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Owen O'Malley
>            Assignee: dhruba borthakur
>            Priority: Critical
>         Attachments: closeStream.patch
>
>
> Currently in HDFS there are a lot of calls to IOUtils.closeStream that are 
> from finally blocks. I'm worried that this can lead to data corruption in the 
> file system. Take the first instance in DataNode.copyBlock: it writes the 
> block and then calls closeStream on the output stream. If there is an error 
> at the end of the file that is detected in the close, it will be *completely* 
> ignored. Note that logging the error is not enough; the error should be 
> thrown so that the client knows the failure happened.
> {code}
>    try {
>      file1.write(...);
>      file2.write(...);
>    } finally {
>      IOUtils.closeStream(file1);
>      IOUtils.closeStream(file2);
>    }
> {code}
> is *bad*. It must be rewritten as:
> {code}
>    try {
>      file1.write(...);
>      file2.write(...);
>      file1.close();
>      file2.close();
>    } catch (IOException ie) {
>      IOUtils.closeStream(file1);
>      IOUtils.closeStream(file2);
>      throw ie;
>    }
> {code}
> I also think that IOUtils.closeStream should be renamed 
> IOUtils.cleanupFailedStream or something to make it clear it can only be used 
> after the write operation has failed and is being cleaned up.
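The difference between the two patterns can be demonstrated with a minimal, self-contained sketch. This is not HDFS code: `FailingStream` and the local `closeStream` helper are hypothetical stand-ins for a datanode output stream and `IOUtils.closeStream`, used only to show that the finally-block pattern swallows a close-time failure while the rewritten pattern propagates it:

```java
import java.io.IOException;
import java.io.OutputStream;

public class ClosePropagationDemo {
    /** Hypothetical stream whose close() fails, mimicking an error detected at close time. */
    static class FailingStream extends OutputStream {
        @Override public void write(int b) { /* writes succeed */ }
        @Override public void close() throws IOException {
            throw new IOException("error detected during close");
        }
    }

    /** Quiet close in the style of IOUtils.closeStream: the IOException is swallowed. */
    static void closeStream(OutputStream s) {
        if (s != null) {
            try { s.close(); } catch (IOException ignored) { /* logged at best */ }
        }
    }

    /** Bad pattern: close in a finally block via closeStream; the close failure is lost. */
    public static void badWrite() throws IOException {
        OutputStream file = new FailingStream();
        try {
            file.write(1);
        } finally {
            closeStream(file);   // IOException from close() is silently dropped
        }
    }

    /** Good pattern: close inside the try so its IOException reaches the caller. */
    public static void goodWrite() throws IOException {
        OutputStream file = new FailingStream();
        try {
            file.write(1);
            file.close();        // a failure here propagates to the client
        } catch (IOException ie) {
            closeStream(file);   // best-effort cleanup only after a failure
            throw ie;
        }
    }

    public static void main(String[] args) {
        boolean badThrew = false, goodThrew = false;
        try { badWrite(); } catch (IOException e) { badThrew = true; }
        try { goodWrite(); } catch (IOException e) { goodThrew = true; }
        System.out.println("bad threw: " + badThrew + ", good threw: " + goodThrew);
    }
}
```

Running `main` shows that only the rewritten pattern surfaces the close-time failure, which is exactly the data-corruption risk described above.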



--
This message was sent by Atlassian JIRA
(v6.2#6252)
