[ https://issues.apache.org/jira/browse/HADOOP-2067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12612437#action_12612437 ]

Steve Loughran commented on HADOOP-2067:
----------------------------------------

I've been seeing this when terminating services that have a DFS client. There's 
no isOpen() call, so all we can do is try to close any non-null DFS reference 
and print the exception.
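
As a rough sketch, that shutdown path looks something like this (the class and 
field names here are illustrative, not from the actual service code):

<code>
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;

/** Hypothetical service holding a DFS client; names are illustrative. */
public class DfsService {
  private FileSystem fileSystem; // may already have been closed elsewhere

  public synchronized void terminate() {
    if (fileSystem != null) {
      try {
        fileSystem.close();
      } catch (IOException e) {
        // No way to tell a harmless "already closed" from a real failure,
        // so all we can do is log it and carry on with shutdown.
        e.printStackTrace();
      } finally {
        fileSystem = null;
      }
    }
  }
}
</code>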

The exception could be caught and swallowed, but how do you differentiate a 
harmless "Already closed" exception from a harmful "we couldn't write the data 
and your work is lost" exception?

A backwards-compatible solution would be to have a special subclass of 
IOException indicating that the DFS is already closed, something callers could 
catch and ignore.
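
A minimal sketch of that idea (the name StreamAlreadyClosedException is 
hypothetical, not an existing Hadoop class):

<code>
import java.io.IOException;

/** Hypothetical marker for a close() on an already-closed DFS client. */
public class StreamAlreadyClosedException extends IOException {
  public StreamAlreadyClosedException(String message) {
    super(message);
  }
}
</code>

Callers could then catch StreamAlreadyClosedException and swallow it, while 
still letting any other IOException propagate as a real failure.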

> multiple close() failing in Hadoop 0.14
> ---------------------------------------
>
>                 Key: HADOOP-2067
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2067
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.14.3
>            Reporter: Lohit Vijayarenu
>         Attachments: stack_trace_13_and_14.txt
>
>
> It looks like multiple close() calls while reading files from DFS are failing 
> in Hadoop 0.14. This was somehow not caught in Hadoop 0.13.
> The use case was to open a file on DFS as shown below:
> <code>
> FSDataInputStream fSDataInputStream =
>     fileSystem.open(new Path(propertyFileName));
> Properties subProperties = new Properties();
> subProperties.loadFromXML(fSDataInputStream);
> fSDataInputStream.close();
> </code>
> This failed with an IOException
> <exception>
> EXCEPTION RAISED, which is java.io.IOException: Stream closed
> java.io.IOException: Stream closed
> </exception>
> The stack trace shows the stream is being closed twice. This used to work in 
> Hadoop 0.13, which happened to hide the double close.
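> Going by the Javadoc for java.util.Properties, loadFromXML closes the stream 
> it reads from, so the explicit close() above is the second close. A minimal 
> caller-side workaround would be to drop that close():
> <code>
> // Properties.loadFromXML closes the stream it is given,
> // so there is no need to close it again afterwards.
> FSDataInputStream in = fileSystem.open(new Path(propertyFileName));
> Properties subProperties = new Properties();
> subProperties.loadFromXML(in); // 'in' is closed when this returns
> </code>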
> Attached to this JIRA is a text file with the stack traces from both Hadoop 
> 0.13 and Hadoop 0.14.
> How should this be handled from a user's point of view?
> Thanks

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
