[ 
https://issues.apache.org/jira/browse/HADOOP-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HADOOP-4760:
----------------------------------

    Attachment: closehdfsstream_v2.patch

Incorporated Raghu's comments: checkOpen() is now called only if closed is false. 
Also fixed DFSClient.close() so that it does not throw an exception. 
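To illustrate, a minimal sketch of the idempotent-close pattern being discussed (the class name and method bodies here are hypothetical, not the actual DFSClient code; only the closed flag / checkOpen() idea comes from the patch discussion):

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical stream demonstrating an idempotent close(): a second
// close() is a silent no-op, while other operations on a closed
// stream still fail via checkOpen().
public class IdempotentStream extends OutputStream {
    private boolean closed = false;

    // Throws if the stream has already been closed.
    private void checkOpen() throws IOException {
        if (closed) {
            throw new IOException("Stream is closed");
        }
    }

    @Override
    public void write(int b) throws IOException {
        checkOpen();      // writes after close() must still fail
        // ... write the byte to the underlying destination ...
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            return;       // already closed: do nothing, per the
                          // java.io.Closeable contract
        }
        closed = true;
        // ... release underlying resources ...
    }
}
```

This matches the java.io.Closeable contract, which states that closing a previously closed stream has no effect.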

The @Override's are pretty useful (especially when refactoring) and are 
introduced intentionally by my Eclipse save actions. I am in favor of keeping 
them.

Import statements are reordered, and *'s are expanded to actual class names, 
again by save actions. According to our guidelines, import statements should 
only reference actual classes. As seen here, the imports keep flipping between 
explicit class names and * from patch to patch. Maybe we should add a check 
for this to the test-patch script.

> HDFS streams should not throw exceptions when closed twice
> ----------------------------------------------------------
>
>                 Key: HADOOP-4760
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4760
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 0.18.4, 0.19.1, 0.20.0, 0.21.0
>         Environment: all
>            Reporter: Alejandro Abdelnur
>            Assignee: Enis Soztutar
>             Fix For: 0.20.0
>
>         Attachments: closehdfsstream_v1.patch, closehdfsstream_v2.patch
>
>
> When adding an {{InputStream}} via {{addResource(InputStream)}} to a 
> {{Configuration}} instance, if the stream is an HDFS stream, the 
> {{loadResource(..)}} method fails with an {{IOException}} indicating that 
> the stream has already been closed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
