[ https://issues.apache.org/jira/browse/HADOOP-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13185784#comment-13185784 ]

Suresh Srinivas commented on HADOOP-7973:
-----------------------------------------

bq. Let's say I'm using DFS. I use some other package that opens the same 
default DFS, does something, and then closes it. Whatever I was doing before I 
called the external routine is now invalidated. What if I was writing to an 
output stream? How would apps be able to reasonably recover from their fs being 
unexpectedly closed when there's not really an error? Or am I misunderstanding 
your intent?

The only way to handle this is to never call {{close}} from the app if you use 
the cached file system. That way you get the benefit of the cache where it is 
needed (such as long-running clients that create many file system instances).
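The hazard described above can be shown with a minimal, self-contained sketch. This is not Hadoop code: {{SketchFs}} and {{CacheCloseDemo}} are hypothetical stand-ins for the real {{FileSystem}} and its internal cache, which is keyed by (scheme, authority, ugi) and hands the same instance to every caller of {{FileSystem.get()}}.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a cached FileSystem instance.
class SketchFs {
    private boolean closed = false;

    void write(String data) {
        if (closed) throw new IllegalStateException("filesystem already closed");
        // ...pretend to write data...
    }

    void close() { closed = true; }

    boolean isClosed() { return closed; }
}

public class CacheCloseDemo {
    // Cache keyed by URI string, mimicking how the real cache returns
    // one shared instance to all callers with the same key.
    private static final Map<String, SketchFs> CACHE = new HashMap<>();

    static SketchFs get(String uri) {
        return CACHE.computeIfAbsent(uri, k -> new SketchFs());
    }

    public static void main(String[] args) {
        SketchFs mine = get("hdfs://nn:8020");   // my reference
        SketchFs theirs = get("hdfs://nn:8020"); // a library's reference: same object

        theirs.close(); // the library "politely" closes its handle...

        // ...and my handle is dead too, because both point at the cached instance.
        try {
            mine.write("payload");
            System.out.println("write succeeded");
        } catch (IllegalStateException e) {
            System.out.println("write failed: " + e.getMessage());
        }
    }
}
```

Running this prints "write failed: filesystem already closed", which is exactly the failure mode the comment warns about: one caller's close invalidates every other reference to the cached instance.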
                
> DistributedFileSystem close has severe consequences
> ---------------------------------------------------
>
>                 Key: HADOOP-7973
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7973
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 1.0.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Blocker
>         Attachments: HADOOP-7973.patch
>
>
> The way {{FileSystem#close}} works is very problematic.  Since the 
> {{FileSystems}} are cached, any {{close}} by any caller will cause problems 
> for every other reference to it.  Will add more detail in the comments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
