[ https://issues.apache.org/jira/browse/HDFS-3359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13268579#comment-13268579 ]

Tsz Wo (Nicholas), SZE commented on HDFS-3359:
----------------------------------------------

Hi Todd, FileContext won't call DFSClient.close(), so the socket cache won't be 
cleaned up for clients created through FileContext.  I think we should either:
- change it from per-DFSClient to per-DFSInputStream; or
- change it to a global static cache (like what we did for LeaseRenewer); a 
  rough sketch of that option is below.

Thoughts?
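
To make the second option concrete, here is a very rough sketch of a JVM-wide 
static cache; all of the names here are made up for illustration and are not 
code from the HDFS tree or the attached patches.

{code:java}
// Very rough sketch only: all names here are made up for illustration
// and are not taken from the HDFS source tree or the attached patches.
import java.io.IOException;
import java.net.Socket;
import java.net.SocketAddress;
import java.util.HashMap;
import java.util.Map;

public class GlobalSocketCache {
  // Single JVM-wide instance, analogous to the shared LeaseRenewer.
  private static final GlobalSocketCache INSTANCE = new GlobalSocketCache();

  // One idle socket per datanode address; a real cache would likely keep
  // several entries per address and expire idle ones.
  private final Map<SocketAddress, Socket> cache =
      new HashMap<SocketAddress, Socket>();

  private GlobalSocketCache() {}

  public static GlobalSocketCache getInstance() {
    return INSTANCE;
  }

  // Take a cached connection for reuse, or null if none is available.
  public synchronized Socket get(SocketAddress addr) {
    return cache.remove(addr);
  }

  // Return an idle connection for later reuse; close any entry it displaces.
  public synchronized void put(SocketAddress addr, Socket sock) {
    closeQuietly(cache.put(addr, sock));
  }

  // Close and drop every cached socket, e.g. on process shutdown.
  public synchronized void closeAll() {
    for (Socket s : cache.values()) {
      closeQuietly(s);
    }
    cache.clear();
  }

  private static void closeQuietly(Socket s) {
    if (s == null) {
      return;
    }
    try {
      s.close();
    } catch (IOException ignored) {
      // best effort during cleanup
    }
  }
}
{code}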
                
> DFSClient.close should close cached sockets
> -------------------------------------------
>
>                 Key: HDFS-3359
>                 URL: https://issues.apache.org/jira/browse/HDFS-3359
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.22.0, 2.0.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Critical
>             Fix For: 0.23.3, 2.0.0
>
>         Attachments: hdfs-3359-branch-0.23.txt, hdfs-3359.txt, hdfs-3359.txt
>
>
> Some applications, like the TT/JT (pre-2.0) and probably the RM/NM, cycle 
> through DistributedFileSystem objects reasonably frequently. As long as they 
> call close() it isn't a big problem, except that currently DFSClient.close() 
> doesn't explicitly close the SocketCache. So unless a full GC runs (causing 
> the references to be finalized), many SocketCaches can get orphaned, each 
> with many open sockets inside. We should fix close() to close all of the 
> cached sockets.
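
For context, a minimal sketch of the kind of change the summary above asks for, 
i.e. close() draining the per-client cache itself instead of relying on 
finalization. The class and field names below are placeholders standing in for 
DFSClient and its SocketCache, not the contents of the attached patches.

{code:java}
// Sketch only: class and field names are placeholders standing in for
// DFSClient and its SocketCache; this is not the code in the patches.
import java.io.Closeable;
import java.io.IOException;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class CachingClient implements Closeable {
  // Per-client cache of idle datanode connections.
  private final List<Socket> cachedSockets = new ArrayList<Socket>();
  private boolean running = true;

  // The point of this issue: close() drains the cache itself, so the
  // sockets are released immediately rather than staying open until a
  // full GC finalizes the orphaned cache.
  @Override
  public synchronized void close() throws IOException {
    if (!running) {
      return;
    }
    running = false;
    for (Socket s : cachedSockets) {
      try {
        s.close();
      } catch (IOException ignored) {
        // best effort during shutdown
      }
    }
    cachedSockets.clear();
    // ... other shutdown work (lease renewer, namenode RPC proxy, etc.)
  }
}
{code}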


        
