[ 
https://issues.apache.org/jira/browse/HADOOP-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637297#action_12637297
 ] 

Bryan Duxbury commented on HADOOP-4346:
---------------------------------------

I'd love to try this out on 0.18.1. Sounds like my exact problem.

> Hadoop triggers a "soft" fd leak. 
> ----------------------------------
>
>                 Key: HADOOP-4346
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4346
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.17.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: HADOOP-4346.patch
>
>
> Starting with Hadoop-0.17, most of the network I/O uses non-blocking NIO 
> channels. Normal blocking reads and writes are handled by Hadoop and use our 
> own cache of selectors. This cache suits Hadoop well, where I/O often 
> occurs on many short-lived threads. The number of fds consumed is 
> proportional to the number of threads currently blocked. 
> If blocking I/O is done through java.*, Sun's implementation uses internal 
> per-thread selectors. These selectors are closed using {{sun.misc.Cleaner}}. 
> This cleaning appears to work like finalizers and is tied to GC, which is 
> ill-suited to a workload with many short-lived threads: until a GC happens, 
> the number of these selectors keeps growing, and each selector consumes 3 
> fds.
> Though blocking reads and writes are handled by Hadoop, {{connect()}} still 
> uses the default implementation with its per-thread selector. 
> Koji helped a lot in tracking this down. Some sections from 'jmap' output 
> and other info Koji collected led to this suspicion; I will include that in 
> the next comment.
> One solution might be to handle connect() also in Hadoop using our selectors.
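For illustration, the proposed fix could look roughly like the sketch below: a connect() performed on a non-blocking channel that blocks on an explicitly managed Selector instead of the JDK's hidden per-thread one. The class and method names here are hypothetical, not Hadoop's actual code, and where the real patch would borrow the selector from Hadoop's shared cache, this sketch simply opens and closes one locally.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class SelectorConnect {

    // Hypothetical sketch: connect a SocketChannel using an explicitly
    // managed Selector rather than the JDK's internal per-thread one.
    // A real fix would borrow the selector from a shared cache; here we
    // open and close it locally for simplicity.
    public static void connect(SocketChannel channel, InetSocketAddress addr,
                               long timeoutMs) throws IOException {
        channel.configureBlocking(false);
        if (channel.connect(addr)) {
            return;                          // connected immediately
        }
        Selector selector = Selector.open(); // cache.borrow() in real code
        try {
            channel.register(selector, SelectionKey.OP_CONNECT);
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (true) {
                long left = deadline - System.currentTimeMillis();
                if (left <= 0) {
                    throw new SocketTimeoutException("connect timed out");
                }
                // Block on OUR selector, so no per-thread internal selector
                // (and its 3 fds) is ever created for this thread.
                selector.select(left);
                if (channel.finishConnect()) {
                    return;
                }
            }
        } finally {
            selector.close();                // cache.release() in real code
        }
    }
}
```

With a cached selector, fd usage stays proportional to the number of threads concurrently blocked in connect(), instead of growing with every short-lived thread until the next GC.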

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
