[
https://issues.apache.org/jira/browse/HBASE-24?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12662740#action_12662740
]
stack commented on HBASE-24:
----------------------------
Thank Luo Ning. Yeah, agree with your exposition above. In 0.19.0 hadoop, I
believe the timed-out socket on datanode is revived by dfsclient (hadoop-3831).
I need to test. Regardless, we need to put a bound on number of mapfiles (As
you've been saying for a good while now). Let me look at your patch (500
regions per server is really good for 0.18.x hbase).
> Scaling: Too many open file handles to datanodes
> ------------------------------------------------
>
> Key: HBASE-24
> URL: https://issues.apache.org/jira/browse/HBASE-24
> Project: Hadoop HBase
> Issue Type: Bug
> Components: regionserver
> Reporter: stack
> Priority: Blocker
> Fix For: 0.20.0
>
> Attachments: HBASE-823.patch, MonitoredReader.java
>
>
> We've been here before (HADOOP-2341).
> Today the rapleaf folks gave me an lsof listing from a regionserver. It had
> thousands of open sockets to datanodes, all in ESTABLISHED and CLOSE_WAIT
> state. On average they seem to have about ten file descriptors/sockets open
> per region (they have 3 column families IIRC; per family, there can be
> between 1-5 or so mapfiles open -- 3 is the usual max, but while compacting
> we open a new one, etc.).
> They have thousands of regions. 400 regions -- ~100G, which is not that
> much -- takes about 4k open file handles.
> If they want a regionserver to serve a decent disk's worth -- 300-400G --
> then that's maybe 1600 regions... 16k file handles. If there are more than
> just 3 column families, then we are in danger of blowing out limits if they
> are 32k.
> We've been here before with HADOOP-2341.
> A dfsclient that used non-blocking i/o would help applications like hbase
> (The datanode doesn't have this problem as bad -- CLOSE_WAIT on regionserver
> side, the bulk of the open fds in the rapleaf log, don't have a corresponding
> open resource on datanode end).
> Could also just open mapfiles as needed, but that'd kill our random read
> performance, and it's bad enough already.
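The back-of-the-envelope numbers above can be sketched as follows. This is a minimal estimate, not HBase code; the assumption of roughly one open socket per open mapfile is inferred from the ~10 descriptors/region figure reported for 3 column families:

```python
def estimated_open_fds(regions, families=3, mapfiles_per_family=3):
    """Rough estimate of datanode sockets a regionserver holds open.

    Assumes ~1 open socket per open mapfile, which roughly matches the
    ~10 fds/region observed in the lsof listing for 3 column families.
    """
    return regions * families * mapfiles_per_family

# 400 regions (~100G): ~3600 handles, in line with the ~4k observed.
print(estimated_open_fds(400))

# 1600 regions (a 300-400G server): ~14400 handles, uncomfortably close
# to a 16k or even 32k ulimit once extra families are added.
print(estimated_open_fds(1600))
```

With more column families the estimate scales linearly, which is why bounding the number of open mapfiles matters more than raising the ulimit.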
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.