[ https://issues.apache.org/jira/browse/HBASE-24?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613082#action_12613082 ]

LN commented on HBASE-24:
-------------------------

From my profiling results, the memory usage of a regionserver is determined by 3 
things:
1. the mapfile index read into memory (io.map.index.skip can shrink it, but 
whatever is loaded stays in memory whether you need it or not -- see the sketch 
after this list)
2. the data output buffer used by each SequenceFile$Reader (each can be measured 
as roughly the largest value size in its file)
3. the memcache, which HBase can control itself. 
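
As a minimal sketch of point 1 (not from this ticket): io.map.index.skip is read 
when a MapFile is opened, so it has to be set on the Configuration beforehand. 
The path and the skip value below are just illustrative assumptions.

// Sketch only: trimming the in-memory MapFile index via io.map.index.skip.
// With skip=7 the reader keeps roughly every 8th index entry, cutting index
// memory ~8x at the cost of a short sequential scan per random get.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.MapFile;

public class IndexSkipExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("io.map.index.skip", 7);   // keep ~1 of every 8 index entries
    FileSystem fs = FileSystem.get(conf);
    // hypothetical mapfile path, for illustration only
    MapFile.Reader reader =
        new MapFile.Reader(fs, "/hbase/example/region/family/mapfile", conf);
    // ... random gets now pay a short scan between retained index entries
    reader.close();
  }
}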

So, if the number of concurrently open mapfiles is limited, the memory usage of a 
regionserver is limited too; otherwise a regionserver will hit an OOME as the 
data size grows. In my test environment, 100G of data (200M of mapfile index in 
total, 2000 HStoreFiles open, 512M memcache) used 1.5G of heap memory, the 
maximum I can set with -Xmx.
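
A back-of-the-envelope sketch of where that 1.5G goes. The per-reader buffer 
size is an assumption (the comment only says each buffer grows to the largest 
value in its file); the other numbers are the ones quoted above.

// Rough heap estimate, not a measurement from the ticket.
public class RegionServerHeapEstimate {
  public static void main(String[] args) {
    long indexBytes      = 200L << 20;  // ~200M of mapfile index, all resident
    int  openStoreFiles  = 2000;        // open HStoreFiles, one reader each
    long bufferPerReader = 256L << 10;  // assumed ~256K largest-value buffer per reader
    long memcacheBytes   = 512L << 20;  // 512M memcache in this test

    long total = indexBytes + openStoreFiles * bufferPerReader + memcacheBytes;
    System.out.printf("~%.2fG before JVM/GC overhead -- close to the 1.5G -Xmx ceiling%n",
        total / (1024.0 * 1024 * 1024));
  }
}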

I think this issue needs to be taken seriously on the HBase side, not only on the DFS side.

> Scaling: Too many open file handles to datanodes
> ------------------------------------------------
>
>                 Key: HBASE-24
>                 URL: https://issues.apache.org/jira/browse/HBASE-24
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: stack
>            Priority: Critical
>
> We've been here before (HADOOP-2341).
> Today Rapleaf gave me an lsof listing from a regionserver.  It had thousands 
> of open sockets to datanodes, all in ESTABLISHED and CLOSE_WAIT state.  On 
> average they seem to have about ten file descriptors/sockets open per region 
> (They have 3 column families IIRC.  Each family can have between 1-5 or so 
> mapfiles open -- 3 is the usual max... but while compacting we open a new 
> one, etc.).
> They have thousands of regions.  400 regions -- ~100G, which is not that 
> much -- take about 4k open file handles.
> If they want a regionserver to serve a decent disk's worth -- 300-400G -- 
> then that's maybe 1600 regions... 16k file handles.  If there are more than 
> just 3 column families... then we are in danger of blowing out limits if 
> they are 32k.
> We've been here before with HADOOP-2341.
> A dfsclient that used non-blocking i/o would help applications like hbase 
> (The datanode doesn't have this problem as badly -- the sockets in CLOSE_WAIT 
> on the regionserver side, the bulk of the open fds in the Rapleaf log, don't 
> have a corresponding open resource on the datanode end).
> We could also just open mapfiles as needed, but that'd kill our random read 
> performance and it's bad enough already.
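
For reference, a rough sketch of the file-handle arithmetic in the description 
above, using only the numbers it quotes (~10 fds/sockets per region, 400 vs. 
1600 regions).

// Back-of-the-envelope only: handle count scales linearly with region count.
public class FileHandleEstimate {
  public static void main(String[] args) {
    int fdsPerRegion = 10;              // observed average from the lsof listing
    int[] regionCounts = {400, 1600};   // ~100G of data vs. a 300-400G disk's worth
    for (int regions : regionCounts) {
      System.out.println(regions + " regions -> ~" + regions * fdsPerRegion
          + " open handles");
    }
    // 1600 regions -> ~16k handles, uncomfortably close to a 32k ulimit
  }
}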
