[
https://issues.apache.org/jira/browse/HBASE-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182908#comment-13182908
]
stack commented on HBASE-4956:
------------------------------
Hmmm... Looking in the code, we allocate a BufferedOutputStream using the
default buffer size:
{code}
this.out = new DataOutputStream(
    new BufferedOutputStream(NetUtils.getOutputStream(socket)));
{code}
... so, it seems that yeah, at any one time, you'd think the maximum allocation
would be 8k, as you say Lars.
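For reference, the one-arg java.io.BufferedOutputStream constructor allocates an 8192-byte heap buffer; the two-arg constructor takes an explicit size, so a sketch of choosing that size deliberately could look like the following (the wrap method and class name are illustrative only, not the actual HBase code):
{code}
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.OutputStream;

// Illustrative only: BufferedOutputStream(out) uses an 8192-byte buffer,
// while BufferedOutputStream(out, size) lets the caller pick the size.
public class BufferedStreamSizing {
  static DataOutputStream wrap(OutputStream socketOut, int bufferSize) {
    return new DataOutputStream(new BufferedOutputStream(socketOut, bufferSize));
  }
}
{code}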
Jonathan says 3 of these buffers are allocated when reading. Presume similar for
writing (easy to check, I suppose; I haven't). That's 6*8k per thread since
these are thread local... which still doesn't seem like that much. You'd need
lots of threads to run into trouble.
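A quick back-of-the-envelope sketch of that per-thread math (the thread count here is just a hypothetical figure):
{code}
// Back-of-the-envelope sketch of the per-thread buffer footprint discussed above.
// Assumes 3 buffers for reading plus 3 for writing, each at the 8 KB default.
public class BufferMath {
  public static void main(String[] args) {
    int bufferSize = 8 * 1024;     // default BufferedInputStream/BufferedOutputStream size
    int buffersPerThread = 3 + 3;  // 3 reading + 3 writing (assumed symmetric)
    int threads = 1000;            // hypothetical number of client threads
    long perThreadBytes = (long) buffersPerThread * bufferSize;
    double totalMb = perThreadBytes * threads / (1024.0 * 1024.0);
    System.out.printf("per thread: %d KB, %d threads: %.1f MB%n",
        perThreadBytes / 1024, threads, totalMb);
    // prints: per thread: 48 KB, 1000 threads: 46.9 MB
  }
}
{code}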
We need to reproduce.
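As a first step toward reproducing, one cheap thing to do is simply watch the JVM's direct buffer pool while a client workload runs; a minimal monitoring sketch (uses the Java 7 BufferPoolMXBean; the class name and sampling interval are just illustrative):
{code}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// Minimal sketch: periodically print direct-buffer pool usage while a client
// workload (many HBaseClient threads reading/writing) runs in the same JVM.
// This only observes; it does not drive any load itself.
public class DirectMemoryWatcher {
  public static void main(String[] args) throws InterruptedException {
    List<BufferPoolMXBean> pools =
        ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
    while (true) {
      for (BufferPoolMXBean pool : pools) {
        if ("direct".equals(pool.getName())) {
          System.out.printf("direct buffers: count=%d, used=%d bytes%n",
              pool.getCount(), pool.getMemoryUsed());
        }
      }
      Thread.sleep(5000); // sample every 5 seconds
    }
  }
}
{code}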
> Control direct memory buffer consumption by HBaseClient
> -------------------------------------------------------
>
> Key: HBASE-4956
> URL: https://issues.apache.org/jira/browse/HBASE-4956
> Project: HBase
> Issue Type: New Feature
> Reporter: Ted Yu
>
> As Jonathan explained here
> https://groups.google.com/group/asynchbase/browse_thread/thread/c45bc7ba788b2357?pli=1
> , the standard hbase client inadvertently consumes a large amount of direct memory.
> We should consider using netty for NIO-related tasks.