[
https://issues.apache.org/jira/browse/HBASE-15788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15648254#comment-15648254
]
stack commented on HBASE-15788:
-------------------------------
So, we have (onheap + offheap) * 2 /*ShareableMemory*/ * 2 (NoTags)? That's
fine; we can do a reduction in a different issue.
I like how you wrote out the explanation for BBSOS. Can you put it in the class
comment, since it makes sense there?
How about BBWriter? That marks a class as something that has BB write APIs?
The BBSOSWrapper is different though? It adds BB writing AND OS? Is that so? Or
is it that it wraps an existing OS? Then it would be fine to call it
BBWriterWrapper?
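To make sure we are talking about the same thing, here is a rough sketch of what
I have in mind; the interface/class names below are hypothetical stand-ins, not
the actual classes in the patch:
{code}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

/** Marker interface: "this stream has BB write APIs". Hypothetical name. */
interface BBWriter {
  void write(ByteBuffer b, int off, int len) throws IOException;
}

/** Wraps an EXISTING OS and adds BB writing on top of it. Hypothetical name. */
class BBWriterWrapper extends OutputStream implements BBWriter {
  private final OutputStream delegate; // e.g. the OS we got from DFSClient

  BBWriterWrapper(OutputStream delegate) {
    this.delegate = delegate;
  }

  @Override
  public void write(int b) throws IOException {
    delegate.write(b);
  }

  @Override
  public void write(byte[] b, int off, int len) throws IOException {
    delegate.write(b, off, len);
  }

  @Override
  public void write(ByteBuffer b, int off, int len) throws IOException {
    if (b.hasArray()) {
      // On heap BB: hand the backing array straight to the wrapped OS, no copy.
      delegate.write(b.array(), b.arrayOffset() + off, len);
    } else {
      // Off heap BB: the wrapped OS knows only byte[], so copy out first.
      byte[] tmp = new byte[len];
      ByteBuffer dup = b.duplicate();
      dup.limit(off + len);
      dup.position(off);
      dup.get(tmp);
      delegate.write(tmp);
    }
  }
}
{code}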
On ByteBuff... sorry... Got confused w/ ByteBuf (smile).
> Use Offheap ByteBuffers from BufferPool to read RPC requests.
> -------------------------------------------------------------
>
> Key: HBASE-15788
> URL: https://issues.apache.org/jira/browse/HBASE-15788
> Project: HBase
> Issue Type: Sub-task
> Components: regionserver
> Reporter: ramkrishna.s.vasudevan
> Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-15788.patch, HBASE-15788_V4.patch,
> HBASE-15788_V5.patch
>
>
> Right now, when an RPC request reaches the RpcServer, we read the request into
> an on demand created byte[]. When it is a write request including many
> mutations, the request size will be somewhat larger, and we end up creating
> many temp on heap byte[]s and causing more GC.
> We have a ByteBufferPool of fixed size off heap BBs. This is currently used at
> the RpcServer only while sending read responses. We can make use of the same
> pool while reading requests as well. Instead of reading the whole request into
> a single BB, we can read into N BBs (based on the request size). When a BB is
> not available from the pool, we will fall back to the old way of on demand on
> heap byte[] creation.
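> A minimal sketch of that read loop (the pool interface and method names below
> are made-up stand-ins, not the actual ByteBufferPool/RpcServer API):
> {code}
> import java.io.EOFException;
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import java.nio.channels.ReadableByteChannel;
>
> final class RequestReadSketch {
>   /** Made-up stand-in for the fixed size off heap buffer pool. */
>   interface PoolSketch {
>     int getBufferSize();
>     ByteBuffer getBuffer(); // null when the pool has nothing to give out
>   }
>
>   static ByteBuffer[] readRequest(ReadableByteChannel ch, PoolSketch pool, int reqLen)
>       throws IOException {
>     int bufSize = pool.getBufferSize();
>     int n = (reqLen + bufSize - 1) / bufSize; // N BBs based on the req size
>     ByteBuffer[] bufs = new ByteBuffer[n];
>     int remaining = reqLen;
>     for (int i = 0; i < n; i++) {
>       ByteBuffer bb = pool.getBuffer();
>       if (bb == null) {
>         // Fall back to the old way: on demand on heap byte[] creation.
>         bb = ByteBuffer.allocate(Math.min(remaining, bufSize));
>       } else {
>         bb.clear();
>       }
>       bb.limit(Math.min(bb.capacity(), remaining));
>       while (bb.hasRemaining()) {
>         if (ch.read(bb) < 0) {
>           throw new EOFException("EOF while reading request");
>         }
>       }
>       bb.flip();
>       remaining -= bb.limit();
>       bufs[i] = bb;
>     }
>     return bufs;
>   }
> }
> {code}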
> Remember these are off heap BBs. We read many proto objects (like the header,
> Mutation protos etc.) from these request bytes. Thanks to PB 3 and our shading
> work, off heap BBs are supported there now. The payload cells are also in these
> DBBs now. The codec decoder can work on these and create off heap BB backed
> Cells. The whole of our write path works with Cells now. At the time of addition
> to the memstore, these cells are by default copied to MSLAB (an off heap based
> pooled MSLAB issue will follow this one). If the MSLAB copy is not possible, we
> will do a copy to an on heap byte[].
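> Roughly, that copy-on-add step looks like the below (the interfaces and method
> names are hypothetical stand-ins, not the real Cell/MemStoreLAB API):
> {code}
> import java.nio.ByteBuffer;
>
> final class MemstoreAddSketch {
>   /** Stand-in for a Cell whose bytes may live in an off heap DBB from the request. */
>   interface CellSketch {
>     ByteBuffer keyValueBytes();
>   }
>
>   /** Stand-in for the (pooled, possibly off heap) MSLAB. */
>   interface LabSketch {
>     /** Copies the cell's bytes into an MSLAB chunk; null when there is no space. */
>     CellSketch copyCellInto(CellSketch c);
>   }
>
>   static CellSketch copyForMemstore(CellSketch incoming, LabSketch mslab) {
>     CellSketch copied = mslab.copyCellInto(incoming);
>     if (copied != null) {
>       return copied; // now backed by MSLAB memory; request BBs can be released
>     }
>     // MSLAB copy not possible: fall back to a fresh on heap byte[] backed copy.
>     ByteBuffer src = incoming.keyValueBytes().duplicate();
>     byte[] onHeap = new byte[src.remaining()];
>     src.get(onHeap);
>     final ByteBuffer heapCopy = ByteBuffer.wrap(onHeap);
>     return () -> heapCopy;
>   }
> }
> {code}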
> One possible downside of this is:
> Before adding to the Memstore, we write to the WAL. So the Cells created out of
> the off heap BBs (Codec#Decoder) will be used to write to the WAL. The default
> FSHLog works with an OutputStream (OS) obtained from the DFSClient. This has
> only the standard OS write APIs, which are byte[] based. So just to write to
> the WAL, we will end up with a temp on heap copy for each Cell. The other WAL
> impl (i.e. AsyncWAL) supports writing off heap Cells directly. We have work in
> progress to make AsyncWAL the default. Also, we can raise an HDFS request to
> support BB based write APIs in their client OS? Until then, we will try a temp
> workaround solution. The patch says more on this.
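> The encoder side trade-off, sketched roughly (hypothetical names, not the
> actual Codec.Encoder/FSHLog classes; the BBWriter style interface is the same
> hypothetical one sketched in the comment above):
> {code}
> import java.io.IOException;
> import java.io.OutputStream;
> import java.nio.ByteBuffer;
>
> final class WalWriteSketch {
>   /** Same hypothetical "has BB write APIs" interface as in the sketch above. */
>   interface BBWriter {
>     void write(ByteBuffer b, int off, int len) throws IOException;
>   }
>
>   static void writeCellBytes(OutputStream os, ByteBuffer cellBytes) throws IOException {
>     if (os instanceof BBWriter) {
>       // AsyncWAL style stream: it can take the off heap BB directly, no copy.
>       ((BBWriter) os).write(cellBytes, cellBytes.position(), cellBytes.remaining());
>     } else {
>       // Default FSHLog path: the OS from DFSClient is byte[] only, so every
>       // off heap backed Cell costs a temp on heap copy just to hit the WAL.
>       byte[] tmp = new byte[cellBytes.remaining()];
>       cellBytes.duplicate().get(tmp);
>       os.write(tmp);
>     }
>   }
> }
> {code}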
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)