[
https://issues.apache.org/jira/browse/HBASE-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13036463#comment-13036463
]
Ted Yu commented on HBASE-3813:
-------------------------------
My proposal doesn't involve moving deserialization overhead into the handlers.
The primary reason is that we need to determine the actual size of the parameter
object for the Call.
So in processData(), we would have:
{code}
// Read the parameter through HbaseObjectWritable so its size can be determined;
// readObject() returns Object, hence the cast back to Writable.
HbaseObjectWritable objectWritable = new HbaseObjectWritable();
Writable param = (Writable) HbaseObjectWritable.readObject(dis, objectWritable, conf);
{code}
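For comparison, the read path this would replace in processData() looks roughly
like the following (reconstructed from memory of HBaseServer, so treat the exact
identifiers as approximate):
{code}
// Existing approach: instantiate the declared parameter class and read into it.
// dis, paramClass and conf are the local/field names already used by processData().
Writable param = ReflectionUtils.newInstance(paramClass, conf);
param.readFields(dis);
{code}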
I have cloned LinkedBlockingQueueBySize off of LinkedBlockingQueue. Its
declaration is:
{code}
public class LinkedBlockingQueueBySize<E extends WritableWithSize>
    extends AbstractQueue<E>
    implements BlockingQueue<E>, java.io.Serializable {
{code}
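To illustrate the intended semantics, here is a minimal monitor-based sketch of a
byte-bounded blocking queue. This is not the actual LinkedBlockingQueueBySize
(which clones LinkedBlockingQueue and implements the full BlockingQueue
interface); it only assumes that the WritableWithSize bound exposes a
getWritableSize() byte count:
{code}
import java.util.ArrayDeque;

// Mirrors the WritableWithSize bound used above (assumed to report a size in bytes).
interface WritableWithSize {
  long getWritableSize();
}

// Sketch only: producers block while the queued bytes would exceed the budget,
// instead of blocking on an element count as LinkedBlockingQueue does.
class SizeBoundedQueueSketch<E extends WritableWithSize> {
  private final ArrayDeque<E> items = new ArrayDeque<E>();
  private final long maxBytes;
  private long queuedBytes = 0;

  SizeBoundedQueueSketch(long maxBytes) {
    this.maxBytes = maxBytes;
  }

  // Always admits into an empty queue so a single oversized call cannot wedge it.
  synchronized void put(E e) throws InterruptedException {
    while (!items.isEmpty() && queuedBytes + e.getWritableSize() > maxBytes) {
      wait();
    }
    items.addLast(e);
    queuedBytes += e.getWritableSize();
    notifyAll();   // wake any consumer blocked in take()
  }

  synchronized E take() throws InterruptedException {
    while (items.isEmpty()) {
      wait();
    }
    E e = items.removeFirst();
    queuedBytes -= e.getWritableSize();
    notifyAll();   // wake any producer blocked in put()
    return e;
  }
}
{code}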
Then we can utilize this method in HbaseObjectWritable:
{code}
public static long getWritableSize(Object instance, Class declaredClass,
    Configuration conf) {
{code}
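For illustration, a sketch (not the actual patch) of how the server's Call could
satisfy the WritableWithSize bound by caching the value returned from that
helper; the class name, field names and constructor wiring are made up, and
WritableWithSize is assumed to live alongside HbaseObjectWritable in
org.apache.hadoop.hbase.io:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.io.HbaseObjectWritable;
import org.apache.hadoop.hbase.io.WritableWithSize;
import org.apache.hadoop.io.Writable;

// Illustrative only: the size is measured once, right after deserialization in
// processData(), so the queue never has to re-walk the parameter object graph.
class SizedCall implements WritableWithSize {
  private final Writable param;
  private final long paramSize;

  SizedCall(Writable param, Configuration conf) {
    this.param = param;
    this.paramSize = HbaseObjectWritable.getWritableSize(param, param.getClass(), conf);
  }

  Writable getParam() {
    return param;
  }

  public long getWritableSize() {
    return paramSize;
  }
}
{code}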
> Change RPC callQueue size from "handlerCount * MAX_QUEUE_SIZE_PER_HANDLER;"
> ---------------------------------------------------------------------------
>
> Key: HBASE-3813
> URL: https://issues.apache.org/jira/browse/HBASE-3813
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.92.0
> Reporter: stack
> Priority: Critical
> Attachments: 3813.txt
>
>
> Yesterday, debugging w/ Jack, we noticed that with few handlers on a big box
> he was seeing stats like this:
> {code}
> 2011-04-21 11:54:49,451 DEBUG org.apache.hadoop.ipc.HBaseServer: Server
> connection from X.X.X.X:60931; # active connections: 11; # queued calls: 2500
> {code}
> We had 2500 items in the rpc queue waiting to be processed.
> Turns out he had too few handlers for the number of clients (but he also
> figured he had hardware issues, in that his RAM bus was running at 1/4 of the
> rate it should have been).
> Chatting w/ J-D this morning, he asked if the queues hold 'data'. The queues
> hold 'Calls'. Calls are the client requests; they contain data.
> Jack had 2500 items queued. If each item to insert was 1MB, that's 2500 * 1MB
> (~2.5GB) of memory that is outside of our general accounting.
> Currently the queue size is handlers * MAX_QUEUE_SIZE_PER_HANDLER where
> MAX_QUEUE_SIZE_PER_HANDLER is hardcoded to be 100.
> If the queue is full we block (LinkedBlockingQueue).
> Going to change the queue size per handler from 100 to 10 by default -- but
> will also make it configurable and will document this as a possible cause of
> OOME. Will try it on production here before committing the patch.
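As a rough sketch of the configurable sizing described above (the property name,
default and helper class are hypothetical, not the attached patch):
{code}
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.hadoop.conf.Configuration;

// Hypothetical illustration of making the per-handler queue depth configurable.
class CallQueueSizing {
  static <C> LinkedBlockingQueue<C> buildCallQueue(Configuration conf, int handlerCount) {
    // Property name is made up; default of 10 per handler as described above.
    int maxPerHandler = conf.getInt("hbase.ipc.server.max.queue.size.per.handler", 10);
    return new LinkedBlockingQueue<C>(handlerCount * maxPerHandler);
  }
}
{code}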
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira