[ https://issues.apache.org/jira/browse/HBASE-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12721597#action_12721597 ]

Jim Kellerman commented on HBASE-1177:
--------------------------------------

- It's not the RPC.
- It's not the server.
- It (sort of) appears to be in deserializing the result, but I don't understand 
why it happens only for those keys, or why local is different from remote.
- It balloons up during the same set of rows whether you run the reads 
forwards or backwards.

I suppose it could have something to do with the row/column keys for those rows, 
but I don't know what (and again, why local and not remote?).

Punt it to 0.21.

> Delay when client is located on the same node as the regionserver
> -----------------------------------------------------------------
>
>                 Key: HBASE-1177
>                 URL: https://issues.apache.org/jira/browse/HBASE-1177
>             Project: Hadoop HBase
>          Issue Type: Bug
>    Affects Versions: 0.19.0
>         Environment: Linux 2.6.25 x86_64
>            Reporter: Jonathan Gray
>            Assignee: Jim Kellerman
>             Fix For: 0.20.0
>
>         Attachments: Contribution of getClosest to getRow time.jpg, 
> Contribution of next to getRow time.jpg, Contribution of seekTo to getClosest 
> time.jpg, Elapsed time of RowResults.readFields.jpg, getRow + round-trip vs # 
> columns.jpg, getRow times.jpg, ReadDelayTest.java, RowResults.readFields 
> zoomed.jpg, screenshot-1.jpg, screenshot-2.jpg, screenshot-3.jpg, 
> screenshot-4.jpg, zoom of columns vs round-trip blowup.jpg
>
>
> During testing of HBASE-80, we uncovered a strange 40ms delay for random 
> reads.  We ran a series of tests and found that it only happens when the 
> client is on the same node as the RS, and only for a certain range of payloads 
> (not specifically related to the number of columns or their size, only the 
> total payload).  It appears to be precisely 40ms every time.
> Unsure if this is particular to our architecture, but it does happen on all 
> nodes we've tried.  The issue goes away completely with very large payloads or 
> when the client is moved to another node.
> Will post a test program tomorrow if anyone can test on a different 
> architecture.
> Making this a blocker for 0.20.  Since this happens when an MR task runs 
> local to the RS, and this is exactly what we try to do, we might also consider 
> making it a blocker for 0.19.1.
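The measurements discussed above amount to timing each read individually and looking for the fixed ~40ms plateau. As a rough illustration only (this is not the attached ReadDelayTest.java, and the real client call to HTable.getRow() is stubbed out here with a sleep), a minimal per-call timing harness in Java might look like:

```java
import java.util.Arrays;

public class ReadTimer {
    // Times 'iterations' calls of the given read operation and returns
    // per-call elapsed times in milliseconds.
    static long[] timeReads(Runnable read, int iterations) {
        long[] elapsed = new long[iterations];
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            read.run();
            elapsed[i] = (System.nanoTime() - start) / 1_000_000;
        }
        return elapsed;
    }

    // Median of the observed per-call times; a fixed-delay problem like
    // the 40ms plateau shows up here rather than as an occasional outlier.
    static long median(long[] times) {
        long[] sorted = times.clone();
        Arrays.sort(sorted);
        return sorted[sorted.length / 2];
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for a real per-row read (e.g. HTable.getRow());
        // here we just sleep ~5 ms so the harness has something to measure.
        Runnable fakeRead = () -> {
            try { Thread.sleep(5); } catch (InterruptedException e) { }
        };
        long[] times = timeReads(fakeRead, 20);
        System.out.println("median ms: " + median(times));
    }
}
```

Running the same harness against the real read path, once with the client local to the RS and once remote, is the comparison the tests above describe.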

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
