[ https://issues.apache.org/jira/browse/HBASE-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714802#action_12714802 ]

Jim Kellerman commented on HBASE-1177:
--------------------------------------

All I am seeing now (on trunk) is increasing times:

Read 1 row with 7 columns 100 times in 921ms
Read 1 row with 8 columns 100 times in 4,089ms
Read 1 row with 1000 columns 100 times in 16,330ms

Read 1 row with 7 columns 100 times in 875ms
Read 1 row with 8 columns 100 times in 4,025ms
Read 1 row with 1000 columns 100 times in 16,860ms

Read 1 row with 7 columns 100 times in 993ms
Read 1 row with 8 columns 100 times in 4,087ms
Read 1 row with 1000 columns 100 times in 16,530ms

Which is what I would expect, although it does not explain why the fetch of 
1000 columns is sometimes faster than fetching 8.

Trying to reproduce that case. Often, fetching 8 columns comes out faster than 
fetching 7, as in:

Read 1 row with 7 columns 100 times in 907ms
Read 1 row with 8 columns 100 times in 622ms
Read 1 row with 1000 columns 100 times in 15,062ms
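Dividing the totals by the iteration count makes the pattern easier to see: in the runs above, the 8-column case works out to roughly 40 ms per read, which matches the fixed 40 ms delay described in the issue, while the 7-column case stays under 10 ms. A minimal sketch of that arithmetic, using the totals from the first run (class name and layout are mine, not from the attached test):

```java
public class PerReadLatency {
    public static void main(String[] args) {
        int iterations = 100;
        // Total elapsed times (ms) from the first benchmark run above.
        long sevenCols = 921;
        long eightCols = 4089;
        long thousandCols = 16330;

        System.out.printf("7 columns:    %.2f ms/read%n", (double) sevenCols / iterations);
        // ~40.9 ms/read: consistent with a fixed ~40 ms delay on each read.
        System.out.printf("8 columns:    %.2f ms/read%n", (double) eightCols / iterations);
        System.out.printf("1000 columns: %.2f ms/read%n", (double) thousandCols / iterations);
    }
}
```

This prints 9.21, 40.89, and 163.30 ms/read respectively, so the jump between 7 and 8 columns is almost exactly the reported 40 ms, not a gradual increase with payload size.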


> Delay when client is located on the same node as the regionserver
> -----------------------------------------------------------------
>
>                 Key: HBASE-1177
>                 URL: https://issues.apache.org/jira/browse/HBASE-1177
>             Project: Hadoop HBase
>          Issue Type: Bug
>    Affects Versions: 0.19.0
>         Environment: Linux 2.6.25 x86_64
>            Reporter: Jonathan Gray
>            Assignee: Jim Kellerman
>            Priority: Blocker
>             Fix For: 0.20.0
>
>         Attachments: ReadDelayTest.java, screenshot-1.jpg, screenshot-2.jpg, 
> screenshot-3.jpg
>
>
> During testing of HBASE-80, we uncovered a strange 40ms delay for random 
> reads.  We ran a series of tests and found that it only happens when the 
> client is on the same node as the RS and for a certain range of payloads (not 
> specifically related to number of columns or size of them, only total 
> payload).  It appears to be precisely 40ms every time.
> Unsure if this is particular to our architecture, but it does happen on all 
> nodes we've tried.  Issue completely goes away with very large payloads or 
> moving the client.
> Will post a test program tomorrow if anyone can test on a different 
> architecture.
> Making a blocker for 0.20.  Since this happens when you have an MR task 
> running local to the RS, and this is what we try to do, might also consider 
> making this a blocker for 0.19.1.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
