[ https://issues.apache.org/jira/browse/HBASE-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12837327#action_12837327 ]

stack commented on HBASE-2251:
------------------------------

What's being described sounds like the Yahoo tool that's supposed to be 
open-sourced any time soon.

While I think these additions to PE would be sweet, before any of that, we 
need to run perf tests before each release so we catch these slowdowns before 
we ship -- even if it's only PE (though, as Dan Washuen pointed out, PE 
currently clears the memstore, so the memstore isn't factored into PE evals 
-- that needs fixing).
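
For illustration only, here is a minimal sketch of the kind of explicit 
flush that decides whether memstore behavior shows up in a benchmark at all. 
This is not the actual PE code; it assumes the 0.20-era client API and a 
hypothetical "TestTable":

    import java.io.IOException;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class FlushBetweenPhases {
      public static void main(String[] args)
          throws IOException, InterruptedException {
        HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
        // Flushing here pushes everything out to store files, so a read
        // phase that follows never touches the memstore; skipping the
        // flush keeps memstore reads in the measurement.
        boolean flushBetweenPhases = false;  // illustrative toggle
        if (flushBetweenPhases) {
          admin.flush("TestTable");
        }
      }
    }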

> PE defaults to 1k rows - uncommon use case, and easy to hit benchmarks
> ----------------------------------------------------------------------
>
>                 Key: HBASE-2251
>                 URL: https://issues.apache.org/jira/browse/HBASE-2251
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: ryan rawson
>             Fix For: 0.20.4, 0.21.0
>
>
> The PerformanceEvaluation uses 1k rows, which I would argue is an uncommon 
> case, and one that provides an easy-to-hit performance goal.  Most of the 
> harder performance issues happen at the low and high ends of cell size.  In 
> our own application, our key sizes range from 4 bytes to maybe 100 bytes, 
> and very rarely 1000 bytes.  When we do have large values, they are VERY 
> large -- multiple KB in size.
> Recently a change went into HBase that ran well under PE because the memory 
> overhead of 1k rows is very low, but with small rows the expected 
> performance would be hit much harder.  This is because the per-value 
> overhead (e.g. the node objects of the skip list backing the memstore) is 
> amortized over more bytes with 1k values.
> We should make the value size a tunable setting with a low default.  I 
> would argue for a 10-30 byte default.
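
To make the amortization argument concrete: if the per-cell bookkeeping in 
the memstore is on the order of 100 bytes (KeyValue object plus skip-list 
node; the figure is illustrative), that is roughly 9% overhead on a 
1000-byte value but roughly 90% on a 10-byte one, so small-value workloads 
stress exactly the code paths the 1k default hides.  Below is a minimal 
sketch of the tunable value size being proposed, written against the 
0.20-era client API.  The --valueSize flag, table name, and column names are 
illustrative and not the actual PerformanceEvaluation code; it also assumes 
"TestTable" already exists with an "info" family:

    import java.io.IOException;
    import java.util.Random;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TunableValuePE {
      public static void main(String[] args) throws IOException {
        int valueSize = 30;  // low default, per the 10-30 byte suggestion
        for (String arg : args) {
          if (arg.startsWith("--valueSize=")) {
            valueSize = Integer.parseInt(arg.substring("--valueSize=".length()));
          }
        }
        HTable table = new HTable(new HBaseConfiguration(), "TestTable");
        byte[] family = Bytes.toBytes("info");
        byte[] qualifier = Bytes.toBytes("data");
        Random rand = new Random();
        for (int i = 0; i < 1048576; i++) {
          byte[] value = new byte[valueSize];
          // Random bytes so compression can't mask value-size effects.
          rand.nextBytes(value);
          Put put = new Put(Bytes.toBytes(String.format("%010d", i)));
          put.add(family, qualifier, value);
          table.put(put);
        }
        table.flushCommits();  // drain the client-side write buffer
      }
    }

With a flag like this, the same run can be repeated at 10, 100, and 1000 
bytes, exposing the per-value overhead instead of amortizing it away.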

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
