[ 
https://issues.apache.org/jira/browse/HBASE-26017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365868#comment-17365868
 ] 

dingwei2019 commented on HBASE-26017:
-------------------------------------

This issue can be solved in two ways: one is to throw an exception when the 
requested rows exceed the upper bound of totalRows; the other is to support 
more than 2048g by defining totalRows as a long.

Supporting more than 2048g is the better option, but it involves many other 
variables besides totalRows and may need more evaluation.
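A minimal sketch of both options (this is illustrative code, not the actual PE source; ROWS_PER_GB and the method names are made up for the example):

```java
public class TotalRowsOverflowDemo {
    // PE default: one GB corresponds to 1024 * 1024 rows.
    static final long ROWS_PER_GB = 1024L * 1024L;

    // Option 1: keep totalRows as an int, but throw when --size would overflow it.
    static int totalRowsAsInt(long sizeInGB) {
        long rows = sizeInGB * ROWS_PER_GB;
        if (rows > Integer.MAX_VALUE) {
            throw new IllegalArgumentException(
                "--size " + sizeInGB + "g exceeds the int totalRows limit (~2047.999g)");
        }
        return (int) rows;
    }

    // Option 2: widen totalRows to long so sizes above 2048g work.
    static long totalRowsAsLong(long sizeInGB) {
        return sizeInGB * ROWS_PER_GB;
    }

    public static void main(String[] args) {
        // Unchecked int arithmetic wraps around and goes negative,
        // which produces the "offset ... for negative rows" log message:
        System.out.println((int) (2049 * 1024 * 1024)); // negative value
        System.out.println(totalRowsAsLong(2049));      // 2148532224
    }
}
```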

 

Which way do you prefer? I will supply a patch once the solution is agreed on.

> hbase performance evaluation tool  could not support datasize more than 2048g
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-26017
>                 URL: https://issues.apache.org/jira/browse/HBASE-26017
>             Project: HBase
>          Issue Type: Bug
>          Components: PE
>    Affects Versions: 2.1.0, 2.3.2, 2.4.4
>            Reporter: dingwei2019
>            Priority: Minor
>
> in our daily testing we may want to test a data size larger than 2048g, but 
> when we set --size to more than 2048g, PE prints an abnormal message like this:
> [TestClient-1] hbase.PerformanceEvaluation: Start class 
> org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset 
> -1138166308 for -21474836 rows
>  
> this is because the variable totalRows in TestOptions is defined as an int 
> (-2147483648 to 2147483647), and one GB is 1048576 (1024*1024) rows by 
> default. Since the maximum totalRows is 2147483647, we can write no more 
> than 2147483647/1048576 = 2047.999g.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
