[
https://issues.apache.org/jira/browse/HBASE-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14732190#comment-14732190
]
Liu Shaohui commented on HBASE-14373:
-------------------------------------
This issue is a duplicate of HBASE-13319.
There is already a patch in HBASE-13319; maybe we should push that issue forward instead?
> PerformanceEvaluation tool should support huge number of rows beyond int range
> ------------------------------------------------------------------------------
>
> Key: HBASE-14373
> URL: https://issues.apache.org/jira/browse/HBASE-14373
> Project: HBase
> Issue Type: Improvement
> Components: test
> Reporter: Pankaj Kumar
> Assignee: Pankaj Kumar
> Priority: Minor
>
> We have the test tool "org.apache.hadoop.hbase.PerformanceEvaluation" for evaluating
> HBase performance and scalability.
>
> Suppose the tool is executed as below:
> {noformat}
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --presplit=120
> --rows=10000000 randomWrite 500
> {noformat}
> Here there are 500 clients in total, and each client writes 10000000 rows.
> As per the code,
> {code}
> opts.totalRows = opts.perClientRunRows * opts.numClientThreads
> {code}
> opts.totalRows is an int, so 10000000 * 500 overflows the int range.
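>
> A minimal standalone sketch of the overflow (the field names mirror the snippet above, and the class name is just for illustration; widening the computation to long is one possible fix, not necessarily the patch attached to HBASE-13319):
> {code}
> public class TotalRowsOverflow {
>   public static void main(String[] args) {
>     int perClientRunRows = 10000000;  // --rows=10000000
>     int numClientThreads = 500;       // randomWrite 500
>
>     // int arithmetic wraps around: 10000000 * 500 = 5,000,000,000,
>     // which exceeds Integer.MAX_VALUE (2,147,483,647).
>     int totalRowsInt = perClientRunRows * numClientThreads;
>     System.out.println(totalRowsInt);   // prints 705032704 (wrapped)
>
>     // Widening the computation (and the totalRows field) to long avoids the overflow.
>     long totalRowsLong = (long) perClientRunRows * numClientThreads;
>     System.out.println(totalRowsLong);  // prints 5000000000
>   }
> }
> {code}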