[jira] [Commented] (HBASE-14373) PerformanceEvaluation tool should support huge number of rows beyond int range
[ https://issues.apache.org/jira/browse/HBASE-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14904946#comment-14904946 ]

Nick Dimiduk commented on HBASE-14373:
--------------------------------------

I'm seeing a build failure on branch-1.1.

{noformat}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project hbase-server: Compilation failure: Compilation failure:
[ERROR] /Users/ndimiduk/repos/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java:[2065,33] error: cannot find symbol
[ERROR]   symbol:   class DamagedWALException
[ERROR]   location: class FSHLog.RingBufferEventHandler
[ERROR] /Users/ndimiduk/repos/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java:[2071,16] error: cannot find symbol
[ERROR]   symbol:   class DamagedWALException
[ERROR]   location: class FSHLog.RingBufferEventHandler
[ERROR] /Users/ndimiduk/repos/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java:[2184,18] error: cannot find symbol
[ERROR] -> [Help 1]
{noformat}

bisect took me here.

{noformat}
2966e2744a5597a8066f265a49d7528307bcb5f4 is the first bad commit
commit 2966e2744a5597a8066f265a49d7528307bcb5f4
Author: stack
Date:   Tue Sep 22 16:55:48 2015 -0700

    HBASE-14373 Backport parent 'HBASE-14317 Stuck FSHLog' issue to 1.1 and 1.0

:040000 040000 da8730f73da7eb359f855f8cc3e9815e11745485 a197761fc2ea67183b1fd1b83727886e4aa04b34 M	hbase-server
bisect run success
{noformat}


> PerformanceEvaluation tool should support huge number of rows beyond int range
> ------------------------------------------------------------------------------
>
>                 Key: HBASE-14373
>                 URL: https://issues.apache.org/jira/browse/HBASE-14373
>             Project: HBase
>          Issue Type: Improvement
>          Components: test
>            Reporter: Pankaj Kumar
>            Assignee: Pankaj Kumar
>            Priority: Minor
>
> We have the test tool "org.apache.hadoop.hbase.PerformanceEvaluation" to evaluate HBase performance and scalability.
>
> Suppose this script is executed as below,
> {noformat}
> hbase org.apache.hadoop.hbase.PerformanceEvaluation --presplit=120 --rows=1000 randomWrite 500
> {noformat}
> Here there are 500 clients in total and each client writes 1000 rows.
> As per the code,
> {code}
> opts.totalRows = opts.perClientRunRows * opts.numClientThreads
> {code}
> opts.totalRows is an int, so this product overflows the int range for large inputs.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
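The overflow in the quoted assignment can be reproduced in isolation. Below is a minimal standalone sketch, not the actual PerformanceEvaluation code: the variable names mirror the fields quoted above, and the 10,000,000-rows-per-client figure is a hypothetical value chosen to trigger the overflow (the quoted 1000 × 500 still fits in an int).

```java
public class TotalRowsOverflow {
    public static void main(String[] args) {
        // Hypothetical inputs: 10M rows per client across 500 client threads.
        int perClientRunRows = 10_000_000;
        int numClientThreads = 500;

        // int multiplication wraps modulo 2^32 before the result is stored,
        // so the computed total is silently wrong.
        int totalRowsInt = perClientRunRows * numClientThreads;

        // Fix: widen one operand to long so the multiplication happens in 64 bits.
        long totalRowsLong = (long) perClientRunRows * numClientThreads;

        System.out.println(totalRowsInt);  // prints 705032704 (wrapped)
        System.out.println(totalRowsLong); // prints 5000000000
    }
}
```

Note that casting only the result, `(long) (perClientRunRows * numClientThreads)`, would not help: the overflow happens in the 32-bit multiplication itself. Declaring the `totalRows` field as a `long` is the more complete fix, since every consumer of the value then handles the full range.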
[jira] [Commented] (HBASE-14373) PerformanceEvaluation tool should support huge number of rows beyond int range
[ https://issues.apache.org/jira/browse/HBASE-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14905266#comment-14905266 ]

Hudson commented on HBASE-14373:
--------------------------------

FAILURE: Integrated in HBase-1.1 #677 (See [https://builds.apache.org/job/HBase-1.1/677/])
Revert "HBASE-14373 Backport parent 'HBASE-14317 Stuck FSHLog' issue to 1.1 and 1.0" (stack: rev 5b0f30d5f4dc71286ac8c6d8ed8dbc6b4f816c28)
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MultiVersionConsistencyControl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALKey.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiVersionConsistencyControl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogKey.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSWALEntry.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogReader.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SyncFuture.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
[jira] [Commented] (HBASE-14373) PerformanceEvaluation tool should support huge number of rows beyond int range
[ https://issues.apache.org/jira/browse/HBASE-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903722#comment-14903722 ]

Hudson commented on HBASE-14373:
--------------------------------

FAILURE: Integrated in HBase-1.1 #676 (See [https://builds.apache.org/job/HBase-1.1/676/])
HBASE-14373 Backport parent 'HBASE-14317 Stuck FSHLog' issue to 1.1 and 1.0 (stack: rev 2966e2744a5597a8066f265a49d7528307bcb5f4)
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALKey.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MultiVersionConsistencyControl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogKey.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SyncFuture.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiVersionConsistencyControl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogReader.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSWALEntry.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java
[jira] [Commented] (HBASE-14373) PerformanceEvaluation tool should support huge number of rows beyond int range
[ https://issues.apache.org/jira/browse/HBASE-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14732287#comment-14732287 ]

Pankaj Kumar commented on HBASE-14373:
--------------------------------------

Thanks [~liushaohui], I will close this JIRA as a duplicate of HBASE-13319.
[jira] [Commented] (HBASE-14373) PerformanceEvaluation tool should support huge number of rows beyond int range
[ https://issues.apache.org/jira/browse/HBASE-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14732190#comment-14732190 ]

Liu Shaohui commented on HBASE-14373:
-------------------------------------

This issue is a duplicate of HBASE-13319, which already has a patch. Maybe we should push that issue forward?