Please see the clean Hadoop and HBase logs below, collected from scratch after starting with a clean hbase.rootdir:
http://sp.sistyma.com/hbase_logs.tar.gz

On Tue, Jun 29, 2010 at 8:46 PM, Stack <[email protected]> wrote:
> Something is seriously wrong with your setup. Please put your master logs
> somewhere we can pull from. Enable debug too. Thanks
>
>
> On Jun 29, 2010, at 10:29 AM, Stanislaw Kogut <[email protected]> wrote:
>
> > 1. Stopping hbase
> > 2. Removing hbase.root.dir from hdfs
> > 3. Starting hbase
> > 4. Doing major_compact on .META.
> > 5. Starting PE
> >
> > 10/06/29 20:17:30 INFO hbase.PerformanceEvaluation: Table {NAME =>
> > 'TestTable', FAMILIES => [{NAME => 'info', COMPRESSION => 'NONE',
> > VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536',
> > IN_MEMORY => 'false', BLOCKCACHE => 'true'}]} created
> > 10/06/29 20:17:30 INFO hbase.PerformanceEvaluation: Start class
> > org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at
> > offset 0 for 1048576 rows
> > 10/06/29 20:17:42 INFO hbase.PerformanceEvaluation: 0/104857/1048576
> > 10/06/29 20:17:55 INFO hbase.PerformanceEvaluation: 0/209714/1048576
> > 10/06/29 20:18:13 INFO hbase.PerformanceEvaluation: 0/314571/1048576
> > 10/06/29 20:18:29 INFO hbase.PerformanceEvaluation: 0/419428/1048576
> > 10/06/29 20:22:37 ERROR hbase.PerformanceEvaluation: Failed
> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact
> > region server -- nothing found, no 'location' returned,
> > tableName=TestTable, reload=true -- for region , row '0000511450', but
> > failed after 11 attempts.
> > Exceptions:
> > java.io.IOException: HRegionInfo was null or empty in .META.
> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
> >
> >   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionLocationForRowWithRetries(HConnectionManager.java:1087)
> >   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.access$200(HConnectionManager.java:240)
> >   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers$Batch.getRegionName(HConnectionManager.java:1183)
> >   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers$Batch.process(HConnectionManager.java:1160)
> >   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:1230)
> >   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:666)
> >   at org.apache.hadoop.hbase.PerformanceEvaluation$Test.testTakedown(PerformanceEvaluation.java:621)
> >   at org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:637)
> >   at org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:889)
> >   at org.apache.hadoop.hbase.PerformanceEvaluation.runNIsOne(PerformanceEvaluation.java:907)
> >   at org.apache.hadoop.hbase.PerformanceEvaluation.runTest(PerformanceEvaluation.java:939)
> >   at org.apache.hadoop.hbase.PerformanceEvaluation.doCommandLine(PerformanceEvaluation.java:1036)
> >   at org.apache.hadoop.hbase.PerformanceEvaluation.main(PerformanceEvaluation.java:1061)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >   at java.lang.reflect.Method.invoke(Method.java:597)
> >   at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
> >
> >
> > On Tue, Jun 29, 2010 at 8:03 PM, Stack <[email protected]> wrote:
> >
> >> For sure you are removing the hbase dir in hdfs?
> >>
> >> Try major compaction of your .META. table?
> >>
> >>   hbase> major_compact ".META."
> >>
> >> You seem to be suffering HBASE-1880 but if you are removing the hbase
> >> dir, you shouldn't be running into this.
> >>
> >> St.Ack
> >>
> >
> > --
> > Regards,
> > Stanislaw Kogut
> > Sistyma LLC

--
Regards,
Stanislaw Kogut
Sistyma LLC
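
P.S. For reference, the reproduction steps quoted above map roughly onto the shell
commands below. This is only a sketch: it assumes the default hbase.rootdir of /hbase,
the standard scripts under $HBASE_HOME/bin, and a single-client PerformanceEvaluation
run (consistent with runNIsOne in the pasted stack trace); adjust paths to match the
actual installation.

  # stop HBase and wipe its root directory in HDFS (assumes hbase.rootdir=/hbase)
  $HBASE_HOME/bin/stop-hbase.sh
  hadoop fs -rmr /hbase

  # start HBase again and major-compact the .META. catalog table
  $HBASE_HOME/bin/start-hbase.sh
  echo "major_compact '.META.'" | $HBASE_HOME/bin/hbase shell

  # run the sequential-write PerformanceEvaluation with one client
  $HBASE_HOME/bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1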

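On enabling debug, as requested above: assuming the stock conf/log4j.properties shipped
with HBase, DEBUG logging for the master is normally turned on by raising the HBase
logger level there and restarting; the property name below is taken from the default
config and should be checked against the local file.

  # edit $HBASE_HOME/conf/log4j.properties and set (or uncomment):
  #   log4j.logger.org.apache.hadoop.hbase=DEBUG
  # then restart so the master log is written at DEBUG level
  $HBASE_HOME/bin/stop-hbase.sh
  $HBASE_HOME/bin/start-hbase.sh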