I have reverted the whole Hadoop configuration to nearly the defaults. PE now
completes writing 1000000 rows, but regions still end up assigned to multiple
region servers: in the detailed status below, the single .META.,,1 region is
reported by five of the six region servers.
hbase(main):001:0> status 'detailed'
version 0.20.5
0 regionsInTransition
6 live servers
    uasstse005.ua.sistyma.com:60020 1277985198620
        requests=0, regions=2, usedHeap=25, maxHeap=1196
        .META.,,1
            stores=2, storefiles=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0
        -ROOT-,,0
            stores=1, storefiles=3, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0
    stas-node.ua.sistyma.com:60020 1277985198573
        requests=0, regions=1, usedHeap=22, maxHeap=1996
        .META.,,1
            stores=2, storefiles=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0
    uasstse004.ua.sistyma.com:60020 1277985198572
        requests=0, regions=1, usedHeap=23, maxHeap=1996
        .META.,,1
            stores=2, storefiles=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0
    uasstse006.ua.sistyma.com:60020 1277985198554
        requests=0, regions=0, usedHeap=33, maxHeap=1196
    uasstse002.ua.sistyma.com:60020 1277985198667
        requests=0, regions=1, usedHeap=34, maxHeap=1996
        .META.,,1
            stores=2, storefiles=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0
    uasstse003.ua.sistyma.com:60020 1277985198550
        requests=0, regions=1, usedHeap=22, maxHeap=1996
        .META.,,1
            stores=2, storefiles=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0
0 dead servers
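As a cross-check from the shell (a sketch only; these are plain stock 0.20 shell
commands and the standard catalog column names), scanning the catalog tables
shows what -ROOT- and .META. have recorded for the assignments:

hbase(main):002:0> scan '-ROOT-'
hbase(main):003:0> scan '.META.', {COLUMNS => ['info:regioninfo', 'info:server']}

The first shows which server -ROOT- records as holding .META.; the second shows
which servers the TestTable regions were written out to, for comparison with the
detailed status above.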
On Wed, Jun 30, 2010 at 2:49 PM, Stanislaw Kogut <[email protected]> wrote:
> See the clean, from-scratch Hadoop and HBase logs, taken after a start with a
> clean hbase rootdir.
>
> http://sp.sistyma.com/hbase_logs.tar.gz
>
>
> On Tue, Jun 29, 2010 at 8:46 PM, Stack <[email protected]> wrote:
>
>> Something is seriously wrong with your setup. Please put your master logs
>> somewhere we can pull from. Enable debug too. Thanks
>>
>>
>>
>> On Jun 29, 2010, at 10:29 AM, Stanislaw Kogut <[email protected]> wrote:
>>
>> > 1. Stopping HBase
>> > 2. Removing the hbase.rootdir directory from HDFS
>> > 3. Starting HBase
>> > 4. Doing major_compact on .META.
>> > 5. Starting PE
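>> >
>> > For reference, roughly the equivalent commands (a sketch only, assuming the
>> > default hdfs:///hbase root directory and HBASE_HOME/bin on the PATH; the PE
>> > invocation is a guess based on the RunJar frame in the trace below):
>> >
>> >   $ stop-hbase.sh
>> >   $ hadoop fs -rmr /hbase
>> >   $ start-hbase.sh
>> >   $ echo "major_compact '.META.'" | hbase shell
>> >   $ hadoop jar $HBASE_HOME/hbase-*-test.jar sequentialWrite 1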
>> >
>> > 10/06/29 20:17:30 INFO hbase.PerformanceEvaluation: Table {NAME =>
>> > 'TestTable', FAMILIES => [{NAME => 'info', COMPRESSION => 'NONE',
>> > VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536',
>> > IN_MEMORY => 'false', BLOCKCACHE => 'true'}]} created
>> > 10/06/29 20:17:30 INFO hbase.PerformanceEvaluation: Start class
>> > org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at
>> > offset 0 for 1048576 rows
>> > 10/06/29 20:17:42 INFO hbase.PerformanceEvaluation: 0/104857/1048576
>> > 10/06/29 20:17:55 INFO hbase.PerformanceEvaluation: 0/209714/1048576
>> > 10/06/29 20:18:13 INFO hbase.PerformanceEvaluation: 0/314571/1048576
>> > 10/06/29 20:18:29 INFO hbase.PerformanceEvaluation: 0/419428/1048576
>> > 10/06/29 20:22:37 ERROR hbase.PerformanceEvaluation: Failed
>> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
>> > contact region server -- nothing found, no 'location' returned,
>> > tableName=TestTable, reload=true -- for region , row '0000511450', but
>> > failed after 11 attempts.
>> > Exceptions:
>> > java.io.IOException: HRegionInfo was null or empty in .META.
>> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
>> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
>> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
>> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
>> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
>> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
>> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
>> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
>> > org.apache.hadoop.hbase.TableNotFoundException: TestTable
>> >
>> > at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionLocationForRowWithRetries(HConnectionManager.java:1087)
>> > at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.access$200(HConnectionManager.java:240)
>> > at org.apache.hadoop.hbase.client.HConnectionManager$TableServers$Batch.getRegionName(HConnectionManager.java:1183)
>> > at org.apache.hadoop.hbase.client.HConnectionManager$TableServers$Batch.process(HConnectionManager.java:1160)
>> > at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:1230)
>> > at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:666)
>> > at org.apache.hadoop.hbase.PerformanceEvaluation$Test.testTakedown(PerformanceEvaluation.java:621)
>> > at org.apache.hadoop.hbase.PerformanceEvaluation$Test.test(PerformanceEvaluation.java:637)
>> > at org.apache.hadoop.hbase.PerformanceEvaluation.runOneClient(PerformanceEvaluation.java:889)
>> > at org.apache.hadoop.hbase.PerformanceEvaluation.runNIsOne(PerformanceEvaluation.java:907)
>> > at org.apache.hadoop.hbase.PerformanceEvaluation.runTest(PerformanceEvaluation.java:939)
>> > at org.apache.hadoop.hbase.PerformanceEvaluation.doCommandLine(PerformanceEvaluation.java:1036)
>> > at org.apache.hadoop.hbase.PerformanceEvaluation.main(PerformanceEvaluation.java:1061)
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> > at java.lang.reflect.Method.invoke(Method.java:597)
>> > at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
>> >
>> >
>> > On Tue, Jun 29, 2010 at 8:03 PM, Stack <[email protected]> wrote:
>> >
>> >> Are you sure you are removing the hbase dir in HDFS?
>> >>
>> >> Try major compaction of your .META. table?
>> >>
>> >> hbase> major_compact ".META."
>> >>
>> >> You seem to be suffering from HBASE-1880, but if you are removing the hbase
>> >> dir, you shouldn't be running into this.
>> >>
>> >> St.Ack
>> >>
>> >>
>> > --
>> > Regards,
>> > Stanislaw Kogut
>> > Sistyma LLC
>>
>
>
>
> --
> Regards,
> Stanislaw Kogut
> Sistyma LLC
>
--
Regards,
Stanislaw Kogut
Sistyma LLC