I removed that part of the code in
http://svn.apache.org/viewvc/hbase/branches/0.20/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java?r1=948360&r2=948631
So you don't need to apply that hunk :)
J-D
On Tue, Jul 6, 2010 at 5:14 PM, Ted Yu wrote:
Here is the output from patch:
tyumac:hbase-0.20.5 tyu$ patch -p0 --dry-run < 2599-0.20.txt
patching file src/java/org/apache/hadoop/hbase/ClusterStatus.java
patching file src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
Hunk #1 FAILED at 520.
Hunk #2 succeeded at 616 (offset -28 lines).
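For anyone hitting the same thing, here is a hedged toy reproduction of this situation (the file contents and patch below are made-up stand-ins, not the real HBASE-2599 patch): when a hunk's target code was already removed upstream, you can apply anyway and discard the .rej file that patch writes for the failed hunk.

```shell
# Toy setup: a hunk whose '-' line no longer matches the target file,
# as happens when the code it patched was already removed upstream.
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'line one\nline two\n' > HRegionServer.java
cat > fix.patch <<'EOF'
--- HRegionServer.java
+++ HRegionServer.java
@@ -1,2 +1,2 @@
-old line
+new line
 line two
EOF
patch --dry-run HRegionServer.java < fix.patch || true  # reports: Hunk #1 FAILED
patch HRegionServer.java < fix.patch || true            # reject lands in HRegionServer.java.rej
rm -f HRegionServer.java.rej                            # hunk not needed, per upstream change
```

The target file is left untouched when its only hunk fails, so dropping the .rej is all the cleanup needed.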
Awesome!
Have fun HBase'ing!
J-D
On Fri, Jul 2, 2010 at 7:57 AM, Stanislaw Kogut wrote:
Thanks, I have applied this patch and the issue went away.
On Thu, Jul 1, 2010 at 9:18 PM, Jean-Daniel Cryans wrote:
(sorry it took so long to answer, we were all busy with the various
meetings around the Bay Area)
I can see the issue:
2010-06-30 13:48:16,135 DEBUG master.BaseScanner
(BaseScanner.java:checkAssigned(580)) - Current assignment of
.META.,,1 is not valid; serverAddress=, startCode=0 unknown.
...
2
Completely changed all hadoop configuration to almost default; PE completes writing 100 rows, but regions still end up assigned to multiple RS's.
hbase(main):001:0> status 'detailed'
version 0.20.5
0 regionsInTransition
6 live servers
uasstse005.ua.sistyma.com:60020 1277985198620
See clean from-scratch logs for hadoop and hbase after starting with a clean hbase rootdir:
http://sp.sistyma.com/hbase_logs.tar.gz
On Tue, Jun 29, 2010 at 8:46 PM, Stack wrote:
Something is seriously wrong with your setup. Please put your master logs
somewhere we can pull from. Enable debug too. Thanks
On Jun 29, 2010, at 10:29 AM, Stanislaw Kogut wrote:
1. Stopping hbase
2. Removing hbase.root.dir from hdfs
3. Starting hbase
4. Doing major_compact on .META.
5. Starting PE
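For anyone following along, the five steps above could be scripted roughly like this. It's only a sketch (the function is defined, not invoked): it assumes the stock hbase/hadoop launch scripts are on the PATH, that hbase.rootdir is /hbase, and that the test jar sits in the current directory.

```shell
# Sketch only: wraps steps 1-5 in a function without calling it.
reset_and_run_pe() {
  stop-hbase.sh                                        # 1. stopping hbase
  hadoop fs -rmr /hbase                                # 2. removing hbase.rootdir from hdfs
  start-hbase.sh                                       # 3. starting hbase
  echo 'major_compact ".META."' | hbase shell          # 4. major_compact on .META.
  hadoop jar hbase-0.20.5-test.jar sequentialWrite 1   # 5. starting PE
}
```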
10/06/29 20:17:30 INFO hbase.PerformanceEvaluation: Table {NAME =>
'TestTable', FAMILIES => [{NAME => 'info', COMPRESSION => 'NONE', VERSIONS
=> '3', TTL => '2147483647', BLOCKS
For sure you are removing the hbase dir in hdfs?
Try major compaction of your .META. table?
hbase> major_compact ".META."
You seem to be suffering HBASE-1880 but if you are removing the hbase
dir, you shouldn't be running into this.
St.Ack
On Tue, Jun 29, 2010 at 9:26 AM, Stanislaw Kogut wrote:
Yes, I do hadoop fs -rmr /hbase each time.
So, here is some messages from master logs:
2010-06-29 19:15:11,309 INFO org.apache.hadoop.hbase.master.ServerManager: 5
region servers, 0 dead, average load 0.8
2010-06-29 19:15:12,146 INFO org.apache.hadoop.hbase.master.BaseScanner:
RegionManager.ro
Is this a testing install? If so remove the hbase dir in hdfs and start over.
Else on pe failure what does the master log say?
In 0.20.5 we moved some logging so more messages show at info level, which could explain some of the differences you are seeing.
Stack
On Jun 29, 2010, at 6:21 AM, Stanislaw Kogut wrote:
Hi everyone!
Has anyone noticed the same behaviour in hbase-0.20.5 after upgrading from
0.20.3?
$hadoop jar hbase/hbase-0.20.5-test.jar sequentialWrite 1
10/06/29 16:03:21 INFO zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.2.2-888565, built on 12/08/2009 21:51 GMT
10/06/29 16:03:21 INFO