The detailed error is:

Chain of regions in table urlhashv2 is broken; edges does not contain 80116D7E506D87ED39EAFFE784B5B590
Table urlhashv2 is inconsistent.

How does one fix this?

Thanks,

Robert

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Stack
Sent: Monday, May 16, 2011 2:35 PM
To: [email protected]
Subject: Re: wrong region exception

Says you have an inconsistency in your table.  Add -details and try to figure
out where the inconsistency is.  Grep the master logs to try to figure out what
happened to the problematic regions.  See if adding -fix to hbck will clean up
your problem.

St.Ack
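
For reference, on 0.90.x the sequence above would look roughly like this when
run from the HBase install directory (the master log file name pattern below is
an assumption; adjust it for your deployment):

    # re-run hbck with per-region detail to see where the chain is broken
    ./bin/hbase hbck -details

    # grep the master log for the encoded region name from the error
    grep 80116D7E506D87ED39EAFFE784B5B590 logs/hbase-*-master-*.log

    # then see whether hbck can repair what it reported
    ./bin/hbase hbck -fix

If -fix does not clear the inconsistency, the -details output plus whatever the
master log says happened to that region is the detail worth bringing back to
the list.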

On Mon, May 16, 2011 at 12:04 PM, Robert Gonzalez 
<[email protected]> wrote:
> attached
>
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of 
> Stack
> Sent: Monday, May 16, 2011 12:57 PM
> To: [email protected]
> Subject: Re: wrong region exception
>
> See the rest of my email.
> St.Ack
>
> On Mon, May 16, 2011 at 8:18 AM, Robert Gonzalez 
> <[email protected]> wrote:
>> 0.90.0
>>
>> -----Original Message-----
>> From: [email protected] [mailto:[email protected]] On Behalf Of 
>> Stack
>> Sent: Friday, May 13, 2011 2:21 PM
>> To: [email protected]
>> Subject: Re: wrong region exception
>>
>> What version of HBase?  We used to see those from time to time in the old
>> 0.20 HBase but haven't seen one recently.  Usually the .META. table is 'off'.
>> If 0.90.x, try running ./bin/hbase hbck.  See what it says.
>>
>> St.Ack
>>
>> On Fri, May 13, 2011 at 11:57 AM, Robert Gonzalez 
>> <[email protected]> wrote:
>>> Anyone ever see one of these?
>>>
>>> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
>>> Failed 25 actions: WrongRegionException: 25 times, servers with issues:
>>> c1-s49.atxd.maxpointinteractive.com:60020,
>>> c1-s03.atxd.maxpointinteractive.com:60020,
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1220)
>>>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1234)
>>>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:819)
>>>         at org.apache.hadoop.hbase.client.HTable.close(HTable.java:831)
>>>         at com.maxpoint.crawl.crawlmgr.SelectThumbs$SelTReducer.cleanup(SelectThumbs.java:453)
>>>         at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:178)
>>>         at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:566)
>>>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
>>>         at org.apache.hadoop.mapred.Child.main(Child.java:170)
>>>
>>> thanks,
>>>
>>> Gonz
