Thanks, Jimmy. The way I achieved step 2 was by running add_table.rb. That 
script essentially deletes the regions from .META. for a given user table and 
then adds them back from their HDFS location. Since we had already deleted 
all data from HDFS for that table (keeping just the directory name), there were 
no regions to add back. 

The Ruby scripts in the bin directory are quite helpful for achieving many of 
these tasks if we look into them.
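
For reference, a typical invocation of that script on an HBase 0.20.x install 
looks roughly like the sketch below; the table name and paths are illustrative, 
not taken from the thread:

```shell
# Rough sketch (assumed paths/table name): run the bundled JRuby script
# against a table's directory in HDFS. add_table.rb walks the region
# directories under that path and re-inserts matching rows into .META.
cd $HBASE_HOME
bin/hbase org.jruby.Main bin/add_table.rb /hbase/TABLE1
```

If the table's HDFS directory is empty, as in the case described above, the 
script finds no regions and .META. is simply left without entries for that 
table.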

-Avani 
-----Original Message-----
From: Jinsong Hu [mailto:[email protected]] 
Sent: Monday, September 13, 2010 1:54 PM
To: [email protected]
Subject: Re: Failure in truncating table 

You delete the regions that start with your table name.
If you go to the master's web page, you can see all the region names there.

I was told HBase has a race condition that causes this, and that it will be 
fixed.
I am a user myself, so I have no idea when it will be fixed, but from my own 
experience this procedure removes a table.
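
The procedure Jimmy describes (stop clients, clean .META., remove the HDFS 
directory, restart, re-create) can be sketched from the command line roughly 
as follows. This is an illustrative outline only: the table name, column 
family, and the .META. row key shown are hypothetical, and the exact row keys 
must be copied from what the scan actually prints:

```shell
# 1. Stop all clients and wait long enough for HLog records to reach the regions.

# 2. From the HBase shell, find and delete the table's region rows in .META.:
#      scan '.META.', { STARTROW => 'TABLE1,', LIMIT => 10 }
#      deleteall '.META.', 'TABLE1,,1284412345678'   # repeat per region row found
#    (the row key above is made up; use the keys the scan returns)

# 3. Shut down HBase and remove the table's directory from HDFS:
bin/stop-hbase.sh
hadoop fs -rmr /hbase/TABLE1

# 4. Start HBase again and re-create the table:
bin/start-hbase.sh
echo "create 'TABLE1', 'cf'" | bin/hbase shell
```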

Jimmy.

--------------------------------------------------
From: "Sharma, Avani" <[email protected]>
Sent: Monday, September 13, 2010 1:32 PM
To: <[email protected]>
Subject: RE: Failure in  truncating table

> Thanks. I have no idea how to achieve step 2. When I do a scan of .META., 
> the table that I want to truncate has a lot of entries in .META.
>
> Also, can this exception be avoided in the future when I have to truncate a 
> table?
>
>
>
> -----Original Message-----
> From: Jinsong Hu [mailto:[email protected]]
> Sent: Monday, September 13, 2010 1:26 PM
> To: [email protected]
> Subject: Re: Failure in truncating table
>
> I found out that you need to do this:
>
> 1. Stop the client and wait long enough for all the HLog records to be
> flushed to the regions.
> 2. Remove the table's records from the .META. table. There is no utility
> for this, so I wrote a program to do it.
> 3. Shut down HBase, then remove the /hbase/xxx directory from HDFS.
> 4. Start HBase again, then create the table again.
>
> Jimmy
>
> --------------------------------------------------
> From: "Sharma, Avani" <[email protected]>
> Sent: Monday, September 13, 2010 1:17 PM
> To: <[email protected]>
> Subject: Failure in  truncating table
>
>> I am unable to truncate a table. When I loaded data into this table using
>> an MR job (TableOutputFormat), I noticed some errors in the log that make
>> me suspect not all records were loaded. Hence, I need to truncate the
>> table and reload the data.
>>
>> Is there a way to directly delete from HDFS ?
>>
>>
>> -----Original Message-----
>> From: Sharma, Avani [mailto:[email protected]]
>> Sent: Monday, September 13, 2010 11:07 AM
>> To: [email protected]
>> Subject: tables lost when truncating another table
>>
>> Here is a log of what I did as a result of which I do not see any tables.
>> TABLE1 is a relatively bigger table for my table sizes, but not by HBase
>> standards. I had loaded 15-20 versions of ~2G files into this table.
>>
>> machine_name:~ hadoop$curl http://localhost:8080
>> TABLE1
>> TABLE2
>> TABLE3
>> TABLE4
>> machine_name:~ hadoop$hbase shell
>> HBase Shell; enter 'help<RETURN>' for list of supported commands.
>> Version: 0.20.6, r965666, Mon Jul 19 15:48:07 PDT 2010
>> hbase(main):002:0> truncate 'TABLE1'
>> Truncating TABLE1; it may take a while
>> Disabling table...
>> NativeException: org.apache.hadoop.hbase.RegionException: Retries
>> exhausted, it took too long to wait for the table TABLE1 to be disabled.
>>
>> hbase(main):003:0> disable 'TABLE1'
>> NativeException: org.apache.hadoop.hbase.RegionException: Retries
>> exhausted, it took too long to wait for the table TABLE1 to be disabled.
>>
>> hbase(main):004:0> disable 'TABLE1'
>> NativeException: org.apache.hadoop.hbase.TableNotFoundException: TABLE1
>>
>> hbase(main):005:0> truncate 'TABLE1'
>> NativeException: java.lang.NullPointerException: null
>>
>> hbase(main):006:0> list
>> 0 row(s) in 6.0320 seconds
>> hbase(main):007:0> exit
>> machine_name:~ hadoop$curl http://localhost:8080
>> machine_name:~ hadoop$
>>
>> Thanks,
>> Avani Sharma
>>
>>
>>
> 
