An hour to disable? That doesn't sound right at all :)

I would approach this problem like I generally do with HBase issues:
first check the master log for any weirdness regarding the problem (in
this case, grep for the table name).
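The grep itself can be as simple as the sketch below. The log path,
table name, and sample log lines are all made up for illustration
(master logs normally live under $HBASE_HOME/logs, and the real
message text differs by HBase version):

```shell
# Fabricated sample log for illustration only -- real master log
# messages differ by HBase version.
cat > /tmp/sample-master.log <<'EOF'
2011-02-24 23:40:01 INFO Disabling table my_table
2011-02-24 23:41:12 INFO Closing region my_table,,1298580000000
EOF

TABLE=my_table   # hypothetical: the table you tried to disable
# On a real install you would grep $HBASE_HOME/logs/hbase-*-master-*.log
grep "$TABLE" /tmp/sample-master.log
```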

Then I would look at the region server log(s) of the nodes that were
hosting regions from that table. You should see the steps taken to
disable the regions (starting to close, flush, region completely
closed).
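On the region servers you can grep for the table name together with
the close/flush keywords; a rough sketch, where the log path is an
assumption and the pattern matches keywords rather than exact HBase
message text:

```shell
TABLE=my_table                                       # hypothetical table name
RS_LOG="/var/log/hbase/hbase-*-regionserver-*.log"   # assumed log path
PATTERN="${TABLE}.*([Cc]los|[Ff]lush)"
# -E enables extended regexps; the fallback keeps this sketch runnable
# on a machine that has no region server log at that path.
grep -E "$PATTERN" $RS_LOG 2>/dev/null || echo "no matching log lines found"
```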

If you are able to do it while it's taking a very long time to
disable, try to jstack the process that seems to be hanging.
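A minimal sketch of that, assuming a standard deployment where the
region server JVM registers as HRegionServer (jps and jstack ship
with the JDK):

```shell
# Take a couple of dumps ~10 seconds apart to see which threads are
# actually stuck, not just momentarily busy.
PID=$(jps 2>/dev/null | awk '/HRegionServer/ {print $1}')
if [ -n "$PID" ]; then
  jstack "$PID" > "/tmp/rs-$PID.jstack"
  echo "thread dump written to /tmp/rs-$PID.jstack"
else
  echo "no HRegionServer process found on this machine"
fi
```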

Finally, like I said in another thread, there's a bug in 0.20.6 that
almost prevents disabling (or re-enabling) a table if any region
recently split and the parent wasn't cleaned yet from .META.; that is
fixed in 0.90.1.
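You can check whether that's what is biting you by scanning .META.
for the table's rows; a sketch (the table name is hypothetical, and
the fallback keeps it runnable when hbase isn't installed locally):

```shell
TABLE=my_table   # hypothetical table name
# Build an hbase shell scan of .META. starting at this table's rows.
CMD="scan '.META.', {STARTROW => '${TABLE},'}"
# Undeleted split parents show up as rows whose info:regioninfo says
# OFFLINE => true and which still carry info:splitA / info:splitB.
echo "$CMD" | hbase shell 2>/dev/null || echo "hbase shell not on PATH"
```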

J-D

On Thu, Feb 24, 2011 at 11:37 PM, Nanheng Wu <[email protected]> wrote:
> I think you are right, maybe in the long run I need to re-architect my
> system so that it doesn't need to create new and delete old tables all
> the time. In the short term I am having a really hard time with the
> disabling function: I ran a disable command on a very small table
> (probably a dozen MBs in size) with no clients using the cluster at
> all, and it took about an hour to complete! The weird thing is on the
> web UI only the region server carrying the META table has non-zero
> requests, all other RS have 0 requests the entire time. I would think
> they should get some requests to flush the memstore at least. I *am*
> using the same RS nodes for a map reduce job at the same time, and top
> shows the memory usage is almost full on the META region. Would you
> have some idea of what I should investigate?
> Thanks so much.
