On Wed, Feb 2, 2011 at 8:41 PM, Wayne <[email protected]> wrote:
> The region counts are the same per region server, which is good. My
> problem is that I have 5 tables and several region servers only serve
> one table's regions.

I wonder if this is an effect of our deploying splits to the same
server as the split parent?  Once that phenomenon takes hold, nothing
will break it that I can see (on restart, we try our best in 0.90.0 to
redeploy regions to where they were running pre-shutdown so we don't
lose locality).

> I would like to round-robin and scatter all tables across all
> region servers. Basically, the distribution is not round-robin enough,
> and manually moving regions is not going to help me. Frankly, this goes
> against the concept of bigger/fewer regions. Given what I am seeing,
> without an alternative I will reduce the max size of the regions, and
> once I get into the 100s of regions per region server this problem will
> be resolved. Having fewer regions is dangerous when it comes to
> avoiding hot spots.
>
> Is there a way to turn off the memory, across restarts, of where a
> region lived? This might help re-balance from scratch.
>

From AssignmentManager:

    // Determine what type of assignment to do on startup
    boolean retainAssignment = master.getConfiguration().
      getBoolean("hbase.master.startup.retainassign", true);

It looks like you could set the above flag to false in your
hbase-site.xml and that should do it (it's on by default).
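For example, something like the following in hbase-site.xml (a sketch;
the property name is taken from the snippet above, so double-check it
against your version):

  <property>
    <name>hbase.master.startup.retainassign</name>
    <value>false</value>
  </property>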

You could also knock a few of your regionservers out of the cluster,
wait till their regions are assigned elsewhere, then bring them back up
again and force a run of assignment.  That might mess stuff up enough?
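A rough sketch of that dance, assuming the stock start/stop scripts are
present on each regionserver host:

  # On a regionserver host: stop the regionserver and wait for the
  # master to notice and assign its regions elsewhere.
  $ ./bin/hbase-daemon.sh stop regionserver
  # Once its regions have been reassigned, bring it back up.
  $ ./bin/hbase-daemon.sh start regionserver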

St.Ack
