Interesting: server7.ec3.internal,60020,1423845018628 was consistently
chosen as the destination for the table's regions.
Did server7.ec3.internal,60020,1423845018628 also host regions from other
tables?
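
A quick way to check is to count, per table, the regions that this RS is
currently hosting. Below is a minimal sketch, assuming the 0.98 client API
that ships with CDH 5.3 (the class and variable names are just for
illustration, and the server name is the one quoted above):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class RegionsOnServer {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      // Server name exactly as it appears in the master UI / logs:
      // host,port,startcode
      ServerName sn =
          ServerName.valueOf("server7.ec3.internal,60020,1423845018628");
      // All regions currently online on that RegionServer
      List<HRegionInfo> regions = admin.getOnlineRegions(sn);
      Map<TableName, Integer> perTable = new HashMap<TableName, Integer>();
      for (HRegionInfo hri : regions) {
        Integer count = perTable.get(hri.getTable());
        perTable.put(hri.getTable(), count == null ? 1 : count + 1);
      }
      for (Map.Entry<TableName, Integer> e : perTable.entrySet()) {
        System.out.println(e.getKey() + ": " + e.getValue() + " region(s)");
      }
    } finally {
      admin.close();
    }
  }
}

If every table shows up there in roughly similar numbers, that matches the
cluster-level balance you described; if only MYTABLE_RECENT_4W_V2 is
concentrated on that server, it is the per-table distribution that is off.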

Cheers

On Fri, Feb 13, 2015 at 10:27 AM, Shahab Yunus <[email protected]>
wrote:

> Table name is:
> MYTABLE_RECENT_4W_V2
>
> Pastebin snippet 1: http://pastebin.com/dQzMhGyP
> Pastebin snippet 2: http://pastebin.com/Y7ZsNAgF
>
> This is the master log after invoking the balancer command from the hbase
> shell.
>
> Regards,
> Shahab
>
> On Fri, Feb 13, 2015 at 12:00 PM, Ted Yu <[email protected]> wrote:
>
> > bq. all the regions of this table were back on this same RS!
> >
> > Interesting. Please check the master log around the time this RS was
> > brought online. You can pastebin the relevant snippet.
> >
> > Thanks
> >
> > On Fri, Feb 13, 2015 at 8:55 AM, Shahab Yunus <[email protected]>
> > wrote:
> >
> > > Hi Ted.
> > >
> > > Yes, the cluster itself is balanced: on average about 300 regions per
> > > node across the 10 nodes.
> > >
> > > There are 53 tables, of varying sizes.
> > >
> > > The balancer was invoked and it didn't do anything (i.e. no movement of
> > > regions), but we didn't check the master's logs. We can do that.
> > >
> > > Interestingly, we restarted the RS which was holding all the regions of
> > > this one table, and the regions were nicely spread out across the
> > > remaining RSs. But when we brought this RS back, all the regions of this
> > > table were back on this same RS!
> > >
> > > Thanks.
> > >
> > >
> > > Regards,
> > > Shahab
> > >
> > > On Fri, Feb 13, 2015 at 11:46 AM, Ted Yu <[email protected]> wrote:
> > >
> > > > How many tables are there in your cluster?
> > > >
> > > > Is the cluster balanced overall (in terms of number of regions per
> > > > server) but this table is not?
> > > >
> > > > What happens (check the master log) when you issue the 'balancer'
> > > > command through the shell?
> > > >
> > > > Cheers
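
For reference, the 'balancer' call above can also be issued from the Java
client. A minimal sketch against the 0.98 HBaseAdmin API, roughly the
equivalent of running 'balance_switch true' followed by 'balancer' in the
shell:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class RunBalancer {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      // Make sure the balancer switch is on; returns the previous state.
      boolean wasOn = admin.setBalancerRunning(true, true);
      System.out.println("Balancer switch was previously on: " + wasOn);
      // Ask the master for one balancing pass; returns false if no pass was
      // run (for example, regions in transition or the switch is off).
      boolean ran = admin.balancer();
      System.out.println("Balancer pass triggered: " + ran);
    } finally {
      admin.close();
    }
  }
}

Either way, the master log should say why a pass did or did not move
anything.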
> > > >
> > > > On Fri, Feb 13, 2015 at 8:19 AM, Shahab Yunus <[email protected]>
> > > > wrote:
> > > >
> > > > > CDH 5.3
> > > > > HBase 0.98.6
> > > > >
> > > > > We are writing data to an HBase table through an M/R job. We
> > > > > pre-split the table before each job run (see the pre-split sketch
> > > > > below). The problem is that most of the regions end up on the same
> > > > > RS. That one RS then becomes severely overloaded, and subsequent
> > > > > M/R jobs fail when trying to write to the regions on that RS.
> > > > >
> > > > > The balancer is on and the split policy is the default; no changes
> > > > > there. It is a 10-node cluster.
> > > > >
> > > > > All other related properties are at their defaults too.
> > > > >
> > > > > Any idea how we can force balancing of the new regions? Do we have
> > > > > to factor compaction into the equation as well? Thanks.
> > > > >
> > > > > Regards,
> > > > > Shahab
> > > > >
> > > >
> > >
> >
>
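
For completeness, here is the kind of pre-split creation referred to above,
as a minimal sketch against the 0.98 admin API. The column family name and
split points are hypothetical placeholders; the important part is that the
split points match the row-key distribution the M/R job actually writes,
otherwise most of the data still lands in a single region regardless of
where the regions are assigned:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      HTableDescriptor desc =
          new HTableDescriptor(TableName.valueOf("MYTABLE_RECENT_4W_V2"));
      desc.addFamily(new HColumnDescriptor("cf"));  // placeholder family name
      // Hypothetical split points; pick boundaries that divide the real
      // row-key space evenly.
      byte[][] splitKeys = new byte[][] {
          Bytes.toBytes("2"), Bytes.toBytes("4"),
          Bytes.toBytes("6"), Bytes.toBytes("8")
      };
      // Creates the table with five regions; the master then assigns the new
      // regions, which by default should be spread across the RegionServers.
      admin.createTable(desc, splitKeys);
    } finally {
      admin.close();
    }
  }
}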
