I'm restarting it through Ambari. The first time I specified a delay between
regionserver restarts; the second time I didn't. I'm not sure whether Ambari
uses the graceful restart script internally, but I can try to use it directly,
along the lines of the sketch below.
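
A rough sketch of what I have in mind, based on my reading of the reference
guide (the hostname is just a placeholder, so please correct me if I've got
the script or its flags wrong):

  # restart a single regionserver, unloading its regions first and
  # reloading them back onto it afterwards
  ./bin/graceful_stop.sh --restart --reload --debug rs-node-01.example.com

  # or, one regionserver at a time across the whole cluster
  for i in `cat conf/regionservers | sort`; do
    ./bin/graceful_stop.sh --restart --reload --debug $i
  done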

On Mon, Oct 17, 2016 at 6:00 PM, Jeremy Carroll <phobos...@gmail.com> wrote:

> How are you restarting the cluster? In my experience, a graceful rolling
> restart retains locality.
>
> For each regionserver (one at a time), run the graceful restart script to
> retain local blocks. The master configuration option you specified only
> works on a full cluster reboot (or master reboot).
>
> On Mon, Oct 17, 2016 at 8:26 AM Alexander Ilyin <alexan...@weborama.com>
> wrote:
>
> > Hi,
> >
> > We have a 7-node HBase cluster (version 1.1.2) and we change some of its
> > settings from time to time, which requires a restart. The problem is that
> > after every restart the load balancer reassigns the regions, making data
> > locality low.
> >
> > To address this issue we tried the settings described here:
> > https://issues.apache.org/jira/browse/HBASE-6389,
> > "hbase.master.wait.on.regionservers.interval" in particular. We tried it
> > twice in slightly different ways, but neither attempt worked. The first
> > time we did a rolling restart (master, then each of the datanodes) and saw
> > 14 datanodes instead of 7 in the Master UI. Half of them had the regions
> > on them while the other half were empty. We then restarted the master only
> > and got 7 empty datanodes in the Master UI. After that we rolled back the
> > setting.
> >
> > The second time we restarted the master and the datanodes at the same
> > time, but the master failed to read the meta table, moved it to a
> > different datanode and reassigned the regions again.
> >
> > Please advise on how to use the hbase.master.wait.on.regionservers.*
> > settings properly. Launching major compactions for all the tables after
> > each config change seems like overkill. I'm attaching master server logs
> > with the relevant lines for the two attempts mentioned above.
> >
> > Thanks in advance.
> >
>
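
For reference, and so someone can correct me if I've misread HBASE-6389, my
understanding is that these knobs go into hbase-site.xml on the master and
look roughly like the sketch below. The values are only illustrative, not
recommendations for our cluster:

  <!-- minimum number of regionservers that must check in before the
       master proceeds with region assignment -->
  <property>
    <name>hbase.master.wait.on.regionservers.mintostart</name>
    <value>7</value>
  </property>

  <!-- maximum total time (ms) the master will wait for regionservers
       to check in -->
  <property>
    <name>hbase.master.wait.on.regionservers.timeout</name>
    <value>300000</value>
  </property>

  <!-- stop waiting once no new regionserver has checked in for this
       long (ms), provided mintostart is already satisfied -->
  <property>
    <name>hbase.master.wait.on.regionservers.interval</name>
    <value>30000</value>
  </property>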
