These are instances in the cloud and we're using Consul for name
resolution. Regarding network settings, your question is a bit broad...
Which settings would you recommend checking first?
On Mon, Oct 17, 2016 at 5:28 PM, Dima Spivak <dimaspi...@apache.org> wrote:
> Hey Alexander,
> Could something be amiss in your network settings? Seeing phantom datanodes
> could be tripping things up. Are these physical machines or instances in
> the cloud?
> On Monday, October 17, 2016, Alexander Ilyin <alexan...@weborama.com> wrote:
> > Hi,
> > We have a 7-node HBase cluster (version 1.1.2) and we change some of its
> > settings from time to time, which requires a restart. The problem is that
> > every time after the restart the load balancer reassigns the regions,
> > making data locality low.
> > To address this issue we tried the settings described here:
> > https://issues.apache.org/jira/browse/HBASE-6389,
> > "hbase.master.wait.on.regionservers.interval" in particular. We tried it
> > two times in slightly different ways, but neither of them worked. First
> > we did a rolling restart (master, then each of the datanodes) and saw 14
> > datanodes instead of 7 in the Master UI. Half of them had regions on them
> > while the other half were empty. We restarted the master only and got 7
> > datanodes in the Master UI. After that we rolled back the setting.
> > The second time we restarted the master and the datanodes at the same
> > time, but the master failed to read the meta table, moved it to a
> > different datanode, and reassigned the regions again.
> > Please advise on how to use hbase.master.wait.on.regionservers.*
> > properly. Launching major compactions for all the tables after each
> > change seems like overkill. I'm attaching the Master server logs with
> > the relevant lines for the two attempts mentioned above.
> > Thanks in advance.
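For anyone following this thread, here is a minimal hbase-site.xml sketch of the hbase.master.wait.on.regionservers.* family referenced above. The property names are the standard ones from hbase-default.xml; the values are only illustrative, not a tested recommendation for this cluster:

```xml
<!-- Minimum number of region servers that must check in before the
     master starts region assignment. Setting this to the full cluster
     size (7 here) aims to ensure all servers are seen before any
     regions are assigned, which is what preserves locality. -->
<property>
  <name>hbase.master.wait.on.regionservers.mintostart</name>
  <value>7</value>
</property>

<!-- Quiet period in milliseconds: the master keeps waiting until no
     new region server has checked in for this long. -->
<property>
  <name>hbase.master.wait.on.regionservers.interval</name>
  <value>3000</value>
</property>

<!-- Overall cap in milliseconds on how long the master waits for
     region servers before proceeding with assignment anyway. -->
<property>
  <name>hbase.master.wait.on.regionservers.timeout</name>
  <value>30000</value>
</property>
```

The intent of the mechanism: with mintostart at the full cluster size and a generous timeout, the master should not begin assignment until every region server has re-registered after a restart, so regions can go back to their previous hosts instead of being rebalanced onto whichever servers checked in first.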