Could something be amiss in your network settings? Seeing phantom datanodes
could be tripping things up. Are these physical machines or instances in
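One common cause of duplicate ("phantom") entries in the Master UI is a
regionserver re-registering under a different name after a restart (for
example, short hostname vs. FQDN, or hostname vs. IP). A quick sanity check
you could run on each node (just a sketch, not HBase tooling):

```shell
# Check that this node's forward and reverse DNS agree. A regionserver that
# reports a different name after restart shows up as a second, "phantom"
# server in the Master UI alongside the old (now dead) registration.
fqdn=$(hostname -f)
short=$(hostname -s)
echo "fqdn=$fqdn short=$short"
# The reverse lookup should map back to the same name the regionserver
# registers with; a missing or mismatched entry is worth investigating.
getent hosts "$fqdn" || echo "no DNS entry for $fqdn"
```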
On Monday, October 17, 2016, Alexander Ilyin <alexan...@weborama.com> wrote:
> We have a 7-node HBase cluster (version 1.1.2) and we change some of its
> settings from time to time, which requires a restart. The problem is that
> every time after the restart the load balancer reassigns the regions, making
> data locality low.
> To address this issue we tried the settings described here:
> "hbase.master.wait.on.regionservers.interval" in particular. We tried it
> two times in slightly different ways, but neither of them worked. The first
> time we did a rolling restart (master, then each of the datanodes) and we saw
> 14 datanodes instead of 7 in the Master UI. Half of them had regions on them
> while the other half were empty. We restarted the master only and got 7 empty
> datanodes in the Master UI. After that we rolled back the setting.
> The second time we restarted the master and datanodes at the same time, but
> the master failed to read the meta table, moved it to a different datanode,
> and reassigned the regions again.
> Please advise on how to use the hbase.master.wait.on.regionservers.* settings
> properly. Launching major compactions for all the tables after each config
> change seems like overkill. Attaching Master server logs with the relevant
> lines for the two attempts mentioned above.
> Thanks in advance.
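For reference, the wait-on-regionservers knobs mentioned above go in
hbase-site.xml on the master. A hedged sketch for a 7-node cluster (the
values are illustrative assumptions, not recommendations): the idea is to
make the master wait until all 7 regionservers have checked in before it
starts assigning regions, so the previous locality-preserving assignments
can be retained.

```xml
<!-- hbase-site.xml on the master; values below are illustrative assumptions -->
<property>
  <!-- Don't start assignment until at least this many regionservers check in -->
  <name>hbase.master.wait.on.regionservers.mintostart</name>
  <value>7</value>
</property>
<property>
  <!-- Upper bound on how long the master waits for regionservers (ms) -->
  <name>hbase.master.wait.on.regionservers.timeout</name>
  <value>300000</value>
</property>
<property>
  <!-- Quiet period: proceed once mintostart is reached and no new
       regionserver has checked in for this long (ms) -->
  <name>hbase.master.wait.on.regionservers.interval</name>
  <value>30000</value>
</property>
```

With these in place, restarting the whole cluster together gives the master a
chance to see all regionservers before assignment begins; a rolling restart of
regionservers one at a time goes through a different path and may still move
regions.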