Hi Marjana,

I guess OpenInx's (very valid) point here is that between steps #2 and #3, you
need to make sure there is no lag for ORIGINAL_PEER_ID, because if it has a
large backlog, some of the edits pending in its queue may have arrived
before you added NEW_PEER_ID in step #1. In that case, since
ORIGINAL_PEER_ID will never reach the slave cluster again, those potential
old edits that never made it into the NEW_PEER_ID queue would be lost once
ORIGINAL_PEER_ID is deleted.
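For reference, the ordering being discussed could be sketched roughly as below in the hbase shell. This is only an illustration, not a verified procedure: the peer IDs and the ZK quorum string are placeholders, and the `add_peer` syntax shown is the hbase-2.x map form (hbase-1.x takes the cluster key as a plain string).

```shell
# Step 1: add a peer pointing at the slave cluster via its NEW ZK quorum.
# (Placeholder peer ID and quorum; adjust the port and znode parent as needed.)
add_peer 'NEW_PEER_ID', CLUSTER_KEY => "new-zk1,new-zk2,new-zk3:2181:/hbase"

# Before touching the old peer, confirm its queue has drained --
# 'status' reports SizeOfLogQueue and AgeOfLastShippedOp per peer,
# which should be at/near zero for ORIGINAL_PEER_ID.
status 'replication'

# Step 2: stop shipping through the old peer.
disable_peer 'ORIGINAL_PEER_ID'

# Step 3: only once ORIGINAL_PEER_ID showed no backlog, drop it.
remove_peer 'ORIGINAL_PEER_ID'
```

Note the check happens while ORIGINAL_PEER_ID is still enabled: once a peer is disabled its queue is frozen, so any remaining lag at that point would never drain.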

On Thu, Jul 11, 2019 at 11:06 AM, marjana <mivko...@us.ibm.com> wrote:

> Hi OpenInx,
> Correct, only ZK is being moved; the hbase slave stays the same. I moved
> that earlier effortlessly.
> In order to move ZK, I will have to stop hbase. While it is down, hlogs
> will accumulate for both the NEW_ID and ORIGINAL_ID peers. Once I start
> hbase, hlogs for NEW_ID will start replicating, and hlogs for ORIGINAL_ID
> will, I hope, be disregarded when I drop that peer. I won't miss any data
> as long as I add_peer (step 1) before I disable_peer (step 2).
> Thanks
>
>
>
>
