Hi, I looked through the code, and it doesn't look like anything automatically cleans up these replication servicer advertisement endpoints when a tserver is removed. You didn't specify a version, but this appears to be the case regardless of which version you're using. You can clean them up manually, though, by deleting their nodes with the zkCli.sh shell. That will prevent new replication requests from choosing a non-existent tserver to service the RPC (which is what these values in ZooKeeper are used for).
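Something like the following should work. This is just a sketch: the ZooKeeper host and the stale tserver entries are placeholders, and the path matches the one you quoted (depending on your setup it may include an instance-id component, e.g. /accumulo/<instance-id>/replication/tservers). Note that if a node has children you'll need rmr (or deleteall on newer ZooKeeper releases) instead of delete.

    $ zkCli.sh -server <zookeeper-host>:2181
    # list the advertised replication tservers to find the stale entries
    [zk: <zookeeper-host>:2181(CONNECTED) 0] ls /accumulo/replication/tservers
    [old-tserver-1:9997, old-tserver-2:9997, new-tserver-1:9997]
    # delete each node that corresponds to a decommissioned tserver
    [zk: <zookeeper-host>:2181(CONNECTED) 1] delete /accumulo/replication/tservers/old-tserver-1:9997
    [zk: <zookeeper-host>:2181(CONNECTED) 2] delete /accumulo/replication/tservers/old-tserver-2:9997

After that, the replication section of the monitor should stop waiting on the old tservers.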
On Tue, Jul 20, 2021 at 11:59 AM Shailesh Ligade <slig...@fbi.gov> wrote:
>
> Hello,
>
> My current hdfs cluster is not rack aware. I need to add new
> tservers/datanodes so that they will be in different AZ (AWS). I provisioned
> the new tservers nodes and added to the cluster (updated accumulo slaves
> file). Accumulo monitor showed the correct list. Then I shutdown the old
> tserver one at a time, ensuring that hdfs replicate whatever it needs to,
> before shutting down next tserver.
>
> At the end I cleaned accumulo slaves file and ensure that accumulo is up and
> running, however, under replication section I still see it is waiting for old
> tservers. Also in the zookeeper, I see list of old and new tservers under
> /accumulo/replication/tservers, however /accumulo/tservers list is correct.
> Accumulo monitor shows correct tablet servers.
>
> How can I clean this up?
>
> S