Hi,

Were you trying to remove them using the process documented here?
http://clearwater.readthedocs.io/en/latest/Clearwater_Elastic_Scaling.html
i.e. did you run

     sudo service clearwater-etcd decommission

on the Ralf nodes that you were decommissioning?

Regardless, it looks like the Ralf nodes have failed to leave the memcached 
cluster. To resolve this, you should follow the process outlined here: 
http://clearwater.readthedocs.io/en/latest/Handling_Failed_Nodes.html#removing-a-failed-node,
ensuring that you follow the instructions for "Removing a Node from a
Datastore", and in particular that you run the script for each of that
datastore's failed nodes simultaneously.

As for your second question: no, you do not need to remove the Ralf nodes' IP 
addresses from the local_config file on other nodes. That value is only used 
when joining the etcd cluster.
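For reference, the value in question is the etcd_cluster setting in each
node's local_config file, which looks something like the following (an
illustrative fragment only, using IPs from your output; your file will
contain other settings too):

```
# local_config (illustrative fragment)
etcd_cluster=192.168.3.46,192.168.3.47,192.168.3.48
```

Since it is only read when a node first joins the etcd cluster, a stale entry
there has no effect on already-joined nodes.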

Also, it looks like you're not signed up to the mailing list with this email 
address, which means that we don't see your emails as quickly as we would 
otherwise. I'd suggest that you either use the email address with which you 
signed up to the list, or sign up to the list with this address too.

Regards,

Seb.

-----Original Message-----
From: Clearwater [mailto:[email protected]] On Behalf Of 
Faustinoni Fabrizio
Sent: 27 January 2017 11:11
To: [email protected]
Subject: Remove Ralf nodes

Hi guys,

I have a Clearwater cluster (latest version, manual installation). I wanted to 
remove the Ralf nodes from the cluster because we are not using them.

I think that something went wrong, because if I run:
/usr/share/clearwater/clearwater-cluster-manager/scripts/check_cluster_state

I can still see the Ralf Memcached cluster in the output:

Describing the Homer Cassandra cluster:
  The local node is *not* in this cluster
  The cluster is stable
    192.168.3.42 is in state normal
    192.168.3.43 is in state normal
    192.168.3.44 is in state normal
    192.168.3.45 is in state normal

Describing the Homestead Cassandra cluster:
  The local node is *not* in this cluster
  The cluster is stable
    192.168.3.51 is in state normal
    192.168.3.50 is in state normal
    192.168.3.53 is in state normal
    192.168.3.52 is in state normal

Describing the Ralf Chronos cluster in site site1:
  The local node is *not* in this cluster
  The cluster is stable

Describing the Ralf Memcached cluster in site site1:
  The local node is *not* in this cluster
  The cluster is *not* stable
    192.168.3.55 is in state normal, config changed
    192.168.3.54 is in state leaving, config changed

Describing the Sprout Chronos cluster in site site1:
  The local node is *not* in this cluster
  The cluster is stable
    192.168.3.48 is in state normal
    192.168.3.56 is in state normal
    192.168.3.46 is in state normal
    192.168.3.47 is in state normal
    192.168.3.49 is in state normal

Describing the Sprout Memcached cluster in site site1:
  The local node is *not* in this cluster
  The cluster is stable
    192.168.3.48 is in state normal
    192.168.3.56 is in state normal
    192.168.3.46 is in state normal
    192.168.3.47 is in state normal
    192.168.3.49 is in state normal


I don’t think this is a bug; I think I did something wrong.

How can I remove the Ralf nodes completely?
I also don’t understand whether I have to remove the nodes from the 
etcd_cluster value in the local_config file on all the nodes of the cluster.

Thanks

_______________________________________________
Clearwater mailing list
[email protected]
http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org
