Thank you very much for your response!

2 things:

1) If I don't restart the node after changing the seed list, it will never
become a seed, and I want to be sure I don't end up in a spot where I have no
seed nodes at all, since that would mean I can't add a node to the cluster.
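
Just to make sure we are talking about the same change: by "seed list" I mean
the seed_provider section in cassandra.yaml, roughly like the sketch below
(the IPs are just placeholders for my nodes):

    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.1.10,10.0.2.10,10.0.3.10"

and, as far as I understand, a node only reads this at startup, which is why
the restart question matters to me.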

2) We have i3.xlarge instances with the data directory on an ephemeral XFS
filesystem, and hints, commit_log and saved_caches on an EBS volume.
Whenever AWS is going to retire an instance because of degraded underlying
hardware, is it better to:

Option 1)
   - nodetool drain
   - Stop Cassandra
   - Stop and start the instance from AWS so it is restored on a different
VM/hypervisor
   - Start Cassandra with -Dcassandra.replace_address

OR
Option 2)
 - Add a new node and wait for it to reach NORMAL status
 - Decommission the node that is going to be retired
 - Run cleanup with cstar across the datacenters

? (I've sketched the rough commands I have in mind for both options below.)
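
To be concrete, this is roughly what each option would look like on the
command line. The instance ID and IP below are placeholders, the service
commands depend on how Cassandra is installed, and the cstar invocation is
from memory, so please correct me if any of it is off:

Option 1 (replace in place):

    nodetool drain
    sudo systemctl stop cassandra
    # stop/start (not reboot) so AWS moves the instance to healthy hardware;
    # this wipes the ephemeral NVMe data directory on i3
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 start-instances --instance-ids i-0123456789abcdef0
    # before starting Cassandra, add the replace flag, e.g. in cassandra-env.sh:
    # JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<ip_of_this_node>"
    sudo systemctl start cassandra

Option 2 (add a node, then decommission):

    # bootstrap the new node normally and wait until it shows UN in
    nodetool status
    # then, on the node being retired
    nodetool decommission
    # and finally run cleanup everywhere, e.g. with cstar
    cstar run --command='nodetool cleanup' --seed-host=<any_live_node>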

Thanks,

Sergio




On Thu, 13 Feb 2020 at 18:15, Erick Ramirez <erick.rami...@datastax.com>
wrote:

>> I decommissioned this node and did all the steps mentioned except the
>> -Dcassandra.replace_address, and now it is streaming correctly!
>
>
> That works too but I was trying to avoid the rebalance operations (like
> streaming to restore replica counts) since they can be expensive.
>
>> So basically, if I want this new node to be a seed, should I add its IP
>> address after it has joined the cluster, and then:
>> - nodetool drain
>> - restart Cassandra?
>
>
> There's no need to restart C* after updating the seeds list. It will just
> take effect the next time you restart.
>
>> I disabled the upcoming repairs in the cluster while this node is joining.
>> When you add a node, is it better to stop the repair process?
>
>
> It's not necessary to do so if you have sufficient capacity in your
> cluster. Topology changes are a normal part of a C* cluster's operation,
> just like repairs. But when you temporarily disable repairs, the existing
> nodes have more capacity to bootstrap a new node, so there is a benefit
> there. Cheers!
>
