We have the same question and have not found a solution so far. There is also a
GitHub thread on the same subject.
On Thursday, February 8, 2018 at 22:52:24 UTC+1, ka...@szczygiel.io wrote:
> I'm looking for a good solution for replacing a dead Kubernetes worker node
> that was running Cassandra in Kubernetes. The scenario:
> - The Cassandra cluster is built from 3 pods.
> - A failure occurs on one of the Kubernetes worker nodes.
> - A replacement worker node joins the cluster.
> - A new pod from the StatefulSet is scheduled on the new node.
> As the pod's IP address has changed, the new pod is visible as a new
> Cassandra node (4 nodes in the cluster in total) and is unable to bootstrap
> until the dead one is removed. It's very difficult to follow the official
> node replacement procedure, as Cassandra is run as a StatefulSet.
> One completely hacky workaround I've found is to use a ConfigMap to supply
> JAVA_OPTS. As changing a ConfigMap doesn't recreate pods (yet), you can
> manipulate the running pods in such a way that you are able to follow the
> official procedure. However, as I mentioned, that's super hacky. I'm
> wondering if anyone else is running Cassandra on top of Kubernetes and has
> a better idea of how to deal with such a failure?
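For reference, the official Cassandra replacement procedure boils down to starting the replacement node with the `-Dcassandra.replace_address` JVM flag set to the dead node's IP. A minimal sketch of the ConfigMap workaround described above, under the assumption that the pods read extra JVM options from a ConfigMap (the names `cassandra-jvm-opts`, the `jvm-opts` key, and the IP below are all illustrative, not from the original post):

```shell
# 1. Identify the dead node's IP: it shows up as DN (Down/Normal)
#    in nodetool status, run from any healthy pod.
kubectl exec cassandra-0 -- nodetool status

# 2. Put the replace flag into the ConfigMap the pods source their
#    JVM options from. (Hypothetical ConfigMap name and key.)
kubectl create configmap cassandra-jvm-opts \
  --from-literal=jvm-opts="-Dcassandra.replace_address=10.244.2.7" \
  --dry-run=client -o yaml | kubectl apply -f -

# 3. Delete the affected pod; the StatefulSet recreates it, and with
#    the flag set it bootstraps as a replacement for the dead node
#    instead of joining as a fourth node.
kubectl delete pod cassandra-2

# 4. Once the pod shows UN (Up/Normal) in nodetool status, clear the
#    flag so a later restart doesn't attempt another replacement.
kubectl create configmap cassandra-jvm-opts \
  --from-literal=jvm-opts="" \
  --dry-run=client -o yaml | kubectl apply -f -
```

The race in step 3 is exactly why the poster calls this hacky: it only works because updating a ConfigMap does not restart pods, so the flag can be staged before the pod is recreated.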
You received this message because you are subscribed to the Google Groups
"Kubernetes user discussion and Q&A" group.
Visit this group at https://groups.google.com/group/kubernetes-users.