Re: Cassandra on K8S

2020-08-03 Thread manish khandelwal
I am asking this because the only case where I see an IP swap occurring is when two Cassandra nodes are running on the same K8S host node. I am evaluating how safe it is to run two Cassandra nodes on a single K8S host node. Totally agree that swap is not the right word, but one node still taking
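
A minimal sketch of ruling out that co-location with required pod anti-affinity, using the official kubernetes Python client; the app=cassandra label and attaching this to the StatefulSet spec are assumptions, not details from the thread:

# Sketch: required pod anti-affinity so that no two Cassandra pods are scheduled
# onto the same K8S worker. The app=cassandra label is a placeholder.
from kubernetes import client

cassandra_anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(
                    match_labels={"app": "cassandra"}
                ),
                # At most one matching pod per worker node.
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)
# This object would go into spec.template.spec.affinity of the Cassandra StatefulSet.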

Re: Cassandra on K8S

2020-08-03 Thread manish khandelwal
But again, if some Cassandra node (pod) with a particular IP X is down and a second Cassandra node (pod) tries to take the IP X of the first Cassandra node, the second Cassandra node should fail to join the cluster, as the Cassandra cluster will complain that IP X is already occupied. In that sense an actual swap of

Re: Cassandra on K8S

2020-08-03 Thread Christopher Bradford
In *most* k8s environments each Kubernetes worker receives its own dedicated CIDR range, carved from the cluster’s CIDR space, for allocating pod IP addresses. The issue described can occur when a k8s worker goes down and then comes back up, and the pods are rescheduled such that either pod starts up with another

Re: Cassandra on K8S

2020-08-03 Thread manish khandelwal
> I have started reading about how to deploy Cassandra with K8S. But as I read more, I feel there are a lot of challenges in running Cassandra on K8s. Some of the challenges I see are: 1. POD IP identification - if the pods go down and their IPs change when they come up, how is
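
Pod IPs can at least be looked up rather than assumed stable. A minimal sketch with the official kubernetes Python client; the cassandra namespace and app=cassandra label are placeholders, not details from the thread:

# Sketch: print the current IP and host node of each Cassandra pod.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod(namespace="cassandra", label_selector="app=cassandra")
for pod in pods.items:
    print(pod.metadata.name, pod.status.pod_ip, pod.spec.node_name)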

Re: Re: streaming stuck on joining a node with TBs of data

2020-08-03 Thread Jeff Jirsa
The memtable really isn't involved here: each data file is copied over as-is and turned into a new data file; it isn't read into the memtable (though it is deserialized and re-serialized, which temporarily holds it in memory, but not in the memtable itself). You can cut down on the number of data

Re: many instances of org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier$1 on the heap

2020-08-03 Thread jelmer
It did look like there were repairs running at the time. The LiveSSTableCount for the entire node is about 2200 SSTables; for the keyspace that was being repaired it's just 150. We run Cassandra 3.11.6, so we should be unaffected by CASSANDRA-14096. We use http://cassandra-reaper.io/ for the repairs
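
A rough way to cross-check those numbers is to sum the per-table "SSTable count" lines printed by nodetool tablestats; the keyspace name below is a placeholder and the exact output wording is an assumption:

# Sketch: add up the SSTable counts reported by `nodetool tablestats <keyspace>`.
# Assumes nodetool is on PATH and that each table section contains an
# "SSTable count: N" line (assumed 3.11-era output format).
import re
import subprocess

out = subprocess.run(
    ["nodetool", "tablestats", "my_keyspace"],  # "my_keyspace" is a placeholder
    capture_output=True, text=True, check=True,
).stdout
counts = [int(n) for n in re.findall(r"SSTable count:\s*(\d+)", out)]
print(f"{len(counts)} tables, {sum(counts)} live sstables")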

Fwd: Re: streaming stuck on joining a node with TBs of data

2020-08-03 Thread onmstester onmstester
IMHO (reading system.log), each streamed-in file from any node is written down to disk as a separate sstable and does not wait in the memtable until a large enough memtable has been built in memory, so there would be more compactions because of the multiple small sstables. Is there any