Re: Adding new DC with different version of Cassandra

2019-07-01 Thread Rahul Reddy
Thanks Jeff. We want to migrate to Apache 3.11.3; once the entire cluster is on Apache we will eventually decommission the DataStax DC. On Mon, Jul 1, 2019, 9:31 AM Jeff Jirsa wrote: > Should be fine, but you probably want to upgrade anyway; there were a few > really important bugs fixed since 3.11.0 …

Re: Need help on dealing with Cassandra robustness and zombie data

2019-07-01 Thread Jeff Jirsa
What you’re describing is likely impossible to do in Cassandra the way you’re thinking. The only practical way to do it is extending gc_grace_seconds and making the tombstone reads less expensive (ordering the clustering columns so you’re not scanning the tombstones, or breaking the partitions into buckets) …
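A minimal sketch of the bucketing idea, assuming a hypothetical time-series table (all names here are illustrative, not from the thread). Adding a time bucket to the partition key bounds the size of each partition, so a read only touches the tombstones in the bucket it targets:

    -- Hypothetical schema: one partition per user per day
    CREATE TABLE IF NOT EXISTS app.events (
        user_id  uuid,
        day      date,       -- the bucket
        event_ts timestamp,
        payload  text,
        PRIMARY KEY ((user_id, day), event_ts)
    ) WITH CLUSTERING ORDER BY (event_ts DESC);

    -- Reads target a single bucket; tombstones in other buckets are never scanned
    SELECT * FROM app.events WHERE user_id = ? AND day = '2019-07-01';

Clustering newest-first also means a LIMIT-ed read of recent rows stops before reaching older, tombstoned data.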

Re: Need help on dealing with Cassandra robustness and zombie data

2019-07-01 Thread yuping wang
Thank you; very helpful. But we do have some difficulties. #1 The Cassandra process itself didn’t go down when the node was marked “DN”... (the node itself might just be temporarily having some hiccup and not be reachable)... so would disabling auto-start still help? #2 We can’t set a longer gc grace because we are very …

Re: Running Node Repair After Changing RF or Replication Strategy for a Keyspace

2019-07-01 Thread Jeff Jirsa
RF=5 allows you to lose two hosts without losing quorum. Many teams can calculate their hardware failure rate and replacement time. If you can do both of these things you can pick an RF that meets your durability and availability SLO. For sufficiently high SLOs you’ll need RF > 3. …
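For reference, a quorum is floor(RF/2) + 1, so RF=5 gives a quorum of 3 and survives two down replicas, where RF=3 survives only one. A sketch with a hypothetical keyspace and DC name:

    -- quorum = floor(RF/2) + 1
    -- RF=3 -> quorum 2, tolerates 1 replica down
    -- RF=5 -> quorum 3, tolerates 2 replicas down
    ALTER KEYSPACE my_ks
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 5};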

Re: Adding new DC with different version of Cassandra

2019-07-01 Thread Jeff Jirsa
Should be fine, but you probably want to upgrade anyway; there were a few really important bugs fixed since 3.11.0. > On Jul 1, 2019, at 3:25 AM, Rahul Reddy wrote: > > Hello All, > > We have a DataStax Cassandra cluster which uses 3.11.0 and we want to add a new > DC with Apache Cassandra …

Re: Need help on dealing with Cassandra robustness and zombie data

2019-07-01 Thread Rhys Campbell
#1 Set the cassandra service to not auto-start. #2 A longer gc_grace time would help. #3 Rebootstrap? If the node doesn't come back within gc_grace_seconds, remove the node, wipe it, and bootstrap it again (see the sketch below). https://docs.datastax.com/en/archived/cassandra/2.0/cassandra/dml/dml_about_deletes_c.html
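A rough sketch of #2 and #3, with hypothetical keyspace/table names and the nodetool steps shown as comments:

    -- #2: gc_grace_seconds is set per table (864000 = the 10-day default)
    ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 864000;

    -- #3: if a node was down longer than gc_grace_seconds:
    --   1. nodetool removenode <host-id>   (run from a live node)
    --   2. wipe the data, commitlog and saved_caches directories on it
    --   3. start it again so it bootstraps fresh from its peers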

Need help on dealing with Cassandra robustness and zombie data

2019-07-01 Thread yuping wang
Hi all, Sorry for the interruption, but I need help. Due to specific reasons of our use case, we have gc grace on the order of 10 minutes instead of the default 10 days. Since we have a large number of nodes in our Cassandra fleet, not surprisingly, we occasionally encounter node status …
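For concreteness, a 10-minute grace window like the one described would look something like this (table name hypothetical); any node that stays down longer than this and then rejoins can resurrect deleted data:

    -- 10-minute tombstone grace window instead of the 10-day default (864000)
    ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 600;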

Adding new DC with different version of Cassandra

2019-07-01 Thread Rahul Reddy
Hello All, We have a DataStax Cassandra cluster which uses 3.11.0 and we want to add a new DC with Apache Cassandra 3.11.3. We tried doing the same and the data got streamed to the new DC. Since we are able to stream the data, are there any other issues we need to consider? Is it because of the same type of sstables used in …
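The usual keyspace-level step when adding a DC looks roughly like this (keyspace and DC names are hypothetical; they must match what nodetool status reports):

    -- Extend replication to the new DC
    ALTER KEYSPACE my_ks WITH replication =
        {'class': 'NetworkTopologyStrategy', 'dc_old': 3, 'dc_new': 3};
    -- then, on each node in the new DC: nodetool rebuild -- dc_old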

Re: how to change a write's and a read's consistency level separately in cqlsh?

2019-07-01 Thread Oleksandr Shulgin
On Sat, Jun 29, 2019 at 6:19 AM Nimbus Lin wrote: > > On the 2nd question, would you like to tell me how to change a > write's and a read's consistency level separately in cqlsh? > Not that I know of special syntax for that, but you may add an explicit > "CONSISTENCY" command before every …
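A minimal cqlsh session illustrating the suggestion (keyspace/table hypothetical). CONSISTENCY is a session-level cqlsh setting, so you switch it before each statement to get different levels for writes and reads:

    CONSISTENCY QUORUM;    -- applies to the statements that follow
    INSERT INTO my_ks.t (id, val) VALUES (1, 'x');
    CONSISTENCY ONE;       -- switch before the read
    SELECT val FROM my_ks.t WHERE id = 1;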

Re: Running Node Repair After Changing RF or Replication Strategy for a Keyspace

2019-07-01 Thread Oleksandr Shulgin
On Sat, Jun 29, 2019 at 5:49 AM Jeff Jirsa wrote: > If you’re at RF=3 and read/write at quorum, you’ll have full visibility > of all data if you switch to RF=4 and continue reading at quorum, because > quorum of 4 is 3, so you’re guaranteed to overlap with at least one of the > two nodes that …
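The overlap arithmetic behind that claim, plus the repair step the thread title asks about (keyspace/DC names hypothetical):

    -- quorum(RF) = floor(RF/2) + 1
    -- RF=4: quorum 3, so writes + reads touch 3 + 3 = 6 > 4 replicas,
    --       guaranteeing every read quorum intersects every write quorum
    ALTER KEYSPACE my_ks
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 4};
    -- then run a full repair so pre-existing data reaches the new replica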