Re: DC aware failover

2017-11-15 Thread Alexander Dejanovski
Hi, The policy is used in production, at least at my former company. I can help if you have issues using it. Cheers, On Thu, Nov 16, 2017 at 08:32, CPC wrote: > Hi, > > We want to implement a DC aware failover policy. For example, if the application > could not reach some part

DC aware failover

2017-11-15 Thread CPC
Hi, We want to implement a DC aware failover policy. For example, if the application could not reach some part of the ring, or if we lose 50% of the local DC, then we want our application to switch to the other DC automatically. We found this project on GitHub
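
A minimal sketch of the driver-level building block for this, using the DataStax Java driver 3.x (the GitHub project mentioned above presumably layers the 50%-of-local-DC threshold logic on top of something like this; the DC name and contact point below are hypothetical):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
    import com.datastax.driver.core.policies.LoadBalancingPolicy;
    import com.datastax.driver.core.policies.TokenAwarePolicy;

    public class DcFailoverSketch {
        public static void main(String[] args) {
            // Prefer the local DC, but allow a few hosts per remote DC as a fallback
            LoadBalancingPolicy policy = new TokenAwarePolicy(
                DCAwareRoundRobinPolicy.builder()
                    .withLocalDc("DC1")                        // hypothetical local DC name
                    .withUsedHostsPerRemoteDc(3)               // fall back to up to 3 hosts per remote DC
                    .allowRemoteDCsForLocalConsistencyLevel()  // permit remote hosts even for LOCAL_* CLs
                    .build());

            Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")                   // hypothetical contact point
                .withLoadBalancingPolicy(policy)
                .build();
            Session session = cluster.connect();
            // ... run queries; remote-DC hosts are only tried when local ones are unreachable
            cluster.close();
        }
    }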

Re: Repair failing after it was interrupted once

2017-11-15 Thread Erick Ramirez
Check that there are no running repair threads on the nodes with nodetool netstats. For those that do have running repairs, restart C* on them to kill the repair threads and you should be able to repair the nodes again. Cheers! On Wed, Nov 15, 2017 at 8:08 PM, Dipan Shah
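
A minimal command sketch of the check-and-restart sequence described above (the restart command assumes a systemd-managed package install):

    # check each node for repair sessions / streams that are still listed
    nodetool netstats

    # if a node still shows repair activity, restart Cassandra on it to kill the threads
    sudo systemctl restart cassandra

    # then re-run the repair on the affected nodes
    nodetool repair -pr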

Re: Executing a check before replication / manual replication

2017-11-15 Thread Subroto Barua
Turn on auditing on the tables in question, scan the audit logs (using tools like Splunk), and send alerts based on the activity... On Wednesday, November 15, 2017, 12:33:30 PM PST, Abdelkrim Fitouri wrote: Hi, I know that Cassandra handles data replication properly

Executing a check before replication / manual replication

2017-11-15 Thread Abdelkrim Fitouri
Hi, I know that Cassandra handles data replication between cluster nodes properly, but for security reasons I am wondering how to avoid data replication after a server node has been compromised and someone is executing modifications via cqlsh? Is there a possibility in Cassandra to execute

Re: CQL Map vs clustering keys

2017-11-15 Thread Jon Haddad
In 3.0, clustering columns are not actually part of the column name anymore. Yay. Aaron Morton wrote a detailed analysis of the 3.x storage engine here: http://thelastpickle.com/blog/2016/03/04/introductiont-to-the-apache-cassandra-3-storage-engine.html

Re: CQL Map vs clustering keys

2017-11-15 Thread DuyHai Doan
Yes, your remark is correct. However, once CASSANDRA-7396 (right now in 4.0 trunk) gets released, you will be able to get a slice of map values using their (sorted) keys: SELECT map[fromKey ... toKey] FROM TABLE ... Needless to say, it will also be possible to get a single element from the map by
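
As a hedged illustration of the syntax described above (table and column names are made up, and the exact slice delimiter is whatever CASSANDRA-7396 finally ships):

    -- hypothetical table with a map column
    CREATE TABLE readings (
        id uuid PRIMARY KEY,
        samples map<int, text>
    );

    -- single element by key (per the end of the message above)
    SELECT samples[10] FROM readings WHERE id = ?;

    -- slice of map values between two (sorted) keys
    SELECT samples[10 ... 20] FROM readings WHERE id = ?;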

Re: Reaper 1.0

2017-11-15 Thread Jon Haddad
Apache 2 Licensed, just like Cassandra. https://github.com/thelastpickle/cassandra-reaper/blob/master/LICENSE.txt Feel free to modify, put in prod, fork or improve. Unfortunately I had to re-upload the Getting

RE: Reaper 1.0

2017-11-15 Thread Harika Vangapelli -T (hvangape - AKRAYA INC at Cisco)
Open source, free to use in production? Any license constraints? Please let me know. I experimented with it yesterday and really liked it. Harika Vangapelli Engineer - IT

Re: TWCS decommission and cleanups

2017-11-15 Thread Jeff Jirsa
It does the right thing - sstables sent to other nodes maintain their min/max timestamps, so they'll go to the right buckets. The bucket is selected using the timestamp of the newest cell in the sstable. If you run a major compaction, you would undo the same bucketing. Cleanup works by compacting
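
For context, the "buckets" here are the time windows configured on the table; a minimal TWCS table sketch (names and window settings are illustrative only):

    CREATE TABLE metrics (
        sensor_id uuid,
        ts timestamp,
        value double,
        PRIMARY KEY (sensor_id, ts)
    ) WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'DAYS',
        'compaction_window_size': '1'
    };

SSTables streamed in during a decommission keep their min/max timestamps, so they fall into the window implied by their newest cell rather than into the current window.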

CQL Map vs clustering keys

2017-11-15 Thread eugene miretsky
Hi, What would be the tradeoffs between using 1) a map: ( id UUID PRIMARY KEY, myMap map ); or 2) a clustering key: ( id UUID, key int, val text, PRIMARY KEY (id, key) ); My understanding is that maps are stored very similarly to clustering columns, where the map key
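
Spelled out as full table definitions, the two options look roughly like this (the map's key/value types are an assumption, since the generic parameters were stripped from the original post):

    -- Option 1: a single map column per partition
    CREATE TABLE by_map (
        id uuid PRIMARY KEY,
        myMap map<int, text>   -- assumed types, mirroring option 2
    );

    -- Option 2: one clustering row per entry
    CREATE TABLE by_clustering (
        id uuid,
        key int,
        val text,
        PRIMARY KEY (id, key)
    );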

TWCS decommission and cleanups

2017-11-15 Thread Benjamin Heiskell
Hello all, How does TWCS work when decommissioning a node? Does the data distribute across the other nodes to the current time window's sstable (like read-repairs)? Or will it compact into the sstables for the prior windows? If that's how it works, how does it decide what sstable to compact with?

Re: Node Failure Scenario

2017-11-15 Thread Anshu Vajpayee
Thank you Jonathan and all. On Tue, Nov 14, 2017 at 10:53 PM, Jonathan Haddad wrote: > Anthony's suggestion of using replace_address_first_boot lets you avoid that > requirement, and it's specifically why it was added in 2.2. > On Tue, Nov 14, 2017 at 1:02 AM Anshu Vajpayee
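
For reference, a hedged sketch of how that flag is usually passed (the exact file depends on the version and packaging):

    # cassandra-env.sh (or jvm.options, depending on the install)
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=<dead_node_ip>"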

Re: High IO Util using TimeWindowCompaction

2017-11-15 Thread Alexander Dejanovski
Hi Kurt, It seems highly unlikely that TWCS is responsible for your problems since you're throttling compaction way below what i3 instances can provide. For such instances, we would advise using 8 concurrent compactors with a high compaction throughput (>200MB/s, if not unthrottled). We've had
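
The knobs Alexander refers to live in cassandra.yaml; a sketch with the suggested values (illustrative only, tune for your hardware):

    concurrent_compactors: 8
    compaction_throughput_mb_per_sec: 0   # 0 = unthrottled; otherwise e.g. 200 or more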

Repair failing after it was interrupted once

2017-11-15 Thread Dipan Shah
Hello, I was running a "nodetool repair -pr" command on one node and, due to some network issues, I lost connection to the server. Now when I am running the same command on that and other servers too, the repair job is failing with the following log: [2017-11-15 03:55:19,965] Some repair