Compaction task not available in dcos-cassandra-service

2017-09-18 Thread Akshit Jain
Hi, there isn't a compaction task feature in mesosphere/dcos-cassandra-service like the repair and cleanup tasks. Is anybody working on it, or is there any plan to add it in later releases? Regards
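
Not an answer from the thread, but as a workaround sketch: plain Cassandra exposes major compaction through nodetool, so until the framework offers a dedicated compaction plan it could be driven per node by hand. This assumes nodetool can reach each node; host, keyspace and table names below are placeholders.

    # Hypothetical hosts; run a major compaction per node, one at a time.
    for host in node1 node2 node3; do
      nodetool -h "$host" compact my_keyspace my_table
    done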

Re: Multi-node repair fails after upgrading to 3.0.14

2017-09-18 Thread Jeff Jirsa
The command you're running will cause anticompaction at the range borders for all instances at the same time. Since only one repair session can anticompact any given sstable, it's almost guaranteed to fail. Run it on one instance at a time. -- Jeff Jirsa > On Sep 18, 2017, at 1:11 AM,
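
A minimal sketch of the "one instance at a time" approach from a shell, assuming nodetool can reach each node; the host list and keyspace name are placeholders:

    # Run a full repair sequentially, one node at a time, so that two repair
    # sessions never try to anticompact the same sstables.
    for host in cass-01 cass-02 cass-03; do
      nodetool -h "$host" repair -full my_keyspace
    done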

RE: GC/CPU increase after upgrading to 3.0.14 (from 2.1.18)

2017-09-18 Thread Steinmaurer, Thomas
Hello again, dug a bit further. Comparing 1-hour flight recording sessions for both 2.1 and 3.0 with the same incoming simulated load from our load test environment. We are much more write-bound than read-bound in this environment/scenario, and it looks like there is a noticeable/measurable difference
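
For reference, a 1-hour flight recording of the kind described above could be captured roughly like this, assuming a HotSpot JDK with Java Flight Recorder available (on JDK 8 the JVM needs -XX:+UnlockCommercialFeatures -XX:+FlightRecorder); the pid lookup and file path are placeholders:

    # Find the Cassandra JVM and start a 1-hour Flight Recorder session.
    pid=$(pgrep -f CassandraDaemon)
    jcmd "$pid" JFR.start name=cassandra duration=3600s \
        filename=/tmp/cassandra-3.0.jfr settings=profile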

RE: Multi-node repair fails after upgrading to 3.0.14

2017-09-18 Thread Steinmaurer, Thomas
Hi Jeff, understood. That’s quite a change then coming from 2.1 from an operational POV. Thanks again. Thomas From: Jeff Jirsa [mailto:jji...@gmail.com] Sent: Montag, 18. September 2017 15:56 To: user@cassandra.apache.org Subject: Re: Multi-node repair fails after upgrading to 3.0.14 The

Re: Multi-node repair fails after upgrading to 3.0.14

2017-09-18 Thread Jeff Jirsa
Sorry, I may be wrong about the cause - I didn't see -full. Mea culpa, it's early here and I'm not awake. -- Jeff Jirsa > On Sep 18, 2017, at 7:01 AM, Steinmaurer, Thomas > wrote: > > Hi Jeff, > > understood. That’s quite a change then coming from 2.1 from an

Re[6]: Modify keyspace replication strategy and rebalance the nodes

2017-09-18 Thread Dominik Petrovic
@jeff what do you think is the best approach here to fix this problem? Thank you all for helping me. >Thursday, September 14, 2017 3:28 PM -07:00 from kurt greaves >: > >Sorry, that only applies if you're using NTS. You're right that simple >strategy won't work very well

Re: Re[6]: Modify keyspace replication strategy and rebalance the nodes

2017-09-18 Thread Jeff Jirsa
The hard part here is that nobody's going to be able to tell you exactly what's involved in fixing this, because nobody sees your ring. And since you're using vnodes and have a nontrivial number of instances, sharing that ring (and doing anything actionable with it) is nontrivial. If you weren't
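
If you do want to capture what the ring looks like for your own analysis, a rough sketch (the keyspace name is a placeholder):

    # Dump token ownership and per-keyspace range replicas for inspection.
    nodetool status my_keyspace > status.txt
    nodetool ring > ring.txt
    nodetool describering my_keyspace > describering.txt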

Re: Modify keyspace replication strategy and rebalance the nodes

2017-09-18 Thread Jeff Jirsa
For what it's worth, the problem isn't the snitch, it's the replication strategy - he's using the right snitch, but SimpleStrategy ignores it. That's the same reason that adding a new DC doesn't work - the replication strategy is DC agnostic, and changing it safely IS the problem -- Jeff Jirsa
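
To make the distinction concrete, a hedged illustration of the two keyspace definitions (keyspace names, DC name and replication factors are placeholders, not the poster's actual schema):

    # SimpleStrategy places replicas by token order only and ignores the snitch:
    cqlsh -e "CREATE KEYSPACE demo_simple WITH replication =
              {'class': 'SimpleStrategy', 'replication_factor': 3};"
    # NetworkTopologyStrategy consults the snitch's DC/rack information:
    cqlsh -e "CREATE KEYSPACE demo_nts WITH replication =
              {'class': 'NetworkTopologyStrategy', 'us-east': 3};"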

ConsistencyLevel and Mutations : Behaviour if the update of the commitlog fails

2017-09-18 Thread Leleu Eric
Hi Cassandra users, I have a question about the ConsistencyLevel and the MUTATION operation. According to the write path documentation, the first action executed by a replica node is to write the mutation into the commitlog, and the mutation is ACKed only if this action is performed. I suppose that
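
A related node-level knob, for context: how a replica reacts when its commitlog write fails is governed by commit_failure_policy in cassandra.yaml. A quick way to check it (the config path is a placeholder):

    # commit_failure_policy controls how a node reacts when commitlog writes
    # fail; documented values are die, stop, stop_commit and ignore.
    grep -E '^commit_failure_policy' /etc/cassandra/cassandra.yaml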

RE: Re[6]: Modify keyspace replication strategy and rebalance the nodes

2017-09-18 Thread Myron A. Semack
How would setting the consistency to ALL help? Wouldn’t that just cause EVERY read/write to fail after the ALTER until the repair is complete? Sincerely, Myron A. Semack From: Jeff Jirsa [mailto:jji...@gmail.com] Sent: Monday, September 18, 2017 2:42 PM To: user@cassandra.apache.org Subject:

Re: Modify keyspace replication strategy and rebalance the nodes

2017-09-18 Thread Jeff Jirsa
No worries, that makes both of us, my first contribution to this thread was similarly going-too-fast and trying to remember things I don't use often (I thought originally SimpleStrategy would consult the EC2 snitch, but it doesn't). - Jeff On Mon, Sep 18, 2017 at 1:56 PM, Jon Haddad

Re: Re[6]: Modify keyspace replication strategy and rebalance the nodes

2017-09-18 Thread Jeff Jirsa
Using CL:ALL basically forces you to always include the first replica in the query. The first replica will be the same for both SimpleStrategy/SimpleSnitch and NetworkTopologyStrategy/EC2Snitch. It's basically the only way we can guarantee we're not going to lose a row because it's only written
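
In cqlsh the consistency level suggested here can be set per session; a minimal sketch, assuming cqlsh accepts the special command via -e (keyspace, table and key are placeholders):

    # Force every read/write in this cqlsh session to contact all replicas.
    cqlsh -e "CONSISTENCY ALL;
              SELECT * FROM my_keyspace.my_table WHERE id = 42;"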

Re: Re[6]: Modify keyspace replication strategy and rebalance the nodes

2017-09-18 Thread kurt greaves
I haven't completely thought through this, so don't just go ahead and do it. Definitely test first. Also, if anyone sees something terribly wrong, don't be afraid to say so. Seeing as you're only using SimpleStrategy and it doesn't care about racks, you could change to SimpleSnitch, or
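
The message is cut off here, so the following is only a rough sketch of the general shape such a migration often takes (point the keyspace at NetworkTopologyStrategy with an equivalent RF, then repair every node), not necessarily the exact steps proposed in the rest of the mail; all names are placeholders:

    # 1. Switch the keyspace to NetworkTopologyStrategy with a matching RF.
    cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication =
              {'class': 'NetworkTopologyStrategy', 'us-east': 3};"
    # 2. Repair every node so replicas whose ranges moved get their data.
    for host in cass-01 cass-02 cass-03; do
      nodetool -h "$host" repair -full my_keyspace
    done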

Re: Multi-node repair fails after upgrading to 3.0.14

2017-09-18 Thread kurt greaves
https://issues.apache.org/jira/browse/CASSANDRA-13153 implies full repairs still trigger anticompaction on non-repaired SSTables (if I'm reading that right), so you might need to make sure you don't run multiple repairs at the same time across your nodes (if you're using vnodes), otherwise you could still

Re: ConsistencyLevel and Mutations : Behaviour if the update of the commitlog fails

2017-09-18 Thread kurt greaves
> Does the coordinator "cancel" the mutation on the "committed" nodes (and > how)? No. Those mutations are applied on those nodes. > Is it a heuristic case where two nodes have the data whereas they > shouldn't and we hope that HintedHandoff will replay the mutation? Yes. But really you
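
To see whether hinted handoff is even in play on a node, a quick hedged check:

    # Is hinted handoff enabled on this node, and are hints being dispatched?
    nodetool statushandoff
    nodetool tpstats | grep -i hint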

Re: Maturity and Stability of Enabling CDC

2017-09-18 Thread Michael Fong
Thanks Jeff! On Mon, Sep 18, 2017 at 9:31 AM, Jeff Jirsa wrote: > Haven't tried out CDC, but the answer based on the design doc is yes - you > have to manually dedup CDC at the consumer level > > > > > -- > Jeff Jirsa > > > On Sep 17, 2017, at 6:21 PM, Michael Fong
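
For reference, enabling CDC (available from Cassandra 3.8) is a two-step switch; a minimal sketch with placeholder names and paths:

    # 1. cdc_enabled must be true in cassandra.yaml (restart required).
    grep -E '^cdc_enabled' /etc/cassandra/cassandra.yaml
    # 2. Turn CDC on per table; its commitlog segments land in the cdc_raw dir.
    cqlsh -e "ALTER TABLE my_keyspace.my_table WITH cdc = true;"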

Re: Wide rows splitting

2017-09-18 Thread Stefano Ortolani
You might find this interesting: https://medium.com/@foundev/synthetic-sharding-in-cassandra-to-deal-with-large-partitions-2124b2fd788b Cheers, Stefano On Mon, Sep 18, 2017 at 5:07 AM, Adam Smith wrote: > Dear community, > > I have a table with inlinks to URLs, i.e.
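
The linked article describes synthetic sharding; the rough idea, as a hedged sketch with hypothetical table and column names, is to add a bucket column to the partition key so one logical wide row is spread across several partitions:

    # One URL's inlinks are split across N buckets; readers query the buckets
    # (or iterate over them) instead of one huge partition.
    cqlsh -e "CREATE TABLE links.inlinks (
                url text,
                bucket int,
                inlink text,
                PRIMARY KEY ((url, bucket), inlink)
              );"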

RE: Multi-node repair fails after upgrading to 3.0.14

2017-09-18 Thread Steinmaurer, Thomas
Hi Alex, I now ran nodetool repair -full -pr keyspace cfs on all nodes in parallel and this may pop up now: 0.176.38.128 (progress: 1%) [2017-09-18 07:59:17,145] Some repair failed [2017-09-18 07:59:17,151] Repair command #3 finished in 0 seconds error: Repair job has failed with the error

Re: Multi-node repair fails after upgrading to 3.0.14

2017-09-18 Thread Alexander Dejanovski
You could dig a bit more into the logs to see what precisely failed. I suspect anticompaction is still responsible for conflicts with validation compaction (so you should see validation failures on some nodes). The only way to fully disable anticompaction will be to run subrange repairs. The two
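
A subrange repair of the kind mentioned here hands nodetool an explicit token range; a minimal hedged sketch (the tokens and keyspace name are placeholders, and in practice a script or external scheduler generates the ranges):

    # Repair only the given token range; subrange repairs skip anticompaction.
    nodetool repair -full -st -9223372036854775808 -et -4611686018427387904 \
        my_keyspace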