Re: Warning for large batch sizes with a small number of statements

2017-11-13 Thread Erick Ramirez
You can increase it if you're sure that it fits your use case. For an explanation of why the threshold is based on batch size rather than the number of statements, see the discussion in CASSANDRA-6487. Cheers!

On Mon, Nov 13, 2017 at 6:31 PM, Tim Moore wrote:
> Hi,
>
> I'm trying to understand some of the
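For reference, the thresholds in question live in cassandra.yaml. A sketch of the relevant settings, with what I understand to be the 3.x default values:

```yaml
# cassandra.yaml -- batch size guardrails (3.x defaults)
# Log a WARN when a single batch exceeds this size:
batch_size_warn_threshold_in_kb: 5
# Reject the batch outright above this size:
batch_size_fail_threshold_in_kb: 50
```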

Re: Cassandra Query

2017-11-13 Thread Erick Ramirez
That question comes up so often that it prompted me to write a blog post about it last month -- https://academy.datastax.com/support-blog/counting-keys-might-well-be-counting-stars

On Mon, Nov 13, 2017 at 6:55 PM, Hareesh Veduraj wrote:
> Hi Team,
>
> I have a new
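A common way to get a rough key count without a full-table scan is the per-node partition estimate from nodetool. A sketch, assuming a running 3.x node and a placeholder keyspace/table name:

```shell
# Estimated partition count on this node; no table scan involved.
# "ks.tbl" is a placeholder keyspace/table.
nodetool tablestats ks.tbl | grep -i "Number of partitions (estimate)"

# By contrast, the naive approach scans the entire table and often times out:
# cqlsh -e "SELECT COUNT(*) FROM ks.tbl;"
```

Note the nodetool figure is a per-node estimate, not an exact cluster-wide count.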

Securing Cassandra database

2017-11-13 Thread Mokkapati, Bhargav (Nokia - IN/Chennai)
Hi Team, We are using Apache Cassandra version 3.0.13. As part of securing the Cassandra database we have created a database superuser with authentication, but from the driver side the default cql connection syntax is "cqlsh " rather than "cqlsh -u username -p password", so the cqlsh connection is failing
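One common way to avoid putting credentials on the command line is a cqlshrc file, which cqlsh reads automatically. A sketch, assuming the standard `~/.cassandra/cqlshrc` location and placeholder credentials:

```ini
; ~/.cassandra/cqlshrc -- picked up automatically by cqlsh
[authentication]
username = myuser
password = mypassword
```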

best practice for repair

2017-11-13 Thread Peng Xiao
Hi there, we need to repair a huge CF, just want to clarify:
1. nodetool repair -pr keyspace cf
2. nodetool repair -st <start_token> -et <end_token> -dc <dc>
Which will be better? Or any other advice? Thanks, Peng Xiao

Re: best practice for repair

2017-11-13 Thread Peng Xiao
Sub-range repair is much like primary range repair, except that each sub-range repair operation focuses on an even smaller subset of the data. Repair is a tough process. Any advice? Thanks

-- Original --
From: <2535...@qq.com>
Date: Mon, Nov 13, 2017
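The idea behind sub-range repair can be sketched in a few lines: split the full Murmur3 token ring into N equal slices and repair each slice separately. A minimal sketch of the split logic only (invoking nodetool per slice is left as a comment):

```python
# Split the Murmur3 token ring into equal sub-ranges for piecewise repair.
MIN_TOKEN = -(2 ** 63)        # Murmur3Partitioner minimum token
MAX_TOKEN = 2 ** 63 - 1       # Murmur3Partitioner maximum token

def subranges(n):
    """Yield (start, end) token pairs covering the full ring in n slices."""
    span = (MAX_TOKEN - MIN_TOKEN) // n
    start = MIN_TOKEN
    for i in range(n):
        # Last slice absorbs any rounding remainder so the ring is fully covered.
        end = MAX_TOKEN if i == n - 1 else start + span
        yield (start, end)
        start = end

# Each pair then maps to:
#   nodetool repair -st <start> -et <end> <keyspace> <cf>
ranges = list(subranges(4))
```

Tools like Reaper automate exactly this slicing and scheduling, which is why hand-rolling it is usually only worth it for one-off jobs.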

Re: STCS leaving sstables behind

2017-11-13 Thread Alexander Dejanovski
And actually, full repair with 3.0/3.x would have the same effect (anticompaction) unless you're using subrange repair.

On Mon, Nov 13, 2017 at 3:28 PM Jeff Jirsa wrote:
> Running incremental repair puts sstables into a “repaired” set (and an
> unrepaired set), which results in
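The distinction in flags, as a sketch for 3.x (keyspace, table, and token bounds below are placeholders):

```shell
# Full repair -- still triggers anticompaction on 3.0/3.x:
nodetool repair -full my_keyspace my_table

# Subrange repair -- skips anticompaction; token bounds are placeholders:
nodetool repair -full -st -9223372036854775808 -et 0 my_keyspace my_table
```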

Re: STCS leaving sstables behind

2017-11-13 Thread Nicolas Guyomar
Hi, I'm using the default "nodetool repair" in 3.0.13, which I believe is full by default. I'm not using subrange repair. Jeff, you're right, "Nov 11 01:23 mc-6474-big-Data.db" is not yet marked as repaired; my repair routine is broken (sorry Alexander, I'm not using Reaper yet ;) ) I'm gonna fix my
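Whether a given sstable is marked repaired can be checked offline with sstablemetadata. A sketch, using the file quoted above (the data directory path is a placeholder, adjust to the actual layout):

```shell
# "Repaired at: 0" means the sstable is still in the unrepaired set.
sstablemetadata /var/lib/cassandra/data/ks/table-*/mc-6474-big-Data.db \
    | grep "Repaired at"
```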

Re: Securing Cassandra database

2017-11-13 Thread DuyHai Doan
You can pass in login/password from the client side and encrypt the client / cassandra connection...

On 13 Nov 2017 12:16, "Mokkapati, Bhargav (Nokia - IN/Chennai)" < bhargav.mokkap...@nokia.com> wrote:
Hi Team, We are using Apache Cassandra 3.0.13 version. As part of Cassandra
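Client-to-node encryption is enabled server-side in cassandra.yaml. A sketch of the relevant section (the keystore path and password are placeholders):

```yaml
# cassandra.yaml -- encrypt client connections; path/password are placeholders
client_encryption_options:
  enabled: true
  optional: false
  keystore: /etc/cassandra/conf/keystore.jks
  keystore_password: changeit
```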

Re: STCS leaving sstables behind

2017-11-13 Thread Nicolas Guyomar
Quick follow up: triggering a repair did the trick, sstables are starting to get compacted. Thank you

On 13 November 2017 at 15:53, Nicolas Guyomar wrote:
> Hi,
>
> I'm using default "nodetool repair" in 3.0.13 which I believe is full by
> default. I'm not using

Re: best practice for repair

2017-11-13 Thread Jon Haddad
We (The Last Pickle) maintain Reaper, an open source repair tool, specifically to address all the complexity around repairs. http://cassandra-reaper.io/

Jon

> On Nov 13, 2017, at 3:18 AM, Peng Xiao <2535...@qq.com> wrote:
>
> sub-range repair is much like primary

STCS leaving sstables behind

2017-11-13 Thread Nicolas Guyomar
Hi everyone, I'm facing quite a strange behavior of STCS on 3.0.13: the strategy seems to have "forgotten" about old sstables and started a completely new cycle from scratch, leaving the old sstables on disk untouched. Something happened on Nov 10 on every node, which resulted in all those
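For context on why sstables can sit untouched: STCS groups sstables into buckets of similar size and only compacts within a bucket, so sstables that fall outside every active bucket are simply left alone. A toy sketch of that bucketing (simplified: the ratio band shown matches the STCS defaults, but real STCS also applies min/max thresholds and coldness checks that this ignores):

```python
# Toy sketch of SizeTieredCompactionStrategy bucketing: an sstable joins a
# bucket if its size is within a ratio band of the bucket's average size.
BUCKET_LOW, BUCKET_HIGH = 0.5, 1.5   # STCS default ratio band

def stcs_buckets(sizes):
    buckets = []   # each bucket is a list of sstable sizes
    for size in sorted(sizes):
        for b in buckets:
            avg = sum(b) / len(b)
            if BUCKET_LOW * avg <= size <= BUCKET_HIGH * avg:
                b.append(size)
                break
        else:
            # No bucket is close enough in size: start a new one.
            buckets.append([size])
    return buckets
```

With this grouping, a handful of large old sstables and a stream of small new ones end up in disjoint buckets, which looks exactly like a "new cycle from scratch".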

Re: STCS leaving sstables behind

2017-11-13 Thread Jeff Jirsa
Running incremental repair puts sstables into a “repaired” set (and an unrepaired set), which results in something similar to what you’re describing. Were you running / did you run incremental repair?

-- Jeff Jirsa

> On Nov 13, 2017, at 5:04 AM, Nicolas Guyomar

High IO Util using TimeWindowCompaction

2017-11-13 Thread Kurtis Norwood
I've been testing out Cassandra 3.11 (currently using 3.7) and have occasionally been observing really high I/O utilization, sometimes flatlining at 100% for an extended period. I think my use case is pretty simple, and currently I'm only testing part of it on this new version
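For reference, a typical TimeWindowCompactionStrategy table looks like the sketch below. The table name, window unit/size, and TTL are placeholder assumptions, not details from this thread:

```sql
-- Placeholder time-series table using TWCS
CREATE TABLE metrics.events (
    sensor_id text,
    ts timestamp,
    value double,
    PRIMARY KEY (sensor_id, ts)
) WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'HOURS',
    'compaction_window_size': 6
  }
  AND default_time_to_live = 604800;  -- 7 days
```

A window size that is too small for the write rate produces many tiny sstables and extra compaction I/O, which is one of the first things to rule out when TWCS shows high utilization.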

Re: running C* on AWS EFS storage ...

2017-11-13 Thread Subroto Barua
From our experience, the EBS remount process was quite painful.

Subroto

> On Nov 12, 2017, at 4:18 PM, kurt greaves wrote:
>
> What's wrong with just detaching the EBS volume and then attaching it to the
> new node? Assuming you have a separate mount for your C* data
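The detach/reattach flow Kurt describes maps to a couple of AWS CLI calls. A sketch, with placeholder volume/instance IDs and device name:

```shell
# Placeholder IDs; stop Cassandra and unmount the data volume first.
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/xvdf
# Then mount the device on the new node and start Cassandra.
```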

Re: Node Failure Scenario

2017-11-13 Thread Anthony Grasso
Hi Anshu, To add to Erick's comment, remember to remove the *replace_address* option from the *cassandra-env.sh* file once the node has rejoined successfully. The node will fail the next restart otherwise. Alternatively, use the *replace_address_first_boot* option, which works exactly the same
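In cassandra-env.sh the option is set as a JVM flag. A sketch (the IP address is a placeholder for the dead node's address):

```shell
# cassandra-env.sh -- replace a dead node; the IP is a placeholder.
# replace_address_first_boot is ignored after the first successful boot,
# so unlike replace_address it does not need to be removed afterwards:
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.12"
```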