High Bloom filter false ratio

2016-02-17 Thread Anishek Agarwal
Hello, We have a table with a composite partition key of humongous cardinality; it's a combination of (long, long). On the table we have bloom_filter_fp_chance=0.01. On doing "nodetool cfstats" on the 5 nodes we have in the cluster we are seeing "Bloom filter false ratio:" in the range of 0.7
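
A minimal sketch of how the bloom filter could be inspected and retuned, assuming a hypothetical keyspace/table ks.events; note that the false ratio only reflects reads since the last restart, and a new fp_chance only applies to SSTables written (or rewritten) after the change:

    nodetool cfstats ks.events                  # check "Bloom filter false ratio" and filter size
    # in cqlsh:
    ALTER TABLE ks.events WITH bloom_filter_fp_chance = 0.001;
    # back in the shell, rewrite existing SSTables so new filters are built:
    nodetool upgradesstables -a ks events

Lowering bloom_filter_fp_chance trades extra off-heap memory for fewer false positives, so it is worth rechecking cfstats after the rewrite.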

Re: Cassandra nodes reduce disks per node

2016-02-17 Thread Anishek Agarwal
Hey Branton, Please do let us know if you face any problems doing this. Thanks anishek On Thu, Feb 18, 2016 at 3:33 AM, Branton Davis wrote: > We're about to do the same thing. It shouldn't be necessary to shut down > the entire cluster, right? > > On Wed, Feb

Re: Debugging write timeouts on Cassandra 2.2.5

2016-02-17 Thread Mike Heffner
Jaydeep, No, we don't use any light weight transactions. Mike On Wed, Feb 17, 2016 at 6:44 PM, Jaydeep Chovatia < chovatia.jayd...@gmail.com> wrote: > Are you guys using light weight transactions in your write path? > > On Thu, Feb 11, 2016 at 12:36 AM, Fabrice Facorat < >

Re: Forming a cluster of embedded Cassandra instances

2016-02-17 Thread Binil Thomas
Thanks for sharing your experience! I also found a similar solution in TitanDB[1], but that also seems to be intended for development use. The consensus here seems to be that one should not embed Cassandra into another JVM. > For production, we have to support single node clusters

Re: Debugging write timeouts on Cassandra 2.2.5

2016-02-17 Thread Jaydeep Chovatia
Are you guys using lightweight transactions in your write path? On Thu, Feb 11, 2016 at 12:36 AM, Fabrice Facorat wrote: > Are your commitlog and data on the same disk? If yes, you should put > commitlogs on a separate disk which doesn't have a lot of IO. > > Others
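
For reference, a hedged sketch of the cassandra.yaml settings Fabrice is referring to (the mount points are just placeholders):

    # cassandra.yaml
    commitlog_directory: /mnt/commitlog      # dedicated, low-contention disk
    data_file_directories:
        - /mnt/data1

The commit log is small but latency-sensitive, so keeping it off the disk that absorbs data-file and compaction I/O is the premise of the advice above.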

Re: Cassandra nodes reduce disks per node

2016-02-17 Thread Ben Bromhead
You can do this in a "rolling" fashion (one node at a time). On Wed, 17 Feb 2016 at 14:03 Branton Davis wrote: > We're about to do the same thing. It shouldn't be necessary to shut down > the entire cluster, right? > > On Wed, Feb 17, 2016 at 12:45 PM, Robert Coli

Re: Do I have to use repair -inc with the option -par forcely?

2016-02-17 Thread Jean Carlo
Hi, Thanks @alain for your reply. Yes, we have 2.1.12. We are definitely facing CASSANDRA-10422. However, I cannot run incremental repairs without adding -par. @carlos if what you say is correct, it would be really nice because the process
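
For reference, a rough sketch of the two invocations under discussion on 2.1 (the keyspace name is a placeholder):

    nodetool repair -inc -par my_keyspace   # incremental + parallel repair
    nodetool repair -inc my_keyspace        # incremental, sequential -- the form Jean reports he cannot run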

Re: Re : decommissioned nodes shows up in "nodetool describecluster" as UNREACHABLE in 2.1.12 version

2016-02-17 Thread Alain RODRIGUEZ
Hi, nodetool gossipinfo shows the decommissioned nodes as "LEFT". I believe this is the expected behavior; we keep a trace of leaving nodes for a few days, so this shouldn't be an issue for you. nodetool describecluster shows the decommissioned nodes as UNREACHABLE. > This is a weird
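
A quick, hedged way to compare both views on an affected node:

    nodetool gossipinfo | grep STATUS    # decommissioned nodes should appear with STATUS:LEFT
    nodetool describecluster             # shows schema versions plus any UNREACHABLE endpoints

LEFT entries normally age out of gossip on their own after a few days, as described above.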

Re: Cassandra nodes reduce disks per node

2016-02-17 Thread Robert Coli
On Tue, Feb 16, 2016 at 11:29 PM, Anishek Agarwal wrote: > > To accomplish this can I just copy the data from disk1 to disk2 within > the relevant cassandra home location folders, change the cassandra.yaml > configuration and restart the node. Before starting I will shutdown
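
A rough per-node sketch of the procedure under discussion, assuming data is being consolidated from /mnt/disk2 onto /mnt/disk1 (mount points and service name are placeholders), run one node at a time:

    nodetool drain                               # flush memtables and stop accepting writes
    sudo service cassandra stop
    rsync -a /mnt/disk2/data/ /mnt/disk1/data/   # copy SSTables onto the remaining disk
    # edit cassandra.yaml: remove /mnt/disk2/data from data_file_directories
    sudo service cassandra start
    nodetool status                              # confirm the node is back UN before moving to the next one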

Re: Re : decommissioned nodes shows up in "nodetool describecluster" as UNREACHABLE in 2.1.12 version

2016-02-17 Thread sai krishnam raju potturi
Thanks, Rajesh. What we have observed is that the decommissioned nodes show up as "UNREACHABLE" in the "nodetool describecluster" command. Their status shows up as "LEFT" in "nodetool gossipinfo". This is observed in the 2.1.12 version. Decommissioned nodes did not show up in the "nodetool describecluster" and

Re: Cassandra nodes reduce disks per node

2016-02-17 Thread Anishek Agarwal
Additional note: we are using Cassandra 2.0.15 and have 5 nodes in the cluster, going to expand to 8 nodes. On Wed, Feb 17, 2016 at 12:59 PM, Anishek Agarwal wrote: > Hello, > > We started with two 800GB SSD on each cassandra node based on our initial > estimations of read/write

Cassandra nodes reduce disks per node

2016-02-17 Thread Anishek Agarwal
Hello, We started with two 800GB SSDs on each Cassandra node based on our initial estimations of read/write rate. As we started onboarding additional traffic, we find that CPU is becoming a bottleneck and we are not able to run the NICE jobs like compaction very well. We have started expanding the
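
If compaction is falling behind, these are the knobs that usually get looked at first; a hedged sketch with illustrative values only, not recommendations:

    nodetool setcompactionthroughput 0       # MB/s; 0 removes the throttle (runtime only)

    # cassandra.yaml
    compaction_throughput_mb_per_sec: 16
    concurrent_compactors: 2

Raising these only helps if spare CPU and I/O actually exist, which is why adding nodes is the other lever mentioned here.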

Re: Sudden disk usage

2016-02-17 Thread Ben Bromhead
+1 to checking for snapshots. Cassandra by default will automatically snapshot tables before destructive actions like drop or truncate. Some general advice regarding cleanup. Cleanup will result in a temporary increase in both disk I/O load and disk space usage (especially with STCS). It should
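
A hedged sketch of the commands typically used to check this (the keyspace name is a placeholder; listsnapshots needs a reasonably recent, 2.1+, nodetool):

    nodetool listsnapshots               # shows snapshots and the space they pin
    nodetool clearsnapshot               # removes all snapshots; add a keyspace name to narrow it
    nodetool cleanup my_keyspace         # drops ranges this node no longer owns; needs temporary disk headroom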