Re: Modify keyspace replication strategy and rebalance the nodes

2017-09-13 Thread Fabrice Facorat
Hi, the steps are:
- ALTER KEYSPACE to change your replication strategy
- "nodetool repair -pr " on ALL nodes, or a full repair ("nodetool repair ") on enough replicas to distribute and rebalance your data to the replicas
- nodetool cleanup on every node to remove the superfluous data
Please note that you'd
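The steps above can be sketched as a shell/CQL sequence. The keyspace name, strategy, and RF values are hypothetical, and the commands need a live cluster, so this is illustrative only:

```shell
# 1. Change the replication strategy (names and RFs are assumptions)
cqlsh -e "ALTER KEYSPACE foobar WITH replication =
  {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 2};"

# 2. Repair the primary ranges on EVERY node so the new replicas get the data
nodetool repair -pr foobar

# 3. Once all repairs have finished, drop the data each node no longer owns
nodetool cleanup foobar
```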

Re: Schema Changes

2016-11-17 Thread Fabrice Facorat
Schemas are propagated by gossip. You can check schema propagation cluster-wide with nodetool describecluster, or with "nodetool gossipinfo | grep SCHEMA | cut -f3 -d: | sort | uniq -c". You'd better send your DDL instructions to only one node (for example by using the whitelist load balancing policy with
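The gossipinfo pipeline can be tried on canned sample output (the schema UUIDs below are made up, since the real `nodetool gossipinfo` needs a live cluster):

```shell
# Fake gossipinfo SCHEMA lines standing in for `nodetool gossipinfo` output
sample='SCHEMA:31:4f2b7ca4-63cc-11e6-0001
SCHEMA:31:4f2b7ca4-63cc-11e6-0001
SCHEMA:31:4f2b7ca4-63cc-11e6-0002'

# One output line per distinct schema version; a healthy, fully
# propagated cluster shows a single line here.
echo "$sample" | grep SCHEMA | cut -f3 -d: | sort | uniq -c
```

Here two nodes agree on one schema version and a third disagrees, so the pipeline prints two lines.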

Re: Can nodes in c* cluster run different versions ?

2016-11-17 Thread Fabrice Facorat
As said already by Alain, you should make this as short as possible:
- streaming operations won't work (repair, bootstrap)
- Hinted Handoff won't work, as two different major versions of Cassandra can't share the same schema version
- so no DDL operations (CREATE/ALTER), as your change won't be

Re: Some questions to updating and tombstone

2016-11-15 Thread Fabrice Facorat
If you don't want tombstones, don't generate them ;) More seriously, tombstones are generated when:
- doing a DELETE
- a TTL expires
- setting a column to NULL
However, tombstones are an issue only if, for the same value, you have many tombstones (i.e. you keep overwriting the same values with data
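For illustration, the three cases in CQL (the keyspace, table, and column names are made up, and this assumes a reachable cluster, so it is a sketch rather than something runnable here):

```shell
cqlsh -e "
  DELETE FROM ks.events WHERE id = 1;                          -- explicit DELETE tombstone
  INSERT INTO ks.events (id, v) VALUES (2, 'x') USING TTL 60;  -- turns into a tombstone when the TTL expires
  UPDATE ks.events SET v = null WHERE id = 3;                  -- writing NULL also writes a tombstone
"
```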

Re: SSTable count at 10K during repair (won't decrease)

2016-05-20 Thread Fabrice Facorat
Are you using repairParallelism = sequential or parallel? As said by Alain:
- try to decrease streamthroughput to avoid flooding nodes with lots of (small) streamed sstables
- if you are using parallel repair, switch to sequential
- don't start too many repairs simultaneously
- Do you really
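A sketch of the two knobs mentioned; the throughput value and keyspace name are examples, and the commands need a live cluster:

```shell
# Throttle streaming so repairs don't flood nodes with many small
# sstables (value is in Mb/s; pick something below your current setting)
nodetool setstreamthroughput 50

# Repair the node's primary ranges; sequential repair is the default,
# so avoid the parallel (-par) flag when nodes are already struggling
nodetool repair -pr foobar
```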

Re: Increase compaction performance

2016-05-20 Thread Fabrice Facorat
ported.
> Hope you will find a way to mitigate things though, or already have. Good luck ;-).
> C*heers,
> Alain Rodriguez - al...@thelastpickle.com
> France
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com

Re: Pending compactions not going down on some nodes of the cluster

2016-03-21 Thread Fabrice Facorat
Are you running repairs? You may try to:
- increase concurrent_compactors to 8 (the max in 2.1.x)
- increase compaction_throughput to more than 16 MB/s (48 may be a good start)
What kind of data are you storing in these tables? Timeseries? 2016-03-21 23:37 GMT+01:00 Gianluca Borello
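A sketch of those two changes, using the values suggested above (`concurrent_compactors` lives in cassandra.yaml and needs a restart; the throughput cap can be changed at runtime):

```shell
# Raise the compaction throughput cap at runtime (MB/s; the default is 16)
nodetool setcompactionthroughput 48

# In cassandra.yaml (restart required):
#   concurrent_compactors: 8
```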

Re: Increase compaction performance

2016-03-04 Thread Fabrice Facorat
Any news on this? We also have issues during repairs when using many LCS tables: we end up with 8k sstables, many pending tasks, and dropped mutations. We are using Cassandra 2.0.10 on a 24-core server, with multithreaded compaction enabled. ~$ nodetool getstreamthroughput Current stream

Re: Debugging write timeouts on Cassandra 2.2.5

2016-02-11 Thread Fabrice Facorat
Are your commitlog and data on the same disk? If yes, you should put the commitlog on a separate disk that doesn't have a lot of IO. Other IO may have a great impact on your commitlog writes, and it may even block them. An example of the impact IO may have, even for async writes:
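In cassandra.yaml this means pointing the two directories at different devices; the paths here are hypothetical:

```yaml
# Keep the commitlog on its own quiet device, away from data and other IO
commitlog_directory: /mnt/commitlog-disk/cassandra/commitlog
data_file_directories:
    - /mnt/data-disk/cassandra/data
```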

Re: Issue restarting cassandra with a cluster running Cassandra 1.2.x and Cassandra 2.0.x

2015-03-04 Thread Fabrice Facorat
a good idea to move up to 2.0.12 while you're at it. There have been a number of bugfixes. On Tue, Mar 3, 2015 at 12:37 PM, Fabrice Facorat fabrice.faco...@gmail.com wrote: Hi, we have a 52-node Cassandra cluster running Apache Cassandra 1.2.13. As we are planning to migrate to Cassandra

Issue restarting cassandra with a cluster running Cassandra 1.2.x and Cassandra 2.0.x

2015-03-03 Thread Fabrice Facorat
Hi, we have a 52-node Cassandra cluster running Apache Cassandra 1.2.13. As we are planning to migrate to Cassandra 2.0.10, we decided to do some tests, and we noticed that once a node in the cluster has been upgraded to Cassandra 2.0.x, restarting a Cassandra 1.2.x node will fail. The tests were

Re: Gossip intermittently marks node as DOWN

2014-03-04 Thread Fabrice Facorat
From what I understand, this can happen when you have many nodes and many vnodes per node. How many vnodes did you configure on your nodes? 2014-03-04 11:37 GMT+01:00 Phil Luckhurst phil.luckhu...@powerassure.com: The VMs are hosted on the same ESXi server and they are just running Cassandra. We seem
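The vnode count is the per-node num_tokens setting in cassandra.yaml; 256 was the usual default at the time, shown here only as the relevant fragment:

```yaml
# cassandra.yaml -- tokens per node; higher values increase the
# gossip and repair bookkeeping the thread is discussing
num_tokens: 256
```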

Why repair -pr doesn't work when RF=0 for 1 DC

2014-02-27 Thread Fabrice Facorat
Hi, we have a cluster with 3 DCs, and for one DC (stats), RF=0 for a keyspace using NetworkTopologyStrategy.

cqlsh> SELECT * FROM system.schema_keyspaces WHERE keyspace_name='foobar';
 keyspace_name | durable_writes | strategy_class | strategy_options

Re: Why repair -pr doesn't work when RF=0 for 1 DC

2014-02-27 Thread Fabrice Facorat
...@gmail.com: Yes, it is expected behavior since 1.2.5 (https://issues.apache.org/jira/browse/CASSANDRA-5424). Since you set foobar not to replicate to the stats DC, the primary range of the foobar keyspace for nodes in stats is empty. On Thu, Feb 27, 2014 at 10:16 AM, Fabrice Facorat fabrice.faco...@gmail.com
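Given that explanation, the usual form is to leave the non-replicating DC out of the replication map entirely (the 'stats' and 'foobar' names follow the thread; the other DC names and RFs are assumptions, and this needs a live cluster):

```shell
# Omitting 'stats' is equivalent to the thread's RF=0: nodes there own
# no primary range for foobar, so "repair -pr" on them has nothing to do
cqlsh -e "ALTER KEYSPACE foobar WITH replication =
  {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};"
```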

Re: Reduce Cassandra GC

2013-06-19 Thread Fabrice Facorat
2013/6/19 Takenori Sato ts...@cloudian.com: GC options are not set. You should see the following: -XX:+PrintGCDateStamps -XX:+PrintPromotionFailure -Xloggc:/var/log/cassandra/gc-1371603607.log Is it normal to have two processes like this? No. You are running two processes. It's normal
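Those flags are normally assembled in cassandra-env.sh; a minimal fragment (the log path is an example, and the flags are the Java 6/7 era ones named in the thread) might look like:

```shell
# cassandra-env.sh -- enable GC logging
JVM_OPTS="${JVM_OPTS:-}"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintPromotionFailure"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
echo "$JVM_OPTS"
```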

Re: State of Cassandra and Java 7

2012-12-23 Thread Fabrice Facorat
At Orange Portails we are presently testing Cassandra 1.2.0 beta/rc with Java 7, and so far we have no issues. 2012/12/22 Brian Tarbox tar...@cabotresearch.com: What I saw in all cases was a) set JAVA_HOME to java7, run program, fail; b) set JAVA_HOME to java6, run program, success. I should