Re: New token allocation and adding a new DC

2018-01-24 Thread Dikang Gu
I fixed the new token allocation algorithm for the non-bootstrap case (https://issues.apache.org/jira/browse/CASSANDRA-13080?filter=-2); the fix is in 3.12+, but not in 3.0. On Wed, Jan 24, 2018 at 9:32 AM, Oleksandr Shulgin <oleksandr.shul...@zalando.de> wrote: > On Thu, Jan 18, 2018 at 5:19 AM, kurt
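For reference, the new allocation algorithm is driven by the allocate_tokens_for_keyspace setting in cassandra.yaml. A minimal sketch of the relevant settings on a joining node (the keyspace name and token count are placeholders):

    # cassandra.yaml on the joining node (sketch; "my_keyspace" is a placeholder)
    num_tokens: 16
    allocate_tokens_for_keyspace: my_keyspace
    # Before the CASSANDRA-13080 fix mentioned above, this only took effect
    # when the node actually bootstrapped (auto_bootstrap left at true).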

RE: Cassandra Repair Duration.

2018-01-24 Thread King, Marshall
You should run “primary range” repairs on all nodes; that will go a lot faster than a full repair. In C* 2.x you can do this in parallel, and you can determine how many you can run at the same time, basically how much pressure you can put on your system. Marshall From: Karthick V
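A minimal sketch of such a primary-range repair, run on each node in turn (the keyspace name is a placeholder):

    # Repair only the ranges this node is primary for; repeat on every node
    # so the whole ring is covered exactly once.
    $ nodetool repair -pr my_keyspace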

Re: New token allocation and adding a new DC

2018-01-24 Thread Oleksandr Shulgin
On Thu, Jan 18, 2018 at 5:19 AM, kurt greaves wrote: > Didn't know that about auto_bootstrap and the algorithm. We should probably fix that. Can you create a JIRA for that issue? Will do. > Workaround for #2 would be to truncate system.available_ranges after
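The workaround mentioned there presumably amounts to something like the following on the affected node (a hedged sketch; system.available_ranges tracks token ranges already streamed during bootstrap):

    # Clear the node-local record of already-streamed ranges
    $ cqlsh -e "TRUNCATE system.available_ranges;"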

Re: Cassandra Repair Duration.

2018-01-24 Thread brian . spindler
Hi Karthick, repairs can be tricky. You can (and probably should) run repairs as part of routine maintenance, and of course absolutely if you lose a node in a bad way. If you decommission a node, for example, no “extra” repair is needed. If you are using TWCS you should probably not run
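Purely as an illustration, routine repairs are often scheduled along these lines (a hedged cron sketch; the schedule and keyspace are placeholders, and the full cycle has to finish within gc_grace_seconds):

    # crontab entry: weekly primary-range repair of one keyspace, staggered per node
    0 2 * * 0  nodetool repair -pr my_keyspace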

RequestResponseStage Threadpool

2018-01-24 Thread Robert Emery
Hiya, We have our Cassandra 2.1 cluster being monitored by checking JMX on the org.apache.cassandra.metrics type=ThreadPools bean. We've got a threshold of 15 for warnings, as per https://blog.pythian.com/guide-to-cassandra-thread-pools/, however we don't know if this is a sensible warning
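For a quick cross-check of those counters outside JMX, the same thread-pool metrics are also reported by nodetool (sketch):

    # Reports Active/Pending/Completed/Blocked per thread pool,
    # including RequestResponseStage, from the same underlying metrics.
    $ nodetool tpstats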

Re: Cassandra Repair Duration.

2018-01-24 Thread Karthick V
Periodically I have been running a full repair before the GC grace period, as mentioned in the best practices. Initially, all went well, but as the data size increased, the repair duration has increased drastically, and we are also facing query timeouts during that time, and we have tried incremental
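One common way to keep individual repair invocations bounded as data grows is to repair token subranges instead of whole ranges. A hedged sketch (the tokens, keyspace, and table names are placeholders):

    # Repair only the given token subrange of one table; loop over subranges
    # to cover the ring in small, predictable chunks.
    $ nodetool repair -st <start_token> -et <end_token> my_keyspace my_table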

Upgrading sstables not using all available compaction slots on version 2.2

2018-01-24 Thread Oleksandr Shulgin
Hello, In the process of upgrading our cluster from 2.1 to 2.2 we have triggered the SSTable rewriting process like this: $ nodetool upgradesstables -j 4 # concurrent_compactors=5 Then if we immediately check the compactionstats, we see that 4 compactions of type 'Upgrade sstables' are
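For context, the kind of commands typically used to watch the rewrite progress (a sketch; output omitted):

    # Active compactions, including tasks of type 'Upgrade sstables'
    $ nodetool compactionstats
    # Current throughput cap shared by all compaction tasks
    $ nodetool getcompactionthroughput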

Cassandra Repair Duration.

2018-01-24 Thread Karthick V
Hi,