>
> What we did have was some overlap between our daily repair cronjob and
> the newly added node still joining. We don't know whether this
> combination might cause trouble.
I wouldn't be surprised if this caused problems. Probably want to avoid
that.
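One way to enforce that in the nightly cron job is a guard that skips the run whenever the ring is not fully settled. A minimal sketch, assuming the keyspace/table names from the report below (`ks`, `cf1`, `cf2`) and the usual two-letter state codes in `nodetool status` output:

```shell
#!/bin/sh
# Sketch of a guard for the nightly repair cron job: skip the run if any
# node is not Up/Normal (e.g. "UJ" = up/joining), so repair never
# overlaps with a bootstrapping node. Any status code other than UN
# (UJ, UL, UM, DN, DJ, DL, DM) means the ring is not fully settled.
if nodetool status | grep -qE '^(UJ|UL|UM|DN|DJ|DL|DM) '; then
    echo "cluster not fully Up/Normal; skipping repair" >&2
    exit 0
fi
nodetool repair -pr ks cf1 cf2
```

This only checks the view from the local node, but it is cheap enough to run at the top of every scheduled repair.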
Thanks,
Thomas
From: kurt greaves [mailto:k...@instaclustr.com]
Sent: Monday, 05 March 2018 01:10
To: User
Subject: Re: Cassandra 2.1.18 - Concurrent nodetool repair resulting in > 30K
SSTables for a single small (GBytes) CF
Repairs with vnodes are likely to cause a lot of small SSTables if you have
inconsistencies (at least one per vnode). Did you have any issues when adding
nodes, or did you add multiple nodes at a time? Anything that could have
led to a bit of inconsistency could have been the cause.
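One rough way to see whether the buildup matches this at-least-one-small-SSTable-per-vnode pattern is to watch the per-table SSTable count on each node before and after a repair run. A sketch, assuming the table names from the original report (`cf1`, `cf2` in keyspace `ks`):

```shell
# Print the SSTable count for each affected table on this node.
# Run before and after a scheduled repair; a jump of hundreds of
# SSTables per run points at per-vnode streaming of tiny SSTables.
for t in cf1 cf2; do
    nodetool cfstats "ks.$t" | grep 'SSTable count'
done
```

On 2.1 the subcommand is `cfstats` (renamed `tablestats` in later releases).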
Hello,
Production: 9-node cluster running Cassandra 2.1.18, vnodes (default 256 tokens),
RF=3, compaction throughput throttled to 16 MB/s, concurrent compactors = 4,
on AWS m4.xlarge instances at ~35% average CPU.
We have a nightly cronjob starting a "nodetool repair -pr ks cf1 cf2"
concurrently on all nodes, whe