Ben, Benjamin, thanks for the reply.
What you're doing here is changing from LeveledCompaction to
SizeTieredCompaction. This task is in progress, and we are going to measure
the results for just a few column families.
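For reference, a per-table strategy switch of this kind can be done online; a minimal sketch, assuming placeholder keyspace/table names (my_ks.my_cf) and the STCS default thresholds:

```shell
# Switch one column family from LeveledCompactionStrategy to
# SizeTieredCompactionStrategy without a restart. my_ks.my_cf is a
# placeholder; the thresholds shown are the STCS defaults.
cqlsh -e "
  ALTER TABLE my_ks.my_cf
  WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'min_threshold': 4,
    'max_threshold': 32
  };"
# Expect a burst of compaction I/O afterwards while the existing
# leveled sstables are reorganized into size tiers.
```

The schema change propagates to the rest of the cluster on its own, which is why measuring it on a few column families first is a reasonable approach.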
Ben, thanks for the procedure; I'll try it again later. When the problem
happened …
Hm, this MAY somehow relate to an issue I encountered recently:
https://issues.apache.org/jira/browse/CASSANDRA-12730
I also made a proposal to mitigate excessive (unnecessary) flushes during
repair streams, but unfortunately nobody has commented on it yet.
Maybe there are some opinions on it around here.
What I've seen happen a number of times is that you get into a vicious
feedback loop:
not enough capacity to keep up with compactions (often triggered by repair,
or by compaction hitting a large partition) -> more sstables -> more
expensive reads -> even less capacity to keep up with compactions -> repeat
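When you suspect a node is in this loop, the stock nodetool commands make it visible; a sketch, with my_ks.my_cf standing in for one of your tables:

```shell
# Is compaction falling behind? Pending tasks should trend toward zero.
nodetool compactionstats

# A CompactionExecutor line with a growing Pending column is the same signal.
nodetool tpstats | grep -i compaction

# A steadily climbing sstable count per table means reads touch more
# files, which is the "more expensive reads" half of the loop.
nodetool cfstats my_ks.my_cf | grep "SSTable count"
```

Sampling these every few minutes shows whether the backlog is shrinking, stable, or compounding.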
Hey guys,
Do we have any conclusions about this case? Ezra, did you solve your
problem?
We are facing a very similar problem here: LeveledCompaction with vnodes.
It looks like a node went into a weird state and started to consume a lot
of CPU; the compaction process seems to be stuck, and the number of pending
compactions keeps growing.
I just want to chime in and say that we also had issues keeping up with
compaction once (with vnodes and SSD disks). I also want to recommend
keeping track of your open file limit, which might bite you.
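For what it's worth, the limit that matters is the one the running JVM actually inherited, not what limits.conf says; a quick Linux check, assuming the process command line matches CassandraDaemon:

```shell
# Find the Cassandra JVM and read the limit it is actually running with.
CASSANDRA_PID=$(pgrep -f CassandraDaemon | head -n 1)
grep "Max open files" "/proc/${CASSANDRA_PID}/limits"

# Compare against the number of file descriptors currently open;
# each sstable holds several, so a compaction backlog pushes this up.
ls "/proc/${CASSANDRA_PID}/fd" | wc -l
```

If the open count creeps toward the limit while pending compactions grow, raise the limit before the node starts failing reads.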
Cheers,
Jens
On Friday, August 19, 2016, Mark Rose wrote:
Hi Ezra,
Are you making frequent changes to your rows (including TTL'ed
values), or mostly inserting new ones? If you're only inserting new
data, it's probable that using size-tiered compaction would work better
for you. If you are TTL'ing whole rows, consider date-tiered.
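For the fully-TTL'ed case, the switch suggested above would look something like this; a sketch only, with a hypothetical table name (my_ks.events) and a hypothetical 24-hour TTL:

```shell
# DateTieredCompactionStrategy groups sstables by write time, so a
# whole sstable of expired rows can be dropped without compacting it.
# my_ks.events and the 86400-second TTL are illustrative values only.
cqlsh -e "
  ALTER TABLE my_ks.events
  WITH compaction = {'class': 'DateTieredCompactionStrategy'}
  AND default_time_to_live = 86400;"
```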
If leveled compaction is …
> How many cores?
>
> How many concurrent compactors?
> *From: *Ezra Stuetzel <ezra.stuet...@riskiq.net>
> *Date: *Wednesday, August 17, 2016
I have one node in my cluster (2.2.7, just upgraded from 2.2.6 hoping to fix
the issue) which seems to be stuck in a weird state, with a large number of
pending compactions and sstables. The node is compacting about 500 GB/day,
and the number of pending compactions is going up at about 50/day. It is at
about …