Thanks for the suggestions, but I'd already removed the compression when
your message came through. That alleviated the problem but didn't solve it.
I'm still looking at a few other possible causes; I'll post back if I work
out what's going on. For now I am running rolling repairs to avoid another
One thing you may want to look at is the meanRowSize from nodetool
cfstats versus your compression block size. In our case the mean
compacted row size is 560 bytes, and the 64KB block size caused CPU spikes and
a lot of short-lived memory. I have brought my block size down to 16KB.
The result tables are not
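The read-amplification arithmetic behind that advice can be sketched out quickly. This is just the back-of-the-envelope math implied above, using the 560-byte mean row size from the thread; the function name and structure are my own illustration, not anything from Cassandra itself:

```python
# Rough sketch of the decompression overhead per single-row read.
# Numbers from the thread: mean compacted row size ~560 bytes,
# compression block sizes of 64KB (the default) vs 16KB.

MEAN_ROW_BYTES = 560
KB = 1024

def decompress_overhead(chunk_kb, row_bytes=MEAN_ROW_BYTES):
    """Bytes decompressed to serve one row, and the amplification factor."""
    chunk_bytes = chunk_kb * KB
    return chunk_bytes, chunk_bytes / row_bytes

for chunk_kb in (64, 16):
    chunk_bytes, factor = decompress_overhead(chunk_kb)
    print(f"{chunk_kb}KB block: decompress {chunk_bytes} bytes "
          f"to serve a ~{MEAN_ROW_BYTES}-byte row (~{factor:.0f}x)")
```

With a 64KB block each 560-byte read decompresses roughly 117x the data it returns; dropping to 16KB cuts that to roughly 29x, which is consistent with the CPU and short-lived-allocation relief described above.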
However, when I run a repair my CMS usage graph no longer shows sudden drops
but rather gradual slopes, and each GC only manages to clear around 300MB.
This seems to occur on two other nodes in my cluster around the same time; I
assume this is because they hold the replicas (we use a replication factor of 3).
The only downside of compression is that it does cause more memory
pressure. I can imagine something like repair could compound this,
since building the Merkle tree would involve decompressing every
block on disk.
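The point about block compression can be illustrated with a minimal sketch. It uses zlib as a stand-in for Snappy (which is not in the Python standard library); the chunk layout and `read_row` helper are hypothetical, but the mechanism is the same: you cannot read a 560-byte row without decompressing the whole chunk it lives in, so a full scan such as a Merkle-tree build pays decompression cost for every chunk on disk:

```python
# Block compression forces whole-chunk decompression per read.
# zlib stands in for Snappy here; the helper is illustrative only.
import zlib

CHUNK = 64 * 1024
data = bytes(range(256)) * (CHUNK // 256)   # one 64KB uncompressed chunk
compressed = zlib.compress(data)

def read_row(compressed_chunk, offset, length):
    # The entire chunk is decompressed even for a tiny slice of it.
    chunk = zlib.decompress(compressed_chunk)
    return chunk[offset:offset + length]

row = read_row(compressed, 1000, 560)       # ~560-byte "row"
assert row == data[1000:1560]
```

A sequential scan over the whole dataset, as a repair performs, therefore decompresses every chunk exactly as if every row had been read individually, which fits the gradual-slope GC pattern described in the thread.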
I have been attempting to determine if the block size being larger or
Hi all
Running Cassandra 1.0.7, I recently changed a few read-heavy column
families from SizeTieredCompactionStrategy to LeveledCompactionStrategy and
added SnappyCompressor, all with defaults: 5MB sstables and, if memory
serves me correctly, a 64KB chunk size for compression.
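For reference, on the 1.0.x line a change like that would typically be made through cassandra-cli. A hedged sketch follows; the column family name is a placeholder and the exact option names should be checked against the docs for your version:

```
update column family my_cf
  with compaction_strategy = 'LeveledCompactionStrategy'
  and compression_options = {sstable_compression: SnappyCompressor,
                             chunk_length_kb: 64};
```

Omitting chunk_length_kb leaves the 64KB default in place; it is the knob discussed later in this thread.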
The results were