Hi,

We have a 6-node Cassandra cluster that got into an unstable state because
a few servers ran very low on Java heap space for a while. This caused them
to flush an SSTable to disk for almost every write, so some column families
ended up with 1000+ SSTables, most containing between 1 and 10 rows each.
The memory problem is solved now, and the cluster serves reads & writes
fine, but it doesn't seem possible to compact this huge number of SSTables.
If we try to run a major compaction, Cassandra dies with an
OutOfMemoryError, presumably because each open SSTable carries some memory
overhead. Increasing the heap by 1 GB didn't help either.

Would it be possible to trigger a manual partial compaction, e.g. first
compacting the SSTables in four batches of 256? Could this be added to
nodetool if it doesn't exist already?
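
To make the request more concrete, below is a rough, untested sketch of the
kind of hook I have in mind, invoked over JMX from a small Java client. The
MBean name, the operation name and its parameters, the keyspace/column
family/file names, and the JMX port are all placeholders/assumptions on my
side, not something I know to exist in the current code:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PartialCompaction {
    public static void main(String[] args) throws Exception {
        // Connect to the node's JMX endpoint (7199 is just the port our nodes expose).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Hypothetical operation: compact only an explicit batch of SSTables
            // for one column family, instead of everything at once.
            ObjectName compactionManager =
                    new ObjectName("org.apache.cassandra.db:type=CompactionManager");
            String batch = "MyKeyspace-MyCF-1-Data.db,MyKeyspace-MyCF-2-Data.db";
            mbs.invoke(compactionManager,
                    "forceUserDefinedCompaction",      // assumed operation name
                    new Object[] { "MyKeyspace", batch },
                    new String[] { "java.lang.String", "java.lang.String" });
        } finally {
            connector.close();
        }
    }
}

The idea is simply that nodetool (or a JMX operation behind it) could accept
an explicit subset of SSTables, so batches like the one above could be fed
in one at a time until the file count is back to something manageable.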

Best regards,

Mathijs Vogelzang
