+1 to checking for snapshots. Cassandra by default will automatically
snapshot tables before destructive actions like drop or truncate.
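For anyone following along, one quick way to see whether snapshots are holding disk space. The data directory below is the default package install location, which is an assumption; on 2.1 `nodetool listsnapshots` reports similar information.

```shell
# Snapshots live as hard links under each table's data directory.
# DATA_DIR is the default package path -- adjust for your install.
DATA_DIR="${DATA_DIR:-/var/lib/cassandra/data}"
# Show the space held by every snapshots/ directory (silent if none).
find "$DATA_DIR" -maxdepth 3 -type d -name snapshots 2>/dev/null \
  | xargs -r du -sh 2>/dev/null || true
```

If stale snapshots turn up, `nodetool clearsnapshot` can remove them per keyspace.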
Some general advice regarding cleanup. Cleanup will result in a temporary
increase in both disk I/O load and disk space usage (especially with STCS).
It should
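To put a rough number on the space side of that, here is a sketch. The sstable sizes are made-up examples; the rule of thumb is that cleanup rewrites sstables one at a time, so keeping free space of at least the largest sstable on the node is a reasonable floor.

```shell
# Cleanup rewrites sstables one at a time, so a conservative floor
# for free space is the size of the largest sstable on the node.
# Sizes in GB below are hypothetical examples.
largest=0
for size in 12 47 180 9; do
  if [ "$size" -gt "$largest" ]; then largest=$size; fi
done
echo "cleanup headroom: keep >= ${largest} GB free"
```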
On Sat, Feb 13, 2016 at 4:30 PM, Branton Davis wrote:
> We use SizeTieredCompaction. The nodes were about 67% full and we were
> planning on adding new nodes (doubling the cluster to 6) soon.
>
Be sure to add those new nodes one at a time.
Have you checked for, and cleaned up, any old snapshots?
Hi,
what kind of compaction strategy do you use? What you are seeing is most
likely a compaction - think of four sstables of 50 GB each: compacting those
can take up to 200 GB extra while the new sstable is being rewritten. After
that the old ones are deleted and the space is freed again.
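As a back-of-the-envelope check of that example (the sizes are the ones from this thread; the assumption is that the input sstables and the new output coexist on disk until the compaction finishes):

```shell
# During an STCS compaction the input sstables and the new output
# coexist until the inputs are deleted, so peak extra usage is
# roughly the sum of the inputs being compacted.
peak=0
for size in 50 50 50 50; do
  peak=$((peak + size))
done
echo "peak extra space during compaction: ~${peak} GB"
```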
If using
We use SizeTieredCompaction. The nodes were about 67% full and we were
planning on adding new nodes (doubling the cluster to 6) soon. I've been
watching the disk space used, and the nodes were taking about 100GB during
compaction, so I thought we were going to be okay for another week. The
One of our clusters had a strange thing happen tonight. It's a 3-node
cluster running 2.1.10. The primary keyspace has RF 3 and vnodes with 256
tokens.
This evening, over the course of about 6 hours, disk usage increased from
around 700GB to around 900GB on only one node. I was at a loss as to