Re: Anticompaction causing significant increase in disk usage

2018-09-12 Thread Martin Mačura
Hi Alain, thank you for your response. I'm using incremental repair. I'm afraid subrange repair is not a viable alternative, because it's very slow - takes over a week to complete. I've found at least a partial solution - specifying '-local' or '-dc' parameters
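[Editor's note: '-local' (--in-local-dc) and '-dc' (--in-dc) are standard nodetool repair options that restrict the repair to one datacenter; in the 3.x line a repair that does not cover all replicas generally skips anticompaction and does not mark data as repaired, which appears to be why it is only a partial solution. A minimal sketch, with placeholder keyspace and DC names:

#!/usr/bin/env python3
# Minimal sketch: wrap a datacenter-restricted repair in a script.
# Assumes nodetool is on PATH; keyspace and DC names are placeholders.
import subprocess

KEYSPACE = "my_keyspace"   # placeholder
LOCAL_DC = "dc1"           # placeholder

# -local / --in-local-dc restricts repair to replicas in the local DC;
# -dc / --in-dc <name> restricts it to a named DC instead.
subprocess.run(["nodetool", "repair", "-local", KEYSPACE], check=True)
# or: subprocess.run(["nodetool", "repair", "-dc", LOCAL_DC, KEYSPACE], check=True)
]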

Re: Anticompaction causing significant increase in disk usage

2018-09-12 Thread Alain RODRIGUEZ
without anticompaction in "modern" versions of Apache Cassandra is subrange repair, which fully skips anticompaction. To perform a subrange repair correctly, you have three options: - Compute valid token subranges yourself and script repairs accordingly - Use the Cassandra
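[Editor's note: the "compute valid token subranges yourself and script repairs accordingly" option can look roughly like the sketch below. It assumes the Python cassandra-driver package, a Murmur3 token ring, and placeholder keyspace/contact-point names; the wrap-around range and error handling are left out to keep it short.

#!/usr/bin/env python3
# Sketch: repair the local node's primary token ranges in small slices,
# using -st/-et (full, not incremental) so no anticompaction is triggered.
import subprocess
from cassandra.cluster import Cluster

KEYSPACE = "my_keyspace"   # placeholder
SLICES = 32                # slices per primary range (arbitrary choice)

cluster = Cluster(["127.0.0.1"])   # placeholder contact point
session = cluster.connect()

# Tokens owned by this node, plus the full ring from system.peers.
my_tokens = {int(t) for t in session.execute("SELECT tokens FROM system.local").one().tokens}
all_tokens = set(my_tokens)
for row in session.execute("SELECT tokens FROM system.peers"):
    all_tokens.update(int(t) for t in (row.tokens or []))
ring = sorted(all_tokens)

def slices(start, end, n):
    """Yield up to n contiguous (start, end] sub-slices of one token range."""
    step = max((end - start) // n, 1)
    cur = start
    while cur + step < end:
        yield cur, cur + step
        cur += step
    yield cur, end

for i, cur in enumerate(ring):
    if cur not in my_tokens:
        continue                  # not one of this node's primary ranges
    prev = ring[i - 1]            # i == 0 wraps to the last ring token
    if prev > cur:
        continue                  # wrap-around range skipped in this sketch
    for st, et in slices(prev, cur, SLICES):
        subprocess.run(["nodetool", "repair", "-full",
                        "-st", str(st), "-et", str(et), KEYSPACE],
                       check=True)

cluster.shutdown()
]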

Anticompaction causing significant increase in disk usage

2018-09-12 Thread Martin Mačura
Hi, we're on Cassandra 3.11.2. During an anticompaction after repair, the TotalDiskSpaceUsed value of one table gradually went from 700GB to 1180GB, and then suddenly jumped back to 700GB. This happened on all nodes involved in the repair. There was no change in PercentRepaired during or after
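[Editor's note: the TotalDiskSpaceUsed figure mentioned here should correspond to the "Space used (total)" line of nodetool tablestats, so the growth during anticompaction can be watched with a small polling loop such as the sketch below; the keyspace.table name and 60 s interval are placeholders.

#!/usr/bin/env python3
# Sketch: poll on-disk space for one table while anticompaction runs.
# Assumes nodetool is on PATH. Stop with Ctrl-C.
import re
import subprocess
import time

TABLE = "my_keyspace.my_table"   # placeholder keyspace.table

while True:
    out = subprocess.run(["nodetool", "tablestats", TABLE],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r"Space used \(total\):\s*(\d+)", out)
    if m:
        print(time.strftime("%H:%M:%S"), int(m.group(1)) // 1024**3, "GiB")
    time.sleep(60)
]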

Anticompaction

2017-10-30 Thread Vlad
Hi, I ran repair, and then I see that anticompaction has started on all nodes. Does this mean that all data is already repaired? Actually, I increased RF, so can I already use the database? Thanks.

Re: Anticompaction Question

2016-10-25 Thread Rajath Subramanyam
Hi Anubhav, According to the DataStax documentation here, after the anti-compaction process splits the ranges into repaired and unrepaired SSTables, they are compacted in their own separate pools. Regards, Rajath
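[Editor's note: the repaired/unrepaired split is recorded per SSTable in the repairedAt field of its metadata (0 means unrepaired). One way to see how a table's SSTables are distributed between the two pools is the sstablemetadata tool shipped with Cassandra; a rough sketch, with a placeholder data directory:

#!/usr/bin/env python3
# Sketch: count repaired vs unrepaired SSTables for one table.
# sstablemetadata may live in tools/bin depending on packaging; the data
# directory glob below is a placeholder.
import glob
import re
import subprocess

DATA_GLOB = "/var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db"  # placeholder

repaired, unrepaired = 0, 0
for path in glob.glob(DATA_GLOB):
    out = subprocess.run(["sstablemetadata", path],
                         capture_output=True, text=True).stdout
    m = re.search(r"Repaired at:\s*(\d+)", out)
    # repairedAt == 0 means the SSTable sits in the unrepaired pool
    if m and int(m.group(1)) > 0:
        repaired += 1
    else:
        unrepaired += 1

print(f"repaired: {repaired}, unrepaired: {unrepaired}")
]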

Anticompaction Question

2016-10-25 Thread Anubhav Kale
Hello, if incremental repairs are enabled, there is logic in every compaction strategy to make sure not to mix repaired and unrepaired SSTables. Does this mean that if some SSTable files are repaired and some aren't, and incremental repairs don't work reliably, the unrepaired tables will never get
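[Editor's note: if incremental repair is abandoned, one way to merge the two pools back together is to mark every SSTable as unrepaired with the sstablerepairedset tool (with the node stopped), after which normal compaction treats them as a single pool again. A hedged sketch with a placeholder data directory:

#!/usr/bin/env python3
# Sketch: move every SSTable of one table back to the unrepaired pool with
# sstablerepairedset. Run only while the node is stopped; depending on
# packaging the tool may be in tools/bin. The path glob is a placeholder.
import glob
import subprocess

DATA_GLOB = "/var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db"  # placeholder

sstables = glob.glob(DATA_GLOB)
if sstables:
    subprocess.run(["sstablerepairedset", "--really-set", "--is-unrepaired"] + sstables,
                   check=True)
]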

What are the repercussions of a restart during anticompaction?

2015-11-05 Thread Bryan Cheng
Hey list, Tried to find an answer to this elsewhere, but turned up nothing. We ran our first incremental repair after a large dc migration two days ago; the cluster had been running full repairs prior to this during the migration. Our nodes are currently going through anticompaction, as expected
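[Editor's note: one way to reduce the risk around a restart is to check first whether any anticompaction is still running on the node, e.g. via nodetool compactionstats; the exact task label varies by version, so the string match below is only an assumption.

#!/usr/bin/env python3
# Sketch: refuse to restart while nodetool compactionstats still reports an
# anticompaction task. The "Anticompaction" label is an assumption and may
# differ slightly between versions.
import subprocess
import sys

out = subprocess.run(["nodetool", "compactionstats"],
                     capture_output=True, text=True, check=True).stdout
if "Anticompaction" in out:
    sys.exit("Anticompaction still in progress; not restarting yet.")
print("No anticompaction reported; proceeding with restart steps.")
]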