Can't say I have too many ideas. If load is low during the repair it
shouldn't be happening. Your disks aren't overutilised, correct? No other
processes writing loads of data to them?
That is not happening anymore since I am repairing a keyspace with
much less data (the other one is still there in write-only mode).
The command I am using is the most boring one (I even shed the -pr option
to keep anticompactions to a minimum): nodetool -h localhost repair
It's executed sequentially.
Blowing out to 1k SSTables seems a bit full on. What args are you passing
to repair?
Kurt Greaves
k...@instaclustr.com
www.instaclustr.com
On 31 October 2016 at 09:49, Stefano Ortolani wrote:
I've collected some more data-points, and I still see dropped
mutations with compaction_throughput_mb_per_sec set to 8.
The only notable thing regarding the current setup is that I have
another keyspace (not being repaired though) with really wide rows
(100MB per partition), but that shouldn't have
That's what I was thinking. Maybe GC pressure?
Some more details: during anticompaction I have some CFs exploding to 1K
SSTables (dropping back to ~200 upon completion).
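If it helps to quantify the blow-up, the SSTable spike can be watched while the repair runs with something like the following (a sketch; `my_keyspace` is a placeholder for the keyspace being repaired, and `cfstats` is the older name of what later versions call `tablestats`):

```shell
# Active/pending (anti)compaction tasks and their progress
nodetool compactionstats

# Per-table SSTable counts; expect a spike during anticompaction
nodetool cfstats my_keyspace | grep 'SSTable count'
```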
HW specs should be quite good (12 cores / 32 GB RAM) but, I admit, I am
still relying on spinning disks, with ~150 GB per node.
Current vers
That's pretty low already, but perhaps you should lower it further to see if
that improves the dropped mutations during anticompaction (even if it
increases repair time); otherwise the problem might be somewhere else.
Generally dropped mutations are a signal of cluster overload, so if there's
nothing else w
Not yet. Right now I have it set at 16.
Would halving it more or less double the repair time?
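As a rough back-of-envelope (my own sketch, assuming the anticompaction pass is bound by the throttle and amounts to rewriting the node's data once), halving the throughput roughly doubles that portion of the run:

```python
def anticompaction_hours(data_gb: float, throughput_mb_per_sec: float) -> float:
    """Rough lower bound: time to rewrite data_gb at the throttled rate."""
    seconds = data_gb * 1024 / throughput_mb_per_sec
    return seconds / 3600

# ~150 GB per node (figure mentioned earlier in the thread):
print(round(anticompaction_hours(150, 16), 1))  # -> 2.7
print(round(anticompaction_hours(150, 8), 1))   # -> 5.3
```

In practice the full 36-hour run includes validation and streaming as well, so only the anticompaction slice should scale this way.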
On Tue, Aug 9, 2016 at 7:58 PM, Paulo Motta
wrote:
Anticompaction throttling can be done by setting the usual
compaction_throughput_mb_per_sec knob on cassandra.yaml or via nodetool
setcompactionthroughput. Did you try lowering that and checking if that
improves the dropped mutations?
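For reference, the throttle can be changed live and the dropped-mutation counters inspected without a restart (a sketch; the value is in MB/s, and 0 disables throttling entirely):

```shell
# Lower the compaction/anticompaction throttle to 8 MB/s on this node
nodetool setcompactionthroughput 8

# Confirm the new value
nodetool getcompactionthroughput

# The "Dropped" section at the bottom reports dropped MUTATION messages
nodetool tpstats
```

Note this only affects the node it is run on; cassandra.yaml changes are needed to make it persistent across restarts.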
2016-08-09 13:32 GMT-03:00 Stefano Ortolani:
Hi all,
I am running incremental repairs on a weekly basis (can't do it every day
as one single run takes 36 hours), and every time I have at least one node
dropping mutations as part of the process (almost always during the
anticompaction phase). Ironically this leads to a system where repa