Hi,

The compaction throughput is indeed shared by all compactors.
I would not advise going below 8 MB/s per compactor, as slowing down
compactions puts more pressure on the heap.
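
To put numbers on it (just as an illustration, reusing the values from
your mail): with compaction_throughput_mb_per_sec: 16 and
concurrent_compactors: 4, each compactor gets roughly 16 / 4 = 4 MB/s
when all four are busy, and with 8 compactors that drops to ~2 MB/s.
To keep the ~8 MB/s per compactor floor, the overall cap should be at
least 8 x concurrent_compactors, i.e. 32 MB/s for 4 compactors or
64 MB/s for 8.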

When tuning compaction, the first thing to do is to evaluate the maximum
throughput your disks can sustain without impacting p99 read latencies.
Then you can consider raising the number of compactors if you're still
seeing contention.
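
As a rough sketch of how to do that (the numbers below are placeholders,
not recommendations), you can change the cap live with nodetool and watch
p99 read latencies and pending compactions before committing anything to
cassandra.yaml:

nodetool setcompactionthroughput 32
nodetool compactionstats

Note that setcompactionthroughput only changes the in-memory value; once
you've settled on a number, write it back to
compaction_throughput_mb_per_sec in cassandra.yaml so it survives a
restart.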

So the advice would be: don't raise the number of compactors (4 is
probably enough already), but do tune the compaction throughput if you're
running on SSDs or have an array of HDDs.
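
For example, on an SSD backed node that could end up looking like this in
cassandra.yaml (purely illustrative values, validate them against your
own read latencies):

concurrent_compactors: 4
compaction_throughput_mb_per_sec: 64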

Cheers,

On Tue, Jun 5, 2018 at 10:48 AM Steinmaurer, Thomas <
thomas.steinmau...@dynatrace.com> wrote:

> Hello,
>
>
>
> most likely obvious and perhaps already answered in the past, but I just
> want to be sure …
>
>
>
> E.g. I have set:
>
> concurrent_compactors: 4
>
> compaction_throughput_mb_per_sec: 16
>
>
>
> I guess this will lead to ~4 MB/s per thread if I have 4 compactions
> running in parallel?
>
>
>
> So, when upscaling a machine and following the recommendation in
> cassandra.yaml, I may set:
>
>
>
> concurrent_compactors: 8
>
>
>
>
>
> If the throughput remains unchanged, does this mean we then have 2 MB/s
> per thread, e.g. largish compactions running on a single thread taking
> twice as long?
>
>
>
> Using Cassandra 2.1 and 3.11 in case this matters.
>
>
>
>
>
> Thanks a lot!
>
> Thomas
>
>
-- 
-----------------
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
