I was wondering if I should always complete two repair cycles with Reaper,
even if one repair cycle finishes in 7 hours.

Currently, I have around 200 GB of column family data to be repaired.
I was scheduling one repair a week and was not seeing too much
stress on my 8-node cluster of i3.xlarge nodes.

Thanks,

Sergio

On Wed, Jan 22, 2020 at 08:28 Sergio <lapostadiser...@gmail.com>
wrote:

> Thank you very much! Yes, I am using Reaper!
>
> Best,
>
> Sergio
>
> On Wed, Jan 22, 2020, 8:00 AM Reid Pinchback <rpinchb...@tripadvisor.com>
> wrote:
>
>> Sergio, if you’re looking for a new frequency for your repairs because of
>> the change, and you are using Reaper, then I’d go for repair_freq <=
>> gc_grace / 2.
>>
>>
>>
>> Just serendipity with a conversation I was having at work this morning:
>> when you actually watch the Reaper logs, you can see situations where
>> unlucky timing with skipped nodes makes the time to remove a tombstone
>> up to 2 x repair_run_time.
>>
>>
>>
>> If you aren’t using Reaper, your mileage will vary, particularly if your
>> repairs are consistent in their ordering across nodes.  Reaper can be
>> moderately non-deterministic, hence the need to be sure you can complete at
>> least two repair runs.
>>
>>
>>
>> R
>>
>>
>>
>> *From: *Sergio <lapostadiser...@gmail.com>
>> *Reply-To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>> *Date: *Tuesday, January 21, 2020 at 7:13 PM
>> *To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>> *Subject: *Re: Is there any concern about increasing gc_grace_seconds
>> from 5 days to 8 days?
>>
>>
>>
>> Thank you very much for your response.
>>
>> The considerations mentioned are the ones that I was expecting.
>>
>> I believe that I am good to go.
>>
>> I just wanted to make sure that there was no need to run any other extra
>> command beside that one.
>>
>>
>>
>> Best,
>>
>>
>>
>> Sergio
>>
>>
>>
>> On Tue, Jan 21, 2020, 3:55 PM Jeff Jirsa <jji...@gmail.com> wrote:
>>
>> Note that if you're actually running repairs within 5 days and you
>> adjust this to 8, you may stream a bunch of tombstones across in that 5-8
>> day window, which can increase disk usage / compaction (because as you pass
>> 5 days, one replica may gc away the tombstones while the others may not,
>> since the tombstones shadow data, so you'll re-stream the tombstone to the
>> other replicas).
>>
>>
>>
>> On Tue, Jan 21, 2020 at 3:28 PM Elliott Sims <elli...@backblaze.com>
>> wrote:
>>
>> In addition to extra space, queries can potentially be more expensive
>> because more dead rows and tombstones will need to be scanned.  How much of
>> a difference this makes will depend drastically on the schema and access
>> pattern, but I wouldn't expect going from 5 days to 8 to be very noticeable.
>>
>>
>>
>> On Tue, Jan 21, 2020 at 2:14 PM Sergio <lapostadiser...@gmail.com> wrote:
>>
>> https://stackoverflow.com/a/22030790
>>
>>
>>
>> For cqlsh:
>>
>> ALTER TABLE <table_name> WITH gc_grace_seconds = <seconds>;
>>
>>
>>
>>
>>
>> On Tue, Jan 21, 2020 at 13:12 Sergio <
>> lapostadiser...@gmail.com> wrote:
>>
>> Hi guys!
>>
>> I just wanted to confirm with you before doing such an operation. I
>> expect the space usage to increase, but nothing more than that. I need to
>> run just:
>>
>> UPDATE COLUMN FAMILY cf WITH GC_GRACE = 691200; // 8 days
>>
>> Is it correct?
>>
>> Thanks,
>>
>> Sergio
>>
>>
