You won't be able to have fewer segments than vnodes, so just use 256
segments per node, set the repair parallelism to parallel, and set the
intensity to 1.
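
For reference, here is a minimal sketch of what those settings could look
like in cassandra-reaper.yaml (key names are from recent Reaper versions;
on older releases the segment count and intensity may need to be set per
repair run instead):

    segmentCountPerNode: 256
    repairParallelism: PARALLEL
    repairIntensity: 1.0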

You apparently have more than 3TB per node, and that kind of density is
always challenging when it comes to running "fast" repairs.

Cheers,

On Tue, May 22, 2018 at 07:28, Surbhi Gupta <surbhi.gupt...@gmail.com>
wrote:

> We are on DSE 4.8.15, which is Cassandra 2.1.
> What is the best configuration to use for Reaper for 144 nodes with 256
> vnodes? It shows around 532TB of data when we start OpsCenter repairs.
>
> We need to finish repair soon.
>
> On Mon, May 21, 2018 at 10:53 AM Alexander Dejanovski <
> a...@thelastpickle.com> wrote:
>
>> Hi Surbhi,
>>
>> Reaper might indeed be your best chance to reduce the overhead of vnodes
>> there.
>> The latest betas include a new feature that groups vnodes sharing the
>> same replicas into the same segment. This allows you to have fewer
>> segments than vnodes, and is available with Cassandra 2.2 onwards (the
>> improvement is especially beneficial with Cassandra 3.0+, as such token
>> ranges will be repaired in a single session).
>>
>> We have a Gitter channel that you can join if you want to ask questions.
>>
>> Cheers,
>>
>> On Mon, May 21, 2018 at 15:29, Surbhi Gupta <surbhi.gupt...@gmail.com>
>> wrote:
>>
>>> Thanks Abdul
>>>
>>> On Mon, May 21, 2018 at 6:28 AM Abdul Patel <abd786...@gmail.com> wrote:
>>>
>>>> We have a parameter in the Reaper yaml file called
>>>> repairManagerSchedulingIntervalSeconds; the default is 10 seconds. I
>>>> tested with 8, 6, and 5 seconds and found 5 seconds optimal for my
>>>> environment. You can go lower, but it will have cascading effects on
>>>> CPU and memory consumption.
>>>> So test well.
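>>>>
>>>> For illustration, a rough sketch of that setting in
>>>> cassandra-reaper.yaml (the 5-second value is simply what proved optimal
>>>> in the environment described above, not a general recommendation):
>>>>
>>>>     repairManagerSchedulingIntervalSeconds: 5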
>>>>
>>>>
>>>> On Monday, May 21, 2018, Surbhi Gupta <surbhi.gupt...@gmail.com> wrote:
>>>>
>>>>> Thanks a lot for your inputs.
>>>>> Abdul, how did you tune Reaper?
>>>>>
>>>>> On Sun, May 20, 2018 at 10:10 AM Jonathan Haddad <j...@jonhaddad.com>
>>>>> wrote:
>>>>>
>>>>>> FWIW the largest deployment I know about is a single reaper instance
>>>>>> managing 50 clusters and over 2000 nodes.
>>>>>>
>>>>>> There might be bigger, but I either don’t know about it or can’t
>>>>>> remember.
>>>>>>
>>>>>> On Sun, May 20, 2018 at 10:04 AM Abdul Patel <abd786...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I recently tested Reaper and it actually helped us a lot. Even with
>>>>>>> our small footprint of 18 nodes, Reaper takes close to 6 hrs (it
>>>>>>> initially took 13 hrs; I was able to tune it down by 50%). But it
>>>>>>> really depends on the number of nodes. For example, if you have 4
>>>>>>> nodes with 256 vnodes each, it runs 4*256 = 1024 segments, so for
>>>>>>> your environment it will be 256*144 = 36,864, close to 36k segments.
>>>>>>> Better to test on a POC box how much time it takes and then proceed
>>>>>>> further. I have tested so far in 1 DC only; we could actually have a
>>>>>>> separate Reaper instance handling each DC, but I haven't tested that
>>>>>>> yet.
>>>>>>>
>>>>>>>
>>>>>>> On Sunday, May 20, 2018, Surbhi Gupta <surbhi.gupt...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> We have a cluster with 144 nodes (3 datacenters) with 256 vnodes.
>>>>>>>> When we tried to start repairs from OpsCenter, it showed 1.9 million
>>>>>>>> ranges to repair.
>>>>>>>> And even after setting the compaction and stream throughput to 0,
>>>>>>>> OpsCenter is not able to help us finish the repair in a 9-day
>>>>>>>> timeframe.
>>>>>>>>
>>>>>>>> What are your thoughts on Reaper?
>>>>>>>> Do you think Reaper might be able to help us in this scenario?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>> Surbhi
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>> Jon Haddad
>>>>>> http://www.rustyrazorblade.com
>>>>>> twitter: rustyrazorblade
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>
>>> --
>> -----------------
>> Alexander Dejanovski
>> France
>> @alexanderdeja
>>
>> Consultant
>> Apache Cassandra Consulting
>> http://www.thelastpickle.com
>>
>>
>> --
-----------------
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
