Re: Incremental Repair Migration

2017-01-10 Thread Bhuvan Rawal
Hi Amit,

You can try Reaper; it makes repairs effortless. There are a host of other
benefits, but most importantly it offers a single portal to manage and track
ongoing as well as past repairs.

For incremental repairs it creates a single segment per node. If you find
that is indeed the case, you may have to increase the segment timeout when
you run it for the first time, since that first run repairs the whole set of
SSTables.
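
(A minimal sketch of how an incremental run can be kicked off through
Reaper's REST API is below. The endpoint and parameter names, as well as the
hangingRepairTimeoutMins setting mentioned in the comments, are assumptions
from memory, so verify them against the docs for your cassandra-reaper build.)

    # Sketch: start an incremental repair run through Reaper's REST API.
    # NOTE: endpoint and parameter names are assumptions -- check them
    # against the REST documentation of your cassandra-reaper version.
    import requests

    REAPER = "http://reaper-host:8080"  # hypothetical Reaper address

    params = {
        "clusterName": "prod",        # cluster as registered in Reaper
        "keyspace": "my_keyspace",    # placeholder keyspace
        "owner": "ops",
        "incrementalRepair": "true",  # incremental => one segment per node
    }

    # Create the run, then flip it to RUNNING (new runs start NOT_STARTED).
    run = requests.post(f"{REAPER}/repair_run", params=params).json()
    requests.put(f"{REAPER}/repair_run/{run['id']}", params={"state": "RUNNING"})

    # If the first incremental run keeps timing out because it has to repair
    # every unrepaired SSTable, raise the segment timeout in the Reaper YAML
    # (hangingRepairTimeoutMins in the versions I have seen) and restart
    # Reaper before retrying.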

Regards,
Bhuvan

On Jan 10, 2017 8:44 PM, "Jonathan Haddad" <j...@jonhaddad.com> wrote:

Reaper supports incremental repair.


Re: Incremental Repair Migration

2017-01-10 Thread Jonathan Haddad
Reaper supports incremental repair.


RE: Incremental Repair Migration

2017-01-09 Thread Amit Singh F
Hi Jonathan,

Really appreciate your response.

It will not be possible for us to move to Reaper as of now; we are in the
process of migrating to incremental repair.

Also, running repair constantly will be a costly affair in our case. Migrating
to incremental repair with a large dataset will take hours to finish if we go
ahead with the procedure shared by DataStax.

So is there any quick method to reduce that?

Regards
Amit Singh
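
(For context, the per-node steps of the DataStax migration procedure
referenced above boil down to roughly the sketch below. The keyspace, data
path, and the way the service is stopped and started are placeholders, so
treat it as an outline rather than a drop-in script.)

    # Rough outline of the Cassandra 2.1 "migrate to incremental repair"
    # steps per node. Keyspace, data path and service commands are
    # placeholders for illustration only.
    import glob
    import subprocess

    KEYSPACE = "my_keyspace"                          # placeholder
    DATA_DIR = "/var/lib/cassandra/data/my_keyspace"  # placeholder

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # 1. Keep compactions from rewriting SSTables mid-migration.
    run(["nodetool", "disableautocompaction", KEYSPACE])

    # 2. Run one last full repair (the ~8 hr per-node step).
    run(["nodetool", "repair", KEYSPACE])

    # 3. Stop Cassandra on this node (environment specific).
    run(["service", "cassandra", "stop"])

    # 4. Mark existing SSTables as repaired so the first incremental
    #    run does not have to re-repair everything.
    sstables = glob.glob(f"{DATA_DIR}/*/*-Data.db")
    run(["sstablerepairedset", "--really-set", "--is-repaired"] + sstables)

    # 5. Restart the node and re-enable autocompaction.
    run(["service", "cassandra", "start"])
    run(["nodetool", "enableautocompaction", KEYSPACE])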



Re: Incremental Repair Migration

2017-01-09 Thread Jonathan Haddad
Your best bet is to just run repair constantly. We maintain an updated fork
of Spotify's reaper tool to help manage it:
https://github.com/thelastpickle/cassandra-reaper
On Mon, Jan 9, 2017 at 10:04 PM Amit Singh F 
wrote:

> Hi All,
>
> We are thinking of migrating from primary range repair (-pr) to
> incremental repair.
>
> Environment:
>
> • Cassandra 2.1.16
> • 25 node cluster
> • RF 3
> • Data size up to 450 GB per node
>
> We found that running a full repair will take around 8 hrs per node,
> which *means 200 odd hrs* for migrating the entire cluster to
> incremental repair. Even though there is zero downtime, it is quite
> unreasonable to ask for a 200 hr maintenance window for migrating repairs.
>
> Just want to know how Cassandra users in the community optimize the
> procedure to reduce migration time?
>
> Thanks & Regards
>
> Amit Singh
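
(A quick back-of-the-envelope check of the figure above, just the arithmetic
spelled out:)

    # Back-of-the-envelope check of the "200 odd hrs" estimate in the thread.
    nodes = 25
    full_repair_hours_per_node = 8

    serial_total_hours = nodes * full_repair_hours_per_node
    print(f"Serial, node-by-node migration: ~{serial_total_hours} hours")  # ~200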