Re: Incremental repair

2018-08-20 Thread Alexander Dejanovski
Hi Prachi,

Incremental has been the default since C* 2.2.

You can run a full repair by adding the "--full" flag to your nodetool
command.
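For reference, a minimal sketch of the two invocations (the keyspace name below is a placeholder, and against a pre-4.0 cluster):

```shell
# Since C* 2.2 (and before 4.0), plain "repair" means incremental repair:
nodetool repair -pr                   # incremental, primary ranges only
# Add --full to force a full repair of the same ranges:
nodetool repair --full -pr
# Or run a full repair of a single keyspace (my_keyspace is a placeholder):
nodetool repair --full my_keyspace
```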

Cheers,


On Mon, Aug 20, 2018 at 19:50, Prachi Rath  wrote:

> Hi Community,
>
> I am currently creating a new cluster with Cassandra 3.11.2. While
> enabling repair, I noticed that incremental repair is true in the logfile.
>
>
> (parallelism: parallel, primary range: true, incremental: true, job
> threads: 1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges:
> 20, pull repair: false)
>
> I was running repair with the -pr option only.
>
> Question: Is incremental repair the default repair for Cassandra 3.11.2?
>
> Thanks,
> Prachi
>
>
> --
--
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


Re: Incremental Repair

2017-03-13 Thread Paulo Motta
> there are some nasty edge cases when you mix incremental repair and full
> repair ( https://issues.apache.org/jira/browse/CASSANDRA-13153 )

Mixing incremental and full repairs just makes that more likely to happen.
Although unlikely, a similar condition is still possible even when using
incremental repair alone; this is ultimately fixed by CASSANDRA-9143 in 4.0,
so I'd probably stick to Blake's suggestions on #13153:

> It seems like we should recommend that users who delete data:
> 1. Stop using incremental repair (pre-4.0)
> 2. Run a full repair after upgrading to 4.0 before using incremental
> repair again
> We should also recommend that even if users don't delete data, they
> should take a look at the amount of streaming their incremental repair is
> doing, and decide if it might be less expensive to just do full repairs
> instead.

2017-03-13 1:15 GMT-03:00 Jeff Jirsa :

>
>
> On 2017-03-12 10:44 (-0700), Anuj Wadehra  wrote:
> > Hi,
> >
> > Our setup is as follows:
> > 2 DCS with N nodes, RF=DC1:3,DC2:3, Hinted Handoff=3 hours, Incremental
> Repair scheduled once on every node (ALL DCs) within the gc grace period.
> >
> > I have following queries regarding incremental repairs:
> >
> > 1. When a node is down for X hours (where x > hinted handoff hours and
> less than gc grace time), I think incremental repair is sufficient rather
> than doing the full repair. Is the understanding correct ?
> >
>
> Incremental repair SHOULD provide the same guarantees as regular repair.
>
> > 2. DataStax recommends "Run incremental repair daily, run full repairs
> weekly to monthly". Does that mean that I have to run full repairs every
> week to month EVEN IF I do daily incremental repairs? If yes, whats the
> reasoning of running full repair when inc repair is already run?
> >
> > Reference: https://docs.datastax.com/en/cassandra/3.0/cassandra/
> operations/opsRepairNodesWhen.html
> >
>
> I don't know why datastax suggests this, there are some nasty edge cases
> when you mix incremental repair and full repair (
> https://issues.apache.org/jira/browse/CASSANDRA-13153 )
>
> > 3. We run inc repair at least once in gc grace instead of the general
> recommendation that inc repair should be run daily. Do you see any problem
> with the approach?
> >
> >
>
> The more often you run it, the less data will be transferred, and the less
> painful it will be. By running it weekly, you're making each run do 7x as
> much work compared to running it daily, increasing the chance of having
> it impact your typical latencies.
>
>
>


Re: Incremental Repair

2017-03-12 Thread Jeff Jirsa


On 2017-03-12 10:44 (-0700), Anuj Wadehra  wrote: 
> Hi,
> 
> Our setup is as follows:
> 2 DCS with N nodes, RF=DC1:3,DC2:3, Hinted Handoff=3 hours, Incremental 
> Repair scheduled once on every node (ALL DCs) within the gc grace period.
> 
> I have following queries regarding incremental repairs:
> 
> 1. When a node is down for X hours (where x > hinted handoff hours and less 
> than gc grace time), I think incremental repair is sufficient rather than 
> doing the full repair. Is the understanding correct ? 
> 

Incremental repair SHOULD provide the same guarantees as regular repair.

> 2. DataStax recommends "Run incremental repair daily, run full repairs weekly 
> to monthly". Does that mean that I have to run full repairs every week to 
> month EVEN IF I do daily incremental repairs? If yes, whats the reasoning of 
> running full repair when inc repair is already run?
> 
> Reference: 
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesWhen.html
> 

I don't know why datastax suggests this, there are some nasty edge cases when 
you mix incremental repair and full repair ( 
https://issues.apache.org/jira/browse/CASSANDRA-13153 ) 

> 3. We run inc repair at least once in gc grace instead of the general 
> recommendation that inc repair should be run daily. Do you see any problem 
> with the approach? 
> 
>

The more often you run it, the less data will be transferred, and the less
painful it will be. By running it weekly, you're making each run do 7x as much
work compared to running it daily, increasing the chance of having it impact
your typical latencies.
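The 7x point can be sketched with toy numbers (the 10 GB/day per-node ingest rate is an assumption for illustration, not from the thread):

```shell
# Work per incremental-repair run grows with the interval between runs,
# assuming a steady write rate and that incremental repair only processes
# data written since the previous run.
daily_ingest_gb=10   # hypothetical per-node write rate

per_run_gb() {
  # Data one run must process, given $1 = days since the last run.
  echo $(( daily_ingest_gb * $1 ))
}

echo "daily run:  $(per_run_gb 1) GB"   # 10 GB
echo "weekly run: $(per_run_gb 7) GB"   # 70 GB: 7x the work in one burst
```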




Re: Incremental Repair Migration

2017-01-10 Thread Bhuvan Rawal
Hi Amit,

You can try Reaper; it makes repairs effortless. There are a host of other
benefits, but most importantly it offers a single portal to manage and track
ongoing as well as past repairs.

For incremental repairs it breaks the work into a single segment per node. If
you find that is indeed the case, you may have to increase the segment timeout
the first time you run it, since that first run repairs the whole set of
sstables.

Regards,
Bhuvan

On Jan 10, 2017 8:44 PM, "Jonathan Haddad" <j...@jonhaddad.com> wrote:

Reaper supports incremental repair.
On Mon, Jan 9, 2017 at 11:27 PM Amit Singh F <amit.f.si...@ericsson.com>
wrote:

> Hi Jonathan,
>
> Really appreciate your response.
>
> It will not be possible for us to move to Reaper as of now; we are in the
> process of migrating to incremental repair.
>
> Also, running repair constantly will be a costly affair in our case.
> Migrating to incremental repair with a large dataset will take hours if we
> go ahead with the procedure shared by DataStax.
>
> So is there any quick method to reduce that?
>
> Regards
>
> Amit Singh
>
>
> *From:* Jonathan Haddad [mailto:j...@jonhaddad.com]
> *Sent:* Tuesday, January 10, 2017 11:50 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Incremental Repair Migration
>
>
>
> Your best bet is to just run repair constantly. We maintain an updated
> fork of Spotify's reaper tool to help manage it: https://github.com/
> thelastpickle/cassandra-reaper
>
> On Mon, Jan 9, 2017 at 10:04 PM Amit Singh F <amit.f.si...@ericsson.com>
> wrote:
>
> Hi All,
>
> We are thinking of migrating from primary range repair (-pr) to
> incremental repair.
>
> Environment :
>
> • Cassandra 2.1.16
>
> • 25 node cluster
>
> • RF 3
>
> • Data size up to 450 GB per node
>
> We found that a full repair takes around 8 hrs per node, which *means
> 200-odd hrs* for migrating the entire cluster to incremental repair. Even
> though there is zero downtime, it is quite unreasonable to ask for a 200 hr
> maintenance window for migrating repairs.
>
> Just want to know how Cassandra users in the community optimize the
> procedure to reduce migration time?
>
> Thanks & Regards
>
> Amit Singh
>


Re: Incremental Repair Migration

2017-01-10 Thread Jonathan Haddad
Reaper supports incremental repair.
On Mon, Jan 9, 2017 at 11:27 PM Amit Singh F <amit.f.si...@ericsson.com>
wrote:

> Hi Jonathan,
>
> Really appreciate your response.
>
> It will not be possible for us to move to Reaper as of now; we are in the
> process of migrating to incremental repair.
>
> Also, running repair constantly will be a costly affair in our case.
> Migrating to incremental repair with a large dataset will take hours if we
> go ahead with the procedure shared by DataStax.
>
> So is there any quick method to reduce that?
>
> Regards
>
> Amit Singh
>
>
> *From:* Jonathan Haddad [mailto:j...@jonhaddad.com]
> *Sent:* Tuesday, January 10, 2017 11:50 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Incremental Repair Migration
>
>
>
> Your best bet is to just run repair constantly. We maintain an updated
> fork of Spotify's reaper tool to help manage it:
> https://github.com/thelastpickle/cassandra-reaper
>
> On Mon, Jan 9, 2017 at 10:04 PM Amit Singh F <amit.f.si...@ericsson.com>
> wrote:
>
> Hi All,
>
> We are thinking of migrating from primary range repair (-pr) to
> incremental repair.
>
> Environment :
>
> • Cassandra 2.1.16
>
> • 25 node cluster
>
> • RF 3
>
> • Data size up to 450 GB per node
>
> We found that a full repair takes around 8 hrs per node, which *means
> 200-odd hrs* for migrating the entire cluster to incremental repair. Even
> though there is zero downtime, it is quite unreasonable to ask for a 200 hr
> maintenance window for migrating repairs.
>
> Just want to know how Cassandra users in the community optimize the
> procedure to reduce migration time?
>
> Thanks & Regards
>
> Amit Singh
>
>


RE: Incremental Repair Migration

2017-01-09 Thread Amit Singh F
Hi Jonathan,

Really appreciate your response.

It will not be possible for us to move to Reaper as of now; we are in the
process of migrating to incremental repair.

Also, running repair constantly will be a costly affair in our case. Migrating
to incremental repair with a large dataset will take hours if we go ahead with
the procedure shared by DataStax.

So is there any quick method to reduce that?

Regards
Amit Singh

From: Jonathan Haddad [mailto:j...@jonhaddad.com]
Sent: Tuesday, January 10, 2017 11:50 AM
To: user@cassandra.apache.org
Subject: Re: Incremental Repair Migration

Your best bet is to just run repair constantly. We maintain an updated fork of 
Spotify's reaper tool to help manage it: 
https://github.com/thelastpickle/cassandra-reaper
On Mon, Jan 9, 2017 at 10:04 PM Amit Singh F <amit.f.si...@ericsson.com> wrote:
Hi All,

We are thinking of migrating from primary range repair (-pr) to incremental
repair.

Environment :

• Cassandra 2.1.16
• 25 node cluster
• RF 3
• Data size up to 450 GB per node

We found that a full repair takes around 8 hrs per node, which means 200-odd
hrs for migrating the entire cluster to incremental repair. Even though there
is zero downtime, it is quite unreasonable to ask for a 200 hr maintenance
window for migrating repairs.

Just want to know how Cassandra users in the community optimize the procedure
to reduce migration time?

Thanks & Regards
Amit Singh
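The 200 hr figure above, and what repairing several nodes at once would do to the wall-clock window, can be sketched as follows (the parallelism is arithmetic only; whether concurrent repairs are safe depends on token-range overlap in the cluster):

```shell
# Back-of-envelope migration-window math from the thread's numbers.
hours_per_node=8
nodes=25

migration_hours() {
  # Wall-clock hours to repair all nodes, $1 at a time:
  # ceil(nodes / $1) * hours_per_node, using integer arithmetic.
  echo $(( (nodes + $1 - 1) / $1 * hours_per_node ))
}

echo "sequential:    $(migration_hours 1) h"   # 200 h, the "200 odd hrs"
echo "5 in parallel: $(migration_hours 5) h"   # 40 h
```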


Re: Incremental Repair Migration

2017-01-09 Thread Jonathan Haddad
Your best bet is to just run repair constantly. We maintain an updated fork
of Spotify's reaper tool to help manage it:
https://github.com/thelastpickle/cassandra-reaper
On Mon, Jan 9, 2017 at 10:04 PM Amit Singh F 
wrote:

> Hi All,
>
> We are thinking of migrating from primary range repair (-pr) to
> incremental repair.
>
> Environment :
>
> • Cassandra 2.1.16
>
> • 25 node cluster
>
> • RF 3
>
> • Data size up to 450 GB per node
>
> We found that a full repair takes around 8 hrs per node, which *means
> 200-odd hrs* for migrating the entire cluster to incremental repair. Even
> though there is zero downtime, it is quite unreasonable to ask for a 200 hr
> maintenance window for migrating repairs.
>
> Just want to know how Cassandra users in the community optimize the
> procedure to reduce migration time?
>
> Thanks & Regards
>
> Amit Singh
>


Re: Incremental repair for the first time

2017-01-09 Thread Kathiresan S
Thanks Amit & Oskar

Thanks,
Kathir

On Mon, Jan 9, 2017 at 3:23 AM, Oskar Kjellin <oskar.kjel...@gmail.com>
wrote:

> There is no harm in running it tho. If it's not needed it will simply
> terminate. Better to be safe
>
> Sent from my iPhone
>
> On 9 Jan 2017, at 08:13, Amit Singh F <amit.f.si...@ericsson.com> wrote:
>
> Hi ,
>
>
>
> Generally, upgradesstables is only recommended when you move between major
> versions, e.g. from 2.0 to 2.1 or from 2.1 to 2.2. Since you are doing a
> minor version upgrade, there is no need to run the upgradesstables utility.
>
> This DataStax link might be helpful:
>
> https://support.datastax.com/hc/en-us/articles/208040036-Nodetool-upgradesstables-FAQ
>
>
>
> *From:* Kathiresan S [mailto:kathiresanselva...@gmail.com
> <kathiresanselva...@gmail.com>]
> *Sent:* Wednesday, January 04, 2017 12:22 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Incremental repair for the first time
>
>
>
> Thank you!
>
>
>
> We are planning to upgrade to 3.0.10 for this issue.
>
>
>
> From the NEWS txt file (https://github.com/apache/
> cassandra/blob/trunk/NEWS.txt), it looks like there is no need for
> sstableupgrade when we upgrade from 3.0.4 to 3.0.10 (i.e. Just installing
> 3.0.10 Cassandra would suffice and it will work with the sstables created
> by 3.0.4 ?)
>
>
>
> Could you please confirm (if i'm reading the upgrade instructions
> correctly)?
>
>
>
> Thanks,
>
> Kathir
>
>
>
> On Tue, Dec 20, 2016 at 5:28 PM, kurt Greaves <k...@instaclustr.com>
> wrote:
>
> No workarounds, your best/only option is to upgrade (plus you get the
> benefit of loads of other bug fixes).
>
>
>
> On 16 December 2016 at 21:58, Kathiresan S <kathiresanselva...@gmail.com>
> wrote:
>
> Thank you!
>
>
>
> Is any work around available for this version?
>
>
>
> Thanks,
>
> Kathir
>
>
>
> On Friday, December 16, 2016, Jake Luciani <jak...@gmail.com> wrote:
>
> This was fixed post 3.0.4 please upgrade to latest 3.0 release
>
>
>
> On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S <
> kathiresanselva...@gmail.com> wrote:
>
> Hi,
>
>
>
> We have a brand new Cassandra cluster (version 3.0.4) and we set up
> nodetool repair scheduled for every day (without any options for repair).
> As per documentation, incremental repair is the default in this case.
>
> Should we do a full repair for the very first time on each node once and
> then leave it to do incremental repair afterwards?
>
>
>
> *Problem we are facing:*
>
>
>
> On a random node, the repair process throws validation failed error,
> pointing to some other node
>
>
>
> For Eg. Node A, where the repair is run (without any option), throws below
> error
>
>
>
> *Validation failed in /Node B*
>
>
>
> In Node B when we check the logs, below exception is seen at the same
> exact time...
>
>
>
> *java.lang.RuntimeException: Cannot start multiple repair sessions over
> the same sstables*
>
> *at
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1087)
> ~[apache-cassandra-3.0.4.jar:3.0.4]*
>
> *at
> org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
> ~[apache-cassandra-3.0.4.jar:3.0.4]*
>
> *at
> org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:700)
> ~[apache-cassandra-3.0.4.jar:3.0.4]*
>
> *at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ~[na:1.8.0_73]*
>
> *at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> ~[na:1.8.0_73]*
>
>
>
> Can you please help on how this can be fixed?
>
>
>
> Thanks,
>
> Kathir
>
>
>
>
> --
>
> http://twitter.com/tjake
>
>
>
>
>
>


Re: Incremental repair for the first time

2017-01-09 Thread Oskar Kjellin
There is no harm in running it tho. If it's not needed it will simply 
terminate. Better to be safe

Sent from my iPhone

> On 9 Jan 2017, at 08:13, Amit Singh F <amit.f.si...@ericsson.com> wrote:
> 
> Hi ,
>  
> Generally, upgradesstables is only recommended when you move between major
> versions, e.g. from 2.0 to 2.1 or from 2.1 to 2.2. Since you are doing a
> minor version upgrade, there is no need to run the upgradesstables utility.
>
> This DataStax link might be helpful:
>  
> https://support.datastax.com/hc/en-us/articles/208040036-Nodetool-upgradesstables-FAQ
>  
> From: Kathiresan S [mailto:kathiresanselva...@gmail.com] 
> Sent: Wednesday, January 04, 2017 12:22 AM
> To: user@cassandra.apache.org
> Subject: Re: Incremental repair for the first time
>  
> Thank you!
>  
> We are planning to upgrade to 3.0.10 for this issue.
>  
> From the NEWS txt file 
> (https://github.com/apache/cassandra/blob/trunk/NEWS.txt), it looks like 
> there is no need for sstableupgrade when we upgrade from 3.0.4 to 3.0.10 
> (i.e. Just installing 3.0.10 Cassandra would suffice and it will work with 
> the sstables created by 3.0.4 ?)
>  
> Could you please confirm (if i'm reading the upgrade instructions correctly)?
>  
> Thanks,
> Kathir
>  
> On Tue, Dec 20, 2016 at 5:28 PM, kurt Greaves <k...@instaclustr.com> wrote:
> No workarounds, your best/only option is to upgrade (plus you get the benefit 
> of loads of other bug fixes).
>  
> On 16 December 2016 at 21:58, Kathiresan S <kathiresanselva...@gmail.com> 
> wrote:
> Thank you!
>  
> Is any work around available for this version? 
>  
> Thanks,
> Kathir
> 
> 
> On Friday, December 16, 2016, Jake Luciani <jak...@gmail.com> wrote:
> This was fixed post 3.0.4 please upgrade to latest 3.0 release
>  
> On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S <kathiresanselva...@gmail.com> 
> wrote:
> Hi,
>  
> We have a brand new Cassandra cluster (version 3.0.4) and we set up nodetool 
> repair scheduled for every day (without any options for repair). As per 
> documentation, incremental repair is the default in this case. 
> Should we do a full repair for the very first time on each node once and then 
> leave it to do incremental repair afterwards?
>  
> Problem we are facing:
>  
> On a random node, the repair process throws validation failed error, pointing 
> to some other node
>  
> For Eg. Node A, where the repair is run (without any option), throws below 
> error
>  
> Validation failed in /Node B
>  
> In Node B when we check the logs, below exception is seen at the same exact 
> time...
>  
> java.lang.RuntimeException: Cannot start multiple repair sessions over the 
> same sstables
> at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1087)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:700)
>  ~[apache-cassandra-3.0.4.jar:3.0.4]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_73]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_73]
>  
> Can you please help on how this can be fixed?
>  
> Thanks,
> Kathir
> 
> 
> 
> --
> http://twitter.com/tjake
>  
>  


RE: Incremental repair for the first time

2017-01-08 Thread Amit Singh F
Hi ,

Generally, upgradesstables is only recommended when you move between major
versions, e.g. from 2.0 to 2.1 or from 2.1 to 2.2. Since you are doing a minor
version upgrade, there is no need to run the upgradesstables utility.

This DataStax link might be helpful:

https://support.datastax.com/hc/en-us/articles/208040036-Nodetool-upgradesstables-FAQ

From: Kathiresan S [mailto:kathiresanselva...@gmail.com]
Sent: Wednesday, January 04, 2017 12:22 AM
To: user@cassandra.apache.org
Subject: Re: Incremental repair for the first time

Thank you!

We are planning to upgrade to 3.0.10 for this issue.

From the NEWS txt file 
(https://github.com/apache/cassandra/blob/trunk/NEWS.txt), it looks like there 
is no need for sstableupgrade when we upgrade from 3.0.4 to 3.0.10 (i.e. Just 
installing 3.0.10 Cassandra would suffice and it will work with the sstables 
created by 3.0.4 ?)

Could you please confirm (if i'm reading the upgrade instructions correctly)?

Thanks,
Kathir

On Tue, Dec 20, 2016 at 5:28 PM, kurt Greaves <k...@instaclustr.com> wrote:
No workarounds, your best/only option is to upgrade (plus you get the benefit 
of loads of other bug fixes).

On 16 December 2016 at 21:58, Kathiresan S <kathiresanselva...@gmail.com> wrote:
Thank you!

Is any work around available for this version?

Thanks,
Kathir


On Friday, December 16, 2016, Jake Luciani <jak...@gmail.com> wrote:
This was fixed post 3.0.4 please upgrade to latest 3.0 release

On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S <kathiresanselva...@gmail.com> wrote:
Hi,

We have a brand new Cassandra cluster (version 3.0.4) and we set up nodetool 
repair scheduled for every day (without any options for repair). As per 
documentation, incremental repair is the default in this case.
Should we do a full repair for the very first time on each node once and then 
leave it to do incremental repair afterwards?

Problem we are facing:

On a random node, the repair process throws validation failed error, pointing 
to some other node

For Eg. Node A, where the repair is run (without any option), throws below error

Validation failed in /Node B

In Node B when we check the logs, below exception is seen at the same exact 
time...

java.lang.RuntimeException: Cannot start multiple repair sessions over the same 
sstables
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1087)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at 
org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:700)
 ~[apache-cassandra-3.0.4.jar:3.0.4]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_73]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_73]

Can you please help on how this can be fixed?

Thanks,
Kathir



--
http://twitter.com/tjake




Re: Incremental repair for the first time

2017-01-03 Thread Kathiresan S
Thank you!

We are planning to upgrade to 3.0.10 for this issue.

From the NEWS txt file (
https://github.com/apache/cassandra/blob/trunk/NEWS.txt), it looks like
there is no need for sstableupgrade when we upgrade from 3.0.4 to 3.0.10
(i.e. Just installing 3.0.10 Cassandra would suffice and it will work with
the sstables created by 3.0.4 ?)

Could you please confirm (if i'm reading the upgrade instructions
correctly)?

Thanks,
Kathir

On Tue, Dec 20, 2016 at 5:28 PM, kurt Greaves  wrote:

> No workarounds, your best/only option is to upgrade (plus you get the
> benefit of loads of other bug fixes).
>
> On 16 December 2016 at 21:58, Kathiresan S 
> wrote:
>
>> Thank you!
>>
>> Is any work around available for this version?
>>
>> Thanks,
>> Kathir
>>
>>
>> On Friday, December 16, 2016, Jake Luciani  wrote:
>>
>>> This was fixed post 3.0.4 please upgrade to latest 3.0 release
>>>
>>> On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S <
>>> kathiresanselva...@gmail.com> wrote:
>>>
 Hi,

 We have a brand new Cassandra cluster (version 3.0.4) and we set up
 nodetool repair scheduled for every day (without any options for repair).
 As per documentation, incremental repair is the default in this case.
 Should we do a full repair for the very first time on each node once
 and then leave it to do incremental repair afterwards?

 *Problem we are facing:*

 On a random node, the repair process throws validation failed error,
 pointing to some other node

 For Eg. Node A, where the repair is run (without any option), throws
 below error

 *Validation failed in /Node B*

 In Node B when we check the logs, below exception is seen at the same
 exact time...

 *java.lang.RuntimeException: Cannot start multiple repair sessions over
 the same sstables*
 *at
 org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1087)
 ~[apache-cassandra-3.0.4.jar:3.0.4]*
 *at
 org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
 ~[apache-cassandra-3.0.4.jar:3.0.4]*
 *at
 org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:700)
 ~[apache-cassandra-3.0.4.jar:3.0.4]*
 *at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 ~[na:1.8.0_73]*
 *at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 ~[na:1.8.0_73]*

 Can you please help on how this can be fixed?

 Thanks,
 Kathir

>>>
>>>
>>>
>>> --
>>> http://twitter.com/tjake
>>>
>>
>


Re: Incremental repair for the first time

2016-12-20 Thread kurt Greaves
No workarounds, your best/only option is to upgrade (plus you get the
benefit of loads of other bug fixes).

On 16 December 2016 at 21:58, Kathiresan S 
wrote:

> Thank you!
>
> Is any work around available for this version?
>
> Thanks,
> Kathir
>
>
> On Friday, December 16, 2016, Jake Luciani  wrote:
>
>> This was fixed post 3.0.4 please upgrade to latest 3.0 release
>>
>> On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S <
>> kathiresanselva...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We have a brand new Cassandra cluster (version 3.0.4) and we set up
>>> nodetool repair scheduled for every day (without any options for repair).
>>> As per documentation, incremental repair is the default in this case.
>>> Should we do a full repair for the very first time on each node once and
>>> then leave it to do incremental repair afterwards?
>>>
>>> *Problem we are facing:*
>>>
>>> On a random node, the repair process throws validation failed error,
>>> pointing to some other node
>>>
>>> For Eg. Node A, where the repair is run (without any option), throws
>>> below error
>>>
>>> *Validation failed in /Node B*
>>>
>>> In Node B when we check the logs, below exception is seen at the same
>>> exact time...
>>>
>>> *java.lang.RuntimeException: Cannot start multiple repair sessions over
>>> the same sstables*
>>> *at
>>> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1087)
>>> ~[apache-cassandra-3.0.4.jar:3.0.4]*
>>> *at
>>> org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
>>> ~[apache-cassandra-3.0.4.jar:3.0.4]*
>>> *at
>>> org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:700)
>>> ~[apache-cassandra-3.0.4.jar:3.0.4]*
>>> *at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> ~[na:1.8.0_73]*
>>> *at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> ~[na:1.8.0_73]*
>>>
>>> Can you please help on how this can be fixed?
>>>
>>> Thanks,
>>> Kathir
>>>
>>
>>
>>
>> --
>> http://twitter.com/tjake
>>
>


Re: Incremental repair for the first time

2016-12-16 Thread Kathiresan S
Thank you!

Is any work around available for this version?

Thanks,
Kathir

On Friday, December 16, 2016, Jake Luciani  wrote:

> This was fixed post 3.0.4 please upgrade to latest 3.0 release
>
> On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S <
> kathiresanselva...@gmail.com
> > wrote:
>
>> Hi,
>>
>> We have a brand new Cassandra cluster (version 3.0.4) and we set up
>> nodetool repair scheduled for every day (without any options for repair).
>> As per documentation, incremental repair is the default in this case.
>> Should we do a full repair for the very first time on each node once and
>> then leave it to do incremental repair afterwards?
>>
>> *Problem we are facing:*
>>
>> On a random node, the repair process throws validation failed error,
>> pointing to some other node
>>
>> For Eg. Node A, where the repair is run (without any option), throws
>> below error
>>
>> *Validation failed in /Node B*
>>
>> In Node B when we check the logs, below exception is seen at the same
>> exact time...
>>
>> *java.lang.RuntimeException: Cannot start multiple repair sessions over
>> the same sstables*
>> *at
>> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1087)
>> ~[apache-cassandra-3.0.4.jar:3.0.4]*
>> *at
>> org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
>> ~[apache-cassandra-3.0.4.jar:3.0.4]*
>> *at
>> org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:700)
>> ~[apache-cassandra-3.0.4.jar:3.0.4]*
>> *at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> ~[na:1.8.0_73]*
>> *at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> ~[na:1.8.0_73]*
>>
>> Can you please help on how this can be fixed?
>>
>> Thanks,
>> Kathir
>>
>
>
>
> --
> http://twitter.com/tjake
>


Re: Incremental repair for the first time

2016-12-16 Thread Jake Luciani
This was fixed post-3.0.4; please upgrade to the latest 3.0 release.

On Fri, Dec 16, 2016 at 4:49 PM, Kathiresan S 
wrote:

> Hi,
>
> We have a brand new Cassandra cluster (version 3.0.4) and we set up
> nodetool repair scheduled for every day (without any options for repair).
> As per documentation, incremental repair is the default in this case.
> Should we do a full repair for the very first time on each node once and
> then leave it to do incremental repair afterwards?
>
> *Problem we are facing:*
>
> On a random node, the repair process throws validation failed error,
> pointing to some other node
>
> For Eg. Node A, where the repair is run (without any option), throws below
> error
>
> *Validation failed in /Node B*
>
> In Node B when we check the logs, below exception is seen at the same
> exact time...
>
> *java.lang.RuntimeException: Cannot start multiple repair sessions over
> the same sstables*
> *at
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1087)
> ~[apache-cassandra-3.0.4.jar:3.0.4]*
> *at
> org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
> ~[apache-cassandra-3.0.4.jar:3.0.4]*
> *at
> org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:700)
> ~[apache-cassandra-3.0.4.jar:3.0.4]*
> *at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ~[na:1.8.0_73]*
> *at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> ~[na:1.8.0_73]*
>
> Can you please help on how this can be fixed?
>
> Thanks,
> Kathir
>



-- 
http://twitter.com/tjake


Re: Incremental repair from the get go

2015-11-02 Thread Robert Coli
On Mon, Nov 2, 2015 at 3:02 PM, Maciek Sakrejda  wrote:

> Following up on this older question: as per the docs, one *should* still
> do full repair periodically (the docs say weekly), right? And run
> incremental more often to fill in?
>

Something that amounts to full repair once every gc_grace_seconds, unless
you never do anything that results in a tombstone. In that (very rare)
case, one should probably still occasionally (2x a year?) run repair to
cover bitrot and similar (very rare) cases.

"Something that amounts to full repair" is either a full repair or an
incremental repair that covers 100% of the new data since gc_grace_seconds.

=Rob
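A sketch of the rule of thumb above, using Cassandra's default gc_grace_seconds of 10 days (864000 s); the schedule check itself is an illustration, not a tool from the thread:

```shell
# The interval between runs that "amount to a full repair" must stay within
# gc_grace_seconds, or tombstones may be purged before deletes propagate.
gc_grace_seconds=864000   # Cassandra default: 10 days

schedule_is_safe() {
  # $1 = days between full-repair-equivalent runs
  [ $(( $1 * 24 * 3600 )) -le "$gc_grace_seconds" ] && echo safe || echo unsafe
}

echo "weekly:      $(schedule_is_safe 7)"    # safe   (604800  <= 864000)
echo "fortnightly: $(schedule_is_safe 14)"   # unsafe (1209600 >  864000)
```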


Re: Incremental repair from the get go

2015-11-02 Thread Maciek Sakrejda
Following up on this older question: as per the docs, one *should* still do
full repair periodically (the docs say weekly), right? And run incremental
more often to fill in?


Re: Incremental repair from the get go

2015-09-04 Thread Marcus Eriksson
Starting up fresh it is totally OK to just start using incremental repairs

On Thu, Sep 3, 2015 at 10:25 PM, Jean-Francois Gosselin <
jfgosse...@gmail.com> wrote:

>
> On fresh install of Cassandra what's the best approach to start using
> incremental repair from the get go (I'm using LCS) ?
>
> Run nodetool repair -inc after inserting a few rows , or we still need to
> follow the migration procedure with sstablerepairedset ?
>
> From the documentation "... If you use the leveled compaction strategy
> and perform an incremental repair for the first time, Cassandra performs
> size-tiering on all SSTables because the repair/unrepaired status is
> unknown. This operation can take a long time. To save time, migrate to
> incremental repair one node at a time. ..."
>
> With almost no data, size-tiering should be quick? Basically, is there a
> shortcut to avoid the migration via sstablerepairedset on a fresh install?
>
> Thanks
>
> JF
>