Re: Switching to Incremental Repair

2024-02-15 Thread Chris Lohfink
into IR sstables with more caveats. Probably worth a jira to add a faster solution On Thu, Feb 15, 2024 at 12:50 PM Kristijonas Zalys wrote: > Hi folks, > > One last question regarding incremental repair. > > What would be a safe approach to temporarily stop running incre

Re: Switching to Incremental Repair

2024-02-15 Thread Bowen Song via user
to running out of disk space, and you should address that issue first before even considering upgrading Cassandra. On 15/02/2024 18:49, Kristijonas Zalys wrote: Hi folks, One last question regarding incremental repair. What would be a safe approach to temporarily stop running incremental repair

Re: Switching to Incremental Repair

2024-02-15 Thread Kristijonas Zalys
Hi folks, One last question regarding incremental repair. What would be a safe approach to temporarily stop running incremental repair on a cluster (e.g.: during a Cassandra major version upgrade)? My understanding is that if we simply stop running incremental repair, the cluster's nodes can

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
The over-streaming is only problematic for the repaired SSTables, but it can be triggered by inconsistencies within the unrepaired SSTables during an incremental repair session. This is because although an incremental repair will only compare the unrepaired SSTables, it will stream both

Re: Switching to Incremental Repair

2024-02-07 Thread Sebastian Marsching
Thank you very much for your explanation. Streaming happens on the token range level, not the SSTable level, right? So, when running an incremental repair before the full repair, the problem that “some unrepaired SSTables are being marked as repaired on one node but not on another” should

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
assumed that a full repair on a cluster that is also using incremental repair pretty much works like on a cluster that is not using incremental repair at all, the only difference being that the set of repaired and unrepaired data is repaired separately, so the Merkle trees that are calculated

Re: Switching to Incremental Repair

2024-02-07 Thread Sebastian Marsching
this for me. So far, I assumed that a full repair on a cluster that is also using incremental repair pretty much works like on a cluster that is not using incremental repair at all, the only difference being that the set of repaired and unrepaired data is repaired separately, so the Merkle tr

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
Caution, using the method you described, the amount of data streamed at the end with the full repair is not the amount of data written between stopping the first node and the last node, but depends on the table size, the number of partitions written, their distribution in the ring and the

Re: Switching to Incremental Repair

2024-02-07 Thread Sebastian Marsching
> That's a feature we need to implement in Reaper. I think disallowing the > start of the new incremental repair would be easier to manage than pausing > the full repair that's already running. It's also what I think I'd expect as > a user. > > I'll create an issue to trac

Re: Switching to Incremental Repair

2024-02-07 Thread Sebastian Marsching
> Full repair running for an entire week sounds excessively long. Even if > you've got 1 TB of data per node, 1 week means the repair speed is less than > 2 MB/s, which is very slow. Perhaps you should focus on finding the bottleneck > of the full repair speed and work on that instead. We store

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
Just one more thing. Make sure you run 'nodetool repair -full' instead of just 'nodetool repair'. That's because the command's default changed in Cassandra 2.x: it was full repair before the change, and is now incremental repair. On 07/02/2024 10:28, Bowen Song
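A quick sketch of the distinction (the keyspace name is a placeholder; on 2.2+ the bare command runs an incremental repair):

    nodetool repair -full my_keyspace   # explicitly request a full repair
    nodetool repair my_keyspace         # defaults to incremental repair on 2.2+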

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
ration? Thanks, Kristijonas On Sun, Feb 4, 2024 at 12:18 AM Alexander DEJANOVSKI wrote: Hi Sebastian, That's a feature we need to implement in Reaper. I think disallowing the start of the new incremental repair would be easier to manage than pausing the full repair that's a

Re: Switching to Incremental Repair

2024-02-06 Thread Kristijonas Zalys
Reaper. I think disallowing the > start of the new incremental repair would be easier to manage than pausing > the full repair that's already running. It's also what I think I'd expect > as a user. > > I'll create an issue to track this. > > On Sat, Feb 3, 2024, 16:19, Sebastian Marschin

Re: Switching to Incremental Repair

2024-02-04 Thread Alexander DEJANOVSKI
Hi Sebastian, That's a feature we need to implement in Reaper. I think disallowing the start of the new incremental repair would be easier to manage than pausing the full repair that's already running. It's also what I think I'd expect as a user. I'll create an issue to track this. Le sam. 3

Re: Switching to Incremental Repair

2024-02-03 Thread Bowen Song via user
it till Monday morning if it happens on Friday night. Does anyone know how such a schedule can be created in Cassandra Reaper? I recently learned the hard way that running both a full and an incremental repair for the same keyspace and table in parallel is not a good idea (it caused a very

Re: Switching to Incremental Repair

2024-02-03 Thread Sebastian Marsching
nday morning if it happens on Friday night. > Does anyone know how such a schedule can be created in Cassandra Reaper? I recently learned the hard way that running both a full and an incremental repair for the same keyspace and table in parallel is not a good idea (it caused a very unpleasant ove

Re: Switching to Incremental Repair

2024-02-03 Thread Bowen Song via user
Hi Kristijonas, It is not possible to run two repairs, regardless of whether they are incremental or full, for the same token range and on the same table concurrently. You have two options: 1. create schedules that don't overlap, e.g. run incremental repair daily except on the 1st of each
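A hypothetical cron sketch of option 1 (keyspace name and start time are assumptions; a scheduler such as Reaper is more robust than bare cron for long-running repairs):

    # incremental repair daily at 02:00, except on the 1st of the month
    0 2 2-31 * *  nodetool repair my_keyspace
    # full repair on the 1st of each month instead
    0 2 1 * *     nodetool repair -full my_keyspace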

Re: Switching to Incremental Repair

2024-02-02 Thread manish khandelwal
> Thanks, > Kristijonas > > On Fri, Feb 2, 2024 at 3:36 PM Bowen Song via user < > user@cassandra.apache.org> wrote: > >> Hi Kristijonas, >> >> To answer your questions: >> >> 1. It's still necessary to run full repair on a cluster on which >

Re: Switching to Incremental Repair

2024-02-02 Thread Kristijonas Zalys
at 3:36 PM Bowen Song via user < user@cassandra.apache.org> wrote: > Hi Kristijonas, > > To answer your questions: > > 1. It's still necessary to run full repair on a cluster on which > incremental repair is run periodically. The frequency of full repair is > more of an

Re: Switching to Incremental Repair

2024-02-02 Thread Bowen Song via user
Hi Kristijonas, To answer your questions: 1. It's still necessary to run full repair on a cluster on which incremental repair is run periodically. The frequency of full repair is more of an art than science. Generally speaking, the less reliable the storage media, the more frequently full

Switching to Incremental Repair

2024-02-02 Thread Kristijonas Zalys
Hi folks, I am working on switching from full to incremental repair in Cassandra v4.0.6 (soon to be v4.1.3) and I have a few questions. 1. Is it necessary to run regular full repair on a cluster if I already run incremental repair? If yes, what frequency would you recommend for full

Re: Migrating to incremental repair in C* 4.x

2023-11-27 Thread Bowen Song via user
by disabling auto compaction. It sounds very much out of date, or it's optimized for fixing one node in a cluster somehow. It didn't make sense in the 4.0 era. Instead I'd leave compaction running and slowly run incremental repair across parts of the token range, slowing down as pending compactions

Re: Migrating to incremental repair in C* 4.x

2023-11-27 Thread Jeff Jirsa
era. Instead I’d leave compaction running and slowly run incremental repair across parts of the token range, slowing down as pending compactions increase. I’d choose token ranges such that you’d repair 5-10% of the data on each node at a time. > On Nov 23, 2023, at 11:31 PM, Sebast
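A hedged sketch of that subrange approach (token values and keyspace name are placeholders; -st/-et bound the slice of the ring being repaired):

    # repair one slice of the token ring, then pause while pending compactions settle
    nodetool repair -st -9223372036854775808 -et -7378697629483820647 my_keyspace
    # continue with the next contiguous slice once the node has caught up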

Re: Migrating to incremental repair in C* 4.x

2023-11-27 Thread Bowen Song via user
Hi Sebastian, It's better to walk down the path on which others have walked before you and had great success, than a path that nobody has ever walked. For the former, you know it's relatively safe and it works. The same can hardly be said for the latter. You said it takes a week to run the

Migrating to incremental repair in C* 4.x

2023-11-23 Thread Sebastian Marsching
Hi, we are currently in the process of migrating from C* 3.11 to C* 4.1 and we want to start using incremental repairs after the upgrade has been completed. It seems like all the really bad bugs that made using incremental repairs dangerous in C* 3.x have been fixed in 4.x, and for our

RE: Configuration parameter to reject incremental repair?

2018-09-09 Thread Steinmaurer, Thomas
incremental repair? No flag currently exists. Probably a good idea considering the serious issues with incremental repairs since forever, and the change of defaults since 3.0. On 7 August 2018 at 16:44, Steinmaurer, Thomas <thomas.steinmau...@dynatrace.com> wrote: Hello, we are r

Re: Configuration parameter to reject incremental repair?

2018-08-20 Thread kurt greaves
Yeah I meant 2.2. Keep telling myself it was 3.0 for some reason. On 20 August 2018 at 19:29, Oleksandr Shulgin wrote: > On Mon, Aug 13, 2018 at 1:31 PM kurt greaves wrote: > >> No flag currently exists. Probably a good idea considering the serious >> issues with incremental repairs since

Re: Incremental repair

2018-08-20 Thread Alexander Dejanovski
andra 3.11.2, while > enabling repair noticed that incremental repair is true in logfile. > > > (parallelism: parallel, primary range: true, incremental: true, job > threads: 1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: > 20, pull repair: false) > > i was running

Incremental repair

2018-08-20 Thread Prachi Rath
Hi Community, I am currently creating a new cluster with Cassandra 3.11.2, and while enabling repair I noticed that incremental repair is true in the logfile. (parallelism: parallel, primary range: true, incremental: true, job threads: 1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 20

Re: Configuration parameter to reject incremental repair?

2018-08-20 Thread Oleksandr Shulgin
On Mon, Aug 13, 2018 at 1:31 PM kurt greaves wrote: > No flag currently exists. Probably a good idea considering the serious > issues with incremental repairs since forever, and the change of defaults > since 3.0. > Hi Kurt, Did you mean since 2.2 (when incremental became the default one)? Or

Re: Configuration parameter to reject incremental repair?

2018-08-13 Thread kurt greaves
No flag currently exists. Probably a good idea considering the serious issues with incremental repairs since forever, and the change of defaults since 3.0. On 7 August 2018 at 16:44, Steinmaurer, Thomas < thomas.steinmau...@dynatrace.com> wrote: > Hello, > > > > we are running Cassandra in AWS

Configuration parameter to reject incremental repair?

2018-08-07 Thread Steinmaurer, Thomas
Hello, we are running Cassandra in AWS and On-Premise at customer sites, currently 2.1 in production with 3.11 in loadtest. In a migration path from 2.1 to 3.11.x, I'm afraid that at some point in time we end up with incremental repairs being enabled / run for the first time unintentionally, because:

Re:Re: Why Cassandra need full repair after incremental repair

2017-11-05 Thread dayu
Thanks for your reply, Blake. So what's your advice: as you say, incremental repair has some flaws, so should I use it mixed with full repair, or just run full repair only? Dayu At 2017-11-02 20:42:14, "Blake Eggleston" <beggles...@apple.com> wrote: Because in theory, co

Re: Why Cassandra need full repair after incremental repair

2017-11-02 Thread Blake Eggleston
Because in theory, corruption of your repaired dataset is possible, which incremental repair won’t fix. In practice pre-4.0 incremental repair has some flaws that can bring deleted data back to life in some cases, which this would address. You should also evaluate whether pre-4.0 incremental

Re:Re: Why Cassandra need full repair after incremental repair

2017-11-02 Thread dayu
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesWhen.html So you mean I was misled by this statement. Full repair is only needed after node failure + replacement, or when adding a datacenter, right? At 2017-11-02 15:54:49, "kurt greaves"

Re: Why Cassandra need full repair after incremental repair

2017-11-02 Thread kurt greaves
Where are you seeing this? If your incremental repairs work properly, full repair is only needed in certain situations, like after node failure + replacement, or adding a datacenter.​

Why Cassandra need full repair after incremental repair

2017-11-02 Thread dayu
Hello everyone, I have used Cassandra for a while; the version is 3.0.9. I have a question: why does Cassandra still need full repair after using incremental repair? The full repair takes too long. And I have searched a lot, but didn't find any suitable answer. Can anyone answer my

Re: Need help with incremental repair

2017-10-30 Thread Blake Eggleston
sstables as unrepaired? That's right, but he mentioned that he is using reaper which uses subrange repair if I'm not mistaken, which doesn't do anti-compaction. So in that case he should probably mark data as unrepaired when no longer using incremental repair. 2017-10-31 3:52 GMT+11:00 Blake

Re: Need help with incremental repair

2017-10-30 Thread Paulo Motta
en, which doesn't do anti-compaction. So in that case he should probably mark data as unrepaired when no longer using incremental repair. 2017-10-31 3:52 GMT+11:00 Blake Eggleston <beggles...@apple.com>: >> Once you run incremental repair, your data is permanently marked as >> repaire

Re: Need help with incremental repair

2017-10-30 Thread Blake Eggleston
> Once you run incremental repair, your data is permanently marked as repaired This is also the case for full repairs, if I'm not mistaken. I'll admit I'm not as familiar with the quirks of repair in 2.2, but prior to 4.0/CASSANDRA-9143, any global repair ends with an anticompaction that ma

Re: Need help with incremental repair

2017-10-30 Thread kurt greaves
Yes, mark them as unrepaired first. You can get sstablerepairedset from source if you need to (just be sure to get the correct branch/tag). It's just a shell script, so as long as you have C* installed in a default/canonical location it should work.
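For reference, a sketch of the invocation being discussed (paths are illustrative, and the node must be stopped while the tool runs):

    # mark the given SSTables as unrepaired on a stopped node
    sstablerepairedset --really-set --is-unrepaired /var/lib/cassandra/data/ks/tbl-*/*-Data.db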

Re: Need help with incremental repair

2017-10-29 Thread Aiman Parvaiz
n get back to my non-incremental repair regimen. I assume that I should mark the SSTables as unrepaired first and then run a full repair? Also, although I am installing Cassandra from package dsc22 on my CentOS 7 I couldn't find sstable tools installed, need to figure th

Re: Need help with incremental repair

2017-10-29 Thread Paulo Motta
> Assuming the situation is just "we accidentally ran incremental repair", you > shouldn't have to do anything. It's not going to hurt anything. Once you run incremental repair, your data is permanently marked as repaired, and is no longer compacted with new non-incremental
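That repaired marking is visible on disk; a quick check with sstablemetadata (path illustrative), where a non-zero value means the SSTable sits in the repaired set:

    sstablemetadata /var/lib/cassandra/data/ks/tbl-*/ma-1-big-Data.db | grep "Repaired at"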

Re: Need help with incremental repair

2017-10-28 Thread Blake Eggleston
Hey Aiman, Assuming the situation is just "we accidentally ran incremental repair", you shouldn't have to do anything. It's not going to hurt anything. Pre-4.0 incremental repair has some issues that can cause a lot of extra streaming, and inconsistencies in some edge cases, b

Need help with incremental repair

2017-10-28 Thread Aiman Parvaiz
Hi everyone, We seek your help with an issue we are facing in our 2.2.8 version. We have a 24-node cluster spread over 3 DCs. Initially, when the cluster was in a single DC, we were using The Last Pickle Reaper 0.5 to repair it with incremental repair set to false. We added 2 more DCs. Now

Re: Incremental Repair

2017-03-13 Thread Paulo Motta
> there are some nasty edge cases when you mix incremental repair and full repair ( https://issues.apache.org/jira/browse/CASSANDRA-13153 ) mixing incremental and full repairs will just make that more likely to happen, but although unlikely it's still possible for a similar condition to hap

Re: Incremental Repair

2017-03-12 Thread Jeff Jirsa
On 2017-03-12 10:44 (-0700), Anuj Wadehra <anujw_2...@yahoo.co.in> wrote: > Hi, > > Our setup is as follows: > 2 DCs with N nodes, RF=DC1:3,DC2:3, Hinted Handoff=3 hours, Incremental > Repair scheduled once on every node (ALL DCs) within the gc grace period. >

Incremental Repair

2017-03-12 Thread Anuj Wadehra
Hi, Our setup is as follows: 2 DCs with N nodes, RF=DC1:3,DC2:3, Hinted Handoff=3 hours, Incremental Repair scheduled once on every node (ALL DCs) within the gc grace period. I have the following queries regarding incremental repairs: 1. When a node is down for X hours (where X > hinted hand

Fwd: Node failure due to Incremental repair

2017-02-28 Thread Karthick V
Hi, Recently I enabled incremental repair in one of my test cluster setups, which consists of 8 nodes (DC1 - 4, DC2 - 4) with C* version 2.1.13. Currently, I am facing a node failure scenario in this cluster, with the following exception during the incremental repair process: exception occurred

Re: Incremental Repair Migration

2017-01-10 Thread Bhuvan Rawal
case, you may have to increase the segment timeout when you run it for the first time, as it repairs the whole set of sstables. Regards, Bhuvan On Jan 10, 2017 8:44 PM, "Jonathan Haddad" <j...@jonhaddad.com> wrote: Reaper supports incremental repair. On Mon, Jan 9, 2017 at 11:27 PM Ami

Re: Incremental Repair Migration

2017-01-10 Thread Jonathan Haddad
Reaper supports incremental repair. On Mon, Jan 9, 2017 at 11:27 PM Amit Singh F <amit.f.si...@ericsson.com> wrote: > Hi Jonathan, > > > > Really appreciate your response. > > > > It will not be possible for us to move to Reaper as of now, we are in > proc

RE: Incremental Repair Migration

2017-01-09 Thread Amit Singh F
Hi Jonathan, Really appreciate your response. It will not be possible for us to move to Reaper as of now; we are in the process of migrating to incremental repair. Also, running repair constantly would be a costly affair in our case. For migrating to incremental repair with a large dataset

Re: Incremental Repair Migration

2017-01-09 Thread Jonathan Haddad
thinking of migrating from primary range repair (-pr) to > incremental repair. > > Environment: > > • Cassandra 2.1.16 > • 25-node cluster > • RF 3 > • Data size up to 450 GB per node > > We found th

Incremental Repair Migration

2017-01-09 Thread Amit Singh F
Hi All, We are thinking of migrating from primary range repair (-pr) to incremental repair. Environment: * Cassandra 2.1.16 * 25-node cluster * RF 3 * Data size up to 450 GB per node. We found that running full repair takes around 8 hrs per node, which means

Re: Incremental repair for the first time

2017-01-09 Thread Kathiresan S
gmail.com>] > *Sent:* Wednesday, January 04, 2017 12:22 AM > *To:* user@cassandra.apache.org > *Subject:* Re: Incremental repair for the first time > > > > Thank you! > > > > We are planning to upgrade to 3.0.10 for this issue. > > > > From the NEW

Re: Incremental repair for the first time

2017-01-09 Thread Oskar Kjellin
Nodetool-upgradesstables-FAQ > > From: Kathiresan S [mailto:kathiresanselva...@gmail.com] > Sent: Wednesday, January 04, 2017 12:22 AM > To: user@cassandra.apache.org > Subject: Re: Incremental repair for the first time > > Thank you! > > We are planning to upgrade to 3.0.10 f

RE: Incremental repair for the first time

2017-01-08 Thread Amit Singh F
/hc/en-us/articles/208040036-Nodetool-upgradesstables-FAQ From: Kathiresan S [mailto:kathiresanselva...@gmail.com] Sent: Wednesday, January 04, 2017 12:22 AM To: user@cassandra.apache.org Subject: Re: Incremental repair for the first time Thank you! We are planning to upgrade to 3.0.10

Re: Incremental repair for the first time

2017-01-03 Thread Kathiresan S
rote: >>> >>>> Hi, >>>> >>>> We have a brand new Cassandra cluster (version 3.0.4) and we set up >>>> nodetool repair scheduled for every day (without any options for repair). >>>> As per documentation, incremental repair is the default

Re: Incremental repair for the first time

2016-12-20 Thread kurt Greaves
>>> Hi, >>> >>> We have a brand new Cassandra cluster (version 3.0.4) and we set up >>> nodetool repair scheduled for every day (without any options for repair). >>> As per documentation, incremental repair is the default in this case. >>

Re: Incremental repair for the first time

2016-12-16 Thread Kathiresan S
sanselva...@gmail.com> wrote: > >> Hi, >> >> We have a brand new Cassandra cluster (version 3.0.4) and we set up >> nodetool repair scheduled for every day (without any options for repair). >> As

Re: Incremental repair for the first time

2016-12-16 Thread Jake Luciani
ithout any options for repair). > As per documentation, incremental repair is the default in this case. > Should we do a full repair for the very first time on each node once and > then leave it to do incremental repair afterwards? > > *Problem we are facing:* > > On a random node, the

Incremental repair for the first time

2016-12-16 Thread Kathiresan S
Hi, We have a brand new Cassandra cluster (version 3.0.4) and we set up nodetool repair scheduled for every day (without any options for repair). As per documentation, incremental repair is the default in this case. Should we do a full repair for the very first time on each node once

full repair or incremental repair after scrub?

2016-11-30 Thread Kai Wang
Hi, do I have to do a full repair after scrub? Is it enough to just do incremental repair? BTW I do nightly incremental repair.

Re: problem starting incremental repair using TheLastPicke Reaper

2016-10-19 Thread Alexander Dejanovski
d and it worked, but my question is: if the passed value of the > incremental repair flag is different from the existing value, then it > should allow creating a new repair_unit instead of getting the repair_unit based > on the cluster name/keyspace/column combination. > > and also if i d

Re: problem starting incremental repair using TheLastPicke Reaper

2016-10-19 Thread Abhishek Aggarwal
Hi Alex, that I already did and it worked, but my question is: if the passed value of the incremental repair flag is different from the existing value, then it should allow creating a new repair_unit instead of getting the repair_unit based on the cluster name/keyspace/column combination. And also if I

Re: problem starting incremental repair using TheLastPicke Reaper

2016-10-19 Thread Alexander Dejanovski
Hi Abhishek, This shows you have two repair units for the same keyspace/table with different incremental repair settings. Can you delete your prior repair run (the one with incremental repair set to false) and then create the new one with incremental repair set to true? Let me know how

problem starting incremental repair using TheLastPicke Reaper

2016-10-19 Thread Abhishek Aggarwal
Is there a way to start incremental repair using Reaper? We completed a full repair successfully, and after that I tried to run the incremental repair but got the below error: A repair run already exist for the same cluster/keyspace/table but with a different incremental repair

Re: full and incremental repair consistency

2016-08-19 Thread Jérôme Mainaud
> - Either way, with or without the flag will actually be equivalent when > none of the sstables are marked as repaired (this will change after the > first inc repair). > So, if I understand correctly, the repair -full -local command resets the flag of previously repaired sstables. So even if I had

Re: full and incremental repair consistency

2016-08-19 Thread Paulo Motta
gmail.com>: > >> Running repair with -local flag does not mark sstables as repaired, since >> you can't guarantee data in other DCs are repaired. In order to support >> incremental repair, you need to run a full repair without the -local flag, >> and then in the next time you ru

Re: full and incremental repair consistency

2016-08-19 Thread Jérôme Mainaud
are repaired. In order to support > incremental repair, you need to run a full repair without the -local flag, > and then in the next time you run repair, previously repaired sstables are > skipped. > > 2016-08-19 9:55 GMT-03:00 Jérôme Mainaud <jer...@mainaud.com>: > >> Hello

Re: full and incremental repair consistency

2016-08-19 Thread Paulo Motta
Running repair with the -local flag does not mark sstables as repaired, since you can't guarantee data in other DCs are repaired. In order to support incremental repair, you need to run a full repair without the -local flag, and then the next time you run repair, previously repaired sstables
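In other words, the one-time marking run would drop the -local flag, something like (keyspace name and job count assumed):

    # full repair across all DCs; afterwards sstables can be marked as repaired
    nodetool repair --full -j 4 my_keyspace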

full and incremental repair consistency

2016-08-19 Thread Jérôme Mainaud
Hello, I have a 2.2.6 Cassandra cluster with two DCs of 15 nodes each. A continuous incremental repair process deals with anti-entropy concerns. Due to some untraced operation by someone, we chose to do a full repair on one DC with the command: nodetool repair --full -local -j 4 Daily

confusion about migrating to incremental repair

2016-01-06 Thread Kai Wang
Hi, I am running a cluster with 2.2.4. I have some tables on LCS and plan to use incremental repair. I read the post at http://www.datastax.com/dev/blog/anticompaction-in-cassandra-2-1 and am a little confused, especially: "This means that *once you do an incremental repair you will

Re: Transitioning to incremental repair

2015-12-02 Thread Marcus Eriksson
Bryan, this should be improved with https://issues.apache.org/jira/browse/CASSANDRA-10768 - could you try it out? On Tue, Dec 1, 2015 at 10:58 PM, Bryan Cheng wrote: > Sorry if I misunderstood, but are you asking about the LCS case? > > Based on our experience, I would

Re: Transitioning to incremental repair

2015-12-02 Thread Bryan Cheng
Ah Marcus, that looks very promising- unfortunately we have already switched back to full repairs and our test cluster has been re-purposed for other tasks atm. I will be sure to apply the patch/try a fixed version of Cassandra if we attempt to migrate to incremental repair again.

Re: Transitioning to incremental repair

2015-12-01 Thread Bryan Cheng
Sorry if I misunderstood, but are you asking about the LCS case? Based on our experience, I would absolutely recommend you continue with the migration procedure. Even if the compaction strategy is the same, the process of anticompaction is incredibly painful. We observed our test cluster running

Re: Transitioning to incremental repair

2015-12-01 Thread Marcus Eriksson
ke to transition them to > incremental repair. According to the documentation, this is a very > manual (and likely time-consuming) process: > > > http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesMigration.html > > Our understanding is that this proc

Transitioning to incremental repair

2015-12-01 Thread Sam Klock
Hi folks, A question like this was recently asked, but I don't think anyone ever supplied an unambiguous answer. We have a set of clusters currently using sequential repair, and we'd like to transition them to incremental repair. According to the documentation, this is a very manual

Re: Incremental repair from the get go

2015-11-02 Thread Robert Coli
" is either a full repair or an incremental repair that covers 100% of the new data since gc_grace_seconds. =Rob

Re: Incremental repair from the get go

2015-11-02 Thread Maciek Sakrejda
Following up on this older question: as per the docs, one *should* still do full repair periodically (the docs say weekly), right? And run incremental more often to fill in?

Re: Incremental repair from the get go

2015-09-04 Thread Marcus Eriksson
Starting up fresh it is totally OK to just start using incremental repairs On Thu, Sep 3, 2015 at 10:25 PM, Jean-Francois Gosselin < jfgosse...@gmail.com> wrote: > > On fresh install of Cassandra what's the best approach to start using > incremental repair from the get go

Incremental repair from the get go

2015-09-03 Thread Jean-Francois Gosselin
On a fresh install of Cassandra, what's the best approach to start using incremental repair from the get go (I'm using LCS)? Run nodetool repair -inc after inserting a few rows, or do we still need to follow the migration procedure with sstablerepairedset? From the documentation "... If
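Per the reply above, a fresh cluster can start with incremental repair directly, with no sstablerepairedset migration; a minimal sketch for a 2.1-era node where the flag is passed explicitly (keyspace name assumed):

    nodetool repair -inc my_keyspace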

Mixing incremental repair with sequential

2015-06-26 Thread Carl Hu
Dear colleagues, We are using incremental repair and have noticed that every few repairs, the cluster experiences pauses. We run the repair with the following command: nodetool repair -par -inc I have tried to run it not in parallel, but get the following error: It is not possible to mix

Re: Mixing incremental repair with sequential

2015-06-26 Thread Alain RODRIGUEZ
up and running and an easy way to detect errors in your logs. C*heers, Alain 2015-06-26 16:26 GMT+02:00 Carl Hu m...@carlhu.com: Dear colleagues, We are using incremental repair and have noticed that every few repairs, the cluster experiences pauses. We run the repair with the following

Re: Mixing incremental repair with sequential

2015-06-26 Thread Carl Hu
. You need to troubleshoot this and give us more information. I hope you have a monitoring tool up and running and an easy way to detect errors in your logs. C*heers, Alain 2015-06-26 16:26 GMT+02:00 Carl Hu m...@carlhu.com: Dear colleagues, We are using incremental repair and have

Re: Mixing incremental repair with sequential

2015-06-26 Thread Alain RODRIGUEZ
us more information. I hope you have a monitoring tool up and running and an easy way to detect errors in your logs. C*heers, Alain 2015-06-26 16:26 GMT+02:00 Carl Hu m...@carlhu.com: Dear colleagues, We are using incremental repair and have noticed that every few repairs, the cluster

Re: Mixing incremental repair with sequential

2015-06-26 Thread Carl Hu
in your logs. C*heers, Alain 2015-06-26 16:26 GMT+02:00 Carl Hu m...@carlhu.com: Dear colleagues, We are using incremental repair and have noticed that every few repairs, the cluster experiences pauses. We run the repair with the following command: nodetool repair -par -inc I have

Re: Did not get positive replies from all endpoints error on incremental repair

2014-10-31 Thread Juho Mäkinen
for adding logging info, but I'll most probably end up adding the logging myself and I'll start digging into the actual root cause. I also ran one nodetool repair -par (i.e. without incremental repair) and it seems that the repair started. Guess I need to go over the sources if there's

Re: Did not get positive replies from all endpoints error on incremental repair

2014-10-31 Thread Robert Coli
On Fri, Oct 31, 2014 at 8:55 AM, Juho Mäkinen juho.maki...@gmail.com wrote: I can't yet call this conclusive, but it seems that I can't run incremental repairs on the current 2.1.1 and I'm still wondering if anybody else is experiencing the same problem. You have repro steps; if I were you I

Did not get positive replies from all endpoints error on incremental repair

2014-10-30 Thread Juho Mäkinen
I'm having problems running nodetool repair -inc -par -pr on my 2.1.1 cluster due to a "Did not get positive replies from all endpoints" error. Here's an example output: root@db08-3:~# nodetool repair -par -inc -pr [2014-10-30 10:33:02,396] Nothing to repair for keyspace 'system' [2014-10-30

Re: Did not get positive replies from all endpoints error on incremental repair

2014-10-30 Thread Rahul Neelakantan
It appears to come from the ActiveRepairService.prepareForRepair portion of the Code. Are you sure all nodes are reachable from the node you are initiating repair on, at the same time? Any Node up/down/died messages? Rahul Neelakantan On Oct 30, 2014, at 6:37 AM, Juho Mäkinen

Re: Did not get positive replies from all endpoints error on incremental repair

2014-10-30 Thread Juho Mäkinen
No, the cluster seems to be performing just fine. It seems that the prepareForRepair callback() could be easily modified to print which node(s) are unable to respond, so that the debugging effort could be focused better. This of course doesn't help this case as it's not trivial to add the log

Question about incremental repair

2014-10-01 Thread John Sumsion
If you only run incremental repairs, does that mean that bitrot will go undetected for already repaired sstables? If so, is there any other process that will detect bitrot for all the repaired sstables other than full repair (or an unfortunate user)? John...

Re: Question about incremental repair

2014-10-01 Thread Tyler Hobbs
Compressed SSTables store a checksum for every compressed block, which is checked each time the block is decompressed. I believe there's a ticket out there to add something similar for non-compressed SSTables. We also store the sha1 hash of each SSTable in its own file on disk. On Wed, Oct 1, 2014
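A hedged illustration of that on-disk digest (filenames follow 2.x-era SSTable naming and are assumptions; the digest file stores the hash to compare against):

    sha1sum ks-tbl-ka-1-Data.db      # recompute the data file's SHA-1
    cat ks-tbl-ka-1-Digest.sha1      # stored hash, written at flush/compaction time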

Re: Question about incremental repair

2014-10-01 Thread Robert Coli
On Wed, Oct 1, 2014 at 3:11 PM, Tyler Hobbs ty...@datastax.com wrote: Compressed SSTables store a checksum for every compressed block, which is checked each time the block is decompressed. I believe there's a ticket out there to add something similar for non-compressed SSTables. We also

Detecting bitrot with incremental repair

2014-09-11 Thread John Sumsion
jbellis talked about incremental repair, which is great, but as I understood, repair was also somewhat responsible for detecting and repairing bitrot on long-lived sstables. If repair doesn't do it, what will? Thanks, John...

Re: Detecting bitrot with incremental repair

2014-09-11 Thread Robert Coli
On Thu, Sep 11, 2014 at 9:44 AM, John Sumsion sumsio...@familysearch.org wrote: jbellis talked about incremental repair, which is great, but as I understood, repair was also somewhat responsible for detecting and repairing bitrot on long-lived sstables. SSTable checksums, and the checksums