Hi everyone,
Could you please answer the following questions regarding materialized views, or
point me in the right direction in the documentation? We are currently using
Cassandra v4.0.11.
1. Are incremental repairs supported for the base table of Materialized
views?
2. Are incremental
> We're on 4.0.1 and switched to incremental repairs a couple of months ago.
> They work fine about 95% of the time, but once in a while a session will
> get stuck and will have to be cancelled (with `nodetool repair_admin cancel
> -s `). Typically the session will be in REPAIRING but nothing w
Could you file a jira with the details?
Dinesh
> On Nov 26, 2021, at 2:40 PM, James Brown wrote:
We're on 4.0.1 and switched to incremental repairs a couple of months ago.
They work fine about 95% of the time, but once in a while a session will
get stuck and will have to be cancelled (with `nodetool repair_admin cancel
-s `). Typically the session will be in REPAIRING but nothing
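A minimal sketch of inspecting and cancelling such a stuck session with the 4.0+ `repair_admin` subcommands mentioned above (the session id below is made up):

```shell
# List incremental repair sessions and their state (e.g. REPAIRING, FINALIZED)
nodetool repair_admin list
# Cancel the stuck session by id (the id shown here is hypothetical)
nodetool repair_admin cancel -s 5e7bc090-...-0000
```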
action on the
first run, I'd recommend that you:
- mark all sstables as repaired
- run a full repair
- schedule very regular (daily) incremental repairs
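A hedged sketch of those steps in 4.x syntax; the keyspace name is an assumption, and marking everything repaired can also be done offline with sstablerepairedset:

```shell
# One-off full repair so all replicas are consistent before incremental takes over
nodetool repair --full my_keyspace
# From then on, run the (default) incremental repair daily, e.g. from cron
nodetool repair my_keyspace
```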
Bye,
Alex
On Thu, Sep 16, 2021 at 23:03, C. Scott Andreas wrote:
> Hi James, thanks for reaching out.
>
> A large number of fi
There's been a lot of back and forth on the wider Internet and in this
mailing list about whether incremental repairs are fatally flawed in
Cassandra 3.x or whether they're still a good default. What's the current
best thinking? The most recent 3.x documentation
<http://cassandra.apache.org/> still advocates in favor of using
incremental repairs. CASSANDRA-9143 is marked as fixed in 4.0; did any
improvements make it into any
Hi
We would like to migrate from incremental repairs to regular full repairs
on a Cassandra cluster running Apache Cassandra 3.11. There is a
procedure for this from DataStax in the document mentioned below,
but the nodetool option mentioned inside the document is not available
you can get away with loading from only one node if you're positive all
data is consistent. A repair prior to loading should be enough, but if that
doesn't work just load from all nodes.
On 11 Nov. 2017 23:15, "Brice Figureau"
wrote:
> On 10/11/17 21:18, kurt
On 10/11/17 21:18, kurt greaves wrote:
> If everything goes smoothly the next incremental should cut it, but a
> full repair post load is probably a good idea anyway. Make sure you
> sstableload every sstable from every node if you want to keep consistency.
If the previous cluster had 3 nodes
If everything goes smoothly the next incremental should cut it, but a full
repair post load is probably a good idea anyway. Make sure you sstableload
every sstable from every node if you want to keep consistency.
to incremental repairs when I
moved it to 3.0.
Do I need to perform a full repair again after migrating, or is
running daily incremental repairs enough?
Thanks!
--
Brice Figureau
node-to-node communication network cards on
your C* host machines.
- If possible, reduce # of vnodes!
From: Chris Stokesmore [mailto:chris.elsm...@demandlogic.co]
Sent: Monday, June 19, 2017 4:50 AM
To: anujw_2...@yahoo.co.in
Cc: user@cassandra.apache.org
Subject: Re: Partition range incremental rep
> previously when running with the partition range option they were taking
> more like 8-9 hours.
>
> As I understand it, using incremental should have sped this process up as
> all three sets of data on each repair job should be marked as repaired,
> however this does not seem to be the case. Any ideas?
<https://issues.apache.org/jira/browse/CASSANDRA-9143>.
>
> TL;DR: Do not use incremental repair before 4.0.
Hi Jonathan,
Thanks for your reply; this is a slightly scary message for us! 2.2 has been
out for nearly two years and incremental repairs are the default - and they
have horrible bugs!?
As I understand it, using incremental should have sped this process up as all
three sets of data on each repair job should be marked as repaired, however this
does not seem to be the case. Any ideas?
Chris
> On 6 Jun 2017, at 16:08, Anuj Wadehra <anujw_2...@yahoo.co.in.INVALID> wrote:

Hi Chris,
Using pr with incremental repairs does not make sense. Primary range repair is
an optimization over full repair. If you run full repair on an n-node cluster
with RF=3, you would be repairing each piece of data thrice. E.g. in a 5-node
cluster with RF=3, a range may exist on nodes A, B and C
use more anticompaction and
> http://docs.datastax.com/en/archived/cassandra/2.2/cassandra/tools/toolsRepair.html
> says 'Performing partitioner range repairs by using the -pr option is
> generally considered a good choice for doing manual repairs. However, this
> option cannot be used with incremental repairs (default for Cassandra 2.2 and
> later).'
Only problem is our -pr repairs were taking about 8 hours, and now the non-pr
repairs are taking 24+ hours - I guess this makes sense, since repairing 1/7 of
the data increased to 3/7, except I was hoping to see a speed up
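For reference, the two modes being compared might look like this in 2.2+ syntax (keyspace name is an assumption; -pr is only valid together with a full repair):

```shell
# Full repair restricted to this node's primary range (the pre-2.2 routine)
nodetool repair -full -pr my_keyspace
# Incremental repair, the 2.2+ default; covers all ranges the node replicates
nodetool repair my_keyspace
```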
> On Mon, Oct 24, 2016 at 18:05, Sean Bridges <sean.brid...@globalrelay.net> wrote:
>
> > Hey,
> >
> > In the datastax documentation on repair [1], it says,
> >
> > "The partitioner range option is recommended for routine maintenance. Do
> > not use it to repair a downed node. Do not use with incremental repair
> > (default for Cassandra 3.0 and later)."
> >
> > Why is it not recommended to use -pr with incremental repairs?
Can't say I have too many ideas. If load is low during the repair it
shouldn't be happening. Your disks aren't overutilised, correct? No other
processes writing loads of data to them?
That is not happening anymore since I am repairing a keyspace with
much less data (the other one is still there in write-only mode).
The command I am using is the most boring one (I even dropped the -pr option
to keep anticompactions to a minimum): nodetool -h localhost repair
It's executed
Blowing out to 1k SSTables seems a bit full on. What args are you passing
to repair?
Kurt Greaves
k...@instaclustr.com
www.instaclustr.com
On 31 October 2016 at 09:49, Stefano Ortolani wrote:
> I've collected some more data-points, and I still see dropped
> mutations with compaction_throughput_mb_per_sec set to 8.
I've collected some more data-points, and I still see dropped
mutations with compaction_throughput_mb_per_sec set to 8.
The only notable thing regarding the current setup is that I have
another keyspace (not being repaired though) with really wide rows
(100MB per partition), but that shouldn't
Thanks.
Sean
From: Alexander Dejanovski [a...@thelastpickle.com]
Sent: Monday, October 24, 2016 10:39 AM
To: user@cassandra.apache.org
Subject: Re: incremental repairs with -pr flag?
Hi Sean,
In order to mitigate its impact, anticompaction is not fully executed
Why is it not recommended to use -pr with incremental repairs?
Thanks,
Sean
[1]
https://docs.datastax.com/en/cassandra/3.x/cassandra/operations/opsRepairNodesManualRepair.html
--
Sean Bridges
senior systems architect
Global Relay
sean.brid...@globalrelay.net
866.484.6630
probably because I was looking at the wrong version of the codebase :p
Looks like you're using subranges with incremental repairs. This will
generate a lot of anticompactions as you'll only repair a portion of the
SSTables. You should use forceRepairAsync for incremental repairs so that
it's possible for the repair to act on the whole SSTable, minimising
Sorry, I shouldn't have said adding a node. Sometimes data seems to be corrupted
or inconsistent, in which case I would like to run a repair.
Sent from my iPhone
> On Oct 19, 2016, at 10:10 AM, Sean Bridges wrote:
>
> Thanks, we will try that.
>
> Sean
There aren't that many tools I know of to orchestrate repairs; we maintain
a fork of Reaper, which was made by Spotify and handles incremental repair:
https://github.com/thelastpickle/cassandra-reaper
We just added Cassandra as a storage backend (only Postgres currently) in
one of the branches,
Thanks, we will try that.
Sean
On 16-10-19 09:34 AM, Alexander Dejanovski wrote:
Hi Sean,
you should be able to do that by running subrange repairs, which is
the only type of repair that wouldn't trigger anticompaction AFAIK.
Beware that now you will have sstables marked as repaired and
Can you explain why you would want to run repair for new nodes?
Aren't you talking about bootstrap, which is not related to repair actually?
On Wed, Oct 19, 2016 at 18:57, Kant Kodali wrote:
> Thanks! How do I do an incremental repair when I add a new node?
>
> Sent from my
Also, any suggestions on a tool to orchestrate the incremental repair? Like,
say, the most commonly used one?
Sent from my iPhone
> On Oct 19, 2016, at 9:54 AM, Alexander Dejanovski
> wrote:
>
> Hi Kant,
>
> subrange is a form of full repair, so it will just split the repair
Thanks! How do I do an incremental repair when I add a new node?
Sent from my iPhone
Hi Kant,
subrange is a form of full repair, so it will just split the repair process
in smaller yet sequential pieces of work (repair is started giving a start
and end token). Overall, you should not expect improvements other than
having less overstreaming and better chances of success if your
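A sketch of what a single subrange repair invocation looks like; the token values below are made up, and real ranges would come from `nodetool describering <keyspace>`:

```shell
# Repair one token range at a time; per the advice in this thread, subrange
# (full) repair avoids the repaired/unrepaired anticompaction split
nodetool repair -st -9223372036854775808 -et -4611686018427387904 my_keyspace
```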
Another question on the same note: what would be the fastest way to do
repairs of a 10TB cluster? Full repairs are taking days. So between parallel
repair and subrange repair, which is faster in the case of, say, adding a new
node to the cluster?
Sent from my iPhone
> On Oct 19, 2016, at
Hi Sean,
you should be able to do that by running subrange repairs, which is the
only type of repair that wouldn't trigger anticompaction AFAIK.
Beware that now you will have sstables marked as repaired and others marked
as unrepaired, which will never be compacted together.
You might want to
Hey,
We are upgrading from Cassandra 2.1 to Cassandra 2.2.
With Cassandra 2.1 we would periodically repair all nodes, using the -pr
flag.
With Cassandra 2.2, the same repair takes a very long time, as Cassandra
does an anticompaction after the repair. This anticompaction causes
most
>>>> "The best way to predict the future is to invent it" Alan Kay
>>>>
>>>> On Tue, Jun 21, 2016 at 4:34 PM, Vlad <qa23d-...@yahoo.com> wrote:
>>>>
>>>>> Thanks for the answer!
Stefano Ortolani <ostef...@gmail.com>:
>
>> I see. Didn't think about it that way. Thanks for clarifying!
>>
>> On Fri, Aug 26, 2016 at 2:14 PM, Paulo Motta <pauloricard...@gmail.com> wrote:
>>
>>> > What is the underlying reason?
>>>
>>> Basically to minimize the amount of anti-compaction needed, since
>>> with RF=3 you'd need to perform anti-compaction
ded, since with
>>>>> RF=3 you'd need to perform anti-compaction 3 times in a particular node to
>>>>> get it fully repaired, while without it you can just repair the full
>>>>> node's
>>>>> range in one run. Assuming you run
ode to
>>>> get it fully repaired, while without it you can just repair the full node's
>>>> range in one run. Assuming you run repair frequent enough this will not be
>>>> a big deal, since you will skip already repaired data in the next round so
>>>>
>>> you will skip already repaired data in the next round so
>>> you will not have the problem of re-doing work as in non-inc non-pr repair.
>>>
>>> 2016-08-26 7:57 GMT-03:00 Stefano Ortolani <ostef...@gmail.com>:
>>>
>>>> Hi Paulo, could you elaborate on
Hi Paulo, could you elaborate on 2?
I didn't know incremental repairs were not compatible with -pr
What is the underlying reason?
Regards,
Stefano
On Fri, Aug 26, 2016 at 1:25 AM, Paulo Motta <pauloricard...@gmail.com>
wrote:
> 1. Migration procedure is no longer necessary after CASSA
Incremental repair is not supported with the -pr, -local or -st/-et
options, so you should run incremental repair on all nodes in all DCs
sequentially (you should be aware that this will probably generate inter-DC
traffic); no need to disable autocompaction or stop nodes.
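Sketched as a loop, with host names and keyspace as placeholders:

```shell
# Run the default incremental repair on every node sequentially,
# with no -pr/-local/-st/-et flags, per the advice above
for host in node1 node2 node3; do
  nodetool -h "$host" repair my_keyspace
done
```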
-03:00 Aleksandr Ivanov <ale...@gmail.com>:
I’m new in Cassandra and trying to figure out how to _start_ using
incremental repairs. I have seen the article about “Migrating to incremental
repairs” but since I didn’t use repairs before at all and I use Cassandra
version v3.0.8, maybe not all the steps mentioned in the Datastax
That's what I was thinking. Maybe GC pressure?
Some more details: during anticompaction I have some CFs exploding to 1K
SSTables (to be back to ~200 upon completion).
HW specs should be quite good (12 cores/32 GB RAM) but, I admit, still
relying on spinning disks, with ~150GB per node.
Current
That's pretty low already, but perhaps you should lower it to see if it
improves the dropped mutations during anti-compaction (even if it increases
repair time); otherwise the problem might be somewhere else. Generally,
dropped mutations are a signal of cluster overload, so if there's nothing
else
Not yet. Right now I have it set at 16.
Would halving it more or less double the repair time?
On Tue, Aug 9, 2016 at 7:58 PM, Paulo Motta
wrote:
> Anticompaction throttling can be done by setting the usual
> compaction_throughput_mb_per_sec knob on cassandra.yaml or
Anticompaction throttling can be done by setting the usual
compaction_throughput_mb_per_sec knob on cassandra.yaml or via nodetool
setcompactionthroughput. Did you try lowering that and checking if that
improves the dropped mutations?
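For example, at runtime (the value is illustrative):

```shell
# Throttle compaction AND anticompaction to 8 MB/s without a restart
nodetool setcompactionthroughput 8
# Verify the current setting
nodetool getcompactionthroughput
```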
2016-08-09 13:32 GMT-03:00 Stefano Ortolani
Hi all,
I am running incremental repairs on a weekly basis (can't do it every day
as one single run takes 36 hours), and every time I have at least one node
dropping mutations as part of the process (almost always during the
anticompaction phase). Ironically this leads to a system where
Sorry, meant to say "therefore manual migration procedure should be
UNnecessary"
On Mon, Jun 20, 2016 at 3:21 PM, Bryan Cheng <br...@blockcypher.com> wrote:
> I don't use 3.x so hopefully someone with operational experience can chime
> in, however my understanding is:
I don't use 3.x so hopefully someone with operational experience can chime
in, however my understanding is: 1) Incremental repairs should be the
default in the 3.x release branch and 2) sstable repairedAt is now properly
set in all sstables as of 2.2.x for standard repairs and therefore manual
Hi,
assuming I have a new, empty Cassandra cluster, how should I start using
incremental repairs? Is incremental repair the default now (as I don't see an
-inc option in nodetool), and is nothing needed to use it, or should we
perform the migration procedure anyway? And what happens to new column families
are not done, the first incremental
repair could take a very long time.
Can anyone clarify this point please?
Did anyone try incremental repairs without the migration procedure with
a sizeable amount of data to migrate?
How much longer did it take?
Thank you very much for your help.
Kind regards
As far as I know, the docs are quite inconsistent on the matter.
Based on some research here and on IRC, recent versions of Cassandra do not
require anything specific when migrating to incremental repairs but the
-inc switch, even on LCS.
Any confirmation on the matter is more than welcome.
Regards
Hi,
We currently have a 3-node Cassandra cluster with RF = 3.
We are using Cassandra 2.1.7.
We would like to start using incremental repairs.
We have some tables using the LCS compaction strategy and some others using
STCS.
Here is the procedure written in the documentation:
To migrate to incremental repair
Hi,
I am currently trying to migrate my test cluster to incremental repairs.
These are the steps I'm doing on every node:
- touch marker
- nodetool disableautocompaction
- nodetool repair
- cassandra stop
- find all *Data*.db files older than marker
- invoke sstablerepairedset on those
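Those steps might look like this as commands; the paths, keyspace, and service names are assumptions, and sstablerepairedset must run while the node is down:

```shell
touch /tmp/migration-marker
nodetool disableautocompaction my_keyspace
nodetool repair my_keyspace
sudo service cassandra stop
# Mark only sstables that predate the repair as repaired
find /var/lib/cassandra/data/my_keyspace -name '*Data*.db' \
  ! -newer /tmp/migration-marker \
  -exec sstablerepairedset --really-set --is-repaired {} +
sudo service cassandra start
```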
-pr is sufficient, same rules apply as before, if you run -pr you need to
repair every node
/Marcus
On Thu, Jan 8, 2015 at 9:16 AM, Roland Etzenhammer
r.etzenham...@t
and with your
hint I took a look at sstablemetadata from a non-migrated node and
there are indeed "Repaired at" entries on some sstables already. So if I
got this right, in 2.1.2+ there is nothing to do to switch to incremental
repairs (apart from running the repairs themselves).
But one thing I see during testing is that there are many sstables with
small size:
- in total there are 5521 sstables on one node
- 115 sstables are bigger than 1MB
- 4949
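The check described above can be sketched as follows; the sstable path is illustrative:

```shell
# Inspect the repairedAt field; "Repaired at: 0" means never incrementally
# repaired, a timestamp means the sstable is marked repaired
sstablemetadata /var/lib/cassandra/data/ks/tbl/ma-1-big-Data.db | grep -i "repaired at"
```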
Hi Marcus,
thanks a lot for those pointers. Now further testing can begin - and
I'll wait for 2.1.3. Right now repair times on production are really
painful; maybe that will get better. At least I hope so :-)
On Thu, Jan 8, 2015 at 12:28 AM, Marcus Eriksson krum...@gmail.com wrote:
But, if you are running 2.1 in production, I would recommend that you wait
until 2.1.3 is out; https://issues.apache.org/jira/browse/CASSANDRA-8316
fixes a bunch of issues with incremental repairs.
There are other
I'm having problems understanding how incremental repairs are supposed to
be run.
If I try to do nodetool repair -inc, Cassandra will complain that "It is
not possible to mix sequential repair and incremental repairs". However it
seems that running nodetool repair -inc -par does the job, but I
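In other words, with the 2.1-era flags (keyspace name assumed):

```shell
# Fails: incremental repair defaults to sequential, which can't be mixed
nodetool repair -inc my_keyspace
# Works: parallel + incremental
nodetool repair -par -inc my_keyspace
```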
On Wed, Oct 22, 2014 at 2:39 PM, Juho Mäkinen juho.maki...@gmail.com
wrote:
On Wed, Oct 22, 2014 at 5:47 AM, Marcus Eriksson krum...@gmail.com wrote:
No, if you get a corrupt sstable for example, you will need to run an
old-style repair on that node (without -inc).
As a general statement, if you get a corrupt SSTable, restoring it from a
backup (with the node down)