Thanks Erick! It is clear now.
On Tue, Sep 7, 2021 at 4:07 PM Erick Ramirez
wrote:
No, I'm just saying that [-pr] is the same as [-pr -full], NOT the same as
just [-full] on its own. Primary range repairs are not compatible with
incremental repairs so by definition, -pr is a [-pr -full] repair. I think
you're confusing the concept of a full repair vs incremental. This document
Thanks Erick for the response. So in option 3, -pr is not taken into
consideration which essentially means option 3 is the same as option 1
(which is the full repair).
Right? I just want to be sure.
Best,
Deepak
On Tue, Sep 7, 2021 at 3:41 PM Erick Ramirez
wrote:
1. Will perform a full repair, as opposed to an incremental repair, which is
the default in some later versions.
2. As you said, will only repair the token range(s) on the node for
which it is a primary owner.
3. The -full flag with -pr is redundant -- primary range repairs are
always done as a full repair.
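Points 2 and 3 can be sketched with a toy model (illustrative Python only, not Cassandra's actual placement code; tokens and node count are made up): each node is "primary" for exactly one slice of the ring, so running -pr on every node covers each range exactly once.

```python
# Toy token ring: one token per node (no vnodes). Node i is "primary" for
# the range (previous_token, its_token]. Running "repair -pr" on every
# node therefore touches each ring range exactly once -- no overlaps.
tokens = [0, 100, 200, 300, 400, 500]
n = len(tokens)

def primary_range(i):
    # The range ending at node i's own token.
    return (tokens[(i - 1) % n], tokens[i])

all_primary = [primary_range(i) for i in range(n)]

# Every ring range is covered exactly once: no overlap, no gaps.
assert len(set(all_primary)) == n
print(all_primary)
```

This is also why -pr plus -full is redundant: the primary-range restriction only changes *which* ranges are repaired, and those ranges are always repaired as a full (non-incremental) repair.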
Hi There,
We are on Cassandra 3.0.11 and I want to understand the
difference between the following three commands
1. nodetool repair -full
2. nodetool repair -pr
3. nodetool repair -full -pr
As per my understanding, 1 will do a full repair across all keyspaces. 2,
with -pr, restricts repair
Shouldn't cause GCs.
You can usually think of heap memory separately from the rest. It's
already allocated as far as the OS is concerned, and it doesn't know
anything about GC going on inside of that allocation. You can set
"-XX:+AlwaysPreTouch" to make sure it's physically allocated on startup.
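If you want to try that flag, it would go in the JVM options file (a config sketch; the exact file name and location vary by Cassandra version and packaging):

```
# conf/jvm.options -- pre-touch the whole heap at JVM startup so its pages
# are physically allocated up front rather than lazily on first use
-XX:+AlwaysPreTouch
```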
Thanks. I guess some earlier thread got truncated.
I already applied Erick's recommendations and that seems to have worked,
reducing the RAM consumption by around 50%.
Regarding cheap memory and hardware, we are already running 96GB boxes and
getting multiple larger ones might be a little
I think Erick posted https://community.datastax.com/questions/6947/.
It explained it very clearly.
We hit the same issue, only on a huge table, during an upgrade, and we
changed it back after we were done.
My understanding: which option to choose depends on your use case.
If chasing high performance on a big table,
Missed the heap part; not sure why that is happening
On Tue, Aug 3, 2021 at 8:59 AM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
mmap is used for faster reads and, as you guessed, you might see read
performance degradation. If you are seeing high memory usage after repairs
due to mmap()ed files, the only way to reduce the memory usage is to trigger
some other process which requires memory. mmap()ed files use buffer/cache
Can anyone please help with the above questions? To summarise:
1) What is the impact of using mmap only for indices besides a degradation
in read performance?
2) Why does the off-heap memory consumed during a Cassandra full repair
remain occupied 12+ hours after the repair completes, and is there a
Hi Erick,
Limiting mmap to index only seems to have resolved the issue. The max RAM
usage remained at 60% this time. Could you please point me to the
limitations of setting this param? For starters, I can see read
performance getting reduced by up to 30% (CASSANDRA-8464
Thanks, Bowen, don't think that's an issue - but yes I can try upgrading to
3.11.5 and limit the merkle tree size to bring down the memory utilization.
Thanks, Erick, let me try that.
Can someone please share documentation on the internal functioning of
full repairs, if any exists?
Based on the symptoms you described, it's most likely caused by SSTables
being mmap()ed as part of the repairs.
Set `disk_access_mode: mmap_index_only` so only index files get mapped and
not the data files. I've explained it in a bit more detail in this article
--
Could it be related to
https://issues.apache.org/jira/browse/CASSANDRA-14096 ?
On 28/07/2021 13:55, Amandeep Srivastava wrote:
Hi team,
My Cluster configs: DC1 - 9 nodes, DC2 - 4 nodes
Node configs: 12 core x 96GB ram x 1 TB HDD
Repair params: -full -pr -local
Cassandra version: 3.11.4
I'm running a full repair on DC2 nodes - one node and one keyspace at a
time. During the repair, RAM usage on all 4 nodes spikes up to
Also, Reaper will skip the anticompaction phase, which you might be going
through with nodetool (depending on your version of Cassandra).
That'll reduce the overall time spent on repair and remove some
compaction pressure.
But as Erick said, unless you have past repairs to rely on and a stable
On Mon, Mar 23, 2020 at 5:49 AM Shishir Kumar
wrote:
There's a lot of moving parts with repairs and how long it takes depends on
various factors including (but not limited to):
- how busy the nodes are
- how fast the CPUs are
- how fast the disks are
- how much network bandwidth is available
- how much data needs to be repaired
It's
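For a very rough sanity check, you can divide the data size by an assumed effective repair throughput. The 800 GB figure is from the question; the 10 MB/s rate is purely a made-up placeholder, so measure your own validation and streaming rates before trusting any number like this.

```python
# Back-of-the-envelope repair duration: data to validate divided by an
# assumed effective rate. 800 GB comes from the question; 10 MB/s is a
# placeholder assumption, not a measured figure.
data_mb = 800 * 1024          # ~800 GB per node, in MB
throughput_mb_s = 10          # assumed effective validation/stream rate

seconds = data_mb / throughput_mb_s
print(f"{seconds / 3600:.1f} hours")   # ~22.8 hours at these assumptions
```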
Hi,
Is it possible to get/predict how much time it will take for *nodetool
repair -pr* to complete on a node? Currently in one of my envs (~800GB data
per node in a 6 node cluster), it has been running for the last 3 days.
Regards,
Shishir
Hello Shalom,
Someone already tried a rolling restart of Cassandra. I will probably try
rebooting the OS.
Repair seems to work if you do it a keyspace at a time.
Thanks for your input.
Rhys
On Sun, May 5, 2019 at 2:14 PM shalom sagges wrote:
Hi Rhys,
I encountered this error after adding new SSTables to a cluster and running
nodetool refresh (v3.0.12).
The refresh worked, but after starting repairs on the cluster, I got the
"Validation failed in /X.X.X.X" error on the remote DC.
A rolling restart solved the issue for me.
Hope this
> Hello,
>
> I’m having issues running repair on an Apache Cassandra Cluster. I’m getting
> "Failed creating a merkle tree" errors on the replication partner nodes.
> Anyone have any experience of this? I am running 2.2.13.
>
> Further details here…
>
Hi Kunal,
where do you have that "more than 3 hours" from?
Regards
Hello everyone..
I have a 6 node Cassandra cluster, 3 nodes in each datacenter. If one of
the nodes goes down and remains down for more than 3 hr, I have to run
nodetool repair. Just wanted to ask if Cassandra automatically tracks the
time when one of the Cassandra nodes goes down or do I need
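The "more than 3 hours" in questions like this usually comes from the hinted handoff window, which is a cassandra.yaml setting (shown below with what I believe is the default; verify against your version):

```
# cassandra.yaml -- how long peers will collect hints for a dead node;
# a node that is down longer than this window needs a repair to catch up
max_hint_window_in_ms: 10800000   # 3 hours (default)
```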
I'm using RF=2 (I know it should be at least 3, but I'm short of resources)
and WCL=ONE and RCL=ONE in a cluster of 10 nodes in an insert-only scenario.
The problem: I don't want to use nodetool repair because it would put huge
load on my cluster for a long time, but also I need data
According docs at
http://cassandra.apache.org/doc/latest/operating/repair.html?highlight=single
*The -pr flag will only repair the “primary” ranges on a node, so you can
repair your entire cluster by running nodetool repair -pr on each node in
a single datacenter.*
But I have seen many places where
I want to repair all nodes at all data centers.
Example:
DC1
nodeA
nodeB
nodeC
DC2
nodeD
nodeE
nodeF
If I run `nodetool repair -pr` at nodeA nodeB and nodeC, will all ranges be
repaired?
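Why the datacenter matters for -pr can be sketched with a toy ring (illustrative Python only; token values and node names are made up, not from the question): primary ranges are computed over the whole cluster's ring, so running -pr only on one datacenter's nodes never repairs the ranges for which the other DC's nodes are primary.

```python
# Toy ring with two DCs interleaved. Running "repair -pr" only on DC1's
# nodes leaves the ranges whose primary owner sits in DC2 unrepaired.
ring = [  # (token, node, dc)
    (0,   "A", "DC1"), (100, "D", "DC2"),
    (200, "B", "DC1"), (300, "E", "DC2"),
    (400, "C", "DC1"), (500, "F", "DC2"),
]
n = len(ring)

def primary_range(i):
    return (ring[(i - 1) % n][0], ring[i][0])

repaired = {primary_range(i) for i, (_, _, dc) in enumerate(ring) if dc == "DC1"}
all_ranges = {primary_range(i) for i in range(n)}

assert repaired != all_ranges            # DC1-only -pr misses half the ring
print(sorted(all_ranges - repaired))     # the ranges owned by DC2's nodes
```

So in the example from the question, running `nodetool repair -pr` on nodeA, nodeB and nodeC alone would not cover the full ring unless nodes D, E and F happen to own no primary ranges, which is not the case in a normal token assignment.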
From DS docs: "Do not use -pr with this option to repair only a local data
center."
On Jun 8, 2018, 10:42 AM -0400, user@cassandra.apache.org, wrote:
>
> nodetool repair -pr
Hi!
I want to repair all nodes in all datacenters.
Should I run *nodetool repair -pr* at all nodes of a SINGLE datacenter or
at all nodes of ALL datacenters?
--
Regards,
Igor Zubchenok
CTO at Multi Brains LLC
Founder of taxistartup.com saytaxi.com chauffy.com
Skype: igor.zubchenok
Hi All,
I have an 18 node cluster across 3 DCs. If I try to run incremental repair
on a single node it takes forever, sometimes 45 min to 1 hr, and sometimes
it times out, so I started running "nodetool repair -dc dc1" for each DC
one by one, which works fine. Do we have a better way to handle
Network Ltd, co.
Add: 2003,20F No.35 Luojia creative city,Luoyu Road,Wuhan,HuBei
Mob: +86 13797007811|Tel: + 86 27 5024 2516
From: James Shaw <jxys...@gmail.com>
Sent: April 2, 2018 21:56
To: user@cassandra.apache.org
Subject: Re: nodetool repair and compact
you may use: nodetool upgradesstables
.com/blog/2016/07/27/about-deletes-and-tombstones.html
Repair doesn't clean up tombstones; they're only removed through
compaction. I advise taking care with nodetool compact, most of the time
it's not a great idea for a variety of reasons. Check out the link above;
if you still have questions, ask away.
Hi All,
I want to delete the expired tombstones. Some people use nodetool repair,
but others use compact, so I want to know which one is the correct way.
I have read the pages below from Datastax, but the page just tells us how
to use the command; it doesn't tell us what it exactly does
you don't have any zombie data or other problems.
On 17 March 2018 at 15:52, Hannu Kröger <hkro...@gmail.com> wrote:
Hi Jonathan,
If you want to repair just one node (for example if it has been down for more
than 3h), run “nodetool repair -full” on that node. This will bring all data on
that node up to date.
If you want to repair all data on the cluster, run “nodetool repair -full -pr”
on each node
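The point above can be sketched with a toy model (illustrative Python, assuming SimpleStrategy-style placement with made-up tokens, not Cassandra's actual code): a full repair on a single node only covers the ranges that node replicates, so one run is not the whole cluster unless RF equals the node count.

```python
# Toy model: 6 nodes, RF=3. A full repair on one node covers only the
# ranges that node replicates (its primary range plus the rf-1 preceding
# ones) -- 3 of the 6 ring ranges -- hence one node is not enough.
tokens = [0, 100, 200, 300, 400, 500]
n, rf = len(tokens), 3

def rng(i):
    return (tokens[(i - 1) % n], tokens[i])

def replicated_by(i):
    # Node i stores its primary range and the rf-1 ranges before it
    # (SimpleStrategy-style successor placement, purely for illustration).
    return {rng((i - k) % n) for k in range(rf)}

covered = replicated_by(0)           # "nodetool repair -full" on node 0
assert len(covered) == rf            # 3 of the 6 ring ranges
assert covered != {rng(i) for i in range(n)}
```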
Hi Community,
Can someone confirm, as the documentation out on the web is so contradictory
and vague.
Nodetool repair -full: if I call this, do I need to run it on ALL my nodes,
or is running it once sufficient?
Thanks
J
Jonathan Baynes
DBA
Tradeweb Europe Limited
Moor Place * 1 Fore Street
>
> What we did have was some sort of overlapping between our daily repair
> cronjob and the newly added node still in progress joining. Don’t know if
> this sort of combination might causing troubles.
I wouldn't be surprised if this caused problems. Probably want to avoid
that.
I did some further testing and ran the following repair call on the same node:
nodetool repair -pr ks cf1 cf2
waiting a few minutes after each finished execution, and every time I see
"… out of sync …" log messages in the context of the repair, so it looks like
avoid running the repairs across all the nodes simultaneously
and instead spread them out over a week. That likely made it worse. Also
worth noting that in versions 3.0+ you won't be able to run nodetool repair
in such a way because anti-compaction will be triggered which will fail if
multiple anti
Hello,
Production, 9 node cluster with Cassandra 2.1.18, vnodes, default 256 tokens,
RF=3, compaction throttling = 16, concurrent compactors = 4, running in AWS
using m4.xlarge at ~ 35% CPU AVG
We have a nightly cronjob starting a "nodetool repair -pr ks cf1 cf2"
concurrently on
Thank you
From: Fd Habash
Sent: Thursday, February 22, 2018 9:00 AM
To: user@cassandra.apache.org
Subject: RE: Cluster Repairs 'nodetool repair -pr' Cause Severe Increase in
Read Latency After Shrinking Cluster
“ data was allowed to fully rebalance/repair/drain before the next node
Thank you
From: Carl Mueller
Sent: Wednesday, February 21, 2018 4:33 PM
To: user@cassandra.apache.org
Subject: Re: Cluster Repairs 'nodetool repair -pr' Cause Severe Increase in
Read Latency After Shrinking Cluster
Hm, nodetool decommission performs the streamout
>>> What is your replication factor?
>>> Single datacenter, three availability zones, is that right?
>>> You removed one node at a time or three at once?
We have had a 15 node cluster across three zones and cluster repairs using
'nodetool repair -pr' took about 3 hours to finish. Lately, we shrunk the
cluster to 12. Since then, the same repair job has taken up to 12 hours to
finish and most times, it never does.
More importantly, at some point during the repair cycle, we see read
Over time the various nodes likely got slightly out of sync - dropped mutations
primarily, during Long GC pauses or maybe network failures
In that case, repair will make all of the data match - how long it takes
depends on size of data (more data takes longer to validate), size of your
Hi Roger,
You have provided incomplete information, which makes this tough to analyse.
But please check the JIRA link below to see whether it is useful:
https://issues.apache.org/jira/browse/CASSANDRA-6616
Thanks.
On Thu, Nov 30, 2017 at 9:42 AM, Roger Warner
What would running a repair on a cluster do when there are no deletes, nor
have there ever been? I have no deletes yet on my data. Yet running a repair
took over 9 hours on a 5 node cluster?
Roger
read requests while running nodetool repair
You can accomplish this by manually tweaking the values in the dynamic snitch
mbean so other nodes won’t select it for reads
--
Jeff Jirsa
On Oct 18, 2017, at 3:24 AM, Steinmaurer, Thomas
<thomas.steinmau...@dynatrace.com<mailto:thomas.st
thinks
that some sort of write-only is appropriate.
Thanks,
Thomas
From: Nicolas Guyomar [mailto:nicolas.guyo...@gmail.com]
Sent: Mittwoch, 18. Oktober 2017 09:58
To: user@cassandra.apache.org
Subject: Re: Not serving read requests while running nodetool repair
Hi Thomas,
AFAIK temporarily reading at LOCAL_QUORUM/QUORUM until nodetool repair is
finished is the way to go. You can still disable binary/thrift on the node
to "protect" it from acting as a coordinator, and complete its repair
quietly, but I'm not sure that would make such a huge
From an operational POV, we will trigger a nodetool repair after the
recovered node has started up, but to my understanding this still may cause
reading stale data from this particular node until nodetool repair is
finished, which may take several hours. Is this correct?
Is there a way (e.g
Screen and/or subrange repair (e.g. reaper)
--
Jeff Jirsa
I'm on Apache Cassandra 3.10. I'm interested in moving over to Reaper for
repairs, but in the meantime, I want to get nodetool repair working a
little more gracefully.
What I'm noticing is that, when I'm running a repair for the first time
with the --full option after a large initial load of data
Hi All,
Can someone confirm if
"nodetool repair -pr -j2" does run with -inc too? I see the docs mention -inc
is set by default, but I am not sure if it is enabled when -pr option is used.
Thanks!
What is your GC_GRACE_SECONDS?
What kind of repair option do you use for nodetool repair on a keyspace?
Did you start the repair on one node? Did you use nodetool repair -pr? Or
just "nodetool repair keyspace"? How many nodetool repair processes do you
use on the nodes?
On Sun, Ju
Hello All,
I have a 6 node ring with 3 nodes in DC1 and 3 nodes in DC2. I ssh'ed into
node5 on DC2, which was in a "DN" state. I ran "nodetool repair". I've had
this situation before and ran "nodetool repair -dc DC2". I'm trying to find
out what, if anything, is different between those commands. What
On 2017-07-27 21:36 (-0700), Mitch Gitman <mgit...@gmail.com> wrote:
> Now, the particular symptom to which that response refers is not what I was
> seeing, but the response got me thinking that perhaps the failures I was
> getting were on account of attempting to run
You need check the node that failed validation to find the relevant error.
The IP should be in the logs of the node you started repair on.
You shouldn't run multiple repairs on the same table from multiple nodes
unless you really know what you're doing and not using vnodes. The failure
you are
Michael, thanks for the input. I don't think I'm going to need to upgrade
to 3.11 for the sake of getting nodetool repair working for me. Instead, I
have another plausible explanation and solution for my particular situation.
First, I should say that disk usage proved to be a red herring
On 07/27/2017 12:10 PM, Mitch Gitman wrote:
> I'm using Apache Cassandra 3.10.
> this is a dev cluster I'm talking about.
> Further insights welcome...
Upgrade and see if one of the many fixes for 3.11.0 helped?
https://github.com/apache/cassandra/blob/cassandra-3.11.0/CHANGES.txt#L1-L129
If
partitioner range:
nodetool repair --partitioner-range
On a couple nodes, I was seeing the repair fail with the vague "Some repair
failed" message:
[2017-07-27 15:30:59,283] Some repair failed
[2017-07-27 15:30:59,286] Repair command #2 finished in 10 seconds
error: Repair job
I do not see the need to run repair, as long as cluster was in healthy
state on adding new nodes.
On Fri, Jul 7, 2017 at 8:37 AM, vasu gunja <vasu.no...@gmail.com> wrote:
Hi,
I have a question regarding the "nodetool repair -dc" option. Recently we
added multiple nodes to one DC, and we want to perform repair only on the
current DC.
Here is my question: do we need to perform "nodetool repair -dc" on all
nodes belonging to that DC, or only
: nodetool repair failure
It did not help much. But another issue/error I saw when I repaired the
keyspace was that it says
"Sync failed between /xx.xx.xx.93 and /xx.xx.xx.94"; this was run from the
.91 node.
On Thu, Jun 29, 2017 at 4:44 PM, Akhil Mehra
<akhilme...@gmail.com<mailto:akhi
at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)
at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)
at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)
at com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
FYI, I am running repair from the xx.xx.xx.91 node and it's a 5 node cluster.
nodetool repair has a trace option
nodetool repair -tr yourkeyspacename
see if that provides you with additional information.
Regards,
Akhil
> On 28/06/2017, at 2:25 AM, Balaji Venkatesan <venkatesan.bal...@gmail.com>
> wrote:
We use Apache Cassandra 3.10-13
On Jun 26, 2017 8:41 PM, "Michael Shuler" <mich...@pbandjelly.org> wrote:
What version of Cassandra?
--
Michael
On 06/26/2017 09:53 PM, Balaji Venkatesan wrote:
Hi All,
When I run nodetool repair on a keyspace I constantly get a "Some repair
failed" error; there is not sufficient info to debug further. Any help?
Here is the stacktrace
==
[2017-06-27 02:44:34,275] Some repair fa