Every time I run nodetool repair, it uses all disk I/O and the server load
goes up quickly. I just wonder, is there any way to make it smoother?
The validating compaction that is part of repair is subject to
compaction throttling.
The streaming of sstables afterwards is not, however. In 1.0 there is
throttling for streaming as well.
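(For reference: compaction throttling is the compaction_throughput_mb_per_sec setting in cassandra.yaml, and if I remember right 1.0 adds stream_throughput_outbound_megabits_per_sec for the streaming side. A minimal sketch, with example values only:
  compaction_throughput_mb_per_sec: 16
  stream_throughput_outbound_megabits_per_sec: 200
Newer versions also expose nodetool setcompactionthroughput to change the compaction limit at runtime.)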
I saw the ticket about compaction throttling; I just wonder, is it necessary
to add an option, or is there any way to do repair throttling?
Every time I run nodetool repair, it uses all disk I/O and the server load
goes up quickly. I just wonder, is there any way to make it smoother?
a while, the nodetool repair never returns. We have checked the system.log;
nothing seems to be out of the ordinary, no errors, no exceptions. The data is
only 50 MB, and it is consistently updated.
Shutting down one node during the repair process could cause a similar
symptom. So, our original
environment, we got two nodes with RF=2 running 0.8.4. We
tried to test the repair functions of Cassandra; however, every once in a while,
the nodetool repair never returns. We have checked the system.log; nothing
seems to be out of the ordinary, no errors, no exceptions. The data is only 50
MB
Hi,
In our testing environment, we got two nodes with RF=2 running 0.8.4. We
tried to test the repair functions of Cassandra; however, every once in a
while, the nodetool repair never returns. We have checked the system.log;
nothing seems to be out of the ordinary, no errors, no exceptions. The data
Would Cassandra-2433 cause this?
On Wed, Aug 24, 2011 at 7:23 PM, Boris Yen yulin...@gmail.com wrote:
Hi,
In our testing environment, we got two nodes with RF=2 running 0.8.4. We
tried to test the repair functions of Cassandra; however, every once in a
while, the nodetool repair never returns
On Sat, Aug 20, 2011 at 01:22 +0200, Peter Schuller wrote:
Is there any chance that the entire file from the source node got streamed to
the destination node, even though only a small amount of data in the file from
the source node was supposed to be streamed to the destination node?
Yes, but the thing
After having done so many tries, I am not sure which log entries correspond
to what. However, there were many of this type:
WARN [CompactionExecutor:14] 2011-08-18 18:47:00,596 CompactionManager.java
(line 730) Index file contained a different key or row size; using key from
data file
And
Do you have an indication that at least the disk space is in fact
consistent with the amount of data being streamed between the nodes? I
think you went from 90 to ~450 gig with RF=3, right? Still sounds like a
lot, assuming repairs are not running concurrently (and compactions are
able to run after
Péter,
In our case they get created exclusively during repairs. Compactionstats
showed a huge number of sstable build compactions
On Aug 20, 2011 1:23 AM, Peter Schuller peter.schul...@infidyne.com
wrote:
The compaction settings do not affect repair. (Thinking out loud, or does it?
Validation compactions and table builds.)
It does.
--
/ Peter Schuller (@scode on twitter)
Is it normal that the repair takes 4+ hours for every node, with only about
10 GB of data? If this is not expected, do we have any hint as to what could
be causing this?
It does not seem entirely crazy, depending on the nature of your data
and how CPU-intensive it is per byte to compact.
Assuming
After upgrading to Cassandra 0.8.4 from 0.6.11, I ran scrub. That worked
fine. Then I ran nodetool repair on one of the nodes. The disk usage on the
data directory increased from 40 GB to 480 GB, and it's still growing.
If you check your data directory, does it contain a lot of
*Compacted files?
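For anyone wanting to check, something like this works (a sketch; the data path assumes the default install location and MyKeyspace is a placeholder):
  ls /var/lib/cassandra/data/MyKeyspace/ | grep Compacted
  find /var/lib/cassandra/data -name '*Compacted' | wc -l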
19, 2011 at 2:26 PM, Peter Schuller peter.schul...@infidyne.com
wrote:
After upgrading to cass 0.8.4 from cass 0.6.11. I ran scrub. That
worked
fine. Then I ran nodetool repair on one of the nodes. The disk usage on
data directory increased from 40GB to 480GB, and it's still growing
There were a few Compacted files. I thought that might have been the cause,
but that wasn't it. We have a CF that is 23 GB, and while repair is running,
multiple instances of that CF are created along with other CFs.
To confirm - are you saying the data directory size is huge, but the
live size as reported by nodetool ring and nodetool info does NOT
reflect this inflated size?
That's correct.
What files *do* you have in the data directory? Any left-over *tmp*
files for example?
The files that
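For reference, one way to compare the live size against the raw on-disk usage (a sketch; the data path assumes the default install location):
  nodetool -h 127.0.0.1 ring    # the Load column is the live size
  du -sh /var/lib/cassandra/data/*    # raw on-disk size, including obsolete and tmp files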
Is there any chance that the entire file from the source node got streamed to
the destination node, even though only a small amount of data in the file from
the source node was supposed to be streamed to the destination node?
Yes, but the thing that's annoying me is that even if so, you should
not be seeing a 40 GB
Unfortunately, repairing one CF at a time didn't help in my case because it
still streams all CFs, and that triggers lots of compactions
On Aug 18, 2011 3:48 PM, Huy Le hu...@springpartners.com wrote:
Thanks. I won't try that then.
So in our environment, after upgrading from 0.6.11 to 0.8.4, we have to run
scrub on all nodes before we can run repair on them. Is there any chance
that running scrub on the nodes causes data from all SSTables to be
streamed to/from other nodes when running
Hi,
Is it normal that the repair takes 4+ hours for every node, with only about 10 GB
of data? If this is not expected, do we have any hint as to what could be causing this?
The ring looks like below; we're using 0.8.1. Our repair is scheduled to run
once per week for all nodes.
Compaction related
No, scrub is a local operation only.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 19/08/2011, at 6:36 AM, Huy Le wrote:
Thanks. I won't try that then.
So in our environment, after upgrading from 0.6.11 to 0.8.4, we have
The compaction settings do not affect repair. (Thinking out loud, or does it?
Validation compactions and table builds.)
Watch the logs, or check
nodetool compactionstats to see when the Validation compaction completes,
and
nodetool netstats to see how long the data transfer takes.
It sounds a
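For example, something like (host is a placeholder):
  nodetool -h 10.0.0.1 compactionstats    # look for the Validation compaction
  nodetool -h 10.0.0.1 netstats    # shows which sstables are streaming and their progress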
Hi,
After upgrading to Cassandra 0.8.4 from 0.6.11, I ran scrub. That worked
fine. Then I ran nodetool repair on one of the nodes. The disk usage on the
data directory increased from 40 GB to 480 GB, and it's still growing.
The cluster has 4 nodes with replication factor 3. The ring shows:
Address         DC          Rack        Status State   Load      Owns    Token
Look at my last two or three threads. I've encountered the same thing and
got some pointers/answers.
On Aug 17, 2011 4:03 PM, Huy Le hu...@springpartners.com wrote:
Hi,
After upgrading to cass 0.8.4 from cass 0.6.11. I ran scrub. That worked
fine. Then I ran nodetool repair on one of the nodes
Running v0.8.2, I can't see how to monitor the status/progress of a nodetool
repair. Any advice?
The nodetool repair command from the command line is not returning, so I
assume it's still running. But there's little CPU or disk activity.
Using jconsole to look at the AntiEntropyStage attributes
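In case it is useful, the AntiEntropyStage counters are also visible from the command line (a sketch; host is a placeholder):
  nodetool -h 127.0.0.1 tpstats | grep -i antientropy
  nodetool -h 127.0.0.1 netstats
Non-zero active/pending AntiEntropy tasks, or streams listed by netstats, suggest the repair is still doing work.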
On Sun, Jul 31, 2011 at 2:25 AM, Jason Baker ja...@apture.com wrote:
When I run nodetool repair on a node on my 3-node cluster, I see 3 messages
like the following:
INFO [manual-repair-6d9a617f-c496-4744-9002-a56909b83d5b] 2011-07-30
18:50:28,464 AntiEntropyService.java (line 636
When I run nodetool repair on a node on my 3-node cluster, I see 3 messages
like the following:
INFO [manual-repair-6d9a617f-c496-4744-9002-a56909b83d5b] 2011-07-30
18:50:28,464 AntiEntropyService.java (line 636) No neighbors to repair with
for system on (0,56713727820156410577229101238628035242
I would guess that means you've only configured a single replica per row.
On Sat, Jul 30, 2011 at 7:25 PM, Jason Baker ja...@apture.com wrote:
When I run nodetool repair on a node on my 3-node cluster, I see 3 messages
like the following:
INFO [manual-repair-6d9a617f-c496-4744-9002
Hi all,
Maybe I'm doing something wrong, but calling ./nodetool -h host repair
mykeyspace mycolumnfamily should only repair mycolumnfamily, right?
Every time I try a repair it repairs the whole keyspace instead of just
one column family. I'm on Cassandra 0.8.1.
https://issues.apache.org/jira/browse/CASSANDRA-2280
2011/7/19 Héctor Izquierdo Seliva izquie...@strands.com:
Hi all,
Maybe I'm doing something wrong, but calling ./nodetool -h host repair
mykeyspace mycolumnfamily should only repair mycolumnfamily right?
Every time I try a repair it repairs
Are there any plans to backport this to 0.8?
On Tue, Jul 19, 2011 at 11:43 -0500, Jonathan Ellis wrote:
https://issues.apache.org/jira/browse/CASSANDRA-2280
2011/7/19 Héctor Izquierdo Seliva izquie...@strands.com:
Hi all,
Maybe I'm doing something wrong, but calling ./nodetool -h
Short answer: no.
Long answer: https://issues.apache.org/jira/browse/CASSANDRA-2818
2011/7/19 Héctor Izquierdo Seliva izquie...@strands.com:
Are there any plans to backport this to 0.8?
On Tue, Jul 19, 2011 at 11:43 -0500, Jonathan Ellis wrote:
From Cassandra: The Definitive Guide - Basic Maintenance - Repair:
Running nodetool repair causes Cassandra to execute a major compaction.
During a major compaction (see “Compaction” in the Glossary), the
server initiates a
TreeRequest/TreeResponse conversation to exchange Merkle trees
Just confirming. Thanks for the clarification.
On Tue, Jul 12, 2011 at 10:53 AM, Peter Schuller
peter.schul...@infidyne.com wrote:
From Cassandra the definitive guide - Basic Maintenance - Repair
Running nodetool repair causes Cassandra to execute a major compaction.
During a major
Instead of doing nodetool repair, is it not a cheaper operation to
keep tabs on failed writes (be they deletes, inserts, or updates) and
read these failed writes at a set frequency in some batch job? By
reading them, RR would get triggered and they would get to a
consistent state.
Because
Never mind. I see the issue with this. I will be able to catch the
writes as failed only if I set CL=ALL. For other CLs, I may not know
that it failed on some node.
On Mon, Jul 11, 2011 at 2:33 PM, A J s5a...@gmail.com wrote:
Instead of doing nodetool repair, is it not a cheaper operation
Hi experts,
Are there any benchmarks that quantify how long nodetool repair takes?
Something which says: on this kind of hardware, with this much data,
nodetool repair takes this long. The other question that I have is, since
Cassandra recommends running nodetool repair within
On Tue, Jul 5, 2011 at 1:27 PM, Raj N raj.cassan...@gmail.com wrote:
Hi experts,
Are there any benchmarks that quantify how long nodetool repair takes?
Something which says on this kind of hardware, with this much of data,
nodetool repair takes this long. The other question that I have
I know it doesn't. But is this a valid enhancement request?
On Tue, Jul 5, 2011 at 1:32 PM, Edward Capriolo edlinuxg...@gmail.comwrote:
On Tue, Jul 5, 2011 at 1:27 PM, Raj N raj.cassan...@gmail.com wrote:
Hi experts,
Are there any benchmarks that quantify how long nodetool repair
I am a little confused about the reason why nodetool repair has to run
within GCGraceSeconds.
The documentation at
http://wiki.apache.org/cassandra/Operations#Frequency_of_nodetool_repair
is not very clear to me.
How can a delete be 'unforgotten' if I don't run nodetool repair? (I
understand
On Thu, Jun 30, 2011 at 4:25 PM, A J s5a...@gmail.com wrote:
I am little confused of the reason why nodetool repair has to run
within GCGraceSeconds.
The documentation at:
http://wiki.apache.org/cassandra/Operations#Frequency_of_nodetool_repair
is not very clear to me.
How can a delete
: 'foo':'bar'
We have the infamous undelete.
- Original Message -
From: A J s5a...@gmail.com
To: user@cassandra.apache.org
Sent: Thursday, June 30, 2011 8:25:29 PM
Subject: Meaning of 'nodetool repair has to run within GCGraceSeconds'
I am little confused of the reason why nodetool repair has
On Thu, Jun 30, 2011 at 3:47 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
Read repair does NOT repair tombstones.
It does, but you can't rely on RR to repair _all_ tombstones, because
RR only happens if the row in question is requested by a client.
--
Jonathan Ellis
Project Chair, Apache
Thanks all!
In other words, I think it is safe to say that a node as a whole can
be made consistent only by 'nodetool repair'.
Has there been enough interest in providing anti-entropy without
compaction as a separate operation (nodetool repair does both)?
On Thu, Jun 30, 2011 at 5:27 PM
It would be helpful if this were automated somehow.
On Thu, Jun 30, 2011 at 5:27 PM, Jonathan Ellis jbel...@gmail.com wrote:
On Thu, Jun 30, 2011 at 3:47 PM, Edward Capriolo edlinuxg...@gmail.com
wrote:
Read repair does NOT repair tombstones.
It does, but you can't rely on RR to repair _all_ tombstones, because
RR only happens if the row in
#Dealing_with_the_consequences_of_nodetool_repair_not_running_within_GCGraceSeconds
The sequence of events was like this:
1) set GCGraceSeconds to some huge value
2) perform rolling upgrade from 0.7.4 to 0.7.6-2
3) run nodetool repair on the first node in the cluster ~10pm. It has a ~30G
database
4) 2.30am: decide to leave it running all night and wake up 9am to find it
still running
5) late morning: investigation shows that db size has increased to 370G
Hi Everyone,
We are looking for help with upgrading our Cassandra from 0.6 to 0.7.2 here in
Israel.
If there is anyone here that can help out with consulting, please email me.
Thanks
Or Offer
SimilarGroup
or.of...@similargroup.com
for a given tombstone t, that each node will get t within
gc_grace_period. This means that if a node dies, you need it to be up again
and to have nodetool repair run before gc_grace_period expires; otherwise there
may be some tombstones that this node will never see (and thus deleted data
could be resurrected
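To make that concrete, an illustrative timeline (assuming the default gc_grace_seconds of 10 days; node names are placeholders):
  day 0:  node C goes down
  day 1:  a column is deleted; nodes A and B record the tombstone, C never sees it
  day 11: A and B compact the tombstone away once gc_grace has expired
  later:  C comes back up still holding the old column, and read repair or repair copies it back to A and B (the 'undelete')
Running nodetool repair on C before gc_grace expired would have propagated the tombstone to it instead.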
On 04/05/2011 03:49 PM, Jonathan Ellis wrote:
Sounds like https://issues.apache.org/jira/browse/CASSANDRA-2324
Yes, that sounds like the issue I'm having. Any chance for a fix for
this being backported to 0.7.x?
Anyway, I guess I might as well share the test case I've used to
reproduce this
On Tue, Apr 5, 2011 at 12:01 AM, Maki Watanabe watanabe.m...@gmail.com wrote:
Hello,
On reading O'Reilly's Cassandra book and the wiki, I'm a bit confused about
nodetool repair and compact.
I believe we need to run nodetool repair regularly, and that it synchronizes
all replica nodes at the end
and the wiki, I'm a bit confused about
nodetool repair and compact.
I believe we need to run nodetool repair regularly, and that it synchronizes
all replica nodes at the end.
According to the documents, the repair invokes a major compaction also
(as a side effect?).
Those documents are wrong then. A repair does
Hi,
I have a 6 node 0.7.4 cluster with replication_factor=3 where nodetool
repair keyspace behaves really strangely.
The keyspace contains three column families and about 60 GB of data in total
(i.e. 30 GB on each node).
Even though no data has been added or deleted since the last repair, a
repair
I am experiencing the same behavior but had it on previous versions of 0.7 as
well.
-----Original Message-----
From: Jonas Borgström [mailto:jonas.borgst...@trioptima.com]
Sent: Monday, 4 April 2011 12:26
To: user@cassandra.apache.org
Subject: Strange nodetool repair behaviour
Hi
On Monday, 4 April 2011, Jonas Borgström wrote:
I have a 6 node 0.7.4 cluster with replication_factor=3 where nodetool
repair keyspace behaves really strangely.
I think I am observing a similar issue.
I have three 0.7.4 nodes with RF=3.
After compaction I see about 7 GB load on the node, but after
Hello,
On reading O'Reilly's Cassandra book and the wiki, I'm a bit confused about
nodetool repair and compact.
I believe we need to run nodetool repair regularly, and that it synchronizes
all replica nodes at the end.
According to the documents, the repair invokes a major compaction also
(as a side effect
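For what it's worth, the two are separate nodetool commands (a sketch; host and keyspace are placeholders):
  nodetool -h 127.0.0.1 repair MyKeyspace    # anti-entropy: compares Merkle trees with replicas and streams differences
  nodetool -h 127.0.0.1 compact MyKeyspace    # major compaction: merges sstables locally, no cross-node synchronization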
Would the Cassandra team think about adding an alias name for the nodetool
repair command?
That thought has crossed my mind lately too; particularly in one of
the recent threads.
The problem seems analogous to 'fsck', and the distinction between
fully expected by-design behavior needing fsck/repair
Would the Cassandra team think about adding an alias name for the nodetool
repair command?
I mean, the word repair scares some people.
When I say we need to run nodetool repair regularly on Cassandra
nodes, they think: Oh... those are broken so often!
So it would help if I could say it with a softer word, e.g. sync or tune
On Mon, Mar 21, 2011 at 8:33 PM, A J s5a...@gmail.com wrote:
I am trying to estimate the time it will take to rebuild a node. After
loading reasonable data,
...
For some reason, the repair command runs forever. I just have 3G of
data per node but still the repair is running for more than an
0.7.4
On Tue, Mar 22, 2011 at 11:49 AM, Robert Coli rc...@digg.com wrote:
On Mon, Mar 21, 2011 at 8:33 PM, A J s5a...@gmail.com wrote:
I am trying to estimate the time it will take to rebuild a node. After
loading reasonable data,
...
For some reason, the repair command runs forever. I just
I am trying to estimate the time it will take to rebuild a node. After
loading reasonable data, I brought down a node and manually removed
all its datafiles for a given keyspace (Keyspace1).
I then restarted the node and got it back in the ring. At this point, I
wish to run nodetool repair (bin
its datafiles for a given keyspace (Keyspace1).
I then restarted the node and got it back in the ring. At this point, I
wish to run nodetool repair (bin/nodetool -h 127.0.0.1 repair
Keyspace1) and estimate the time to rebuild from the time it
takes to repair.
For some reason, the repair
/Operations#Repairing_missing_or_inconsistent_data
Aaron
On 16 Mar 2011, at 06:58, Daniel Doubleday wrote:
At least if you are using RackUnawareStrategy
Cheers,
Daniel
On Mar 15, 2011, at 6:44 PM, Huy Le wrote:
Hi,
We have a cluster with 12 servers and use RF=3. When running nodetool
Hi,
We have a cluster with 12 servers and use RF=3. When running nodetool
repair, do we have to run it on all nodes in the cluster, or can we run it on
every 3rd node? Thanks!
Huy
--
Huy Le
Spring Partners, Inc.
http://springpadit.com
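If you do end up running it on every node, a small loop works (a sketch; host names are placeholders):
  for h in node1 node2 node3; do nodetool -h $h repair; done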
, Huy Le wrote:
Hi,
We have a cluster with 12 servers and use RF=3. When running nodetool
repair, do we have to run it on all nodes on the cluster or can we run on
every 3rd node? Thanks!
Huy
--
Huy Le
Spring Partners, Inc.
http://springpadit.com
--
Jonathan Ellis
Project Chair
I never saw this before upgrading to 0.7.3, but now when I do nodetool repair
it sits there for hours. Previously it took about 20 minutes per
node (about 10 GB of data per node).
I had some OOM crashes, but haven't seen them since I increased the heap
size and decreased the key cache
I just saw repair hang here too, it's actually very easy to reproduce. I'm
looking at it right now.
--
Sylvain
On Tue, Mar 8, 2011 at 4:30 PM, Karl Hiramoto k...@hiramoto.org wrote:
I never saw this before upgrading to 0.7.3 but now I do nodetool repair and
it sits there for hours
On 08/03/2011 16:34, Sylvain Lebresne wrote:
I just saw repair hang here too, it's actually very easy to reproduce.
I'm looking at it right now.
--
Thanks. Should I bump GCGraceSeconds since I can no longer repair?
I tried repair on 3 nodes of a 6 node cluster and they all hang.
I suspect you are in the case of
https://issues.apache.org/jira/browse/CASSANDRA-2290.
That is, some neighbor node died or was unable to perform its part of the
repair. You can always
retry, making sure all nodes are and stay alive, to see if it is the former
case. But seeing the
other exception in
Just to ensure.
So this should be done manually by the cluster operators?
Thanks!
--
On 19 January 2011 12:15, Donal Zang zan...@ihep.ac.cn wrote:
Just to ensure.
So this should be done manually by the cluster operators?
you could use crontab to automate it according to a schedule
Thanks!
--
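For example, a weekly entry in each node's crontab could look like this (a sketch; the nodetool path, schedule, and log location are placeholders):
  # run repair every Sunday at 02:00
  0 2 * * 0 /opt/cassandra/bin/nodetool -h localhost repair > /var/log/cassandra/repair.log 2>&1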
There is a lot of information on care and feeding of your Cassandra cluster available on the wiki operations page:
http://wiki.apache.org/cassandra/Operations
There is also a section on how frequently repair should be run:
http://wiki.apache.org/cassandra/Operations#Frequency_of_nodetool_repair
Hope
, 2010 at 1:54 PM, B. Todd Burruss bburr...@real.com wrote:
if i have N=3 and run nodetool repair on node X. i assume that merkle
trees (at a minimum) are calculated on nodes X, X+1, and X+2 (since
N=3). when the repair is finished are nodes X, X+1, and X+2 all in sync
with respect to node
=2 I would repair nodes 1 and 3
and with 6 nodes and RF=3 I would repair nodes 1 and 4
and that would lead to a synched cluster?
On Thu, Jul 15, 2010 at 1:54 PM, B. Todd Burruss bburr...@real.com wrote:
if i have N=3 and run nodetool repair on node X. i assume that merkle
trees (at a minimum
If I have N=3 and run nodetool repair on node X, I assume that Merkle
trees (at a minimum) are calculated on nodes X, X+1, and X+2 (since
N=3). When the repair is finished, are nodes X, X+1, and X+2 all in sync
with respect to node X's data? Or does X have the latest data while X+1
and X+2 still
On Thu, Jul 15, 2010 at 1:54 PM, B. Todd Burruss bburr...@real.com wrote:
if i have N=3 and run nodetool repair on node X. i assume that merkle
trees (at a minimum) are calculated on nodes X, X+1, and X+2 (since
N=3). when the repair is finished are nodes X, X+1, and X+2 all in sync
Did you watch the logs to confirm that repair had actually finished? The
`nodetool repair` call is not blocking before 0.6.3 (unreleased): see
https://issues.apache.org/jira/browse/CASSANDRA-1090
-Original Message-
From: James Golick jamesgol...@gmail.com
Sent: Sunday, May 30, 2010 3
? The
`nodetool repair` call is not blocking before 0.6.3 (unreleased): see
https://issues.apache.org/jira/browse/CASSANDRA-1090
-Original Message-
From: James Golick jamesgol...@gmail.com
Sent: Sunday, May 30, 2010 3:43am
To: cassandra-u...@incubator.apache.org
Subject: Inconsistency even
after nodetool repair?
It may not have actually finished at that point. Though, according to JMX,
both compactions of each CF had completed, so I assumed it was done.
On Mon, May 31, 2010 at 11:29 AM, Stu Hood stu.h...@rackspace.com wrote:
Did you watch in the logs to confirm that repair had
, presumably from the
read-repair, since everything else looked healthy. So, I ran nodetool repair
on both of our nodes, but even after running the repairs, I'm still seeing
the same thing in the logs and high load on the nodes.
Does that sound right?