Thanks, this is a straightforward answer, exactly what I needed!
2015-01-22 19:22 GMT+01:00 Jan cne...@yahoo.com:
Running a 'nodetool repair' will 'not' bring the node down.
Your question:
does a nodetool repair make the server stop serving requests, or does it
just use a lot
On Thu, Jan 22, 2015 at 10:53 AM, SEGALIS Morgan msega...@gmail.com wrote:
what do you mean by operating correctly?
I mean that if you are operating near failure, repair might trip a node
into failure. But if you are operating correctly, repair should not.
=Rob
I don't think it is near failure; it uses only 3% of the CPU and 40% of the
RAM, if that is what you meant.
2015-01-22 19:58 GMT+01:00 Robert Coli rc...@eventbrite.com:
On Thu, Jan 22, 2015 at 10:53 AM, SEGALIS Morgan msega...@gmail.com
wrote:
what do you mean by operating correctly?
I mean
On Thu, Jan 22, 2015 at 9:36 AM, SEGALIS Morgan msega...@gmail.com wrote:
So I wondered, does a nodetool repair make the server stop serving
requests, or does it just use a lot of resources but still serve requests?
In pathological cases, repair can cause a node to seriously degrade. If you
I don't think you can do nodetool repair on a single-node cluster.
Still, one day or another you'll have to reboot your server, at which point
your cluster will be down. If you want high availability, you should use a
3-node cluster with RF = 3.
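For illustration, creating such a keyspace might look like this (a sketch;
the keyspace name "myapp" is a placeholder, and SimpleStrategy assumes a
single data center):

    # Pipe the statement into cqlsh; works on any cqlsh of this era.
    cqlsh <<'EOF'
    CREATE KEYSPACE myapp
      WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };
    EOF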
On 22 January 2015 at 18:10, Robert Coli rc
Running a 'nodetool repair' will 'not' bring the node down.
Your question: does a nodetool repair make the server stop serving requests, or
does it just use a lot of resources but still serve requests?
Answer: NO, the server will not stop serving requests. It will use
some
I have been searching all over the documentation but could not find a straight
answer.
For a project I'm using a single-node Cassandra database (so far)... It has
always worked well, but I'm reading everywhere that I should do a nodetool
repair at least every week, especially if I delete rows, which
Hi guys,
We have two DCs; we are planning to schedule nodetool repair weekly.
My question is: is nodetool repair cross-cluster or not? Is it sufficient
to run it without options on one node, or should it be scheduled on every node
with the host option?
Thanks
On Fri, Jan 9, 2015 at 8:01 AM, Adil adil.cha...@gmail.com wrote:
We have two DCs; we are planning to schedule nodetool repair
weekly. My question is: is nodetool repair cross-cluster or not? Is it
sufficient to run it without options on one node, or should it be scheduled on
every node
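For what it's worth, a minimal sketch of the usual per-node approach: every
node (in both DCs) runs a primary-range repair on its own day, so each range
is repaired exactly once per cycle. The cron entry below is illustrative
only; stagger the weekday per node and check that -pr exists in your version.

    # /etc/cron.d/cassandra-repair  (this node repairs Mondays at 02:00)
    # -pr repairs only this node's primary ranges, so it must run on every node.
    0 2 * * 1 cassandra nodetool repair -pr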
The official recommendation is 100k:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installRecommendSettings.html
I wonder if there's an advantage to this over unlimited if you're running
servers which are dedicated to your Cassandra cluster (which you should be
for
On Sat, Dec 6, 2014 at 8:05 AM, Eric Stevens migh...@gmail.com wrote:
The official recommendation is 100k:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installRecommendSettings.html
I wonder if there's an advantage to this over unlimited if you're running
servers
On Wed, Dec 3, 2014 at 6:37 AM, Rafał Furmański rfurman...@opera.com
wrote:
I see a “Too many open files” exception in the logs, but I’m sure that my limit
is now 150k.
Should I increase it? What’s a reasonable limit of open files for
Cassandra?
Why provide any limit? ulimit allows unlimited?
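For reference, a sketch of checking and raising the limit (the CassandraDaemon
process name and the limits.d path are the usual ones on package installs,
but verify for your setup; 100k is the value from the page above):

    # Limit of the running Cassandra JVM:
    grep 'open files' /proc/$(pgrep -f CassandraDaemon)/limits

    # How many file descriptors it actually holds right now:
    ls /proc/$(pgrep -f CassandraDaemon)/fd | wc -l

    # Persist a higher limit, e.g. in /etc/security/limits.d/cassandra.conf:
    #   cassandra - nofile 100000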
Hi All!
We have an 8-node cluster in 2 DCs (4 per DC, RF=3) running Cassandra 2.1.2 on
Linux Debian Wheezy.
I executed “nodetool repair” on one of the nodes, and this command returned
the following error:
Exception occurred during clean-up.
java.lang.reflect.UndeclaredThrowableException
error: JMX
rfurman...@opera.com wrote:
Hi All!
We have an 8-node cluster in 2 DCs (4 per DC, RF=3) running Cassandra 2.1.2 on
Linux Debian Wheezy.
I executed “nodetool repair” on one of the nodes, and this command returned
the following error:
Exception occurred during clean-up
in 2 DCs (4 per DC, RF=3) running Cassandra 2.1.2
on Linux Debian Wheezy.
I executed “nodetool repair” on one of the nodes, and this command returned
the following error:
Exception occurred during clean-up.
java.lang.reflect.UndeclaredThrowableException
error: JMX connection closed. You
, Nov 11, 2014 at 10:48 AM, venkat sam samvenkat...@outlook.com
wrote:
I have a 5-node cluster. On one node, one of the data directory partitions
crashed. After replacing the disk I restarted the Cassandra daemon and
ran nodetool repair to restore the missing replicas. But nodetool repair
On Wed, Nov 12, 2014 at 6:50 AM, Eric Stevens migh...@gmail.com wrote:
Wouldn't it be a better idea to issue removenode on the crashed node, wipe
the whole data directory (including system) and let it bootstrap cleanly so
that it's not part of the cluster while it gets back up to
Yes, with
rc...@eventbrite.com wrote:
On Tue, Nov 11, 2014 at 10:48 AM, venkat sam samvenkat...@outlook.com wrote:
I have a 5-node cluster. On one node, one of the data directory partitions
crashed. After replacing the disk I restarted the Cassandra daemon and ran
nodetool repair to restore
Hi All,
We're running two Cassandra 2.1 clusters (development and production) and
whenever I run a nodetool repair on indexed tables I get a Java exception
about creating snapshots:
Command line:
[2014-09-29 11:25:24,945] Repair session
73c0d390-47e4-11e4-ba0f-c7788dc924ec for range
On Mon, Sep 29, 2014 at 8:35 AM, Jeronimo de A. Barros
jeronimo.bar...@gmail.com wrote:
We're running two Cassandra 2.1 clusters (development and production) and
whenever I run a nodetool repair on indexed tables I get a Java exception
about creating snapshots:
Don't run 2.1 in production
Hi again,
On Mon, Sep 29, 2014 at 3:16 PM, Robert Coli rc...@eventbrite.com wrote:
Don't run 2.1 in production yet if you don't want to deal with bugs like
this in production.
Well, I got the latest stable Cassandra... going back to 2.0 then.
If you do file a JIRA, please let the list know
How do I watch the progress of nodetool repair?
Looks like the folklore from the list says to just use
nodetool compactionstats
nodetool netstats
… but the repair seems locked/stalled and neither of these is showing any
progress.
Granted, this is a lot of data, but it would be nice
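For later readers, the folklore spelled out as commands (a sketch; both
subcommands exist in nodetool of this era):

    # Validation compactions (Merkle tree builds) currently running:
    nodetool compactionstats

    # Streams currently in flight (the data-exchange phase of repair):
    nodetool netstats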
/Is-it-safe-to-stop-a-read-repair-and-any-suggestion-on-speeding-up-repairs-td6607367.html
You might find this helpful.
Thanks
On Thu, Aug 21, 2014 at 12:32 PM, Kevin Burton bur...@spinn3r.com wrote:
How do I watch the progress of nodetool repair?
Looks like the folklore from the list says to just
On Thu, Aug 21, 2014 at 12:32 PM, Kevin Burton bur...@spinn3r.com wrote:
How do I watch the progress of nodetool repair?
This is a very longstanding operational problem in Cassandra. Repair barely
works and is opaque, yet one is expected to run it once a week in the
default configuration
:
How do I watch the progress of nodetool repair?
This is a very longstanding operational problem in Cassandra. Repair barely
works and is opaque, yet one is expected to run it once a week in the default
configuration.
An unreasonably-hostile-in-tone-but-otherwise-accurate description
| www.instaclustr.com | @instaclustr
http://twitter.com/instaclustr | +61 415 936 359
On 22/08/2014, at 6:12 AM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Aug 21, 2014 at 12:32 PM, Kevin Burton bur...@spinn3r.com wrote:
How do I watch the progress of nodetool repair?
This is a very
| www.instaclustr.com | @instaclustr | +61 415 936 359
On 22/08/2014, at 6:12 AM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Aug 21, 2014 at 12:32 PM, Kevin Burton bur...@spinn3r.com wrote:
How do I watch the progress of nodetool repair?
This is a very longstanding operational
21, 2014 at 12:32 PM, Kevin Burton bur...@spinn3r.com
wrote:
How do I watch the progress of nodetool repair?
This is a very longstanding operational problem in Cassandra. Repair
barely works and is opaque, yet one is expected to run it once a week in
the default configuration
', '-1844674407370955162', '1844674407370955161',
'5534023222112865484'
Everything looked good so I changed the replication factor for my keyspace
from 1 to 2 and started running nodetool repair on each node. The first
node ran for a while, then threw an error:
Repair session 8d2a1190-25aa-11e4-8a15
Some questions on nodetool repair.
1. This tool repairs inconsistencies across replicas of the row. Since the
latest update always wins, I don't see inconsistencies other than ones
resulting from the combination of deletes, tombstones, and crashed nodes.
Technically, if data is never deleted from
consistency, they are only helpers/optimization and are not regarded
as operations that ensure consistency.
2. Want to understand the performance of 'nodetool repair' in a Cassandra
multi-data-center setup. As we add nodes to the cluster in various data
centers, does the performance of nodetool repair
wrote:
Some questions on nodetool repair.
1. This tool repairs inconsistencies across replicas of the row. Since the
latest update always wins, I don't see inconsistencies other than ones
resulting from the combination of deletes, tombstones, and crashed nodes.
Technically, if data is never
Thanks Mark,
Since we have replicas in each data center, the addition of a new data center
(and new replicas) has a performance implication for nodetool repair.
I do understand that adding nodes without increasing the number of replicas may
improve repair performance, but in this case we are adding new
repair.
1. This tool repairs inconsistencies across replicas of the row. Since the
latest update always wins, I don't see inconsistencies other than ones
resulting from the combination of deletes, tombstones, and crashed nodes.
Technically, if data is never deleted from Cassandra, then nodetool repair
On Tue, Aug 12, 2014 at 4:46 PM, Viswanathan Ramachandran
vish.ramachand...@gmail.com wrote:
Andrey, QUORUM consistency and no deletes make perfect sense.
I believe we could modify that to EACH_QUORUM or QUORUM consistency and no
deletes - isn't that right?
yes.
nodetool repair after doing this type of
alter.
The problem is that this command sometimes finishes very quickly. When it
finishes like that, it will normally say 'Lost notification...' and the exit
code is not zero.
So I just repeat this 'nodetool repair' until it finishes without error. I
also
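That retry loop might look roughly like this (a sketch; the keyspace name is
a placeholder and the sleep interval is arbitrary):

    # Re-run repair until it exits zero; 'Lost notification' exits non-zero.
    until nodetool repair mykeyspace; do
        echo "repair exited non-zero, retrying..." >&2
        sleep 60
    done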
the following command in cqlsh
ALTER KEYSPACE mykeyspace WITH REPLICATION = { 'class' :
'SimpleStrategy', 'replication_factor' : 2 };
I then tried to run the recommended nodetool repair after doing this type of
alter.
The problem is that this command sometimes finishes very quickly. When it
does
I also expanded on a script originally written by Matt Stump @ Datastax.
The readme has the reasoning behind requiring sub-range repairs.
https://github.com/hancockks/cassandra_range_repair
On Mon, Jun 30, 2014 at 10:20 PM, Phil Burress philburress...@gmail.com
wrote:
@Paulo, this is very
I have a six-node cluster in AWS (repl:3) and recently noticed that repair
was hanging. I've run with the -pr switch.
I see this output in the nodetool command line (and also in that node's
system.log):
Starting repair command #9, repairing 256 ranges for keyspace dev_a
but then no other
if the boxes are idle, you could use jstack and look at the stack… perhaps
it's locked somewhere.
Worth a shot.
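Something along these lines (a sketch; it assumes the JVM runs as the
cassandra user and the process name matches CassandraDaemon):

    # Dump thread stacks and look for repair-related threads:
    sudo -u cassandra jstack $(pgrep -f CassandraDaemon) \
        | grep -iE -A 5 'repair|antientropy|validation'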
On Tue, Jul 1, 2014 at 9:24 AM, Brian Tarbox tar...@cabotresearch.com
wrote:
I have a six-node cluster in AWS (repl:3) and recently noticed that repair
was hanging. I've run with
On Tue, Jul 1, 2014 at 9:24 AM, Brian Tarbox tar...@cabotresearch.com
wrote:
I have a six-node cluster in AWS (repl:3) and recently noticed that repair
was hanging. I've run with the -pr switch.
It'll do that.
What version of Cassandra?
=Rob
We're running 1.2.13.
Any chance that doing a rolling-restart would help?
Would running without the -pr improve the odds?
Thanks.
On Tue, Jul 1, 2014 at 1:40 PM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Jul 1, 2014 at 9:24 AM, Brian Tarbox tar...@cabotresearch.com
wrote:
I have a
Does this output from jstack indicate a problem?
ReadRepairStage:12170 daemon prio=10 tid=0x7f9dcc018800 nid=0x7361
waiting on condition [0x7f9db540c000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for
On Tue, Jul 1, 2014 at 11:09 AM, Brian Tarbox tar...@cabotresearch.com
wrote:
We're running 1.2.13.
1.2.17 contains a few streaming fixes which might help.
Any chance that doing a rolling-restart would help?
Probably not.
Would running without the -pr improve the odds?
No, that'd
Given that an upgrade is (for various internal reasons) not an option at
this point...is there anything I can do to get repair working again? I'll
also mention that I see this behavior from all nodes.
Thanks.
On Tue, Jul 1, 2014 at 2:51 PM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Jul
On Tue, Jul 1, 2014 at 11:54 AM, Brian Tarbox tar...@cabotresearch.com
wrote:
Given that an upgrade is (for various internal reasons) not an option at
this point...is there anything I can do to get repair working again? I'll
also mention that I see this behavior from all nodes.
I think
For what purpose are you running repair? Because I read that we should!
:-)
We do delete data from one column family quite regularly...from the other
CFs occasionally. We almost never run with less than 100% of our nodes up.
In this configuration do we *need* to run repair?
Thanks,
On Tue,
Thanks! We retrieved all the ranges and started running repair on them. We
ran through all of them but found one single range which brought the ENTIRE
cluster down. All of the other ranges ran quickly and smoothly. This one
problematic range reliably brings it down every time we try to run repair
On Tue, Jul 1, 2014 at 3:53 PM, Phil Burress philburress...@gmail.com
wrote:
Thanks! We retrieved all the ranges and started running repair on them. We
ran through all of them but found one single range which brought the ENTIRE
cluster down. All of the other ranges ran quickly and smoothly.
We are running into an issue with nodetool repair. One or more of our nodes
will die with OOM errors when running nodetool repair on a single node. I was
reading this http://www.datastax.com/dev/blog/advanced-repair-techniques
and it mentioned using the -snapshot option; however, that doesn't appear
Repair has used the snapshot option by default since 2.0.2 (see NEWS.txt),
so you don't have to specify it in your version.
Do you have a stack trace from when it OOMed?
On Mon, Jun 30, 2014 at 4:54 PM, Phil Burress philburress...@gmail.com wrote:
We are running into an issue with nodetool repair. One or more of our
don't have to specify it in your version.
Do you have a stack trace from when it OOMed?
On Mon, Jun 30, 2014 at 4:54 PM, Phil Burress philburress...@gmail.com
wrote:
We are running into an issue with nodetool repair. One or more of our
nodes
will die with OOM errors when running nodetool repair
On Mon, Jun 30, 2014 at 3:08 PM, Yuki Morishita mor.y...@gmail.com wrote:
Repair has used the snapshot option by default since 2.0.2 (see NEWS.txt).
As a general meta-comment, the process by which operationally important
defaults change in Cassandra seems ad hoc and sub-optimal.
For the record, my
We are running repair -pr. We've tried subrange manually and that seems to
work ok. I guess we'll go with that going forward. Thanks for all the info!
On Mon, Jun 30, 2014 at 6:52 PM, Jaydeep Chovatia
chovatia.jayd...@gmail.com wrote:
Are you running a full repair or on a subset? If you are
One last question. Any tips on scripting a subrange repair?
On Mon, Jun 30, 2014 at 7:12 PM, Phil Burress philburress...@gmail.com
wrote:
We are running repair -pr. We've tried subrange manually and that seems to
work ok. I guess we'll go with that going forward. Thanks for all the info!
If you find it useful, I created a tool where you input the node IP,
keyspace, column family, and optionally the number of partitions (default:
32K), and it outputs the list of subranges for that node, CF, partition
size: https://github.com/pauloricardomg/cassandra-list-subranges
So you can
@Paulo, this is very cool! Thanks very much for the link!
On Mon, Jun 30, 2014 at 9:37 PM, Paulo Ricardo Motta Gomes
paulo.mo...@chaordicsystems.com wrote:
If you find it useful, I created a tool where you input the node IP,
keyspace, column family, and optionally the number of partitions
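Underneath, such scripts boil down to nodetool's -st/-et options. A rough
sketch, assuming a subranges.txt of "start end" token pairs produced by a
tool like the ones above (keyspace and column family names are placeholders):

    # Repair one token sub-range at a time; small ranges fail and retry cheaply.
    while read -r start end; do
        nodetool repair -st "$start" -et "$end" mykeyspace mycf
    done < subranges.txt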
Have a test cluster with three nodes each in two datacenters. The
following causes nodetool repair to go into an (apparent) infinite
loop. This is with 2.0.6.
On node 10.140.140.101:
cqlsh CREATE KEYSPACE looptest WITH replication = {
... 'class': 'NetworkTopologyStrategy',
... '140
In fact, it did eventually finish in ~20 minutes. Is this duration
expected/normal?
--Kevin
On Wed, Apr 9, 2014 at 9:32 AM, Kevin McLaughlin kmcla...@gmail.com wrote:
Have a test cluster with three nodes each in two datacenters. The
following causes nodetool repair to go into an (apparent
On Wed, Apr 9, 2014 at 7:09 AM, Kevin McLaughlin kmcla...@gmail.com wrote:
In fact, it did eventually finish in ~20 minutes. Is this duration
expected/normal?
https://issues.apache.org/jira/browse/CASSANDRA-5220
=Rob
handoffs
(which prevent deleted data from reappearing), I believe. “Periodic repair”
refers to running “nodetool repair” (aka Anti-Entropy).
I too have wondered if setting gc_grace_seconds to zero and skipping “nodetool
repair” are safe.
We’re using C* 2.0.6. In the 2.0.X versions, with vnodes
of not storing hinted
handoffs (which prevent deleted data from reappearing), I believe.
“Periodic repair” refers to running “nodetool repair” (aka Anti-Entropy).
I too have wondered if setting gc_grace_seconds to zero and skipping
“nodetool repair” are safe.
We’re using C* 2.0.6. In the 2.0.X
nodetool repair once
a week on every node, on different days. Currently I have
something like 4 repair sessions running on each node, one for 3
weeks, and none has finished.
Reading the logs I didn't find any exceptions; apparently one
of the repair sessions got stuck
Hi,
I have two nodes with Cassandra 2.0.3, where repair sessions hang for an
indefinite time. I'm running nodetool repair once a week on every node,
on different days. Currently I have something like 4 repair sessions running on
each node, one for 3 weeks, and none has finished.
Reading the logs I
On Wed, Jan 8, 2014 at 8:52 AM, Paolo Crosato paolo.cros...@targaubiest.com
wrote:
I have two nodes with Cassandra 2.0.3, where repair sessions hang for an
indefinite time. I'm running nodetool repair once a week on every node, on
different days. Currently I have something like 4 repair sessions
paolo.cros...@targaubiest.com wrote:
I have two nodes with Cassandra 2.0.3, where repair sessions hang for an
indefinite time. I'm running nodetool repair once a week on every node, on
different days. Currently I have something like 4 repair sessions running on each
node, one for 3 weeks and none has
On Mon, Dec 9, 2013 at 6:39 PM, David Laube d...@stormpath.com wrote:
Hi All,
We are running Cassandra 2.0.2 and have recently stumbled upon an issue with
nodetool repair. Upon running nodetool repair on each of the 5 nodes in the
ring (one at a time) we observe the following exceptions
On Wed, Dec 11, 2013 at 11:02 AM, Sven Stark sven.st...@m-square.com.au wrote:
Corollary:
what is getting shipped over the wire? The ganglia screenshot shows the
network traffic on all the three hosts on which I ran the nodetool repair.
[image: Inline image 1]
remember
UN 10.1.2.11
the three hosts on which I ran the nodetool repair.
[image: Inline image 1]
remember
UN 10.1.2.11 107.47 KB 256 32.9% 1f800723-10e4-4dcd-841f-73709a81d432 rack1
UN 10.1.2.10 127.67 KB 256 32.4% bd6b2059-e9dc-4b01-95ab-d7c4fc0ec639 rack1
UN 10.1.2.12 107.62 KB 256
On Wed, Dec 11, 2013 at 1:35 AM, Sven Stark sven.st...@m-square.com.au wrote:
thanks for replying. Could you please be a bit more specific, though? E.g.
what exactly is being compacted - there is/was no data at all in the
cluster save for a few hundred kB in the system CF (see the nodetool status
, Dec 9, 2013 at 6:39 PM, David Laube d...@stormpath.com wrote:
Hi All,
We are running Cassandra 2.0.2 and have recently stumbled upon an issue with
nodetool repair. Upon running nodetool repair on each of the 5 nodes in the
ring (one at a time) we observe the following exceptions returned
Corollary:
what is getting shipped over the wire? The ganglia screenshot shows the
network traffic on all the three hosts on which I ran the nodetool repair.
[image: Inline image 1]
remember
UN 10.1.2.11 107.47 KB 256 32.9% 1f800723-10e4-4dcd-841f-73709a81d432 rack1
UN 10.1.2.10
Hi All,
We are running Cassandra 2.0.2 and have recently stumbled upon an issue with
nodetool repair. Upon running nodetool repair on each of the 5 nodes in the
ring (one at a time) we observe the following exceptions returned to standard
out:
[2013-12-08 11:04:02,047] Repair session
My experience is that you must upgrade to 2.0.3 ASAP to fix this.
Michael
On Mon, Dec 9, 2013 at 6:39 PM, David Laube d...@stormpath.com wrote:
Hi All,
We are running Cassandra 2.0.2 and have recently stumbled upon an issue
with nodetool repair. Upon running nodetool repair on each
have the same setup: one keyspace per client, and currently about 300
keyspaces. nodetool repair takes a long time, 4 hours with -pr on a single
node. We have a 4-node cluster with about 10 GB per node. Unfortunately,
we haven't been keeping track of the running time as keyspaces, or load
We have the same setup: one keyspace per client, and currently about 300
keyspaces. nodetool repair takes a long time, 4 hours with -pr on a single
node. We have a 4-node cluster with about 10 GB per node. Unfortunately,
we haven't been keeping track of the running time as keyspaces, or load
...@academicworks.com wrote:
We have the same setup: one keyspace per client, and currently about 300
keyspaces. nodetool repair takes a long time, 4 hours with -pr on a single
node. We have a 4-node cluster with about 10 GB per node. Unfortunately,
we haven't been keeping track of the running time
On Mon, Nov 25, 2013 at 12:28 PM, John Pyeatt john.pye...@singlewire.com wrote:
Are you using Vnodes? We are and they are set to 256.
What version of Cassandra are you running? We are running 1.2.9.
Vnode performance vis-à-vis repair is this JIRA issue:
We have an application that has been designed to use potentially 100s of
keyspaces (one for each company).
One thing we are noticing is that the running time of nodetool repair across
all of the keyspaces seems to increase linearly with the number of keyspaces. For
example, if we have a 6-node EC2 (m1.large
Afternoon,
We are noticing nodetool repair processes are not completing after a week's
worth of time, and this has resulted in some Cassandra nodes having more than
one process running due to cron scheduling. We are also chasing some performance
degradation after upgrading all nodes to version 1.2.8
nodetool repair just triggers the repair procedure. You can kill nodetool after
it starts; it doesn't change anything. To stop a repair you have to use nodetool
stop VALIDATION|COMPACTION
Thank you,
Andrey
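Concretely (a sketch; VALIDATION aborts the Merkle-tree build on the node
where you run it, and per the follow-up below, once streaming has started
only restarting the streaming node stops it):

    # Abort the validation phase of a running repair on this node:
    nodetool stop VALIDATION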
On Thu, Aug 8, 2013 at 1:00 PM, Andy Losey and...@addthis.com wrote:
Afternoon,
We
But the node might be streaming data as well; in that case the only option is to
restart the node that started the streaming operation
Sent from my iPhone
On Aug 8, 2013, at 5:56 PM, Andrey Ilinykh ailin...@gmail.com wrote:
nodetool repair just triggers the repair procedure. You can kill nodetool after
start
Hello,
I read in the docs that `nodetool repair` should be regularly run unless no
delete is ever performed. In my app, I never delete, but I heavily use the
TTL feature. Should repair still be run regularly? Also, does repair take
less time if it is run regularly? If not, is there a way
We observed the same behavior. During the last repair, the data distribution on
nodes was imbalanced as well, resulting in one node bloating.
On Aug 1, 2013 12:36 PM, Carl Lerche m...@carllerche.com wrote:
Hello,
I read in the docs that `nodetool repair` should be regularly run unless
no delete
Subject: How often to run `nodetool repair`
Hello,
I read in the docs that `nodetool repair` should be regularly run unless no
delete is ever performed. In my app, I never delete, but I heavily use the TTL
feature. Should repair still be run regularly? Also, does repair take less time
amounts of writes rather.
Regards,
Arthur
*From:* Carl Lerche m...@carllerche.com
*Sent:* Thursday, August 01, 2013 12:35 PM
*To:* user@cassandra.apache.org
*Subject:* How often to run `nodetool repair`
Hello,
I read in the docs that `nodetool repair` should be regularly run unless
On Thu, Aug 1, 2013 at 12:26 PM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Aug 1, 2013 at 9:35 AM, Carl Lerche m...@carllerche.com wrote:
I read in the docs that `nodetool repair` should be regularly run unless
no delete is ever performed. In my app, I never delete, but I heavily use
: Thursday, August 01, 2013 3:03 PM
To: user@cassandra.apache.org ; Arthur Zubarev
Subject: Re: How often to run `nodetool repair`
Arthur,
Yes, my use case for this Cassandra cluster is analytics. I am building a
Google Dapper-like (application tracing) system. I collect application traces
and write
TTL is effectively DELETE; you need to run a repair once every
gc_grace_seconds. If you don't, data might un-delete itself.
The undelete part is not true. BTW: with CASSANDRA-4917, TTLed columns will
not even create a tombstone (assuming ttl >= gc_grace).
The rest of your mail I agree with :-)
On Thu, Aug 1, 2013 at 1:16 PM, Andrey Ilinykh ailin...@gmail.com wrote:
On Thu, Aug 1, 2013 at 12:26 PM, Robert Coli rc...@eventbrite.com wrote:
TTL is effectively DELETE; you need to run a repair once every
gc_grace_seconds. If you don't, data might un-delete itself.
How is it possible?
On 08/01/2013 01:16 PM, Andrey Ilinykh wrote:
TTL is effectively DELETE; you need to run a repair once every
gc_grace_seconds. If you don't, data might un-delete itself.
How is it possible? Every replica has the TTL, so when it expires every
replica has a tombstone. I don't see how you
nodetool repair is not coming back on the command line
As an aside, the nodetool command makes a call to the server for each KS you are
repairing. The calls are done serially, and if your terminal session times out
the repair will stop after the last call nodetool made.
If I'm manually running
I am using Cassandra 1.1.5.
nodetool repair is not coming back on the command line. Did it run
successfully? Did it hang? How do you find out if the repair was successful? I
did not find anything in the logs. nodetool compactionstats and nodetool
netstats are clean.
nodetool compactionstats pending
check nodetool tpstats and look for AntiEntropySessions/AntiEntropyStage;
grep the log and look for repair and merkle tree
- Original Message -
From: S C as...@outlook.com
To: user@cassandra.apache.org
Sent: Monday, March 25, 2013 2:55:30 PM
Subject: nodetool repair hung?
I am
Thank you. It helped me.
Date: Mon, 25 Mar 2013 15:22:32 -0700
From: wz1...@yahoo.com
Subject: Re: nodetool repair hung?
To: user@cassandra.apache.org
check nodetool tpstats and look for AntiEntropySessions/AntiEntropyStage;
grep the log and look for repair and merkle tree
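Spelled out, that check might look like this (a sketch; the log path assumes
a package install):

    # Repair thread pools; non-zero Active/Pending means work is in flight:
    nodetool tpstats | grep -i antientropy

    # Recent repair and Merkle-tree messages:
    grep -iE 'repair|merkle' /var/log/cassandra/system.log | tail -50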
into tombstones and then purged after gc_grace.
Nodetool repair will ask the neighbour node, say node 2, to generate the
Merkle tree. As I understand it, the repair currently introduces 2 compactions.
Repairs currently require 2 major compactions: one to validate a column
family
Hi Guys - I have a question on vnodes and nodetool repair. If I have
configured the nodes as vnodes, say for example 2 nodes with RF=2.
Questions -
* There are some columns set with a TTL of X. After X, Cassandra will mark
them as tombstones. Is there still a probability of running
Hi,
I am new to Cassandra and I am not sure if this is the normal behavior, but
nodetool repair runs for too long even for a small dataset per node. As I am
writing, I started a nodetool repair last night at 18:41 and now it's 9:18
and it's still running; the size of my data is only ~500 MB per node
@cassandra.apache.org
user@cassandra.apache.org
Subject: Long running nodetool repair
Hi,
I am new to Cassandra and I am not sure if this is the normal behavior, but
nodetool repair runs for too long even for a small dataset per node. As I am
writing, I started a nodetool repair
@cassandra.apache.org
Sent: Tuesday, February 19, 2013 1:29:19 AM
Subject: Long running nodetool repair
Hi,
I am new to Cassandra and I am not sure if this is the normal behavior, but
nodetool repair runs for too long even for a small dataset per node. As I am
writing, I started a nodetool repair last