repaired.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 18/02/2013, at 11:12 AM, Marco Matarazzo marco.matara...@hexkeep.com wrote:
Greetings.
I'm trying to run nodetool repair on a Cassandra 1.2.1 cluster of 3 nodes
with 256 vnodes each.
On a pre-1.2 cluster I used to launch a nodetool repair on every node every
24hrs. Now I'm getting a different behavior, and I'm sure I'm missing something.
What I see on the command
I'm a bit late, but for reference.
Repair runs in two stages: first, differences are detected. You can monitor the
validation compaction with nodetool compactionstats.
Then the differences are streamed between the nodes, you can monitor that with
nodetool netstats.
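For reference, the two stages can be watched from a shell like this (the hostname is a placeholder; run against each node involved in the repair):

```shell
# Stage 1: Merkle-tree building shows up as "Validation" compactions.
nodetool -h node1 compactionstats

# Stage 2: out-of-sync ranges being streamed between replicas.
nodetool -h node1 netstats
```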
Nodetool repair command
…so it seems to me that it is running on all vnode ranges.
Yes.
Also, whatever node I launch the command on, only one node's log
is moving, and it is always the same node.
Not sure what you mean here.
So, to me, it's like the nodetool repair command is running always on the
same single node and repairing everything.
If you use nodetool repair without the -pr flag in your setup (3 nodes and I
assume RF 3) it will repair all token ranges in the cluster.
That's correct, 3 nodes and RF 3
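To make the difference concrete, a sketch (the keyspace name and host are made up):

```shell
# Without -pr: repairs every range this node is a replica for. With 3 nodes
# and RF 3, each node replicates the full ring, so this repairs everything.
nodetool -h node1 repair my_keyspace

# With -pr: repairs only this node's primary range(s). Run it on every node
# so that, together, the runs cover the ring exactly once.
nodetool -h node1 repair -pr my_keyspace
```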
Hi,
I am new to Cassandra and I would like to hear your thoughts on this.
We are running our tests with Cassandra 1.2.1, on a relatively small dataset,
~60GB.
Nodetool repair command has been running for almost 24 hours and I can't see
any activity from the logs or JMX.
What am I missing?
I observe this for both one- and 2-datacenter deployments, independent
of caching settings. Rebuilding/dropping and creating index or
restarting nodes
Hi again,
Once you start playing with CCM it's hard to stop, such a great tool.
My issue with secondary indexes is following: neither explicit
'nodetool repair' nor implicit 'hinted handoffs/read repairs' resolve
inconsistencies in data I get from secondary indexes.
I observe this for both one
will be compacted and redundant
will be removed? Is it true?
if we issue nodetool repair -pr on node 3, apart from streaming data from nodes
4 and 5 to 3, we also see data streamed between nodes 4 and 5, since they hold
the replicas. But I don't see any log regarding Merkle tree calculation on nodes
4 and 5
I decided to dig into the source code. It looks like, in the case of nodetool
repair, if the current node sees a difference between the remote nodes based
on the Merkle tree calculation, it will start a streaming repair session to ask
the remote nodes to stream data between each other.
But I am
On Thu, Jan 31, 2013 at 12:19 PM, Wei Zhu wz1...@yahoo.com wrote:
But I am still not sure about my first question regarding the
bootstrap, anyone?
As I understand it, bootstrap occurs from a single replica. Which
replica is chosen is based on some internal estimation of which is
@cassandra.apache.org
Sent: Thursday, January 31, 2013 1:50 PM
Subject: Re: General question regarding bootstrap and nodetool repair
On Thu, Jan 31, 2013 at 12:19 PM, Wei Zhu wz1...@yahoo.com wrote:
But I am still not sure about my first question regarding the
bootstrap, anyone?
As I
On Thu, Jan 31, 2013 at 3:31 PM, Wei Zhu wz1...@yahoo.com wrote:
The only reason I can think of is that the new node has the same IP as the
dead node we tried to replace? After reading the bootstrap code, it
shouldn't be the case. Is it a bug? Or anyone tried to replace a dead node
with the
Will that throttle the network traffic caused by nodetool repair?
yes.
Should I call it on all the nodes in the cluster?
Or set it in the yaml file?
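For reference, a sketch of both options (values are illustrative; the nodetool setting applies only to the node you run it against, so issue it per node):

```shell
# Runtime throttle for inter-node streaming, in megabits per second
# (1.x-era nodetool; affects only this node):
nodetool -h node1 setstreamthroughput 100

# Or set it persistently in cassandra.yaml on each node:
#   stream_throughput_outbound_megabits_per_sec: 100
```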
On 25/01/2013
traffic caused by nodetool repair?
Thanks.
-Wei
Hi,
Is nodetool repair only usable if the node to repair has a valid (=
up-to-date with its neighbors) schema?
If the data records are completely broken on a node with a given token, is it
valid to clean the (data) records and to execute replace_token=token on the
*same* node?
Thanks.
Regards
On Mon, Jan 7, 2013 at 9:05 AM, DE VITO Dominique
dominique.dev...@thalesgroup.com wrote:
Is nodetool repair only usable if the node to repair has a valid (=
up-to-date with its neighbors) schema?
If the node is in the cluster, it should have the correct schema. If
it doesn't have the correct
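For the replace_token part of the question, a sketch of the 0.8/1.0-era procedure (the paths and the token value are placeholders; double-check against your own layout before deleting anything, and note that replace_token is passed as a JVM system property):

```shell
# 1. Wipe the broken node's local state (placeholder paths!):
rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/*

# 2. Restart, telling the node to claim the same token so it
#    re-streams its data from the other replicas:
cassandra -Dcassandra.replace_token=85070591730234615865843651857942052864
```

A repair afterwards is still advisable once the node is back up.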
Hey everyone,
I'm seeing some conflicting advice out there about whether you need to run
nodetool repair within GCGraceSeconds with 1.x. Can someone clarify two
things:
(1) Do I need to run repair if I'm running 1.x?
(2) Should I bother running repair if I don't have any deletes? Anything
I have a 4 node cluster, version 1.1.2, replication factor of 4,
read/write consistency of 3, level compaction. Several questions.
1) Should nodetool repair be run regularly to assure it has
completed before gc_grace? If it is not run, what are the exposures?
2) If a node goes
On Thursday, November 15, 2012, Dwight Smith dwight.sm...@genesyslab.com
wrote:
I have a 4 node cluster, version 1.1.2, replication factor of 4,
read/write consistency of 3, level compaction. Several questions.
1) Should nodetool repair be run regularly to assure it has
completed before
Thanks
From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: Thursday, November 15, 2012 4:30 PM
To: user@cassandra.apache.org
Subject: Re: Question regarding the need to run nodetool repair
and 1.0.3 [2]). Upgrade to 1.1.6 ASAP so that the answers below
actually apply, because working Hinted Handoff is involved.
1) Should nodetool repair be run regularly to assure it has completed
before gc_grace? If it is not run, what are the exposures?
If you do DELETE logical operations, yes
This is a problem for us as well.
Our current planned approach is to parse the logs for repair errors.
Having nodetool repair return an exit code for some of these failures
would be *very* useful.
Cheers,
Edward
Hello.
In the process of trying to streamline and provide better reporting
for various data storage systems, I've realized that although we're
verifying that nodetool repair runs, we're not verifying that it is
successful.
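A minimal sketch of the log-scanning idea (the grep patterns are assumptions; adjust them to the messages your Cassandra version actually logs):

```shell
# Print "repair ok" and return 0 if no repair-related errors are found in
# the given log file; otherwise print "repair FAILED" and return 1.
check_repair_log() {
    log_file="$1"
    if grep -Eq 'AntiEntropy.*(Fatal exception|session failed)' "$log_file"; then
        echo "repair FAILED"
        return 1
    fi
    echo "repair ok"
}
```

e.g. run `check_repair_log /var/log/cassandra/system.log` from cron right after the repair job and alert on a non-zero exit.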
I found a bug relating to the exit code for nodetool repair, where
I think this JIRA answers your question:
https://issues.apache.org/jira/browse/CASSANDRA-2610
in which, in order not to duplicate work (creation of Merkle trees), repair
is done on all replicas for a range.
Cheers,
Omid
On Tue, Sep 25, 2012 at 8:27 AM, Sergey Tryuber stryu...@gmail.com wrote:
Hi
Hi Radim
Unfortunately the number of compaction tasks is not overestimated. The number
is decremented one-by-one and this process takes several hours for our 40GB
node(( Also, when a lot of compaction tasks appear, we see that total disk
space used (via JMX) is doubled and Cassandra really tries to
Hi Guys
We've noticed a strange behavior on our 3-node staging Cassandra cluster
with RF=2 and LeveledCompactionStrategy. When we run nodetool repair
keyspace cfname -pr on a node, the other nodes start validation
process and when this process is finished one of the other 2 nodes reports
The repair process by itself is going well in the background, but the issue
I'm concerned about is a lot of unnecessary compaction tasks
the number in the compaction tasks counter is overestimated. For example I have
1100 tasks left, and if I stop inserting data all tasks will finish
within 30 minutes.
I
Staggering the repairs also gives the DynamicSnitch a chance to route around
nodes which may be running slow.
On 29/08/2012, at 11:19 AM, Omid Aladini omidalad...@gmail.com wrote:
Secondly,
edward.sargis...@globalrelay.net wrote:
Hi all,
So nodetool repair has to be run regularly on all nodes. Does anybody have
any interesting strategies or tools for doing this or is everybody just
setting up cron to do it?
For example, one could write some Puppet code to splay the cron times around
time.
Is there any reason why cassandra doesn't do nodetool repair out of the box
at some fixed intervals?
On Tue, Aug 28, 2012 at 9:08 PM, Aaron Turner synfina...@gmail.com wrote:
Funny you mention that... I was just hearing on #cassandra this
morning that it repairs the replica set by default. I
Thanks, a very nice approach.
If every nodetool repair uses -pr does that satisfy the requirement to
run a repair before GCGraceSeconds expires? In other words, will we get a
correct result using -pr everywhere?
Secondly, what's the need for sleep 120?
Cheers,
Edward
On 12-08-28 07:03 AM
On Tue, Aug 28, 2012 at 1:42 PM, Edward Sargisson
edward.sargis...@globalrelay.net wrote:
Secondly, what's the need for sleep 120?
just give the cluster a chance to settle down between repairs...
there's no real need for it, it's just there because.
Actually, repair could cause unreplicated data to be streamed and new
sstables to be created. New sstables could cause pending
Hi all,
So nodetool repair has to be run regularly on all nodes. Does anybody
have any interesting strategies or tools for doing this or is everybody
just setting up cron to do it?
For example, one could write some Puppet code to splay the cron times
around so that only one should be running
don't overlap over time.
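One way to sketch the splay idea without Puppet: derive a stable day-of-week from each hostname, so every node picks its own repair day with no central coordination (hypothetical helper; with more nodes than days you would splay the hour as well):

```shell
# Map a hostname to a stable day-of-week (0-6) using its checksum.
repair_day() {
    printf '%s' "$1" | cksum | awk '{ print $1 % 7 }'
}

# Example: emit a crontab line running a weekly repair -pr at 02:00
# on this host's assigned day.
echo "0 2 * * $(repair_day "$(hostname)") /usr/bin/nodetool repair -pr"
```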
database is primarily all
counters and we don't do any
deletes.
Does nodetool repair do anything for such a database? All the docs I read
for nodetool repair suggest
that nodetool repair is needed only if there are deletes.
Since 1.0, repair is only needed if a node crashes. If a node
We are running Cassandra 1.1.2 on EC2. Our database is primarily all
counters and we don't do any
deletes.
Does nodetool repair do anything for such a database? All the docs I read
for nodetool repair suggest
that nodetool repair is needed only if there are deletes.
Thanks,
Senthil
I would take a look at the replication: what's the RF per DC and what does
nodetool ring say? It's hard (as in not recommended) to get NTS with rack
allocation working correctly. Without knowing much more I would try to understand
what the topology is and if it can be simplified.
Additionally,
From: aa...@thelastpickle.com
Reply-To: user@cassandra.apache.org
Date: Fri, 17 Aug 2012 20:40:54 +1200
To: user@cassandra.apache.org
Subject: Re: nodetool repair uses insane amount of disk space
I would take a look
How come a node would consume 5x its normal data size during the repair
process?
https://issues.apache.org/jira/browse/CASSANDRA-2699
It's likely a variation based on how out of synch you happen to be,
and whether you have a neighbor that's also been repaired and bloated
up already.
My setup
Occasionally as I'm doing my regular anti-entropy repair I end up with a
node that uses an exceptional amount of disk space (node should have about
5-6 GB of data on it, but ends up with 25+GB, and consumes the limited
amount of disk space I have available)
How come a node would consume 5x its
What version are you using? There were issues with repair using lots-o-space in
0.8.X; it's fixed in 1.X
On 17/08/2012, at 2:56 AM, Michael Morris michael.m.mor...@gmail.com wrote:
Occasionally
Upgraded to 1.1.3 from 1.0.8 about 2 weeks ago.
On Thu, Aug 16, 2012 at 5:57 PM, aaron morton aa...@thelastpickle.com wrote:
What version are you using? There were issues with repair using lots-o-space
in 0.8.X, it's fixed in 1.X
I am developing an automated script for our server maintenance. It would
execute a nodetool repair every weekend. We have 3 nodes in DC1 and 3 in
DC2. We are currently on Cassandra 0.8.4.
I am trying to understand effects of what would happens if connectivity
between DC1 and DC2 is lost or couple
be expressly aware of it.
On Sat, Jul 14, 2012 at 2:00 PM, Michael Theroux mthero...@yahoo.com wrote:
Hello,
I'm looking at nodetool repair with the -pr, vs. non -pr option.
Looking around, I'm seeing a lot of conflicting information out there.
Almost universally, the recommendation is to run
We have a 3-node cluster. We use RF of 3 and CL of ONE for both reads
and writes…. Is there a reason I should schedule a regular nodetool
repair job ?
Thanks,
Oleg
[mailto:david.daesch...@gmail.com]
Sent: Tuesday, June 05, 2012 08:59
To: user@cassandra.apache.org
Subject: nodetool repair -pr enough in this scenario?
On Tue, Jun 5, 2012 at 8:44 AM, Viktor Jevdokimov
viktor.jevdoki...@adform.com wrote:
Understand simple mechanics first, decide how to act later.
Without -PR there's no difference from which host
Thank you for all the replies. It has been enlightening to read. I think I
now have a better idea of repair, ranges, replicas and how the data is
distributed. It also seems that using -pr would be the best way to go in my
scenario with 1.x+
Thank you for all the feedback. Glad to see such an
Hello,
Currently I have a 4 node cassandra cluster on CentOS64. I have been
running nodetool repair (no -pr option) on a weekly schedule like:
Host1: Tue, Host2: Wed, Host3: Thu, Host4: Fri
In this scenario, if I were to add the -pr option, would this still be
sufficient to prevent forgotten
and ran nodetool repair again. The entire cluster of 6 nodes was
repaired in 10 hours. I am also contemplating since all the 6 nodes are
replicas of each other, do I even need to run repair on all the nodes?
Wouldn't running it on the first node suffice, since it will repair all the
ranges its
On Sat, May 19, 2012 at 8:14 AM, Raj N raj.cassan...@gmail.com wrote:
Hi experts,
[ repair seems to be hanging forever ]
https://issues.apache.org/jira/browse/CASSANDRA-2433
Affects 0.8.4.
I also believe there is a contemporaneous bug (reported by Stu Hood?)
regarding failed repair resulting
to
be a *lot* of hints.
The third is that compaction has fallen behind.
This week it's even worse, the nodetool repair has been running for the last
15 hours just on the first node and when I run nodetool compactionstats I
constantly see this -
pending tasks: 3
First check the logs
113427455640312814857969558651062452224
DC2 RAC9 Up Normal 50.83 GB 0.00% 113427455640312814857969558651062452225
They are all replicas of each other. All reads and writes are done at
LOCAL_QUORUM. We are on Cassandra 0.8.4. I see that our weekend nodetool
repair runs for more
Hi All,
Do I have to do maintenance nodetool repair on CFs that do not have
deletions?
I only perform deletes on two column families in my cluster.
Thanks
As per the documentation, you don't have to if you don't delete or update.
Thanks Kamal
On 05/13/2012 07:18 PM, Thanh Ha wrote:
Hi All,
Do I have to do maintenance nodetool repair on CFs that do not have
deletions?
Probably you should (depending on how you do reads), if your nodes for some
reason have different data (like connectivity problems, node down, etc).
I only perform
I have a 6 node cassandra cluster DC1=3, DC2=3 with 60 GB data on each node.
I was bulk loading data over the weekend. But we forgot to turn off the
weekly nodetool repair job. As a result, repair was interfering when we were
bulk loading data. I canceled repair by restarting the nodes. But
unfortunately after the restart
My cluster is very small (300 MB) and compact was taking more than 2 hours.
I ended up bouncing all the nodes. After that, I was able to run repair
on all nodes, and each one takes less than a minute.
If this happens again I will be sure to run compactionstats and netstats.
Thanks for that
How much data do you have and how long is a while? In my experience repairs
can take a very long time. Check to see if validation compactions are running
(nodetool compactionstats) or if files are streaming (nodetool netstats). If
either of those are in progress then your repair should be
I am running 1.0.8. I am adding a new data center to an existing cluster.
Following steps outlined in another thread on the mailing list, things went
fine except for the last step, which is to run repair on all the nodes in
the new data center. Repair seems to be hanging indefinitely. There is
Hello
I have the following question: if we read and write to a Cassandra cluster with
QUORUM consistency level, does this allow us to not call nodetool repair
regularly? (i.e. every GCGraceSeconds)
--
With kind regards,
Robin Verlangen
www.robinverlangen.nl
http://wiki.apache.org/cassandra/Operations#Repairing_missing_or_inconsistent_data
(point 2)
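The quorum-overlap arithmetic behind that answer, as a sketch: QUORUM is floor(RF/2)+1, so quorum reads and writes always intersect in at least one replica. That covers read consistency for live data, but it does not remove the need to repair within GCGraceSeconds when you delete (a missed tombstone can resurrect data):

```shell
# Quorum overlap check: reads and writes intersect when R + W > RF.
RF=3
QUORUM=$(( RF / 2 + 1 ))            # 2 when RF=3
echo "quorum=$QUORUM"
if [ $(( QUORUM + QUORUM )) -gt "$RF" ]; then
    echo "quorum reads and writes overlap in at least one replica"
fi
```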
Next time I will finish my morning coffee first :)
A
On 23/12/2011, at 5:08 AM, Peter Schuller wrote:
The ring is balanced and the difference is pretty small.
One other thing to consider: are you creating a few very large rows? You can
check the min, max and average row size using nodetool cfstats.
If all is fine, don't worry about it. If you want to see the numbers get closer,
nodetool
Normally I agree, but assuming the two-node cluster has RF 2 it would
actually not matter ;)
--
/ Peter Schuller (@scode,
I have been playing around with Cassandra for a few months now. Starting to
explore more of the routine maintenance and backup strategies and I have a
general question about nodetool repair. After reading the following page:
http://www.datastax.com/docs/0.8/operations/cluster_management it has
Could the lack of routine repair be why nodetool ring reports: node(1) Load
- 78.24 MB and node(2) Load - 67.21 MB? The load span between the two
nodes has been increasing ever so slowly...
No.
Generally there will be a variation in load depending on what state
compaction happens to be in on
Hello all,
Right now, I have 10 machines running Cassandra 0.8.7, and mostly they are
working fine. However, during a nodetool repair of one machine, I'm seeing:
ERROR [AntiEntropySessions:12] 2011-10-24 11:17:52,154
AbstractCassandraDaemon.java (line 139) Fatal exception in thread
Thread
earlier: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/how-does-compaction-throughput-kb-per-sec-affect-disk-io-td6831711.html
might not directly throttle the disk I/O?
Again: Compaction throttling will throttle compaction, which affects
both CPU and I/O for fundamental
so how about disk I/O? Is there any way to use ionice to control it?
I have tried to adjust the priority with ionice -c3 -p [cassandra pid];
it seems not to be working...
Compaction throttling (and in 1.0 internode streaming throttling) both
address disk I/O.
--
/ Peter Schuller (@scode on twitter)
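For reference, the two knobs being discussed (per-node, runtime; values illustrative, and the yaml equivalents persist across restarts):

```shell
# Throttle compaction (which includes repair's validation compaction), MB/s:
nodetool -h node1 setcompactionthroughput 16

# In 1.0+, throttle inter-node streaming as well, megabits/s:
nodetool -h node1 setstreamthroughput 100

# Persistent equivalents in cassandra.yaml:
#   compaction_throughput_mb_per_sec: 16
#   stream_throughput_outbound_megabits_per_sec: 100
```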
as I asked earlier:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/how-does-compaction-throughput-kb-per-sec-affect-disk-io-td6831711.html
might not directly throttle the disk I/O?
It would be easy if ionice could work with Cassandra; not sure if it is because
of the JVM or something
, just wondering: is it necessary
to add an option, or is there any way to do repair throttling?
Every time I run nodetool repair it uses all disk I/O and the server load
goes up quickly; just wondering if there is any way to make it smoother.
The validating compaction that is part of repair is subject