Hi All,
We are seeing Read-Repair stage failures with Digest Mismatch errors, at a
rate of 300+ per day per node.
At the same time, nodes are getting overloaded for a couple of seconds at a
time due to long GC pauses (of around 7-8 seconds). We are not running a
repair.
I would zero in on network throughput, especially inter-rack trunks.
sent from my mobile
Daemeon Reiydelle
skype daemeon.c.m.reiydelle
USA 415.501.0198
On Mar 17, 2017 2:07 PM, "Roland Otta" wrote:
> hello,
>
> we are quite inexperienced with cassandra at the moment
From: Roland Otta <roland.o...@willhaben.at>
Date: Friday, March 17, 2017 at 5:47 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: repair performance
i hadn't noticed that so far. thank you for the hint. i will definitely give it a try
On Fri, 2017-03-17 at 22:32 +0100, benjamin roth wrote:
The fork from thelastpickle is. I'd recommend giving it a try over pure
nodetool.
2017-03-17 22:30 GMT+01:00 Roland Otta:
> forgot to mention the version we are using:
>
> we are using 3.0.7 - so i guess we should have incremental repairs by
> default.
> it also prints out incremental:true when starting a repair
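For anyone trying reaper: registering a cluster and creating a repair run go
roughly like this over its REST API (a sketch only - the port and the
parameter names are assumptions from memory, so verify them against the
reaper documentation):

    # register the cluster via one seed node (hostname is a placeholder)
    curl -X POST "http://localhost:8080/cluster?seedHost=cassandra-seed-1"
    # create a repair run for one keyspace (names are placeholders)
    curl -X POST "http://localhost:8080/repair_run?clusterName=test-cluster&keyspace=my_ks&owner=roland"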
... maybe i should just try increasing the job threads with --job-threads
shame on me
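for reference, that would be something like this (keyspace name is a
placeholder; as far as i know -j is capped at 4):

    # run the repair with 4 parallel job threads instead of the default 1
    nodetool repair -j 4 my_keyspace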
On Fri, 2017-03-17 at 21:30 +0000, Roland Otta wrote:
forgot to mention the version we are using:
we are using 3.0.7 - so i guess we should have incremental repairs by
default.
it also prints out incremental:true when starting a repair
INFO [Thread-7281] 2017-03-17 09:40:32,059 RepairRunnable.java:125 - Starting
repair command #7, repairing
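For comparison, the incremental default can be overridden explicitly (a
sketch; the keyspace name is a placeholder):

    # incremental repair - the default on 2.2+ / 3.0.x
    nodetool repair my_keyspace
    # full repair, restricted to this node's primary token ranges
    nodetool repair -full -pr my_keyspace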
It depends a lot ...
- Repairs can be very slow, yes! (And unreliable, due to timeouts, outages,
whatever)
- You can use incremental repairs to speed things up for regular repairs
- You can use "reaper" to schedule repairs and run them sliced, automated,
failsafe
The time repairs actually may
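To illustrate "sliced": reaper splits the ring into token subranges and
repairs them one at a time, which can be approximated by hand (a sketch;
the keyspace name and token values are placeholders):

    # repair a single token subrange of one keyspace
    nodetool repair -st -9223372036854775808 -et -4611686018427387904 my_ks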
hello,
we are quite inexperienced with cassandra at the moment and are playing
around with a new cluster we built up for getting familiar with
cassandra and its possibilities.
while getting familiar with that topic we recognized that repairs in
our cluster take a long time. To get an idea of our
Monitor the repair using nodetool compactionstats to see the Merkle trees
being created, and nodetool netstats to see data streaming.
Also look in the logs for messages from AntiEntropyService.java; they will
tell you how long the node waited for each replica to get back to it.
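For example (a sketch; run these on the node doing the repair):

    # validation compactions correspond to Merkle tree construction
    watch -n 10 nodetool compactionstats
    # active streaming sessions; an idle node reports "Not sending any streams"
    nodetool netstats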
Cheers
Hi,
most of it has been resolved - the FAILED TO UNCOMPRESS error was really a
bug in cassandra (see
https://issues.apache.org/jira/browse/CASSANDRA-5391), and the problem
with different load reporting is a change between 1.2.1 (which reports 100%
for the 3 replicas / 3 nodes / 2 DCs setup I have) and 1.2.3, which
During one of my tests - see this thread in this mailing list:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/java-io-IOException-FAILED-TO-UNCOMPRESS-5-exception-when-running-nodetool-rebuild-td7586494.html
That thread has been updated; check the bug Ondrej created.
Hi all,
I have 2 DCs with 3 nodes each, RF=3, and I use LOCAL_QUORUM for both reads
and writes.
Currently I am testing various operational qualities of the setup.
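For reference, the keyspace is defined along these lines (a sketch;
keyspace, table, and DC names are placeholders):

    # RF 3 per datacenter via NetworkTopologyStrategy
    cqlsh -e "CREATE KEYSPACE IF NOT EXISTS test_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"
    # LOCAL_QUORUM is set per session/statement, e.g. in cqlsh:
    cqlsh -e "CONSISTENCY LOCAL_QUORUM; SELECT count(*) FROM test_ks.some_table;"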
During one of my tests - see this thread in this mailing list: