Mutation dropped and Read-Repair performance issue

2020-12-19 Thread sunil pawar
Hi All, We are facing failures of the Read-Repair stage with Digest Mismatch errors, at a rate of 300+ per day per node. At the same time, nodes get overloaded for a couple of seconds due to long GC pauses (of around 7-8 seconds). We are not running a repair ...
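A first diagnostic pass for this kind of symptom is a sketch like the following, run against an affected node. The log path is an assumption (it varies by packaging); `nodetool tpstats` and `nodetool gcstats` are standard subcommands:

```shell
# Thread-pool backlog and dropped messages: look at the ReadRepairStage
# row and at the "Dropped" section for MUTATION and READ_REPAIR.
nodetool tpstats

# GC statistics accumulated since the last invocation of this command.
nodetool gcstats

# Stop-the-world pauses the JVM reported; log path is an assumption.
grep GCInspector /var/log/cassandra/system.log | tail
```

Dropped mutations plus multi-second GCInspector pauses usually point at heap pressure rather than at read repair itself.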

Re: repair performance

2017-03-20 Thread daemeon reiydelle
I would zero in on network throughput, especially inter-rack trunks. On Mar 17, 2017 2:07 PM, "Roland Otta" wrote: > hello, > > we are quite inexperienced with cassandra at the moment ...

Re: repair performance

2017-03-20 Thread Roland Otta
...ummit-2016 From: Roland Otta <roland.o...@willhaben.at> Date: Friday, March 17, 2017 at 5:47 PM To: "user@cassandra.apache.org" <user@cassandra.apache.org> Subject: Re: repair performance | Did not recognize that so far. Thank you for the hint. I will definitely give it a try ...

Re: repair performance

2017-03-18 Thread Thakrar, Jayesh
...dra-summit-2016 From: Roland Otta <roland.o...@willhaben.at> Date: Friday, March 17, 2017 at 5:47 PM To: "user@cassandra.apache.org" <user@cassandra.apache.org> Subject: Re: repair performance | Did not recognize that so far. Thank you for the hint. I will definitely give it ...

Re: repair performance

2017-03-17 Thread Roland Otta
Did not recognize that so far. Thank you for the hint. I will definitely give it a try. On Fri, 2017-03-17 at 22:32 +0100, benjamin roth wrote: > The fork from thelastpickle is. I'd recommend giving it a try over pure nodetool. 2017-03-17 22:30 GMT+01:00 Roland Otta ...

Re: repair performance

2017-03-17 Thread benjamin roth
The fork from thelastpickle is. I'd recommend giving it a try over pure nodetool. 2017-03-17 22:30 GMT+01:00 Roland Otta: > forgot to mention the version we are using: > > we are using 3.0.7 - so I guess we should have incremental repairs by > default. > it also ...

Re: repair performance

2017-03-17 Thread Roland Otta
... Maybe I should just try increasing the job threads with --job-threads. Shame on me. On Fri, 2017-03-17 at 21:30 +0000, Roland Otta wrote: forgot to mention the version we are using: we are using 3.0.7 - so i guess we should have incremental repairs by default. it also prints out ...
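The flag mentioned here can be combined with a primary-range repair. A minimal sketch, assuming a keyspace named `my_keyspace` (the name is a placeholder, not from the thread):

```shell
# -pr repairs only this node's primary token ranges (run on every node),
# --job-threads repairs up to that many tables in parallel (max 4).
nodetool repair -pr --job-threads 4 my_keyspace
```

Raising the job threads trades repair wall-clock time for extra load on the node, so it is worth watching `nodetool tpstats` while it runs.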

Re: repair performance

2017-03-17 Thread Roland Otta
Forgot to mention the version we are using: we are using 3.0.7 - so I guess we should have incremental repairs by default. It also prints out incremental: true when starting a repair: INFO [Thread-7281] 2017-03-17 09:40:32,059 RepairRunnable.java:125 - Starting repair command #7, repairing ...

Re: repair performance

2017-03-17 Thread benjamin roth
It depends a lot ...
- Repairs can be very slow, yes! (And unreliable, due to timeouts, outages, whatever.)
- You can use incremental repairs to speed things up for regular repairs.
- You can use "reaper" to schedule repairs and run them sliced, automated, failsafe.
The time repairs actually may ...
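The "sliced" approach reaper automates can also be done by hand with subrange repair. A sketch, where the token boundaries and keyspace name are placeholders for illustration only:

```shell
# Full (non-incremental) repair of a single token subrange.
# Repairing many small slices bounds the size of each merkle-tree
# validation and makes a failed slice cheap to retry.
nodetool repair -full \
  -st -9223372036854775808 \
  -et -4611686018427387904 \
  my_keyspace
```

Tools like reaper generate these slices from the ring topology and retry failed ones, which is why the thread recommends it over driving `nodetool repair` directly.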

repair performance

2017-03-17 Thread Roland Otta
Hello, we are quite inexperienced with cassandra at the moment and are playing around with a new cluster we built to get familiar with cassandra and its possibilities. While doing so, we noticed that repairs in our cluster take a long time. To get an idea of our ...

Re: nodetool status inconsistencies, repair performance and system keyspace compactions

2013-04-05 Thread aaron morton
Monitor the repair using nodetool compactionstats to see the merkle trees being created, and nodetool netstats to see data streaming. Also look in the logs for messages from AntiEntropyService.java, which will tell you how long the node waited for each replica to get back to it. Cheers
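The monitoring loop described above can be sketched as the following commands; the log path is an assumption (it depends on the install), and the class name to grep for differs across Cassandra versions:

```shell
# Validation compactions = merkle trees being built for the repair.
nodetool compactionstats

# Active streaming sessions moving out-of-sync data between replicas.
nodetool netstats

# Repair/anti-entropy session timing in the server log
# (AntiEntropyService in 1.x, RepairSession/RepairRunnable in later versions).
grep -E "AntiEntropyService|RepairSession" /var/log/cassandra/system.log | tail
```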

Re: nodetool status inconsistencies, repair performance and system keyspace compactions

2013-04-04 Thread Ondřej Černoš
Hi, most of this has been resolved - the "failed to uncompress" error was really a bug in cassandra (see https://issues.apache.org/jira/browse/CASSANDRA-5391), and the difference in load reporting is a change between 1.2.1 (which reports 100% for the 3 replicas/3 nodes/2 DCs setup I have) and 1.2.3, which ...

Re: nodetool status inconsistencies, repair performance and system keyspace compactions

2013-03-27 Thread aaron morton
During one of my tests - see this thread in this mailing list: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/java-io-IOException-FAILED-TO-UNCOMPRESS-5-exception-when-running-nodetool-rebuild-td7586494.html That thread has been updated; check the bug Ondrej created. How ...

nodetool status inconsistencies, repair performance and system keyspace compactions

2013-03-26 Thread Ondřej Černoš
Hi all, I have 2 DCs, 3 nodes each, RF: 3, and I use LOCAL_QUORUM for both reads and writes. Currently I am testing various operational qualities of the setup. During one of my tests - see this thread in this mailing list: ...
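For reference, the replica math behind this setup: LOCAL_QUORUM needs a majority of replicas in the local datacenter only, i.e. floor(RF/2) + 1. A small sketch of the arithmetic:

```shell
# Replicas a LOCAL_QUORUM read or write must reach in the local DC.
RF=3
QUORUM=$(( RF / 2 + 1 ))
echo "RF=$RF -> LOCAL_QUORUM needs $QUORUM local replicas"
# With RF=3 this is 2, so one node per DC can be down
# without LOCAL_QUORUM requests failing.
```

This is also why digest mismatches between the two replicas consulted on a read surface as read repair, as in the first message of this digest.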