Hi,
We are also experiencing the same issue. We have three DCs (DC1 RF=3, DC2
RF=3, DC3 RF=1). If we use LOCAL_QUORUM, we should not lose any data, right?
If we use LOCAL_ONE, we may lose data, and therefore need to run repair regularly?
Could anyone advise?
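For context, here is a small sketch of the replica-count arithmetic behind that question, assuming the usual rule that LOCAL_QUORUM requires floor(RF/2) + 1 acknowledgements in the coordinator's DC while LOCAL_ONE requires only one (the DC names and RF values are the ones from the cluster above):

```python
def local_quorum(rf: int) -> int:
    """Replicas that must acknowledge a LOCAL_QUORUM read or write."""
    return rf // 2 + 1

# RF per DC as described in the question above.
for dc, rf in {"DC1": 3, "DC2": 3, "DC3": 1}.items():
    print(f"{dc}: RF={rf}, LOCAL_QUORUM needs {local_quorum(rf)}, LOCAL_ONE needs 1")
```

With RF=3, LOCAL_QUORUM writes and reads each touch 2 of 3 replicas, so they always overlap on at least one replica holding the latest write; with LOCAL_ONE, a write acknowledged by a single replica can be lost if that replica's disk dies before repair propagates it, which is why regular repair matters much more at LOCAL_ONE.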


Thanks






------------------ Original Message ------------------
From: "Jon Haddad" <[email protected]>
Sent: Friday, July 28, 2017, 1:37 AM
To: "user" <[email protected]>

Subject: Re: Data Loss irreparably so



We (The Last Pickle) maintain an open source tool to help manage repairs across
your clusters called Reaper.  It’s a lot easier to set up and run than
managing repairs through cron.


http://thelastpickle.com/reaper.html

On Jul 27, 2017, at 12:38 AM, Daniel Hölbling-Inzko 
<[email protected]> wrote:

In that vein: Cassandra supports auto compaction and incremental repair.
Does this mean I have to set up cron jobs on each node to run nodetool repair,
or is this taken care of by Cassandra anyway?
How often should I run nodetool repair?

Greetings Daniel
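For reference, a cron-driven setup like the one asked about above might look something like this (the schedule, log path, and use of -pr are illustrative, not a recommendation; Reaper, mentioned earlier in the thread, automates this instead):

```shell
# Illustrative crontab entry: run a primary-range repair on this node
# every Sunday at 03:00. The -pr flag repairs only this node's primary
# token ranges, so running it on every node within each gc_grace_seconds
# window covers the whole ring without repairing ranges twice.
0 3 * * 0 nodetool repair -pr >> /var/log/cassandra/repair.log 2>&1
```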
Jeff Jirsa <[email protected]> wrote on Thu., July 27, 2017 at 07:48:


 
 On 2017-07-25 15:49 (-0700), Roger Warner <[email protected]> wrote:
 > This is a quick informational question.     I know that Cassandra can detect 
 > failures of nodes and repair them given replication and multiple DC.
 >
 > My question is can Cassandra tell if data was lost after a failure and 
 > node(s) “fixed” and resumed operation?
 >
 
 Sorta concerned by the way you're asking this - Cassandra doesn't "fix" failed 
nodes. It can route requests around a down node, but the "fixing" is entirely 
manual.
 
 If you have a node go down temporarily, and it comes back up (with its disk
intact), you can see it "repair" data through a combination of active
(anti-entropy) repair via nodetool repair, or by watching 'nodetool netstats'
and seeing the read repair counters increase over time (which will happen
naturally as data is requested and mismatches are detected in the data, based
on your consistency level).
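 As a sketch of what that looks like in practice (the keyspace name is a placeholder; exact output fields vary by Cassandra version):

```shell
# Active anti-entropy repair of one keyspace on this node ("my_ks" is
# hypothetical). Compares Merkle trees with the other replicas and
# streams back any data found to be out of sync.
nodetool repair my_ks

# Passive view: under "Read Repair Statistics", the Mismatch (Blocking)
# and Mismatch (Background) counters should tick up over time as stale
# replicas are detected and fixed during normal reads.
nodetool netstats
```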
 
 
 
 ---------------------------------------------------------------------
 To unsubscribe, e-mail: [email protected]
 For additional commands, e-mail: [email protected]
