Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Operations" page has been changed by JonathanEllis.
The comment on this change is: add notes on repair.
http://wiki.apache.org/cassandra/Operations?action=diff&rev1=55&rev2=56

--------------------------------------------------

  Cassandra repairs data in two ways:
  
   1. Read Repair: every time a read is performed, Cassandra compares the 
versions at each replica (in the background, if the reader requested a low 
consistency level, so as to minimize latency), and the newest version is sent 
to any out-of-date replicas.
-  1. Anti-Entropy: when `nodetool repair` is run, Cassandra performs a major 
compaction, computes a Merkle Tree of the data on that node, and compares it 
with the versions on other replicas, to catch any out of sync data that hasn't 
been read recently.  This is intended to be run infrequently (e.g., weekly) 
since major compaction is relatively expensive.
+  1. Anti-Entropy: when `nodetool repair` is run, Cassandra computes a Merkle 
tree of the data on that node and compares it with the versions on other 
replicas, to catch any out-of-sync data that hasn't been read recently.  This 
is intended to be run infrequently (e.g., weekly), since computing the Merkle 
tree is relatively expensive in disk I/O and CPU: it scans ALL the data on the 
machine (but it is very network-efficient).
+ 
+ Running `nodetool repair`:
+ Like all nodetool operations, repair is non-blocking; it sends the command to 
the given node, but does not wait for the repair to actually finish.  You can 
tell that repair is finished when (a) there are no active or pending tasks in 
the CompactionManager, and then (b) there are no active or pending tasks on 
AE-SERVICE-STAGE.
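
One way to watch step (b) from the shell is to poll `nodetool tpstats` until 
the AE-SERVICE-STAGE row goes idle.  A minimal sketch, assuming a placeholder 
host and the 0.6-era tpstats column layout (Pool Name, Active, Pending, 
Completed); CompactionManager activity is exposed over JMX rather than in 
tpstats, so check step (a) there:

{{{
# Kick off repair; nodetool returns immediately.
nodetool -h 10.0.0.1 repair

# Poll until AE-SERVICE-STAGE shows zero active and zero pending tasks.
# $2 = active, $3 = pending in the assumed tpstats layout; adjust the
# column positions if your output differs.
while nodetool -h 10.0.0.1 tpstats |
      awk '$1 == "AE-SERVICE-STAGE" && ($2 > 0 || $3 > 0) {busy=1}
           END {exit !busy}'
do
  sleep 60
done
echo "AE-SERVICE-STAGE idle on 10.0.0.1"
}}}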
+ 
+ Repair should be run against one machine at a time.  (This limitation will be 
fixed in 0.7.)
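
Until then, one way to stay within that limit is to repair the ring serially, 
waiting for each node to finish before moving on.  A sketch with placeholder 
hosts, reusing the tpstats polling above as the wait step:

{{{
# Repair each node in turn; never run two repairs at once.
# Replace the host list with the members of your ring.
for host in 10.0.0.1 10.0.0.2 10.0.0.3
do
  nodetool -h "$host" repair
  # ... poll tpstats on $host (as above) until AE-SERVICE-STAGE is idle ...
done
}}}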
  
  === Handling failure ===
  If a node goes down and comes back up, the ordinary repair mechanisms will be 
adequate to deal with any inconsistent data.  Remember though that if a node is 
down longer than your configured GCGraceSeconds (default: 10 days), it could 
have missed remove operations permanently.  Unless your application performs no 
removes, you should wipe its data directory, re-bootstrap it, and use 
`removetoken` to remove its old entry from the ring (see below).
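
That recovery sequence might look like the following sketch; the data path, 
host, and token are placeholders, and the real data location is whatever 
DataFileDirectory in your storage-conf.xml points to:

{{{
# On the failed node: stop the Cassandra process, then wipe its data files.
rm -rf /var/lib/cassandra/data/*        # example path only

# Set AutoBootstrap to true in storage-conf.xml, then restart so the node
# re-bootstraps from its neighbors.
bin/cassandra

# From any live node: remove the dead node's old entry from the ring.
# Find the old token with `nodetool -h 10.0.0.2 ring` first.
nodetool -h 10.0.0.2 removetoken <old-token>
}}}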
