Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Operations" page has been changed by Chris Goffinet.
http://wiki.apache.org/cassandra/Operations?action=diff&rev1=20&rev2=21

--------------------------------------------------

  === Handling failure ===
  If a node goes down and comes back up, the ordinary repair mechanisms will be adequate to deal with any inconsistent data. If a node is lost permanently, you have two options:
  
+  1. (Recommended approach) Bring up the replacement node with a new IP address and !AutoBootstrap set to true in storage-conf.xml. This will place the replacement node in the cluster, find the appropriate position automatically, and begin the bootstrap process. The node will not serve reads until bootstrap finishes.
-  1. (Recommended approach) Run `nodeprobe removetoken` on all live nodes. You 
will need to supply the token of the dead node. You can obtain this by running 
`nodeprobe ring` on any live node to find the token (Unless there was some kind 
of outage, and the others came up but not the down one).
-   
  
-  Next, bring up the replacement node with a new IP address, and 
!AutoBootstrap set to true in storage-conf.xml. This will place the replacement 
node in the cluster and find the appropriate position automatically. Then the 
bootstrap process begins. While this process runs, the node will not receive 
reads until finished. 
+ Once bootstrap is finished on the replacement node, run `nodeprobe removetoken` on all live nodes, supplying the token of the dead node. You can obtain this token by running `nodeprobe ring` on any live node (unless there was a wider outage and the other nodes came back up while the dead one did not).
  
   1. (Advanced approach) Bring up a replacement node with the same IP and token as the old one, and run `nodeprobe repair`. Until the repair process is complete, clients reading only from this node may get no data back; using a higher !ConsistencyLevel on reads will avoid this. You can obtain the old token by running `nodeprobe ring` on any live node (unless there was a wider outage and the other nodes came back up while the dead one did not).
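 Taken together, the recommended replacement procedure can be sketched as a short shell session. The `nodeprobe` invocations are left as comments since they require a live cluster; the sample `ring` output, addresses, and token values below are hypothetical, and the exact column layout may differ between versions.
 
 ```shell
 # Hypothetical `nodeprobe ring` output; real column layout may vary.
 ring_output='Address    Status  Load    Token     Ring
 10.0.0.1   Up      1.2GB   3074457   |<--|
 10.0.0.2   Down    1.1GB   9223372   |   |
 10.0.0.3   Up      1.3GB   15372286  |-->|'
 
 # Extract the token (4th column in this sample) of the node reported Down.
 dead_token=$(printf '%s\n' "$ring_output" | awk '$2 == "Down" { print $4 }')
 echo "dead token: $dead_token"
 
 # The procedure itself (not run here):
 #   1. On the replacement node, set AutoBootstrap to true in
 #      storage-conf.xml and start it with a new IP address.
 #   2. Wait for bootstrap to finish.
 #   3. On any live node: nodeprobe -host 10.0.0.1 removetoken "$dead_token"
 ```
 
 Only the token-extraction step is runnable as-is; everything involving `nodeprobe` is a sketch of the sequence described above.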
  
