Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Operations" page has been changed by JonathanEllis.
http://wiki.apache.org/cassandra/Operations?action=diff&rev1=22&rev2=23

--------------------------------------------------

  Cassandra is smart enough to transfer data from the nearest source node(s), 
if your !EndpointSnitch is configured correctly.  So, the new node doesn't need 
to be in the same datacenter as the primary replica for the Range it is 
bootstrapping into, as long as another replica is in the datacenter with the 
new one.
  
  == Removing nodes entirely ==
- You can take a node out of the cluster with `nodeprobe decommission` to a 
live node, or `nodeprobe removetoken` (to any other machine) to remove a dead 
one.
+ You can take a node out of the cluster by running `nodeprobe decommission` 
against a live node, or `nodeprobe removetoken` (against any other machine) to 
remove a dead one.  This will assign the ranges the old node was responsible 
for to other nodes, and replicate the appropriate data there.
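+ As a rough sketch, the corresponding commands look something like the 
following (hostnames and the token are placeholders, and the `-host` 
connection flag may differ by release):
+ {{{
+ # retire a live node: run decommission against the node itself
+ nodeprobe -host <node-being-removed> decommission
+ # remove a dead node: run removetoken from any other machine,
+ # supplying the dead node's token
+ nodeprobe -host <any-live-node> removetoken <token-of-dead-node>
+ }}}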
  
- Again, no data is removed automatically, so if you want to put the node back 
into service and you don't need the data on it anymore, it should be removed 
manually.
+ No data is removed automatically from the node being decommissioned, so if 
you want to put the node back into service at a different token on the ring, 
the old data should be removed manually.
  
  === Moving nodes ===
  `nodeprobe move`: move the target node to a given Token. Moving is 
essentially a convenience over decommission + bootstrap.
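+ For example, a sketch of a move (the Token value is purely illustrative, and 
the `-host` connection flag may differ by release):
+ {{{
+ # run against the node you want to move; it leaves the ring and
+ # bootstraps back in at the new Token
+ nodeprobe -host <node-to-move> move 85070591730234615865843651857942052864
+ }}}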
@@ -86, +86 @@

  === Handling failure ===
  If a node goes down and comes back up, the ordinary repair mechanisms will be 
adequate to deal with any inconsistent data.  If a node goes down entirely, 
then you have two options:
  
-  1. (Recommended approach) Bring up the replacement node with a new IP 
address, and !AutoBootstrap set to true in storage-conf.xml. This will place 
the replacement node in the cluster and find the appropriate position 
automatically. Then the bootstrap process begins. While this process runs, the 
node will not receive reads until finished. Once this process is finished on 
the replacement node, run `nodeprobe removetoken` on all live nodes. You will 
need to supply the token of the dead node. You can obtain this by running 
`nodeprobe ring` on any live node to find the token (Unless there was some kind 
of outage, and the others came up but not the down one).
+  1. (Recommended approach) Bring up the replacement node with a new IP 
address and !AutoBootstrap set to true in storage-conf.xml. This will place 
the replacement node in the cluster at the appropriate position automatically 
and begin the bootstrap process; the node will not receive reads until 
bootstrap finishes. Once bootstrap is finished on the replacement node, run 
`nodeprobe removetoken` once, supplying the token of the dead node, and 
`nodeprobe cleanup` on each node (see the example commands after this list).
+  * You can obtain the dead node's token by running `nodeprobe ring` on any 
live node, unless there was some kind of outage and the other nodes came back 
up without the dead one; in that case, you can retrieve the token from the 
live nodes' system tables.
  
-  1. (Advanced approach) Bring up a replacement node with the same IP and 
token as the old, and run `nodeprobe repair`. Until the repair process is 
complete, clients reading only from this node may get no data back.  Using a 
higher !ConsistencyLevel on reads will avoid this. You can obtain the old token 
by running `nodeprobe ring` on any live node to find the token (Unless there 
was some kind of outage, and the others came up but not the down one).
+  1. (Alternative approach) Bring up a replacement node with the same IP and 
token as the old, and run `nodeprobe repair`. Until the repair process is 
complete, clients reading only from this node may get no data back.  Using a 
higher !ConsistencyLevel on reads will avoid this. 
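+ A sketch of the recommended sequence once the replacement node has finished 
bootstrapping (hostnames and the token are placeholders, and the `-host` 
connection flag may differ by release):
+ {{{
+ # find the dead node's token from any live node
+ nodeprobe -host <live-node> ring
+ # remove the dead node's token from the ring (run this once)
+ nodeprobe -host <live-node> removetoken <token-of-dead-node>
+ # on each live node, discard old Hinted Handoff writes stored for the dead node
+ nodeprobe -host <each-live-node> cleanup
+ }}}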
  
- The reason why you run `nodeprobe removetoken` on all live nodes is so that 
Hinted Handoff can stop collecting writes for the dead node.
+ The reason you run `nodeprobe cleanup` on all live nodes is to remove old 
Hinted Handoff writes stored for the dead node.
  
  == Backing up data ==
  Cassandra can snapshot data while online using `nodeprobe snapshot`.  You can 
then back up those snapshots using any desired system, although leaving them 
where they are is probably the option that makes the most sense on large 
clusters.
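+ For example, a sketch of taking a snapshot on one node (the `-host` 
connection flag may differ by release; repeat or script this for each node in 
the cluster):
+ {{{
+ # create an online snapshot of this node's data
+ nodeprobe -host <node> snapshot
+ }}}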
