Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "Operations" page has been changed by JonathanEllis.
http://wiki.apache.org/cassandra/Operations?action=diff&rev1=4&rev2=5

--------------------------------------------------

  
  == Token selection ==
  
- Using a strong hash function means !RandomPartitioner keys will, on average, 
be evenly spread across the Token space, but you can still have imbalances if 
your Tokens do not divide up the range evenly, so you should specify 
InitialToken to your first nodes as `i * (2**127 / N)` for i = 1 .. N.
+ Using a strong hash function means !RandomPartitioner keys will, on average, 
be evenly spread across the Token space, but you can still have imbalances if 
your Tokens do not divide up the range evenly, so you should specify 
!InitialToken to your first nodes as `i * (2**127 / N)` for i = 1 .. N.
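  The formula above can be sketched as follows; this is a minimal illustration (the node count `N = 4` is an example value, not from the page) of computing evenly spaced InitialTokens for !RandomPartitioner's `2**127` token space:

  ```python
  # Evenly spaced InitialTokens for RandomPartitioner (token space 0 .. 2**127).
  # N here is an example cluster size; substitute your own node count.
  N = 4
  tokens = [i * (2**127 // N) for i in range(1, N + 1)]
  for i, t in enumerate(tokens, start=1):
      print(f"node {i}: InitialToken = {t}")
  ```

  Integer division keeps the tokens exact; each adjacent pair of tokens then bounds an equal-sized slice of the hash range.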
  
  With order-preserving partitioners, your key distribution will be application-dependent.  You should still take your best guess at specifying initial tokens (guided by sampling actual data, if possible), but you will be more dependent on active load balancing (see below) and/or adding new nodes to hot spots.
  
@@ -70, +70 @@

  
  If a node goes down and comes back up, the ordinary repair mechanisms will be 
adequate to deal with any inconsistent data.  If a node goes down entirely, you 
should be aware of the following as well:
   1. Remove the old node from the ring first, or bring up a replacement node 
with the same IP and Token as the old; otherwise, the old node will stay part 
of the ring in a "down" state, which will degrade your replication factor for 
the affected Range
-   * If you don't know the Token of the old node, you can retrieve it from any 
of the other nodes' `system` keyspace, ColumnFamily `LocationInfo`, key `L`.
+   * If you don't know the Token of the old node, you can retrieve it from any 
of the other nodes' `system` keyspace, !ColumnFamily `LocationInfo`, key `L`.
   1. Removing the old node, then bootstrapping the new one, may be more 
performant than using Anti-Entropy.  Testing needed.
    * Even brute-force rsyncing of data from the relevant replicas and running 
cleanup on the replacement node may be more performant
  
