Does this exist for Cassandra 3.x?  I know DSE had it in its 3.x-equivalent 
releases, and it seems to be in the Cassandra 4.x cassandra.yaml.  I don’t see 
it here, though:

https://github.com/apache/cassandra/blob/cassandra-3.11/conf/cassandra.yaml
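For comparison, this is roughly what the option looks like in the 4.x 
cassandra.yaml (a sketch from memory — the commented-out default of 3 is my 
recollection, so check your own 4.x conf; I don’t see it anywhere in 3.11):

```yaml
# 4.x cassandra.yaml (ships commented out; not present in 3.x, as far as I can tell)
# Triggers automatic allocation of num_tokens tokens for this node,
# assuming the given replication factor within the local DC:
# allocate_tokens_for_local_replication_factor: 3
```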

BTW:  Wow - what a difference allocate_tokens_* makes.  Having lived in the 
RF=3-with-3-nodes world for so many years, I had no idea.  :-)
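For the archives, the fix from the thread below boils down to the following 
(a sketch; “ies3” is the new DC name from the thread, and “dummy_tokens” is a 
placeholder keyspace name I made up):

```
-- 1) In cqlsh, before bootstrapping any new nodes, create a throwaway
--    keyspace whose replication covers the NEW DC:
CREATE KEYSPACE dummy_tokens
    WITH replication = {'class': 'NetworkTopologyStrategy', 'ies3': '3'};

-- 2) In cassandra.yaml on each node joining the new DC:
--      allocate_tokens_for_keyspace: dummy_tokens
--    then bootstrap the nodes one at a time, and drop dummy_tokens
--    once the DC is built.
```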

> On Mar 16, 2023, at 3:28 am, Bowen Song via user <user@cassandra.apache.org> 
> wrote:
> 
> You may find "allocate_tokens_for_local_replication_factor" more useful than 
> "allocate_tokens_for_keyspace" when you are spinning up a new DC.
> 
> On 16/03/2023 06:25, Max Campos wrote:
>> Update:  I figured out the problem!
>> 
>> The “allocate_tokens_for_keyspace” value needs to be set for a keyspace that 
>> has RF=3 for the DC being added.  I just had the RF=3 set for the existing 
>> DC.
>> 
>> I created a dummy keyspace with RF=3 for the new DC, set 
>> “allocate_tokens_for_keyspace=<dummy ks>” and then added the nodes … voila!  
>> Problem solved!
>> 
>> <Screen Shot 2023-03-15 at 11.21.36 pm.png>
>> 
>>> On Mar 15, 2023, at 10:50 pm, Max Campos <mc_cassand...@core43.com> wrote:
>>> 
>>> Hi All -
>>> 
>>> I’m having a lot of trouble adding a new DC and getting a balanced ring 
>>> (i.e. every node has the same percentage of the token ring).
>>> 
>>> My config:
>>> 
>>> GossipingPropertyFileSnitch
>>> allocate_tokens_for_keyspace: <points to a NetworkTopologyStrategy RF=3 
>>> keyspace in the existing DC>
>>> num_tokens = 16
>>> 
>>> 6 nodes in the new DC / 3 nodes in the existing DC
>>> Cassandra 3.0.23
>>> 
>>> I add the nodes to the new DC one-by-one, waiting for “Startup complete” … 
>>> then create a new test keyspace with RF=3:
>>> 
>>> create keyspace test_tokens with replication =
>>>     {'class': 'NetworkTopologyStrategy', 'ies3': '3'};
>>> 
>>> … but then when I run “nodetool status test_tokens”, I see that the “Owns 
>>> (effective)” is way out of balance (see attached image — “ies3” is the new 
>>> DC):
>>> *.62 / node1 / rack1 - 71.8%
>>> *.63 / node2 / rack2 - 91.4%
>>> *.64 / node3 / rack3 - 91.6%
>>> *.66 / node4 / rack1 - 28.2%
>>> *.67 / node5 / rack2 - 8.6%
>>> *.68 / node6 / rack3 - 8.4%
>>> 
>>> node1 & node2 are seed nodes, along with 2 nodes from the existing DC.
>>> 
>>> How can I get even token distribution — “Owns (effective)” = 50% (or 1/6 of 
>>> the token range for each node)?
>>> 
>>> Also: I’ve made several attempts to figure this out (e.g. all nodes in 1 
>>> rack?  each node in its own rack?  2 nodes per rack?).  Between each 
>>> attempt I’m running “nodetool decommission” one-by-one,  blowing away 
>>> /var/lib/cassandra/*, etc.  Is it possible that the existing DC’s gossip is 
>>> remembering the token range & thus causing problems when I recreate the new 
>>> DC with some other configuration parameters?  Do I need to do something to 
>>> clear out the gossip between attempts?
>>> 
>>> Thanks everyone.
>>> 
>>> - Max
>>> 
>>> <Screen Shot 2023-03-15 at 7.19.50 pm.png>
>>> 
>> 

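One quick sanity check on the “Owns (effective)” figures in the quoted thread: 
within a single DC those percentages should total RF × 100%, and even the badly 
imbalanced numbers above do (3 × 100% = 300%) — the imbalance is in how that 
total is split, not in the total itself.  A quick check, using the percentages 
as reported:

```python
# "Owns (effective)" per node in the new DC, as reported by nodetool status
owns = {
    "node1": 71.8, "node2": 91.4, "node3": 91.6,
    "node4": 28.2, "node5": 8.6, "node6": 8.4,
}
rf = 3  # NetworkTopologyStrategy replication factor for the DC

# Effective ownership in one DC always sums to rf * 100%
total = sum(owns.values())
print(round(total, 1))  # → 300.0

# What a balanced 6-node DC would show per node
print(rf * 100 / len(owns))  # → 50.0
```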