[
https://issues.apache.org/jira/browse/CASSANDRA-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Michael Shuler resolved CASSANDRA-5671.
---------------------------------------
Resolution: Invalid
This is really a configuration question for a multi-datacenter setup, and you
would likely get a lot more help using the cassandra user mailing list to get
some ideas on how best to implement your cluster.
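For reference, multi-datacenter behaviour hinges on the snitch configuration
rather than on token generation. A sketch of one common setup, using
GossipingPropertyFileSnitch (this is one option, not the only valid snitch;
the rack name below is an example):

```
# cassandra.yaml -- one common multi-DC snitch choice
endpoint_snitch: GossipingPropertyFileSnitch

# cassandra-rackdc.properties on each node in the HYWRCA02 datacenter
dc=HYWRCA02
rack=RAC1
```

The datacenter names set here must match the names used in the keyspace's
replication map.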
To subscribe: [email protected]
> cassandra automatic token generation issue: each datacenter does not span the
> complete set of tokens in NetworkTopologyStrategy
> ------------------------------------------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-5671
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5671
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Affects Versions: 1.2.5
> Reporter: Rao Repaka
>
> When a route is saved, the save time for some routes is much longer (200ms+)
> than for others (<30ms). On analysis, it looks like the routeId (the primary
> key, which is a UUID) has a token that maps to a different datacenter than the
> current one, so the request is going across datacenters and taking more time.
> We have the following configuration for the keyspace: 2 nodes in each
> datacenter, with a replication factor of 2 per datacenter.
> CREATE KEYSPACE grd WITH replication = {
> 'class': 'NetworkTopologyStrategy',
> 'HYWRCA02': '2',
> 'CHRLNCUN': '2'
> };
> Cassandra Version: Cassandra 1.2.5
> Using Virtual tokens generated (num_tokens: 256)
> partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> On save we are using the consistency level of ONE.
> On read we are using the consistency level of local_quorum.
> So in this case I am expecting the tokens to be generated in such a way that
> each datacenter spans the complete set of tokens, so that a save always goes
> to the local datacenter. Reads, too, should go to the local DC.
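> My understanding of how NetworkTopologyStrategy places replicas, as a rough
> self-contained sketch (not Cassandra's actual code; the node names and tokens
> are made up, and rack awareness is ignored):

```python
# Sketch of NetworkTopologyStrategy-style replica selection: walk the token
# ring clockwise from the key's token and pick replicas per datacenter
# independently, until each DC has its configured replication factor.
# Nodes, tokens, and DC names here are illustrative only.

def replicas_for(token, ring, rf_per_dc):
    """ring: sorted list of (token, node, dc). Returns {dc: [replica nodes]}."""
    # Find the first node whose token is at or after the key's token
    # (wrapping around to the start of the ring if none is).
    start = next((i for i, (t, _, _) in enumerate(ring) if t >= token), 0)
    chosen = {dc: [] for dc in rf_per_dc}
    for i in range(len(ring)):
        _, node, dc = ring[(start + i) % len(ring)]
        if dc in chosen and len(chosen[dc]) < rf_per_dc[dc]:
            chosen[dc].append(node)
    return chosen

# Two nodes per DC, with interleaved (vnode-style) tokens.
ring = sorted([
    (10, "dc1-a", "dc1"), (30, "dc2-a", "dc2"),
    (50, "dc1-b", "dc1"), (70, "dc2-b", "dc2"),
])
rf = {"dc1": 2, "dc2": 2}

# Any token ends up with 2 replicas in each DC, so each DC holds all the data.
print(replicas_for(42, ring, rf))
```

> With RF=2 in each DC, every key gets two replicas in every datacenter, which
> matches the getendpoints output below.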
> Some examples of nodetool getendpoints output:
> [cassdra@hltd217 conf]$ nodetool -h hltd217.hydc.sbc.com -p 20000
> getendpoints grd route 22005151-a250-37b5-bb00-163df3bf0ad6
> 135.201.73.144 (dc2)
> 135.201.73.145 (dc2)
> 150.233.236.97 (dc1)
> 150.233.236.98 (dc1)
> [cassdra@hltd217 conf]$ nodetool -h hltd217.hydc.sbc.com -p 20000
> getendpoints grd route d1e86f4e-6d74-3bf6-8d76-27f41ae18149
> 150.233.236.97 (dc1)
> 135.201.73.144 (dc2)
> 150.233.236.98 (dc1)
> 135.201.73.145 (dc2)
> Not sure if we are missing any configuration. Would really appreciate some
> help.
> thx - srrepaka
--
This message was sent by Atlassian JIRA
(v6.2#6252)