Yes - using NetworkTopologyStrategy

From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Thursday, March 21, 2013 10:22 AM
To: user@cassandra.apache.org
Subject: Re: Question regarding multi datacenter and LOCAL_QUORUM

DEBUG [Thrift:1] 2013-03-19 00:00:53,313 ReadCallback.java (line 79) Blockfor is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.143,/xx.yy.zz.145
DEBUG [Thrift:1] 2013-03-19 00:00:53,334 CassandraServer.java (line 306) get_slice
DEBUG [Thrift:1] 2013-03-19 00:00:53,334 ReadCallback.java (line 79) Blockfor is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.143
DEBUG [Thrift:1] 2013-03-19 00:00:53,366 CassandraServer.java (line 306) get_slice
DEBUG [Thrift:1] 2013-03-19 00:00:53,367 ReadCallback.java (line 79) Blockfor is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.143,/xx.yy.zz.145

This is Read Repair, as controlled by the read_repair_chance and 
dclocal_read_repair_chance CF settings, in action.

"Blockfor" is how many nodes the read operation is going to wait for. When the 
number of nodes in the request is more than blockfor it means Read Repair is 
active, we are reading from all UP nodes and will repair any detected 
differences in the background. Your read is waiting for 2 nodes to respond only 
(including the one we ask for the data.)
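
For reference, blockfor for a quorum-style read is just a simple majority of the 
replicas the consistency level cares about. A minimal sketch in Python, assuming 
LOCAL_QUORUM with 3 replicas in the local DC (the replica count is an assumption 
for illustration, not taken from your cluster):

def blockfor_quorum(replicas):
    # Quorum-style reads block for a simple majority of the relevant replicas.
    return replicas // 2 + 1

print(blockfor_quorum(3))  # -> 2, matching "Blockfor is 2" in the log above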

The odd thing here is that there are only 3 replica nodes in the request. Are you 
using the NetworkTopologyStrategy? If so, I would expect there to be 6 nodes in 
the request with RR, 3 in each DC.
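
For illustration, a rough sketch of the endpoint counts I would expect (the DC 
names and per-DC replica counts below are assumptions, not read from your config):

# Assumed placement for illustration: NetworkTopologyStrategy, 3 replicas per DC.
replicas_per_dc = {"DC1": 3, "DC2": 3}
local_dc = "DC1"

all_replicas = sum(replicas_per_dc.values())  # 6 -> expected with global read repair
local_replicas = replicas_per_dc[local_dc]    # 3 -> expected with dc-local read repair
blockfor = local_replicas // 2 + 1            # 2 -> what the read actually waits for

print(all_replicas, local_replicas, blockfor)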

Cheers


-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 21/03/2013, at 12:38 PM, Tyler Hobbs <ty...@datastax.com> wrote:



On Wed, Mar 20, 2013 at 3:18 PM, Tycen Stafford <tstaff...@medio.com> wrote:
I don't think that's correct for a multi-DC ring, but you'll want to hear a 
final answer from someone more authoritative. I could easily be wrong. Try 
using the built-in token generating tool (token-generator) - I don't seem to 
have it on my hosts (1.1.6 also) so I can't confirm. I used the tokentoolv2.py 
tool (from here http://www.datastax.com/docs/1.0/initialize/token_generation) 
and got the following (which looks to me evenly spaced and not using offsets):

tstafford@tycen-linux:Cassandra$ ./tokentoolv2.py 3 3
{
    "0": {
        "0": 0,
        "1": 56713727820156410577229101238628035242,
        "2": 113427455640312821154458202477256070485
    },
    "1": {
        "0": 28356863910078205288614550619314017621,
        "1": 85070591730234615865843651857942052863,
        "2": 141784319550391026443072753096570088106
    }
}

For multi-DC clusters, the only requirement for a balanced cluster is that all 
tokens within a DC must be balanced; you can basically treat each DC as a 
separate ring (as long as your tokens don't line up exactly).  So, either using 
an offset for the second DC or evenly spacing all nodes is acceptable.
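
A minimal sketch of both approaches in Python, assuming RandomPartitioner's 
0..2**127 token range and 3 nodes per DC (tokentoolv2.py's exact interleaving 
may differ, but the evenly-spaced variant reproduces the output above):

RING_SIZE = 2 ** 127  # RandomPartitioner token space

def evenly_spaced(nodes_per_dc):
    # Space every node in the whole cluster evenly, then deal the tokens
    # out to the DCs round-robin.
    total = sum(nodes_per_dc)
    tokens = [i * RING_SIZE // total for i in range(total)]
    out = {dc: [] for dc in range(len(nodes_per_dc))}
    for i, token in enumerate(tokens):
        out[i % len(nodes_per_dc)].append(token)
    return out

def per_dc_with_offset(nodes_per_dc, offset=100):
    # Balance each DC as its own ring and nudge later DCs by a small
    # offset so no two nodes end up with exactly the same token.
    return {
        dc: [i * RING_SIZE // n + dc * offset for i in range(n)]
        for dc, n in enumerate(nodes_per_dc)
    }

print(evenly_spaced([3, 3]))       # matches the tokentoolv2.py output above
print(per_dc_with_offset([3, 3]))  # the "offset" alternative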

--
Tyler Hobbs
DataStax <http://datastax.com/>
