Re: Replica data distributing between racks

2011-05-04 Thread aaron morton
Eric, Jonathan is suggesting the approach Jeremiah was using. Calculate the tokens for the nodes in each DC independently, and then add 1 to the tokens if there are two nodes with the same token. In your case with 2 DCs with 2 nodes each. In DC 1 node 1 = 0 node 2 =
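The per-DC token calculation described above can be sketched as follows. This is an illustrative script, not code from the thread; the `dc_tokens` helper name and the 2**127 RandomPartitioner ring size are assumptions based on the discussion.

```python
# Sketch of the suggested scheme: compute balanced tokens for each DC
# independently, then offset each later DC by its index so no two nodes
# in the cluster end up with the same token.
RING_SIZE = 2 ** 127  # RandomPartitioner token space (assumed)

def dc_tokens(nodes_per_dc, dc_index):
    """Evenly spaced tokens for one DC, offset by the DC's index."""
    return [i * RING_SIZE // nodes_per_dc + dc_index
            for i in range(nodes_per_dc)]

# Two DCs with two nodes each: DC 1 gets 0 and 2**126,
# DC 2 gets the same tokens shifted by 1.
for dc in range(2):
    print(f"DC{dc + 1}:", dc_tokens(2, dc))
```

With this layout each DC's ring is perfectly balanced on its own, and the +1 offset only exists to keep tokens unique cluster-wide.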

Re: Replica data distributing between racks

2011-05-04 Thread Eric tamme
Jonathan is suggesting the approach Jeremiah was using. Calculate the tokens for the nodes in each DC independently, and then add 1 to the tokens if there are two nodes with the same token. In your case with 2 DCs with 2 nodes each. In DC 1 node 1 = 0 node 2 =

Re: Replica data distributing between racks

2011-05-04 Thread Konstantin Naryshkin
D2R2
8 D1R2 D2R2
9 D1R2 D2R2
Each node is responsible for half of the ring in its own DC. - Original Message - From: Eric tamme eta...@gmail.com To: user@cassandra.apache.org Sent: Wednesday, May 4, 2011 1:58:19 PM Subject: Re: Replica data distributing between racks Jonathan

Re: Replica data distributing between racks

2011-05-04 Thread Eric tamme
On Wed, May 4, 2011 at 10:09 AM, Konstantin Naryshkin konstant...@a-bb.net wrote: The way that I understand it (and that seems to be consistent with what was said in this discussion) is that each DC has its own data space. Using your simplified 1-10 system:
   DC1   DC2
0  D1R1  D2R2
1
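The "each DC has its own data space" reading can be illustrated with a tiny consistent-hashing sketch over the simplified 0-9 ring. The `owner` helper and the token values below are invented for illustration (chosen to be consistent with the table fragments in this thread), not taken from Cassandra's source.

```python
from bisect import bisect_left

def owner(key_token, nodes):
    """nodes: (token, name) pairs for one DC. A node owns the range
    (previous token, its own token], wrapping at the top of the ring,
    so the owner is the node with the smallest token >= the key token."""
    nodes = sorted(nodes)
    tokens = [t for t, _ in nodes]
    idx = bisect_left(tokens, key_token) % len(nodes)
    return nodes[idx][1]

# Illustrative per-DC layouts on the simplified 0-9 ring.
dc1 = [(4, "D1R1"), (9, "D1R2")]
dc2 = [(5, "D2R1"), (0, "D2R2")]
for t in range(10):
    print(t, owner(t, dc1), owner(t, dc2))
```

Because each DC's owner is computed only against that DC's tokens, every key lands on exactly one node per DC, and each node covers half of its own DC's ring.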

Re: Replica data distributing between racks

2011-05-03 Thread aaron morton
I've been digging into this and was able to reproduce something; not sure if it's a fault, and I can't work on it any more tonight. To reproduce: - 2 node cluster on my mac book - set the tokens as if they were nodes 3 and 4 in a 4 node cluster, e.g. node 1 with

Re: Replica data distributing between racks

2011-05-03 Thread Jonathan Ellis
Right, when you are computing balanced RP tokens for NTS you need to compute the tokens for each DC independently. On Tue, May 3, 2011 at 6:23 AM, aaron morton aa...@thelastpickle.com wrote: I've been digging into this and was able to reproduce something, not sure if it's a fault and I

RE: Replica data distributing between racks

2011-05-03 Thread Jeremiah Jordan
Subject: Re: Replica data distributing between racks Right, when you are computing balanced RP tokens for NTS you need to compute the tokens for each DC independently. On Tue, May 3, 2011 at 6:23 AM, aaron morton aa...@thelastpickle.com wrote: I've been digging into this and was able

Re: Replica data distributing between racks

2011-05-03 Thread Eric tamme
On Tue, May 3, 2011 at 10:13 AM, Jonathan Ellis jbel...@gmail.com wrote: Right, when you are computing balanced RP tokens for NTS you need to compute the tokens for each DC independently. I am confused ... sorry. Are you saying that ... I need to change how my keys are calculated to fix this

Re: Replica data distributing between racks

2011-05-03 Thread aaron morton
Jonathan, I think you are saying each DC should have its own (logical) token ring. Which makes sense, as it is the only way to balance the load in each DC. I think most people (including me) assumed there was a single token ring for the entire cluster. But currently two endpoints

Re: Replica data distributing between racks

2011-05-03 Thread Jonathan Ellis
On Tue, May 3, 2011 at 2:46 PM, aaron morton aa...@thelastpickle.com wrote: Jonathan, I think you are saying each DC should have its own (logical) token ring. Right. (Only with NTS, although you'd usually end up with a similar effect if you alternate DC locations for nodes in a ONTS

Re: Replica data distributing between racks

2011-05-03 Thread Eric tamme
On Tue, May 3, 2011 at 4:08 PM, Jonathan Ellis jbel...@gmail.com wrote: On Tue, May 3, 2011 at 2:46 PM, aaron morton aa...@thelastpickle.com wrote: Jonathan, I think you are saying each DC should have its own (logical) token ring. Right. (Only with NTS, although you'd usually end up

Re: Replica data distributing between racks

2011-05-02 Thread aaron morton
That appears to be working correctly, but does not sound great. When the NTS selects replicas in a DC it orders the tokens available in the DC, then (in the first pass) iterates through them placing a replica in each unique rack. e.g. if the RF in each DC was 2, the replicas would be put on
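The per-DC replica selection described above can be sketched like this. It is a simplified illustration assuming the DC's nodes are given as (node, rack) pairs already ordered by token; it is not the actual NetworkTopologyStrategy code.

```python
def pick_replicas(dc_nodes_in_token_order, rf):
    """First pass places one replica per unique rack; a second pass
    fills any remaining slots regardless of rack."""
    replicas, seen_racks = [], set()
    # First pass: walk the DC's nodes in token order, one replica per rack.
    for node, rack in dc_nodes_in_token_order:
        if len(replicas) == rf:
            break
        if rack not in seen_racks:
            seen_racks.add(rack)
            replicas.append(node)
    # Second pass: top up to RF if there were fewer racks than replicas.
    for node, _rack in dc_nodes_in_token_order:
        if len(replicas) == rf:
            break
        if node not in replicas:
            replicas.append(node)
    return replicas

# RF 2 in a DC whose token order is n1 (rack R1), n2 (R1), n3 (R2):
# the first pass skips n2 because R1 already holds a replica.
print(pick_replicas([("n1", "R1"), ("n2", "R1"), ("n3", "R2")], 2))
```

This rack-skipping first pass is what can make per-node load uneven when racks are unbalanced, which is the behaviour being discussed in this thread.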

Re: Replica data distributing between racks

2011-05-02 Thread Jonathan Ellis
On Mon, May 2, 2011 at 2:18 PM, aaron morton aa...@thelastpickle.com wrote: When the NTS selects replicas in a DC it orders the tokens available in the DC, then (in the first pass) iterates through them placing a replica in each unique rack. e.g. if the RF in each DC was 2, the replicas

Re: Replica data distributing between racks

2011-05-02 Thread Eric tamme
On Mon, May 2, 2011 at 3:22 PM, Jonathan Ellis jbel...@gmail.com wrote: On Mon, May 2, 2011 at 2:18 PM, aaron morton aa...@thelastpickle.com wrote: When the NTS selects replicas in a DC it orders the tokens available in the DC, then (in the first pass) iterates through them placing a replica

Re: Replica data distributing between racks

2011-05-02 Thread aaron morton
My bad, I missed the way TokenMetadata.ringIterator() and firstTokenIndex() work. Eric, can you show the output from nodetool ring? Aaron On 3 May 2011, at 07:30, Eric tamme wrote: On Mon, May 2, 2011 at 3:22 PM, Jonathan Ellis jbel...@gmail.com wrote: On Mon, May 2, 2011 at 2:18 PM,

Re: Replica data distributing between racks

2011-05-02 Thread Eric tamme
On Mon, May 2, 2011 at 5:59 PM, aaron morton aa...@thelastpickle.com wrote: My bad, I missed the way TokenMetadata.ringIterator() and firstTokenIndex() work. Eric, can you show the output from nodetool ring? Here is output from nodetool ring - IP addresses changed, obviously. Address

Re: Replica data distributing between racks

2011-05-02 Thread Eric tamme
On Mon, May 2, 2011 at 5:59 PM, aaron morton aa...@thelastpickle.com wrote: My bad, I missed the way TokenMetadata.ringIterator() and firstTokenIndex() work. Eric, can you show the output from nodetool ring? Sorry if the previous paste was way too unformatted; here is a pastie.org link