Re: Why is my cluster imbalanced?

2014-04-07 Thread Tupshin Harper
I recommend RF=3 for most situations, and it would certainly be appropriate
here.

Just remember to add a third rack, and maintain an equal number of nodes in
each rack.
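
For example, assuming the keyspace uses NetworkTopologyStrategy (the
keyspace name below is a placeholder), the change would look something
like this; after it, run nodetool repair on each node so the new
replicas get populated:

    ALTER KEYSPACE "MyKeyspace" WITH replication =
        {'class': 'NetworkTopologyStrategy', 'us-east': 3, 'us-west': 1};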

-Tupshin
On Apr 7, 2014 9:49 AM, "Oleg Dulin"  wrote:

> Tupshin:
>
> For EC2 us-east, would you recommend RF=3? That would make sense,
> wouldn't it...
>
> That's what I'll do for production.
>
> Oleg
>
> --
> Regards,
> Oleg Dulin
> http://www.olegdulin.com
>
>
>


Re: Why is my cluster imbalanced?

2014-04-07 Thread Oleg Dulin

Tupshin:

For EC2 us-east, would you recommend RF=3? That would make sense,
wouldn't it...


That's what I'll do for production.

Oleg

--
Regards,
Oleg Dulin
http://www.olegdulin.com




Re: Why is my cluster imbalanced?

2014-04-07 Thread Oleg Dulin

Excellent, thanks.

--
Regards,
Oleg Dulin
http://www.olegdulin.com




Re: Why is my cluster imbalanced?

2014-04-07 Thread Tupshin Harper
Your us-east datacenter has RF=2 and 2 racks, which is the right way
to do it (I would rarely recommend using a different number of racks
than your RF). But by having three nodes on one rack (1b) and only one
on the other (1a), you are telling Cassandra to distribute the data so
that no two copies of the same partition exist on the same rack.

So with each rack owning 100% of the data, the lone 1a node has to hold
a replica of every partition, and there is no way to distribute your
data evenly among those four nodes.
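
Here is a rough Python sketch of that placement logic (a simplification
of NetworkTopologyStrategy, not Cassandra's actual code; tokens and
racks are taken from your ring output below). It reproduces the
100/50/25/25 ownership you are seeing:

    # Simplified rack-aware placement for one DC with RF=2.
    # Each node is primary for the range ending at its token; with
    # (nearly) evenly spaced tokens, each range is ~25% of the ring.
    ring = [  # (token, node, rack), sorted by token
        (0,                                       "x.x.x.3", "1a"),
        (42535295865117307932921825928971026432,  "x.x.x.4", "1b"),
        (85070591730234615865843651857942052164,  "x.x.x.2", "1b"),
        (127605887595351923798765477786913079296, "x.x.x.1", "1b"),
    ]
    RF = 2

    def replicas(primary):
        # Walk clockwise from the primary, preferring racks not yet
        # used; fall back to any node once we have lapped the ring.
        chosen, racks, i = [], set(), primary
        while len(chosen) < RF:
            _, node, rack = ring[i % len(ring)]
            if node not in chosen and (rack not in racks
                                       or i - primary >= len(ring)):
                chosen.append(node)
                racks.add(rack)
            i += 1
        return chosen

    owns = {node: 0 for _, node, _ in ring}
    for p in range(len(ring)):
        for node in replicas(p):
            owns[node] += 1
    for node in sorted(owns):
        print(node, f"{owns[node] / len(ring):.0%}")
    # x.x.x.1 25%, x.x.x.2 25%, x.x.x.3 100%, x.x.x.4 50%

Because 1a has only one node, every range must put its second replica
there.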

tl;dr Switch node 2 to rack 1a.
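
How to do that depends on your snitch (which you did not mention, so
this is an assumption on my part). With GossipingPropertyFileSnitch it
is a one-line edit on the node you move:

    # conf/cassandra-rackdc.properties on the node being moved
    dc=us-east
    rack=1a

With EC2Snitch the rack is derived from the availability zone, so you
would instead bring the node up in us-east-1a. Either way, be careful:
changing the rack of an already-bootstrapped node silently changes
replica placement, so the safe route is to decommission it and
re-bootstrap it with the new rack, then repair.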

-Tupshin





Why is my cluster imbalanced?

2014-04-07 Thread Oleg Dulin

I added two more nodes on Friday, and moved tokens around.

For four nodes, the tokens should be (see the sketch below for how
these are derived):

Node #1:                                        0
Node #2:   42535295865117307932921825928971026432
Node #3:   85070591730234615865843651857942052864
Node #4:  127605887595351923798765477786913079296
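
Those values are just the RandomPartitioner token space, 0 to 2**127,
split into four equal slices; a quick Python sketch to regenerate them:

    # Evenly spaced initial tokens for the RandomPartitioner ring.
    num_nodes = 4
    for i in range(num_nodes):
        print(f"Node #{i + 1}: {i * (2**127 // num_nodes)}")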

And yet my ring status shows this (for a specific keyspace, with RF=2):

Datacenter: us-east
==
Replicas: 2

Address  Rack  Status  State   Load      Owns     Token
                                                  42535295865117307932921825928971026432
x.x.x.1  1b    Up      Normal  13.51 GB  25.00%   127605887595351923798765477786913079296
x.x.x.2  1b    Up      Normal  4.46 GB   25.00%   85070591730234615865843651857942052164
x.x.x.3  1a    Up      Normal  62.58 GB  100.00%  0
x.x.x.4  1b    Up      Normal  66.71 GB  50.00%   42535295865117307932921825928971026432


Datacenter: us-west
==
Replicas: 1

Address  Rack  Status  State   Load      Owns     Token
x.x.x.5  1b    Up      Normal  62.72 GB  100.00%  100
--
Regards,
Oleg Dulin
http://www.olegdulin.com