Your us-east datacenter has RF=2 and 2 racks, which is the right way
to do it (I would rarely recommend using a different number of racks
than your RF). But Cassandra distributes the data so that no two
copies of the same partition exist on the same rack, so by having
three nodes on one rack (1b) and only one on the other (1a), you are
forcing that lone 1a node to hold a full copy of every partition.

So with each rack owning 100% of the data (one full replica apiece),
there is no way to distribute your data evenly among those four
nodes.
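
Here is a minimal sketch (not Cassandra's actual code) of rack-aware
placement, assuming NetworkTopologyStrategy (the per-DC replica
counts in your output suggest that is what you are using), with the
tokens and racks taken from your ring output below. It shows why
x.x.x.3, the only node on rack 1a, ends up owning 100%:

# Simplified model of rack-aware replica placement. Assumes
# RF <= number of racks; the real strategy also handles the fallback.
RING = [  # (token, node, rack), sorted by token
    (0,                                        "x.x.x.3", "1a"),
    (42535295865117307932921825928971026432,   "x.x.x.4", "1b"),
    (85070591730234615865843651857942052164,   "x.x.x.2", "1b"),
    (127605887595351923798765477786913079296,  "x.x.x.1", "1b"),
]
RF = 2

def replicas(token):
    # Walk the ring clockwise from the token, taking one node per rack.
    start = next((i for i, (t, _, _) in enumerate(RING) if t >= token), 0)
    picked, seen_racks = [], set()
    for i in range(len(RING)):
        _, node, rack = RING[(start + i) % len(RING)]
        if rack not in seen_racks:
            picked.append(node)
            seen_racks.add(rack)
        if len(picked) == RF:
            break
    return picked

# x.x.x.3 is the only node that can satisfy "one replica on rack 1a",
# so it appears in the replica set of every partition:
import random
print(all("x.x.x.3" in replicas(random.randrange(2**127))
          for _ in range(1000)))  # -> True

With two nodes per rack, each rack's full copy could instead be split
50/50 between its two nodes.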

tl;dr: Switch node 2 to rack 1a, so that each rack has two nodes.

-Tupshin



On Mon, Apr 7, 2014 at 8:08 AM, Oleg Dulin <oleg.du...@gmail.com> wrote:
> I added two more nodes on Friday, and moved tokens around.
>
> For four nodes, the tokens should be:
>
>  Node #1:                                        0
>  Node #2:   42535295865117307932921825928971026432
>  Node #3:   85070591730234615865843651857942052864
>  Node #4:  127605887595351923798765477786913079296
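>
> A quick sketch of where those numbers come from, assuming
> RandomPartitioner, whose token range is [0, 2**127):
>
> # Evenly spaced initial tokens for an N-node ring under
> # RandomPartitioner.
> N = 4
> for i in range(N):
>     print("Node #%d: %d" % (i + 1, i * (2**127 // N)))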
>
> And yet my ring status shows this (for a specific keyspace). RF=2.
>
> Datacenter: us-east
> ==========
> Replicas: 2
>
> Address   Rack   Status  State    Load       Owns      Token
>                                                        42535295865117307932921825928971026432
> x.x.x.1   1b     Up      Normal   13.51 GB   25.00%    127605887595351923798765477786913079296
> x.x.x.2   1b     Up      Normal   4.46 GB    25.00%    85070591730234615865843651857942052164
> x.x.x.3   1a     Up      Normal   62.58 GB   100.00%   0
> x.x.x.4   1b     Up      Normal   66.71 GB   50.00%    42535295865117307932921825928971026432
>
> Datacenter: us-west
> ==========
> Replicas: 1
>
> Address   Rack   Status  State    Load       Owns      Token
> x.x.x.5   1b     Up      Normal   62.72 GB   100.00%   100
> --
> Regards,
> Oleg Dulin
> http://www.olegdulin.com
>
>
