If that's what tracing is telling you, then it's fine and just a product of
data distribution (note that your token counts aren't identical across nodes anyway).

If you're doing CL ONE queries directly against particular nodes and
getting different results, it sounds like dropped mutations, streaming
errors, and/or timeouts. Does running repair, or reading at CL ALL, give
you an accurate total record count?
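
For example, something along these lines (a rough sketch; the keyspace, table and
host come from the messages further down in this thread, adjust to your setup):

    # repair the keyspace, then re-check the count
    nodetool repair service_dictionary

    cqlsh 40.0.0.205
    cqlsh> CONSISTENCY ALL;
    cqlsh> SELECT count(*) FROM service_dictionary.table1;

If the count is right at CL ALL (or after repair) but wrong at CL ONE, the replicas
are out of sync rather than the data being gone.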

nodetool tpstats should help identify dropped mutations after a bootstrap, but
you also want to monitor the logs for any errors (in general this is always
good advice for any system). There could be a myriad of problems with
bootstrapping new nodes; usually it comes down to under-provisioning.
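
As a rough sketch (the log path assumes a default package install, adjust to yours):

    # dropped messages are summarised at the bottom of tpstats, per message type
    nodetool tpstats

    # scan the system log for warnings, errors and dropped-mutation messages
    grep -iE "ERROR|WARN|dropped" /var/log/cassandra/system.log

Non-zero MUTATION drops, or dropped-mutation warnings appearing right after the
bootstrap, would point at the nodes being overloaded while streaming.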

On Mon, Sep 7, 2015 at 8:19 AM Alain RODRIGUEZ <arodr...@gmail.com> wrote:

> Hi Sara,
>
> Can you detail the actions performed, like how you load data, what the scaleup /
> scaledown steps are, and specify whether you let the node decommission fully (streams
> finished, node removed from nodetool status), etc.?
>
> This would help us to help you :).
>
> Also, what happens if you query using "CONSISTENCY LOCAL_QUORUM;" (or ALL)
> before your select? If you're not using cqlsh, set the consistency level of your
> client to LOCAL_QUORUM or ALL and try the select again.
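>
> For example, in cqlsh (a minimal sketch; the table name is taken from your
> message further down):
>
>     CONSISTENCY LOCAL_QUORUM;
>     SELECT count(*) FROM service_dictionary.table1;
>
> Running the same two statements against each node and comparing the counts
> should tell you whether the coordinators are simply seeing different replicas.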
>
> Also, I am not sure of the meaning of this --> " i'm affecting to each of
> my node a different token based on there ip address (the token is A+B+C+D
> and the ip is A.B.C.D)". Aren't you using RandomPartitioner or
> Murmur3Partitioner?
>
> C*heers,
>
> Alain
>
>
>
> 2015-09-07 12:01 GMT+02:00 Edouard COLE <edouard.c...@rgsystem.com>:
>
>> Please, don't mail me directly
>>
>> I read your answer, but I cannot help anymore
>>
>> And answering with "Sorry, I can't help" is pointless :)
>>
>> Wait for the community to answer
>>
>> From: ICHIBA Sara [mailto:ichi.s...@gmail.com]
>> Sent: Monday, September 07, 2015 11:34 AM
>> To: user@cassandra.apache.org
>> Subject: Re: cassandra scalability
>>
>> When there's a scaledown action, I make sure to decommission the node
>> first. But still, I don't understand why I'm having this behaviour. Is it
>> normal? What do you normally do to remove a node? Is it related to tokens?
>> I'm assigning each of my nodes a different token based on its IP
>> address (the token is A+B+C+D and the IP is A.B.C.D).
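>>
>> To be concrete, the relevant part of cassandra.yaml on the 40.0.0.205 node looks
>> roughly like this (a sketch, values illustrative):
>>
>>   num_tokens: 245     # = 40 + 0 + 0 + 205, i.e. A+B+C+D for the IP A.B.C.D
>>   # initial_token:    # left unset, so the token values themselves are chosen automatically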
>>
>> 2015-09-07 11:28 GMT+02:00 ICHIBA Sara <ichi.s...@gmail.com>:
>> At the beginning it looks like this:
>>
>> [root@demo-server-seed-k6g62qr57nok ~]# nodetool status
>> Datacenter: DC1
>> ===============
>> Status=Up/Down
>> |/ State=Normal/Leaving/Joining/Moving
>> --  Address     Load       Tokens  Owns  Host ID                               Rack
>> UN  40.0.0.208  128.73 KB  248     ?     6e7788f9-56bf-4314-a23a-3bf1642d0606  RAC1
>> UN  40.0.0.209  114.59 KB  249     ?     84f6f0be-6633-4c36-b341-b968ff91a58f  RAC1
>> UN  40.0.0.205  129.53 KB  245     ?     aa233dc2-a8ae-4c00-af74-0a119825237f  RAC1
>>
>>
>>
>>
>> [root@demo-server-seed-k6g62qr57nok ~]# nodetool status service_dictionary
>> Datacenter: DC1
>> ===============
>> Status=Up/Down
>> |/ State=Normal/Leaving/Joining/Moving
>> --  Address     Load       Tokens  Owns (effective)  Host ID                               Rack
>> UN  40.0.0.208  128.73 KB  248     68.8%             6e7788f9-56bf-4314-a23a-3bf1642d0606  RAC1
>> UN  40.0.0.209  114.59 KB  249     67.8%             84f6f0be-6633-4c36-b341-b968ff91a58f  RAC1
>> UN  40.0.0.205  129.53 KB  245     63.5%             aa233dc2-a8ae-4c00-af74-0a119825237f  RAC1
>>
>> The result of the query "select * from service_dictionary.table1;" gave me:
>> 70 rows from 40.0.0.205
>> 64 rows from 40.0.0.209
>> 54 rows from 40.0.0.208
>>
>> 2015-09-07 11:13 GMT+02:00 Edouard COLE <edouard.c...@rgsystem.com>:
>> Could you provide the result of:
>> - nodetool status
>> - nodetool status YOURKEYSPACE
>>
>>
>>
--
Regards,

Ryan Svihla
