[ https://issues.apache.org/jira/browse/CASSANDRA-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15148655#comment-15148655 ]

clint martin commented on CASSANDRA-10430:
------------------------------------------

I am also experiencing this issue, using DSE 4.7.3 (Cassandra 2.1.8.689). Load
was reported correctly until I switched my cluster to incremental repair.

# nodetool status
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns    Host ID                               Rack
UN  172.16.10.250  1.76 TB    1       ?       88280120-c7d6-401e-8a75-5726cbb081e8  RAC1
UN  172.16.10.251  2.28 TB    1       ?       3812bbd5-d63d-4bf1-a22b-6c31ce279018  RAC1
UN  172.16.10.252  2.05 TB    1       ?       59028151-892a-4896-89b7-a368cceaddd6  RAC1


I only have 1.3 TB of raw space on each of these nodes, and am actually using
only approximately 385 GB to 468 GB of raw space per node.
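
For anyone who wants to compare the two numbers themselves, this is roughly how
I checked the discrepancy on each node. The data directory path is the default
/var/lib/cassandra layout, so adjust it for your install:

$ nodetool info | grep -i load      # Load as the node itself reports it
$ du -sh /var/lib/cassandra/data    # actual on-disk usage (assumed default data path)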



> "Load" report from "nodetool status" is inaccurate
> --------------------------------------------------
>
>                 Key: CASSANDRA-10430
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10430
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>         Environment: Cassandra v2.1.9 running on 6 node Amazon AWS, vnodes 
> enabled. 
>            Reporter: julia zhang
>             Fix For: 2.1.x
>
>         Attachments: system.log.2.zip, system.log.3.zip, system.log.4.zip
>
>
> After running an incremental repair, nodetool status reports an unbalanced 
> load across the cluster. 
> $ nodetool status mykeyspace
> ||Status||Address ||Load     ||Tokens||Owns (effective)||Host ID                             ||Rack||
> |UN      |10.1.1.1|1.13 TB   |256    |48.5%            |a4477534-a5c6-4e3e-9108-17a69aebcfc0 |RAC1 |
> |UN      |10.1.1.2|2.58 TB   |256    |50.5%            |1a7c3864-879f-48c5-8dde-bc00cf4b23e6 |RAC2 |
> |UN      |10.1.1.3|1.49 TB   |256    |51.5%            |27df5b30-a5fc-44a5-9a2c-1cd65e1ba3f7 |RAC1 |
> |UN      |10.1.1.4|250.97 GB |256    |51.9%            |9898a278-2fe6-4da2-b6dc-392e5fda51e6 |RAC3 |
> |UN      |10.1.1.5|1.88 TB   |256    |49.5%            |04aa9ce1-c1c3-4886-8d72-270b024b49b9 |RAC2 |
> |UN      |10.1.1.6|1.3 TB    |256    |48.1%            |6d5d48e6-d188-4f88-808d-dcdbb39fdca5 |RAC3 |
> It seems that only 10.1.1.4 reports the correct "Load". There are no hints in 
> the cluster, and the report remains the same after running "nodetool cleanup" 
> on each node. "nodetool cfstats" shows the number of keys is evenly 
> distributed, and the Cassandra data directories on each node's physical disk 
> report about the same usage. "nodetool status" keeps reporting these 
> inaccurately large storage loads until we restart each node; after the 
> restart, the "Load" figures match what we see on disk. We did not see this 
> behavior until upgrading to v2.1.9.
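
As a further cross-check on the same symptom: as far as I can tell, the figure
that "nodetool status" prints is the LOAD value each node publishes via gossip,
so the stale number (in bytes) should also be visible per node with:

$ nodetool gossipinfo | grep LOAD   # gossiped load per node, in bytes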



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
