[ https://issues.apache.org/jira/browse/CASSANDRA-4436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13416298#comment-13416298 ]

Peter Velas commented on CASSANDRA-4436:
----------------------------------------

You are right, it's not affected by compression.
I was just curious whether it was a problem with our Python code using pycassa,
so I created increments.cql containing 100k lines: 1000 increments for each of
100 key values.
{code}
cassandra-cli -h $HOSTNAME -p 9160 -f increments.cql -B >/dev/null 
{code}
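
For reference, each line of increments.cql is a cassandra-cli incr statement. A minimal sketch of how such a file can be generated (the row/column naming is an assumption: the comment above says 100 key values, while the listing below shows columns col1..col100, so this sketch puts 100 counter columns in a single hypothetical row):

{code}
# Sketch: build increments.cql with 1000 'incr' statements for each of
# 100 counter columns (row name 'row1' and column names are hypothetical).
echo "use inc_test;" > increments.cql
for i in $(seq 1 100); do
  for j in $(seq 1 1000); do
    echo "incr cf1_increment[utf8('row1')][utf8('col$i')];" >> increments.cql
  done
done
{code}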

After 3 rolling restarts each value was correct at 3000.
After the 4th rolling restart the values are incorrect; see below:

{code}
col1    5479
col10   5507
col100  5531
col11   5480
col12   5501
col13   5499
col14   5516
{code}
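
(For completeness: a sketch of how such a listing can be read back with cassandra-cli; piping the statements in and the row name are my assumptions, not the exact command used.)

{code}
# Sketch: dump the counters of one (hypothetical) row after the restarts
echo "use inc_test; get cf1_increment[utf8('row1')];" | \
  /opt/apache-cassandra-1.0.10/bin/cassandra-cli -h $HOSTNAME -p 9160
{code}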

It's a 2-node cluster with replication_factor = 2.

{code}
[root@cass-bug1 ~]# /opt/apache-cassandra-1.0.10/bin/cassandra-cli -h $HOSTNAME -p 9160 -f increments.cql -B >/dev/null
[root@cass-bug1 ~]# /opt/apache-cassandra-1.0.10/bin/nodetool -h $HOSTNAME drain

[root@cass-bug2 ~]# /opt/apache-cassandra-1.0.10/bin/nodetool -h $HOSTNAME ring
Address         DC          Rack        Status State   Load            Owns    Token
                                                                               85070591730234615865843651857942052864
10.20.30.160    datacenter1 rack1       Down   Normal  97.67 KB        50.00%  0
10.20.30.161    datacenter1 rack1       Up     Normal  113.45 KB       50.00%  85070591730234615865843651857942052864

[root@cass-bug1 ~]# killall java
[root@cass-bug1 ~]# /opt/apache-cassandra-1.0.10/bin/cassandra

[root@cass-bug2 ~]# /opt/apache-cassandra-1.0.10/bin/nodetool -h $HOSTNAME drain

[root@cass-bug1 ~]# /opt/apache-cassandra-1.0.10/bin/nodetool -h $HOSTNAME ring
Address         DC          Rack        Status State   Load            Owns    Token
                                                                               85070591730234615865843651857942052864
10.20.30.160    datacenter1 rack1       Up     Normal  97.67 KB        50.00%  0
10.20.30.161    datacenter1 rack1       Down   Normal  86.13 KB        50.00%  85070591730234615865843651857942052864

[root@cass-bug2 ~]# killall java
[root@cass-bug2 ~]# /opt/apache-cassandra-1.0.10/bin/cassandra


{code}
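
Condensed, each rolling-restart step above is: drain the node, wait until the peer reports it Down, kill it, start it again. A sketch of one step (the grep-based wait loop is my shorthand for "waiting until the second node discovers it is down", not a command from the transcript):

{code}
BIN=/opt/apache-cassandra-1.0.10/bin
$BIN/nodetool -h cass-bug1 drain              # run against the node to restart
until $BIN/nodetool -h cass-bug2 ring | grep -q Down; do
  sleep 1                                     # wait for the peer to see it Down
done
killall java                                  # on cass-bug1
$BIN/cassandra                                # restart Cassandra on cass-bug1
{code}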



Here is a dump of the keyspace and CF:


{code}
create keyspace inc_test
  with placement_strategy = 'SimpleStrategy'
  and strategy_options = {replication_factor : 2}
  and durable_writes = true;

use inc_test;

create column family cf1_increment
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'CounterColumnType'
  and key_validation_class = 'BytesType'
  and rows_cached = 0.0
  and row_cache_save_period = 0
  and row_cache_keys_to_save = 2147483647
  and keys_cached = 200000.0
  and key_cache_save_period = 14400
  and read_repair_chance = 1.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and row_cache_provider = 'SerializingCacheProvider'
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy';
{code}


Hope that helps you reproduce it.
                
> Counters in columns don't preserve correct values after cluster restart
> -----------------------------------------------------------------------
>
>                 Key: CASSANDRA-4436
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4436
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.0.10
>            Reporter: Peter Velas
>         Attachments: increments.cql.gz
>
>
> Similar to #3821, but affecting normal columns.
> Set up a 2-node cluster with rf=2.
> 1. Create a counter column family and increment 100 keys in a loop 5000 times.
> 2. Then do a rolling restart of the cluster.
> 3. Increment another 5000 times.
> 4. Do a rolling restart of the cluster.
> 5. Increment another 5000 times.
> 6. Do a rolling restart of the cluster.
> After step 6 we were able to reproduce the bug with bad counter values.
> Expected values were 15000; the values returned from the cluster are higher
> than 15000 by some random amount.
> Rolling restarts are done with nodetool drain, always waiting until the second
> node discovers the drained node is down before killing the java process.
