[ 
https://issues.apache.org/jira/browse/CASSANDRA-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15004449#comment-15004449
 ] 

Susan Perkins commented on CASSANDRA-10547:
-------------------------------------------

* We are just upgrading to 2.1 now.
* Each list has about 80 doubles.
* In the past we didn't check the data before updating; we wrote the list 
whether it had changed or not, so the entire list was rewritten with the same 
data.  We recently updated the code to check whether there were any changes, 
and it no longer updates in place: if anything differs, it deletes the entire 
partition and writes a new one.
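As a back-of-the-envelope check of the reported numbers, the sketch below models tombstone growth when a CQL List column is fully overwritten on each update, assuming (per the report) that every overwrite tombstones all existing list cells until a flush purges them. The variable names and the simple linear model are illustrative, not Cassandra internals; 100,000 is the default tombstone_failure_threshold.

```python
# Illustrative model of tombstone accumulation from repeated full-list
# overwrites of a CQL List column. Assumption (from the report): each
# overwrite deletes the current list cells, creating one tombstone per cell.

LIST_ITEMS = 100                        # elements in the list
UPDATES = 1000                          # full-list overwrites between flushes
TOMBSTONE_FAILURE_THRESHOLD = 100_000   # Cassandra default

# Tombstones grow roughly linearly with (items * updates) until a
# flush/compaction removes them; the real count may be slightly higher
# (e.g. an extra range tombstone per update).
tombstones = LIST_ITEMS * UPDATES

print(tombstones)                                 # 100000
print(tombstones >= TOMBSTONE_FAILURE_THRESHOLD)  # True
```

This matches the scenario in the issue: reading such a partition before a flush scans on the order of 100,000 tombstones and trips the failure threshold.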


> Updating a CQL List many times creates many tombstones 
> -------------------------------------------------------
>
>                 Key: CASSANDRA-10547
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10547
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Cassandra 2.1.9, Java driver 2.1.5
>            Reporter: James Bishop
>         Attachments: tombstone.snippet
>
>
> We encountered a TombstoneOverwhelmingException in the Cassandra system.log 
> which caused some of our CQL queries to fail.
> We can reproduce this issue by updating a CQL List column many times. The 
> number of tombstones created appears to be proportional to (number of list 
> items * number of list updates). We update the entire list on each update 
> using the Java driver (see attached code for details).
> Running nodetool compact does not help, but nodetool flush does, which 
> suggests the tombstones are accumulating in memory.
> For example, updating a list of 100 items 1000 times creates more than 
> 100,000 tombstones and exceeds the default tombstone_failure_threshold.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)