[
https://issues.apache.org/jira/browse/CASSANDRA-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209202#comment-13209202
]
Sylvain Lebresne commented on CASSANDRA-3921:
---------------------------------------------
Yeah, it's the serializing cache: we invalidate on each put, so the chance of
tons of tombstones building up over time is anecdotal imo. We could probably fix
this by cloning the cachedRow after the get, applying removeDeleted on the
clone, and doing a conditional replace (only if the cache still holds the value
we got). This involves a bunch of serialization/deserialization, so it's
unclear that doing so is a better optimization than leaving the tombstone. So
I'm fine leaving it the way it is, except that we may want to make
removeDeletedInCache a no-op for copying caches, just to avoid the useless
deserialization.
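For concreteness, here's a minimal sketch of the clone-and-conditional-replace
idea. The cache field, its ConcurrentMap-style replace(key, oldValue, newValue)
call, and cloneMe() are assumptions for illustration, not necessarily the
actual Cassandra API:
{code}
public void removeDeletedInCache(DecoratedKey key)
{
    ColumnFamily cachedRow = cfs.getRawCachedRow(key);
    if (cachedRow == null)
        return;

    // Scrub a clone so the cached value itself is never mutated in place.
    // (cloneMe() is assumed here to be a deep copy of the row.)
    ColumnFamily scrubbed = cachedRow.cloneMe();
    ColumnFamilyStore.removeDeleted(scrubbed, gcBefore);

    // Conditional replace: install the scrubbed copy only if the cache still
    // holds the value we read above, so a concurrent put is never clobbered.
    cache.replace(key, cachedRow, scrubbed);
}
{code}
The cheaper alternative mentioned above, making removeDeletedInCache an early
return for copying caches, skips the deserialize/scrub cycle entirely, since
scrubbing a throwaway copy accomplishes nothing anyway.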
> Compaction doesn't clear out expired tombstones from SerializingCache
> ---------------------------------------------------------------------
>
> Key: CASSANDRA-3921
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3921
> Project: Cassandra
> Issue Type: Bug
> Affects Versions: 0.8.0
> Reporter: Jonathan Ellis
> Priority: Minor
> Fix For: 1.1.0
>
>
> Compaction calls removeDeletedInCache, which looks like this:
> {code}
> public void removeDeletedInCache(DecoratedKey key)
> {
>     ColumnFamily cachedRow = cfs.getRawCachedRow(key);
>     if (cachedRow != null)
>         ColumnFamilyStore.removeDeleted(cachedRow, gcBefore);
> }
> {code}
> For the SerializingCache, this means it calls removeDeleted on a temporary,
> deserialized copy, which leaves the cache contents unaffected.
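A condensed view of why that get-then-mutate pattern is a no-op for a
serializing cache (a hypothetical sketch; the comments restate the behavior
the report attributes to SerializingCache):
{code}
// With a serializing cache, getRawCachedRow() deserializes a fresh
// ColumnFamily from the cached bytes on every read.
ColumnFamily cachedRow = cfs.getRawCachedRow(key);

// removeDeleted() then scrubs tombstones from that copy only; the serialized
// bytes held by the cache are never rewritten, so the tombstones survive.
ColumnFamilyStore.removeDeleted(cachedRow, gcBefore);
{code}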