[
https://issues.apache.org/jira/browse/CASSANDRA-3921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209211#comment-13209211
]
Peter Schuller commented on CASSANDRA-3921:
-------------------------------------------
If we invalidate on every put, I'm +1 on just ignoring the problem. Sure, it's
possible for a constant subset of a hot set to be read repeatedly while someone
tries to make it take less space in the cache by deleting data and waiting for
the tombstones to be GC:ed... but that's so obscure/extreme that we can probably
fix 500 other JIRAs before this becomes a priority :) Definitely +1 on NOOP:ing
the method though; or more importantly, on documenting why it's a NOOP.
(Btw, the incoherence of my previous comment is what happens when you split the
posting of a comment into two pieces with a meeting in between...)
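
For what it's worth, a minimal sketch of what the NOOP:ed-and-documented version could look like, assuming the method keeps its current signature and assuming we do invalidate on every put; the comment text is only illustrative, not proposed wording:

{code}
// Intentionally a no-op: for SerializingCache, getRawCachedRow() returns a
// deserialized *copy* of the cached row, so mutating it here never touches the
// serialized bytes actually held in the cache. Expired tombstones simply stay
// in the cached entry until it is invalidated or evicted (see CASSANDRA-3921).
public void removeDeletedInCache(DecoratedKey key)
{
}
{code}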
> Compaction doesn't clear out expired tombstones from SerializingCache
> ---------------------------------------------------------------------
>
> Key: CASSANDRA-3921
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3921
> Project: Cassandra
> Issue Type: Bug
> Affects Versions: 0.8.0
> Reporter: Jonathan Ellis
> Priority: Minor
> Fix For: 1.1.0
>
>
> Compaction calls removeDeletedInCache, which looks like this:
> {code}
> public void removeDeletedInCache(DecoratedKey key)
> {
>     ColumnFamily cachedRow = cfs.getRawCachedRow(key);
>     if (cachedRow != null)
>         ColumnFamilyStore.removeDeleted(cachedRow, gcBefore);
> }
> {code}
> For the SerializingCache, this means it calls removeDeleted on a temporary,
> deserialized copy, which leaves the cache contents unaffected.
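
To make the failure mode concrete, here is the same call sequence annotated with what happens when the row cache is a SerializingCache; the write-back step at the end is hypothetical, shown only to illustrate what an effective version would have to do:

{code}
ColumnFamily cachedRow = cfs.getRawCachedRow(key);        // SerializingCache deserializes a fresh copy
if (cachedRow != null)
    ColumnFamilyStore.removeDeleted(cachedRow, gcBefore); // mutates only that temporary copy
// The serialized entry in the cache is untouched and still carries the expired
// tombstones. Making this effective would mean writing the result back, roughly:
//     rowCache.put(key, serialize(cachedRow));           // hypothetical API, not actual Cassandra code
{code}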