[
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775953#comment-13775953
]
Jonathan Ellis commented on CASSANDRA-5357:
-------------------------------------------
bq. It is required because we need to know the query which populated the cache
Sure, but why does that imply we need to *serialize* the filters? I'm saying
just shove the ColumnFamily payload off-heap but leave the rest "live."
That might also simplify the Sentinel business.
bq. If the slice with count as 250 is stored we might not need to store the
slice with count of 50 with same range, we can also merge overlapping slices
etc.
Pushing that to a separate ticket is fine.
bq. Can we [handle respecting memory limits] in a separate ticket?
I think that's pretty core functionality; it seems like we should do that here.
That said, I'm not sure I understand exactly how the problem happens here.
> Query cache
> -----------
>
> Key: CASSANDRA-5357
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
> Project: Cassandra
> Issue Type: Bug
> Reporter: Jonathan Ellis
> Assignee: Vijay
>
> I think that most people expect the row cache to act like a query cache,
> because that's a reasonable model. Caching the entire partition is, in
> retrospect, not really reasonable, so it's not surprising that it catches
> people off guard, especially given the confusion we've inflicted on ourselves
> as to what a "row" constitutes.
> I propose replacing it with a true query cache.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira