[ https://issues.apache.org/jira/browse/CASSANDRA-16904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Josh McKenzie updated CASSANDRA-16904:
--------------------------------------
Resolution: Fixed
Status: Resolved (was: Triage Needed)
In 3.6 / CASSANDRA-11206 we introduced the concept of {{ShallowIndexedEntry}}
objects, which prevent us from materializing large indexes on heap: once we're
past a size threshold and have multiple columns, we instead create a shallow
entry with a fixed maximum on-heap size:
{code}
BASE_SIZE = ObjectSizes.measure(new ShallowIndexedEntry(0, 0, DeletionTime.LIVE, 0, 10, 0, null));
{code}
This issue doesn't apply to 4.0+.
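For illustration, here's a minimal, self-contained sketch (simplified stand-in
classes and made-up byte counts, not the actual Cassandra types) of why the
shallow entry caps the on-heap cost: only fixed-size bookkeeping stays on heap,
while the IndexInfo samples remain in the index file and are read on demand:
{code:java}
import java.util.List;

interface RowIndexEntry {
    long unsharedHeapSize();   // what gets charged against heap/cache accounting
}

// Pre-3.6 style: the whole index is materialized on heap, so heap usage
// grows linearly with the number of IndexInfo samples (~5GB in the worst case).
class IndexedEntry implements RowIndexEntry {
    private final List<long[]> indexSamples;        // stand-in for List<IndexInfo>
    IndexedEntry(List<long[]> indexSamples) { this.indexSamples = indexSamples; }
    public long unsharedHeapSize() {
        long perSample = 64;                        // assumed per-IndexInfo overhead
        return 40 + perSample * indexSamples.size();
    }
}

// 3.6+ style: only position/offset bookkeeping lives on heap; the samples
// themselves are left in the index file and seeked to when needed.
class ShallowEntry implements RowIndexEntry {
    private final long indexFilePosition;           // where the full index lives on disk
    private final int indexedPartSize;
    ShallowEntry(long indexFilePosition, int indexedPartSize) {
        this.indexFilePosition = indexFilePosition;
        this.indexedPartSize = indexedPartSize;
    }
    public long unsharedHeapSize() {
        return 40;                                  // constant, regardless of index size
    }
}
{code}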
> Check if size of object being added to RowCache and KeyCache is bigger than
> cache capacity
> ------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-16904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16904
> Project: Cassandra
> Issue Type: Improvement
> Components: Local/Caching
> Reporter: Josh McKenzie
> Assignee: Josh McKenzie
> Priority: Normal
>
> We don't check if the size of an object being added to the RowCache/KeyCache
> itself exceeds the max configured size of the cache.
> For instance, if a RowCache object is ~5GB due to IndexInfo objects, but the
> cache is configured to have a max capacity of 100MB, we will still add the
> 5GB object into the cache and then need to wait for the eviction thread in
> the cache to come around, realize we're over capacity, and remove the object
> from the cache.
> We could check the size of the object with jamm and ensure it's smaller than
> the max size of the cache. If it exceeds the capacity of the cache, don't
> cache it at all (see the sketch below).
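For reference, a minimal sketch of that guard, assuming jamm's {{MemoryMeter}}
(the {{SizeCheckingCache}} wrapper and {{putIfFits}} method are hypothetical
illustration, not Cassandra code; jamm has to be attached as a javaagent for
{{measureDeep}} to work):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.github.jamm.MemoryMeter;

class SizeCheckingCache<K, V> {
    private final MemoryMeter meter = new MemoryMeter(); // jamm 0.3.x-style constructor
    private final long capacityBytes;                    // max configured cache size
    private final Map<K, V> delegate = new ConcurrentHashMap<>();

    SizeCheckingCache(long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    // Returns false (and caches nothing) when the value alone exceeds capacity,
    // so oversized entries never have to wait for the eviction thread.
    boolean putIfFits(K key, V value) {
        long size = meter.measureDeep(value);            // deep per-object heap size
        if (size > capacityBytes)
            return false;  // e.g. a ~5GB row vs. a 100MB cache: never admit it
        delegate.put(key, value);
        return true;
    }
}
{code}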