[
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14226776#comment-14226776
]
Robert Stupp commented on CASSANDRA-7438:
-----------------------------------------
The row cache can contain very large rows AFAIK.
The idea is to pre-allocate some portion of the configured capacity for large
blocks; new blocks could then be allocated on demand (edge-triggered).
OTOH, if that amount of data is stored in the cache, the allocation time
(20...60ms) might be irrelevant compared to the time needed for serialization,
so it may be wasted effort. Not sure about that.
Table resizing may take as long as it takes; allocation time there does not
really bother me, because no reads or writes are blocked while the new
partition (segment) table is being allocated.
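The resize property described above can be sketched like this (an illustrative toy in plain Java, not the patch's native code): the new, larger bucket array is allocated with no lock held, so the potentially slow allocation never blocks concurrent reads or writes; only the comparatively cheap rehash and pointer swap take the write lock, and reads go through a volatile reference and never block at all.

```java
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.concurrent.locks.ReentrantLock;

/** Illustrative sketch: allocation of the bigger table happens lock-free. */
final class GrowOnlyTable {
    private static final class Entry {
        final long key; final String value; final Entry next;
        Entry(long key, String value, Entry next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    private volatile AtomicReferenceArray<Entry> table = new AtomicReferenceArray<>(8);
    private final ReentrantLock writeLock = new ReentrantLock();
    private int size;

    String get(long key) {                       // never blocks on resize
        AtomicReferenceArray<Entry> t = table;   // snapshot the current table
        for (Entry e = t.get((int) (key & (t.length() - 1))); e != null; e = e.next)
            if (e.key == key) return e.value;
        return null;
    }

    void put(long key, String value) {
        writeLock.lock();
        try {
            AtomicReferenceArray<Entry> t = table;
            int idx = (int) (key & (t.length() - 1));
            t.set(idx, new Entry(key, value, t.get(idx)));
            size++;
        } finally {
            writeLock.unlock();
        }
        if (size > table.length() * 3 / 4) resize();
    }

    private void resize() {
        AtomicReferenceArray<Entry> old = table;
        // The allocation happens here, with NO lock held: concurrent get()/put()
        // calls keep operating on the old table in the meantime.
        AtomicReferenceArray<Entry> bigger = new AtomicReferenceArray<>(old.length() * 2);
        writeLock.lock();
        try {
            if (table != old) return;            // another thread already resized
            for (int i = 0; i < old.length(); i++)
                for (Entry e = old.get(i); e != null; e = e.next) {
                    int idx = (int) (e.key & (bigger.length() - 1));
                    bigger.set(idx, new Entry(e.key, e.value, bigger.get(idx)));
                }
            table = bigger;                      // publish the new table
        } finally {
            writeLock.unlock();
        }
    }

    int capacity() { return table.length(); }
}
```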
> Serializing Row cache alternative (Fully off heap)
> --------------------------------------------------
>
> Key: CASSANDRA-7438
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Environment: Linux
> Reporter: Vijay
> Assignee: Vijay
> Labels: performance
> Fix For: 3.0
>
> Attachments: 0001-CASSANDRA-7438.patch
>
>
> Currently SerializingCache is only partially off heap; keys are still stored
> on the JVM heap as ByteBuffers (BB).
> * GC costs are higher for a reasonably big cache.
> * Some users have used the row cache efficiently in production for better
> results, but this requires careful tuning.
> * The memory overhead of the cache entries is relatively high.
> So the proposal for this ticket is to move the LRU cache logic completely off
> heap and use JNI to interact with the cache. We might want to ensure that the
> new implementation matches the existing APIs (ICache), and the implementation
> needs safe memory access, low memory overhead, and as few memcpy's as
> possible.
> We might also want to make this cache configurable.
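To make the proposal concrete, here is a minimal illustrative sketch (not the attached patch; keys stay on heap as Strings here for brevity, whereas the ticket proposes moving keys off heap too) of a cache whose values live in direct, GC-invisible buffers, with LRU eviction against a byte budget:

```java
import java.nio.ByteBuffer;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Toy sketch of the direction the ticket proposes: values are serialized into
 * off-heap (direct) buffers the GC never scans; only a small index stays on
 * heap. A real implementation would use native memory via JNI and implement
 * the ICache interface; this sketch uses an access-ordered LinkedHashMap to
 * get LRU ordering.
 */
final class OffHeapLruCache {
    private final long capacityBytes;
    private long usedBytes;
    // Access-ordered map yields least-recently-used iteration order.
    private final LinkedHashMap<String, ByteBuffer> index =
            new LinkedHashMap<>(16, 0.75f, true);

    OffHeapLruCache(long capacityBytes) { this.capacityBytes = capacityBytes; }

    synchronized void put(String key, byte[] value) {
        remove(key);
        ByteBuffer buf = ByteBuffer.allocateDirect(value.length); // off-heap copy
        buf.put(value).flip();
        index.put(key, buf);
        usedBytes += value.length;
        // Evict least-recently-used entries until we fit the byte budget.
        while (usedBytes > capacityBytes && !index.isEmpty()) {
            Map.Entry<String, ByteBuffer> eldest = index.entrySet().iterator().next();
            usedBytes -= eldest.getValue().remaining();
            index.remove(eldest.getKey());
        }
    }

    synchronized byte[] get(String key) {
        ByteBuffer buf = index.get(key);          // also bumps LRU position
        if (buf == null) return null;
        byte[] out = new byte[buf.remaining()];
        buf.duplicate().get(out);                 // one memcpy back to heap
        return out;
    }

    synchronized void remove(String key) {
        ByteBuffer buf = index.remove(key);
        if (buf != null) usedBytes -= buf.remaining();
    }

    synchronized int size() { return index.size(); }
}
```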
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)