[
https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693419#comment-13693419
]
Pavel Yaskevich commented on CASSANDRA-5661:
--------------------------------------------
As a brain dump, I see 3 ways to resolve this:
1. Introduce a total memory limit on the queue and evict based on that, but there
is no guarantee that we won't be evicting the wrong instances.
2. Introduce a liveness period per instance (e.g. 20-30 seconds) before the
evictor considers it "old". That solves the problem with #1, but we are relying
on the eviction thread to be robust and run constantly; any delay in such manual
GC could result in the same memory bloat described in the issue. (A rough sketch
of this approach is below.)
3. Remove caching and go with mmap'ed segments instead. The problem there is
that we need to create a direct byte buffer every time we decompress data, and
I'm not sure those can be GC'ed reliably; for example, if a particular JVM
implementation only cleans up such buffers on CMS or full GC, the process can
effectively OOM, because we are trying to avoid any significant GC activity as
much as possible.
I like #2 the most. Thoughts?
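For illustration, a minimal sketch of what #2 could look like: a reader pool
with a shared eviction task that discards readers idle longer than a
configurable liveness window. The names here (PooledReaderQueue, livenessSeconds,
evictOld) are placeholders rather than Cassandra's actual classes, and
closing/deallocating the evicted readers is elided.
{code:java}
// Illustrative sketch only -- names are placeholders, not Cassandra's real pooling code.
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PooledReaderQueue<R>
{
    // Wraps a pooled reader with the time it was returned to the pool.
    private static final class Entry<R>
    {
        final R reader;
        final long returnedAtNanos;

        Entry(R reader, long returnedAtNanos)
        {
            this.reader = reader;
            this.returnedAtNanos = returnedAtNanos;
        }
    }

    private final ConcurrentLinkedQueue<Entry<R>> pool = new ConcurrentLinkedQueue<>();
    private final long livenessNanos;

    public PooledReaderQueue(final long livenessSeconds, ScheduledExecutorService evictor)
    {
        this.livenessNanos = TimeUnit.SECONDS.toNanos(livenessSeconds);
        // The evictor has to run regularly: if it stalls, idle readers
        // accumulate exactly as described in this issue.
        evictor.scheduleWithFixedDelay(new Runnable()
        {
            public void run()
            {
                evictOld();
            }
        }, livenessSeconds, livenessSeconds, TimeUnit.SECONDS);
    }

    // Borrow a pooled reader, or null if the caller should open a new one.
    public R poll()
    {
        Entry<R> e = pool.poll();
        return e == null ? null : e.reader;
    }

    // Return a reader to the pool, stamping the time it went idle.
    public void offer(R reader)
    {
        pool.offer(new Entry<R>(reader, System.nanoTime()));
    }

    // Discard readers that have been idle longer than the liveness window.
    private void evictOld()
    {
        long now = System.nanoTime();
        for (Iterator<Entry<R>> it = pool.iterator(); it.hasNext();)
        {
            if (now - it.next().returnedAtNanos > livenessNanos)
                it.remove(); // a real implementation would also close/deallocate the reader here
        }
    }
}
{code}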
> Discard pooled readers for cold data
> ------------------------------------
>
> Key: CASSANDRA-5661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5661
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Affects Versions: 1.2.1
> Reporter: Jonathan Ellis
> Assignee: Pavel Yaskevich
> Fix For: 1.2.7
>
>
> Reader pooling was introduced in CASSANDRA-4942 but pooled
> RandomAccessReaders are never cleaned up until the SSTableReader is closed.
> So memory use is "the worst case simultaneous RAR we had open for this file,
> forever."
> We should introduce a global limit on how much memory to use for RAR, and
> evict old ones.