[ https://issues.apache.org/jira/browse/CASSANDRA-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13002816#comment-13002816 ]

Sébastien Giroux edited comment on CASSANDRA-2273 at 3/4/11 9:12 PM:
---------------------------------------------------------------------

Found the issue. It was the row cache caching rows that were REALLY big. Rows 
that big shouldn't be there; it's an application bug.

So sorry, this ticket is invalid. But wouldn't it make sense to have a 
max_row_size_for_row_cache setting so it doesn't cache rows bigger than X MB?
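
Below is a minimal sketch of the kind of guard being proposed: refuse to cache any row whose serialized size exceeds a configurable threshold. The class name, the size check at insert time, and the LRU eviction are hypothetical illustrations of the idea, not Cassandra's actual row cache implementation.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a size-capped row cache; not Cassandra's real code.
public class SizeCappedRowCache<K> {
    private final long maxRowSizeInBytes;
    private final Map<K, byte[]> cache;

    public SizeCappedRowCache(final int capacity, long maxRowSizeInMb) {
        this.maxRowSizeInBytes = maxRowSizeInMb * 1024L * 1024L;
        // Access-ordered LinkedHashMap gives simple LRU eviction.
        this.cache = new LinkedHashMap<K, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, byte[]> eldest) {
                return size() > capacity;
            }
        };
    }

    // Caches the row only if it is under the size limit; returns false
    // (row is served from disk) when it is too big to pin on the heap.
    public boolean put(K key, byte[] serializedRow) {
        if (serializedRow.length > maxRowSizeInBytes) {
            return false;
        }
        cache.put(key, serializedRow);
        return true;
    }

    public byte[] get(K key) {
        return cache.get(key);
    }
}
{code}

With a limit of, say, 16 MB, an oversized row would simply bypass the cache instead of pinning its full serialized size on the heap.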

> Possible Memory leak
> --------------------
>
>                 Key: CASSANDRA-2273
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2273
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Sébastien Giroux
>            Priority: Critical
>             Fix For: 0.7.4
>
>         Attachments: heap_peak_OOM.PNG, jconsole_OOM.PNG
>
>
> I have a few problematic nodes in my cluster that will crash with OutOfMemory 
> errors very often. This is Cassandra 0.7.3 downloaded from Hudson.
> Heap size is 6GB, server memory is 8GB.
> Memtables are flushed at 64MB; I have 5 CFs.
> FlushLargestMemtablesAt is set to 0.8 but doesn't help with this issue.
> I will attach a screenshot showing my issue. There is no compaction going on 
> when the heap usage starts increasing like crazy.
> It could be a configuration issue, but it kinda looks like a bug to me.
