Have you changed the default Memtable settings?  Are you running on
nodes with a single 1TB drive?  Are you monitoring your I/O load on
the nodes?
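For reference, the 0.6-era memtable knobs in conf/storage-conf.xml look roughly
like the sketch below (element names and defaults quoted from memory, so check
them against your own file); if these have been raised well above the defaults,
each memtable can hold far more data in the heap before it flushes:

    <!-- flush a memtable once it holds roughly this much data... -->
    <MemtableThroughputInMB>64</MemtableThroughputInMB>
    <!-- ...or this many column operations, whichever comes first... -->
    <MemtableOperationsInMillions>0.3</MemtableOperationsInMillions>
    <!-- ...or after this long without a flush -->
    <MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>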

On Thu, Jul 22, 2010 at 6:40 PM, 王一锋 <wangyif...@aspire-tech.com> wrote:
> The version we are using is 0.6.1
>
> 2010-07-23
> ________________________________
> From: 王一锋
> Sent: 2010-07-23  09:38:15
> To: user
> Cc:
> Subject: Re: Re: Re: What is consuming the heap?
> Yes, we are doing a lot of inserts.
>
> But how can CASSANDRA-1042 cause an OutOfMemoryError?
> And we are using multigetSlice(); we are not doing any get_range_slice() calls at all.
>
> 2010-07-23
> ________________________________
> From: Jonathan Ellis
> Sent: 2010-07-21  21:17:21
> To: user
> Cc:
> Subject: Re: Re: What is consuming the heap?
> On Tue, Jul 20, 2010 at 11:33 PM, Peter Schuller
> <peter.schul...@infidyne.com> wrote:
>>>  INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
>>> ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
>>> java.lang.OutOfMemoryError: Java heap space
>>>  INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584
>>
>> So that confirms a "legitimate" out-of-memory condition in the sense
>> that CMS is reclaiming extremely little and the live set after a
>> concurrent mark/sweep is indeed around the 10 gig.
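> (Working the numbers from that GC line as a sanity check: 9779542600 bytes is
> roughly 9.1 GiB still live out of a 10873667584-byte, roughly 10.1 GiB, max
> heap, and the 11.7-second collection reclaimed only 413673472 bytes, about
> 0.4 GiB, so the heap really is almost entirely live data.)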
> Are you doing a lot of inserts?  You might be hitting
> https://issues.apache.org/jira/browse/CASSANDRA-1042
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of Riptano, the source for professional Cassandra support
> http://riptano.com
