"You can trigger minor compaction on an individual SSTable file when the
percentage of tombstones in that SSTable crosses a user-defined threshold."

We have just one CF with a TTL, so I don't think the problem comes from there.

"Peaks may be occurring during compaction, when SSTable files are
memory-mapped."

OK, but why is my heap usage always between 4 and 6 GB, even when there is
no traffic?
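For reference, the back-of-envelope estimate from the mail quoted below can be
checked quickly; all figures (memtable ceiling, key cache, Bloom filter bytes)
are taken from this thread:

```shell
# Sanity check of the expected heap range, using the figures from
# this thread: 1024 MB memtable ceiling, 100 MB key cache, and
# 1103765112 bytes of Bloom filters summed from cfstats.
bloom=1103765112
key_cache=$((100 * 1024 * 1024))
memtable_max=$((1024 * 1024 * 1024))

floor=$(( (key_cache + bloom) / 1048576 ))                  # memtables just flushed
ceiling=$(( (memtable_max + key_cache + bloom) / 1048576 )) # memtables full

echo "expected floor:   ${floor} MB"    # ~1.1 GB
echo "expected ceiling: ${ceiling} MB"  # ~2.1 GB
```

Those two bounds match the 1.15-2.15 GB range quoted below, which makes the
observed 4-6 GB idle heap the interesting part.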


2013/3/13 Alain RODRIGUEZ <arodr...@gmail.com>

> "called index_interval set to 128"
>
> I think that is actually for the Bloom filters.
>
>
> 2013/3/13 Hiller, Dean <dean.hil...@nrel.gov>
>
>> Going to 1.2.2 helped us quite a bit, as did switching from STCS to LCS,
>> which gave us smaller Bloom filters.
>>
>> As for the key cache: there is an entry in cassandra.yaml called
>> index_interval, set to 128. I am not sure whether it is related to the key
>> cache; I think it is. By raising it to 512, or maybe even 1024, you will
>> consume less RAM there as well, though when I ran this test in QA my key
>> cache size stayed the same, so I am really not sure (I am actually reading
>> the Cassandra code now to dig a little deeper into this property).
>>
>> Dean
>>
>> From: Alain RODRIGUEZ <arodr...@gmail.com>
>> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
>> Date: Wednesday, March 13, 2013 10:11 AM
>> To: "user@cassandra.apache.org" <user@cassandra.apache.org>
>> Subject: About the heap
>>
>> Hi,
>>
>> I would like to know everything that is in the heap.
>>
>> We are speaking here of C* 1.1.6.
>>
>> Theory:
>>
>> - Memtable (1024 MB)
>> - Key cache (100 MB)
>> - Row cache (disabled, and serialized with JNA activated anyway, so it
>> should be off-heap)
>> - Bloom filters (about 1.03 GB, from cfstats, adding up all the "Bloom
>> Filter Space Used" values and assuming they are shown in bytes: 1103765112)
>> - Anything else?
>>
>> So my heap should fluctuate between 1.15 GB and 2.15 GB, growing slowly
>> (from the Bloom filters of my new data).
>>
>> My heap actually fluctuates between 3-4 GB and 6 GB, and sometimes grows to
>> the 8 GB maximum (crashing the node).
>>
>> Because of this I have an unstable cluster and no choice but to use Amazon
>> EC2 xLarge instances, when we would rather use twice as many EC2 Large
>> nodes.
>>
>> What am I missing ?
>>
>> Practice:
>>
>> Is there an easy way, one that does not induce any load, to dump the heap
>> so I can analyse it with MAT (or anything else you would advise)?
>>
>> Alain
>>
>
>
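On the heap-dump question in the quoted mail, one common approach uses the
JDK's jmap tool. A minimal sketch (the pid lookup assumes the standard
CassandraDaemon process name; note that jmap pauses the JVM while writing the
dump, so it is not entirely load-free):

```shell
# Find the Cassandra JVM (assumes the standard CassandraDaemon main class).
pid=$(pgrep -f CassandraDaemon)

# Write a binary heap dump that MAT can open. "live" forces a full GC
# first so only reachable objects are dumped; the JVM is paused while
# the file is written.
jmap -dump:live,format=b,file=/tmp/cassandra-heap.hprof "$pid"
```

On a loaded node it is safer to take the dump on a replica temporarily removed
from rotation, then open the .hprof file in MAT offline.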
