This may be helpful:
https://thelastpickle.com/blog/2018/08/08/compression_performance.html
It's complex. The simple explanation: Cassandra caches pieces of sstables
(chunks) in memory, based on the chunk size and which parts of the sstables
are being read. It manages loading new chunks into memory for requests
against different sstables on its own, so you shouldn't normally need to
worry about which sstable chunks are held in memory. A quick way to see
what the chunk cache is doing is sketched just below.
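
The knob being discussed is file_cache_size_mb in cassandra.yaml, which caps
the off-heap chunk cache / buffer pool. A minimal sketch, with an illustrative
value only (not a recommendation for your cluster):

    # cassandra.yaml
    # Maximum memory for the sstable chunk cache and sstable read buffers.
    # If left unset it defaults to the smaller of 1/4 of the heap or 512 MiB.
    file_cache_size_mb: 1024

This memory is allocated off-heap, so it comes out of the remaining system
memory rather than the heap you gave Cassandra. As far as I know the
"Maximum memory usage reached ..., cannot allocate chunk of 1.000MiB" line is
only an INFO message from the buffer pool telling you the cap was hit; reads
still succeed, they just can't use a pooled buffer at that moment.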

*VafaTech.com - A Total Solution for Data Gathering & Analysis*


On Mon, Dec 2, 2019 at 6:18 PM Rahul Reddy <rahulreddy1...@gmail.com> wrote:

> Thanks Hossein,
>
> How are chunks moved out of memory (LRU?) when Cassandra needs to make
> room for new requests? If there is a mechanism to evict chunks from the
> cache, what causes the "cannot allocate chunk" message? Can you point me
> to any documentation?
>
> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr <ghiyasim...@gmail.com>
> wrote:
>
>> Chunks are parts of sstables. When there is enough space in memory to
>> cache them, read performance improves when the application requests them
>> again.
>>
>> The real answer is application dependent. For example, write-heavy
>> applications behave differently from read-heavy or mixed read/write ones,
>> and real-time applications differ from time-series workloads, and so on.
>>
>>
>>
>> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy <rahulreddy1...@gmail.com>
>> wrote:
>>
>>> Hello,
>>>
>>> We are seeing "memory usage reached 512 MB, cannot allocate chunk of
>>> 1 MB" messages. I see this because file_cache_size_mb is set to 512 MB
>>> by default.
>>>
>>> The DataStax documentation recommends increasing file_cache_size_mb.
>>>
>>> We have 32 GB of memory overall, with 16 GB allocated to Cassandra. What
>>> is the recommended value in my case? Also, why does this memory get
>>> filled up so frequently, and does a nodetool flush help avoid these INFO
>>> messages?
>>>
>>
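
On the eviction question quoted above: the chunk cache is a size-bounded
off-heap cache (implemented with Caffeine in recent 3.x/4.x versions, if I
remember correctly), so it evicts less recently/frequently used chunks to
make room as new ones are read; the "cannot allocate chunk" INFO line only
means the configured cap was hit at that instant. One way to watch it,
assuming a Cassandra version whose nodetool info reports the chunk cache
(the numbers below are made up):

    $ nodetool info
    ...
    Chunk Cache    : entries 4821, size 498 MiB, capacity 512 MiB, 93260 misses, 1806536 requests, 0.948 recent hit rate, 401.231 microseconds miss latency
    ...

A low hit rate together with frequent "cannot allocate chunk" messages is the
usual sign that it's worth testing a larger file_cache_size_mb, keeping in
mind that this memory is off-heap and separate from the 16 GB heap.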
