Hi Ewen,

I am using 3 brokers with 12 topics and roughly 120-125 partitions,
without any replication, and the message size is approximately 15 MB/message.

The problem is that when the cache memory grows and reaches the maximum
available, performance starts degrading. I am also using a Storm spout as
the consumer, and it stops reading at times.

When I do a free -m on my broker node after 1/2 - 1 hr, the memory
footprint is as follows:
1) Physical memory used - 500-600 MB
2) Cache memory - 6.5 GB
3) Free memory - 50-60 MB
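(For what it's worth, the "free" figure alone understates usable RAM, since
cached pages are reclaimable. A quick sketch of how to read this from
/proc/meminfo on a Linux broker host -- the exact field names assume a
standard Linux kernel:)

```shell
#!/bin/sh
# Sketch: show that "Cached" memory is reclaimable on demand.
# MemFree may look tiny, but the kernel drops cached pages automatically
# under memory pressure, so free + cached approximates usable RAM.
awk '/^MemFree:/ {free=$2}
     /^Cached:/  {cached=$2}
     END {printf "free=%d MB cached=%d MB usable~%d MB\n",
          free/1024, cached/1024, (free+cached)/1024}' /proc/meminfo
```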

Regards,
Nilesh Chhapru.

On Monday 27 July 2015 11:02 PM, Ewen Cheslack-Postava wrote:
> Having the OS cache the data in Kafka's log files is useful since it means
> that data doesn't need to be read back from disk when consumed. This is
> good for the latency and throughput of consumers. Usually this caching
> works out pretty well, keeping the latest data from your topics in cache
> and only pulling older data into memory if a consumer reads data from
> earlier in the log. In other words, by leveraging OS-level caching of
> files, Kafka gets an in-memory caching layer for free.
>
> Generally you shouldn't need to clear this data -- the OS should only be
> using memory that isn't being used anyway. Is there a particular problem
> you're encountering that clearing the cache would help with?
>
> -Ewen
>
> On Mon, Jul 27, 2015 at 2:33 AM, Nilesh Chhapru <
> nilesh.chha...@ugamsolutions.com> wrote:
>
>> Hi All,
>>
>> I am facing issues with the Kafka broker process taking a lot of cache
>> memory. I just wanted to know whether the process really needs that much
>> cache memory, or whether I can clear the OS-level cache by setting up a
>> cron job.
>>
>> Regards,
>> Nilesh Chhapru.
>>
>
>
