Hi Ewen,
Thanks for the reply.
The assumptions that you made for replication and partitions are
correct, 120 is total number of partitions and replication factor is 1
for all the topics.
Does that mean that a broker will keep all the messages that are
produced in memory, or will only the unconsumed messages stay in memory?
On Tue, Jul 28, 2015 at 11:29 PM, Nilesh Chhapru
nilesh.chha...@ugamsolutions.com wrote:
Nilesh,
It's expected that a lot of memory is used for cache. This makes sense
because under the hood, Kafka mostly just reads and writes data to/from
files. While Kafka does manage some in-memory data, mostly it is writing
produced data (or replicated data) to log files and then serving those
files back to consumers via the OS page cache.
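A quick way to observe this behavior on a broker host (a sketch, not from the original thread; the scratch file path and size are illustrative stand-ins for a Kafka log segment):

```shell
# Watch the OS page cache grow as data is written to disk -- the same
# mechanism Kafka relies on for its log segments.
grep -E 'Cached|MemAvailable' /proc/meminfo    # cache before the write

# Write ~100 MB to a scratch file (stands in for a log segment).
dd if=/dev/zero of=/tmp/cache-demo.log bs=1M count=100 2>/dev/null

grep -E 'Cached|MemAvailable' /proc/meminfo    # Cached grows; that is normal
rm -f /tmp/cache-demo.log
```

The point is that "Cached" growing toward the total RAM is expected and healthy; the kernel hands those pages back the moment an application actually needs them.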
Hi Ewen,
I am using 3 brokers with 12 topics and approximately 120-125 partitions
without any replication, and the message size is approximately 15 MB/message.
The problem is that when the cache memory grows and reaches the maximum
available, performance starts degrading. I am also using a Storm spout as
the consumer.
Having the OS cache the data in Kafka's log files is useful since it means
that data doesn't need to be read back from disk when consumed. This is
good for the latency and throughput of consumers. Usually this caching
works out pretty well, keeping the latest data from your topics in cache.
http://www.linuxatemyram.com may be a helpful resource to explain this
better.
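On the cron idea from the original question: cached pages are reclaimed automatically under memory pressure, so dropping the cache on a schedule is normally unnecessary and will only cost consumer latency. If you want to test the effect anyway, a sketch using the kernel's `drop_caches` knob (writing it requires root, so the script guards for that):

```shell
# Dropping the page cache by hand -- for experiments only, not for cron.
sync    # flush dirty pages to disk first, so only clean pages are dropped
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches   # 3 = page cache + dentries/inodes
else
    echo "need root to write /proc/sys/vm/drop_caches"
fi
```

After running this you would see consumers briefly slow down while Kafka's log segments are read back from disk, which is exactly why a periodic cron job is counterproductive.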
On Tue, 28 Jul 2015 at 5:32 AM Ewen Cheslack-Postava e...@confluent.io
wrote:
Hi All,
I am facing issues with the Kafka broker process taking a lot of cache
memory. I just wanted to know if the process really needs that much
cache memory, or whether I can clear the OS-level cache by setting up a
cron job.
Regards,
Nilesh Chhapru.