Sagar Rao created KAFKA-13624:
---------------------------------

             Summary: Add Metric for Store Cache Size
                 Key: KAFKA-13624
                 URL: https://issues.apache.org/jira/browse/KAFKA-13624
             Project: Kafka
          Issue Type: Improvement
          Components: streams
            Reporter: Sagar Rao
            Assignee: Sagar Rao


The current config "buffered.records.per.partition" controls the maximum number 
of records to buffer per partition; once it is exceeded, we pause fetching from 
that partition. However, this config has two issues:

* It's a per-partition config, so the total memory consumed is dependent on the 
dynamic number of partitions assigned.
* Record size could vary from case to case.

Hence it's hard to bound the memory usage for this buffering. We should 
consider deprecating that config in favor of a global one, e.g. 
"input.buffer.max.bytes", which controls how many bytes in total are allowed to 
be buffered. This is doable since we buffer the raw records as <byte[], byte[]> 
pairs.
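To illustrate the idea, here is a minimal sketch of a global byte-based bound over raw <byte[], byte[]> records. This is not the actual Streams implementation; the class and method names (InputBufferBound, tryBuffer, release) are hypothetical, and it only shows why a byte counter works where a per-partition record count does not:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: one shared byte counter bounds buffering across all
// partitions, so total memory no longer depends on how many partitions are
// assigned or how large individual records are.
class InputBufferBound {
    private final long maxBytes;
    private final AtomicLong totalBytes = new AtomicLong();

    InputBufferBound(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Account for an incoming raw record; returns false once the global
    // byte budget is exceeded, signaling that fetching should pause.
    boolean tryBuffer(byte[] key, byte[] value) {
        long size = (key == null ? 0 : key.length)
                  + (value == null ? 0 : value.length);
        return totalBytes.addAndGet(size) <= maxBytes;
    }

    // Release bytes once a buffered record has been processed.
    void release(byte[] key, byte[] value) {
        long size = (key == null ? 0 : key.length)
                  + (value == null ? 0 : value.length);
        totalBytes.addAndGet(-size);
    }

    boolean shouldPause() {
        return totalBytes.get() > maxBytes;
    }
}
```

Because the counter tracks bytes rather than record counts, the bound holds regardless of record size or the number of assigned partitions.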



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
