[ https://issues.apache.org/jira/browse/KAFKA-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eno Thereska updated KAFKA-4168:
--------------------------------
    Issue Type: Improvement  (was: Sub-task)
        Parent:     (was: KAFKA-3776)

> More precise accounting of memory usage
> ---------------------------------------
>
>                 Key: KAFKA-4168
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4168
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>    Affects Versions: 0.10.1.0
>            Reporter: Eno Thereska
>             Fix For: 0.10.2.0
>
>
> Right now, the cache.max.bytes.buffering parameter controls the size of the 
> cache used. Specifically, the accounted size includes the sizes of the values 
> stored in the cache plus basic overheads such as key sizes and LRU entry 
> sizes. However, we could be more fine-grained in the memory accounting and 
> add up the sizes of hash sets, hash maps, and their entries more precisely. 
> For example, a dirty entry is currently placed into a dirty-keys set, but we 
> do not account for the size of that set in the memory consumption calculation.
> It is likely this falls under "memory management" rather than "buffer cache 
> management".
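The gap described above can be sketched in Java. This is a minimal, hypothetical illustration (the class name, method names, and overhead constants are assumptions, not Kafka's actual internals): a coarse per-entry size that counts only key, value, and LRU-node overhead, versus a finer one that also charges the dirty-keys set entry when a record is dirty.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of finer-grained cache memory accounting.
// Overhead constants are illustrative guesses, not Kafka's real values.
public class CacheAccounting {
    static final long LRU_NODE_OVERHEAD = 48;        // assumed: map entry header + links
    static final long DIRTY_SET_ENTRY_OVERHEAD = 32; // assumed: HashSet entry for a dirty key

    private final Set<String> dirtyKeys = new HashSet<>();

    // Coarse accounting: value + key + LRU entry overhead only.
    long coarseEntrySize(byte[] key, byte[] value) {
        return key.length + value.length + LRU_NODE_OVERHEAD;
    }

    // Finer accounting: additionally charge the dirty-keys set entry,
    // which references the key again, when the record is dirty.
    long preciseEntrySize(byte[] key, byte[] value, boolean dirty) {
        long size = coarseEntrySize(key, value);
        if (dirty) {
            size += key.length + DIRTY_SET_ENTRY_OVERHEAD;
        }
        return size;
    }

    public static void main(String[] args) {
        CacheAccounting acct = new CacheAccounting();
        byte[] key = new byte[10];
        byte[] value = new byte[100];
        System.out.println(acct.coarseEntrySize(key, value));        // 158
        System.out.println(acct.preciseEntrySize(key, value, true)); // 200
    }
}
```

With the assumed constants, a dirty 10-byte-key/100-byte-value record is undercounted by 42 bytes under the coarse scheme, which is the kind of drift this issue proposes to eliminate.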



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)