[ https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Richard Yu updated KAFKA-8020:
------------------------------
    Description: In distributed systems, a time-aware LRU cache offers an 
eviction policy superior to a traditional LRU model, yielding a higher cache 
hit rate. Under this policy, an item that is stored beyond its useful lifespan 
is removed. For example, in {{CachingWindowStore}}, a window usually has a 
limited lifespan. After it expires, it is no longer queried, but it can linger 
in the ThreadCache for an unnecessarily long time if it is not evicted (e.g. 
when few new entries are being inserted). To make better use of memory, we 
should implement a time-aware LRU cache that takes the lifespan of an entry 
into account and removes the entry once it has expired.  (was: In distributed 
systems, time-aware LRU
Caches offers a superior eviction policy better than traditional LRU models, 
having more cache hits than misses. In this new policy, if an item which is 
stored beyond its useful lifespan, then it is removed. For example, in 
{{CachingWindowStore}}, a window usually is of limited size. After it expires, 
it would no longer be queried for, but it potentially could stay in the 
ThreadCache for an unnecessary amount of time if it is not evicted (i.e. the 
number of entries being inserted is few). For better allocation of memory, it 
would be better if we implement a time-aware LRU Cache which takes into account 
the lifespan of an entry and removes it once it has expired.)
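For illustration only, here is a minimal sketch of the policy being proposed: 
a per-entry time-to-live layered on top of a capacity-bounded, access-ordered 
LinkedHashMap. It is not the ThreadCache implementation; the names 
TimeAwareLruCache, ttlMs and evictExpired are hypothetical.

{code:java}
import java.time.Duration;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of a time-aware LRU cache: evicts on capacity (classic LRU) and on expiry. */
public class TimeAwareLruCache<K, V> {

    private static final class Timed<V> {
        final V value;
        final long expiresAtMs;

        Timed(final V value, final long expiresAtMs) {
            this.value = value;
            this.expiresAtMs = expiresAtMs;
        }
    }

    private final long ttlMs;
    private final LinkedHashMap<K, Timed<V>> entries;

    public TimeAwareLruCache(final int maxEntries, final Duration ttl) {
        this.ttlMs = ttl.toMillis();
        // accessOrder = true keeps least-recently-used entries at the head;
        // removeEldestEntry caps the number of entries (the classic LRU part).
        this.entries = new LinkedHashMap<K, Timed<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(final Map.Entry<K, Timed<V>> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public synchronized void put(final K key, final V value) {
        evictExpired();
        entries.put(key, new Timed<>(value, System.currentTimeMillis() + ttlMs));
    }

    public synchronized V get(final K key) {
        evictExpired();
        final Timed<V> timed = entries.get(key);
        return timed == null ? null : timed.value;
    }

    // The time-aware part: drop entries whose lifespan has elapsed,
    // regardless of how recently they were accessed.
    private void evictExpired() {
        final long now = System.currentTimeMillis();
        final Iterator<Map.Entry<K, Timed<V>>> it = entries.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue().expiresAtMs <= now) {
                it.remove();
            }
        }
    }
}
{code}

In practice the lifespan would more likely come from the entry itself (e.g. a 
window's retention deadline) than from a fixed TTL, and the linear scan in 
evictExpired could be replaced by a secondary queue ordered by expiration time.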

> Consider making ThreadCache a time-aware LRU Cache
> --------------------------------------------------
>
>                 Key: KAFKA-8020
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8020
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Richard Yu
>            Priority: Major
>
> In distributed systems, a time-aware LRU cache offers an eviction policy 
> superior to a traditional LRU model, yielding a higher cache hit rate. Under 
> this policy, an item that is stored beyond its useful lifespan is removed. 
> For example, in {{CachingWindowStore}}, a window usually has a limited 
> lifespan. After it expires, it is no longer queried, but it can linger in 
> the ThreadCache for an unnecessarily long time if it is not evicted (e.g. 
> when few new entries are being inserted). To make better use of memory, we 
> should implement a time-aware LRU cache that takes the lifespan of an entry 
> into account and removes the entry once it has expired.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
