[ https://issues.apache.org/jira/browse/KAFKA-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790263#comment-16790263 ]
Matthias J. Sax commented on KAFKA-8020:
----------------------------------------

I think I understand your intention better now. However, I am wondering if you have any data that backs your claim? It's tricky to rewrite the caches, and we should only do it if we get a reasonable performance improvement (otherwise, we risk introducing bugs with no real benefit).
{quote}elements which have been inserted usually have a certain lifetime{quote}
What would this lifetime be? Are you referring to windowed and session store retention time? KeyValue stores don't have a retention time, though.
{quote}In terms of a KIP, I don't think that it requires one. There shouldn't be any changes to public API, and we are offering a performance enhancement, so a configuration which chooses between different caching policies shouldn't be necessary. About implementing this policy for all caches: I'm not too sure about that. I was only planning on implementing this policy for ThreadCache, since I'm somewhat familiar with this part of Kafka Streams.{quote}
I see. If you target ThreadCache, I agree that a KIP is not required. We still need to evaluate whether there is a measurable performance improvement before we would merge this change, though.

> Consider changing design of ThreadCache
> ----------------------------------------
>
>                 Key: KAFKA-8020
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8020
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Richard Yu
>            Priority: Major
>
> In distributed systems, a time-aware LRU cache offers a superior eviction policy to a traditional LRU model, yielding more cache hits and fewer misses. Under this policy, an item stored beyond its useful lifespan is removed. For example, in {{CachingWindowStore}}, a window usually has a limited retention period. After it expires, it will no longer be queried, but it could stay in the ThreadCache for an unnecessarily long time if it is not evicted (i.e. when few new entries are being inserted). For better use of memory, it would be better to implement a time-aware LRU cache which takes the lifespan of an entry into account and removes it once it has expired.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
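To make the proposed policy concrete, the time-aware eviction described in the issue could be sketched roughly as below. This is a hypothetical illustration, not Kafka's actual {{ThreadCache}} code: the class name {{TimeAwareLruCache}}, the {{ttlMs}} parameter, and the explicit {{nowMs}} clock argument are all invented for this sketch. It simply layers a per-entry TTL check on top of {{LinkedHashMap}}'s access-order LRU, so an expired entry is dropped on access even if the size-based LRU would have kept it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a time-aware LRU cache. Entries carry an expiry
// timestamp and are evicted on access once their lifespan has passed, in
// addition to the usual size-capped LRU eviction.
class TimeAwareLruCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAtMs;

        Entry(V value, long expiresAtMs) {
            this.value = value;
            this.expiresAtMs = expiresAtMs;
        }
    }

    private final long ttlMs;
    private final LinkedHashMap<K, Entry<V>> map;

    TimeAwareLruCache(int maxEntries, long ttlMs) {
        this.ttlMs = ttlMs;
        // accessOrder = true makes iteration order least-recently-accessed
        // first, which is what removeEldestEntry uses for LRU eviction.
        this.map = new LinkedHashMap<K, Entry<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Entry<V>> eldest) {
                return size() > maxEntries;
            }
        };
    }

    // The caller supplies the clock (nowMs) so the sketch stays deterministic.
    void put(K key, V value, long nowMs) {
        map.put(key, new Entry<>(value, nowMs + ttlMs));
    }

    V get(K key, long nowMs) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        if (e.expiresAtMs <= nowMs) {
            // Expired: evict instead of serving a stale entry. This is the
            // time-aware part; plain LRU would have returned it.
            map.remove(key);
            return null;
        }
        return e.value;
    }

    int size() {
        return map.size();
    }
}
```

A real implementation inside {{ThreadCache}} would additionally need to flush dirty entries downstream before dropping them, and would likely derive the lifespan from store retention time rather than a fixed TTL; both concerns are omitted here.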