TangSiyang2001 opened a new issue, #18047: URL: https://github.com/apache/doris/issues/18047
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/doris/issues?q=is%3Aissue) and found no similar issues.

### Description

The current release behavior throws the item away when cache usage exceeds capacity, which is contrary to the LRU policy. That may cause hot data to be evicted when the cache is full.

```cpp
void LRUCache::release(Cache::Handle* handle) {
    ...
    {
        std::lock_guard l(_mutex);
        last_ref = _unref(e);
        if (last_ref) {
            _usage -= e->total_size;
        } else if (e->in_cache && e->refs == 1) {
            if (_usage > _capacity) {
                // The just-used item is thrown away directly here when the cache is full.
                // I think this disobeys the LRU policy.
                bool removed = _table.remove(e);
                DCHECK(removed);
                e->in_cache = false;
                _unref(e);
                _usage -= e->total_size;
                last_ref = true;
            } else {
                ...
            }
        }
    }
    ...
}
```

### Solution

Instead, call `_evict_from_lru` to trigger an LRU eviction, or simply tolerate the small overflow with a dynamic threshold. Either way, the hot item should stay in the LRU list as its most recently used entry. A sketch of the intended behavior follows at the end of this issue.

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
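
For illustration only, here is a minimal, self-contained sketch of the behavior the solution asks for; it is not the actual Doris `LRUCache` code, and names such as `ToyLRUCache` are hypothetical. On overflow, eviction proceeds from the cold end of the LRU list, so the just-released hot entry survives.

```cpp
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>

// Toy LRU cache: on overflow, entries are evicted from the cold (back) end
// of the list, so an item that was just released stays hot at the front.
class ToyLRUCache {
public:
    explicit ToyLRUCache(size_t capacity) : _capacity(capacity) {}

    void insert(const std::string& key, size_t size) {
        _lru.push_front(Entry{key, size});
        _index[key] = _lru.begin();
        _usage += size;
    }

    void release(const std::string& key) {
        auto it = _index.find(key);
        if (it == _index.end()) return;
        // Move the just-used entry to the hot end instead of dropping it.
        _lru.splice(_lru.begin(), _lru, it->second);
        // If over capacity, evict from the cold end (true LRU order).
        while (_usage > _capacity && !_lru.empty()) {
            auto victim = std::prev(_lru.end());
            _usage -= victim->size;
            _index.erase(victim->key);
            _lru.erase(victim);
        }
    }

private:
    struct Entry {
        std::string key;
        size_t size;
    };
    size_t _capacity;
    size_t _usage = 0;
    std::list<Entry> _lru;
    std::unordered_map<std::string, std::list<Entry>::iterator> _index;
};

int main() {
    ToyLRUCache cache(10);
    cache.insert("cold", 6);
    cache.insert("hot", 6);   // usage 12 > capacity 10
    cache.release("hot");     // "hot" survives; "cold" is evicted instead
    return 0;
}
```

The same idea could also be expressed inside `LRUCache::release` by invoking the existing `_evict_from_lru` path instead of removing the just-released handle, but that depends on internals not shown in the snippet above.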
