[
https://issues.apache.org/jira/browse/CAMEL-21888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17937534#comment-17937534
]
Ben Manes commented on CAMEL-21888:
-----------------------------------
I don't know how your SimpleLRUCache works internally, but for small concurrent
caches without dependencies I'd recommend the Clock algorithm (1960s), often
referred to as Second Chance, as a pseudo-LRU. It is very simple to implement:
it is merely a FIFO with a "marked" bit set on read, and on eviction it locks
to scan the FIFO, resetting each bit until it finds an unset one to evict. Thus
it has excellent concurrent read performance (no list movement or locking on
reads), LRU-like hit rates, and a worst-case O(n) eviction cost. For a small
cache that's perfectly fine. For larger caches one could simply use a scan
threshold to bound the worst case.
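A minimal sketch of that Clock / Second Chance design, assuming a ConcurrentHashMap for the read path and a lock held only during FIFO maintenance (all class and member names here are illustrative, not Camel's):

{code:java}
import java.util.ArrayDeque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Clock ("Second Chance") cache sketch: a FIFO of entries with a "marked"
// bit set on read; eviction scans the FIFO, clearing marks, and evicts the
// first unmarked entry it finds.
final class ClockCache<K, V> {
  private static final class Node<K, V> {
    final K key;
    final V value;
    final AtomicBoolean marked = new AtomicBoolean();
    Node(K key, V value) { this.key = key; this.value = value; }
  }

  private final int capacity;
  private final Map<K, Node<K, V>> data = new ConcurrentHashMap<>();
  private final ArrayDeque<Node<K, V>> fifo = new ArrayDeque<>(); // guarded by 'this'

  ClockCache(int capacity) { this.capacity = capacity; }

  public V get(K key) {
    Node<K, V> node = data.get(key);
    if (node == null) {
      return null;
    }
    node.marked.set(true); // no locking or list movement on the read path
    return node.value;
  }

  public void put(K key, V value) {
    Node<K, V> node = new Node<>(key, value);
    Node<K, V> prior = data.put(key, node);
    synchronized (this) {
      if (prior != null) {
        fifo.remove(prior); // replaced an existing key; drop its stale node
      }
      fifo.addLast(node);
      while (data.size() > capacity) {
        Node<K, V> head = fifo.pollFirst();
        if (head == null) {
          break;
        } else if (head.marked.compareAndSet(true, false)) {
          fifo.addLast(head); // second chance: clear the mark and recycle
        } else {
          data.remove(head.key, head); // unmarked: evict
        }
      }
    }
  }

  public int size() { return data.size(); }
}
{code}

With this shape, a read touches only the hash table and one atomic flag, while the O(n) scan cost is paid only by the evicting writer.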
When I was originally exploring algorithms, that was my first
[proof-of-concept|https://github.com/ben-manes/concurrentlinkedhashmap/blob/cc3e11603e8a91185c1748633be2c703e218219e/src/test/java/com/googlecode/concurrentlinkedhashmap/caches/ProductionMap.java]
and it was perfect for fixing spot issues at work. I don't like it as a general
library algorithm, where workloads and user needs vary significantly and many
more features would be needed, so a more general concurrency mechanism (e.g. to
cover TTL) was preferable. As a stand-in while exploring the options it was my
go-to, and I still think it is a great fit for modest-sized, zero-dependency
use cases.
> High CPU usage at startup due to Deque.size() iteration in SimpleLRUCache
> -------------------------------------------------------------------------
>
> Key: CAMEL-21888
> URL: https://issues.apache.org/jira/browse/CAMEL-21888
> Project: Camel
> Issue Type: Bug
> Components: camel-core
> Affects Versions: 4.7.0
> Reporter: Nadina Florea
> Assignee: Nicolas Filotto
> Priority: Major
>
> Hello,
> Starting with *Camel 4.7.0,* a change was introduced in *SimpleLRUCache,*
> where a *Deque* is now used to track *lastChanges.*
> Please see:
> [https://github.com/apache/camel/commit/a7e696927dea30795a49a4c0ac4a36ee700131ff]
> and CAMEL-20850
> This change causes *high CPU usage at startup* when the cache is populated,
> since *Deque.size() iterates over all cache entries.* This results in 100% CPU
> usage, and the *application fails to start.*
> The issue was observed only in production, after migrating to Camel 4.7.0,
> where the cache size is significantly large (1,200,000 entries at the time).
> Our current stable version is Camel 4.6.0. The behavior was reproduced with
> Camel 4.10.0 as well.
>
> We are using {*}KafkaIdempotentRepository{*}, which relies on SimpleLRUCache.
> In our app, we expose a metric that contains the size of the cache at a
> certain point.
> {code:java}
> @Bean
> @ConditionalOnBean(KafkaIdempotentRepository.class)
> public MeterBinder cacheSize(final KafkaIdempotentRepository idempotentRepository) {
>     return registry ->
>         Gauge.builder("cacheSize", idempotentRepository::getCacheSize)
>             .register(registry);
> }
> {code}
> Would appreciate any insights on this issue.
>
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)