[
https://issues.apache.org/jira/browse/OFBIZ-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12871529#action_12871529
]
Ben Manes edited comment on OFBIZ-3779 at 5/26/10 2:34 AM:
-----------------------------------------------------------
Ouch, okay. It's hard to guarantee strict LRU order under concurrency due to the
randomness of concurrent operations. The previous version in LRU mode provided
stronger ordering by using separate locks for the hash table and the list. The
problem with that version was that reads still had to lock the list, so under
load it showed contention.
This new version weakens recency ordering slightly by using a per-segment queue
for reads and a single write queue. The read queues both shrink the critical
section (a read merely appends to a queue) and avoid requiring the lock (the
reorder operation is scheduled on the queue so that any thread can drain it).
The write queue prevents concurrent writers on different segments from blocking
each other, while still maintaining a global capacity constraint when an
eviction is required (vs. a per-segment LRU chain). While the recency ordering
was weakened slightly for performance, this is generally acceptable because the
evicted entry is cold and may only be slightly off from being the least
recently used. The planned enhancement to include frequency (LIRS) will allow a
better hit rate than strict LRU while maintaining performance near that of a
raw ConcurrentHashMap. Reducing the capacity evicts the most suitable victims,
based on the policy described, until the map reaches the new threshold.
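A minimal sketch of the pattern described above, using only java.util classes rather than the library's actual internals (the class and method names here are hypothetical): reads append to a lock-free queue instead of locking the LRU list, and whichever thread next acquires the eviction lock drains the pending reorderings.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical illustration of the read-buffer idea, not the library's code:
// a read records itself in a queue (cheap, no lock) and the reordering is
// applied lazily by any thread that happens to hold the eviction lock.
class ReadBufferedLruCache<K, V> {
    private final Map<K, V> data = new ConcurrentHashMap<>();
    // accessOrder = true: touching an entry moves it to the MRU position
    private final LinkedHashMap<K, Boolean> lruOrder = new LinkedHashMap<>(16, 0.75f, true);
    private final Queue<K> readBuffer = new ConcurrentLinkedQueue<>();
    private final ReentrantLock evictionLock = new ReentrantLock();
    private final int capacity;

    ReadBufferedLruCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        V value = data.get(key);
        if (value != null) {
            readBuffer.add(key);   // hot path: append only, no lock acquired
            tryDrainReadBuffer();  // opportunistic: skip if another thread holds the lock
        }
        return value;
    }

    void put(K key, V value) {
        data.put(key, value);
        evictionLock.lock();       // writes take the lock so the capacity bound is global
        try {
            drainReadBuffer();
            lruOrder.put(key, Boolean.TRUE);
            while (lruOrder.size() > capacity) {
                K victim = lruOrder.keySet().iterator().next(); // eldest = least recently used
                lruOrder.remove(victim);
                data.remove(victim);
            }
        } finally {
            evictionLock.unlock();
        }
    }

    private void tryDrainReadBuffer() {
        if (evictionLock.tryLock()) {
            try { drainReadBuffer(); } finally { evictionLock.unlock(); }
        }
    }

    private void drainReadBuffer() {
        K key;
        while ((key = readBuffer.poll()) != null) {
            if (data.containsKey(key)) {
                lruOrder.get(key); // access-order map: the lookup moves it to MRU
            }
        }
    }
}
```

The recency ordering is only approximate, as the comment says: a read that sits undrained in the buffer has not yet been moved to the MRU position, so a concurrent eviction may pick a victim that is slightly off from the true LRU entry.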
I'm not sure whether your tests need to enforce a strict LRU order, since
cached data is expected to be transient by the application. You can, however,
coerce the new version into behaving like a strict LRU by setting the
concurrency level to 1 (a single segment), as a single read queue will then be
used. Because I wanted the flexibility to improve the eviction policy, and
applications should not depend on removal order (but rather want a high hit
rate), I left the policy details out of the class's JavaDoc.
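For reference, the strict LRU ordering that a single-segment configuration approximates can be shown with plain java.util (this is not the CLHM API, just an illustration of the guarantee): with an access-ordered LinkedHashMap, the eldest entry is always the true least-recently-used one.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Strict (single-threaded) LRU semantics via java.util.LinkedHashMap:
// accessOrder = true reorders on every get(), and removeEldestEntry
// evicts the exact least-recently-used entry when capacity is exceeded.
class StrictLru<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    StrictLru(int capacity) {
        super(16, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

Note that LinkedHashMap is not thread-safe, which is exactly the trade-off the comment describes: strict ordering requires serializing accesses (here via external synchronization, in CLHM via a single segment), whereas the multi-segment design trades a little ordering precision for concurrency.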
Are you sure that you aren't being too strict in your requirements?
> Upgrade ConcurrentLinkedHashMap
> -------------------------------
>
> Key: OFBIZ-3779
> URL: https://issues.apache.org/jira/browse/OFBIZ-3779
> Project: OFBiz
> Issue Type: Improvement
> Reporter: Ben Manes
> Assignee: Jacques Le Roux
> Priority: Trivial
> Fix For: SVN trunk
>
>
> The UtilCache class should use v1.0 of the CLHM library. While the previous
> version is fully functional, I am no longer supporting it. The new design
> should be faster and more reliable due to not needing to use a pseudo LRU
> algorithm for performance (which has degradation scenarios ). A true LRU is
> now supported with no lock contention issues. Please consider upgrading when
> convenient.
> http://code.google.com/p/concurrentlinkedhashmap/
> JavaDoc:
> http://concurrentlinkedhashmap.googlecode.com/svn/wiki/release-1.0-LRU/index.html