[ https://issues.apache.org/jira/browse/OFBIZ-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12871541#action_12871541 ]

Ben Manes commented on OFBIZ-3779:
----------------------------------

In any concurrent LRU policy you'll have some fuzziness, though CLHM reduces 
it more than other approaches do. The most common concurrent LRU caches use a 
per-segment LRU link chain, so the global order is lost and you give up 
efficiency / true LRU order, while reads become cheaper by splitting the 
lock. This type of cache is used with large data sets, e.g. in Oracle. The 
other common approach is to make writes fairly expensive by performing an 
O(n) scan, and to tag an element with its timestamp on reads (this is how 
soft references are maintained in LRU order). The LHM approach is fairly 
common due to its simplicity and the fact that the cache operations are 
generally not the bottleneck (e.g. memcached does this and spends most of its 
time on the network protocol). When performance is critical these caches are 
written in C, as in all of the previous examples.
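
Just to make the trade-off concrete, here is a minimal sketch of that LHM 
approach in Java (the class name is mine, not UtilCache or CLHM code): an 
access-ordered LinkedHashMap behind a single lock. It is strict LRU, but 
every read mutates the link chain, so all threads serialize on the lock.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // The "LHM approach": strict LRU, one lock, reads reorder the chain.
    public final class SynchronizedLruCache<K, V> {
        private final Map<K, V> map;

        public SynchronizedLruCache(final int capacity) {
            // accessOrder=true moves an entry to the tail on each get()
            this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > capacity;
                }
            };
        }

        public synchronized V get(K key) {
            return map.get(key);
        }

        public synchronized V put(K key, V value) {
            return map.put(key, value);
        }
    }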

The only way to have a perfect LRU policy in a concurrent setting is to use 
an LHM with a fair lock, which forces sequential ordering. All other 
approaches may show LRU characteristics in a single-threaded context, but the 
interleaving of operations / threads will break its purity. A fair lock is a 
severe performance hit.
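
As a sketch of what that would look like (again a hypothetical class, not 
something I'm proposing): the only change from the usual LHM wrapper is 
constructing the lock in fair mode, so threads acquire it in arrival order 
and the access order the map observes is truly sequential.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReentrantLock;

    // Perfect LRU under concurrency, at the cost of a fair lock.
    public final class FairLruCache<K, V> {
        private final ReentrantLock lock = new ReentrantLock(true); // fair
        private final Map<K, V> map;

        public FairLruCache(final int capacity) {
            this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > capacity;
                }
            };
        }

        public V get(K key) {
            lock.lock(); // FIFO hand-off preserves the true access order
            try {
                return map.get(key);
            } finally {
                lock.unlock();
            }
        }

        public V put(K key, V value) {
            lock.lock();
            try {
                return map.put(key, value);
            } finally {
                lock.unlock();
            }
        }
    }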

It's fair to argue that CLHM v1.0 weakens a _least recently used_ policy to a 
_less recently used_ policy, though the statistical error will be quite 
small. An application's expectation of a cache should be that it tries to 
evict the entries least likely to be needed again, reducing the occurrences 
of suffering a miss penalty. The LRU policy is one approach to that, but it 
is routinely beaten in studies because it does not account for frequency and 
is not scan resistant. The LIRS policy will provide a superior hit rate while 
still meeting the application's expectation. This version of CLHM is intended 
to provide the architectural structure for supporting LIRS while also 
achieving high performance.
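
To illustrate the scan resistance point with a toy example (not OFBiz code), 
a single pass over cold keys is enough to flush the entire hot working set 
out of a strict LRU cache:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ScanResistanceDemo {
        public static void main(String[] args) {
            final int capacity = 4;
            Map<Integer, String> lru =
                new LinkedHashMap<Integer, String>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(
                            Map.Entry<Integer, String> eldest) {
                        return size() > capacity;
                    }
                };

            // Hot working set, accessed repeatedly.
            for (int i = 0; i < capacity; i++) {
                lru.put(i, "hot");
                lru.get(i);
            }

            // One sequential scan of cold keys that will never be reused...
            for (int i = 100; i < 100 + capacity; i++) {
                lru.put(i, "cold");
            }

            // ...and the hot entries are gone: prints [100, 101, 102, 103].
            // A policy that accounts for frequency, like LIRS, keeps them.
            System.out.println(lru.keySet());
        }
    }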

You should definitely choose what works best for your needs, so using v1.0 
may not be best. Just make sure that you are making the correct trade-off 
between performance and efficiency while meeting the usage needs. I suspect 
that strict LRU ordering is not actually the application's requirement of a 
cache.
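
For anyone evaluating the upgrade, construction with the new Builder-based 
API looks roughly like this. I'm writing it from memory against the JavaDoc 
linked below, so treat the exact method names as an assumption and verify 
them there:

    import java.util.concurrent.ConcurrentMap;
    import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

    public class ClhmExample {
        public static void main(String[] args) {
            // A bounded, LRU-ordered concurrent map; entries are evicted
            // once the capacity is exceeded.
            ConcurrentMap<String, Object> cache =
                new ConcurrentLinkedHashMap.Builder<String, Object>()
                    .maximumWeightedCapacity(1000)
                    .build();

            cache.put("key", new Object());
            cache.get("key"); // records usage without blocking other readers
        }
    }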

> Upgrade ConcurrentLinkedHashMap
> -------------------------------
>
>                 Key: OFBIZ-3779
>                 URL: https://issues.apache.org/jira/browse/OFBIZ-3779
>             Project: OFBiz
>          Issue Type: Improvement
>            Reporter: Ben Manes
>            Assignee: Jacques Le Roux
>            Priority: Trivial
>             Fix For: SVN trunk
>
>
> The UtilCache class should use v1.0 of the CLHM library. While the previous 
> version is fully functional, I am no longer supporting it. The new design 
> should be faster and more reliable due to not needing to use a pseudo LRU 
> algorithm for performance (which has degradation scenarios). A true LRU is 
> now supported with no lock contention issues. Please consider upgrading when 
> convenient.
> http://code.google.com/p/concurrentlinkedhashmap/
> JavaDoc: 
> http://concurrentlinkedhashmap.googlecode.com/svn/wiki/release-1.0-LRU/index.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
