[
https://issues.apache.org/jira/browse/OFBIZ-3779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12871536#action_12871536
]
Adam Heath commented on OFBIZ-3779:
-----------------------------------
To me, LRU means that when an item needs to be removed, for *whatever* reason,
the least recently used item that qualifies is the one removed. This applies
both to an insert operation and to a size reduction. If that is not what is
occurring, then CLHM doesn't implement LRU. LinkedHashMap maintains this LRU
design. I could switch to that, and the test cases would behave identically
(except that it would then need correct synchronized blocks).
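Something along these lines would keep that behavior (a rough sketch only, not the actual UtilCache code; the class name, capacity defaults, and maxSize handling are placeholders):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Rough sketch; class name and capacity defaults are placeholders,
// not the actual UtilCache implementation.
public class LruMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruMap(int maxSize) {
        // accessOrder = true: iteration order runs from least to most
        // recently used, which is what gives true LRU behavior.
        super(16, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Invoked by put()/putAll(); returning true evicts the least
        // recently used entry once the map grows past maxSize.
        return size() > maxSize;
    }
}
{code}

LinkedHashMap is not thread-safe, so every put/get would have to happen under proper synchronization (synchronized blocks or Collections.synchronizedMap), which is the extra work mentioned above.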
The design of UtilCache is that it uses LRU eviction whenever the maxMemSize
parameter is set. CLHM 1.0 doesn't implement LRU in that sense, so we can't use it.
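As a concrete illustration of the size-reduction case (the insert case is what removeEldestEntry above covers), here is a rough, self-contained sketch with arbitrary keys, showing an access-ordered map being shrunk by dropping least-recently-used entries first:

{code:java}
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSemanticsDemo {
    public static void main(String[] args) {
        // Access-ordered LinkedHashMap: iteration runs from least to most
        // recently used. Keys and sizes here are arbitrary examples.
        Map<String, String> cache = new LinkedHashMap<>(16, 0.75f, true);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a"); // touch "a", so "b" is now the least recently used

        // Size reduction: shrink to 2 entries by dropping LRU entries first.
        int newMaxSize = 2;
        Iterator<Map.Entry<String, String>> it = cache.entrySet().iterator();
        while (cache.size() > newMaxSize && it.hasNext()) {
            it.next();
            it.remove(); // removes "b", the least recently used entry
        }
        System.out.println(cache.keySet()); // prints [c, a]
    }
}
{code}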
> Upgrade ConcurrentLinkedHashMap
> -------------------------------
>
> Key: OFBIZ-3779
> URL: https://issues.apache.org/jira/browse/OFBIZ-3779
> Project: OFBiz
> Issue Type: Improvement
> Reporter: Ben Manes
> Assignee: Jacques Le Roux
> Priority: Trivial
> Fix For: SVN trunk
>
>
> The UtilCache class should use v1.0 of the CLHM library. While the previous
> version is fully functional, I am no longer supporting it. The new design
> should be faster and more reliable because it no longer needs a pseudo-LRU
> algorithm for performance (which has degradation scenarios). A true LRU is
> now supported with no lock contention issues. Please consider upgrading when
> convenient.
> http://code.google.com/p/concurrentlinkedhashmap/
> JavaDoc:
> http://concurrentlinkedhashmap.googlecode.com/svn/wiki/release-1.0-LRU/index.html