[ https://issues.apache.org/jira/browse/JEXL-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17790801#comment-17790801 ]

Holger Sunke edited comment on JEXL-414 at 11/28/23 10:58 PM:
--------------------------------------------------------------

Nice test case, thank you.

When using the test as is, testSynchronized and testSpread perform nearly 
equally, while testConcurrent is a bit faster.

When setting HIT=500 I get far more telling results:

INFORMATION: testSynchronized : 5,523
INFORMATION: testSpread : 1,349
INFORMATION: testConcurrent : 1,248

The same remains true when playing with other settings (THREADS, SCRIPTS, 
CACHED): testConcurrent is typically slightly faster than testSpread, which in 
turn is far faster than testSynchronized.

testSynchronized produces the lowest CPU utilization, likely because it blocks 
more than the other tests.

 

So, all in all, I'd prefer to use the ConcurrentCache in our application, or 
the SpreadCache if introducing a dependency were an issue for me.

Would it be useful to set the concurrentlinkedhashmap-lru dependency scope to 
'provided'? I'm not sure about that scope's implications, though.
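
For illustration only, here is a minimal sketch of a dependency-free concurrent 
cache in the spirit of what testConcurrent exercises, assuming a plain 
ConcurrentHashMap with a crude size cap; the class and method names are 
hypothetical and this is not JEXL's actual ConcurrentCache or SpreadCache:

{code:java}
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch: a bounded concurrent cache with a lock-free read path.
 * Eviction is crude: once the size cap is reached the whole map is cleared,
 * trading LRU precision for simplicity and zero extra dependencies.
 */
public final class SimpleConcurrentCache<K, V> {
    private final int capacity;
    private final ConcurrentHashMap<K, V> map;

    public SimpleConcurrentCache(int capacity) {
        this.capacity = capacity;
        this.map = new ConcurrentHashMap<>(capacity);
    }

    public V get(K key) {
        return map.get(key); // no lock, no structural mutation on read
    }

    public V put(K key, V value) {
        if (map.size() >= capacity) {
            map.clear(); // a real LRU needs extra bookkeeping here
        }
        return map.put(key, value);
    }
}
{code}

The trade-off is eviction precision: reads never take a lock or mutate shared 
structure, but hitting the cap clears everything instead of evicting the least 
recently used entries; concurrentlinkedhashmap-lru (or a segmented/"spread" 
design) restores LRU behaviour at the cost of an extra dependency or more 
bookkeeping.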



> SoftCache may suffer from race conditions
> -----------------------------------------
>
>                 Key: JEXL-414
>                 URL: https://issues.apache.org/jira/browse/JEXL-414
>             Project: Commons JEXL
>          Issue Type: Bug
>    Affects Versions: 3.3
>            Reporter: Holger Sunke
>            Assignee: Henri Biestro
>            Priority: Major
>             Fix For: 3.3.1
>
>
> I found the SoftCache used from within the JEXL Engine class to be very 
> relevant for overall performance (depending on the application), but it may 
> also suffer from race conditions.
> While solid effort was taken to protect it from race conditions by 
> surrounding access with a ReadWriteLock, parallel read accesses actually 
> reach a LinkedHashMap constructed with accessOrder=true. This means that on 
> every potentially parallel, non-serialized invocation of LinkedHashMap#get, 
> the order of elements is modified to move the last accessed element to the 
> tail of the internal linked list and modCount is incremented (see the sketch 
> after this quoted description).
>  
> In our application, which renders web templates, we observe tens of 
> thousands of accesses to the SoftCache per page impression and dozens of 
> page impressions per second per JVM instance.
> I am not sure whether this is a result of the race condition claimed above, 
> but in heap dumps of that application I observed the LinkedHashMap used 
> within SoftCache having a size of ~9500 elements while SoftCache#size is set 
> to limit the cache to 5000 elements. Additionally, LinkedHashMap#modCount 
> shows arbitrary, huge positive or negative numbers, telling me it has 
> already overflowed after just a few hours of application uptime.
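
To make the reported pattern concrete, below is a small illustrative sketch 
(not the actual SoftCache source) of why a read lock is insufficient when the 
underlying LinkedHashMap is created with accessOrder=true: get() relinks the 
accessed entry and bumps modCount, so it is effectively a write.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Illustration of the access pattern described in the report (names are
 * hypothetical): accessOrder=true turns LinkedHashMap#get into a structural
 * modification, so guarding it with the read lock does not serialize it.
 */
public final class AccessOrderRace<K, V> {
    private final int capacity;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<K, V> map;

    public AccessOrderRace(int capacity) {
        this.capacity = capacity;
        // accessOrder=true: every get() moves the entry to the tail and bumps modCount
        this.map = new LinkedHashMap<K, V>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > AccessOrderRace.this.capacity;
            }
        };
    }

    public V getUnsafe(K key) {
        lock.readLock().lock(); // read lock allows concurrent callers...
        try {
            return map.get(key); // ...but get() still relinks entries unsynchronized
        } finally {
            lock.readLock().unlock();
        }
    }

    public V getSafe(K key) {
        lock.writeLock().lock(); // write lock serializes the relinking
        try {
            return map.get(key);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
{code}

With accessOrder=true, every lookup is structurally a modification, so either 
the write lock must be taken on get() as in getSafe(), or the map must be 
replaced by a structure whose reads are genuinely read-only (for example a 
ConcurrentMap).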


