I think I'm with you Mr. Pike.  Chris H.'s code modelled the basic locking 
pattern, but if the cache is handling locking we're going to have two solutions 
potentially fighting against each other.  We know we have failure scenarios 
with the attempted locking; can we drop the locking and use the JMeter tests to 
pound it and see if it shows evidence of an issue?

"The programmer … works only slightly removed from pure thought-stuff.
He builds his castles in the air, from air, creating by exertion of the 
imagination."
— Fred Brooks

Shawn Smith
Director of Software Engineering
Administrative Information Services
Penn State University
814-321-5227
[email protected]

----- Original Message -----
From: "CHRISTOPHER PIKE" psu.edu>
To: [email protected]
Sent: Tuesday, March 8, 2016 3:08:55 PM
Subject: Re: LDAP Connection Management

I tried implementing the suggested code, and while it works in Eclipse, when 
running the service the first request passes, but all subsequent requests hang 
while getting the object from the cache.

After looking more closely at the caching code, it is already using a blocking 
cache, which by default returns null on the first get and blocks indefinitely 
on subsequent gets until the cache is populated. Perhaps the combination of our 
locks and ehcache's locks is causing issues? I do think the blocking cache 
makes our lock code unnecessary, as the cache is already handling multiple 
threads. Thoughts?
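The hang described above is consistent with how a blocking cache behaves: the first get for a missing key returns null and holds a per-key lock, and later gets for that key block until a put releases it. Here is a minimal sketch of that pattern in plain Java; it is illustrative only, not ehcache's actual BlockingCache implementation, and the class name is made up.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Minimal sketch of a blocking cache: the first get() for a missing key
 * returns null and keeps that key's lock held; later get() calls for the
 * same key block until put() stores a value and releases the lock.
 * Illustrative only; not ehcache's real code.
 */
public class BlockingCacheSketch {
    private final Map<String, Object> values = new ConcurrentHashMap<>();
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public Object get(String key) {
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();                 // blocks here if another thread is loading
        Object value = values.get(key);
        if (value != null) {
            lock.unlock();           // cache hit: release immediately
            return value;
        }
        // Cache miss: return null but KEEP the lock held. The caller is
        // expected to compute the value and call put(), which releases it.
        return null;
    }

    public void put(String key, Object value) {
        values.put(key, value);
        ReentrantLock lock = locks.get(key);
        if (lock != null && lock.isHeldByCurrentThread()) {
            lock.unlock();           // wake any threads blocked in get()
        }
    }
}
```

This is also why a caller that gets null but never calls put (for example, because an exception interrupts the load) leaves every other thread blocked forever, which matches the "subsequent requests hang" symptom.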

I would like to add the ability to configure the max timeout on the blocking 
cache so indefinite blocks can't happen. If the timeout triggers on multiple 
gets to the blocking cache, it is possible that loadGraph could be called 
simultaneously. Perhaps that method needs to be synchronized for this hopefully 
unlikely scenario.
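The timeout-plus-synchronized-load idea could look roughly like the sketch below: a get that waits at most a configurable timeout for a concurrent loader, falling back to a synchronized load so that simultaneous timeouts still produce only one loadGraph call. All names here (TimedGraphCache, maxTimeoutMillis, loadGraph) are hypothetical stand-ins, not Fortress or ehcache APIs.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Sketch of the timeout idea: get() waits at most maxTimeoutMillis for
 * another thread to populate the entry; on timeout it falls back to a
 * synchronized load so concurrent timed-out gets build the graph once.
 * Names are illustrative, not real Fortress/ehcache APIs.
 */
public class TimedGraphCache {
    private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();
    private final ReentrantLock loadLock = new ReentrantLock();
    private final long maxTimeoutMillis;

    public TimedGraphCache(long maxTimeoutMillis) {
        this.maxTimeoutMillis = maxTimeoutMillis;
    }

    public Object get(String key) {
        Object value = cache.get(key);
        if (value != null) {
            return value;                       // fast path: already populated
        }
        try {
            // Wait up to maxTimeoutMillis for a concurrent loader to finish.
            if (!loadLock.tryLock(maxTimeoutMillis, TimeUnit.MILLISECONDS)) {
                return loadSynchronized(key);   // timed out: last-resort load
            }
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            return loadSynchronized(key);
        }
        try {
            value = cache.get(key);             // re-check after acquiring lock
            if (value == null) {
                value = loadGraph(key);
                cache.put(key, value);
            }
            return value;
        } finally {
            loadLock.unlock();
        }
    }

    // Guards the "multiple gets time out at once" case: computeIfAbsent
    // ensures loadGraph runs at most once per key even here.
    private synchronized Object loadSynchronized(String key) {
        return cache.computeIfAbsent(key, this::loadGraph);
    }

    // Stand-in for the real graph build; hypothetical.
    private Object loadGraph(String key) {
        return "graph:" + key;
    }
}
```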


----- Original Message -----
From: "Emmanuel Lécharny" <[email protected]>
To: [email protected]
Sent: Tuesday, March 8, 2016 1:19:06 PM
Subject: Re: LDAP Connection Management

Le 08/03/16 17:22, Shawn McKinney a écrit :
>> On Mar 8, 2016, at 8:40 AM, Christopher Harm psu.edu> wrote:
>>
>> The Javadocs state that upgrading from a readLock to a writeLock is not 
>> possible, and I've been reading other material stating that this operation is 
>> deadlock prone.  The function is pretty simple, in that it doesn't do anything 
>> inside the read lock that needs to carry over once the write lock is acquired. 
>> I suggest that we de-couple the read and write lock scopes as such....
>>
>>   private static SimpleDirectedGraph getGraph(String contextId) {
>>     ReadWriteLock hierLock = HierUtil.getLock(contextId, HierUtil.Type.ROLE);
>>     String key = getKey(contextId);
>>
>>     SimpleDirectedGraph graph = null;
>>     hierLock.readLock().lock();
>>     try {
>>       graph = (SimpleDirectedGraph) roleCache.get(key);
>>       if (graph != null) {
>>         return graph;
>>       }
>>     } finally {
>>       hierLock.readLock().unlock();
>>     }
>>
>>     hierLock.writeLock().lock();
>>     try {
>>       graph = (SimpleDirectedGraph) roleCache.get(key);
>>       if (graph != null) {
>>         return graph;
>>       }
>>
>>       graph = loadGraph(contextId);
>>
>>       return graph;
>>     } finally {
>>       hierLock.writeLock().unlock();
>>     }
>>   }
> +1 looks good.  (Much simpler)  

I think Shawn is right when he said that we first should check if the
protection against concurrent access is really needed.

That's a bit foggy for me (it was 2 years ago), and I'm afraid I suggested 
protecting this code just for the sake of it. I'd like to get a second review 
of this specific piece of code, which *may* not need synchronization at all...
