Quoting Martin Desruisseaux <[EMAIL PROTECTED]>:

> ObjectCacheEntry holds a ReadWriteLock. A ReadWriteLock is used, for
> example, with HashMap in order to allow multiple concurrent read operations
> (on different keys) while allowing only one write operation at a time.
> 
> But with ObjectCacheEntry, the ReadWriteLock is used on an entry-by-entry
> level, which seems useless to me. I understand that we want to lock on an
> entry-by-entry level (allowing concurrent operations while we are creating a
> CRS), but ObjectCacheEntry needs only a write lock for this task.
> 
> ObjectCacheEntry is not in the "many read operations with few write
> operations" case like HashMap, since it can return only one value. Every use
> case I found in the code is of the form (directly or indirectly):
> 
>      Object foo = cache.get(key);  // Read lock here

The read lock is released there as well.

>      if (foo == null) {
>          cache.writeLock(key);     // Write lock here
>          foo = cache.peek(key);    // Another write lock here

We are allowed to re-enter the write lock, so this is good.

>          if (foo == null) {
>              // Do some stuff
>          }
>      }
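For illustration, here is a minimal sketch of what a per-entry cache behind that pattern might look like. The method names (get, peek, writeLock, writeUnLock, put) follow the discussion, but the bodies are my own assumptions, not the actual GeoTools code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: one ReentrantReadWriteLock per entry.
public class ObjectCacheSketch {
    private static final class Entry {
        final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        volatile Object value;
    }

    private final Map<Object, Entry> entries = new ConcurrentHashMap<>();

    private Entry entry(Object key) {
        return entries.computeIfAbsent(key, k -> new Entry());
    }

    /** Blocks while a writer holds this entry, then returns the cached value (possibly null). */
    public Object get(Object key) {
        Entry e = entry(key);
        e.lock.readLock().lock();       // read lock acquired here ...
        try {
            return e.value;
        } finally {
            e.lock.readLock().unlock(); // ... and released before returning to the caller
        }
    }

    /** Takes the entry's write lock; held until writeUnLock(key). */
    public void writeLock(Object key) {
        entry(key).lock.writeLock().lock();
    }

    /** Re-enters the (reentrant) write lock, so it is safe while holding writeLock(key). */
    public Object peek(Object key) {
        Entry e = entry(key);
        e.lock.writeLock().lock();
        try {
            return e.value;
        } finally {
            e.lock.writeLock().unlock();
        }
    }

    /** Stores the freshly computed value; callers hold the write lock while doing this. */
    public void put(Object key, Object value) {
        entry(key).value = value;
    }

    public void writeUnLock(Object key) {
        entry(key).lock.writeLock().unlock();
    }
}
```

With a class shaped like this, the quoted get / writeLock / peek idiom is the usual double-checked pattern, with the per-entry write lock standing in for the synchronized block.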
> 
> So:
> 
> 1) Either the "foo" value was already cached (in which case
>     the 'get' method performs only a very short lock, since it
>     executes nothing more than "return value" - we could even
>     use no lock at all for such a trivial task, since returning
>     a single reference is already an atomic operation on the JVM).

Except for one very important point: when a writer is updating the entry, the
get method is going to block until a value is ready.


> 2) Either the "foo" value is already in the process of being
>     computed, in which case "get" will block anyway (or return
>     "null" if not synchronized) and "peek" will block too.

Peek will not block; we can re-enter the write lock more than once (so methods
can be recursive if needed).
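That re-entrancy is exactly what java.util.concurrent's ReentrantReadWriteLock provides: the thread that already owns the write lock may take it again without blocking. A small self-contained demo:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.writeLock().lock();   // first acquisition, as in writeLock(key)
        lock.writeLock().lock();   // re-entrant acquisition, as in peek(key) - does not block
        System.out.println("hold count = " + lock.writeLock().getHoldCount()); // prints 2
        lock.writeLock().unlock(); // must unlock once per acquisition
        lock.writeLock().unlock();
    }
}
```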

> I believe there is no need for ObjectCacheEntry to hold a ReadWriteLock. This
> class could probably be completely removed in favor of (maybe) AtomicReference
> and ConcurrentHashMap.

I wanted to block the readers while the value was being determined; the second
check (the peek) is for when more than one writer gets started ...

So you are right in your review of the code, just not about the intent.
Specifically, an AtomicReference and ConcurrentHashMap would always be
consistent (and thus thread safe).

But we are after something more than just thread safety - we are trying to
control what the threads are up to and prevent duplication of work... and that
is what we have.
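To make that concrete, here is an illustrative sketch (my own names, not the GeoTools code) of how the write lock plus the second check keeps an expensive construction from running twice when two threads race for the same entry; a plain ConcurrentHashMap would stay consistent but could let both threads do the work:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Two threads race to create the same cached value; the per-entry write lock
// plus the second check ensures the construction runs exactly once.
public class DuplicationDemo {
    static final ReentrantReadWriteLock entryLock = new ReentrantReadWriteLock();
    static volatile Object value;
    static final AtomicInteger creations = new AtomicInteger();

    static Object getOrCreate() {
        entryLock.readLock().lock();       // get(key): short read lock ...
        Object v = value;
        entryLock.readLock().unlock();     // ... released immediately
        if (v == null) {
            entryLock.writeLock().lock();  // writeLock(key): block other writers
            try {
                v = value;                 // peek(key): the second check
                if (v == null) {
                    creations.incrementAndGet();
                    v = new Object();      // the expensive "do some stuff"
                    value = v;
                }
            } finally {
                entryLock.writeLock().unlock();
            }
        }
        return v;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(DuplicationDemo::getOrCreate);
        Thread b = new Thread(DuplicationDemo::getOrCreate);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println("creations = " + creations.get()); // prints 1
    }
}
```

Whichever thread takes the write lock second finds the value already set by its peek, so the duplicate work is skipped rather than merely made consistent afterwards.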

Cheers,
Jody 



_______________________________________________
Geotools-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/geotools-devel
