On Jul 10, 2006, at 12:41 PM, Oscar Maire-Richard wrote:

Hi,
I would like to know your suggestions for what I think is a common problem:

For each request my application obtains an identifier (not the PK, but it must be unique). If a DataObject with this identifier already exists, the request uses it; otherwise it creates a new one. It then does some processing (with other DataObjects) and commits everything. I am using a DataContext per session. My problem is how to avoid DataObjects with duplicate identifiers when several sessions send requests simultaneously.

An obvious solution is to synchronize the whole process until the commit is done; however, the processing takes a long time, and I would like to parallelize it to improve performance.

Thanks,

Oscar

Is the identifier something that is derived from the request values?

In any event you'll have to do some synchronization; you just need to make it smart, so that it only locks out threads that access the same identifier. This means that the identifier instances themselves are the prime candidates to serve as synchronization objects.

For example, you can have an application-scoped HashMap that keeps the identifiers currently being processed. A request filter may look like this:

String id = ... // generate id
Map ids = ... // get an application shared map

String lockId;

synchronized(ids) {
  // if another request is already processing this identifier, reuse
  // the instance stored in the map so that both threads lock on the
  // same object; otherwise register our own instance as the lock

  lockId = (String) ids.get(id);

  if (lockId == null) {
     lockId = id;
     ids.put(id, id);
  }
}

synchronized(lockId) {
  // do processing; only requests that carry the same identifier block
  // each other here, so unrelated requests proceed in parallel
}

Replacing HashMap with some sort of LRUMap (such as the one in Jakarta commons-collections) should prevent unlimited memory growth.
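
To make that concrete, here is a minimal sketch of how the shared map could be set up once at application startup, assuming commons-collections 3.x and a standard Servlet environment; the listener class name, the "processedIds" attribute key, and the capacity of 1000 are just illustrative:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.apache.commons.collections.map.LRUMap;

public class IdMapListener implements ServletContextListener {

    // illustrative capacity; the least recently used entries are evicted
    // once it is exceeded, so the shared map cannot grow without bound
    private static final int MAX_IDS = 1000;

    public void contextInitialized(ServletContextEvent event) {
        // application-scoped map of identifiers currently being processed;
        // the request filter fetches it back via getAttribute("processedIds")
        event.getServletContext().setAttribute("processedIds", new LRUMap(MAX_IDS));
    }

    public void contextDestroyed(ServletContextEvent event) {
        event.getServletContext().removeAttribute("processedIds");
    }
}

Note that LRUMap is not thread-safe and even get() updates its internal access order, so all access to the map should stay inside the synchronized(ids) block shown in the filter above.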

Andrus

