We actually are in a distributed environment.

We actually use JavaGroups to broadcast the invalidation message to the other
servers. This was additional code I had to write, but that's how I solved
the problem. Each server maintains its own version of the hashmap, and they
differ from server to server. When all objects associated with a key are to
be invalidated, the server just sends an invalidate message, with the key as
the payload, to all the servers in the same JavaGroups group. In our
scenario, an Oracle trigger fires whenever a table is updated and stores the
table's name in a hashset until a commit or rollback comes in. When the
commit or rollback comes in, the database server makes a call (an HTTP GET)
to one of the servers in the cluster. We actually have a wrapper around the
commit and send a single combined message for all the tables that have been
updated in that transaction. That application server looks up each table
name in the hashmap, extracts the set of JCS keys, and proceeds to remove
these objects from the cache. It also sends a message to the other servers
containing the invalidated table names. Each server in the cluster receives
that message and proceeds with its own lookup and subsequent cache removals.
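A minimal sketch of the index described above, with all names and structure my own assumptions rather than the actual code (the real setup broadcasts table names over JavaGroups and removes entries from JCS, not from a plain map):

```java
import java.util.*;

// Hypothetical stand-in for the table-name -> JCS-key index.
public class TableKeyIndex {
    // table name -> set of cache keys built from that table's rows
    private final Map<String, Set<String>> keysByTable = new HashMap<>();
    // stand-in for the JCS region: cache key -> cached object
    private final Map<String, Object> cache = new HashMap<>();

    public void put(String table, String key, Object value) {
        cache.put(key, value);
        keysByTable.computeIfAbsent(table, t -> new HashSet<>()).add(key);
    }

    // Called when the database reports a committed update to `table`.
    // Removes every key derived from that table and returns the evicted
    // keys (in the real setup the table names are what gets broadcast).
    public Set<String> invalidateTable(String table) {
        Set<String> keys = keysByTable.remove(table);
        if (keys == null) {
            return Collections.emptySet();
        }
        for (String key : keys) {
            cache.remove(key);
        }
        return keys;
    }

    public boolean contains(String key) {
        return cache.containsKey(key);
    }
}
```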

We are still working out the bugs, but it does function and doesn't seem to
hurt cache performance. We are considering limiting the list of cacheable
tables, since some tables update too often to effectively cache anything.
This will prevent excessive callbacks from the database server to the
application servers.
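The "cacheable tables" restriction could be as simple as a whitelist consulted before any callback or broadcast goes out (a sketch under that assumption; the class and table names are invented):

```java
import java.util.*;

// Hypothetical filter: only tables worth caching trigger invalidation traffic.
public class InvalidationFilter {
    private final Set<String> cacheableTables;

    public InvalidationFilter(Set<String> cacheableTables) {
        this.cacheableTables = cacheableTables;
    }

    // Keep only the tables the cluster actually caches, so a transaction
    // that touches nothing cacheable produces no callback at all.
    public Set<String> filter(Set<String> updatedTables) {
        Set<String> relevant = new HashSet<>(updatedTables);
        relevant.retainAll(cacheableTables);
        return relevant;
    }
}
```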

Good luck.

Wayne



-----Original Message-----
From: Fulco Houkes [mailto:[EMAIL PROTECTED]
Sent: Saturday, July 12, 2003 5:34 AM
To: JCS user mailinglist
Subject: Re: composite keys and cache entries keys retrieval


Hi all,

First of all, I would like to thank you all for your prompt responses. I
really appreciate them, and I have been considering all the suggestions
carefully.

Wayne Young proposed an additional hashmap, but this only works in a
non-distributed environment. When using a remote cache (as I need to) to
synchronize local caches, the additional hashmap will not receive the entry
invalidation events or the new elements coming from the remote cache. So you
might end up with invalidated cache entry keys still "polluting" the
additional hashmap while no longer existing in the JCS cache, or with
entries missing from the additional hashmap altogether. Furthermore, I'm not
avoiding iterating over the second hashmap. For example, if I'm using an
additional hashmap to track the second item, as proposed, I will have to
scan the JCS keys stored in the hashmap to know whether there is a key
holding a first item that has to be invalidated. When that is the case, I
will have to remove this key from the additional hashmap as well.

As Aaron pointed out, and he's right, I do not want to serve a stale page,
and therefore pageIDs have priority over userIDs; that's already what I was
doing with hierarchical removal (pageid+":"). Ideally, though, I would like
to do the same when a user logs off or his/her session expires. I've already
implemented automatic expiration on elements, so as Aaron suggested, I might
just base the cleanup on element expiration and not offer the ability to
force a userID cleanup.
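The asymmetry above can be sketched with a plain-Java stand-in for JCS's hierarchical removal over pageID+":"+userID keys (the class and methods are mine, not JCS API): the pageID prefix removal is natural, while evicting by userID means matching on the second key component across the whole cache.

```java
import java.util.*;

// Hypothetical stand-in for composite "pageId:userId" keys in one region.
public class CompositeKeyCache {
    private final Map<String, Object> entries = new HashMap<>();

    public void put(String pageId, String userId, Object page) {
        entries.put(pageId + ":" + userId, page);
    }

    // Mirrors the hierarchical removal in use: dropping "pageId:"
    // evicts that page for every user at once. Returns the eviction count.
    public int removeByPageId(String pageId) {
        int before = entries.size();
        entries.keySet().removeIf(k -> k.startsWith(pageId + ":"));
        return before - entries.size();
    }

    // The awkward case: userId is the second component, so evicting a
    // user means scanning every entry in the cache.
    public int removeByUserId(String userId) {
        int before = entries.size();
        entries.keySet().removeIf(k -> k.endsWith(":" + userId));
        return before - entries.size();
    }

    public int size() {
        return entries.size();
    }
}
```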

I've also been looking, as Aaron suggested, into using a region for each
pageID. But it's unclear to me how these regions behave in a distributed
environment:

- First of all, if I'm right, regions can only be created through the static
CacheAccess.defineRegion(..) methods, which return a CacheAccess instance,
or directly by instantiating a CacheAccess object. Once such a region has
been created, how do I associate it with the JCS cache?

- I've looked into using the CacheAccess constructor to associate the same
CompositeCache with all the regions, but it's of no use, as the put() and
remove() methods of CacheAccess are redirected to the CompositeCache
removal methods. Therefore, creating a region for each pageID will not give
me the ability to remove userIDs simply by calling remove(userID+":"), as
all the element keys in the composite cache will start with pageID+":"!

- Following from the last point, it would be necessary to have a distinct
composite cache for each region. This would force me to handle the regions
myself, and I would lose the concept of a distributed cache. Or is there any
way to associate regions backed by distinct composite caches with a JCS
cache, in order to distribute the regions to the other local caches? (I
don't think so.)

- If it is possible to associate multiple regions backed by distinct
composite caches with a JCS cache, will all the other local JCS caches in
the distributed environment be updated when a new region is created (or
removed or updated) in one of the local caches?
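To make the trade-off in the points above concrete, here is a plain-Java stand-in for the region-per-pageID idea (not real JCS API, and it ignores the distribution problem entirely, which is exactly the part in doubt): dropping a pageID becomes a single map removal, but evicting a userID forces a walk over every region.

```java
import java.util.*;

// Hypothetical sketch: each inner map plays the role of one region.
public class RegionPerPage {
    // pageId -> (userId -> rendered page)
    private final Map<String, Map<String, Object>> regions = new HashMap<>();

    public void put(String pageId, String userId, Object page) {
        regions.computeIfAbsent(pageId, p -> new HashMap<>()).put(userId, page);
    }

    // Cheap: a stale page invalidates one whole region at once.
    public void removePage(String pageId) {
        regions.remove(pageId);
    }

    // Expensive: a user logout has to touch every region.
    public void removeUser(String userId) {
        for (Map<String, Object> region : regions.values()) {
            region.remove(userId);
        }
    }

    public int count() {
        int n = 0;
        for (Map<String, Object> region : regions.values()) {
            n += region.size();
        }
        return n;
    }
}
```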


Finally, the concept of multiple-key indexing, as Hanson Char suggested,
would be really nice ;)

Thanks for all,
Fulco





---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
