<vendor>

> > > The whole idea of optimisation by caching in the JVM to avoid DB access and
> > > (re)construction of objects from data is somewhat suspect. It can and will
> > > be done, but it only works in special cases where there is a clean
> > > relationship between database rows and EBs and is fundamentally
> > > non-scaleable to multiple JVMs and clustered systems.
> >
> > GemStone/J's caching capability is based on an integrated persistent object
> > store, accessible concurrently - and transactionally - from a large number
> > of independent Java VMs on one or more distinct machines, via shared
> > memory. Yes, the relationship between object models and the relational data
> > they represent can be tricky to establish and synchronize, but once cached,
> > our system does allow scaleable access to the cache from multiple JVMs and
> > clustered systems.


Ian McCallion wrote:

> This is excellent. I had assumed that the OODBMSs would have built a
> scaleable object cache for their own database. However if I understand
> correctly, with Gemstone you can also use the cache to hold objects
> constructed from relational data. Presumably the objects are serialised to
> and from multiple JVMs on multiple systems as needed. Presumably too the
> O-R mapping is taken completely out of the Java and EJB evironment so that
> the relational DB appears like an OODB.
>
> Have I got the right idea? If not I'd appreciate a bit more clarification.

I wish I could claim that we've unearthed the Holy Grail of caching (I
can't help but visualize John Cleese when I write that...), but in its
current form the mapping and synchronizing of data into the cache requires
coding, either to low-level APIs like JDBC, or via an O/R mapping
technology. Once mapped and loaded, the cache is accessed from multiple,
possibly remote, JVMs through shared memory. No serialization is involved.
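
To make the coding step a bit more concrete, here's a minimal sketch of
the kind of JDBC mapping code involved. The Customer class, the customers
table, and the CustomerCache wrapper are hypothetical, and a plain
in-process map stands in for the shared-memory store:

    import java.sql.*;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical domain object reconstructed from a relational row.
    class Customer {
        final int id;
        final String name;
        Customer(int id, String name) { this.id = id; this.name = name; }
    }

    // Stand-in for the shared object cache. In GemStone/J the cached
    // objects would live in the shared persistent store, visible to every
    // attached VM; here a plain in-process map marks the spot.
    class CustomerCache {
        private final Map<Integer, Customer> byId = new ConcurrentHashMap<>();

        // Map relational rows to Java objects and load them into the cache.
        void loadFrom(Connection db) throws SQLException {
            try (Statement stmt = db.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT id, name FROM customers")) {
                while (rs.next()) {
                    put(new Customer(rs.getInt("id"), rs.getString("name")));
                }
            }
        }

        void put(Customer c) { byId.put(c.id, c); }
        Customer get(int id) { return byId.get(id); }
    }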

As with any cache, application designers do have to make some choices about
how to use the cache, and how and when to synchronize it with the backing
data. For read-only browsing applications which don't require immediate
real-time cache updates, it's straightforward to update the cache via
database triggers or a periodic synchronizing process. The result is simple
to use and performant, since the browsing application works directly
against Java objects, never manipulating relational structures or
performing fetch/store operations.
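
As a rough illustration of the periodic approach, the synchronizing
process can be little more than a scheduled job that re-runs the mapping
code. The JDBC URL and the five-minute interval below are invented, and
the CustomerCache is the hypothetical one sketched above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical periodic synchronizer: every few minutes, re-map the
    // relational data into the cache so browsing applications always work
    // against reasonably fresh Java objects and never touch the database.
    class CacheRefresher {
        public static void main(String[] args) {
            CustomerCache cache = new CustomerCache();
            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                // Bulk reload; a trigger-driven scheme would instead push
                // only the rows that actually changed.
                try (Connection db = DriverManager.getConnection(
                        "jdbc:somedb://dbhost/appdb", "user", "password")) {
                    cache.loadFrom(db);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }, 0, 5, TimeUnit.MINUTES);
        }
    }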

When transactions update data, the application designer has to choose
whether to write those updates to the cache, with delayed rollup to the
backing data, or to write them to the backing data, with a parallel update
of the cache. There isn't a single right answer for every application.
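
In code, the two strategies look roughly like this, again using the
hypothetical CustomerCache from above; which one fits depends on how much
the cache and the backing data are allowed to drift apart:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Hypothetical updater showing the two write strategies side by side.
    class CustomerUpdater {
        private final CustomerCache cache;
        private final Queue<Customer> pendingRollup =
                new ConcurrentLinkedQueue<>();

        CustomerUpdater(CustomerCache cache) { this.cache = cache; }

        // Strategy 1: write to the cache now; a separate rollup job drains
        // pendingRollup into the backing database later.
        void updateCacheFirst(Customer c) {
            cache.put(c);
            pendingRollup.add(c);
        }

        // Strategy 2: write to the backing database now and update the
        // cache in the same step, so the two never drift apart.
        void updateDatabaseFirst(Connection db, Customer c) throws SQLException {
            try (PreparedStatement ps = db.prepareStatement(
                    "UPDATE customers SET name = ? WHERE id = ?")) {
                ps.setString(1, c.name);
                ps.setInt(2, c.id);
                ps.executeUpdate();
            }
            cache.put(c);
        }
    }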

However transactions are done, though, the objects in the cache are always
available to all JVMs in the system, regardless of where they're running.

</vendor>

    Marc San Soucie
    GemStone Systems, Inc.
    Beaverton, Oregon
    [EMAIL PROTECTED]
