In general, I wouldn't mind having the level 2 caches turned on by default.
My concern is that we don't want first-time users to get unexpected
results.  As you mentioned in your previous reply, if we select default
values for the caches that make stale reads unlikely while still helping
general performance, then I'm okay with the idea.  But if these caches
cause our users to question their results or force them to tweak the
cache settings for their first applications, then I would prefer to go
the safe route and leave the caches turned off by default.
All that I know about these caches at this point is what you have documented
in your OpenJPA manual, so I don't have much personal experience with them.

The goal should be to have a pleasant first-time experience with OpenJPA.


On 8/24/06, Abe White <[EMAIL PROTECTED]> wrote:

> I was looking for some documentation about what the default cache
> configuration settings would be, and I'm not seeing it there.

The defaults are:
- Data cache maintains hard refs to 1000 PCData objects, where a
PCData holds the cached data for a single persistence-capable object.
- When the 1000 object limit is exceeded, we move random objects to a
soft backing map.  This map does not have a size limit, other than
what the JVM decides to GC.  The reason we evict randomly is that LRU
eviction requires more synchronization.
- Query cache has the same defaults (1000 hard refs, soft backing
map), but each entry is a list of oids for matching objects, or
object/object[] in the case of projections.

All this is user-configurable, of course.
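
For readers who want to see what that tuning looks like, a rough sketch
of the corresponding OpenJPA configuration properties follows; the exact
plugin property names should be checked against the manual for your
version:

```properties
# Enable the data cache; keep hard references to up to 1000 objects
# and let evicted entries drop into an unbounded soft-reference map.
openjpa.DataCache=true(CacheSize=1000, SoftReferenceSize=-1)

# Query cache with the same sizing defaults.
openjpa.QueryCache=true(CacheSize=1000)

# A RemoteCommitProvider must be set when the data cache is on;
# "sjvm" is the single-JVM (in-process) provider.
openjpa.RemoteCommitProvider=sjvm
```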

> 1) How much memory would the default 'cache on' setting consume?
> Are people going to easily receive OOM errors with the default
> settings?  (I doubt it, but thought it should be mentioned.)

The above defaults shouldn't cause OOM errors in most JVMs.

> 2) What's the default concurrency strategy?  My wife and I making
> an ATM withdrawal or deposit at the same time makes me nervous the
> farther away from transactional we get.

We bypass the cache during non-optimistic transactions, so DB locking
isn't affected.  And the standard optimistic checks ensure data
integrity in optimistic transactions.  If you start using the
cache in a distributed environment or among multiple
EntityManagerFactories you have to consider possible stale reads if
the cache invalidation notification from one EMF hasn't reached the
others yet, but that's not the common case.
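
For the distributed case, OpenJPA's remote commit providers carry the
invalidation notifications between EMFs.  A sketch using the TCP
provider (the addresses below are placeholders, not real hosts):

```properties
# Broadcast commit notifications to peer EMFs over TCP so their
# caches can evict stale entries.  Addresses are hypothetical.
openjpa.RemoteCommitProvider=tcp(Addresses=10.0.0.1;10.0.0.2)
```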
