Aaron Mulder wrote:
I think you're OK having separate copies of the Oracle JAR loaded in
separate applications. I believe the problem with Derby is that it is
(or can be) an in-memory database, and so different copies of the JAR
in memory would have unexpected effects.
=== SNIP ===
Hi -
I want to try to clarify why Derby acts this way. Basically, it is
because the JDBC driver *is* the database engine, so there is a lot more
going on with Derby than with other JDBC drivers. So what happens? When
you load the Derby driver for the first time within a JVM, all the DBMS
caches, the lock manager, and the other threads and memory structures
are established. The term 'embedded database' stems from this behavior:
the DBMS runs in the same process space as the application using it, the
JVM.
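A minimal sketch of what that first load looks like, assuming derby.jar is on the classpath (the database name "demoDB" is just an illustration):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class EmbeddedDemo {
    public static void main(String[] args) throws Exception {
        // The first connection boots the entire engine -- caches, lock
        // manager, background threads -- inside this JVM.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:derby:demoDB;create=true");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE greetings (msg VARCHAR(32))");
        }
        // Shutting the embedded engine down is also done through JDBC;
        // Derby reports a successful full shutdown as SQLState XJ015.
        try {
            DriverManager.getConnection("jdbc:derby:;shutdown=true");
        } catch (SQLException expected) {
            // XJ015 here means the engine shut down cleanly.
        }
    }
}
```

Note that nothing leaves the process: the CREATE TABLE is executed by engine code running on the caller's own thread.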
Contrast this with a DBMS that runs as a server. The DBMS is started in
some way and waits, ready to accept connections. It runs in its own,
isolated address space, where the 'heavy duty' processing happens. The
primary role of a type 4 JDBC driver in this architecture is to convert
JDBC calls into the network protocol used by the DBMS. There should be
no problem having two threads converting JDBC calls and sending the
packets to the DBMS server process.
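For comparison, a sketch of the client/server shape using Derby's own Network Server, assuming derbyclient.jar is on the classpath and a server is already listening on the default port 1527 (the database name is again illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ClientDemo {
    public static void main(String[] args) throws Exception {
        // The network client driver does no query processing itself; it
        // only translates JDBC calls into DRDA packets and ships them to
        // the server process, which does the real work in its own
        // address space.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:derby://localhost:1527/demoDB;create=true")) {
            System.out.println("connected to "
                + conn.getMetaData().getDatabaseProductName());
        }
    }
}
```

Two such clients in two classloaders are harmless, because each holds only lightweight protocol-conversion state.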
With Derby, JDBC is the native API, so no conversion of any kind is
needed. Since the Derby engine is right there (embedded), it goes to
work immediately on the SQL request. The 'heavy duty' processing goes
on in the same address space as the application. For Derby to function
properly, all its classes must have access to all the internal
structures of the DBMS. Classloaders can prevent this from happening.
Derby is at the mercy of the environment that loads it. If the calling
routine loads Derby in a classloader that cannot be accessed by Derby
objects in other classloaders, or vice versa, unexpected failures, like
this one, will occur.
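The underlying JVM rule is worth seeing in isolation: the same class file loaded by two unrelated classloaders yields two distinct types that cannot see each other's internals. This self-contained sketch (it needs a JDK, since it uses the in-process compiler; names like Foo and IsolationDemo are just for illustration) shows the effect:

```java
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class IsolationDemo {
    // Compile a trivial class, then load it through two classloaders
    // that share no parent able to see it.
    public static Class<?>[] demo() throws Exception {
        Path dir = Files.createTempDirectory("isolation");
        Path src = dir.resolve("Foo.java");
        Files.write(src, "public class Foo {}".getBytes());
        // Requires a JDK; the compiler is not part of a bare JRE.
        ToolProvider.getSystemJavaCompiler()
                    .run(null, null, null, src.toString());
        URL[] cp = { dir.toUri().toURL() };
        ClassLoader a = new URLClassLoader(cp, null); // null parent = bootstrap
        ClassLoader b = new URLClassLoader(cp, null);
        return new Class<?>[] { a.loadClass("Foo"), b.loadClass("Foo") };
    }

    public static void main(String[] args) throws Exception {
        Class<?>[] c = demo();
        // Same bytes on disk, but the JVM treats them as distinct types.
        System.out.println("same class object: " + (c[0] == c[1]));       // false
        System.out.println("assignable: " + c[0].isAssignableFrom(c[1])); // false
    }
}
```

Substitute Derby's engine classes for Foo and you get exactly the failure described above: one copy of the engine cannot touch the singletons and caches held by the other.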
HTH, Stan