Jason Boehle wrote:
On 9/19/06, Stanley Bradbury <[EMAIL PROTECTED]> wrote:
Jason Boehle wrote:
> I am using Derby embedded in an application.  If the database is
> shutdown via opening a connection to
> jdbc:derby:/path/to/db;shutdown=true, what should happen to existing
> open connections to that database?
>
Any existing connections to that DB will be closed.  Any attempts to use
such a connection will result in an exception.

I wrote a small test that exercised this, and Derby leaks memory like
crazy (almost 1MB per database) if this happens.  Here is the code for
my test:

public class TestDerbyBug {
    public static void main(String[] args) {
        try {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
            while (true) {
                final String dir = System.getProperty("java.io.tmpdir")
                        + java.io.File.separator
                        + "TestDerbyBug-" + Math.abs(new java.util.Random().nextInt());
                final String url = "jdbc:derby:" + dir;
                java.sql.Connection conn =
                        java.sql.DriverManager.getConnection(url + ";create=true");
                // oops, forgot to close conn before shutting down...
                try {
                    java.sql.DriverManager.getConnection(url + ";shutdown=true");
                    assert false; // shouldn't get here
                }
                catch (java.sql.SQLException e) {
                    // ignore, shutdown is supposed to throw
                }
                // finally got around to remembering to close conn
                conn.close();
            }
        }
        catch (Throwable t) {
            t.printStackTrace();
        }
    }
}
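One note on the catch block above: it swallows every SQLException, but Derby does distinguish a clean shutdown by SQLState - 08006 for a single-database shutdown, XJ015 for a full engine shutdown. A small sketch of checking for that (the helper name is mine, and the SQLException here is constructed by hand rather than thrown by Derby, so this runs without the Derby jar):

```java
import java.sql.SQLException;

public class ShutdownCheck {
    // Returns true if the exception signals a clean Derby shutdown:
    // SQLState 08006 (single database) or XJ015 (whole engine).
    public static boolean isCleanShutdown(SQLException e) {
        String state = e.getSQLState();
        return "08006".equals(state) || "XJ015".equals(state);
    }

    public static void main(String[] args) {
        // Simulated exception, as Derby would raise on ;shutdown=true
        SQLException simulated = new SQLException("Database shutdown", "08006");
        System.out.println(isCleanShutdown(simulated)); // prints "true"
    }
}
```

With a check like that, the catch block could rethrow anything that is not a clean shutdown instead of ignoring everything.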

If you run that class with a max heap of 16MB (-Xmx16m), it will only
be able to create about 18-20 databases before running out of memory.
If you move the 'conn.close()' line above the inner try statement, no
memory is leaked and it can run for at least 15 minutes (I got bored
and killed it at that point).  I tried this with 10.1.2.1, 10.1.3.1,
and 10.2.1.3-beta and got the same results on all releases.  I am
running it under JRE 1.5.0_08.

There is a similar bug on file, DERBY-23, which is reported as fixed in
10.2, if that helps at all.

Obviously, the easiest fix is to make sure to clean up my connections
before shutting down.  :)

Hi Jason -
From what I see, Derby is acting as expected. What you are observing is Derby's memory allocation process. Derby will continue to allocate memory until it reaches the limits set by its various configuration properties (most notably the page cache size). At that point Derby's memory footprint becomes stable. For most schemas the default settings allow Derby to function well within the default JVM heap size of 64 MB. As you found in your test, if you reduce the amount of memory allocated to the JVM you will eventually get an OutOfMemoryError. If you need Derby to run with a smaller footprint and do not need the better performance of a larger page cache, reduce the page cache size - the minimum setting is 40 pages.
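For reference, the property in question is derby.storage.pageCacheSize, which can be set in derby.properties or as a system property before the embedded driver is loaded. A minimal sketch (the class name is mine, and the Derby driver load is only indicated in a comment, so this runs without the Derby jar):

```java
public class SmallFootprint {
    public static void main(String[] args) {
        // 40 pages is the documented minimum for Derby's page cache;
        // the default is 1000 pages.
        System.setProperty("derby.storage.pageCacheSize", "40");
        System.out.println(System.getProperty("derby.storage.pageCacheSize")); // prints "40"
        // Load org.apache.derby.jdbc.EmbeddedDriver only after this point -
        // Derby reads the property when the engine boots.
    }
}
```

Setting it before the driver class is loaded matters, because the page cache is sized once at engine boot.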
