[ http://issues.apache.org/jira/browse/DERBY-1713?page=all ]

John H. Embretsen updated DERBY-1713:
-------------------------------------

    Component/s: SQL
        Urgency: Normal  (was: Urgent)
       Priority: Major  (was: Critical)

This issue is seen when executing certain kinds of queries against large 
tables. Two work-arounds have been identified and tested:

1. Increase max heap size (e.g. from 32 MB to 38 MB, -Xmx38M)
2. Reduce the size of Derby's page cache. With Derby1713repro.java, the 
following command succeeded:

> java -Xmx32m -Dderby.storage.pageCacheSize=800 Derby1713repro haveDB

whereas it fails with the default pageCacheSize value (1000). Varying 
derby.storage.pageSize on its own did not seem to have a noticeable effect on 
memory usage.
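
For reference, the page cache property can also be set from application code, as 
long as it is set before the Derby engine boots. The following is a minimal 
sketch, not a tested fix; the class name PageCacheWorkaround is made up, and the 
database name "haveDB" and use of the embedded driver are assumptions based on 
the repro command above:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class PageCacheWorkaround {
        public static void main(String[] args) throws Exception {
            // derby.storage.pageCacheSize is read once when the engine
            // boots, so it must be set before the first connection.
            System.setProperty("derby.storage.pageCacheSize", "800");
            // Explicit driver load for pre-JDBC-4 JREs.
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection("jdbc:derby:haveDB");
            // ... run the large query here ...
            conn.close();
        }
    }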

See Derby's tuning guide for more information about these properties ( 
http://db.apache.org/derby/docs/dev/tuning/ ).

The reported problem of not being able to free memory following an 
OutOfMemoryError is not a bug. Quoting the Java Language Specification, 3rd 
edition, Chapter 11 ( 
http://java.sun.com/docs/books/jls/third_edition/html/exceptions.html ):

"The class Error and its subclasses are exceptions from which ordinary programs 
are not ordinarily expected to recover. (...) recovery is typically not 
possible".

If neither of the reported work-arounds is acceptable, the programmer probably 
needs to handle such situations preemptively. There may be some useful advice 
in the Tuning Guide (see 
http://db.apache.org/derby/docs/dev/tuning/ctundepth10525.html ).
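
To illustrate what "preemptively" could mean in practice, here is a hypothetical 
sketch (the class and method names and the 8 MB threshold are made up for 
illustration and would need tuning): check heap headroom before issuing a 
known-expensive query, instead of trying to recover after an OutOfMemoryError:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class PreemptiveCheck {
        // Fail fast before the query rather than attempting to recover
        // from an OutOfMemoryError afterwards.
        static void checkHeadroom(long requiredBytes) {
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();
            long headroom = rt.maxMemory() - used;
            if (headroom < requiredBytes) {
                throw new IllegalStateException(
                    "Only " + headroom + " bytes of heap headroom left");
            }
        }

        static long countRows(Connection conn, String sql) throws SQLException {
            checkHeadroom(8 * 1024 * 1024); // arbitrary 8 MB threshold
            Statement stmt = conn.createStatement();
            try {
                ResultSet rs = stmt.executeQuery(sql);
                rs.next();
                return rs.getLong(1);
            } finally {
                stmt.close();
            }
        }
    }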

Lowering Priority (Major; work-arounds exist) and Urgency (Normal; "If this 
issue scratches the itch of any particular developer, then they should help to 
solve it and provide a patch").


> Memory does not return to the system after shutting down Derby 10.2.1.0, 
> following an out of memory event
> ------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-1713
>                 URL: http://issues.apache.org/jira/browse/DERBY-1713
>             Project: Derby
>          Issue Type: Bug
>          Components: Performance, SQL
>    Affects Versions: 10.2.1.0
>         Environment: Windows XP SP2
> JRE 1.6 beta2
>            Reporter: Ibrahim
>         Attachments: Derby1713repro.java, test.zip, Test1.java
>
>
> I face a problem when querying large tables. I run the SQL below, and it gets 
> stuck on this query and throws a Java heap OutOfMemory exception:
> SELECT count(*) FROM <table> WHERE .....
> N.B. I'm using a database of more than 90,000 records (40 MB). I set the 
> maxHeap to 32 MB (all other settings have their default values: pageCache ... 
> etc.).
> Then I shut down the database, but the memory is not returned to the system 
> (it remains at 32 MB [the max threshold]). When I increase the maxHeap to 
> 128 MB it works and releases the memory, so I think the problem is that once 
> the heap reaches maxHeap, Derby seems to stop responding to anything, such as 
> closing the connection or shutting down the database. How can I get rid of 
> this? (I cannot keep increasing the maxHeap as the database grows; I want to 
> throw an exception and release the memory.)
> I'm using this to shut down the DB:
> try {
>     DriverManager.getConnection("jdbc:derby:;shutdown=true");
> } catch (SQLException ex) {
>     System.err.println("SQLException: " + ex.getMessage());
> }
> I'm using a memory profiler to monitor memory usage.
> Thanks in advance.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira