[ http://issues.apache.org/jira/browse/DERBY-1713?page=all ]
Mike Matrigali updated DERBY-1713:
----------------------------------
Summary: Memory does not return to the system after shutting down Derby
10.2.1.0, following an out-of-memory event (was: Memory does not return to the
system after shutting down Derby 10.2.1.0)
Can you post a reproducible case?
If not, can you describe your db in more detail? For instance, what is the DDL
for the table in question? If the table contains variable-length fields, what
is the average size of those fields? Are you using anything special to set the
page size for the table? What I am trying to figure out is the page size of
your table; Derby may set it to 8k or 32k depending on the DDL.
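If the page size had been set explicitly, it would typically be done through the derby.storage.pageSize property before the table is created. A minimal sketch (the property name is the documented Derby one; the 8k value here is purely illustrative):

```properties
# derby.properties (illustrative sketch): request 8k pages for newly
# created tables instead of letting Derby choose a size based on the DDL.
derby.storage.pageSize=8192
```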
If it is 32k, and the default cache size is 1000 pages, it is very likely that
32 MB is not enough memory. Each entry in the cache starts with a copy of the
page, and each also has a variable amount of memory associated with it that
grows linearly with the number of rows on the page.
The first thing I always suggest with these kinds of out-of-memory cases is to
bump the page cache size down and see if the problem reproduces. In the past I
have tested with a 40-page cache. The page cache will grow greedily up to
whatever size you have set (default 1000 pages), and there is no code to
remove pages in response to memory pressure (I don't think there is reasonable
Java support to do so). It should release the memory when the database is
successfully shut down with the shutdown=true approach you are using, but as
has been commented, once the JVM runs out of memory I have found that nothing
afterward is guaranteed to work.
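As a concrete sketch of the suggestion above (using the standard derby.storage.pageCacheSize property; 40 is just the small test value mentioned):

```properties
# derby.properties (sketch): shrink the page cache from the default
# 1000 pages to 40 to see if the out-of-memory problem still reproduces.
derby.storage.pageCacheSize=40
```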
You say you are running with a memory profiler; can you post the top
memory-consuming classes and how much memory each is using? That should make
it clear what the issue is.
> Memory does not return to the system after shutting down Derby 10.2.1.0,
> following an out-of-memory event
> ------------------------------------------------------------------------------------------------------
>
> Key: DERBY-1713
> URL: http://issues.apache.org/jira/browse/DERBY-1713
> Project: Derby
> Issue Type: Bug
> Components: Performance
> Affects Versions: 10.2.0.0
> Environment: Windows XP SP2
> JRE 1.6 beta2
> Reporter: Ibrahim
> Priority: Critical
>
> I face a problem when querying large tables. I run the SQL below, and it gets
> stuck in this query and throws a java heap OutOfMemory exception:
> SELECT count(*) FROM <table> WHERE .....
> N.B. I'm using a database of more than 90,000 records (40 MB). I set the
> maxHeap to 32 MB (all other settings have their default values: pageCache,
> etc.).
> Then, I shut down the database, but the memory is not returned to the system
> (it remains at 32 MB [the max threshold]). I tried increasing the maxHeap to
> 128 MB, in which case it works and releases the memory, so I think the problem
> is that once it reaches the maxHeap it no longer seems to respond to anything,
> such as closing the connection or shutting down the database. How can I get
> rid of this? (I cannot keep increasing the maxHeap as the database grows; I
> want it to throw an exception and release the memory.)
> I'm using this to shut down the DB:
> try {
>     DriverManager.getConnection("jdbc:derby:;shutdown=true");
> } catch (SQLException ex) {
>     System.err.println("SQLException: " + ex.getMessage());
> }
> I'm using a memory Profiler for monitoring the memory usage.
> Thanks in advance.
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira