On 08.02.10 15:25, Ronald Rudy wrote:
Version is 10.4.2.0, and no I don't - I am actually trying the GROUP BY (etc.) on another column that is indexed, to see if there's any memory benefit. The table isn't optimized for the statement below because in production it will never be executed like that; I'm just trying to monitor some counts while the app is running.
Hi Ronald,
If you can easily control this process, can you run it with
-XX:+HeapDumpOnOutOfMemoryError and post a heap dump summary?
(I don't remember off the top of my head how to extract a histogram from a binary heap dump, but I guess jmap or a heap viewer can do it.)
Are there many Derby classes in the top 10?
You might only see a bunch of arrays up there, in which case more
investigation may be required to understand what's going on.
Regards,
--
Kristian
On Feb 8, 2010, at 9:21:29 AM, Peter Ondruška wrote:
What Derby version is this? Do you have an index on groupCol?
On Mon, Feb 8, 2010 at 3:11 PM, Ronald Rudy <[email protected]> wrote:
I've gotten to the point where I seem to be able to frequently induce
OutOfMemoryErrors when executing a statement like this one:
SELECT groupCol, count(*) FROM myTable GROUP BY groupCol;
The total number of records doesn't seem terribly large (about 400,000), though while I'm executing the above statement there are likely some inserts and deletes happening concurrently. I don't need anything transactional here; I'm really just monitoring the table from a separate network connection.
In our app the database runs in its own JSVC daemon as a NetworkServer with
512MB allocated to it. Page cache size is set to 2000. Page size should be
Derby default (4096 I think).
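For the record, the launch boils down to roughly this (a sketch; jar locations are placeholders, and in reality the JVM is wrapped in JSVC):

```shell
# 512MB heap; page cache and page size set via Derby system properties
java -Xmx512m \
     -Dderby.storage.pageCacheSize=2000 \
     -Dderby.storage.pageSize=4096 \
     -cp /opt/derby/lib/derbynet.jar:/opt/derby/lib/derby.jar \
     org.apache.derby.drda.NetworkServerControl start
```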
I cannot be 100% sure that the above statement is specifically what causes the error, but I can say that I hadn't seen memory errors until I started monitoring this table frequently, and even then only once the table started getting a bit bigger (not big by any stretch, but > 300k rows).
The table itself is not really big - there are 7 varchar columns along with an
ID column.
In this production-level situation stability is more important than performance. I'd like Derby configured so that no matter what SQL is requested, the daemon itself is not impacted.
I'd also like to build in some handling so that if/when an OutOfMemoryError is encountered, the daemon automatically restarts - for this I might need a script hooked up via the -XX:OnOutOfMemoryError JVM option, unless there's some way to handle it internally?
-Ron