[ https://issues.apache.org/jira/browse/DERBY-4620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13059920#comment-13059920 ]

Knut Anders Hatlen commented on DERBY-4620:
-------------------------------------------

Increasing the amount of memory doesn't seem to have much effect on its own, 
since it's the derby.language.maxMemoryPerTable property that decides whether 
hash joins can be chosen. Increasing that setting to 10000 and the heap to 
512MB made the query complete in six minutes.
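For anyone who wants to try the same workaround: the property can go in 
derby.properties in the system directory, or be passed as a JVM system 
property before booting the engine. Something like this (the values below are 
just the ones I used here; the setting is in kilobytes, so 10000 is roughly 
10 MB per table):

  # derby.properties (in derby.system.home)
  derby.language.maxMemoryPerTable=10000

combined with a larger heap on the JVM command line, e.g. -Xmx512m.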

I wonder what a reasonable default value would be. 1% of the total heap size, 
to match the heuristic used by BackingStoreHashtable when no limit has been 
specified? That's probably a bit low. But the limit has to be multiplied by 
the number of join operators in the query and by the number of concurrent 
users to see the potential peak memory usage, and it's probably better to 
execute queries successfully than to have them fail with an OOME, even if 
that means performance may be suboptimal without some tuning.
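To make the 1% heuristic concrete, here's a back-of-the-envelope sketch 
(plain Java, not actual Derby code; the variable names are only illustrative):

  // Rough sketch of a 1%-of-heap default for maxMemoryPerTable.
  // Runtime.maxMemory() returns the heap limit (-Xmx) in bytes.
  long maxHeapBytes = Runtime.getRuntime().maxMemory();
  long perTableKB = maxHeapBytes / 100 / 1024; // 1% of the heap, in KB
  // With -Xmx512m this gives about 5242 KB (~5 MB) per table,
  // roughly half of the 10000 KB setting that worked above.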

> Query optimizer causes OOM error on a complex query
> ---------------------------------------------------
>
>                 Key: DERBY-4620
>                 URL: https://issues.apache.org/jira/browse/DERBY-4620
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.5.3.0
>         Environment: java version "1.6.0_17"
> Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
> Java HotSpot(TM) Client VM (build 14.3-b01, mixed mode, sharing)
> Linux rocio.xxx 2.6.24-27-generic #1 SMP Fri Mar 12 01:10:31 UTC 2010 i686 
> GNU/Linux
>            Reporter: Chris Wilson
>              Labels: derby_triage10_8
>         Attachments: estimate-sizes.diff, query-plan.txt
>
>
> I have a query that generates about 2,000 rows in a summary table. It runs 
> fast enough (~30 seconds) on MySQL. The same query on Derby runs for about 10 
> minutes and then fails with an OutOfMemoryError.
> I have created a test case to reproduce the problem. It's not minimal, 
> because it relies on a rather large dataset, and it's not trivial, but I 
> don't mind doing a bit of work to trim it down if someone can point me in 
> the right direction.
> You can check out the test case from our Subversion server here:
>   http://rita.wfplogistics.org/svn/trackrita/rita/doc/derby-oom-slow-query
> which includes a pre-built Derby database in "testdb.derby". If this database 
> is deleted, "test.sh" will recreate it, but that takes about 10-15 minutes.
> Just modify the script "test.sh" to point to your Derby libraries, and run it 
> (or just execute the commands in "movement_complete.sql") to demonstrate the 
> problem. You can view the source of that file online here:
>   http://rita.wfplogistics.org/trac/browser/rita/conf/movement_complete.sql
> The first "insert into movement_complete" (starting around line 4) takes 
> about 15 seconds to complete and inserts 5890 rows. The second, starting 
> around line 54, does not complete in reasonable time on Derby. On MySQL, it 
> runs in 28 seconds and inserts 2038 rows. On Derby, after 10 minutes I get:
> JAVA ERROR: java.lang.OutOfMemoryError: Java heap space
> ij> ERROR X0Y67: Cannot issue rollback in a nested connection when there 
> is a pending operation in the parent connection.
> (process exits)
> It does not output the query plan in this case.
> Following the suggestion of Bryan Pendleton, I tried increasing the JVM 
> memory limit from the default to 1024m, and this allows the query to finish 
> executing quite quickly. I guess that means the optimiser simply needs a 
> lot of memory to optimise the query, and with the default settings it 
> spends forever in GC before finally hitting OOM and giving up.
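For reference, the heap increase described at the end of the report amounts 
to something like the following when running the script through ij (the jar 
paths are assumptions; adjust them to the local installation):

  # run the reproduction script with a 1 GB heap
  java -Xmx1024m -cp derby.jar:derbytools.jar \
      org.apache.derby.tools.ij movement_complete.sql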

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
