[jira] [Commented] (DERBY-3009) Out of memory error when creating a very large table

2011-03-29 Thread Knut Anders Hatlen (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13012855#comment-13012855
 ] 

Knut Anders Hatlen commented on DERBY-3009:
---

Thanks for testing the patch, Dag and Lily.

Lily, if you run the new test case outside of the lowmem suite, you need to 
invoke JUnit with -Xmx16M in order to see the OOME. The ant target junit-lowmem 
adds that JVM argument automatically.
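The two invocations, side by side (only the ant target name comes from the comment above; the direct java command line is a sketch, and the runner and suite class names are assumptions):

```shell
# the ant target passes -Xmx16M to the test JVM automatically
ant junit-lowmem

# running the test outside the suite, the heap cap must be given by hand
# (runner and test class names are illustrative)
java -Xmx16M junit.textui.TestRunner org.apache.derbyTesting.functionTests.tests.memory._Suite
```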

> Out of memory error when creating a very large table
> 
>
> Key: DERBY-3009
> URL: https://issues.apache.org/jira/browse/DERBY-3009
> Project: Derby
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 10.2.2.0
> Environment: Win XP Pro
>Reporter: Nick Williamson
>Assignee: Knut Anders Hatlen
>  Labels: derby_triage10_5_2
> Fix For: 10.8.0.0
>
> Attachments: DERBY-3009.zip, derby-3009-1a.diff, derby-3009-1b.diff
>
>
> When creating an extremely large table (c.50 indexes, c.50 FK constraints), 
> IJ crashes with an out of memory error. The table can be created successfully 
> if it is done in stages, each one in a different IJ session.
> From Kristian Waagan:
> "With default settings on my machine, I also get the OOME.
> A brief investigation revealed a few things:
>   1) The OOME occurs during constraint additions (with ALTER TABLE ... 
> ADD CONSTRAINT). I could observe this by monitoring the heap usage.
>   2) The complete script can be run by increasing the heap size. I tried with 
> 256 MB, but the monitoring showed usage peaked at around 150 MB.
>   3) The stack traces produced when the OOME occurs vary (as could be 
> expected).
>   4) It is the Derby engine that "produces" the OOME, not ij (i.e. when I ran 
> with the network server, the server failed).
> I have not had time to examine the heap content, but I do believe there is a 
> bug in Derby. It seems some resource is not freed after use."
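A minimal sketch of the shape of DDL that triggers the failure, for readers without the attached repro: one referenced table plus one wide table that then has many FK constraints added one by one. Table, column, and constraint names here are illustrative, not from DERBY-3009.zip.

```java
import java.util.ArrayList;
import java.util.List;

// Builds a DDL script of the problematic shape: a parent table, a wide
// child table, then many ALTER TABLE ... ADD CONSTRAINT statements --
// the phase where the heap monitoring described above showed the OOME.
public class Ddl3009Sketch {
    public static List<String> buildScript(int constraints) {
        List<String> ddl = new ArrayList<>();
        ddl.add("CREATE TABLE parent (id INT PRIMARY KEY)");

        // one FK column per planned constraint
        StringBuilder big = new StringBuilder("CREATE TABLE big (id INT PRIMARY KEY");
        for (int i = 0; i < constraints; i++) {
            big.append(", p").append(i).append(" INT");
        }
        ddl.add(big.append(")").toString());

        // each constraint is its own ALTER TABLE statement, as in the report
        for (int i = 0; i < constraints; i++) {
            ddl.add("ALTER TABLE big ADD CONSTRAINT fk" + i
                    + " FOREIGN KEY (p" + i + ") REFERENCES parent (id)");
        }
        return ddl;
    }

    public static void main(String[] args) {
        for (String stmt : buildScript(50)) {
            System.out.println(stmt + ";");
        }
    }
}
```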

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (DERBY-3009) Out of memory error when creating a very large table

2011-03-29 Thread Lily Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13012770#comment-13012770
 ] 

Lily Wei commented on DERBY-3009:
-

+1 for the fix. When I first ran the new lowmem test case without the code 
changes on Windows 7, it did not fail for me. I had to increase the settings to 
tables=200 and columns=200 and run it with -Dderby.storage.pageCacheSize=4M; 
with those settings the test failed. However, I couldn't run the ant lowmem 
target. Either way, the test verified that the fix addresses the memory leak. 
Thanks, Knut.
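One note on the property used above: derby.storage.pageCacheSize is specified as a number of pages, not a byte size (Derby's default is 1000 pages), so a suffix like 4M may not do what it suggests. A derby.properties sketch with an explicit page count (the value 4000 is purely illustrative):

```properties
# Number of pages held in the page cache (a count, not bytes).
# Derby's default is 1000 pages; 4000 here is illustrative only.
derby.storage.pageCacheSize=4000
```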



[jira] [Commented] (DERBY-3009) Out of memory error when creating a very large table

2011-03-29 Thread Dag H. Wanvik (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13012720#comment-13012720
 ] 

Dag H. Wanvik commented on DERBY-3009:
--

Fix looks safe to me. I verified that without the code changes, the new lowmem 
test case fails. 
+1



[jira] Commented: (DERBY-3009) Out of memory error when creating a very large table

2011-02-17 Thread Brett Wooldridge (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996229#comment-12996229
 ] 

Brett Wooldridge commented on DERBY-3009:
-

I am seeing the same issue as Nathan Boy commented on in May 2009.  Performing 
an ALTER TABLE with ADD CONSTRAINT on a large table causes an OOM error.  
Unless this issue is fixed, it will be impossible for us to upgrade customers' 
databases in the field.






[jira] Commented: (DERBY-3009) Out of memory error when creating a very large table

2011-01-03 Thread Christian Stolz (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12976743#action_12976743
 ] 

Christian Stolz commented on DERBY-3009:


Also seeing what appears to be this problem on Derby 10.7.1.1. Any progress on 
this issue?




[jira] Commented: (DERBY-3009) Out of memory error when creating a very large table

2009-05-13 Thread Tim Halloran (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12709054#action_12709054
 ] 

Tim Halloran commented on DERBY-3009:
-

Also seeing what appears to be this problem on Derby 10.5.1.1 (on Windows I 
can't get the heap up above 1.5 GB-ish)




[jira] Commented: (DERBY-3009) Out of memory error when creating a very large table

2009-05-13 Thread Nathan Boy (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12708918#action_12708918
 ] 

Nathan Boy commented on DERBY-3009:
---

I have this problem as well, in Derby 10.5.1.1 and 10.4.2.0.  I have a schema 
of about 16 tables, a few of which generally have 200-300k rows.  All of the 
data is loaded in, and then foreign key constraints are added one by one.  I 
tried committing between each ADD CONSTRAINT statement, but this did not seem 
to have any effect.  I still run out of memory even when the heap size is set 
to 2-3 GB.  I have not tried shutting down and restarting the database between 
each ADD CONSTRAINT statement; I will try this next.
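The staged workaround discussed in this thread can be sketched as a small helper: split the DDL into fixed-size batches so that each batch runs in its own session, with a fresh connection or a full engine restart in between. The batch size and the JDBC wiring in the comments are assumptions, not part of the original report.

```java
import java.util.ArrayList;
import java.util.List;

// Partitions a list of DDL statements into batches, so each batch can be
// executed in a separate session (working around the per-engine memory growth).
public class StagedDdl {
    public static List<List<String>> batches(List<String> ddl, int batchSize) {
        List<List<String>> out = new ArrayList<>();
        for (int i = 0; i < ddl.size(); i += batchSize) {
            out.add(new ArrayList<>(
                    ddl.subList(i, Math.min(i + batchSize, ddl.size()))));
        }
        return out;
    }

    // Per batch (sketch; requires derby.jar on the classpath):
    //   try (Connection c = DriverManager.getConnection("jdbc:derby:mydb")) {
    //       ... execute the batch ...
    //   }
    //   // To restart the engine between batches:
    //   // DriverManager.getConnection("jdbc:derby:;shutdown=true") -- this
    //   // throws an SQLException on a successful shutdown, which the caller
    //   // must catch and ignore.
}
```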




[jira] Commented: (DERBY-3009) Out of memory error when creating a very large table

2008-04-08 Thread Gerald Khin (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12586691#action_12586691
 ] 

Gerald Khin commented on DERBY-3009:


I just came across the same effect as Andrew Brown mentioned in his comment: A 
couple of ALTER TABLE ADD CONSTRAINT FOREIGN KEY statements on a couple of 
non-empty tables (the biggest of about 150k rows) caused an OOME. And the OOME 
doesn't happen when restarting the database process before each ALTER TABLE 
statement.

But this effect doesn't seem to match the description of this JIRA entry. So my 
question is: is this effect already known and tracked in a separate JIRA entry?





[jira] Commented: (DERBY-3009) Out of memory error when creating a very large table

2007-11-27 Thread Nick Williamson (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12545913
 ] 

Nick Williamson commented on DERBY-3009:


Thanks for that, Andrew. It must be a different thing in my case, as my
tables are empty; it's a big (500+ tables) schema with one particularly
large and complex table, and it seems to be too much for Derby to handle
in one go. I guess there's some generic weakness in Derby that holds
onto resources when processing DDL, and any number of things can trigger
it...

Regards,
Nick

 






[jira] Commented: (DERBY-3009) Out of memory error when creating a very large table

2007-11-14 Thread Andrew Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12542608
 ] 

Andrew Brown commented on DERBY-3009:
-

I have run across this issue also, and I narrowed it down to index building.  I 
have a few tables with 10-30 million records, and when building indexes on them 
I can watch the memory usage grow until it crashes.  The only way around this 
for me has been to restart Derby after each index is built (not really a good 
thing in a production environment).  This happens both in IJ and through a Java 
application.  We changed the Java code to commit and close the connection 
after each index build, and that seemed to help, but the problem would still 
manifest itself.

I played around with some of the memory settings; setting 
derby.storage.pageSize to a larger value than the default just caused the 
crash to happen faster.  I am not a Java developer, but it seems that once an 
index is built, the buffer still has a lock on the memory and it isn't being 
freed.




[jira] Commented: (DERBY-3009) Out of memory error when creating a very large table

2007-08-15 Thread Daniel John Debrunner (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12520030
 ] 

Daniel John Debrunner commented on DERBY-3009:
--

May not be related, but in addressing DERBY-3008 it seemed that even when a 
table is being created, CreateIndexConstantAction scans the table and sets up a 
sorter to populate the index. Of course, when creating a table there are no 
rows, so it's wasted work. In this case, with up to 100 indexes, maybe that's a 
factor.
I didn't fully verify that CreateTableConstantAction does scan the table and 
sort when creating the table; it was just a quick glance at the code.
