[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-30 Thread Knut Anders Hatlen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12431510 ] 

Knut Anders Hatlen commented on DERBY-418:
--

Derbyall ran cleanly. However, when running the repro for DERBY-1142 (without 
rs.close()), I noticed that markUnused() was often called twice for the same 
activation. This happens because both EmbedPreparedStatement and EmbedResultSet 
call markUnused() in their finalizers. I think it would be a good idea to wrap 
the body of markUnused() with if (isInUse()) { ... } to avoid false calls to 
lcc.notifyUnusedActivation().
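
A minimal sketch of that guard, assuming the existing inUse field and the
isInUse()/markUnused() methods in BaseActivation (this illustrates the
suggestion, it is not the committed change):

    public final void markUnused()
    {
        if (isInUse()) {
            inUse = false;
            lcc.notifyUnusedActivation();
        }
    }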

 outofmemory error when running large query in autocommit=false mode
 ---

 Key: DERBY-418
 URL: http://issues.apache.org/jira/browse/DERBY-418
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.1.1.0
 Environment: I can reproduce this problem on Win2k/ T42 laptop. 
 jdk141. 
Reporter: Sunitha Kambhampati
 Assigned To: Mayuresh Nirhali
 Fix For: 10.2.1.0

 Attachments: AutoCommitTest.java, derby418_v1.diff, derby418_v2.diff, 
 derby418_v3.diff, derby418_v4.diff, derby418_v5.diff


 On the derby-user list, Chris reported this problem with his application and 
 also a repro for the problem.  I am logging the JIRA issue so it doesn't get 
 lost in all the mail.  
 (http://www.mail-archive.com/derby-user@db.apache.org/msg01258.html)
 --from Chris's post--
 I'm running a set of ~5 queries on one table, using inserts and updates, 
 and I want to be able to roll them back, so I turned off autocommit using 
 setAutoCommit(false).  As the update runs, the memory used by the JVM 
 increases continually until I get the following exception about 20% of the 
 way through:
 ERROR 40XT0: An internal error was identified by RawStore module.
at 
 org.apache.derby.iapi.error.StandardException.newException(StandardException.java)
at org.apache.derby.impl.store.raw.xact.Xact.setActiveState(Xact.java)
at org.apache.derby.impl.store.raw.xact.Xact.openContainer(Xact.java)
at 
 org.apache.derby.impl.store.access.conglomerate.OpenConglomerate.init(OpenConglomerate.java)
at org.apache.derby.impl.store.access.heap.Heap.open(Heap.java)
at 
 org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(RAMTransaction.java)
at 
 org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(RAMTransaction.java)
at 
 org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getDescriptorViaIndex(DataDictionaryImpl.java)
at 
 org.apache.derby.impl.sql.catalog.DataDictionaryImpl.locateSchemaRow(DataDictionaryImpl.java)
at 
 org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getSchemaDescriptor(DataDictionaryImpl.java)
at 
 org.apache.derby.impl.sql.compile.QueryTreeNode.getSchemaDescriptor(QueryTreeNode.java)
at 
 org.apache.derby.impl.sql.compile.QueryTreeNode.getSchemaDescriptor(QueryTreeNode.java)
at 
 org.apache.derby.impl.sql.compile.FromBaseTable.bindTableDescriptor(FromBaseTable.java)
at 
 org.apache.derby.impl.sql.compile.FromBaseTable.bindNonVTITables(FromBaseTable.java)
at org.apache.derby.impl.sql.compile.FromList.bindTables(FromList.java)
at 
 org.apache.derby.impl.sql.compile.SelectNode.bindNonVTITables(SelectNode.java)
at 
 org.apache.derby.impl.sql.compile.DMLStatementNode.bindTables(DMLStatementNode.java)
at 
 org.apache.derby.impl.sql.compile.DMLStatementNode.bind(DMLStatementNode.java)
at 
 org.apache.derby.impl.sql.compile.ReadCursorNode.bind(ReadCursorNode.java)
at org.apache.derby.impl.sql.compile.CursorNode.bind(CursorNode.java)
at 
 org.apache.derby.impl.sql.GenericStatement.prepMinion(GenericStatement.java)
at 
 org.apache.derby.impl.sql.GenericStatement.prepare(GenericStatement.java)
at 
 org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(GenericLanguageConnectionContext.java)
at org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java)
at 
 org.apache.derby.impl.jdbc.EmbedStatement.executeQuery(EmbedStatement.java)
at vi.hotspot.database.DataInterface._query(DataInterface.java:181)
at vi.hotspot.database.DataInterface.query(DataInterface.java:160)
at 
 vi.hotspot.database.UpdateManager.updatePartialTable(UpdateManager.java:518)
at 
 vi.hotspot.database.UpdateManager.updatePartialTables(UpdateManager.java:619)
at vi.hotspot.database.UpdateManager.run(UpdateManager.java:924)
at java.lang.Thread.run(Thread.java:534)
 vi.hotspot.exception.ServerTransactionException
at 
 vi.hotspot.database.UpdateManager.updatePartialTable(UpdateManager.java:555)
at 
 vi.hotspot.database.UpdateManager.updatePartialTables(UpdateManager.java:619)
at vi.hotspot.database.UpdateManager.run(UpdateManager.java:924)
at java.lang.Thread.run(Thread.java:534)
 Derby is running in standalone mode. 

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-29 Thread Knut Anders Hatlen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12431148 ] 

Knut Anders Hatlen commented on DERBY-418:
--

Thanks Mayuresh. I think the patch looks good. If there are no more comments, I 
will run some tests and commit the patch tomorrow.


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-25 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12430512 ] 

Daniel John Debrunner commented on DERBY-418:
-

I think your loop that closes activations needs to include logic similar to 
this, from cleanupOnError:

// it may be the case that a reset()/close() ends up closing
// one or more activations, leaving our index beyond
// the end of the array
if (i >= acts.size())
    continue;


Also, there is a chance that you clear the unusedActs flag when you shouldn't. 
Basically, if more activations are marked unused while you are closing the 
others, you will clear the flag when it should be kept set. The clearing of 
the flag should happen before you process the loop.

E.g.

if (unusedActs)
{
    unusedActs = false;
    // close loop
}

I would also remove the check for size() > 20; I think it's of no value once 
you have the unusedActs flag.
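
Put together, the two points above might look roughly like this (a sketch only;
the unusedActs flag, the acts vector and isInUse()/close() come from the
discussion in this thread, the rest of the shape is assumed):

    if (unusedActs) {
        // clear the flag first, so activations marked unused while we are
        // closing the others are picked up on the next pass
        unusedActs = false;

        for (int i = acts.size() - 1; i >= 0; i--) {
            // a close() may cascade and shrink the vector under us
            if (i >= acts.size())
                continue;

            Activation a = (Activation) acts.elementAt(i);
            if (!a.isInUse())
                a.close();
        }
    }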


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-25 Thread Knut Anders Hatlen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12430517 ] 

Knut Anders Hatlen commented on DERBY-418:
--

Thank you for addressing my comments, Mayuresh!

I think you misunderstood what I meant about the indentation (you're not the 
first one to have problems with this). The files you have changed use tabs to 
indent the code. The people who created the files used editors with tab stops 
at four characters. Since your editor has tab stops at eight characters, and 
you use spaces as the indentation character, you end up indenting the code 
twice as much as the surrounding code.

For instance, BaseActivation.java has this diff:

@@ -787,6 +791,7 @@
 	public final void markUnused()
 	{
 		inUse = false;
+                lcc.notifyUnusedActivation();
 	}

The line with inUse = false is indented with two tabs, which should be 
equivalent to eight spaces. However, the line with lcc.notifyUnusedActivation() 
is indented with 16 spaces, whereas it should have been on the same indentation 
level as the previous line. This seems to be the case for all the changed lines.

Solution: Change the tab settings in your editor and reduce the indentation 
level of the changed code. Since the surrounding code uses tabs, it would be 
preferable that the changes used tabs as well, but if you choose to use spaces, 
one tab character should match four spaces.

I'm sorry that I didn't comment on this before, but mixing of tabs and spaces 
with incorrect tab width is one of those issues that you don't see when you 
inspect the patch, only after you have applied the patch and read the source in 
an editor.


Re: [jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-23 Thread Knut Anders Hatlen
Mayuresh Nirhali [EMAIL PROTECTED] writes:

 Hi Knut,

 Thanks for the review and trying out my changes.

 Could you tell me what memory size you ran the repros with?
 I see the OOME for -Xmx16m and do not see it for 64m.

You're right. I used 64 MB, and it did fail eventually.

However, I had the repro for DERBY-1142 (with rs.close() removed)
running for more than four hours with 16 MB heap, and I didn't get any
out of memory error.

 My impression is
 that there is still some leak happening, though I agree that this is
 an edge case.

It could be that DERBY-418 has more leaks than DERBY-1142. Since 1142
has a simpler repro, it could be a good idea to focus on that one
first.

One more comment on your changes:

Since it might be perfectly legitimate to have more than 20 open
activations, and since there might be a delay between the result set
becoming unreachable and it being finalized, the activations vector
may be scanned in vain. I think the other solution Dan suggested
(setting a flag in GenericLanguageConnectionContext from
BaseActivation.markUnused()) is better, since the vector is only
scanned when there is an unused activation.
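
A rough sketch of that flag-based idea (markUnused(), notifyUnusedActivation()
and GenericLanguageConnectionContext appear in this thread; the flag name and
the helper that closes unused activations are assumptions, and exception
handling from close() is left out):

    // BaseActivation: just record the fact, no scanning here
    public final void markUnused()
    {
        inUse = false;
        lcc.notifyUnusedActivation();
    }

    // GenericLanguageConnectionContext
    private boolean unusedActs = false;

    public void notifyUnusedActivation()
    {
        unusedActs = true;
    }

    public void addActivation(Activation a)
    {
        acts.addElement(a);
        if (unusedActs) {
            unusedActs = false;
            // hypothetical helper that walks acts and closes activations
            // with isInUse() == false; see Dan's loop earlier in the thread
            closeUnusedActivations();
        }
    }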

Thanks,
-- 
Knut Anders


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-23 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12430028 ] 

Daniel John Debrunner commented on DERBY-418:
-

The change ignores the exception:

+} catch(StandardException e) {
+StandardException.plainWrapException(e);
+}

That code just creates an exception (in the call to plainWrapException)  and 
then throws it away.




[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-22 Thread Mayuresh Nirhali (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429668 ] 

Mayuresh Nirhali commented on DERBY-418:


I tried the approach suggested by Dan in DERBY-1142, and my version of the 
change looks like this:

In GenericLanguageConnectionContext,

public void addActivation(Activation a) {
    acts.addElement(a);

+   try {
+       if (acts.size() > 20) {
+           resetActivations(false);
+       }
+   } catch (StandardException e) {
+   }
+
    if (SanityManager.DEBUG) {

        if (SanityManager.DEBUG_ON("memoryLeakTrace")) {

            if (acts.size() > 20)
                System.out.println(
                    "memoryLeakTrace:GenericLanguageContext:activations " + acts.size());
        }
    }
}


I thought it would be better to use the method resetActivations as it is meant 
to do the desired work for us. (Let me know if there are any negative 
implications of using this method here.)

This approach did not solve the OOME but deferred it for some time. The effect 
of this change is equal to the effect of having singleUseActivation.close in 
the finalize method of EmbedRS, as suggested earlier.

Further, and importantly, I found that there still exist some Activation 
objects with inUse=true. I guess there is a synchronization issue here due to 
which EmbedRS.finalize is not called for all the objects, and thus they are 
not marked unused. On several runs, the worst case I found is that for 100 such 
select queries about 10% of the activation objects still stay on the heap. So, 
as the finalize method is not executed for many objects, the activation objects 
are not freed. Here, it is irrelevant whether they are closed directly from 
within the finalize method or indirectly by first just marking them unused.

I ran these tests on both jdk1.6 and jdk1.5.

Any thoughts on the behaviour of the finalizer thread in this context?
Any input on synchronizing the markUnused operation?


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-22 Thread Knut Anders Hatlen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429674 ] 

Knut Anders Hatlen commented on DERBY-418:
--

Hi Mayuresh,

I'm afraid your fix has some unwanted consequences. First of all, the 
OutOfMemoryError is still there. Secondly, resetActivations() will also reset 
activations that are still open and in use. Take for instance this code:

c.setAutoCommit(false);
c.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT);
Statement[] ss = new Statement[21];
ResultSet[] rss = new ResultSet[ss.length];
for (int i = 0; i < ss.length; i++) {
    ss[i] = c.createStatement();
    rss[i] = ss[i].executeQuery("values 1");
}
rss[0].next();

Without your changes, it runs successfully. With your fix applied, 
rss[0].next() fails with java.sql.SQLException: ResultSet not open. Operation 
'next' not permitted. Verify that autocommit is OFF.


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-22 Thread Mayuresh Nirhali (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429709 ] 

Mayuresh Nirhali commented on DERBY-418:


Thanks Knut for pointing that out.

I did not mean to provide a patch for this JIRA, but just wanted to try out a 
suggested approach. 
I have corrected this error, and the trial piece of code now looks like this:

public void addActivation(Activation a) {
    acts.addElement(a);
    try {
        if (acts.size() > 20) {
            for (int i = acts.size() - 1; i >= 0; i--) {
                Activation a1 = (Activation) acts.elementAt(i);

                if (!a1.isInUse())
                {
                    a1.close();
                }

            } // for
        }
    } catch (StandardException e) {
    }

    if (SanityManager.DEBUG) {

        if (SanityManager.DEBUG_ON("memoryLeakTrace")) {

            if (acts.size() > 20)
                System.out.println(
                    "memoryLeakTrace:GenericLanguageContext:activations " + acts.size());
        }
    }
}

I have tried your code snippet with this along with the repro and did not see 
any regressions.

However, the OOME is still seen. As mentioned in my earlier comment, the main 
issue seems to be with the finalizer thread behaviour.
I guess there are two parts to this issue (see the sketch after this list):
1. Make sure the inUse field is synchronized.
2. Close activation objects either directly from the EmbedRS.finalize method or 
indirectly, as per the change in the GenLCC.addActivation method.
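
For the direct variant of point 2, a sketch of what the finalizer could do
(EmbedResultSet and singleUseActivation are mentioned elsewhere in this thread;
the exact shape of the method is an assumption):

    // In EmbedResultSet
    protected void finalize() throws Throwable
    {
        try {
            // the "direct" variant: close the single-use activation here;
            // the indirect variant would only call markUnused() instead
            if (singleUseActivation != null) {
                singleUseActivation.close();
            }
        } finally {
            super.finalize();
        }
    }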


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-22 Thread Knut Anders Hatlen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429712 ] 

Knut Anders Hatlen commented on DERBY-418:
--

Did you try to declare BaseActivation.inUse as volatile?


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-22 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429721 ] 

Daniel John Debrunner commented on DERBY-418:
-

Any change must not ignore exceptions thrown during an activation.close().

} catch(StandardException e) {
} 

-1 on any change that includes that.


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-22 Thread Knut Anders Hatlen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429725 ] 

Knut Anders Hatlen commented on DERBY-418:
--

Mayuresh, with your latest changes, I don't see the memory leak any more 
(neither DERBY-418 nor DERBY-1142).


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429361 ] 

Andreas Korneliussen commented on DERBY-418:


I think that the lack of synchronization when calling 
singleUseActivation.markUnused() may mean that other threads do not see that 
the inUse field has been modified in the activation. Since it is the finalizer 
thread which calls this, the thread which checks the inUse flag in order to 
close the activation may not see that it has been modified to false.
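
One way to make that write visible across threads, as asked about elsewhere in
this thread, would be to declare the field volatile (a sketch of the idea, not
necessarily the committed fix; the exact declaration in BaseActivation is an
assumption):

    // In BaseActivation
    private volatile boolean inUse = true;

    public final void markUnused()
    {
        inUse = false;      // now visible to any thread that later reads inUse
    }

    public final boolean isInUse()
    {
        return inUse;
    }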


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429386 ] 

Andreas Korneliussen commented on DERBY-418:


The finalizer thread is, as per the javadoc, guaranteed not to hold any
user-visible synchronization locks when finalize is invoked. If the finalizer
synchronizes in the same order as the other methods, it should not introduce
any deadlocks (you may get lock waiting, but not a deadlock).
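
For illustration only, here is a minimal, hypothetical Java sketch (not Derby
code) of a finalizer that takes the same lock ordering as the normal close
path, so it can only wait for a lock, never deadlock against close():

    // Both close() and finalize() take the connection-level monitor first
    // and the per-object monitor second, so the ordering stays consistent.
    class ConnectionLock {}

    class Resource {
        private final ConnectionLock connLock;
        private boolean closed;

        Resource(ConnectionLock connLock) {
            this.connLock = connLock;
        }

        void close() {
            synchronized (connLock) {     // outer lock, always acquired first
                synchronized (this) {     // inner lock, always acquired second
                    closed = true;
                }
            }
        }

        protected void finalize() {
            close();                      // same lock ordering as any other caller
        }
    }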

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429430 ] 

Daniel John Debrunner commented on DERBY-418:
-

I don't think there is any guarantee about the order of finalization, so it is
impossible to guarantee that the finalizer threads synchronize in the same
order as the main code path; therefore, obtaining synchronization within a
finalize method is subject to deadlocks.

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429438 ] 

Andreas Korneliussen commented on DERBY-418:


I am not sure what you mean.

Could you give an example?

I agree that it is possible to get a deadlock in the finalize() method if it
obtains its locks in a different order than another user thread or another
finalizer thread. If it obtains the locks in the same order, the condition for
a deadlock is not there. If there are multiple objects being garbage collected,
sharing mutexes, they need to acquire the locks in the same order, or else you
may get a deadlock.

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429442 ] 

Daniel John Debrunner commented on DERBY-418:
-

The alternate way to phrase it is:

Can you guarantee no deadlocks from using synchronization in the finalize
method?

Closing an activation in the finalize() method is going to hit a lot of
synchronized blocks; there just seems to be endless potential for deadlocks,
especially when the connection may be in any state: not being used, rolling
back, preparing a statement, executing a statement, being shut down by an
unrelated error, being finalized itself, etc. This is then further complicated
by the fact that the order of execution within finalization is not guaranteed:
if I have a chain of objects A, B, C, the order in which they are finalized is
not guaranteed; it could be BAC, could be CBA, could be by different finalizer
threads.

So given that, how easy is it to show the lock ordering is consistent?

If you prove there is no possible chance of deadlocks, how is this enforced
going forward? Does every change to any synchronized block have to go through a
full review of its possible implications with a single finalize method in
EmbedResultSet?

Maybe it's just me, but having no synchronization in a finalize() method seems
to be a nice simple rule that doesn't require a proof for ongoing changes in
the engine. I suggested some possible changes to fix this in DERBY-1142 and
noted them above; are there any issues with those?

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429473 ] 

Andreas Korneliussen commented on DERBY-418:


Assuming the Derby embedded JDBC driver is thread-safe, it should be safe for a
result set to call its own close() method in its finalizer. If you get a
deadlock in the finalizer, it proves that it is also possible to write a
multithreaded program which gets deadlocks when calling ResultSet.close, and
Derby then is not really MT-safe.

If this happens, I think it is better to fix the embedded driver so that it
really becomes MT-safe than to avoid synchronization in the finalizer threads.

As for the suggested change in DERBY-1142, I would note that if there is no
synchronization in the finalizer, and you set a field in an object from it,
there is no guarantee that other threads will see the modification of the field
(unless, I think, it is volatile). However, I think Mayuresh has been working
on this issue, so maybe he has tried that approach?

Another approach could be to use a WeakHashMap to store the activations in,
instead of a Vector. If all objects referring to the activation have been
garbage collected, the activation will be removed from the WeakHashMap.
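
A rough sketch of that last suggestion, using a placeholder registry class
rather than Derby's actual Activation and Vector types:

    import java.util.Collections;
    import java.util.Map;
    import java.util.WeakHashMap;

    // Hypothetical sketch: activations tracked via weak references, so an
    // activation that nothing else references any longer simply disappears
    // from the map after garbage collection.
    class ActivationRegistry {
        private final Map activations =
            Collections.synchronizedMap(new WeakHashMap());

        void register(Object activation) {
            activations.put(activation, Boolean.TRUE);
        }

        int size() {
            return activations.size();   // shrinks as unreferenced entries are cleared
        }
    }

(As a later comment points out, this only drops the reference; it does not run
any cleanup code against the activation.)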


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429483 ] 

Daniel John Debrunner commented on DERBY-418:
-

I'd assumed that you were going to go in at a lower level than
ResultSet.close(). ResultSet.close() is thread-safe, but think of the
consequences of calling ResultSet.close() from a finalizer.

 - The finalizer thread will block until the application thread executing a
JDBC method on another object in the same connection completes the call. Worst
case, the application thread is executing a query that takes several hours.
Now the synchronization in the finalize method has resulted in the finalizer
thread stalling for a few hours, which is not good for the VM.

 - What if the garbage collector is running because the application thread,
using the same connection, requires memory? Now you have created a deadlock:
the application thread is waiting on the JVM to get memory, and the VM's
finalizer thread is waiting on the application thread to complete its JDBC
method call.

Not sure of the guarantee of the unsynchronized field being set. Are you saying
that field will never be seen as set, or that the setting may not be seen for
some time?

The activation being in the Vector is not an issue, so replacing it with a
WeakHashMap doesn't help. Code needs to be executed against the activation in
order to clean it up; just letting it be garbage collected is not enough. The
current scheme was set up to be simple (only a single thread active in a
connection at a time) and to avoid the JVM deadlocks that were seen when the
activations were cleaned up directly in the finalizer thread (since that breaks
the simple rule of only a single thread active in a connection).

Re: [jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Knut Anders Hatlen
Andreas Korneliussen (JIRA) <derby-dev@db.apache.org> writes:

 Assuming the Derby embedded JDBC driver is thread-safe, it should be
 safe for a result set to call its own close() method in its
 finalizer. If you get a dead-lock in the finalizer, it proves that
 it is also possible to write a multithreaded program which gets
 deadlocks when calling ResultSet.close, and derby then is not really
 MT-safe.

 If this happens, I think it is better to fix the embedded driver so
 that it really becomes MT-safe, than avoiding synchronization in the
 finalizer threads.

There are calls to System.runFinalization() in many places in the
code. If the thread that invokes System.runFinalization() has obtained
the same mutex that a finalize method requires, there can indeed be
deadlocks. (But I guess you will argue that we shouldn't call
runFinalization() explicitly.)

 As for the suggested change in 1142, I would note that If there is
 no synchronization in the finalizer, and you set a field in a object
 from it, there is no guarantee that other threads will see the
 modification of the field (unless, I think, it is
 volatile). However, I think Mayuresh has been working on this issue,
 so maybe he has tried that approach?

FWIW, I tried that approach in my sandbox (setting a volatile variable
in GenericLanguageConnectionContext from BaseActivation.markUnused())
and I didn't see the OutOfMemoryError any more. It's a very simple
fix, and I don't think the overhead is noticeable, so I'd recommend
that we go for that solution.

-- 
Knut Anders
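
A minimal sketch of that approach, with made-up class and method names standing
in for GenericLanguageConnectionContext and BaseActivation:

    // Hypothetical sketch: the finalizer only writes a volatile flag; the
    // thread that owns the connection reads it later and does the cleanup.
    class LanguageConnectionContextSketch {
        // volatile so the write from the finalizer thread is guaranteed to be
        // visible to the connection's own thread without any synchronization
        private volatile boolean unusedActivationPresent;

        void notifyUnusedActivation() {
            unusedActivationPresent = true;
        }

        boolean anyActivationUnused() {
            return unusedActivationPresent;
        }
    }

    class ActivationSketch {
        private final LanguageConnectionContextSketch lcc;

        ActivationSketch(LanguageConnectionContextSketch lcc) {
            this.lcc = lcc;
        }

        // Safe to call from finalize(): no locks, just a volatile write.
        void markUnused() {
            lcc.notifyUnusedActivation();
        }
    }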


Re: [jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen
Knut Anders Hatlen wrote:
 Andreas Korneliussen (JIRA) <derby-dev@db.apache.org> writes:
 
 Assuming the Derby embedded JDBC driver is thread-safe, it should be
 safe for a result set to call its own close() method in its
 finalizer. If you get a dead-lock in the finalizer, it proves that
 it is also possible to write a multithreaded program which gets
 deadlocks when calling ResultSet.close, and derby then is not really
 MT-safe.

 If this happens, I think it is better to fix the embedded driver so
 that it really becomes MT-safe, than avoiding synchronization in the
 finalizer threads.
 
 There are calls to System.runFinalization() many places in the
 code. If the thread that invokes System.runFinalization() has obtained
 the same mutex that a finalize method requires, there can indeed be
 deadlocks. (But I guess you will argue that we shouldn't call
 runFinalization() explicitly.)
 
Yes.

 As for the suggested change in 1142, I would note that If there is
 no synchronization in the finalizer, and you set a field in a object
 from it, there is no guarantee that other threads will see the
 modification of the field (unless, I think, it is
 volatile). However, I think Mayuresh has been working on this issue,
 so maybe he has tried that approach?
 
 FWIW, I tried that approach in my sandbox (setting a volatile variable
 in GenericLanguageConnectionContext from BaseActivation.markUnused())
 and I didn't see the OutOfMemoryError any more. It's a very simple
 fix, and I don't think the overhead is noticeable, so I'd recommend
 that we go for that solution.
 
Seems like a good idea.

Andreas




Re: [jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-21 Thread Andreas Korneliussen

 Not sure of the guarantee of the unsynchronized field being set. Are you 
 saying that field will never be seen as set, or that the setting may not be 
 seen for some time?
 

It may be seen; however, it may also never be seen.

Andreas


[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-20 Thread Mayuresh Nirhali (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429259 ] 

Mayuresh Nirhali commented on DERBY-418:


I tried modifying the finalize method of EmbedResultSet to close the
single-use activation, and that seems to have worked, for good reason: when GC
cleans up ResultSet objects, the associated activation objects are closed, thus
freeing up some more memory. But for the reproducible case this is not the
complete solution; the change defers the leak for a long period but does not
completely fix it.

I tried the same fix for the repro in DERBY-1142 (without the resultset.close
statement) and did not see any leak there.

I think this can be a good fix for now, until the extreme case is tracked down.

I will run derbyall with this fix and produce a patch soon.
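
Roughly, the idea tried here looks like the following sketch (simplified,
hypothetical types; not the actual patch):

    // Hypothetical stand-ins for Derby's EmbedResultSet and Activation.
    interface Activation {
        boolean isClosed();
        void close() throws Exception;
    }

    class ResultSetSketch {
        private final Activation singleUseActivation;

        ResultSetSketch(Activation singleUseActivation) {
            this.singleUseActivation = singleUseActivation;
        }

        // Close the single-use activation when the result set itself is
        // garbage collected, instead of waiting for an explicit rs.close().
        protected void finalize() throws Throwable {
            try {
                if (singleUseActivation != null && !singleUseActivation.isClosed()) {
                    singleUseActivation.close();
                }
            } catch (Exception e) {
                // nothing useful can be reported from a finalizer
            } finally {
                super.finalize();
            }
        }
    }

(The next comment explains why closing the activation directly on the finalizer
thread is problematic.)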

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-20 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12429264 ] 

Daniel John Debrunner commented on DERBY-418:
-

Closing the activation directly in the finalize method will lead to problems.
Closing the activation requires obtaining synchronization, which is not
recommended for the finalizer thread; it can lead to deadlocks with the thread
that owns the connection and its synchronization. That's why the activations
are closed indirectly through the set-inactive-state mechanism.
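
A hedged sketch of what such an indirect scheme can look like (placeholder
names, not the actual Derby classes): the finalizer only marks an activation
inactive, and the connection's own thread sweeps and closes at a safe point.

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    class ConnectionSketch {
        private final List activations = new ArrayList();

        void register(ActivationSketch a) {
            activations.add(a);
        }

        // Runs on the application thread that owns the connection, e.g. just
        // before the next statement executes, so the finalizer thread never
        // has to take any engine locks itself.
        void closeUnusedActivations() {
            for (Iterator it = activations.iterator(); it.hasNext();) {
                ActivationSketch a = (ActivationSketch) it.next();
                if (!a.isInUse()) {
                    a.close();
                    it.remove();
                }
            }
        }
    }

    class ActivationSketch {
        private volatile boolean inUse = true;
        void markUnused() { inUse = false; }   // the only thing finalize() does
        boolean isInUse() { return inUse; }
        void close() { /* release locks, result sets, generated code, ... */ }
    }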

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-14 Thread Mayuresh Nirhali (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12427891 ] 

Mayuresh Nirhali commented on DERBY-418:


Thanks to Andreas and Oystein for pointing out another possibility: when the
statement objects are GC'd, the activations associated with them are not
closed, which could be causing the leak.

I shall work further on this and have assigned this issue to myself.

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-14 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12427899 ] 

Daniel John Debrunner commented on DERBY-418:
-

I think this is the same case as described in DERBY-1142; a comment towards the
end has a couple of possible solutions.
http://issues.apache.org/jira/browse/DERBY-1142#action_12420461

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-08-11 Thread Mayuresh Nirhali (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12427480 ] 

Mayuresh Nirhali commented on DERBY-418:


I used jhat (JDK 1.6) to inspect the Java heap and observed that a large
number of generated objects of type CursorActivation exist on the heap. The
expected behavior here is to have no more than one Activation while a statement
is being executed. Further investigation showed that these generated objects
are not getting closed only for SELECT-type statements.
The GenericPreparedStatement fires the activation.execute method and gets the
result in a ResultSet object. It then checks whether the singleExecution flag
for the activation is true and the returned ResultSet is closed. If both are
true, the activation is closed. For the SELECT statements in the reproducible
test case, the returned ResultSet is not closed and hence the activation is NOT
closed, thus causing the memory leak.

The code for activation.execute is dynamically generated. I need some help to
understand how this can be debugged.
Before that, I would like to understand whether the returned ResultSet should
really be closed for a SELECT statement? If yes, when is the activation
expected to be closed in such a case?
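
The decision described above can be pictured with a small hypothetical sketch
(names made up, not the real GenericPreparedStatement code):

    // If the activation is single-use and its result set has already been
    // closed, the activation can be closed right after execution; a SELECT
    // whose ResultSet is never closed therefore keeps its activation alive.
    interface ResultSetSketch {
        boolean isClosed();
    }

    class ActivationSketch {
        private final boolean singleExecution;

        ActivationSketch(boolean singleExecution) {
            this.singleExecution = singleExecution;
        }

        boolean isSingleExecution() { return singleExecution; }

        void close() { /* release the activation's resources */ }
    }

    class PreparedStatementSketch {
        void maybeCloseActivation(ActivationSketch activation, ResultSetSketch rs) {
            if (activation.isSingleExecution() && rs.isClosed()) {
                activation.close();
            }
            // otherwise the activation stays open until something else
            // (rs.close(), statement close, or GC-driven cleanup) ends it
        }
    }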

[jira] Commented: (DERBY-418) outofmemory error when running large query in autocommit=false mode

2006-07-20 Thread Mayuresh Nirhali (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-418?page=comments#action_12422407 ] 

Mayuresh Nirhali commented on DERBY-418:


I was able to reproduce this issue on my Sol10 x86 jdk1.5 platform.

I ran with following option,
-verbosegc (java option)
--Xmx16m (preferred this option to avoid large output file n reproduce the 
error in less time for the analysis)
-Dderby.debug.true=memoryLeakTrace

Following are the observations from the output of this command,

memoryLeakTrace:GenericLanguageContext:activations 1917 (memoryLeak is reported after activations grow beyond 20)
memoryLeakTrace:BasicDependencyManager:table 1966 (memoryLeak is reported after the table size grows beyond 100)
memoryLeakTrace:BasicDependencyManager:deps 1966 (memoryLeak is reported after dependencies grow beyond 50)
memoryLeakTrace:LockSet: 1908 (memory leak reported after lock objects grow beyond 1000)

I have not gotten any further on this issue and do not plan to work on it 
until at least the week after next.
HTH!
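
For reference, here is a minimal sketch of the usage pattern from the original report, 
i.e. many inserts and updates on one connection with autocommit off and no commit until 
the end. It is not the attached AutoCommitTest.java; the database name, table, column 
sizes and loop count are made up, and the statements are intentionally left unclosed 
for the garbage collector, which is the situation in which per-statement activations 
(and the dependency and lock entries counted above) pile up until commit or rollback:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class AutoCommitRepro {
        public static void main(String[] args) throws Exception {
            // Load the embedded driver explicitly (needed on JDK 1.4/1.5).
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn =
                DriverManager.getConnection("jdbc:derby:reproDB;create=true");
            conn.setAutoCommit(false);   // the mode in which the growth is seen

            Statement ddl = conn.createStatement();
            ddl.execute("CREATE TABLE t (id INT PRIMARY KEY, val VARCHAR(32))");
            ddl.close();

            // One long transaction: many inserts and updates, no intermediate
            // commit, so the whole batch can still be rolled back afterwards.
            for (int i = 0; i < 50000; i++) {
                PreparedStatement ins =
                    conn.prepareStatement("INSERT INTO t VALUES (?, ?)");
                ins.setInt(1, i);
                ins.setString(2, "v" + i);
                ins.executeUpdate();

                PreparedStatement upd =
                    conn.prepareStatement("UPDATE t SET val = ? WHERE id = ?");
                upd.setString(1, "w" + i);
                upd.setInt(2, i);
                upd.executeUpdate();
                // Statements are deliberately not closed; they are left to the
                // garbage collector, as a long-running batch job might do.
            }

            conn.rollback();
            conn.close();
        }
    }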

 outofmemory error when running large query in autocommit=false mode
 ---

 Key: DERBY-418
 URL: http://issues.apache.org/jira/browse/DERBY-418
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.1.1.0
 Environment: I can reproduce this problem on Win2k/ T42 laptop. 
 jdk141. 
Reporter: Sunitha Kambhampati
 Fix For: 10.2.0.0

 Attachments: AutoCommitTest.java


 On the derby-user list,  Chris reported this problem with his application and 
 also a repro for the problem.  I am logging the jira issue so it doesn't get 
 lost in all the mail.  
 (http://www.mail-archive.com/derby-user@db.apache.org/msg01258.html)
 --from Chris's post--
 I'm running a set of ~5 queries on one table, using inserts and updates,
 and I want to be able to roll them back, so I turned off autocommit using
 setAutoCommit(false).  As the update runs, the memory used by the JVM
 increases continually until I get the following exception about 20% of the
 way through:
 ERROR 40XT0: An internal error was identified by RawStore module.
    at org.apache.derby.iapi.error.StandardException.newException(StandardException.java)
    at org.apache.derby.impl.store.raw.xact.Xact.setActiveState(Xact.java)
    at org.apache.derby.impl.store.raw.xact.Xact.openContainer(Xact.java)
    at org.apache.derby.impl.store.access.conglomerate.OpenConglomerate.init(OpenConglomerate.java)
    at org.apache.derby.impl.store.access.heap.Heap.open(Heap.java)
    at org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(RAMTransaction.java)
    at org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(RAMTransaction.java)
    at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getDescriptorViaIndex(DataDictionaryImpl.java)
    at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.locateSchemaRow(DataDictionaryImpl.java)
    at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getSchemaDescriptor(DataDictionaryImpl.java)
    at org.apache.derby.impl.sql.compile.QueryTreeNode.getSchemaDescriptor(QueryTreeNode.java)
    at org.apache.derby.impl.sql.compile.QueryTreeNode.getSchemaDescriptor(QueryTreeNode.java)
    at org.apache.derby.impl.sql.compile.FromBaseTable.bindTableDescriptor(FromBaseTable.java)
    at org.apache.derby.impl.sql.compile.FromBaseTable.bindNonVTITables(FromBaseTable.java)
    at org.apache.derby.impl.sql.compile.FromList.bindTables(FromList.java)
    at org.apache.derby.impl.sql.compile.SelectNode.bindNonVTITables(SelectNode.java)
    at org.apache.derby.impl.sql.compile.DMLStatementNode.bindTables(DMLStatementNode.java)
    at org.apache.derby.impl.sql.compile.DMLStatementNode.bind(DMLStatementNode.java)
    at org.apache.derby.impl.sql.compile.ReadCursorNode.bind(ReadCursorNode.java)
    at org.apache.derby.impl.sql.compile.CursorNode.bind(CursorNode.java)
    at org.apache.derby.impl.sql.GenericStatement.prepMinion(GenericStatement.java)
    at org.apache.derby.impl.sql.GenericStatement.prepare(GenericStatement.java)
    at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(GenericLanguageConnectionContext.java)
    at org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java)
    at org.apache.derby.impl.jdbc.EmbedStatement.executeQuery(EmbedStatement.java)
    at vi.hotspot.database.DataInterface._query(DataInterface.java:181)
    at vi.hotspot.database.DataInterface.query(DataInterface.java:160)
    at vi.hotspot.database.UpdateManager.updatePartialTable(UpdateManager.java:518)
    at vi.hotspot.database.UpdateManager.updatePartialTables(UpdateManager.java:619)
    at vi.hotspot.database.UpdateManager.run(UpdateManager.java:924)
    at java.lang.Thread.run(Thread.java:534)