Re: [External] : Re: JDK 22 RDP2 & Deprecate sun.misc.Unsafe Memory-Access Methods…

2024-01-29 Thread David Delabassee
Thanks for the update, Rick!

--David

From: Rick Hillegas 
Date: Sunday, 28 January 2024 at 14:32
To: derby-dev@db.apache.org , David Delabassee 

Subject: [External] : Re: JDK 22 RDP2 & Deprecate sun.misc.Unsafe Memory-Access 
Methods…
Thanks, David. Derby found no problems with build 22-ea+33-2356. See
https://issues.apache.org/jira/browse/DERBY-7159


Re: JDK 22 RDP2 & Deprecate sun.misc.Unsafe Memory-Access Methods…

2024-01-28 Thread Rick Hillegas
Thanks, David. Derby found no problems with build 22-ea+33-2356. See 
https://issues.apache.org/jira/browse/DERBY-7159



JDK 22 RDP2 & Deprecate sun.misc.Unsafe Memory-Access Methods…

2024-01-26 Thread David Delabassee
Greetings!

We are starting 2024 with JDK 22, which has just entered Rampdown Phase 2 [1]. 
With the initial JDK 22 Release Candidates now less than two weeks away (Feb. 
8th) [2], it is time to shift our attention to JDK 23.

After multiple rounds of incubation and preview, the Foreign Function & Memory 
API becomes standard and permanent in JDK 22. Setting its 'Function' angle 
aside, this API also offers a standard and safe way to access off-heap memory. 
That brings us to the heads-up below, 'Deprecate the Memory-Access Methods in 
sun.misc.Unsafe for Removal in a Future Release': developers still using 
sun.misc.Unsafe to access memory are strongly encouraged to start preparing 
their plans to migrate away from those unsafe methods.

[1] https://mail.openjdk.org/pipermail/jdk-dev/2024-January/008675.html
[2] https://openjdk.org/projects/jdk/22/


## Heads-up: Deprecate the Memory-Access Methods in sun.misc.Unsafe for Removal 
in a Future Release

The effort focused on enforcing the integrity of the Java platform [3] 
continues! The next phase in that long but important initiative will most 
likely target the sun.misc.Unsafe methods used for accessing memory. Those 
methods alone account for 79 of the 87 methods in sun.misc.Unsafe!

This draft JEP [4] outlines the plan to deprecate the memory-access methods in 
sun.misc.Unsafe for removal, the reasons for doing so, and the standard 
alternatives. As the draft suggests, the first step will be to deprecate all 
memory-access methods (on-heap, off-heap, and bimodal) for removal. This will 
cause compile-time deprecation warnings for code that refers to those methods, 
alerting library developers to their forthcoming removal. In addition, a new 
command-line option will allow users to receive runtime warnings when those 
methods are used; this option will help users assess whether their codebase 
uses those unsafe APIs to access memory. Other tools, such as JFR and 
jdeprscan, can also be used to detect the use of those deprecated APIs.

Library developers are strongly encouraged to migrate from sun.misc.Unsafe to 
the supported replacements so that applications can move smoothly to modern 
JDKs. The first step is to investigate whether, how, and where sun.misc.Unsafe 
methods are used to access memory. A small migration sketch follows the 
references below.

[3] https://openjdk.org/jeps/8305968
[4] https://openjdk.org/jeps/8323072
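
To make the migration concrete, here is a minimal, hedged sketch (not taken from 
the draft JEP; class and variable names are illustrative) of how an off-heap 
write/read that today might go through sun.misc.Unsafe::allocateMemory, putLong 
and getLong can be expressed with the now-standard FFM API:

{noformat}
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public final class OffHeapCounter {

    public static void main(String[] args) {
        // A confined arena frees the off-heap memory deterministically
        // when the try-with-resources block exits (no Unsafe::freeMemory).
        try (Arena arena = Arena.ofConfined()) {
            // Roughly the role of Unsafe.allocateMemory(8).
            MemorySegment counter = arena.allocate(ValueLayout.JAVA_LONG);

            // Roughly the role of Unsafe.putLong(address, 42L) and
            // Unsafe.getLong(address), but bounds- and lifetime-checked.
            counter.set(ValueLayout.JAVA_LONG, 0, 42L);
            long value = counter.get(ValueLayout.JAVA_LONG, 0);

            System.out.println("off-heap value = " + value);
        }
    }
}
{noformat}

For on-heap array access, the VarHandle API (for example 
MethodHandles::byteArrayViewVarHandle, discussed in the next heads-up) is the 
corresponding supported replacement.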


## Heads-up: Java Array Element Alignment - Weakening of Some Methods 
Guarantees ?

Some methods make promises about Java array element alignment that are too 
strong. There are ongoing discussions about changing the implementation (and the 
specification) of `MethodHandles::byteArrayViewVarHandle`, 
`MethodHandles::byteBufferViewVarHandle`, `ByteBuffer::alignedSlice`, and 
`ByteBuffer::alignmentOffset` to weaken the guarantees they make about the 
alignment of Java array elements, in order to bring them in line with the 
guarantees that an arbitrary JVM implementation can actually make.

For more details, make sure to check JDK-8320247 [5] and the related PR [6], but 
in a nutshell, the new behaviour would be as follows (a short illustration 
follows the references below):
- The `VarHandle` returned by `MethodHandles::byteArrayViewVarHandle` would 
only support `get` and `set` methods, and all other access methods would throw 
an exception.
- The `VarHandle` returned by `MethodHandles::byteBufferViewVarHandle` would only 
support the `get` and `set` access methods when a heap buffer is used; all other 
access methods would throw an exception when used with a heap buffer. Direct 
byte buffers would continue to work the same way.
- The `ByteBuffer::alignmentOffset` and `ByteBuffer::alignedSlice` methods 
would throw an exception if the buffer is a heap buffer, and the given 
`unitSize` is greater than 1.

If you have relevant feedback about this potential change, please make sure to 
bring it to the core-libs-dev mailing list [7], or comment on the PR [6].

[5] https://bugs.openjdk.org/browse/JDK-8320247
[6] https://github.com/openjdk/jdk/pull/16681
[7] https://mail.openjdk.org/pipermail/core-libs-dev/


## JDK 22 Early-Access Builds

JDK 22 Early-Access builds 33 are now available [8], and are provided under the 
GNU General Public License v2, with the Classpath Exception. The Release Notes 
[9] and the javadocs [10] are also available.

### Changes in recent JDK 22 builds that may be of interest:

- JDK-8320597: RSA signature verification fails on signed data that does not 
encode params correctly [Reported by Apache POI]
- JDK-8322214: Return value of XMLInputFactory.getProperty() changed from 
boolean to String in JDK 22 early access builds [Reported by Apache POI]
- JDK-8322725: (tz) Update Timezone Data to 2023d
- JDK-8321480: ISO 4217 Amendment 176 Update
- JDK-8314468: Improve Compiler loops
- JDK-8314295: Enhance verification of verifier
- JDK-8316976: Improve signature handling
- JDK-8317547: Enhance TLS connection support
- JDK-8318971: Better Error Handling for Jar Tool 

Re: Multiple connections to the same in-memory database

2015-11-04 Thread Bryan Pendleton

What's the definition of "does not work successfully"?

I'm not aware of such a limitation, but I imagine some classloading issues can
break this (depending on your expectations).


Thanks Kristian!

I realize the question was pretty vague.

Here's what little context there was:

http://stackoverflow.com/questions/33168323/spring-and-h2-or-derby-multiple-transactions/33187908#33187908

Perhaps that user will provide more details, and we
could have a more specific discussion.

Thanks for the help, and for the suggestions about classloader issues.

bryan




Multiple connections to the same in-memory database

2015-11-03 Thread Bryan Pendleton
In a separate forum, someone has claimed that multiple
connections to the same database from the same hosting
JVM does not work successfully if the database is an
in-memory database (uses jdbc:derby:memory:whatever).

Is this true?

If it is true, is it documented anywhere?

I didn't believe that such a restriction existed, but I
couldn't find any particular documentation that seemed
to say clearly one way or another.

thanks,

bryan
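
For reference, a minimal sketch of the scenario being asked about: two 
connections to the same in-memory database from one JVM. The class, table and 
database names are illustrative, and it assumes the embedded Derby driver is on 
the classpath; it is not a statement about where such a setup might break.

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class InMemoryTwoConnections {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:derby:memory:whatever";

        // The first connection creates the in-memory database.
        try (Connection c1 = DriverManager.getConnection(url + ";create=true");
             // A second connection to the same database from the same JVM.
             Connection c2 = DriverManager.getConnection(url)) {

            try (Statement s1 = c1.createStatement()) {
                s1.executeUpdate("CREATE TABLE t(id INT)");
                s1.executeUpdate("INSERT INTO t VALUES (1)"); // autocommit is on
            }

            // The second connection should see the committed row.
            try (Statement s2 = c2.createStatement();
                 ResultSet rs = s2.executeQuery("SELECT COUNT(*) FROM t")) {
                rs.next();
                System.out.println("rows visible from c2: " + rs.getInt(1));
            }
        }
    }
}
{noformat}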


Re: Multiple connections to the same in-memory database

2015-11-03 Thread Kristian Waagan
Hi Bryan,

What's the definition of "does not work successfully"?

I'm not aware of such a limitation, but I imagine some classloading issues
can break this (depending on your expectations).

Regards,
-- 
Kristian

Tue, 3 Nov 2015, 20:02 Bryan Pendleton <bpendleton.de...@gmail.com> wrote:

> In a separate forum, someone has claimed that multiple
> connections to the same database from the same hosting
> JVM does not work successfully if the database is an
> in-memory database (uses jdbc:derby:memory:whatever).
>
> Is this true?
>
> If it is true, is it documented anywhere?
>
> I didn't believe that such a restriction existed, but I
> couldn't find any particular documentation that seemed
> to say clearly one way or another.
>
> thanks,
>
> bryan
>
>


Re: Having an out of memory condition on Derby 10.9.2.0

2015-05-19 Thread Bryan Pendleton

On 5/19/2015 11:00 AM, Bergquist, Brett wrote:

I found how to export to a text file.   So here is one of the areas that the 
leak detection pointed out:




'- buf, lock java.io.StringWriter @ 0xfffd86693928  |  48  |  706,545,952
   '- stringWriter org.apache.derby.iapi.error.ErrorStringBuilder @ 0xfffd86693900
      '- errorStringBuilder org.apache.derby.iapi.services.context.ContextManager @ 0xfffd86637d80


I'm not totally sure how to read this, but I think it's saying that the
errorStringBuilder field in the ContextManager has 706 meg of string data in it.

From what I can tell by looking at ContextManager.java, the
errorStringBuilder field is only supposed to contain data if
a fairly severe error has occurred, and Derby is trying to
write that error to derby.log:

/**
 * clean up error and print it to derby.log. Extended diagnosis including
 * thread dump to derby.log and javaDump if available, will print if the
 * database is active and severity is greater than or equals to
 * SESSTION_SEVERITY or as configured by
 * derby.stream.error.extendedDiagSeverityLevel property
 *
 * @param error the error we want to clean up
 * @param diagActive
 *true if extended diagnostics should be considered,
 *false not interested of extended diagnostic information
 * @return true if the context manager is shutdown, false otherwise.
 */
public boolean cleanupOnError(Throwable error, boolean diagActive)

Now, 706 meg is a LOT of string data, but it's still a long ways from 8GB.

Did you end up getting any information in your derby.log?

I'm totally speculating, but it's possible that the ContextManager heap usage
is a secondary problem, and that the real problem is:

what caused Derby to invoke ContextManager.cleanupOnError() in the 
first place?

Hope this is of some help and gives you some things to pursue.

thanks,

bryan



Having an out of memory condition on Derby 10.9.2.0

2015-05-19 Thread Bergquist, Brett
I am having an out of memory condition on Derby 10.9.2.0 in our production 
environment.   Derby is given 8G maximum heap but I am able to get a heap dump 
periodically and analyze it via Eclipse Memory Analyzer.

I see a couple of strange things and was wondering if I can attach a screen 
shot here or not?   Maybe someone might see something obvious.

Brett


Canoga Perkins
20600 Prairie Street
Chatsworth, CA 91311
(818) 718-6300

This e-mail and any attached document(s) is confidential and is intended only 
for the review of the party to whom it is addressed. If you have received this 
transmission in error, please notify the sender immediately and discard the 
original message and any attachment(s).


RE: Having an out of memory condition on Derby 10.9.2.0

2015-05-19 Thread Bergquist, Brett
  

|  240 | 4,104
   |- ra org.apache.derby.impl.drda.XADatabase @ 0x5720a8c0  |  240 | 4,112
   |- ra org.apache.derby.impl.drda.XADatabase @ 0x573b3908  |  240 | 4,104
   |- ra org.apache.derby.impl.drda.XADatabase @ 0x575db540  |  240 | 4,112
   |- ra org.apache.derby.impl.drda.XADatabase @ 0x57713920  |  240 | 4,104
   |- ra org.apache.derby.impl.drda.XADatabase @ 0x57891350  |  240 | 4,104
   |- ra org.apache.derby.impl.drda.XADatabase @ 0x57a6e8b8  |  240 | 4,112
   |- ra org.apache.derby.impl.drda.XADatabase @ 0x57ac4eb0  |  240 | 4,112
   |- ra org.apache.derby.impl.drda.XADatabase @ 0x57c3e500  |  240 | 4,104

From: Bergquist, Brett [mailto:bbergqu...@canoga.com]
Sent: Tuesday, May 19, 2015 1:39 PM
To: derby-dev@db.apache.org
Subject: Having an out of memory condition on Derby 10.9.2.0

I am having an out of memory condition on Derby 10.9.2.0 in our production 
environment.   Derby is given 8G maximum heap but I am able to get a heap dump 
periodically and analyze it via Eclipse Memory Analyzer.

I see a couple of strange things and was wondering if I can attach a screen 
shot here or not?   Maybe someone might see something obvious.

Brett




[jira] [Created] (DERBY-6799) Suggested update to in-memory database docs

2015-04-19 Thread Osric Wilkinson (JIRA)
Osric Wilkinson created DERBY-6799:
--

 Summary: Suggested update to in-memory database docs
 Key: DERBY-6799
 URL: https://issues.apache.org/jira/browse/DERBY-6799
 Project: Derby
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 10.11.1.1
Reporter: Osric Wilkinson
Priority: Trivial


The docs on the in-memory database at 
https://db.apache.org/derby/docs/10.11/devguide/cdevdvlpinmemdb.html say to 
create a new connection string with drop=true to drop an in-memory db. They 
don't say that you need to close that connection to actually drop the db.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
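
A minimal sketch of the drop sequence the docs and this issue describe. The 
database name is illustrative, and the expectation that a successful drop 
surfaces as an SQLException with SQLState 08006 (as for a database shutdown) is 
stated here as an assumption, not as part of the issue:

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class DropInMemoryDatabase {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:derby:memory:demoDb";

        // Create the in-memory database, then close the connection to it,
        // which is the step this issue asks the documentation to spell out.
        Connection conn = DriverManager.getConnection(url + ";create=true");
        conn.close();

        // Request the drop through a new connection URL with drop=true.
        try {
            DriverManager.getConnection(url + ";drop=true");
        } catch (SQLException e) {
            System.out.println("drop reported SQLState " + e.getSQLState());
        }
    }
}
{noformat}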


Re: Embedded Derby causing out of memory error

2015-04-14 Thread Dyre Tjeldvoll
What makes you say that the excess memory consumption is caused by 
Derby? Do you have a memory profile showing that Derby consumes the heap?


On 04/13/2015 02:08 PM, rohitsonawat wrote:

While Inserting the large data record content say (450kb)using CLOB it
is throwing Out of Memory Error.this error is accruing after inserting
200 times data content. Below is the stack trace of it.
DBAppender::updateParameters()::Data.Length()::432164 Exception in
thread AppointmentFileListener java.lang.OutOfMemoryError: Java heap
space at java.util.Arrays.copyOf(Arrays.java:2367) at
java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at
java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at
java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuffer.append(StringBuffer.java:237) at
java.io.StringWriter.write(StringWriter.java:101) I tried all the
possible solution what I am able to do like -Ensuring all the resultset
statement are close after completing the transaction. -Setting the CLOB
value with setString(),setClob(),setASCIIStream(). -Heap size increment
to 1GB and 2GB. Any suggestion on this really appreciated. Thanks

View this message in context: Embedded Derby causing out of memory error
http://apache-database.10148.n7.nabble.com/Embedded-Derby-causing-out-of-memory-error-tp143925.html
Sent from the Apache Derby Developers mailing list archive
http://apache-database.10148.n7.nabble.com/Apache-Derby-Developers-f4.html
at Nabble.com.



--
Regards,

Dyre


Embedded Derby causing out of memory error

2015-04-13 Thread rohitsonawat
While inserting a large data record (say 450KB) using a CLOB, it is throwing an
Out of Memory Error. This error occurs after inserting the data content about
200 times. Below is the stack trace:

DBAppender::updateParameters()::Data.Length()::432164
Exception in thread AppointmentFileListener java.lang.OutOfMemoryError: Java heap space
 at java.util.Arrays.copyOf(Arrays.java:2367)
 at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
 at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
 at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
 at java.lang.StringBuffer.append(StringBuffer.java:237)
 at java.io.StringWriter.write(StringWriter.java:101)

I tried all the possible solutions I was able to:
- Ensuring all the ResultSet and Statement objects are closed after completing
the transaction.
- Setting the CLOB value with setString(), setClob(), setAsciiStream().
- Increasing the heap size to 1GB and 2GB.

Any suggestion on this is really appreciated. Thanks



--
View this message in context: 
http://apache-database.10148.n7.nabble.com/Embedded-Derby-causing-out-of-memory-error-tp143925.html
Sent from the Apache Derby Developers mailing list archive at Nabble.com.
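
For what it's worth, here is a minimal, hedged sketch of the streaming style of 
CLOB insertion touched on in the message above, using 
PreparedStatement::setCharacterStream instead of materializing the whole value 
as a String; the table, column and database names are made up for illustration:

{noformat}
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public final class ClobStreamingInsert {

    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:derby:memory:clobDb;create=true")) {

            try (PreparedStatement ddl =
                     conn.prepareStatement("CREATE TABLE docs(id INT, body CLOB)")) {
                ddl.executeUpdate();
            }

            String largeText = "x".repeat(450_000); // stand-in for the ~450KB record

            try (PreparedStatement insert =
                     conn.prepareStatement("INSERT INTO docs VALUES (?, ?)")) {
                insert.setInt(1, 1);
                // Stream the CLOB value rather than building it up in a
                // StringBuffer/StringWriter on the application side.
                insert.setCharacterStream(2, new StringReader(largeText), largeText.length());
                insert.executeUpdate();
            }
        }
    }
}
{noformat}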

[jira] [Resolved] (DERBY-6271) Backup error when in-memory with jar

2015-01-21 Thread Mike Matrigali (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Matrigali resolved DERBY-6271.
---
Resolution: Cannot Reproduce

no more info was provided to a query over a year ago.  closing as cannot 
reproduce.

 Backup error when in-memory with jar
 

 Key: DERBY-6271
 URL: https://issues.apache.org/jira/browse/DERBY-6271
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.8.2.2
Reporter: Albert
  Labels: derby_triage10_12

 connect 'jdbc:derby://localhost:1527/memory:imdb4tf;create=true';
 call 
 sqlj.install_jar('E:\baocr\NetBeansProjects\derbyFunc\dist\derbyFunc.jar', 
 'app.bcrFunc',0);
 call syscs_util.syscs_set_database_property('derby.database.classpath', 
 'app.bcrFunc');
 CREATE FUNCTION regexp_substr
 ( srcstr varchar(100), pattern varchar(100))
 RETURNS varchar(100)
 PARAMETER STYLE JAVA
 NO SQL LANGUAGE JAVA
 EXTERNAL NAME 'bcr.derby.MyFunc.regexp_substr';
 call syscs_util.syscs_backup_database('C:\Users\Albert\derby-IMDB');
 
 Error code -1, SQL state XSRS5: An error occurred during backup while copying the file from 
 (db=E:\baocr\.netbeans-derby\imdb4tf)E:\baocr\.netbeans-derby\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459#exists=true,
  isDirectory=false, length=2698, canWrite=true to 
 C:\Users\Albert\derby-IMDB\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459.
 --
 help please!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DERBY-6271) Backup error when in-memory with jar

2015-01-21 Thread Mike Matrigali (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Matrigali updated DERBY-6271:
--
Labels: derby_triage10_12  (was: )

10.12 bug triage effort. 



 Backup error when in-memory with jar
 

 Key: DERBY-6271
 URL: https://issues.apache.org/jira/browse/DERBY-6271
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.8.2.2
Reporter: Albert
  Labels: derby_triage10_12

 connect 'jdbc:derby://localhost:1527/memory:imdb4tf;create=true';
 call 
 sqlj.install_jar('E:\baocr\NetBeansProjects\derbyFunc\dist\derbyFunc.jar', 
 'app.bcrFunc',0);
 call syscs_util.syscs_set_database_property('derby.database.classpath', 
 'app.bcrFunc');
 CREATE FUNCTION regexp_substr
 ( srcstr varchar(100), pattern varchar(100))
 RETURNS varchar(100)
 PARAMETER STYLE JAVA
 NO SQL LANGUAGE JAVA
 EXTERNAL NAME 'bcr.derby.MyFunc.regexp_substr';
 call syscs_util.syscs_backup_database('C:\Users\Albert\derby-IMDB');
 
 Error code -1, SQL state XSRS5: An error occurred during backup while copying the file from 
 (db=E:\baocr\.netbeans-derby\imdb4tf)E:\baocr\.netbeans-derby\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459#exists=true,
  isDirectory=false, length=2698, canWrite=true to 
 C:\Users\Albert\derby-IMDB\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459.
 --
 help please!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DERBY-5963) Memory leak when shutting down Derby system

2015-01-21 Thread Mike Matrigali (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Matrigali updated DERBY-5963:
--
Issue  fix info: Repro attached
  Labels: derby_triage10_12  (was: )

10.12 triage effort.  

 Memory leak when shutting down Derby system
 ---

 Key: DERBY-5963
 URL: https://issues.apache.org/jira/browse/DERBY-5963
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.5.3.0, 10.9.1.0
 Environment: Embedded Derby
 Windows 7
 java version 1.6.0_31
 Java(TM) SE Runtime Environment (build 1.6.0_31-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
Reporter: Igor Sereda
  Labels: derby_triage10_12
 Attachments: TestDerbyLeak.java, yourkit.png


 I am using an embedded Derby on a server within OSGi environment, as a 
 private library in my bundle. When the bundle is deactivated, I stop Derby 
 database (with jdbc:derby:;shutdown=true;deregister=true URL)
 But although otherwise the database is released, an instance of 
 ContextManager stays in memory due to a leaked reference in a ThreadLocal 
 variable (from ContextService, I presume). The instance of ContextManager is 
 a big deal, because it also holds the whole page cache in memory (40MB), and 
 also, via class loader, holds whole my OSGi bundle too.
 Please let me know if you need any information on reproducing this problem.
 Thanks!
 Igor



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
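
For context, a minimal sketch of the shutdown call described in this issue; 
whether it avoids the reported ThreadLocal leak is exactly what the issue 
questions:

{noformat}
import java.sql.DriverManager;
import java.sql.SQLException;

public final class ShutdownDerbyEngine {

    public static void main(String[] args) {
        try {
            // Full-engine shutdown; deregister=true also deregisters the
            // embedded driver, as described in the issue.
            DriverManager.getConnection("jdbc:derby:;shutdown=true;deregister=true");
        } catch (SQLException e) {
            // A successful system-wide shutdown is reported as SQLState XJ015.
            System.out.println("shutdown SQLState: " + e.getSQLState());
        }
    }
}
{noformat}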


[jira] [Resolved] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-29 Thread Myrna van Lunteren (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myrna van Lunteren resolved DERBY-6662.
---
   Resolution: Fixed
Fix Version/s: 10.12.0.0
   10.11.1.2
   10.10.2.1

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Fix For: 10.10.2.1, 10.11.1.2, 10.12.0.0

 Attachments: DERBY-6662.diff, DERBY-6662.diff2


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14151219#comment-14151219
 ] 

ASF subversion and git services commented on DERBY-6662:


Commit 1628101 from [~myrna] in branch 'code/branches/10.10'
[ https://svn.apache.org/r1628101 ]

DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory 
databases
   preventing the optional tools test case to run with JDBC lower than 4

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff, DERBY-6662.diff2


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14150785#comment-14150785
 ] 

ASF subversion and git services commented on DERBY-6662:


Commit 1628008 from [~myrna] in branch 'code/trunk'
[ https://svn.apache.org/r1628008 ]

DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory 
databases
   changing the in-memory databasename used in the test.

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff, DERBY-6662.diff2


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14150788#comment-14150788
 ] 

ASF subversion and git services commented on DERBY-6662:


Commit 1628009 from [~myrna] in branch 'code/branches/10.11'
[ https://svn.apache.org/r1628009 ]

DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory 
databases
   merge of revision 1628008 from trunk to change the name of the db in the 
test.

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff, DERBY-6662.diff2


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-27 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14150818#comment-14150818
 ] 

ASF subversion and git services commented on DERBY-6662:


Commit 1628025 from [~myrna] in branch 'code/branches/10.10'
[ https://svn.apache.org/r1628025 ]

DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory 
databases
   backport of revision 1627895 and revision 1628009 from the 10.11 branch;
   making the affected methods return false for in-memory databases and adding 
a test.

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff, DERBY-6662.diff2


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-27 Thread Myrna van Lunteren (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14150822#comment-14150822
 ] 

Myrna van Lunteren commented on DERBY-6662:
---

The merge to 10.10 from 10.11 needed tweaking because 'goodStatement' and 
'assertResults(...)' were not available in the 10.10 BaseJDBCTestCase, and also 
I could not use 'String.contains()' as that gave me a build failure (10.10 came 
out with support for 1.4.2; contains is from Java 1.5).

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff, DERBY-6662.diff2


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-26 Thread Myrna van Lunteren (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myrna van Lunteren updated DERBY-6662:
--
Attachment: DERBY-6662.diff2

For good measure, adding a test using the metadata optional tool, as that's 
what was used in the repro description.

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff, DERBY-6662.diff2


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14149745#comment-14149745
 ] 

ASF subversion and git services commented on DERBY-6662:


Commit 1627851 from [~myrna] in branch 'code/trunk'
[ https://svn.apache.org/r1627851 ]

DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory 
databases
   Adding a test case using the metadata optional tool.

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff, DERBY-6662.diff2


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14150031#comment-14150031
 ] 

ASF subversion and git services commented on DERBY-6662:


Commit 1627895 from [~myrna] in branch 'code/branches/10.11'
[ https://svn.apache.org/r1627895 ]

DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory 
databases
  merge of revision 1627671 and 1627851 from trunk

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff, DERBY-6662.diff2


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-25 Thread Myrna van Lunteren (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myrna van Lunteren reassigned DERBY-6662:
-

Assignee: Myrna van Lunteren

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-25 Thread Myrna van Lunteren (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Myrna van Lunteren updated DERBY-6662:
--
Attachment: DERBY-6662.diff

Attaching a patch which fixes this.
Also adds a test (Derby6662Test) to the memorydb._Suite.
I ran the memorydb suite and the test passed in the environment with the fix, 
but failed without. I will commit this shortly.

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-09-25 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14148337#comment-14148337
 ] 

ASF subversion and git services commented on DERBY-6662:


Commit 1627671 from [~myrna] in branch 'code/trunk'
[ https://svn.apache.org/r1627671 ]

DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory 
databases
   Making the usesLocalFiles and usesLocalFilePerTable method return false if 
it's a memory database and add a test.

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.1.1
Reporter: Rick Hillegas
Assignee: Myrna van Lunteren
 Attachments: DERBY-6662.diff


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-07-14 Thread Rick Hillegas (JIRA)
Rick Hillegas created DERBY-6662:


 Summary: DatabaseMetaData.usesLocalFiles() returns true for 
in-memory databases
 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.0.0
Reporter: Rick Hillegas


DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. The 
following script shows this:

{noformat}
connect 'jdbc:derby:memory:db;create=true';

call syscs_util.syscs_register_tool( 'databaseMetaData', true );

values usesLocalFiles();
{noformat}

I think that this method should return false because an in-memory database does 
not store tables in a local file.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
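
A plain-JDBC sketch equivalent to the ij script in the report (no optional tool 
is needed when calling DatabaseMetaData directly; the behaviour noted in the 
comment is the one the report describes, before any fix):

{noformat}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public final class UsesLocalFilesCheck {

    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:derby:memory:db;create=true")) {
            DatabaseMetaData md = conn.getMetaData();
            // Per this issue, both methods returned true for an in-memory database.
            System.out.println("usesLocalFiles = " + md.usesLocalFiles());
            System.out.println("usesLocalFilePerTable = " + md.usesLocalFilePerTable());
        }
    }
}
{noformat}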


[jira] [Updated] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-07-14 Thread Rick Hillegas (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rick Hillegas updated DERBY-6662:
-

Issue  fix info: Newcomer,Repro attached  (was: Repro attached)

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.0.0
Reporter: Rick Hillegas

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 {noformat}
 I think that this method should return false because an in-memory database 
 does not store tables in a local file.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (DERBY-6662) DatabaseMetaData.usesLocalFiles() returns true for in-memory databases

2014-07-14 Thread Rick Hillegas (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rick Hillegas updated DERBY-6662:
-

Description: 
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And it 
also returns true for DatabaseMetaData.usesLocalFilePerTable(). The following 
script shows this:

{noformat}
connect 'jdbc:derby:memory:db;create=true';

call syscs_util.syscs_register_tool( 'databaseMetaData', true );

values usesLocalFiles();

values usesLocalFilePerTable();
{noformat}

I think that these methods should return false because an in-memory database 
does not store tables in files.

  was:
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. The 
following script shows this:

{noformat}
connect 'jdbc:derby:memory:db;create=true';

call syscs_util.syscs_register_tool( 'databaseMetaData', true );

values usesLocalFiles();
{noformat}

I think that this method should return false because an in-memory database does 
not store tables in a local file.


 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
 --

 Key: DERBY-6662
 URL: https://issues.apache.org/jira/browse/DERBY-6662
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.11.0.0
Reporter: Rick Hillegas

 DatabaseMetaData.usesLocalFiles() returns true for in-memory databases. And 
 it also returns true for DatabaseMetaData.usesLocalFilePerTable(). The 
 following script shows this:
 {noformat}
 connect 'jdbc:derby:memory:db;create=true';
 call syscs_util.syscs_register_tool( 'databaseMetaData', true );
 values usesLocalFiles();
 values usesLocalFilePerTable();
 {noformat}
 I think that these methods should return false because an in-memory database 
 does not store tables in files.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2014-06-09 Thread Rick Hillegas (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rick Hillegas updated DERBY-6096:
-

Attachment: releaseNote.html

Correct some typos in the detailed release note.

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.8.3.0, 10.9.1.0, 10.10.1.1
Reporter: Kathey Marsden
Assignee: Kathey Marsden
  Labels: derby_backport_reject_10_10
 Fix For: 10.11.0.0

 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff.txt, derby-6096_diff2.txt, less-gc.diff, releaseNote.html, 
 releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: In memory db and usage of file system apis

2014-02-04 Thread Mamta Satoor
Hi Rick,

Thanks for responding to my email. You are correct about the differences in the
legal filename length limits on various file systems, which is why it would be
nice to write the test with an in-memory URL so we are not limited by those
differences. To get around the problem of having a really large filename with an
in-memory db, I have been able to test the 1024-byte length limit for RDBNAM
using the following URL with nested directories. I will commit that change soon
for DERBY-4805.
connect
'jdbc:derby://localhost/memory:dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/dir1234567890/aaa;create=true';
thanks again,
Mamta

On Mon, Feb 3, 2014 at 6:53 AM, Rick Hillegas rick.hille...@oracle.com wrote:

 Hi Mamta,

 Some comments inline...


 On 1/31/14 3:18 PM, Mamta Satoor wrote:

 Hi,
 I am not familiar with Derby's in memory db implementation (accessed
 through jdbc url jdbc:derby:memory:... ) but I thought there will not be
 any file system access for such a db. But when I tried a long dbname with
 such a url, I got the exception(the complete stack trace is at the bottom
 of this email.) Caused by: java.sql.SQLException: Java exception: 'The
 parameter is incorrect.: java.io.IOException'. The url I tried is
 'jdbc:derby:memory:aa
 
 
 
 aaa;create=true';

 For the record, that connection url works fine on Mac OSX. Maybe that long
 database name is not a legal file name on your file system?

 I am copying a little part of the long stack trace to show that exception
 is being thrown by windows system api(I am trying this on a Windows 7
 machine)
 Caused by: java.io.IOException: The parameter is incorrect.
 at java.io.Win32FileSystem.canonicalize(Win32FileSystem.java:407)
 at java.io.File.getCanonicalPath(File.java:570)
 at org.apache.derby.impl.io.VFMemoryStorageFactory.init(
 VFMemoryStorageFactory.java:109)

 In-memory database names are canonicalized just like on-disk database
 names. That is, the part after the memory: token is treated as a file
 name. Relative and absolute file names are equivalent provided that they
 identify the same file name. So, on my file system the following two
 database names are equivalent:

 memory:foo
 memory:/Users/rh161140/derby/mainline/foo

 Enforcing this equivalence requires Derby to make java.io calls, even
 though nothing on disk is actually touched.

 Hope this helps,
 -Rick

 The java comment for VFMemoryStorageFactory.init is as follows. It looks
 like we are accessing the file system to make sure there is no such dbname
 already existing in the file system. Should we be catching 'The parameter
 is incorrect.: java.io.IOException' in this code and if we get this
 exception, then we can assume that there is no physical db with the same
 name and hence we can go ahead and create in memory db. Appreciate all the
 help. Thanks
 /**
  * Initializes the storage factory instance by setting up a temporary
  * directory, the database directory and checking if the database
 being
  * named already exists.
  *
  * @param home the value of {@code system.home} for this storage factory

  * @param databaseName the name of the database, all relative
 pathnames are
  *  relative to this name
  * @param tempDirNameIgnored ignored
  * @param uniqueName used to determine when the temporary directory
 can be
  *  created, but not to name the temporary directory itself
  *
  * @exception IOException on an error (unexpected).
  */
 Here is the complete stack trace
 ERROR XJ040: Failed to start database 'memory:
 
 

Re: In memory db and usage of file system apis

2014-02-03 Thread Rick Hillegas

Hi Mamta,

Some comments inline...

On 1/31/14 3:18 PM, Mamta Satoor wrote:

Hi,
I am not familiar with Derby's in memory db implementation (accessed 
through jdbc url jdbc:derby:memory:... ) but I thought there will not 
be any file system access for such a db. But when I tried a long 
dbname with such a url, I got the exception(the complete stack trace 
is at the bottom of this email.) Caused by: java.sql.SQLException: 
Java exception: 'The parameter is incorrect.: java.io.IOException'. 
The url I tried is

'jdbc:derby:memory:a;create=true';
For the record, that connection url works fine on Mac OSX. Maybe that 
long database name is not a legal file name on your file system?
I am copying a little part of the long stack trace to show that 
exception is being thrown by windows system api(I am trying this on a 
Windows 7 machine)

Caused by: java.io.IOException: The parameter is incorrect.
at java.io.Win32FileSystem.canonicalize(Win32FileSystem.java:407)
at java.io.File.getCanonicalPath(File.java:570)
at 
org.apache.derby.impl.io.VFMemoryStorageFactory.init(VFMemoryStorageFactory.java:109)
In-memory database names are canonicalized just like on-disk database 
names. That is, the part after the memory: token is treated as a file 
name. Relative and absolute file names are equivalent provided that they 
identify the same file name. So, on my file system the following two 
database names are equivalent:


memory:foo
memory:/Users/rh161140/derby/mainline/foo

Enforcing this equivalence requires Derby to make java.io calls, even 
though nothing on disk is actually touched.


Hope this helps,
-Rick
The java comment for VFMemoryStorageFactory.init is as follows. It 
looks like we are accessing the file system to make sure there is no 
such dbname already existing in the file system. Should we be catching 
'The parameter is incorrect.: java.io.IOException' in this code and if 
we get this exception, then we can assume that there is no physical db 
with the same name and hence we can go ahead and create in memory db. 
Appreciate all the help. Thanks

/**
 * Initializes the storage factory instance by setting up a temporary
 * directory, the database directory and checking if the database 
being

 * named already exists.
 *
 * @param home the value of {@code system.home} for this storage factory
 * @param databaseName the name of the database, all relative 
pathnames are

 *  relative to this name
 * @param tempDirNameIgnored ignored
 * @param uniqueName used to determine when the temporary 
directory can be

 *  created, but not to name the temporary directory itself
 *
 * @exception IOException on an error (unexpected).
 */
Here is the complete stack trace
ERROR XJ040: Failed to start database 
'memory:a' 
with class loader sun.misc.Launcher$AppClassLoader@53745374, see the next 
exception for details.
java.sql.SQLException: Failed to start database 
'memory:a' 
with class loader sun.misc.Launcher$AppClassLoader@53745374, see the next 
exception for details.
at 
org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:103)
at 
org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:137)

at org.apache.derby.impl.jdbc.Util.seeNextException(Util.java:310)
at 
org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:2842)
at 
org.apache.derby.impl.jdbc.EmbedConnection.init(EmbedConnection.java:405)
at 
org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(InternalDriver.java:628)
at 
org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:282)
at 
org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:913)
at 
org.apache.derby.jdbc.AutoloadedDriver.connect(AutoloadedDriver.java:147)

at java.sql.DriverManager.getConnection(DriverManager.java:419)
at java.sql.DriverManager.getConnection(DriverManager.java:391)
at 
org.apache.derby.impl.tools.ij.ij.dynamicConnection(ij.java:1483

In memory db and usage of file system apis

2014-01-31 Thread Mamta Satoor
Hi,

I am not familiar with Derby's in-memory db implementation (accessed
through the jdbc url jdbc:derby:memory:... ), but I thought there would not be
any file system access for such a db. However, when I tried a long dbname with
such a url, I got the following exception (the complete stack trace is at the
bottom of this email): Caused by: java.sql.SQLException: Java exception: 'The
parameter is incorrect.: java.io.IOException'. The url I tried is
'jdbc:derby:memory:a;create=true';

I am copying a small part of the long stack trace to show that the exception
is being thrown by a Windows system API (I am trying this on a Windows 7
machine):
Caused by: java.io.IOException: The parameter is incorrect.
at java.io.Win32FileSystem.canonicalize(Win32FileSystem.java:407)
at java.io.File.getCanonicalPath(File.java:570)
at
org.apache.derby.impl.io.VFMemoryStorageFactory.init(VFMemoryStorageFactory.java:109)

The javadoc comment for VFMemoryStorageFactory.init is as follows. It looks
like we are accessing the file system to make sure there is no db with that
name already existing in the file system. Should we catch 'The parameter
is incorrect.: java.io.IOException' in this code? If we get this exception,
we could assume that there is no physical db with the same name and go ahead
and create the in-memory db (a rough sketch of that idea follows the javadoc
below). Appreciate all the help. Thanks

/**
 * Initializes the storage factory instance by setting up a temporary
 * directory, the database directory and checking if the database being
 * named already exists.
 *
 * @param home the value of {@code system.home} for this storage factory
 * @param databaseName the name of the database, all relative pathnames
are
 *  relative to this name
 * @param tempDirNameIgnored ignored
 * @param uniqueName used to determine when the temporary directory can
be
 *  created, but not to name the temporary directory itself
 *
 * @exception IOException on an error (unexpected).
 */
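For illustration only, a minimal, self-contained sketch of the fallback idea
described above, assuming we simply reuse the non-canonical name when
canonicalization fails (this is not the actual VFMemoryStorageFactory code;
the class and method names here are made up):

import java.io.File;
import java.io.IOException;

public final class CanonicalizeFallbackSketch {
    static String canonicalOrFallback(String databaseName) {
        File f = new File(databaseName);
        try {
            // This is the call that fails on Windows for very long names.
            return f.getCanonicalPath();
        } catch (IOException ioe) {
            // e.g. "The parameter is incorrect." -- assume no on-disk
            // database can exist under such a name and keep going.
            return f.getAbsolutePath();
        }
    }
}

Whether that assumption is safe for every IOException is exactly the open
question.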


Here is the complete stack trace
ERROR XJ040: Failed to start database
'memory:a'
with class loader sun.misc.Launcher$AppClassLoader@53745374, see the next
exception for details.
java.sql.SQLException: Failed to start database
'memory:a'
with class loader sun.misc.Launcher$AppClassLoader@53745374, see the next
exception for details.
at
org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:103)
at
org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Util.java:137)
at org.apache.derby.impl.jdbc.Util.seeNextException(Util.java:310)
at
org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:2842)
at
org.apache.derby.impl.jdbc.EmbedConnection.init(EmbedConnection.java:405)
at
org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(InternalDriver.java:628)
at
org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:282)
at
org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:913)
at
org.apache.derby.jdbc.AutoloadedDriver.connect(AutoloadedDriver.java:147)
at java.sql.DriverManager.getConnection(DriverManager.java:419)
at java.sql.DriverManager.getConnection(DriverManager.java:391)
at org.apache.derby.impl.tools.ij.ij.dynamicConnection(ij.java:1483)
at org.apache.derby.impl.tools.ij.ij.ConnectStatement(ij.java:1313)
at org.apache.derby.impl.tools.ij.ij.ijStatement(ij.java:1101)
at
org.apache.derby.impl.tools.ij.utilMain.runScriptGuts(utilMain.java:347)
at org.apache.derby.impl.tools.ij.utilMain.go(utilMain.java:245)
at org.apache.derby.impl.tools.ij.Main.go(Main.java:229)
at org.apache.derby.impl.tools.ij.Main.mainCore(Main.java:184)
at org.apache.derby.impl.tools.ij.Main.main(Main.java:75)
at org.apache.derby.tools.ij.main(ij.java:59)
Caused by: java.sql.SQLException: Failed to start database
'memory:a

[jira] [Created] (DERBY-6449) Analyze and possible correct the suspicious addition of in-memory dependencies during statement execution

2014-01-08 Thread Rick Hillegas (JIRA)
Rick Hillegas created DERBY-6449:


 Summary: Analyze and possible correct the suspicious addition of 
in-memory dependencies during statement execution
 Key: DERBY-6449
 URL: https://issues.apache.org/jira/browse/DERBY-6449
 Project: Derby
  Issue Type: Task
  Components: SQL
Affects Versions: 10.11.0.0
Reporter: Rick Hillegas


During work on DERBY-6434, I was surprised to find that in-memory dependencies 
are added during statement execution. I had expected that the dependency graph 
would be completely built by the bind() phase. This issue is a place to record 
an analysis of the calls to DependencyManager.addDependency() made by the 
ConstantActions and to make any necessary changes.





[jira] [Updated] (DERBY-6449) Analyze and possibly correct the suspicious addition of in-memory dependencies during statement execution

2014-01-08 Thread Rick Hillegas (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rick Hillegas updated DERBY-6449:
-

Summary: Analyze and possibly correct the suspicious addition of in-memory 
dependencies during statement execution  (was: Analyze and possible correct the 
suspicious addition of in-memory dependencies during statement execution)

 Analyze and possibly correct the suspicious addition of in-memory 
 dependencies during statement execution
 -

 Key: DERBY-6449
 URL: https://issues.apache.org/jira/browse/DERBY-6449
 Project: Derby
  Issue Type: Task
  Components: SQL
Affects Versions: 10.11.0.0
Reporter: Rick Hillegas

 During work on DERBY-6434, I was surprised to find that in-memory 
 dependencies are added during statement execution. I had expected that the 
 dependency graph would be completely built by the bind() phase. This issue is 
 a place to record an analysis of the calls to 
 DependencyManager.addDependency() made by the ConstantActions and to make any 
 necessary changes.





[jira] [Updated] (DERBY-6271) Backup error when in-memory with jar

2014-01-08 Thread Mike Matrigali (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Matrigali updated DERBY-6271:
--

Component/s: Store

 Backup error when in-memory with jar
 

 Key: DERBY-6271
 URL: https://issues.apache.org/jira/browse/DERBY-6271
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.8.2.2
Reporter: Albert

 connect 'jdbc:derby://localhost:1527/memory:imdb4tf;create=true';
 call 
 sqlj.install_jar('E:\baocr\NetBeansProjects\derbyFunc\dist\derbyFunc.jar', 
 'app.bcrFunc',0);
 call syscs_util.syscs_set_database_property('derby.database.classpath', 
 'app.bcrFunc');
 CREATE FUNCTION regexp_substr
 ( srcstr varchar(100), pattern varchar(100))
 RETURNS varchar(100)
 PARAMETER STYLE JAVA
 NO SQL LANGUAGE JAVA
 EXTERNAL NAME 'bcr.derby.MyFunc.regexp_substr';
 call syscs_util.syscs_backup_database('C:\Users\Albert\derby-IMDB');
 
 Error code -1, SQL state XSRS5: during backup, an error occurred while copying the file from 
 (db=E:\baocr\.netbeans-derby\imdb4tf)E:\baocr\.netbeans-derby\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459#exists=true,
  isDirectory=false, length=2698, canWrite=true to 
 C:\Users\Albert\derby-IMDB\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459.
 --
 help please!





[jira] [Commented] (DERBY-6271) Backup error when in-memory with jar

2013-06-24 Thread Dag H. Wanvik (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13691729#comment-13691729
 ] 

Dag H. Wanvik commented on DERBY-6271:
--

It seems the copying of that jar file into 
C:\Users\Albert\derby-IMDB\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459 gave an error.

a) Did you check that you have file system permissions to write into that 
location?
b) If the server is running under a security manager, do the server's policies 
allow writing into that location? 
c) If so, could you be out of file space?
d) Did you see any stack trace in derby.log? If so, could you post it?




 Backup error when in-memory with jar
 

 Key: DERBY-6271
 URL: https://issues.apache.org/jira/browse/DERBY-6271
 Project: Derby
  Issue Type: Bug
Affects Versions: 10.8.2.2
Reporter: Albert

 connect 'jdbc:derby://localhost:1527/memory:imdb4tf;create=true';
 call 
 sqlj.install_jar('E:\baocr\NetBeansProjects\derbyFunc\dist\derbyFunc.jar', 
 'app.bcrFunc',0);
 call syscs_util.syscs_set_database_property('derby.database.classpath', 
 'app.bcrFunc');
 CREATE FUNCTION regexp_substr
 ( srcstr varchar(100), pattern varchar(100))
 RETURNS varchar(100)
 PARAMETER STYLE JAVA
 NO SQL LANGUAGE JAVA
 EXTERNAL NAME 'bcr.derby.MyFunc.regexp_substr';
 call syscs_util.syscs_backup_database('C:\Users\Albert\derby-IMDB');
 
 Error code -1, SQL state XSRS5: during backup, an error occurred while copying the file from 
 (db=E:\baocr\.netbeans-derby\imdb4tf)E:\baocr\.netbeans-derby\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459#exists=true,
  isDirectory=false, length=2698, canWrite=true to 
 C:\Users\Albert\derby-IMDB\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459.
 --
 help please!



[jira] [Updated] (DERBY-1699) Save optimizer costs in saved objects rather than as compiled byte code, reduces memory usage and generated class size.

2013-06-24 Thread Mamta A. Satoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-1699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mamta A. Satoor updated DERBY-1699:
---

Urgency: Normal
 Labels: derby_triage10_11  (was: )

 Save optimizer costs in saved objects rather than as compiled byte code, 
 reduces memory usage and generated class size.
 ---

 Key: DERBY-1699
 URL: https://issues.apache.org/jira/browse/DERBY-1699
 Project: Derby
  Issue Type: Improvement
  Components: SQL
Reporter: Daniel John Debrunner
  Labels: derby_triage10_11

  A UNION node will generate byte code to call this method:
 NoPutResultSet getUnionResultSet(NoPutResultSet source1,
 NoPutResultSet source2,
 Activation activation,
 int resultSetNumber,
 double optimizerEstimatedRowCount,
 double optimizerEstimatedCost,
 GeneratedMethod closeCleanup)
 The optimizer costs being passed in are rarely used; in some cases they are 
 used as estimates for sizing items.
 They are also used if the plan is displayed, to show the costs.
 It's possible that the cost estimates could be saved in the saved objects 
 structure of the plan and be available by
 result set number. E.g. store a Hashtable in the saved objects with a key of 
 costEstimates; the hashtable would have a key of resultSetNumber and a value 
 of a StoreCostResult. This would also be a one-time storage at compile time, 
 rather than the current code, which incurs both a cpu and a memory cost at 
 runtime for each ResultSet and hence each active query. 
 This would apply to any node that takes an optimizer cost.
 This has been split out from DERBY-766
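 A rough, self-contained sketch of that idea, assuming a plain Map keyed by 
 result set number (the class and field names are made up, not Derby's):

 import java.util.HashMap;
 import java.util.Map;

 final class CostEstimateSketch {
     // Stand-in for Derby's StoreCostResult; only the two estimates matter here.
     static final class Estimate {
         final double rowCount;
         final double cost;
         Estimate(double rowCount, double cost) { this.rowCount = rowCount; this.cost = cost; }
     }

     // Built once at compile time and kept with the plan's saved objects
     // under a "costEstimates" key, instead of being baked into generated code.
     private final Map<Integer, Estimate> costEstimates = new HashMap<>();

     void record(int resultSetNumber, double rows, double cost) {
         costEstimates.put(resultSetNumber, new Estimate(rows, cost));
     }

     // Runtime (or plan display) looks the estimates up by result set number.
     Estimate estimateFor(int resultSetNumber) {
         return costEstimates.get(resultSetNumber);
     }
 }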



[jira] [Created] (DERBY-6271) Backup error when in-memory with jar

2013-06-21 Thread Albert (JIRA)
Albert created DERBY-6271:
-

 Summary: Backup error when in-memory with jar
 Key: DERBY-6271
 URL: https://issues.apache.org/jira/browse/DERBY-6271
 Project: Derby
  Issue Type: Bug
Affects Versions: 10.8.2.2
Reporter: Albert


connect 'jdbc:derby://localhost:1527/memory:imdb4tf;create=true';
call sqlj.install_jar('E:\baocr\NetBeansProjects\derbyFunc\dist\derbyFunc.jar', 
'app.bcrFunc',0);
call syscs_util.syscs_set_database_property('derby.database.classpath', 
'app.bcrFunc');
CREATE FUNCTION regexp_substr
( srcstr varchar(100), pattern varchar(100))
RETURNS varchar(100)
PARAMETER STYLE JAVA
NO SQL LANGUAGE JAVA
EXTERNAL NAME 'bcr.derby.MyFunc.regexp_substr';

call syscs_util.syscs_backup_database('C:\Users\Albert\derby-IMDB');

Error code -1, SQL state XSRS5: during backup, an error occurred while copying the file from 
(db=E:\baocr\.netbeans-derby\imdb4tf)E:\baocr\.netbeans-derby\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459#exists=true,
 isDirectory=false, length=2698, canWrite=true to 
C:\Users\Albert\derby-IMDB\imdb4tf\jar\APP\BCRFUNC.jar.G1371800096459.
--
help please!



[jira] [Closed] (DERBY-5057) Out-of-memory error in istat tests

2013-06-14 Thread Rick Hillegas (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rick Hillegas closed DERBY-5057.



 Out-of-memory error in istat tests
 --

 Key: DERBY-5057
 URL: https://issues.apache.org/jira/browse/DERBY-5057
 Project: Derby
  Issue Type: Bug
  Components: Test
 Environment: Java 5, solaris
Reporter: Rick Hillegas
  Labels: derby_triage10_8

 We saw an out-of-memory error in the istat tests during the nightly test run 
 on Java 5 on Solaris last night. See 
 http://dbtg.foundry.sun.com/derby/test/Daily/jvm1.5/testing/Limited/testSummary-1071310.html



[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-06-13 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-6096:
--

Labels: derby_backport_reject_10_10  (was: )

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.8.3.0, 10.9.1.0, 10.10.1.1
Reporter: Kathey Marsden
Assignee: Kathey Marsden
  Labels: derby_backport_reject_10_10
 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff2.txt, derby-6096_diff.txt, less-gc.diff, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Resolved] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-06-13 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden resolved DERBY-6096.
---

Resolution: Fixed

Resolving. This issue should not be backported, as it could cause a change in 
performance for some applications that expect the zero estimate.

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.8.3.0, 10.9.1.0, 10.10.1.1
Reporter: Kathey Marsden
Assignee: Kathey Marsden
  Labels: derby_backport_reject_10_10
 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff2.txt, derby-6096_diff.txt, less-gc.diff, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-06-13 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-6096:
--

Fix Version/s: 10.11.0.0

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.8.3.0, 10.9.1.0, 10.10.1.1
Reporter: Kathey Marsden
Assignee: Kathey Marsden
  Labels: derby_backport_reject_10_10
 Fix For: 10.11.0.0

 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff2.txt, derby-6096_diff.txt, less-gc.diff, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Commented] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those type

2013-05-21 Thread Dag H. Wanvik (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662906#comment-13662906
 ] 

Dag H. Wanvik commented on DERBY-6096:
--

Can this issue be resolved?


 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.8.3.0, 10.9.1.0, 10.10.1.1
Reporter: Kathey Marsden
Assignee: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff2.txt, derby-6096_diff.txt, less-gc.diff, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-04-04 Thread Knut Anders Hatlen (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Knut Anders Hatlen updated DERBY-6096:
--

Attachment: less-gc.diff

Attaching less-gc.diff which makes a small change in the tests so that they 
only run System.gc() if the memory statistics are actually going to be printed. 
(The call to gc() doesn't seem to slow down the test when it's run separately, 
but I suppose it could take longer if it runs as part of a larger test suite 
and there's more data on the heap.)

Committed revision 1464470.
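The shape of the change, as a rough sketch rather than the actual diff (the
derby.tests.trace flag is the one mentioned elsewhere on this issue):

// Only pay for a full GC when the memory statistics will actually be printed.
boolean trace = Boolean.getBoolean("derby.tests.trace");
if (trace) {
    System.gc();
    System.out.println("Total: " + Runtime.getRuntime().totalMemory()
            + " Free: " + Runtime.getRuntime().freeMemory());
}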

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
Assignee: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff2.txt, derby-6096_diff.txt, less-gc.diff, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Commented] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those type

2013-04-03 Thread Kathey Marsden (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13621617#comment-13621617
 ] 

Kathey Marsden commented on DERBY-6096:
---

Sending    java\engine\org\apache\derby\iapi\types\DataTypeDescriptor.java
Sending    java\testing\org\apache\derbyTesting\functionTests\tests\memory\BlobMemTest.java
Sending    java\testing\org\apache\derbyTesting\functionTests\tests\memory\ClobMemTest.java
Transmitting file data ...
Committed revision 1464247.

URL: http://svn.apache.org/r1464247
Log:
DERBY-6096 OutOfMemoryError with Clob or Blob hash join: 
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would 
underestimate memory usage for those types at zero 

Estimate BLOB/CLOB size at 10,000 like other long types.



 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff2.txt, derby-6096_diff.txt, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Assigned] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-04-03 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden reassigned DERBY-6096:
-

Assignee: Kathey Marsden

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
Assignee: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff2.txt, derby-6096_diff.txt, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-04-02 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-6096:
--

Attachment: derby-6096_code_diff.txt

This is the proposed code change for this issue, which is to have BLOB and CLOB 
match the other long types at an estimated 10,000 bytes.  I still need to add 
tests.  

This change will require a release note, as users may want to increase 
derby.language.maxMemoryPerTable to accommodate the fact that this value is now 
being used with BLOB and CLOB.
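For reference, a hedged example of what raising it might look like (assuming 
derby.language.maxMemoryPerTable is set as a JVM-wide system property, value in 
kilobytes, before the engine boots; the class name is made up):

public class RaiseMaxMemoryPerTable {
    public static void main(String[] args) {
        // 4096 KB = 4 MB per table per query; adjust to taste.
        System.setProperty("derby.language.maxMemoryPerTable", "4096");
        // ...then load the embedded driver and open connections as usual.
    }
}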


 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-04-02 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-6096:
--

Attachment: derby-6096_diff.txt

derby-6096_diff.txt is the full patch with tests based on Knut's repro. Tests 
are in progress, please review.


 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, derby-6096_diff.txt


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Commented] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those type

2013-04-02 Thread Kathey Marsden (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620147#comment-13620147
 ] 

Kathey Marsden commented on DERBY-6096:
---

Suites.All, derbyall, and the memory suite with -Xmx64M passed with the 
derby-6096_diff.txt patch except for DERBY-6138 which passed on rerun with a 
different classpath and is not likely related.

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, derby-6096_diff.txt


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Commented] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those type

2013-04-02 Thread Mike Matrigali (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620154#comment-13620154
 ] 

Mike Matrigali commented on DERBY-6096:
---

I reviewed the change, and it seems like a good change to me.  Some day we probably 
should just raise the default internal maxMemoryPerTable on a major release 
boundary to reflect the increased default memory most users have, or maybe come up 
with a better zero-admin auto config for it.  1 meg seems pretty small.  

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, derby-6096_diff.txt


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-04-02 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-6096:
--

Attachment: releaseNote.html

Attached is a release note for this issue. I am not sure if it has too much 
information as I don't think the current default maxMemoryPerTable is published 
and I know there has been talk of increasing  it.


 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff.txt, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Commented] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those type

2013-04-02 Thread Mamta A. Satoor (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620178#comment-13620178
 ] 

Mamta A. Satoor commented on DERBY-6096:


I reviewed the change too and it looks good to me. Just one minor comment: in 
the tests, should we initialize the data being inserted into the clob and blob 
columns?

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff.txt, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-04-02 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-6096:
--

Attachment: derby-6096_diff2.txt

Thanks Mamta. derby-6096_diff2.txt fills the arrays.


 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.1.1, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java, derby-6096_code_diff.txt, 
 derby-6096_diff2.txt, derby-6096_diff.txt, releaseNote.html


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Commented] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those type

2013-03-06 Thread Mike Matrigali (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13594913#comment-13594913
 ] 

Mike Matrigali commented on DERBY-6096:
---

What estimate is proposed for estimatedMemoryUsage of blobs/clobs?  Since 
they are variable-length objects it is hard to know what size they actually are.  
Given your repro program, it might be reasonable to use 32k and assume store 
will stream the rest of each; definitely better than 0.  

Note that with the fix we may use far less memory for the query, and for some 
cases like your repro that will be good.  But for others that did not see errors, their
queries may run slower after the fix when we use less memory.  This may 
especially be a concern if the fix is to be backported.

 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.0.0, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Commented] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those type

2013-03-06 Thread Kathey Marsden (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13595119#comment-13595119
 ] 

Kathey Marsden commented on DERBY-6096:
---

My thought was to just match the existing entries for LONGVARCHAR_TYPE_ID 
and LONGVARBIT_TYPE_ID,
which is:

 /* Who knows? Let's just use some big number */
return 10000.0;
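A rough, self-contained sketch of the proposed shape of the change (not the 
actual Derby patch; the constants below are stand-ins for the real 
StoredFormatIds values):

final class MemoryEstimateSketch {
    // Hypothetical stand-ins for the real type-format ids.
    static final int LONGVARCHAR = 1, LONGVARBIT = 2, BLOB = 3, CLOB = 4;

    static double estimatedMemoryUsage(int typeFormatId) {
        switch (typeFormatId) {
        case LONGVARCHAR:
        case LONGVARBIT:
        case BLOB:   // proposed addition
        case CLOB:   // proposed addition
            /* Who knows? Let's just use some big number */
            return 10000.0;
        default:
            // Types without an explicit case are effectively counted as free,
            // which is what let LOB rows overfill the hash join's budget.
            return 0.0;
        }
    }
}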

I see the concern with backporting. Maybe the fix should just go into 10.10, 
along with a release note whose workaround for performance issues is to set 
derby.language.maxMemoryPerTable higher. Maybe documenting 
derby.language.maxMemoryPerTable would be good at the same time.  


 OutOfMemoryError with Clob or Blob hash join: 
 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 -

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.0.0, 10.8.3.0
Reporter: Kathey Marsden
 Attachments: D6096.java


 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Commented] (DERBY-6096) DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types at zero

2013-03-05 Thread Kathey Marsden (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13593431#comment-13593431
 ] 

Kathey Marsden commented on DERBY-6096:
---

I am out today but thought I would post where I was on trying to get a 
reproduction for the memory usage with Clob hash joins.  I created this fixture 
in memory.ClobMemTest.  At one point I was getting an OOM on the query, running 
with -Xmx64M and derby.language.maxMemoryPerTable not set, but after some 
cleanup it no longer occurs.  I will look more closely tomorrow, but I just 
wanted to post where I am and get input on how to reproduce.

public void testClobHashJoin() throws SQLException {
    Statement s = createStatement();
    try {
        // Setting maxMemoryPerTable to 0 allows the query to complete
        // until OOM is fixed.
        //println(setSystemProperty("derby.language.maxMemoryPerTable", "0"));
        //setSystemProperty("derby.language.maxMemoryPerTable", "0");

        s.executeUpdate("CREATE TABLE T1 (ID INT , NAME VARCHAR(30))");
        s.executeUpdate("CREATE TABLE T2 (ID INT , CDATA CLOB(1G))");
        PreparedStatement ps = prepareStatement("insert into t1 values(?,?)");
        PreparedStatement ps2 = prepareStatement("insert into t2 values(?,?)");
        // insert 8 long rows
        for (int i = 1; i <= 8; i++) {
            ps.setInt(1, i);
            ps.setString(2, "name" + i);
            ps.executeUpdate();
            ps2.setInt(1, i);
            ps2.setCharacterStream(2, new LoopingAlphabetReader(
                    LONG_CLOB_LENGTH), LONG_CLOB_LENGTH);
            ps2.executeUpdate();
        }
        s.execute("CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1)");
        // Do a query. Force a hash join
        PreparedStatement ps3 = prepareStatement("SELECT * FROM t1, t2 "
                + "--DERBY-PROPERTIES joinStrategy=hash\n"
                + "where t1.id = t2.id AND t1.id < 8");
        ResultSet rs = ps3.executeQuery();
        int i = 0;
        for (; rs.next(); i++) {
            // just fetch, don't materialize results
            // derby.tests.trace prints memory usage
            println("TotalMemory:" + Runtime.getRuntime().totalMemory()
                    + " " + "Free Memory:"
                    + Runtime.getRuntime().freeMemory());
        }
        assertEquals("Expected 7 rows, got " + i, 7, i);
        rs.close();
        RuntimeStatisticsParser p = SQLUtilities
                .getRuntimeStatisticsParser(s);
        println(p.toString());
        assertTrue(p.usedHashJoin());

    } finally {
        removeSystemProperty("derby.language.maxMemoryPerTable");
        s.execute("CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(0)");
    }
}


 DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
 would underestimate memory usage for those types at zero
 ---

 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.3.0, 
 10.6.2.1, 10.7.1.1, 10.9.1.0, 10.10.0.0, 10.8.3.0
Reporter: Kathey Marsden

 In discussion on derby-dev regarding how much memory is used for hash joins, 
 Knut noted:
 I haven't verified, but I think HashJoinStrategy uses
 DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
 the hash table will consume. That method has no case for BLOB or CLOB,
 so it looks as if it will return zero for LOB columns. If that's so, it
 will definitely overestimate how many rows fits in maxMemoryPerTable
 kilobytes if the rows contain LOBs.
 DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB 
 and CLOB and we should try verify if this theory is correct with a 
 reproduction.



[jira] [Commented] (DERBY-6096) DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types at zero

2013-03-05 Thread Knut Anders Hatlen (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13593464#comment-13593464
 ] 

Knut Anders Hatlen commented on DERBY-6096:
---

You may have better luck with smaller LOBs. LONG_CLOB_LENGTH is 1800, which 
means the SQLClob objects inserted into BackingStoreHashtable aren't 
materialized and don't take up that much space. Using a larger number of 
smaller LOBs (so small that they don't overflow to another page) should 
increase the memory footprint, as those LOBs will come out of store fully 
materialized.



[jira] [Updated] (DERBY-6096) DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types at zero

2013-03-05 Thread Knut Anders Hatlen (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Knut Anders Hatlen updated DERBY-6096:
--

Attachment: D6096.java

The attached program, D6096.java, produces an OOME in BackingStoreHashtable 
during a join when I run it with -Xmx64M.

The program inserts 1500 32KB BLOBs into a table and joins the table with 
itself.
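For readers without the attachment, here is a hedged sketch of a repro along
the lines described above. This is not the attached D6096.java; the in-memory
database URL, table name, and column names are illustrative assumptions.

import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class BlobHashJoinRepro {
    public static void main(String[] args) throws Exception {
        Connection c = DriverManager.getConnection(
                "jdbc:derby:memory:d6096;create=true");
        Statement s = c.createStatement();
        s.execute("create table t(id int primary key, b blob)");

        // insert 1500 rows, each carrying a 32KB BLOB
        byte[] payload = new byte[32 * 1024];
        PreparedStatement ins = c.prepareStatement("insert into t values (?, ?)");
        for (int i = 0; i < 1500; i++) {
            ins.setInt(1, i);
            ins.setBinaryStream(2, new ByteArrayInputStream(payload), payload.length);
            ins.executeUpdate();
        }

        // join the table with itself; if estimatedMemoryUsage() counts the
        // BLOB column as zero, a hash join can pull far too many rows into
        // the in-memory hash table (run with -Xmx64M to provoke the OOME)
        ResultSet rs = s.executeQuery(
                "select t1.id from t t1, t t2 where t1.id = t2.id");
        int rows = 0;
        while (rs.next()) {
            rows++;
        }
        System.out.println("joined rows: " + rows);
        c.close();
    }
}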



[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join:DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types a

2013-03-05 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-6096:
--

Summary: OutOfMemoryError with Clob or Blob hash 
join:DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
would underestimate memory usage for those types at zero  (was: 
DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
would underestimate memory usage for those types at zero)



[jira] [Updated] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types

2013-03-05 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-6096:
--

Summary: OutOfMemoryError with Clob or Blob hash join: 
DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
would underestimate memory usage for those types at zero  (was: 
OutOfMemoryError with Clob or Blob hash 
join:DataTypeDescriptor.estimatedMemoryUsage()  has no case for BLOB or CLOB so 
would underestimate memory usage for those types at zero)



[jira] [Commented] (DERBY-6096) OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those type

2013-03-05 Thread Kathey Marsden (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13593597#comment-13593597
 ] 

Kathey Marsden commented on DERBY-6096:
---

Thank you, Knut. I was able to reproduce the OOM with your program. Also, a
quick hack of DataTypeDescriptor.estimatedMemoryUsage() rectifies the problem
and verifies your original theory. Updating the summary accordingly.
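For illustration only: the sketch below is not Kathey's actual hack and does
not use Derby's internal type identifiers. It only shows the essential idea,
that the per-column estimate must stop defaulting to zero for LOB types; the
type names, the 2-bytes-per-char figure, and the 4KB floor are assumptions.

enum ColType { INT, VARCHAR, BLOB, CLOB }

final class MemoryEstimate {
    // rough bytes-per-column guess used when sizing an in-memory hash table
    static double estimatedMemoryUsage(ColType type, int maxWidth) {
        switch (type) {
            case INT:
                return 16.0;
            case VARCHAR:
                return maxWidth * 2.0;   // Java chars take 2 bytes on the heap
            case BLOB:
            case CLOB:
                // previously such types fell through and counted as 0, letting
                // far too many materialized LOB rows into the hash table;
                // the 4KB floor is an arbitrary illustrative choice
                return Math.max(maxWidth, 4096);
            default:
                return 0.0;
        }
    }
}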





Re: How Much Memory for hash joins

2013-03-04 Thread Knut Anders Hatlen
Katherine Marsden kmarsdende...@sbcglobal.net writes:

 On 2/28/2013 9:11 AM, Mike Matrigali wrote:
 In BackingStoreHashTableFromScan I see.

this.max_inmemory_rowcnt = max_inmemory_rowcnt;
  if( max_inmemory_rowcnt > 0)
      max_inmemory_size = Long.MAX_VALUE;
  else
      max_inmemory_size = Runtime.getRuntime().totalMemory()/100;



 I have been reading the comments and trying to make sense of the logic
 and understand all that is happening with max_inmemory_size. I don't
 have a test case that goes through the else part of the condition
 above.

 One thing I did notice is that Runtime.getRuntime().totalMemory()
 returns really different things if -Xms is set large, for example with
 nothing else going on with  -Xms1048m -Xmx1048m I get:

 Total Memory:1098907648 Free Memory:1088915600

 With just -Xmx1048m
 Total Memory:4194304 Free Memory:2750304

 Two questions
 1) Might hash joins use an unexpectedly large amount of memory if -Xms
 is set large?  I know at the user site where this was being set, they
 were setting -Xms in the hopes of optimizing memory usage but I wonder
 if it actually increased the amount of memory used by hash joins.

I think hash scans pass in a maximum row count to the
BackingStoreHashTable, so the totalMemory() calculation won't be used by
hash joins. The only use of the totalMemory() calculation I'm aware of
is for scrollable result sets.

 2) Is there a test case that goes through the else clause above that I
 could use for my experimentation?

Executing this command in ij exercises the else clause:

ij> get scroll insensitive cursor c as 'select * from sys.systables';

-- 
Knut Anders


Re: How Much Memory for hash joins

2013-03-04 Thread Knut Anders Hatlen
Mike Matrigali mikem_...@sbcglobal.net writes:

 Also note that these are all estimates within the system.  As Knut
 pointed out there are some known problems with the estimates.  And
 even with fixes he has suggested, the code is probably just guessing
 with things like blobs/clobs.

I haven't verified, but I think HashJoinStrategy uses
DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
the hash table will consume. That method has no case for BLOB or CLOB,
so it looks as if it will return zero for LOB columns. If that's so, it
will definitely overestimate how many rows fit in maxMemoryPerTable
kilobytes if the rows contain LOBs.

-- 
Knut Anders


[jira] [Created] (DERBY-6096) DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types at zero

2013-03-04 Thread Kathey Marsden (JIRA)
Kathey Marsden created DERBY-6096:
-

 Summary: DataTypeDescriptor.estimatedMemoryUsage()  has no case 
for BLOB or CLOB so would underestimate memory usage for those types at zero
 Key: DERBY-6096
 URL: https://issues.apache.org/jira/browse/DERBY-6096
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.8.3.0, 10.9.1.0, 10.7.1.1, 10.6.2.1, 10.5.3.0, 
10.4.2.0, 10.3.3.0, 10.2.2.0, 10.1.3.1, 10.10.0.0
Reporter: Kathey Marsden


In discussion on derby-dev regarding how much memory is used for hash joins, 
Knut noted:

I haven't verified, but I think HashJoinStrategy uses
DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
the hash table will consume. That method has no case for BLOB or CLOB,
so it looks as if it will return zero for LOB columns. If that's so, it
will definitely overestimate how many rows fit in maxMemoryPerTable
kilobytes if the rows contain LOBs.


DataTypeDescriptor.estimatedMemoryUsage() should be updated to include BLOB and
CLOB, and we should try to verify whether this theory is correct with a reproduction.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: How Much Memory for hash joins

2013-03-04 Thread Katherine Marsden

On 3/4/2013 5:21 AM, Knut Anders Hatlen wrote:

Mike Matrigali mikem_...@sbcglobal.net writes:


Also note that these are all estimates within the system.  As Knut
pointed out there are some known problems with the estimates.  And
even with fixes he has suggested, the code is probably just guessing
with things like blobs/clobs.

I haven't verified, but I think HashJoinStrategy uses
DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
the hash table will consume. That method has no case for BLOB or CLOB,
so it looks as if it will return zero for LOB columns. If that's so, it
will definitely overestimate how many rows fit in maxMemoryPerTable
kilobytes if the rows contain LOBs.


Thanks Knut for finding that, I filed DERBY-6096 for the issue.



Re: How Much Memory for hash joins

2013-03-01 Thread Katherine Marsden

On 2/28/2013 9:11 AM, Mike Matrigali wrote:

In BackingStoreHashTableFromScan I see.


   this.max_inmemory_rowcnt = max_inmemory_rowcnt;
 if( max_inmemory_rowcnt > 0)
     max_inmemory_size = Long.MAX_VALUE;
 else
     max_inmemory_size = Runtime.getRuntime().totalMemory()/100;




I have been reading the comments and trying to make sense of the logic 
and understand all that is happening with max_inmemory_size. I don't 
have a test case that goes through the else part of the condition above.


One thing I did notice is that Runtime.getRuntime().totalMemory() 
returns really different things if -Xms is set large, for example with  
nothing else going on with  -Xms1048m -Xmx1048m I get:


Total Memory:1098907648 Free Memory:1088915600

With just -Xmx1048m
Total Memory:4194304 Free Memory:2750304

Two questions
1) Might hash joins use an unexpectedly large amount of memory if -Xms 
is set large?  I know at the user site where this was being set, they 
were setting -Xms in the hopes of optimizing memory usage but I wonder 
if it actually increased the amount of memory used by hash joins.
2) Is there a test case that goes through the else clause above that I 
could use for my experimentation?


Thanks

Kathey


Re: How Much Memory for hash joins

2013-03-01 Thread Mike Matrigali

The goal was to use larger memory if it was available.  At the time Java
did not provide much access to this info, so only totalMemory() was
available to use.  I think this translated to the currently allocated memory.

So if you start the JVM with a lot of memory (even if you are not using
it), then Derby will try to use 1% of this larger value.  That is the
intent.

If you don't start with bigger memory, even if you have an -Xmx setting
that allows for bigger memory, we won't use it for the 1% calculation.
This was probably coded when 1.4.2 was the latest JVM, so there may be
better interfaces now, not sure.
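As a small, non-Derby illustration of the difference being discussed:
totalMemory() reports the currently allocated heap (which tracks -Xms at
startup), while maxMemory() reports the -Xmx ceiling. Running this once with
-Xms1048m -Xmx1048m and once with only -Xmx1048m reproduces the large gap
Kathey reported above.

public class HeapSizes {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("totalMemory (allocated now): " + rt.totalMemory());
        System.out.println("maxMemory   (-Xmx ceiling) : " + rt.maxMemory());
        // the 1% heuristic quoted in this thread, computed both ways
        System.out.println("1% of totalMemory: " + rt.totalMemory() / 100);
        System.out.println("1% of maxMemory  : " + rt.maxMemory() / 100);
    }
}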

Also note that these are all estimates within the system.  As Knut 
pointed out there are some known problems with the estimates.  And

even with fixes he has suggested, the code is probably just guessing
with things like blobs/clobs.


On 3/1/2013 9:28 AM, Katherine Marsden wrote:

On 2/28/2013 9:11 AM, Mike Matrigali wrote:

In BackingStoreHashTableFromScan I see.


   this.max_inmemory_rowcnt = max_inmemory_rowcnt;
 if( max_inmemory_rowcnt > 0)
     max_inmemory_size = Long.MAX_VALUE;
 else
     max_inmemory_size = Runtime.getRuntime().totalMemory()/100;




I have been reading the comments and trying to make sense of the logic
and understand all that is happening with max_inmemory_size. I don't
have a test case that goes through the else part of the condition above.

One thing I did notice is that Runtime.getRuntime().totalMemory()
returns really different things if -Xms is set large, for example with
nothing else going on with  -Xms1048m -Xmx1048m I get:

Total Memory:1098907648 Free Memory:1088915600

With just -Xmx1048m
Total Memory:4194304 Free Memory:2750304

Two questions
1) Might hash joins use an unexpectedly large amount of memory if -Xms
is set large?  I know at the user site where this was being set, they
were setting -Xms in the hopes of optimizing memory usage but I wonder
if it actually increased the amount of memory used by hash joins.

I don't think it is unexpected; the intent is to use more memory if more
memory is available. See above. I think the code would have used the -Xmx
value if, at the time, it could have gotten at that through JVM interfaces.

2) Is there a test case that goes through the else clause above that I
could use for my experimentation?

Thanks

Kathey





Re: How Much Memory for hash joins

2013-02-28 Thread Mike Matrigali

There are some good comments in
java/engine/org/apache/derby/iapi/store/access/BackingStoreHashTable.java which 
this class inherits from.


I am not sure what parameters are passed down usually to this class from
optimizer/execution.

I have always assumed that the intent of the option is per opened
table in store.  This is not user friendly at all, since the user does not
really know how this matches up to their query, which is likely why the
option was never made public for a zero-admin db.

Internally, one of these backing store things can be created for any
index or table access as part of a query.  A single query with joins
could have a number of these, depending on how many terms are in
the joins.


On 2/27/2013 12:47 PM, Katherine Marsden wrote:

I was wondering what is the default maximum memory for hash joins.

Looking at OptimizerFactoryImpl I see
protected int maxMemoryPerTable = 1048576 unless overridden by
derby.language.maxMemoryPerTable;
Is it actually intended per table or per active query?  I don't see the
property in the documentation.
If I set this to zero, will it turn off hash joins altogether?


In BackingStoreHashTableFromScan I see.

   this.max_inmemory_rowcnt = max_inmemory_rowcnt;
 if( max_inmemory_rowcnt > 0)
     max_inmemory_size = Long.MAX_VALUE;
 else
     max_inmemory_size = Runtime.getRuntime().totalMemory()/100;

So what is the intent and actual behavior of
derby.language.maxMemoryPerTable and its default, and do they match?
Are there other factors that go into setting the ceiling for memory
usage for hash joins?

Thanks

Kathey

P.S.
In actual practice, on a *very* old Derby version (Apache Derby
10.1.2.1 - (330608)) I am looking at an hprof dump which shows almost 2GB
of Blob and Clob objects that trace back to hash joins, and
BackingStoreHashTableFromScan objects that have values as below, with
max_inmemory_size as Long.MAX_VALUE as I would expect from the above code.

e.g.
instance of
org.apache.derby.impl.store.access.BackingStoreHashTableFromScan@0xa59f7d08
(63 bytes)
Class:
class org.apache.derby.impl.store.access.BackingStoreHashTableFromScan
Instance data members:
auxillary_runtimestats (L) : null
diskHashtable (L) : null
hash_table (L) : java.util.Hashtable@0xa59f7d48 (40 bytes)
inmemory_rowcnt (J) : 8686
keepAfterCommit (Z) : false
key_column_numbers (L) : [I@0xa33b6110 (12 bytes)
max_inmemory_rowcnt (J) : 58254
max_inmemory_size (J) : 9223372036854775807
open_scan (L) :
org.apache.derby.impl.store.access.heap.HeapScan@0xa59f7f10 (64 bytes)
remove_duplicates (Z) : false
row_source (L) : null
skipNullKeyColumns (Z) : true
tc (L) : org.apache.derby.impl.store.access.RAMTransaction@0x8f960428
(57 bytes)
References to this object:
org.apache.derby.impl.sql.execute.HashScanResultSet@0xa33b5fc8 (321
bytes) : field hashtable
Other Queries





Re: How Much Memory for hash joins

2013-02-28 Thread Knut Anders Hatlen
Mike Matrigali mikem_...@sbcglobal.net writes:

 I have always assumed that the intent of the option is per opened
 table in store.  This is not user friendly at all since user does
 not really know how this matches up to their query, which is likely
 why the option was never made public for a zero admin db.

There's a brief discussion in DERBY-4620 on how this setting could be
auto-tuned.

Also, the memory estimates we use for the hash tables are inaccurate
(they are too low), so maxMemoryPerTable is effectively higher than its
nominal value. There's a patch attached to the issue that fixes the
estimates, but I've been hesitant to check it in, since it changes a lot
of query plans in the regression tests (wisconsin & co).


How Much Memory for hash joins

2013-02-27 Thread Katherine Marsden

I was wondering what is the default maximum memory for hash joins.

Looking at OptimizerFactoryImpl I see
protected int maxMemoryPerTable = 1048576 unless overridden by 
derby.language.maxMemoryPerTable;
Is it actually intended per table or per active query?  I don't see the
property in the documentation.

If I set this to zero, will it turn off hash joins altogether?


In BackingStoreHashTableFromScan I see.

  this.max_inmemory_rowcnt = max_inmemory_rowcnt;
if( max_inmemory_rowcnt > 0)
    max_inmemory_size = Long.MAX_VALUE;
else
    max_inmemory_size = Runtime.getRuntime().totalMemory()/100;

So what is the intent and actual behavior of
derby.language.maxMemoryPerTable and its default, and do they match?
Are there other factors that go into setting the ceiling for memory
usage for hash joins?


Thanks

Kathey

P.S.
In actual practice, on a *very* old Derby version (Apache Derby
10.1.2.1 - (330608)) I am looking at an hprof dump which shows almost 2GB
of Blob and Clob objects that trace back to hash joins, and
BackingStoreHashTableFromScan objects that have values as below, with
max_inmemory_size as Long.MAX_VALUE as I would expect from the above code.


e.g.
instance of 
org.apache.derby.impl.store.access.BackingStoreHashTableFromScan@0xa59f7d08 
(63 bytes)

Class:
class org.apache.derby.impl.store.access.BackingStoreHashTableFromScan
Instance data members:
auxillary_runtimestats (L) : null
diskHashtable (L) : null
hash_table (L) : java.util.Hashtable@0xa59f7d48 (40 bytes)
inmemory_rowcnt (J) : 8686
keepAfterCommit (Z) : false
key_column_numbers (L) : [I@0xa33b6110 (12 bytes)
max_inmemory_rowcnt (J) : 58254
max_inmemory_size (J) : 9223372036854775807
open_scan (L) : 
org.apache.derby.impl.store.access.heap.HeapScan@0xa59f7f10 (64 bytes)

remove_duplicates (Z) : false
row_source (L) : null
skipNullKeyColumns (Z) : true
tc (L) : org.apache.derby.impl.store.access.RAMTransaction@0x8f960428 
(57 bytes)

References to this object:
org.apache.derby.impl.sql.execute.HashScanResultSet@0xa33b5fc8 (321 
bytes) : field hashtable

Other Queries


Re: Have Derby Network Server having an out of memory (PermGen)

2012-11-26 Thread Rick Hillegas

Thanks for starting this interesting thread, Brett. One comment inline...

On 11/23/12 4:39 AM, Bergquist, Brett wrote:

...

I know I have not take into consideration the table functions column name 
restriction in the initScan which I will eventually get to.   What would be 
useful I think would be to also possibly pass in the ORDER BY restriction and 
have the table function to be able to signal that it can return an ordered 
result set.   This might make it possible to optimize any sorting that might be 
required if the returned result set were known to be ordered.
I have logged an enhancement request to track this useful suggestion: 
https://issues.apache.org/jira/browse/DERBY-6004#comment-13503794.


Thanks,
-Rick


RE: Have Derby Network Server having an out of memory (PermGen)

2012-11-23 Thread Bergquist, Brett
public Timestamp getTimestamp(int columnIndex, Calendar cal) throws
SQLException {
return getResultSet().getTimestamp(columnIndex, cal);
}

public Timestamp getTimestamp(String columnLabel, Calendar cal) throws 
SQLException {
return getResultSet().getTimestamp(columnLabel, cal);
}

public URL getURL(int columnIndex) throws SQLException {
return getResultSet().getURL(columnIndex);
}

public URL getURL(String columnLabel) throws SQLException {
return getResultSet().getURL(columnLabel);
}

public void updateRef(int columnIndex, Ref x) throws SQLException {
getResultSet().updateRef(columnIndex, x);
}

public void updateRef(String columnLabel, Ref x) throws SQLException {
getResultSet().updateRef(columnLabel, x);
}

public void updateBlob(int columnIndex, Blob x) throws SQLException {
getResultSet().updateBlob(columnIndex, x);
}

public void updateBlob(String columnLabel, Blob x) throws SQLException {
getResultSet().updateBlob(columnLabel, x);
}

public void updateClob(int columnIndex, Clob x) throws SQLException {
getResultSet().updateClob(columnIndex, x);
}

public void updateClob(String columnLabel, Clob x) throws SQLException {
getResultSet().updateClob(columnLabel, x);
}

public void updateArray(int columnIndex, Array x) throws SQLException {
getResultSet().updateArray(columnIndex, x);
}

public void updateArray(String columnLabel, Array x) throws SQLException {
getResultSet().updateArray(columnLabel, x);
}
}


From: Knut Anders Hatlen [knut.hat...@oracle.com]
Sent: Thursday, November 22, 2012 4:57 AM
To: derby-dev@db.apache.org
Subject: Re: Have Derby Network Server having an out of memory (PermGen)

Mike Matrigali mikem_...@sbcglobal.net writes:

 On 11/21/2012 6:58 AM, Knut Anders Hatlen wrote:
 Bergquist, Brett bbergqu...@canoga.com writes:

 Yes, the statement cache size has been increased to 50K statements so
 that might be an issue. Maybe the PermGen space will need to be
 increased because of that. The documentation is not clear which type
 I am not an expert in this area, is there any case where we expect the
 re-execution of the same query to need to generate a different entry
 in the statement cache?

I think what's flooding the statement cache here is whatever gets
executed by the table function, which I understand is some dynamically
generated SQL statements.

This is also why I don't understand how changing from a view to a direct
table function call should change anything, as the top-level statement
should only have one entry in the cache, and the statements executed
inside the table function should be the same.

Two possible explanations:

1) Changing between view and direct call changes the plan picked by the
optimizer, so that the table function call one time ends up as the inner
table in a join, and another time as the outer table. This could change
the number of times the table function is called per query. If each call
to the table function generates truly unique SQL statements, calling it
more often will fill the cache quicker.

2) If it is a restricted table function, the actual
restriction/projection pushed down to the table function may vary
depending on which plan the optimizer picks. And this could affect what
kind of SQL is generated by the table function. Perhaps sometimes it
generates statements that are likely to be identical across invocations,
needing fewer entries in the cache, and other times it generates
statements that are less likely to be identical.

Following up on that last thought, if the queries generated by the table
function would be something like

  select * from t where x < N

where N varies between invocations, it's better for the statement cache
if a parameter marker is used, like

  select * from t where x < ?

rather than inlining the actual constant

  select * from t where x < 5
  select * from t where x < 42
  ...

Even though the table function itself doesn't execute the query more
than once, using parameter markers increases the likelihood of finding a
match in the statement cache.

Not sure if this affects Brett's table function. Just throwing out
ideas...

--
Knut Anders



Re: Have Derby Network Server having an out of memory (PermGen)

2012-11-22 Thread Knut Anders Hatlen
Mike Matrigali mikem_...@sbcglobal.net writes:

 On 11/21/2012 6:58 AM, Knut Anders Hatlen wrote:
 Bergquist, Brett bbergqu...@canoga.com writes:

 Yes, the statement cache size has been increased to 50K statements so
 that might be an issue. Maybe the PermGen space will need to be
 increased because of that. The documentation is not clear which type
 I am not an expert in this area, is there any case where we expect the
 re-execution of the same query to need to generate a different entry
 in the statement cache?

I think what's flooding the statement cache here is whatever gets
executed by the table function, which I understand is some dynamically
generated SQL statements.

This is also why I don't understand how changing from a view to a direct
table function call should change anything, as the top-level statement
should only have one entry in the cache, and the statements executed
inside the table function should be the same.

Two possible explanations:

1) Changing between view and direct call changes the plan picked by the
optimizer, so that the table function call one time ends up as the inner
table in a join, and another time as the outer table. This could change
the number of times the table function is called per query. If each call
to the table function generates truly unique SQL statements, calling it
more often will fill the cache quicker.

2) If it is a restricted table function, the actual
restriction/projection pushed down to the table function may vary
depending on which plan the optimizer picks. And this could affect what
kind of SQL is generated by the table function. Perhaps sometimes it
generates statements that are likely to be identical across invocations,
needing fewer entries in the cache, and other times it generates
statements that are less likely to be identical.

Following up on that last thought, if the queries generated by the table
function would be something like

  select * from t where x < N

where N varies between invocations, it's better for the statement cache
if a parameter marker is used, like

  select * from t where x < ?

rather than inlining the actual constant

  select * from t where x < 5
  select * from t where x < 42
  ...

Even though the table function itself doesn't execute the query more
than once, using parameter markers increases the likelihood of finding a
match in the statement cache.

Not sure if this affects Brett's table function. Just throwing out
ideas...

-- 
Knut Anders
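
As a hedged illustration of the point above (this is not code from Brett's
table function; the table t and column x are assumptions), here is the same
lookup issued with an inlined constant versus a parameter marker:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StatementCacheDemo {

    // cache-unfriendly: every distinct n produces distinct SQL text,
    // so each value compiles into its own entry in the statement cache
    static int countBelowInlined(Connection c, int n) throws SQLException {
        try (Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                     "select count(*) from t where x < " + n)) {
            rs.next();
            return rs.getInt(1);
        }
    }

    // cache-friendly: one SQL text regardless of n, so one cached plan is reused
    static int countBelowParam(Connection c, int n) throws SQLException {
        try (PreparedStatement ps =
                     c.prepareStatement("select count(*) from t where x < ?")) {
            ps.setInt(1, n);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}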


Re: Have Derby Network Server having an out of memory (PermGen)

2012-11-21 Thread Knut Anders Hatlen
Bergquist, Brett bbergqu...@canoga.com writes:

 I have a customer that is periodically having a problem and unknown to
 me, they have been accessing the Derby database using a query such as
 in the following and have been repeatedly experienced a server issue.
 I finally figured out it was an OutOfMemory error (PermGen). So I just
 wrote a little application that performs the same query over and over
 against a copy of the database and the Derby Network Server. In just a
 few hours, the Derby Network Server gave up the ghost with a
 OutOfMemory(PermGen) error.

 This is running against Derby 10.8.2.2. There is no other access to
 the database from any other process during this test.

 Note that the query returns no data as there are no records in the
 database that satisfy the query. Also, note that the table NPAResults
 is actually a View that looks like:


[...]


 Each time I run this sample, after a few hours, the OutOfMemory
 occurs. Any ideas on this will be greatly appreciated.

Hi Brett,

I don't see anything obvious that should cause problems in the code you
posted. I tried to run it myself, but it didn't seem to cause any leak
in my environment. Of course, I didn't run the exact same code, since I
don't know exactly what your table function does. (And I ran it with
-XX:MaxPermSize=16M since I didn't want to wait for hours to see the
result...)

If you manage to come up with a full repro that others could run, it
might be easier to see what's going on.

By the way, have you changed the statement cache size, or are you
running with default size? I'm asking because the size of the statement
cache will affect how soon generated classes can be garbage collected.

-- 
Knut Anders


RE: Have Derby Network Server having an out of memory (PermGen)

2012-11-21 Thread Bergquist, Brett
Yes, the statement cache size has been increased to 50K statements, so that
might be an issue.  Maybe the PermGen space will need to be increased because
of that.  The documentation is not clear which type of heap the statement
cache affects, however.   As a test, I am going to lower my statement cache
size to 100 statements and see what happens.   Thanks for the idea!

Some more info, however.   It is definitely related to the View/table function
mechanism (explained in my second email).   I just did a little more testing
and found that if I change the query to use the table function directly instead
of the View that is defined on the table function, then the loaded-class count
reported by VisualVM stays stable.   Any thoughts on why querying a View
defined on the table function might generate and hold onto classes, whereas
the same query with the View name replaced by the table function name does not
have this problem?

I much appreciate the feedback and thoughts.

Brett

-Original Message-
From: Knut Anders Hatlen [mailto:knut.hat...@oracle.com] 
Sent: Wednesday, November 21, 2012 8:28 AM
To: derby-dev@db.apache.org
Subject: Re: Have Derby Network Server having an out of memory (PermGen)

Bergquist, Brett bbergqu...@canoga.com writes:

 I have a customer that is periodically having a problem and unknown to 
 me, they have been accessing the Derby database using a query such as 
 in the following and have been repeatedly experienced a server issue.
 I finally figured out it was an OutOfMemory error (PermGen). So I just 
 wrote a little application that performs the same query over and over 
 against a copy of the database and the Derby Network Server. In just a 
 few hours, the Derby Network Server gave up the ghost with a
 OutOfMemory(PermGen) error.

 This is running against Derby 10.8.2.2. There is no other access to 
 the database from any other process during this test.

 Note that the query returns no data as there are no records in the 
 database that satisfy the query. Also, note that the table NPAResults 
 is actually a View that looks like:


[...]


 Each time I run this sample, after a few hours, the OutOfMemory 
 occurs. Any ideas on this will be greatly appreciated.

Hi Brett,

I don't see anything obvious that should cause problems in the code you posted. 
I tried to run it myself, but it didn't seem to cause any leak in my 
environment. Of course, I didn't run the exact same code, since I don't know 
exactly what your table function does. (And I ran it with -XX:MaxPermSize=16M 
since I didn't want to wait for hours to see the
result...)

If you manage to come up with a full repro that others could run, it might be 
easier to see what's going on.

By the way, have you changed the statement cache size, or are you running with 
default size? I'm asking because the size of the statement cache will affect 
how soon generated classes can be garbage collected.

--
Knut Anders




RE: Have Derby Network Server having an out of memory (PermGen)

2012-11-21 Thread Bergquist, Brett
I reset the statement cache size to 100 statements and set the MaxPermGen size 
to 256m and reran the test querying the View which uses the table function.  
Now when I run it, the loaded classes increase along with the PermGen, and then 
the loaded classes are unloaded and PermGen comes back down when it collects!   
 This is good!

So now I need to figure out what the MaxPermGen size needs to be for a larger 
statement cache.

-Original Message-
From: Bergquist, Brett [mailto:bbergqu...@canoga.com] 
Sent: Wednesday, November 21, 2012 8:40 AM
To: derby-dev@db.apache.org
Subject: RE: Have Derby Network Server having an out of memory (PermGen)

Yes, the statement cache size has been increased to 50K statements so that 
might be an issue.  Maybe the PermGen space will need to be increased because 
of that.  The documentation is not clear which type of heap that the statement 
cache would affect, however.   As a test, I am going to lower my statement 
cache size to 100 statements and see what happens.   Thanks for the idea!

Some more info however.   It is definitely related to the View/table function 
mechanism (explained in my second email).   I just did a little more testing 
and found that if I change the query to use the table function directly instead 
of using the View that is created that uses the table function, then the loaded 
classes reported by VisualVM stays stable.   Any thoughts on why querying on a 
View that is created that uses the table function might be generating and 
holding onto classes whereas using the same query with the View name replaced 
by the table function name does not have this problem?

I much appreciate the feedback and thoughts.

Brett

-Original Message-
From: Knut Anders Hatlen [mailto:knut.hat...@oracle.com]
Sent: Wednesday, November 21, 2012 8:28 AM
To: derby-dev@db.apache.org
Subject: Re: Have Derby Network Server having an out of memory (PermGen)

Bergquist, Brett bbergqu...@canoga.com writes:

 I have a customer that is periodically having a problem and unknown to 
 me, they have been accessing the Derby database using a query such as 
 in the following and have been repeatedly experienced a server issue.
 I finally figured out it was an OutOfMemory error (PermGen). So I just 
 wrote a little application that performs the same query over and over 
 against a copy of the database and the Derby Network Server. In just a 
 few hours, the Derby Network Server gave up the ghost with a
 OutOfMemory(PermGen) error.

 This is running against Derby 10.8.2.2. There is no other access to 
 the database from any other process during this test.

 Note that the query returns no data as there are no records in the 
 database that satisfy the query. Also, note that the table NPAResults 
 is actually a View that looks like:


[...]


 Each time I run this sample, after a few hours, the OutOfMemory 
 occurs. Any ideas on this will be greatly appreciated.

Hi Brett,

I don't see anything obvious that should cause problems in the code you posted. 
I tried to run it myself, but it didn't seem to cause any leak in my 
environment. Of course, I didn't run the exact same code, since I don't know 
exactly what your table function does. (And I ran it with -XX:MaxPermSize=16M 
since I didn't want to wait for hours to see the
result...)

If you manage to come up with a full repro that others could run, it might be 
easier to see what's going on.

By the way, have you changed the statement cache size, or are you running with 
default size? I'm asking because the size of the statement cache will affect 
how soon generated classes can be garbage collected.

--
Knut Anders






Re: Have Derby Network Server having an out of memory (PermGen)

2012-11-21 Thread Knut Anders Hatlen
Bergquist, Brett bbergqu...@canoga.com writes:

 Yes, the statement cache size has been increased to 50K statements so
 that might be an issue. Maybe the PermGen space will need to be
 increased because of that. The documentation is not clear which type
 of heap that the statement cache would affect, however. As a test, I
 am going to lower my statement cache size to 100 statements and see
 what happens. Thanks for the idea!

On the bright side, the current early-access builds of JDK 8 don't have
a separate PermGen area. So in the not too distant future one doesn't
have to spend time figuring out how to partition the heap space.

 Some more info however. It is definitely related to the View/table
 function mechanism (explained in my second email). I just did a little
 more testing and found that if I change the query to use the table
 function directly instead of using the View that is created that uses
 the table function, then the loaded classes reported by VisualVM stays
 stable. Any thoughts on why querying on a View that is created that
 uses the table function might be generating and holding onto classes
 whereas using the same query with the View name replaced by the table
 function name does not have this problem?

Nothing comes to mind. I think there are some transformations the query
optimizer isn't able to do on views, that it could do when working
directly on the tables, so the compiled plans may differ. But it
shouldn't affect how soon the generated classes can be reclaimed.

-- 
Knut Anders


Re: Have Derby Network Server having an out of memory (PermGen)

2012-11-21 Thread Mike Matrigali

On 11/21/2012 6:58 AM, Knut Anders Hatlen wrote:

Bergquist, Brett bbergqu...@canoga.com writes:


Yes, the statement cache size has been increased to 50K statements so
that might be an issue. Maybe the PermGen space will need to be
increased because of that. The documentation is not clear which type

I am not an expert in this area; is there any case where we expect the
re-execution of the same query to need to generate a different entry
in the statement cache?  If so, then any such query is likely to flood
the statement cache and make it useless.  If you have a complete test
case showing this, it would be worth filing a JIRA.

I have to say I was surprised at the 50K setting for the statement cache;
that kind of size was definitely not what we had in mind when it was
developed.  If the cache were working correctly, would your application
really generate 50K different queries that you expect to be executed
more than once?



[jira] [Created] (DERBY-5963) Memory leak when shutting down Derby system

2012-10-27 Thread Igor Sereda (JIRA)
Igor Sereda created DERBY-5963:
--

 Summary: Memory leak when shutting down Derby system
 Key: DERBY-5963
 URL: https://issues.apache.org/jira/browse/DERBY-5963
 Project: Derby
  Issue Type: Bug
  Components: Store
Affects Versions: 10.9.1.0, 10.5.3.0
 Environment: Embedded Derby
Windows 7
java version 1.6.0_31
Java(TM) SE Runtime Environment (build 1.6.0_31-b05)
Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

Reporter: Igor Sereda


I am using an embedded Derby on a server within OSGi environment, as a private 
library in my bundle. When the bundle is deactivated, I stop Derby database 
(with jdbc:derby:;shutdown=true;deregister=true URL)

But although otherwise the database is released, an instance of ContextManager
stays in memory due to a leaked reference in a ThreadLocal variable (from
ContextService, I presume). The instance of ContextManager is a big deal,
because it also holds the whole page cache in memory (40MB), and also, via its
class loader, holds my whole OSGi bundle too.

Please let me know if you need any information on reproducing this problem.

Thanks!
Igor

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (DERBY-5963) Memory leak when shutting down Derby system

2012-10-27 Thread Igor Sereda (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13485455#comment-13485455
 ] 

Igor Sereda commented on DERBY-5963:


Attached a screenshot of Yourkit profiler showing the problem



[jira] [Updated] (DERBY-5963) Memory leak when shutting down Derby system

2012-10-27 Thread Igor Sereda (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sereda updated DERBY-5963:
---

Attachment: yourkit.png



[jira] [Commented] (DERBY-5963) Memory leak when shutting down Derby system

2012-10-27 Thread Igor Sereda (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13485459#comment-13485459
 ] 

Igor Sereda commented on DERBY-5963:


Also, I found this in the comments in ServiceContext:

=
There are two cases we are trying to optimise.
- Typical JDBC client program where there a Connection is always executed using 
a single thread. In this case this variable will contain the Connection's 
context manager
- Typical application server pooled connection where a single thread may use a 
connection from a pool for the lifetime of the request. In this case this 
variable will contain a *WeakReference*.
=

I have the second case, and I don't see any WeakReferences being used.



[jira] [Updated] (DERBY-5963) Memory leak when shutting down Derby system

2012-10-27 Thread Igor Sereda (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sereda updated DERBY-5963:
---

Attachment: TestDerbyLeak.java



[jira] [Commented] (DERBY-5963) Memory leak when shutting down Derby system

2012-10-27 Thread Igor Sereda (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13485475#comment-13485475
 ] 

Igor Sereda commented on DERBY-5963:


More investigation showed that this problem happens if a connection is ever 
shared between more than one thread.

Attached please find demo code. I couldn't unit-test it because some internal
stuff is used. Use a profiler to see that the database is held in memory.

As a workaround, I will have to use reflection to clean up all active threads.



[jira] [Updated] (DERBY-5481) Unit tests fail on a derby closed iterator test with a Invalid memory access of location

2012-09-27 Thread Mamta A. Satoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mamta A. Satoor updated DERBY-5481:
---

Labels: derby_triage10_10  (was: )

 Unit tests fail on a derby closed iterator test with a Invalid memory access 
 of location
 --

 Key: DERBY-5481
 URL: https://issues.apache.org/jira/browse/DERBY-5481
 Project: Derby
  Issue Type: Bug
  Components: Miscellaneous
Affects Versions: 10.8.1.2
 Environment: Eclipse 3.7.0 on Mac OSX 10.7.2
Reporter: Gray Watson
Priority: Minor
  Labels: derby_triage10_10
 Attachments: derby.log


 I'm the lead author of ORMLite, a smallish ORM project that supports Derby 
 and some other JDBC and Android databases.  I'm getting a reproducible memory 
 fault during one of my Derby tests.  I've just ignored the test for now in my 
 code but I thought I'd report it.
 To check out the tree svn co 
 http://ormlite.svn.sourceforge.net/svnroot/ormlite/ormliteTest/trunk
 You'll need maven. The DerbyEmbeddedBaseDaoImplTest is the one that fails.  
 Not by itself unfortunately but running the com.j256.ormlite.dao package 
 which is also testing some other database types causes it to fail every time 
 for me.  It's the testCloseInIterator() method defined in the test base class 
 JdbcBaseDaoImplTest.  This test closes the underlying database connection in 
 the middle of an iterator loop to test exception handling.
 Sorry if this is just too obscure to be useful.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (DERBY-5481) Unit tests fail on a derby closed iterator test with a Invalid memory access of location

2012-09-27 Thread Mamta A. Satoor (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465249#comment-13465249
 ] 

Mamta A. Satoor commented on DERBY-5481:


It has been almost a year since any activity happened on this jira. Should we 
go ahead and close it until more information is available?



[jira] [Updated] (DERBY-3368) Threading issue with DependencyManager may cause in-memory dependencies to be lost.

2012-09-26 Thread Kathey Marsden (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-3368:
--

  Issue  fix info:   (was: High Value Fix)
Bug behavior facts: Wrong query result
Labels: derby_triage10_10 derby_triage10_5_2  (was: 
derby_triage10_5_2)

Taking this off of High Value Fix until we have a reproduction.  Adding wrong 
results, which could occur with this issue.


 Threading issue with DependencyManager may cause in-memory dependencies to be 
 lost.
 ---

 Key: DERBY-3368
 URL: https://issues.apache.org/jira/browse/DERBY-3368
 Project: Derby
  Issue Type: Bug
  Components: Services
Affects Versions: 10.2.2.0
Reporter: Daniel John Debrunner
Priority: Minor
  Labels: derby_triage10_10, derby_triage10_5_2

 When a thread compiles a language prepared statement P a set of in-memory 
 Dependency objects is created, e.g. if  P accesses table A
 Dependency {P depends on A}
 When this Dependency is added to the dependency manager, the duplicate will 
 not be added if an equivalent one (same provider and dependent) already exists.
 The StatementContext keeps track of these added Dependency objects so that, if 
 the compilation fails, they can be removed; the removal compares by the exact 
 same Dependency object (not by equivalence).
 If a thread T1 compiling P fails, then another thread may try to compile P 
 (same object). If this second thread T2 compiles successfully the following 
 could happen:
 1) T1 compiles P creates Dependency {P depends on A} in dependency manager
 2) T1 fails to compile, but does not yet execute its cleanup
 3) T2 compiles P successfully, attempts to add Dependency {P depends on A} to 
 the manager but it is a duplicate so T1's version is left and T2's is not 
 added.
 4) T1 completes its cleanup and removes Dependency {P depends on A}
 5) P no longer depends on A
 The concern is that the GRANT/REVOKE security system is based upon the 
 dependency manager, as is correctness for indexes (e.g. this could cause a 
 recompile to be missed for an INSERT into a table when an index is added).
 For this to actually happen there has to be a situation where one thread 
 (connection) can compile a statement that another one fails on (and be 
 compiling at near identical times). I haven't got a reproducible case yet, 
 but I can get two statements to be compiling the same statement plan (P). 
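 A toy sketch of the window described above, using hypothetical stand-in classes
 (these are not Derby's real DependencyManager or StatementContext), just to
 illustrate how duplicate suppression combined with failure cleanup can drop the
 dependency:

// Hypothetical illustration only - not Derby code.
import java.util.HashSet;
import java.util.Set;

class Dep {
    final String dependent, provider;
    Dep(String dependent, String provider) { this.dependent = dependent; this.provider = provider; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof Dep)) return false;
        Dep d = (Dep) o;
        return d.dependent.equals(dependent) && d.provider.equals(provider);   // equivalence
    }
    @Override public int hashCode() { return dependent.hashCode() ^ provider.hashCode(); }
}

public class LostDependencyDemo {
    public static void main(String[] args) {
        Set<Dep> manager = new HashSet<>();    // stand-in for the dependency manager
        Dep t1Dep = new Dep("P", "A");         // created by T1's (failing) compile
        Dep t2Dep = new Dep("P", "A");         // equivalent one, created by T2's good compile
        manager.add(t1Dep);                    // step 1: {P depends on A} registered
        manager.add(t2Dep);                    // step 3: suppressed as a duplicate
        manager.remove(t1Dep);                 // step 4: T1's cleanup removes the only entry
        System.out.println(manager.isEmpty()); // step 5: true - P no longer depends on A
    }
}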

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Garbage-collecting Derby classes, was: Can't remove derby from memory

2012-03-27 Thread Rick Hillegas
This discussion is taking place on the user list. It is my understanding 
that graceful engine shutdown is supposed to remove references to Derby 
classes, making all of the engine code eligible to be garbage-collected. 
At least, that is what I thought Lily implemented for 10.8.


In the discussion below, the user is expecting that garbage-collection 
will happen after shutting down a single database. I wouldn't expect 
that. But it got me to wondering how a user is supposed to shutdown the 
engine gracefully via the Derby DataSources. I know how to do this by 
passing a shutdown URL to DriverManager, but how do you do this via a 
DataSource?


Thanks,
-Rick

 Original Message 
Subject:Re: Can't remove derby from memory
Date:   Mon, 26 Mar 2012 18:06:24 -0700
From:   Bryan Pendleton bpendleton.de...@gmail.com
Reply-To:   Derby Discussion derby-u...@db.apache.org
To: Derby Discussion derby-u...@db.apache.org




 room.  I have noticed that no matter what I do, the ~10MB of memory that is
 taken when the database connect is initiated is held no matter what commands


Certainly sounds like the database isn't getting fully shut down.


 dynamDS.setShutdownDatabase("shutdown");


It's not clear to me that this does anything by itself. The docs say:


If set to the string "shutdown", this will cause the database to shutdown
when a java.sql.Connection object is obtained from the data source. E.g.,
if the data source is an XADataSource, a getXAConnection().getConnection()
is necessary to cause the database to shutdown.

This sounds like you have to get a final connection (and then close it) after
setting ShutdownDatabase.
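A minimal sketch of that sequence, assuming org.apache.derby.jdbc.EmbeddedDataSource
and a hypothetical database name; a clean single-database shutdown should surface as
an SQLException with SQLState 08006:

import java.sql.SQLException;

import org.apache.derby.jdbc.EmbeddedDataSource;

public class ShutdownOneDatabase {
    public static void main(String[] args) {
        EmbeddedDataSource ds = new EmbeddedDataSource();
        ds.setDatabaseName("myDB");              // hypothetical database name
        ds.setShutdownDatabase("shutdown");
        try {
            ds.getConnection();                  // obtaining the connection triggers the shutdown
        } catch (SQLException e) {
            if ("08006".equals(e.getSQLState())) {
                System.out.println("database shut down cleanly");
            } else {
                e.printStackTrace();
            }
        }
    }
}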

Did you try getting a connection after calling setShutdownDatabase?

thanks,

bryan




Re: Garbage-collecting Derby classes, was: Can't remove derby from memory

2012-03-27 Thread Knut Anders Hatlen
Rick Hillegas rick.hille...@oracle.com writes:

 This discussion is taking place on the user list. It is my
 understanding that graceful engine shutdown is supposed to remove
 references to Derby classes, making all of the engine code eligible to
 be garbage-collected. At least, that is what I thought Lily
 implemented for 10.8.

I believe that shutting down the engine only makes the Derby engine
classes eligible for garbage collection if there are no references to
the classloader in which the engine classes live. So I think the
classes will only be garbage collected if the driver was loaded in a
separate classloader. (There may also be references to connections,
statements, data sources, or other JDBC objects, in the user code that
prevent gc after shutdown.)

Instances of the engine classes (monitor, caches, etc), on the other
hand, should be eligible for garbage collection after engine shutdown,
even if the engine is not in a separate classloader.
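A rough sketch of that separate-classloader setup, with an illustrative jar path
and reflective instantiation so the application itself never links against the
engine classes:

// Sketch only: Derby is assumed NOT to be on the application classpath here,
// and the jar path is illustrative.
import java.net.URL;
import java.net.URLClassLoader;
import java.sql.Connection;

import javax.sql.DataSource;

public class IsolatedDerby {
    public static void main(String[] args) throws Exception {
        URLClassLoader derbyLoader = new URLClassLoader(
                new URL[] { new URL("file:///path/to/derby.jar") },
                IsolatedDerby.class.getClassLoader());

        // Instantiate the data source reflectively so this class never links
        // against the engine classes directly.
        Class<?> dsClass = Class.forName(
                "org.apache.derby.jdbc.EmbeddedDataSource", true, derbyLoader);
        DataSource ds = (DataSource) dsClass.getDeclaredConstructor().newInstance();
        dsClass.getMethod("setDatabaseName", String.class).invoke(ds, "myDB");

        try (Connection c = ds.getConnection()) {
            // ... work with the database ...
        }

        // After shutting down the engine, drop every reference to ds, dsClass and
        // derbyLoader; once nothing reachable pins the classloader, the engine
        // classes themselves become eligible for garbage collection.
        ds = null;
        dsClass = null;
        derbyLoader.close();
    }
}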

 In the discussion below, the user is expecting that garbage-collection
 will happen after shutting down a single database. I wouldn't expect
 that. But it got me to wondering how a user is supposed to shutdown
 the engine gracefully via the Derby DataSources. I know how to do this
 by passing a shutdown URL to DriverManager, but how do you do this via
 a DataSource?

DataSourceConnector.shutEngine() in the test framework (used for
shutting down the engine on CDC/FP) does this using an empty database
name.

  ds.setDatabaseName("");
  ds.setShutdownDatabase("shutdown");

Not sure if we have documented this approach.
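Spelled out as a tiny standalone example (EmbeddedDataSource assumed; a clean
full-engine shutdown should be reported as an SQLException with SQLState XJ015):

import java.sql.SQLException;

import org.apache.derby.jdbc.EmbeddedDataSource;

public class ShutdownEngine {
    public static void main(String[] args) {
        EmbeddedDataSource ds = new EmbeddedDataSource();
        ds.setDatabaseName("");                  // empty name targets the whole engine
        ds.setShutdownDatabase("shutdown");
        try {
            ds.getConnection();                  // triggers the engine shutdown
        } catch (SQLException e) {
            // XJ015 should be the "Derby system shutdown" state; anything else is a real error
            if (!"XJ015".equals(e.getSQLState())) {
                e.printStackTrace();
            }
        }
    }
}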

-- 
Knut Anders


[jira] [Resolved] (DERBY-3009) Out of memory error when creating a very large table

2012-03-16 Thread Kathey Marsden (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden resolved DERBY-3009.
---

   Resolution: Fixed
Fix Version/s: 10.7.1.4
   10.6.2.3
   10.5.3.2
 Assignee: Knut Anders Hatlen  (was: Kathey Marsden)

Completed merge back to 10.5. Assigning back to Knut and resolving.


 Out of memory error when creating a very large table
 

 Key: DERBY-3009
 URL: https://issues.apache.org/jira/browse/DERBY-3009
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.5.3.0
 Environment: Win XP Pro
Reporter: Nick Williamson
Assignee: Knut Anders Hatlen
  Labels: derby_triage10_5_2
 Fix For: 10.5.3.2, 10.6.2.3, 10.7.1.4, 10.8.1.2

 Attachments: DERBY-3009.zip, derby-3009-1a.diff, derby-3009-1b.diff, 
 derby-3009_10_5_diff.txt


 When creating an extremely large table (c.50 indexes, c.50 FK constraints), 
 IJ crashes with an out of memory error. The table can be created successfully 
 if it is done in stages, each one in a different IJ session.
 From Kristian Waagan:
 With default settings on my machine, I also get the OOME.
 A brief investigation revealed a few things:
   1) The OOME occurs during constraint additions (with ALTER TABLE ... 
 ADD CONSTRAINT). I could observe this by monitoring the heap usage.
   2) The complete script can be run by increasing the heap size. I tried with 
 256 MB, but the monitoring showed usage peaked at around 150 MB.
   3) The stack traces produced when the OOME occurs vary (as could be 
 expected).
   4) It is the Derby engine that produces the OOME, not ij (i.e. when I ran 
 with the network server, the server failed).
 I have not had time to examine the heap content, but I do believe there is a 
 bug in Derby. It seems some resource is not freed after use.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (DERBY-3009) Out of memory error when creating a very large table

2012-03-15 Thread Kathey Marsden (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden updated DERBY-3009:
--

Attachment: derby-3009_10_5_diff.txt

10.5 had to be merged manually. Attaching patch derby-3009_10_5_diff.txt

 Out of memory error when creating a very large table
 

 Key: DERBY-3009
 URL: https://issues.apache.org/jira/browse/DERBY-3009
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.5.3.0
 Environment: Win XP Pro
Reporter: Nick Williamson
Assignee: Kathey Marsden
  Labels: derby_triage10_5_2
 Fix For: 10.8.1.2

 Attachments: DERBY-3009.zip, derby-3009-1a.diff, derby-3009-1b.diff, 
 derby-3009_10_5_diff.txt


 When creating an extremely large table (c.50 indexes, c.50 FK constraints), 
 IJ crashes with an out of memory error. The table can be created successfully 
 if it is done in stages, each one in a different IJ session.
 From Kristian Waagan:
 With default settings on my machine, I also get the OOME.
 A brief investigation revealed a few things:
   1) The OOME occurs during constraint additions (with ALTER TABLE ... 
 ADD CONSTRAINT). I could observe this by monitoring the heap usage.
   2) The complete script can be run by increasing the heap size. I tried with 
 256 MB, but the monitoring showed usage peaked at around 150 MB.
   3) The stack traces produced when the OOME occurs vary (as could be 
 expected).
   4) It is the Derby engine that produces the OOME, not ij (i.e. when I ran 
 with the network server, the server failed).
 I have not had time to examine the heap content, but I do believe there is a 
 bug in Derby. It seems some resource is not freed after use.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (DERBY-3009) Out of memory error when creating a very large table

2012-03-14 Thread Kathey Marsden (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden reassigned DERBY-3009:
-

Assignee: Kathey Marsden  (was: Knut Anders Hatlen)

Assigning to myself for backport to 10.5


 Out of memory error when creating a very large table
 

 Key: DERBY-3009
 URL: https://issues.apache.org/jira/browse/DERBY-3009
 Project: Derby
  Issue Type: Bug
  Components: SQL
Affects Versions: 10.5.3.0
 Environment: Win XP Pro
Reporter: Nick Williamson
Assignee: Kathey Marsden
  Labels: derby_triage10_5_2
 Fix For: 10.8.1.2

 Attachments: DERBY-3009.zip, derby-3009-1a.diff, derby-3009-1b.diff


 When creating an extremely large table (c.50 indexes, c.50 FK constraints), 
 IJ crashes with an out of memory error. The table can be created successfully 
 if it is done in stages, each one in a different IJ session.
 From Kristian Waagan:
 With default settings on my machine, I also get the OOME.
 A brief investigation revealed a few things:
   1) The OOME occurs during constraint additions (with ALTER TABLE ... 
 ADD CONSTRAINT). I could observe this by monitoring the heap usage.
   2) The complete script can be run by increasing the heap size. I tried with 
 256 MB, but the monitoring showed usage peaked at around 150 MB.
   3) The stack traces produced when the OOME occurs vary (as could be 
 expected).
   4) It is the Derby engine that produces the OOME, not ij (i.e. when I ran 
 with the network server, the server failed).
 I have not had time to examine the heap content, but I do believe there is a 
 bug in Derby. It seems some resource is not freed after use.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (DERBY-5457) Memory is not freed after OutOfMemoryError, thus preventing Derby from recovering

2012-03-05 Thread Kathey Marsden (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden reopened DERBY-5457:
---


 Memory is not freed after OutOfMemoryError, thus preventing Derby from 
 recovering
 -

 Key: DERBY-5457
 URL: https://issues.apache.org/jira/browse/DERBY-5457
 Project: Derby
  Issue Type: Bug
  Components: Network Server
Affects Versions: 10.6.2.1
 Environment: Derby Server 10.6.2.1 on Windows 7 with Derby JDBC 
 Client 10.6.2.1 connections. Client uses OpenJPA as ORM Provider
Reporter: Dominik Stadler
Assignee: Kristian Waagan
  Labels: memory
 Fix For: 10.8.2.2, 10.9.0.0

 Attachments: GC Root of the object that keeps all the memory.jpg, One 
 instance of RAMTransaction keeps 700M.jpg


 After some uptime, my Derby Server goes OOM with the following errors:
 {quote}
 2011-10-03 10:35:21.002 GMT : Security manager installed using the Basic 
 server security policy.
 2011-10-03 10:35:23.295 GMT : Apache Derby Network Server - 10.6.2.1 - 
 (999685) started and ready to accept connections on port 11527
 Exception in thread "DRDAConnThread_12" java.lang.OutOfMemoryError: Java heap 
 space
 Exception in thread "NetworkServerThread_2" java.lang.OutOfMemoryError: GC 
 overhead limit exceeded
 Exception in thread "DRDAConnThread_3" java.lang.OutOfMemoryError: GC 
 overhead limit exceeded
 {quote}
 I suspect that I create transactions that are too big, so the OOM is not 
 really of concern to me here until I have investigated the actual cause. 
 However, I would expect the Derby Server to recover from this situation as 
 soon as the connection to the Client application is closed, but this does not 
 seem to happen. I have memory dumps from a point in time when the Client was 
 already closed and they show that there are still large instances of 
 RAMTransaction kept in memory. 
 I will attach screenshots from MAT which show the memory usage and the 
 objects keeping this in memory; please adjust the handling of OOM so that the 
 memory is freed here.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Closed] (DERBY-5457) Memory is not freed after OutOfMemoryError, thus preventing Derby from recovering

2012-03-05 Thread Kathey Marsden (Closed) (JIRA)

 [ 
https://issues.apache.org/jira/browse/DERBY-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kathey Marsden closed DERBY-5457.
-

Resolution: Duplicate

Closing as duplicate of DERBY-5271

 Memory is not freed after OutOfMemoryError, thus preventing Derby from 
 recovering
 -

 Key: DERBY-5457
 URL: https://issues.apache.org/jira/browse/DERBY-5457
 Project: Derby
  Issue Type: Bug
  Components: Network Server
Affects Versions: 10.6.2.1
 Environment: Derby Server 10.6.2.1 on Windows 7 with Derby JDBC 
 Client 10.6.2.1 connections. Client uses OpenJPA as ORM Provider
Reporter: Dominik Stadler
Assignee: Kristian Waagan
  Labels: memory
 Fix For: 10.9.0.0, 10.8.2.2

 Attachments: GC Root of the object that keeps all the memory.jpg, One 
 instance of RAMTransaction keeps 700M.jpg


 After some uptime, my Derby Server goes OOM with the following errors:
 {quote}
 2011-10-03 10:35:21.002 GMT : Security manager installed using the Basic 
 server security policy.
 2011-10-03 10:35:23.295 GMT : Apache Derby Network Server - 10.6.2.1 - 
 (999685) started and ready to accept connections on port 11527
 Exception in thread "DRDAConnThread_12" java.lang.OutOfMemoryError: Java heap 
 space
 Exception in thread "NetworkServerThread_2" java.lang.OutOfMemoryError: GC 
 overhead limit exceeded
 Exception in thread "DRDAConnThread_3" java.lang.OutOfMemoryError: GC 
 overhead limit exceeded
 {quote}
 I suspect that I create transactions that are too big, so the OOM is not 
 really of concern to me here until I have investigated the actual cause. 
 However, I would expect the Derby Server to recover from this situation as 
 soon as the connection to the Client application is closed, but this does not 
 seem to happen. I have memory dumps from a point in time when the Client was 
 already closed and they show that there are still large instances of 
 RAMTransaction kept in memory. 
 I will attach screenshots from MAT which show the memory usage and the 
 objects keeping this in memory; please adjust the handling of OOM so that the 
 memory is freed here.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Another question regarding memory and queries not using prepared statements

2012-03-01 Thread Bergquist, Brett
Looking at two heap dumps, one for yesterday and one for today, about 17 hours 
apart.  I notice that there is an increase in the classloaders of about 1150.   
 Somewhere I think I remember that derby creates classes on the fly for queries 
and loads them.  Is this true?

Related to the question is that I have a query that is created as a Statement, 
not a PreparedStatement.   I am not using a PreparedStatement as the tables 
involved in the query are dynamic.   A unique query is run about 4 times an 
hour.   Is this going to cause memory problems, permgen space in particular?

I could change the query to use a PreparedStatement but at the time I did not 
see any benefit as the query is going to be used exactly once.

Any thoughts would be appreciated.

Brett


Re: Another question regarding memory and queries not using prepared statements

2012-03-01 Thread Knut Anders Hatlen
Bergquist, Brett bbergqu...@canoga.com writes:

 Looking at two heap dumps, one for yesterday and one for today, about
 17 hours apart.  I notice that there is an increase in the
 classloaders of about 1150.Somewhere I think I remember that
 derby creates classes on the fly for queries and loads them.  Is this
 true?

Yes. And there will be a separate classloader instance for each
generated class.

 Related to the question is that I have a query that is created as a
 Statement, not a PreparedStatement.   I am not using a
 PreparedStatement as the tables involved in the query are dynamic.  
 A unique query is run about 4 times an hour.   Is this going to cause
 memory problems, permgen space in particular?

It shouldn't cause such problems. The query will stay in memory for a
while after completion, but it should be eligible for garbage collection
once it's no longer in the statement cache. The statement cache holds
100 statements by default, so the permgen footprint should be bounded.
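If the cache footprint ever does matter, its size can be lowered before the engine
boots. The property name below is my recollection of the relevant knob, so treat it
(and the database name) as an assumption:

// Sketch only: cap the statement cache before booting the embedded engine.
import java.sql.Connection;
import java.sql.DriverManager;

public class SmallStatementCache {
    public static void main(String[] args) throws Exception {
        // Assumed property: bounds the per-database statement (and generated-class) cache.
        System.setProperty("derby.language.statementCacheSize", "50");
        try (Connection c = DriverManager.getConnection("jdbc:derby:myDB;create=true")) {
            // statements compiled on this database share a cache of at most 50 plans
        }
    }
}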

 I could change the query to use a PreparedStatement but at the time I
 did not see any benefit as the query is going to be used exactly
 once.

The resource usage should be the same when you execute a
PreparedStatement once, so I don't think there's much benefit in
switching from Statement.

-- 
Knut Anders


RE: Another question regarding memory and queries not using prepared statements

2012-03-01 Thread Bergquist, Brett
Thanks for this response and the previous one.  Much knowledge being gained!

Brett





[jira] [Commented] (DERBY-5415) Memory leak in statement cache of PreparedStatement

2012-02-29 Thread Dag H. Wanvik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/DERBY-5415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13219704#comment-13219704
 ] 

Dag H. Wanvik commented on DERBY-5415:
--

 So although Derby has no OutOfMemError, our code certainly does (in 
 competition with Derby). 

According to what Knut saw, it would seem that garbage collection should make 
the space of all those (prepared) statements available again for the rest of 
the app, so I am curious as to why you see the OOM. Does it help to insert 
manual calls to the garbage collector? Could there be any dangling references 
to the statements that stop them from being gc'ed?
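For what it's worth, a dangling reference of the kind mentioned above often looks
like this in application code (purely illustrative names):

// Illustrative only: a collection that is never cleared pins every statement it
// has seen, keeping the underlying statement objects reachable even after
// close(), regardless of Derby's own statement cache.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

class StatementAudit {
    static final List<PreparedStatement> issued = new ArrayList<>();   // never cleared

    static PreparedStatement prepare(Connection con, String sql) throws SQLException {
        PreparedStatement ps = con.prepareStatement(sql);
        issued.add(ps);   // dangling reference: survives ps.close()
        return ps;
    }
}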
 

 Memory leak in statement cache of PreparedStatement
 ---

 Key: DERBY-5415
 URL: https://issues.apache.org/jira/browse/DERBY-5415
 Project: Derby
  Issue Type: Bug
  Components: JDBC, Services
Affects Versions: 10.5.3.0, 10.7.1.1, 10.8.1.2
 Environment: Linux, java 1.6.0_27-b07
Reporter: Robert Hoffmann
Priority: Minor
  Labels: derby_triage10_9

 Hi,
 I) Description
 When making thousands of simple queries to one table using PreparedStatement, 
 I have noticed quickly increasing memory usage (hundreds of MB within a few 
 dozen seconds): CASE A.
 I found that memory usage is NORMAL when I keep the PreparedStatement OPEN 
 for all queries (CASE B).
 CASE A (Closing and preparing statement - leaking):
 
 while (true) {
     PreparedStatement ps = con.prepareStatement("SELECT * from t where a=?");
     ps.setInt(1, r);
     ResultSet rs = ps.executeQuery();
     while (rs.next()) {
         rs.getInt("b");
     }
     rs.close();
     ps.close();
 }
 
 CASE B (Keep prepared statement open - steady memory):
 
 PreparedStatement ps = con.prepareStatement("SELECT * from t where a=?");
 while (true) {
     ps.setInt(1, r);
     ResultSet rs = ps.executeQuery();
     while (rs.next()) {
         rs.getInt("b");
     }
     rs.close();
     // keep open: ps.close(); // close later
 }
 
 II) Reproducibility and heap histogram
 I can easily reproduce this problem in our production environment. And the 
 heap histograms of the two cases are very distinct:
 CASE A:
 num #instances #bytes  class name
 --
1:   1133492   57289984  [Ljava.lang.Object;
2:   1035688   53548872  [C
3:249501   33051904  [I
4:152208   21917952  
 org.apache.derby.impl.jdbc.EmbedPreparedStatement40
5: 59773   20561912  
 org.apache.derby.impl.sql.execute.BulkTableScanResultSet
6:750585   18014040  java.util.ArrayList
7:674840   16196160  java.lang.String
8:989684   15834944  org.apache.derby.iapi.types.SQLInteger
9:391939   15677560  org.apache.derby.impl.sql.GenericParameter
   10:538700   14375272  
 [Lorg.apache.derby.iapi.types.DataValueDescriptor;
   11: 59775   13389600  
 org.apache.derby.impl.sql.execute.IndexRowToBaseRowResultSet
   12: 59775   12433200  
 org.apache.derby.impl.sql.execute.ProjectRestrictResultSet
   13: 597759085800  
 org.apache.derby.impl.store.access.btree.index.B2IForwardScan
   14:1793258607600  
 org.apache.derby.impl.store.raw.data.BaseContainerHandle
   15:3517218441304  java.util.HashMap$Entry
   16:2391177651744  java.util.HashMap$KeyIterator
   17: 597756694800  
 org.apache.derby.impl.jdbc.EmbedResultSet40
   18:2391195738856  
 org.apache.derby.impl.store.access.heap.HeapRowLocation
   19:1793255738400  
 org.apache.derby.impl.store.access.conglomerate.OpenConglomerateScratchSpace
   20:1195505738400  
 org.apache.derby.impl.store.access.heap.OpenHeap
   21:1195485738240  
 [[Lorg.apache.derby.iapi.types.DataValueDescriptor;
 ...
 CASE B:
 num #instances #bytes  class name
 --
1:2241869471600  [C
2: 210308223200  [I
3:1050205553016  [Ljava.lang.Object;
4: 436504931368  constMethodKlass
5:2011574827768  java.lang.String
6:1744744187376  java.util.HashMap$Entry
7: 436503846512  methodKlass
8:  76543317816  [B
9: 656332663504  symbolKlass
   10: 161432481304  [Ljava.util.HashMap$Entry;
   11:  34422056408  constantPoolKlass
   12: 792901902960  java.util.ArrayList
   13:  34421554272  instanceKlassKlass
   14: 455961459072  
 org.apache.derby.impl.store.raw.data.StoredRecordHeader
   15
