Thanks for the update, Rick!
--David
From: Rick Hillegas
Date: Sunday, 28 January 2024 at 14:32
To: derby-dev@db.apache.org , David Delabassee
Subject: [External] : Re: JDK 22 RDP2 & Deprecate sun.misc.Unsafe Memory-Access
Methods…
Thanks, David. Derby found no problems with build 22-e
With Release Candidates now less than 2 weeks away (Feb. 8th) [2], it is time
to shift our attention to JDK 23.
After multiple rounds of incubation and preview, the Foreign Function & Memory
API is becoming standard and permanent in JDK 22. If we put its 'Function'
angle aside, this API also offers a standard and secure way to access off-heap
memory. And that brings us to the heads-up below: 'Deprecate the memory-access
methods in sun.misc.Unsafe for removal in a future release'.
What's the definition of "does not work successfully"?
I'm not aware of such a limitation, but I imagine some classloading issues can
break
Thanks Kristian!
I realize the question was pretty vague.
Here's what little context there was:
In a separate forum, someone has claimed that multiple
connections to the same database from the same hosting
JVM does not work successfully if the database is an
in-memory database (uses jdbc:derby:memory:whatever).
Is this true?
If it is true, is it documented anywhere?
I didn't believe
> In a separate forum, someone has claimed that multiple
> connections to the same database from the same hosting
> JVM does not work successfully if the database is an
> in-memory database (uses jdbc:derby:memory:whatever).
>
> Is this true?
>
> If it is true, is it documented anywhere?
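For anyone who wants to check the claim, here is a minimal sketch (an assumption-laden illustration, not a definitive test: it assumes derby.jar with the embedded driver is on the classpath, and the database name "demo" and the table are made up) that opens two simultaneous connections to the same in-memory database from one JVM:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TwoMemoryConnections {
    // Builds an in-memory Derby JDBC URL; the "memory" subsubprotocol
    // keeps the database entirely on the heap.
    static String memoryUrl(String dbName, String attributes) {
        return "jdbc:derby:memory:" + dbName + attributes;
    }

    public static void main(String[] args) throws Exception {
        // First connection creates the database.
        try (Connection c1 = DriverManager.getConnection(memoryUrl("demo", ";create=true"));
             // Second connection from the same JVM attaches to the same database.
             Connection c2 = DriverManager.getConnection(memoryUrl("demo", ""))) {
            try (Statement s = c1.createStatement()) {
                s.execute("CREATE TABLE t(i INT)");
                s.execute("INSERT INTO t VALUES 1");
            }
            // If both connections share the database, c2 sees the row c1 inserted.
            try (Statement s = c2.createStatement();
                 ResultSet rs = s.executeQuery("SELECT i FROM t")) {
                while (rs.next()) {
                    System.out.println("c2 sees: " + rs.getInt(1));
                }
            }
        }
    }
}
```

If the second connection sees the row inserted through the first, multiple connections to one in-memory database are working in that environment.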
On 5/19/2015 11:00 AM, Bergquist, Brett wrote:
I found how to export to a text file. So here is one of the areas that the
leak detection pointed out:
'- buf, lock java.io.StringWriter @ 0xfffd86693928 | 48 | 706,545,952
'- stringWriter
I am having an out of memory condition on Derby 10.9.2.0 in our production
environment. Derby is given 8G maximum heap but I am able to get a heap dump
periodically and analyze it via Eclipse Memory Analyzer.
I see a couple of strange things and was wondering if I can attach a
screenshot.
From: Bergquist, Brett [mailto:bbergqu...@canoga.com]
Sent: Tuesday, May 19, 2015 1:39 PM
To: derby-dev@db.apache.org
Subject: Having an out of memory condition on Derby 10.9.2.0
I am having
Osric Wilkinson created DERBY-6799:
--
Summary: Suggested update to in-memory database docs
Key: DERBY-6799
URL: https://issues.apache.org/jira/browse/DERBY-6799
Project: Derby
Issue Type
What makes you say that the excess memory consumption is caused by
Derby? Do you have a memory profile showing that Derby consumes the heap?
On 04/13/2015 02:08 PM, rohitsonawat wrote:
While inserting large data record content (say 450 KB) using CLOB, it
is throwing an Out of Memory Error. This
While inserting large data record content (say 450 KB) using CLOB, it is
throwing an Out of Memory Error. This error occurs after inserting the data
content about 200 times. Below is the stack trace of it.
DBAppender::updateParameters()::Data.Length()::432164 Exception in thread
AppointmentFileListener
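A common cause of OOM with repeated large CLOB inserts is materializing each value as a full String in the statement. A hedged sketch of streaming the value instead (the table "docs" and its "body" column are made up for illustration; assumes the embedded driver is on the classpath):

```java
import java.io.Reader;
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Arrays;

public class ClobStreamInsert {
    // Builds a test payload of the given size; stands in for real document data.
    static String payload(int chars) {
        char[] buf = new char[chars];
        Arrays.fill(buf, 'x');
        return new String(buf);
    }

    public static void main(String[] args) throws Exception {
        String data = payload(450 * 1024); // ~450 KB, as in the report
        try (Connection c = DriverManager.getConnection("jdbc:derby:memory:demo;create=true");
             PreparedStatement ps = c.prepareStatement("INSERT INTO docs(body) VALUES (?)")) {
            // setCharacterStream lets the driver read the value incrementally,
            // instead of binding one large String per insert.
            Reader r = new StringReader(data);
            ps.setCharacterStream(1, r, data.length());
            ps.executeUpdate();
        }
    }
}
```

Reusing one PreparedStatement across the ~200 inserts (rather than building new statements) also helps keep the footprint flat.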
Closing as cannot reproduce.
Backup error when in-memory with jar
Key: DERBY-6271
URL: https://issues.apache.org/jira/browse/DERBY-6271
Project: Derby
Issue Type: Bug
Components: Store
Affects
[
https://issues.apache.org/jira/browse/DERBY-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mike Matrigali updated DERBY-6271:
--
Labels: derby_triage10_12 (was: )
10.12 bug triage effort.
Backup error when in-memory
Memory leak when shutting down Derby system
---
Key: DERBY-5963
URL: https://issues.apache.org/jira/browse/DERBY-5963
Project: Derby
Issue Type: Bug
Components: Store
Affects
10.10.2.1
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
--
Key: DERBY-6662
URL: https://issues.apache.org/jira/browse/DERBY-6662
Project: Derby
1628101 from [~myrna] in branch 'code/branches/10.10'
[ https://svn.apache.org/r1628101 ]
DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory
databases
preventing the optional tools test case from running with JDBC versions lower
than 4
DatabaseMetaData.usesLocalFiles() returns true
1628008 from [~myrna] in branch 'code/trunk'
[ https://svn.apache.org/r1628008 ]
DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory
databases
changing the in-memory databasename used in the test.
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
1628009 from [~myrna] in branch 'code/branches/10.11'
[ https://svn.apache.org/r1628009 ]
DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory
databases
merge of revision 1628008 from trunk to change the name of the db in the
test.
DatabaseMetaData.usesLocalFiles() returns
1628025 from [~myrna] in branch 'code/branches/10.10'
[ https://svn.apache.org/r1628025 ]
DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory
databases
backport of revision 1627895 and revision 1628009 from the 10.11 branch;
making the affected methods return false
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
--
Key: DERBY-6662
URL: https://issues.apache.org/jira/browse/DERBY-6662
Project: Derby
Issue Type: Bug
Components: JDBC
Affects
optional tool, as that's
what was used in the repro description.
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
--
Key: DERBY-6662
URL: https://issues.apache.org/jira/browse
1627851 from [~myrna] in branch 'code/trunk'
[ https://svn.apache.org/r1627851 ]
DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory
databases
Adding a test case using the metadata optional tool.
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
1627895 from [~myrna] in branch 'code/branches/10.11'
[ https://svn.apache.org/r1627895 ]
DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory
databases
merge of revision 1627671 and 1627851 from trunk
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
--
Key: DERBY-6662
URL: https://issues.apache.org/jira/browse/DERBY-6662
Project: Derby
Issue Type: Bug
Components: JDBC
Affects Versions
Added the new test (Derby6662Test) to the memorydb._Suite.
I ran the memorydb suite and the test passed in the environment with the fix,
but failed without. I will commit this shortly.
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
1627671 from [~myrna] in branch 'code/trunk'
[ https://svn.apache.org/r1627671 ]
DERBY-6662; DatabaseMetaData.usesLocalFiles() returns true for in-memory
databases
Making the usesLocalFiles and usesLocalFilePerTable methods return false if
it's a memory database, and adding a test
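The logic of the fix can be illustrated with a toy check (this is not Derby's actual implementation, just a sketch of the idea: a database using the memory: subsubprotocol keeps no local files, so the metadata methods should report false):

```java
public class UsesLocalFilesSketch {
    // Toy illustration: an in-memory Derby database lives entirely on the
    // heap, so DatabaseMetaData should report that no local files are used.
    static boolean usesLocalFiles(String jdbcUrl) {
        return !jdbcUrl.startsWith("jdbc:derby:memory:");
    }

    public static void main(String[] args) {
        System.out.println(usesLocalFiles("jdbc:derby:memory:demo")); // false
        System.out.println(usesLocalFiles("jdbc:derby:mydb"));        // true
    }
}
```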
Rick Hillegas created DERBY-6662:
Summary: DatabaseMetaData.usesLocalFiles() returns true for
in-memory databases
Key: DERBY-6662
URL: https://issues.apache.org/jira/browse/DERBY-6662
Project: Derby
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
--
Key: DERBY-6662
URL: https://issues.apache.org/jira/browse/DERBY-6662
Project: Derby
Issue Type: Bug
Components: JDBC
[
https://issues.apache.org/jira/browse/DERBY-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rick Hillegas updated DERBY-6662:
-
Description:
DatabaseMetaData.usesLocalFiles() returns true for in-memory databases
OutOfMemoryError with Clob or Blob hash join:
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
Hi Rick,
Thanks for responding to my email. You are correct about the differences in
the legal filename length limit on various file systems, and that is why it
would be nice to write the test with an in-memory URL, so we are not limited
by the differences between file systems. In order to get around
Hi Mamta,
Some comments inline...
On 1/31/14 3:18 PM, Mamta Satoor wrote:
Hi,
I am not familiar with Derby's in-memory db implementation (accessed
through the JDBC URL jdbc:derby:memory:... ), but I thought there would not
be any file system access for such a db. But when I tried a long
dbname
Hi,
I am not familiar with Derby's in-memory db implementation (accessed
through the JDBC URL jdbc:derby:memory:... ), but I thought there would not be
any file system access for such a db. But when I tried a long dbname with
such a URL, I got the exception (the complete stack trace is at the bottom
Rick Hillegas created DERBY-6449:
Summary: Analyze and possible correct the suspicious addition of
in-memory dependencies during statement execution
Key: DERBY-6449
URL: https://issues.apache.org/jira/browse
[
https://issues.apache.org/jira/browse/DERBY-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rick Hillegas updated DERBY-6449:
-
Summary: Analyze and possibly correct the suspicious addition of in-memory
dependencies during
[
https://issues.apache.org/jira/browse/DERBY-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mike Matrigali updated DERBY-6271:
--
Component/s: Store
Backup error when in-memory with jar
, could you be out of file space?
d) Did you see any stack trace in derby.log? If so, could you post it?
Backup error when in-memory with jar
Key: DERBY-6271
URL: https://issues.apache.org/jira/browse/DERBY
objects rather than as compiled byte code,
reduces memory usage and generated class size.
---
Key: DERBY-1699
URL: https://issues.apache.org/jira
Albert created DERBY-6271:
-
Summary: Backup error when in-memory with jar
Key: DERBY-6271
URL: https://issues.apache.org/jira/browse/DERBY-6271
Project: Derby
Issue Type: Bug
Affects Versions
[
https://issues.apache.org/jira/browse/DERBY-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rick Hillegas closed DERBY-5057.
Out-of-memory error in istat tests
--
Key
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
in
performance for some applications that expect the zero estimate.
OutOfMemoryError with Clob or Blob hash join:
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
OutOfMemoryError with Clob or Blob hash join:
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
in the tests so that they
only run System.gc() if the memory statistics are actually going to be printed.
(The call to gc() doesn't seem to slow down the test when it's run separately,
but I suppose it could take longer if it runs as part of a larger test suite
and there's more data on the heap
\iapi\types\DataTypeDescriptor.java
Sending java\testing\org\apache\derbyTesting\functionTests\tests\memory\BlobMemTest.java
Sending java\testing\org\apache\derbyTesting\functionTests\tests\memory\ClobMemTest.java
Transmitting file data ...
Committed revision 1464247.
URL: http
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
OutOfMemoryError with Clob or Blob hash join:
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
on Knut's repro. Tests
are in progress, please review.
OutOfMemoryError with Clob or Blob hash join:
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
[
https://issues.apache.org/jira/browse/DERBY-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620147#comment-13620147
]
Kathey Marsden commented on DERBY-6096:
---
Suites.All, derbyall, and the memory suite
change to me. Some day we probably should just raise the default internal
maxMemoryPerTable on a major release boundary to reflect the increased default
memory for most users, or maybe come up with a better zero-admin auto config
for it. 1 MB seems pretty small.
OutOfMemoryError
underestimate memory usage for those types at zero
-
Key: DERBY-6096
URL: https
good to me. Just one minor comment: in the tests, should we initialize the
data to be inserted into the clob and blob data
types?
OutOfMemoryError with Clob or Blob hash join:
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory
OutOfMemoryError with Clob or Blob hash join:
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
memory for the query, and for some, like your repro, that will be good. But
for others that did not see errors, their queries may run slower after the
fix, when we use less memory. This may especially be a concern if the fix is
to be backported.
OutOfMemoryError with Clob or Blob
underestimate memory usage for those types at zero
-
Key: DERBY-6096
URL: https
where I was on trying to get a reproduction for the memory usage with Clob
hash joins. I created this fixture in memory.ClobMemTest. At one point I was
getting an OOM on the query if derby.language.maxMemoryPerTable wasn't set,
running with -Xmx64M, but then I started cleaning up and it no longer occurs
with smaller LOBs. LONG_CLOB_LENGTH is 1800, which
means the SQLClob objects inserted into BackingStoreHashtable aren't
materialized and don't take up that much space. Using a larger number of
smaller LOBs (so small that they don't overflow to another page) should
increase the memory footprint
in BackingStoreHashtable
during a join when I run it with -Xmx64M.
The program inserts 1500 32KB BLOBs into a table and joins the table with
itself.
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero (was:
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero)
OutOfMemoryError with Clob or Blob hash
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero (was:
OutOfMemoryError with Clob or Blob hash join:
DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so
would underestimate memory usage for those types at zero)
no case for BLOB or CLOB so
would underestimate memory usage for those types at zero
-
Key: DERBY-6096
One thing I did notice is that Runtime.getRuntime().totalMemory()
returns really different things if -Xms is set large, for example with
nothing else going on with -Xms1048m -Xmx1048m I get:
Total Memory:1098907648 Free Memory:1088915600
With just -Xmx1048m
Total Memory:4194304 Free Memory
verified, but I think HashJoinStrategy uses
DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
the hash table will consume. That method has no case for BLOB or CLOB,
so it looks as if it will return zero for LOB columns. If that's so, it
will definitely overestimate how many rows
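The consequence of a zero per-row estimate can be sketched with a little arithmetic (a toy model, not Derby's actual optimizer code): dividing the memory budget by the estimated bytes per row makes a LOB-bearing hash table look almost free.

```java
public class RowBudgetSketch {
    // Toy version of a capacity estimate: how many rows fit in the budget,
    // given an estimated per-row size. A zero estimate (as for BLOB/CLOB
    // before the fix) is clamped to 1, which still wildly overestimates
    // capacity when real rows are tens of kilobytes each.
    static long rowsThatFit(long budgetBytes, long estimatedBytesPerRow) {
        return budgetBytes / Math.max(1, estimatedBytesPerRow);
    }

    public static void main(String[] args) {
        long budget = 1_048_576; // the 1 MB maxMemoryPerTable default
        System.out.println(rowsThatFit(budget, 0));      // 1048576 rows "fit"
        System.out.println(rowsThatFit(budget, 32_768)); // 32 rows actually fit
    }
}
```

With 32 KB BLOBs, the zero estimate is off by a factor of 32768, which matches why the BackingStoreHashtable blows the heap.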
Kathey Marsden created DERBY-6096:
-
Summary: DataTypeDescriptor.estimatedMemoryUsage() has no case
for BLOB or CLOB so would underestimate memory usage for those types at zero
Key: DERBY-6096
URL: https
guessing
with things like blobs/clobs.
I haven't verified, but I think HashJoinStrategy uses
DataTypeDescriptor.estimatedMemoryUsage() to estimate how much memory
the hash table will consume. That method has no case for BLOB or CLOB,
so it looks as if it will return zero for LOB columns. If that's so
Runtime.getRuntime().totalMemory()
returns really different things if -Xms is set large, for example with
nothing else going on with -Xms1048m -Xmx1048m I get:
Total Memory:1098907648 Free Memory:1088915600
With just -Xmx1048m
Total Memory:4194304 Free Memory:2750304
Two questions
1) Might hash joins use
The goal was to use larger memory if it was available. At the time Java
did not provide much access to this info, so only totalMemory() was
available to use. I think this translated to current allocated memory.
So if you start the jvm with a lot of memory (even if you are not using
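The difference observed above is easy to reproduce with a small probe: totalMemory() reports the currently allocated heap (which tracks -Xms), while maxMemory() reports the -Xmx ceiling.

```java
public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory(): heap currently allocated by the JVM (grows from -Xms).
        // maxMemory():   the ceiling the heap may grow to (roughly -Xmx).
        // freeMemory():  unused space within the currently allocated heap.
        System.out.println("Total: " + rt.totalMemory());
        System.out.println("Max:   " + rt.maxMemory());
        System.out.println("Free:  " + rt.freeMemory());
    }
}
```

Run once with -Xms1048m -Xmx1048m and once with only -Xmx1048m to see totalMemory() swing while maxMemory() stays put.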
of query. A single query with joins
could have a number of these depending on how many terms are in
the joins.
On 2/27/2013 12:47 PM, Katherine Marsden wrote:
I was wondering what is the default maximum memory for hash joins.
Looking at OptimizerFactoryImpl I see
protected int maxMemoryPerTable
for a zero admin db.
There's a brief discussion in DERBY-4620 on how this setting could be
auto-tuned.
Also, the memory estimates we use for the hash tables are inaccurate
(they are too low), so maxMemoryPerTable is effectively higher than its
nominal value. There's a patch attached to the issue that fixes
I was wondering what is the default maximum memory for hash joins.
Looking at OptimizerFactoryImpl I see
protected int maxMemoryPerTable = 1048576 unless overridden by
derby.language.maxMemoryPerTable;
Is it actually intended to be per table or per active query? I don't see the
property
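For completeness, the property has to be set before the Derby engine boots; a sketch (the value 4096 is arbitrary, and it is believed to be in kilobytes — check the Tuning Guide before relying on that):

```java
public class MaxMemoryPerTableDemo {
    public static void main(String[] args) {
        // Must be set before the engine boots; believed to be in KB
        // (so 1024 would correspond to the 1 MB default discussed above).
        System.setProperty("derby.language.maxMemoryPerTable", "4096");
        // Loading the embedded driver afterwards picks up the setting
        // (assumes derby.jar is on the classpath):
        // Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        System.out.println(System.getProperty("derby.language.maxMemoryPerTable"));
    }
}
```

The same property can also go in derby.properties or on the command line as -Dderby.language.maxMemoryPerTable=4096.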
Thanks for starting this interesting thread, Brett. One comment inline...
On 11/23/12 4:39 AM, Bergquist, Brett wrote:
...
I know I have not taken into consideration the table function's column name
restriction in the initScan, which I will eventually get to. What would be
useful, I think
From: Knut Anders Hatlen [knut.hat...@oracle.com]
Sent: Thursday, November 22, 2012 4:57 AM
To: derby-dev@db.apache.org
Subject: Re: Have Derby Network Server having an out of memory (PermGen)
Mike Matrigali mikem_...@sbcglobal.net writes:
On 11/21/2012 6:58 AM, Knut
Mike Matrigali mikem_...@sbcglobal.net writes:
On 11/21/2012 6:58 AM, Knut Anders Hatlen wrote:
Bergquist, Brett bbergqu...@canoga.com writes:
Yes, the statement cache size has been increased to 50K statements so
that might be an issue. Maybe the PermGen space will need to be
increased
Bergquist, Brett bbergqu...@canoga.com writes:
I have a customer that is periodically having a problem and unknown to
me, they have been accessing the Derby database using a query such as
in the following and have been repeatedly experienced a server issue.
I finally figured out it was an
@db.apache.org
Subject: Re: Have Derby Network Server having an out of memory (PermGen)
Bergquist, Brett bbergqu...@canoga.com writes:
I have a customer that is periodically having a problem and unknown to
me, they have been accessing the Derby database using a query such as
in the following and have
: Have Derby Network Server having an out of memory (PermGen)
Yes, the statement cache size has been increased to 50K statements so that
might be an issue. Maybe the PermGen space will need to be increased because
of that. The documentation is not clear which type of heap that the statement
Bergquist, Brett bbergqu...@canoga.com writes:
Yes, the statement cache size has been increased to 50K statements so
that might be an issue. Maybe the PermGen space will need to be
increased because of that. The documentation is not clear which type
of heap that the statement cache would
On 11/21/2012 6:58 AM, Knut Anders Hatlen wrote:
Bergquist, Brett bbergqu...@canoga.com writes:
Yes, the statement cache size has been increased to 50K statements so
that might be an issue. Maybe the PermGen space will need to be
increased because of that. The documentation is not clear which
Igor Sereda created DERBY-5963:
--
Summary: Memory leak when shutting down Derby system
Key: DERBY-5963
URL: https://issues.apache.org/jira/browse/DERBY-5963
Project: Derby
Issue Type: Bug
the problem
Memory leak when shutting down Derby system
---
Key: DERBY-5963
URL: https://issues.apache.org/jira/browse/DERBY-5963
Project: Derby
Issue Type: Bug
Components
[
https://issues.apache.org/jira/browse/DERBY-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Igor Sereda updated DERBY-5963:
---
Attachment: yourkit.png
Memory leak when shutting down Derby system
may use a
connection from a pool for the lifetime of the request. In this case this
variable will contain a *WeakReference*.
=
I have the second case, and I don't see any WeakReferences being used.
Memory leak when shutting down Derby system
[
https://issues.apache.org/jira/browse/DERBY-5963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Igor Sereda updated DERBY-5963:
---
Attachment: TestDerbyLeak.java
Memory leak when shutting down Derby system
happens if a connection is ever
shared between more than one thread.
Attached please find demo code. I couldn't unit-test it because some internal
stuff is used. Use a profiler to see that the database is held in memory.
As a workaround, I will have to use reflection to clean up all active threads
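Before resorting to reflection, it may be worth trying a full engine shutdown, which is the documented way to let Derby's classes become collectable. A sketch (assumes the embedded driver is on the classpath; note that Derby reports a clean engine shutdown by throwing an SQLException with SQLState XJ015, so "success" arrives as an exception):

```java
import java.sql.DriverManager;
import java.sql.SQLException;

public class DerbyShutdown {
    // SQLState Derby uses to report a successful full-engine shutdown.
    static boolean isCleanShutdownState(String sqlState) {
        return "XJ015".equals(sqlState);
    }

    public static void main(String[] args) {
        try {
            // Shutting down the whole engine releases engine-held references,
            // so the Derby classes can be garbage-collected afterwards.
            DriverManager.getConnection("jdbc:derby:;shutdown=true");
            System.out.println("unexpected: no shutdown exception");
        } catch (SQLException e) {
            System.out.println("clean shutdown: " + isCleanShutdownState(e.getSQLState()));
        }
    }
}
```

If threads created by the application are still holding Derby objects after this, the leak is on the application side rather than in the engine.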
Unit tests fail on a derby closed iterator test with a Invalid memory access
of location
--
Key: DERBY-5481
URL: https://issues.apache.org/jira/browse/DERBY-5481
Project: Derby
Issue Type: Bug
No activity happened on this jira. Should we
go ahead and close it until more information is available?
Unit tests fail on a derby closed iterator test with a Invalid memory access
of location
Labels: derby_triage10_10 derby_triage10_5_2 (was:
derby_triage10_5_2)
Taking off of High Value Fix until we have a reproduction. Adding wrong
results which could occur with this issue.
Threading issue with DependencyManager may cause in-memory dependencies
to DriverManager, but how do you do this via a
DataSource?
Thanks,
-Rick
Original Message
Subject:Re: Can't remove derby from memory
Date: Mon, 26 Mar 2012 18:06:24 -0700
From: Bryan Pendleton bpendleton.de...@gmail.com
Reply-To: Derby Discussion derby-u
Rick Hillegas rick.hille...@oracle.com writes:
This discussion is taking place on the user list. It is my
understanding that graceful engine shutdown is supposed to remove
references to Derby classes, making all of the engine code eligible to
be garbage-collected. At least, that is what I
10.5.3.2
Assignee: Knut Anders Hatlen (was: Kathey Marsden)
Completed merge back to 10.5. Assigning back to Knut and resolving.
Out of memory error when creating a very large table
Key: DERBY
-3009_10_5_diff.txt
Out of memory error when creating a very large table
Key: DERBY-3009
URL: https://issues.apache.org/jira/browse/DERBY-3009
Project: Derby
Issue Type: Bug
to 10.5
Out of memory error when creating a very large table
Key: DERBY-3009
URL: https://issues.apache.org/jira/browse/DERBY-3009
Project: Derby
Issue Type: Bug
[
https://issues.apache.org/jira/browse/DERBY-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kathey Marsden reopened DERBY-5457:
---
Memory is not freed after OutOfMemoryError, thus preventing Derby from
recovering
[
https://issues.apache.org/jira/browse/DERBY-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kathey Marsden closed DERBY-5457.
-
Resolution: Duplicate
Closing as duplicate of DERBY-5271
Memory is not freed
is that I have a query that is created as a Statement,
not a PreparedStatement. I am not using a PreparedStatement as the tables
involved in the query are dynamic. A unique query is run about 4 times an
hour. Is this going to cause memory problems, permgen space in particular?
I could
A unique query is run about 4 times an hour. Is this going to cause
memory problems, permgen space in particular?
It shouldn't cause such problems. The query will stay in memory for a
while after completion, but it should be eligible for garbage collection
once it's no longer
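The point about unique query texts can be made concrete: each distinct Statement text is compiled and cached as its own plan (with its own generated class), while one parameterized PreparedStatement text is reused for every value. A sketch (table and column names are made up):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

public class StatementCacheSketch {
    // One parameterized SQL text: the statement cache keys on the text,
    // so every id reuses the same compiled plan.
    static String parameterizedSql(String table) {
        return "SELECT * FROM " + table + " WHERE id = ?";
    }

    // Unique SQL text per value: each one is compiled and cached separately,
    // which is what fills the cache (and PermGen, via generated classes).
    static String literalSql(String table, int id) {
        return "SELECT * FROM " + table + " WHERE id = " + id;
    }

    static void lookup(Connection c, int id) throws Exception {
        try (PreparedStatement ps = c.prepareStatement(parameterizedSql("events"))) {
            ps.setInt(1, id);
            ps.executeQuery().close();
        }
    }
}
```

When the table names themselves are dynamic, parameterizing the values still shrinks the number of distinct cached texts from one-per-query to one-per-table.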
Thanks for this response and the previous one. Much knowledge being gained!
Brett
-Original Message-
From: Knut Anders Hatlen [mailto:knut.hat...@oracle.com]
Sent: Thursday, March 01, 2012 4:44 PM
To: derby-dev@db.apache.org
Subject: Re: Another question regarding memory and queries
to the garbage collector? Could there be any dangling references to the
statements so as to stop them from being gc'ed?
Memory leak in statement cache of PreparedStatement
---
Key: DERBY-5415
URL