[ https://issues.apache.org/jira/browse/DERBY-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16945270#comment-16945270 ]

Marco edited comment on DERBY-7049 at 10/6/19 8:12 AM:
-------------------------------------------------------

I did a lot more research, but I was not able to solve the real issue :-( I'm 
now pretty sure that it is a bug in the JRE. I tried the following JVM 
arguments:

{code:java}
-XX:MaxMetaspaceSize=2G
-XX:-UseCompressedClassPointers
-XX:+PrintGCDetails
-XX:+TraceClassUnloading
-XX:+TraceClassLoading
-XX:+PrintGCDateStamps
-XX:+PrintGCTimeStamps
-XX:+PrintGCCause
-XX:+HeapDumpOnOutOfMemoryError
{code}

I then grepped the resulting log, counted the occurrences of "Loaded" and 
"Unloading", and saw that over many hours Derby still continued to generate and 
load more and more classes, but also unloaded them again. I still don't 
understand why it generated and loaded new classes at all, given that I set 
{{derby.language.statementCacheSize=500}}, which definitely had an effect 
(classes were generated and loaded less often). I compared it with 
{{derby.language.statementCacheSize=5}} and could see the difference. IMHO, 
Derby should not have generated and loaded any new classes, because my total 
number of distinct SQL queries is below 400. Thus, this seems to be one part of 
the issue, but not the essential part, because:
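For reference, the statement cache size mentioned above can be set as a 
JVM-wide system property before the embedded engine boots. A minimal sketch 
(the value 500 matches my setup; everything else is illustrative):

{code:java}
public class DerbyStatementCacheConfig {
    public static void main(String[] args) {
        // Derby reads this system property when the embedded engine boots,
        // so it must be set before the first connection is opened.
        System.setProperty("derby.language.statementCacheSize", "500");

        // Alternatively, the same setting can go into the derby.properties
        // file in the Derby system directory:
        //
        //     derby.language.statementCacheSize=500

        System.out.println(System.getProperty("derby.language.statementCacheSize"));
    }
}
{code}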

When grepping and counting, I saw that the number of classes in memory stayed 
roughly constant, i.e. the numbers of "Loaded" and "Unloading" lines in the log 
grew in parallel over the hours. After many hours I had something like 50,000 
classes loaded and 40,000 classes unloaded; many hours later, 90,000 loaded and 
80,000 unloaded. So the number of classes in memory was constantly about 
10,000. Still, my free metaspace shrank further and further over this time. 
Hence, I'm now certain that while the GC actually releases the classes (at 
least its log messages claim this), their memory does not seem to be released.
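The grep-and-count check above can also be scripted. A minimal Java sketch, 
assuming the {{-XX:+TraceClassLoading}}/{{-XX:+TraceClassUnloading}} output 
(HotSpot 8 prints lines containing "[Loaded " and "[Unloading ") was captured 
into a file; the file name {{classload.log}} is a placeholder:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ClassLoadLogCounter {

    /** Counts lines containing the given marker, e.g. "[Loaded " or "[Unloading ". */
    static long count(List<String> lines, String marker) {
        return lines.stream().filter(line -> line.contains(marker)).count();
    }

    public static void main(String[] args) throws IOException {
        // "classload.log" is a placeholder for wherever the JVM's
        // class-loading/-unloading output was redirected.
        List<String> lines = Files.readAllLines(
                Paths.get(args.length > 0 ? args[0] : "classload.log"));
        long loaded = count(lines, "[Loaded ");
        long unloaded = count(lines, "[Unloading ");
        System.out.println("Loaded:    " + loaded);
        System.out.println("Unloading: " + unloaded);
        System.out.println("Resident (approx.): " + (loaded - unloaded));
    }
}
{code}

The "Resident" number is what stayed at roughly 10,000 in my runs while the 
free metaspace nevertheless kept shrinking.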

Anyway, during my intensive analysis, I stumbled over a commit hook in my code 
doing heaps and heaps of unnecessary work: it iterated over hundreds of 
thousands of rows in a situation in which I knew exactly which rows needed 
processing, rather than over all rows that were "pending" in my app's context. 
So I refactored this code and not only sped things up enormously, but also 
worked around the class-loading memory issue. My application now works fine 
over multiple days (even though the metaspace still shrinks, it does so slowly 
enough not to cause problems in a real-world scenario anymore).

I therefore stop analysing this problem any further.

THANKS a lot to everyone who helped me with this!!!



> OutOfMemoryError: Compressed class space
> ----------------------------------------
>
>                 Key: DERBY-7049
>                 URL: https://issues.apache.org/jira/browse/DERBY-7049
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.13.1.1
>            Reporter: Marco
>            Priority: Major
>         Attachments: StatementLogReadingVTI.java
>
>
> After a few days of working with an embedded Derby database (currently 
> version 10.13.1.1 on Oracle Java 1.8.0_201-b09), the following error occurs:
> *java.lang.OutOfMemoryError: Compressed class space*
> {code:java}
> java.lang.OutOfMemoryError: Compressed class space
>     at java.lang.ClassLoader.defineClass1(Native Method) ~[na:1.8.0_201]
>     at java.lang.ClassLoader.defineClass(ClassLoader.java:763) ~[na:1.8.0_201]
>     at java.lang.ClassLoader.defineClass(ClassLoader.java:642) ~[na:1.8.0_201]
>     at org.apache.derby.impl.services.reflect.ReflectLoaderJava2.loadGeneratedClass(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.services.reflect.ReflectClassesJava2.loadGeneratedClassFromData(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.services.reflect.DatabaseClasses.loadGeneratedClass(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.services.bytecode.GClass.getGeneratedClass(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.compile.ExpressionClassBuilder.getGeneratedClass(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.compile.StatementNode.generate(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.<init>(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.datanucleus.store.rdbms.datasource.dbcp.DelegatingConnection.prepareStatement(DelegatingConnection.java:259) ~[datanucleus-rdbms-4.0.12.jar:na]{code}
> I tried to solve the problem by periodically shutting down the database, 
> because I read that the generated classes, as well as all other allocated 
> resources, should be released when the DB is shut down.
> I thus run the following code roughly once every 20 minutes:
> {code:java}
> String shutdownConnectionURL = connectionURL + ";shutdown=true";
> try {
>     DriverManager.getConnection(shutdownConnectionURL);
> } catch (SQLException e) {
>     int errorCode = e.getErrorCode();
>     if (DERBY_ERROR_CODE_SHUTDOWN_DATABASE_SUCCESSFULLY != errorCode &&
>             DERBY_ERROR_CODE_SHUTDOWN_DATABASE_WAS_NOT_RUNNING != errorCode) {
>         throw new RuntimeException(e);
>     }
> }
> {code}
> Unfortunately, this has no effect :( The OutOfMemoryError still occurs after 
> about 2 days. Do I assume correctly that the above code _should_ properly 
> shut down the database? And do I assume correctly that this shutdown should 
> release the generated classes?
> IMHO, it is already a bug in Derby that I need to shut down the database at 
> all in order to prevent it from piling up generated classes. Shouldn't it 
> release the generated classes at the end of each transaction? But even if I 
> really have to shut down the DB, it is certainly a bug that the classes are 
> still kept in "compressed class space" even after the shutdown.
> I searched the release notes and the existing bugs (here in JIRA) and did not 
> find anything related to this {{OutOfMemoryError}}. Hence, I am opening this 
> bug report now.
> This issue was originally reported in 
> [subshare#74|https://github.com/subshare/subshare/issues/74], but it is IMHO 
> clearly a Derby bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
