[
https://issues.apache.org/jira/browse/JCR-2483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14729118#comment-14729118
]
Unico Hommes commented on JCR-2483:
-----------------------------------
The problem with the patch is that it uses SQL that is not universally
supported (the LIMIT keyword). So I'm hesitant to apply that fix, as it would
break backward compatibility (for instance, SQL Server and Oracle
installations would need to start using a SQL Server or Oracle specific
DatabaseJournal). JDBC provides the Statement#setFetchSize method to hint to
the driver how many rows to retrieve at once. MySQL Connector/J only supports
fetching either all rows at once or one row at a time, and you need to set the
fetch size to Integer.MIN_VALUE to make it behave in the latter way. It's not
an optimal solution for MySQL, as performance would probably be much better if
the result could be retrieved in batches, but at least that solution could be
implemented in a backward compatible way.
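
For illustration, here is a minimal sketch of how that fetch-size hint could be
applied when reading the journal. The connection URL, table and column names
are assumptions made for the example, not Jackrabbit's actual DatabaseJournal
code, and the negative fetch size is a Connector/J specific convention that
other drivers may reject, so it would have to be applied conditionally.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StreamingJournalRead {

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details and schema, for illustration only.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/repository", "user", "pass")) {
            PreparedStatement stmt = con.prepareStatement(
                    "SELECT REVISION_ID, REVISION_DATA FROM JOURNAL WHERE REVISION_ID > ?",
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            // Connector/J streams rows one at a time only for forward-only,
            // read-only statements with a fetch size of Integer.MIN_VALUE.
            // Other drivers treat the fetch size as a plain hint and may
            // reject a negative value, so this would need to be conditional.
            stmt.setFetchSize(Integer.MIN_VALUE);
            stmt.setLong(1, 0L);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    long revision = rs.getLong(1);
                    byte[] data = rs.getBytes(2);
                    // process one revision record at a time instead of
                    // materializing the whole result set in memory
                }
            }
        }
    }
}
{code}

With row-by-row streaming the heap no longer has to hold the whole result set,
at the cost of keeping the connection busy until the ResultSet is fully
consumed.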
> Out of memory error while adding a new host due to large number of revisions
> ----------------------------------------------------------------------------
>
> Key: JCR-2483
> URL: https://issues.apache.org/jira/browse/JCR-2483
> Project: Jackrabbit Content Repository
> Issue Type: Improvement
> Components: clustering
> Affects Versions: 1.6
> Environment: MySQL DB. 512 MB memory allocated to java app.
> Reporter: aasoj
> Attachments: patch
>
>
> In a cluster deployment, revisions are saved in the journal table in the DB.
> After a while a huge number of revisions can accumulate (around 70,000 in our
> test). When a new host is added to the cluster, it tries to read all the
> revisions at once, hence the following error:
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2931)
> at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2871)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3414)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:910)
> at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1405)
> at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2816)
> at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:467)
> at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2510)
> at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1746)
> at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2135)
> at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2542)
> at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1734)
> at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:995)
> at org.apache.jackrabbit.core.journal.DatabaseJournal.getRecords(DatabaseJournal.java:460)
> at org.apache.jackrabbit.core.journal.AbstractJournal.doSync(AbstractJournal.java:201)
> at org.apache.jackrabbit.core.journal.AbstractJournal.sync(AbstractJournal.java:188)
> at org.apache.jackrabbit.core.cluster.ClusterNode.sync(ClusterNode.java:329)
> at org.apache.jackrabbit.core.cluster.ClusterNode.start(ClusterNode.java:270)
> This can also happen to an existing host in the cluster when the number of
> revisions returned is very high.
> Possible solutions:
> 1. Clean old revisions using the janitor thread: this may be good for new
> hosts, but it will fail when the sync delay is high (a few hours) and the
> number of updates from existing hosts in the cluster is high.
> 2. Increase the memory allocated to the Java process: this is not always a
> feasible option.
> 3. Limit the number of updates read from the DB in any one cycle.
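
A rough sketch of option 3, assuming the journal table has a monotonically
increasing REVISION_ID column (table and column names are illustrative, not
necessarily the real Jackrabbit schema): read the records in fixed-size
windows using the portable Statement#setMaxRows hint and repeat until a window
comes back short. Whether the cap is enforced on the server or only on the
client varies by driver.

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BatchedJournalRead {

    private static final int BATCH_SIZE = 1000; // illustrative cap per sync cycle

    /** Reads journal records in windows, starting after the locally known revision. */
    static long syncFrom(Connection con, long lastRevision) throws SQLException {
        boolean more = true;
        while (more) {
            try (PreparedStatement stmt = con.prepareStatement(
                    "SELECT REVISION_ID, REVISION_DATA FROM JOURNAL "
                            + "WHERE REVISION_ID > ? ORDER BY REVISION_ID")) {
                stmt.setMaxRows(BATCH_SIZE); // portable JDBC cap on returned rows
                stmt.setLong(1, lastRevision);
                int read = 0;
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        lastRevision = rs.getLong(1);
                        // apply the revision record here
                        read++;
                    }
                }
                more = (read == BATCH_SIZE); // a further window may remain
            }
        }
        return lastRevision;
    }
}
{code}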