[
https://issues.apache.org/jira/browse/OPENJPA-441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12546703
]
Christiaan commented on OPENJPA-441:
------------------------------------
Patrick,
in org.apache.openjpa.jdbc.sql.RowImpl the generated SQL is cached in _sql.
When dealing with many objects of the same type in one transaction that all
undergo a similar modification (e.g. they are deleted, or a field is updated),
caching this SQL carries considerable memory overhead, since each object (or
RowImpl) caches the same SQL. In the attached SQL it basically comes down to
2 statements which are duplicated many times.
I think there are two options to improve this:
1) reuse the SQL instead of duplicating it;
2) don't cache it, possibly only under certain conditions, e.g. when many
objects of the same type are involved, but generate it each time it is needed.
Option 2) is probably the most straightforward. The caching is probably done
for a reason, but I wonder whether generating the SQL when needed has that
much performance impact?
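Option 1) could be sketched as an intern-style cache: rows that would generate equal SQL text share one canonical string instead of each holding its own copy. This is a minimal, hypothetical illustration, not OpenJPA's actual RowImpl API; the class and method names below are made up for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch of option 1: instead of each row caching its own
 * copy of identical SQL text, rows of the same type/action share one
 * canonical instance via an intern cache. SqlCache and intern() are
 * illustrative names, not OpenJPA API.
 */
public class SqlCache {
    private final Map<String, String> canonical = new ConcurrentHashMap<>();

    /** Returns one shared instance for all equal SQL strings. */
    public String intern(String sql) {
        String existing = canonical.putIfAbsent(sql, sql);
        return existing != null ? existing : sql;
    }

    /** Number of distinct statements actually retained. */
    public int size() {
        return canonical.size();
    }

    public static void main(String[] args) {
        SqlCache cache = new SqlCache();
        // Simulate many rows generating the same DELETE statement.
        String first = cache.intern(new String("DELETE FROM T WHERE ID = ?"));
        String second = cache.intern(new String("DELETE FROM T WHERE ID = ?"));
        // Both rows now reference a single cached string instead of two copies.
        System.out.println(first == second);
        System.out.println(cache.size());
    }
}
```

With N rows sharing the same statement, the cache keeps one string rather than N, which is where the duplicated memory in the attached test case would go away.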
> Memory increase when deleting objects
> -------------------------------------
>
> Key: OPENJPA-441
> URL: https://issues.apache.org/jira/browse/OPENJPA-441
> Project: OpenJPA
> Issue Type: Bug
> Affects Versions: 1.0.0
> Environment: Kodo 4.1.4, JDK 6, ms sql server 2005, JTDS 1.2
> Reporter: Christiaan
> Attachments: results.ZIP, TestCaseMemoryAndDelete.zip
>
>
> This issue is based on issue:
> http://issues.apache.org/jira/browse/OPENJPA-439
> When executing a delete on objects which have all been loaded into memory,
> the memory usage doubles when calling pm.deletePersistentAll().
> The same test case attached to the linked issue can be used.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.