Martin, you probably know all this already... if so, sorry. Until flush() is invoked explicitly, the DN (DataNucleus) commands carrying the changes are not sent to the database; they are simply queued. That is why the unique key violation is not thrown by the DBMS at that point.
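To make that concrete, here is a minimal sketch against the plain JDO API (the Customer class and its unique "email" column are invented for the example, and "pm" is assumed to be the current javax.jdo.PersistenceManager):

{code}
// Sketch only: "Customer" and its unique "email" column are made up for the example.
pm.currentTransaction().begin();

pm.makePersistent(new Customer("[email protected]"));
pm.makePersistent(new Customer("[email protected]")); // duplicate key

// Up to here DataNucleus has only queued the INSERTs in memory (as described
// above): no SQL has reached the DBMS, so no unique key violation can be
// raised yet.

pm.flush(); // the queued INSERTs are executed now -> this is where the violation surfaces
            // (without the explicit flush it would only show up at commit())

pm.currentTransaction().commit();
{code}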
If there are queued commands in DN that include data changes then, since those changes are not yet persisted, they would not be taken into account when querying the database, so the result set would not reflect them (i.e. we would be querying an "open" data model that does not yet include the latest, still-enqueued changes). For that reason we also always execute a flush() before querying in our own custom query methods (roughly the pattern sketched after the quoted message below). But there must be some other problem here in Isis, or in your implementation that unique key violation is only thrown if flushed... Is that possible?

> On 17/4/2015, at 23:07, Martin Grigorov (JIRA) <[email protected]> wrote:
>
> [ https://issues.apache.org/jira/browse/ISIS-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14500668#comment-14500668 ]
>
> Martin Grigorov commented on ISIS-1134:
> ---------------------------------------
>
> org.apache.isis.core.metamodel.services.container.DomainObjectContainerDefault#firstMatch(org.apache.isis.applib.query.Query<T>)
> starts with a flush() call.
> Why is this needed? Why by default?
> It looks like a custom transaction isolation mechanism.
>
> Removing this #flush() call fixes the issue above and my app seems to work
> fine. But I'm sure some applications' use cases will now be broken.
>
>> DN connections leak due to non-closed queries (?!)
>> --------------------------------------------------
>>
>> Key: ISIS-1134
>> URL: https://issues.apache.org/jira/browse/ISIS-1134
>> Project: Isis
>> Issue Type: Bug
>> Components: Core
>> Affects Versions: core-1.8.0
>> Reporter: Martin Grigorov
>> Assignee: Dan Haywood
>>
>> My application failed twice with OutOfMemoryError in heap space, so I
>> dumped a .hprof of its memory (jmap -dump:format=b,file=some-file.hprof) and
>> analyzed it with Eclipse MAT (https://eclipse.org/mat/).
>> It appears that there are many
>> org.datanucleus.store.rdbms.query.JDOQLQuery$2 objects.
>> JDOQLQuery$2 appears to be a ManagedConnectionResourceListener (
>> https://github.com/datanucleus/datanucleus-rdbms/blob/651c77ff3b2af76ada97d14b537cd41fb0524a0c/src/java/org/datanucleus/store/rdbms/query/JDOQLQuery.java#L740).
>> The listener is removed only when the
>> org.datanucleus.store.query.AbstractQueryResult#close() method is called.
>> org.apache.isis.objectstore.jdo.datanucleus.persistence.queries.PersistenceQueryFindAllInstancesProcessor#process()
>> does:
>> {code}
>> final List<?> pojos = (List<?>) jdoQuery.execute();
>> return loadAdapters(specification, pojos);
>> {code}
>> So it consumes the result and returns an ObjectAdapter for each pojo, but it
>> doesn't close the Query.
>> AFAIK the open-session-in-view pattern is not used in Isis, so the resources
>> should be closed explicitly after use.
>> A simple solution is to try/finally this code and close the query, but I may
>> be missing some detail here.
>> Related: I recently profiled the application with YourKit and it showed
>> that the DomainObjectContainer#allMatches() method is slow in one of the use
>> cases. Some quick investigation showed that DomainObjectContainer delegates
>> the work to
>> org.apache.isis.objectstore.jdo.datanucleus.persistence.queries.PersistenceQueryProcessor#process().
>> It loads the POJOs from the DB, then wraps them in ObjectAdapters, and
>> finally
>> org.apache.isis.core.metamodel.services.container.DomainObjectContainerDefault#allMatches(org.apache.isis.applib.query.Query<T>)
>> unwraps them back to POJOs.
>> Why is this done?
>> This solves the issue of "consume the QueryResult before closing it" for the
>> memory leak, but it also adds to the processing time.
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
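On the leak itself: the try/finally Martin suggests in the quoted issue would look roughly like this (just a sketch of the idea against the JDO API, not the actual Isis patch; Query#closeAll() should close the result lists, which is what triggers AbstractQueryResult#close() and removes the listener):

{code}
// Sketch of the suggested change (not the actual Isis code):
final List<?> pojos = (List<?>) jdoQuery.execute();
try {
    // loadAdapters(..) iterates the whole result list, so the QueryResult
    // is fully consumed before we close it
    return loadAdapters(specification, pojos);
} finally {
    jdoQuery.closeAll(); // releases the datastore resources held by the query
}
{code}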
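And for completeness, this is roughly the "flush() before querying" pattern I mentioned above for our own repository methods (names are invented for the example; "container" is the injected DomainObjectContainer):

{code}
// Illustrative only: "Customer" and the "findByEmail" named query are invented.
public List<Customer> findByEmail(final String email) {
    container.flush(); // push any queued changes so the query result reflects them
    return container.allMatches(
            new QueryDefault<>(Customer.class, "findByEmail", "email", email));
}
{code}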
