Hi Dirk,

On Jul 4, 2008, at 4:27 AM, Dirk Brenckmann wrote:
> Craig L Russell wrote:
>> This is "the" reason for the EntityManager.flush() method: it gives you immediate feedback if there is some application reason to fail the transaction.
>
> Thanks for your quick response. I'm not so sure I want this to be "the" reason for calling flush(). I'd call flush() if the next DB query would need the previous data released to the database before being executed.
I agree. That's also a valid reason: you'd want to flush if you're executing a query that might depend on information in the persistence context.
> Having a stateless(?) bean call flush() any time after a merge occurs, just to be sure the merge did not fail, might result in a large overhead. E.g. a batch merge:
>
>     List<MyEntity> entities = ...
>     for (MyEntity entity : entities) {
>         manager.merge(entity);
>         try {
>             manager.flush();
>         } catch (... t) {
>             throw new UserfeedBackException("Updating <entity> failed due to: ", t);
>         }
>     }
>
> If I want to get rid of <many> calls to flush(), I finally might not know which entity failed the merge:
>
>     List<MyEntity> entities = ...
>     for (MyEntity entity : entities) {
>         manager.merge(entity);
>     }
>     try {
>         manager.flush();
>     } catch (... t) {
>         throw new UserfeedBackException("Updating <???> failed due to: ", t);
>     }
>
> Which finally leads me to the question: is flush() a 'lightweight' or a 'heavyweight' call?
Flush is a heavyweight call. It is orders of magnitude more processing than telling the entity manager that an instance is to be persisted. And flush is a coarse-grained operation, effectively synchronizing the entire persistence context with the database.
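The trade-off in your two batch variants can be sketched without a real database. The stub below is invented for illustration (it is not part of any JPA API); it stands in for a merge whose database work fails for one entity, and shows why flushing per entity preserves the identity of the failing entity while a single final flush does not:

```java
import java.util.List;

public class FlushTradeoff {

    // Stub standing in for the database work behind merge()/flush();
    // fails for one "bad" entity. Illustrative only, not a JPA API.
    static void mergeStub(String entity) {
        if (entity.equals("bad")) {
            throw new IllegalStateException("constraint violation");
        }
    }

    // Per-entity flush pattern: the failing entity is known in the catch.
    static String mergeOneByOne(List<String> entities) {
        for (String e : entities) {
            try {
                mergeStub(e); // database work happens per entity
            } catch (RuntimeException t) {
                return "Updating " + e + " failed due to: " + t.getMessage();
            }
        }
        return "ok";
    }

    // Single flush at the end: the failure can no longer be attributed.
    static String mergeThenFlushOnce(List<String> entities) {
        try {
            for (String e : entities) {
                mergeStub(e); // all deferred work happens at the one flush()
            }
        } catch (RuntimeException t) {
            return "Updating <???> failed due to: " + t.getMessage();
        }
        return "ok";
    }

    public static void main(String[] args) {
        List<String> batch = List.of("a", "bad", "c");
        System.out.println(mergeOneByOne(batch));
        System.out.println(mergeThenFlushOnce(batch));
    }
}
```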
> Maybe there are docs that cover 'OpenJPA persistence strategies' in more detail?
OpenJPA tries to defer processing until the final flush before commit. This allows efficient operations such as batching inserts, updates, and deletes.
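As a rough, database-free illustration of why deferral pays off (the round-trip counting below is purely illustrative, not an OpenJPA API): flushing every change costs one round trip per statement, while deferred changes can be grouped into JDBC-style batches at commit.

```java
public class BatchingSketch {

    // Eager flushing: each change is sent to the database immediately,
    // so every change costs one round trip.
    static int roundTripsFlushingEachChange(int changes) {
        int roundTrips = 0;
        for (int i = 0; i < changes; i++) {
            roundTrips++; // one statement per eager flush
        }
        return roundTrips;
    }

    // Deferred processing: changes accumulated until commit can be grouped
    // into batches (ceiling division), one round trip per batch.
    static int roundTripsWithBatching(int changes, int batchSize) {
        return (changes + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        System.out.println(roundTripsFlushingEachChange(100)); // 100
        System.out.println(roundTripsWithBatching(100, 20));   // 5
    }
}
```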
The JPA specification is deliberately vague on when changes to the persistence context are propagated to the database, since different products (JDO, Hibernate, and TopLink) had different semantics.
Also note that in an optimistic transaction scenario, as soon as you modify the database using a connection, that connection must remain bound to the transaction until the transaction completes. This means the bound connection is removed from the available connection pool for the duration of the transaction, so requiring early modification might affect overall system performance.
That said, I think that there is value in fine-grained propagation strategies, one of which you mention. Sometimes efficiency is trumped by information. In your case, it sounds like you'd rather suffer potential performance degradation in exchange for more information as to what failed.
One possibility is to allow a flag that specifies under what circumstances OpenJPA should go to the database early: persist (the insert could throw a duplicate-key exception); delete (the delete could throw a no-such-entity exception); merge (the select could throw a no-such-entity exception).
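If such a flag existed, it might be exposed as a persistence-unit property. To be clear: the property below is entirely hypothetical, invented here only to sketch what the proposal could look like; it is not an actual OpenJPA property.

```xml
<!-- HYPOTHETICAL property, not part of OpenJPA: sketch of a flag that
     lists the operations for which OpenJPA would go to the database
     immediately instead of deferring work to commit. -->
<property name="openjpa.ExampleEagerDatabaseOn" value="persist,delete,merge"/>
```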
Craig
Craig L Russell
Architect, Sun Java Enterprise System
http://java.sun.com/products/jdo
408 276-5638 mailto:[EMAIL PROTECTED]
P.S. A good JDO? O, Gasp!
