Hi Daryl,
It appears to me that the issue you have is the conflation of entities
(job1 and job2) with EntityManagers (the same entity manager is used
by both job1 and job2, since you are using a thread-local EM).
If you want job1 and job2 to stay isolated, then each of them would
need its own entity manager, and you would need some other mechanism
(not a thread-local EM) to group related entities.
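To make the distinction concrete, here is a minimal sketch of the idea in plain Java. It models each persistence context as a simple map of managed state rather than using a real EntityManager (ToyContext and its map-backed "database" are illustrative stand-ins, not JPA API), so you can see that one unit of work's commit cannot leak the other's change:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a persistence context: it tracks managed entities and,
// on commit, flushes every one of them back -- just as a real
// EntityManager flushes all dirty instances, not only the one "saved".
class ToyContext {
    private final Map<Integer, String> database;           // shared "table"
    private final Map<Integer, String> managed = new HashMap<>();

    ToyContext(Map<Integer, String> database) { this.database = database; }

    void manage(int id, String name)  { managed.put(id, name); }
    void setName(int id, String name) { managed.put(id, name); }
    void commit()                     { database.putAll(managed); }
}

public class IsolationDemo {
    public static void main(String[] args) {
        Map<Integer, String> db = new HashMap<>();
        db.put(1, "XX_1");
        db.put(2, "XX_2");

        // One context per unit of work: job1's edit cannot leak
        // into job2's commit.
        ToyContext ctx1 = new ToyContext(db);
        ToyContext ctx2 = new ToyContext(db);
        ctx1.manage(1, db.get(1));
        ctx2.manage(2, db.get(2));

        ctx1.setName(1, "XX_newname_1");
        ctx2.setName(2, "XX_newname_2");
        ctx2.commit();                 // "save" only job2

        System.out.println(db.get(1)); // still XX_1
        System.out.println(db.get(2)); // XX_newname_2
    }
}
```

With a single shared context the first println would show "XX_newname_1", which is exactly the behavior your unit test flags as "not logical".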
Or have I misunderstood your example below? The key for me is the "not logical" assertion:

assertEquals("XX_newname_1", job1b.getName()); // !!! this is not logical since I didn't "save" job1
Well, reading the code, both job1 and job2 share the same entity
manager, so "begin transaction, save, commit" will save all changed
instances.
You don't mention it, but I assume that setName immediately changes
the state of a Role, so the "save" doesn't really do anything extra.
If you want to isolate changes to job1 and job2, you would have to
buffer changes (e.g. setName) and only apply them when performing the
save.
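A minimal sketch of what such buffering might look like (plain Java; BufferedRole and its map-backed entity state are hypothetical illustrations, not OpenJPA API -- a real version would sit in front of the managed entity and a transaction):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: instead of mutating the managed entity directly (which makes
// it dirty in the persistence context immediately), queue the edits
// locally and apply them to the entity only when save() is called.
class BufferedRole {
    private final Map<String, String> entityState;   // stands in for the managed entity
    private final Map<String, String> pending = new HashMap<>();

    BufferedRole(Map<String, String> entityState) { this.entityState = entityState; }

    // setName no longer dirties the managed instance...
    void setName(String name) { pending.put("name", name); }

    // ...the entity only changes when this unit of work is saved.
    // A real implementation would begin/commit the transaction here.
    void save() {
        entityState.putAll(pending);
        pending.clear();
    }

    String currentName() { return entityState.get("name"); }
}
```

Until save() is called, a commit triggered by some other object's save would flush the entity in its unmodified state, which is the isolation you were expecting.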
I also don't quite understand what you are trying to accomplish with:
deleteViaJdbc(job1b);
Is this intended to simulate a third party deleting data that you're
using? If so, then the observed behavior seems rational.
Craig
On Feb 8, 2011, at 9:32 AM, Daryl Stultz wrote:
On Tue, Feb 8, 2011 at 11:08 AM, Michael Dick <[email protected]> wrote:
getting the deleted object out of L1. I thought that if the object was modified prior to the JDBC delete, then another object was modified and "saved", the save transaction would cause the deleted but dirty object still in L1 to be saved as well (assuming it is still managed), but I can't demonstrate this in a unit test.
Haven't tried that scenario myself, and I'd be interested to hear what you find out if you have time to try it.
Here's a unit test that exposes what I consider a big problem with the JPA architecture:

Role job1 = setup.insertJob("XX_1"); // instantiates and inserts via ThreadLocal EM
Role job2 = setup.insertJob("XX_2");
System.out.println("job1.getId() = " + job1.getId());
System.out.println("job2.getId() = " + job2.getId());
// both jobs "managed"
job1.setName("XX_newname_1");
job2.setName("XX_newname_2");
// both dirty and managed
job2.save(); // begin transaction, merge, commit
setup.resetEntityManagerForNewTransaction(); // close previous EM, open new one
Role job1b = getEntityManager().find(Role.class, job1.getId());
assertFalse(job1b == job1);
assertEquals("XX_newname_1", job1b.getName()); // !!! this is not logical since I didn't "save" job1
Role job2b = getEntityManager().find(Role.class, job2.getId());
assertFalse(job2b == job2);
assertEquals("XX_newname_2", job2b.getName()); // this is expected
// part two
job1b.setName("XX_newname_1b"); // job1b dirty
deleteViaJdbc(job1b);
job2b.setName("XX_newname_2b"); // job2b dirty
try {
    job2b.save();
    fail("Expected exception");
} catch (OptimisticLockException exc) { // trying to update deleted job
    exc.printStackTrace();
}
So I'm making changes to two objects, then saving just one of them. The side effect is that both objects are saved. I repeat the process but delete the changed object via JDBC. OpenJPA appears to be trying to save the deleted object. Here's the output:
job1.getId() = 170
job2.getId() = 171
<openjpa-1.2.2-r422266:898935 nonfatal store error> org.apache.openjpa.persistence.OptimisticLockException: An optimistic lock violation was detected when flushing object instance "com.sixdegreessoftware.apps.rss.model.Role-170" to the data store. This indicates that the object was concurrently modified in another transaction.
FailedObject: com.sixdegreessoftware.apps.rss.model.Role-170
I don't think OpenJPA should be expected to handle my JDBC delete; I'm just trying to illustrate why I called em.clear() after deleting via JDBC. It would make the second problem go away, but lead to lazy-load problems. The first problem, marked with !!!, is the "nature" of JPA I don't like and am having to find a workaround for, since I have to do the delete via JDBC. At this point, I'm just hoping the "user" doesn't modify the object to be deleted.
The main impact I foresee is that the other entities could have a reference to a deleted row, resulting in a constraint violation at runtime. Then again, you're running that risk so long as you have the legacy cron job that deletes rows, unless your code gets a callback.
There are certain things the "user" should not expect to do after deleting an object, so if find() fails, that's OK. But the above scenario is a little harder to understand. Why can't I change a property on an object, delete it, then save some *other* object? I'm glossing over the problem of using JDBC to do the delete, of course, but you see my point.
--
Daryl Stultz
_____________________________________
6 Degrees Software and Consulting, Inc.
http://www.6degrees.com
http://www.opentempo.com
mailto:[email protected]
Craig L Russell
Architect, Oracle
http://db.apache.org/jdo
408 276-5638 mailto:[email protected]
P.S. A good JDO? O, Gasp!