Hi Pinaki,

I think the issue is whether a LockModeType.READ lock holds for the entire transaction (the subject tx), from the moment the lock is obtained until the moment the transaction has successfully committed. By "hold", I mean either that another tx cannot successfully commit a change to an object that the subject tx has locked until the subject tx ends, or that the subject tx will fail if another transaction has successfully committed such a change prior to the subject tx's end.
In the case of the OpenJPA implementation and the time sequence under discussion, the lock would hold if the implementation obtained a database row level lock (SELECT FOR UPDATE) when it checked the locked object's version.
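To make the distinction concrete, here is a sketch at the SQL level. The statements and table/column names are invented for illustration; they are not what OpenJPA actually generates.

```java
// Hypothetical sketch of the two ways a provider could check a
// read-locked object's version at commit time. Table and column
// names (ACCOUNT, version, id) are invented for illustration.
public class VersionCheckSql {
    // Plain optimistic check: reads the version but takes no row lock,
    // so another transaction can still update the row and commit before
    // the checking transaction ends.
    static String optimisticCheck(long id) {
        return "SELECT version FROM ACCOUNT WHERE id = " + id;
    }

    // Pessimistic variant: the row lock taken by FOR UPDATE blocks
    // concurrent writers until this transaction ends, so the read lock
    // "holds" in the sense defined above.
    static String holdingCheck(long id) {
        return optimisticCheck(id) + " FOR UPDATE";
    }
}
```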
A peripheral question is whether the spec requires that a read lock hold for the entire tx (as defined above). If it does, the TCK certainly doesn't test for that compliance, and OpenJPA is not in compliance.
A clear downside to locking the row when checking the version for a read lock is that two or more transactions with no incompatible changes but a variety of read locks for unchanged objects could end up in deadlock.
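That deadlock risk can be sketched in plain Java (an illustration, not OpenJPA code): two transactions, each having version-checked a different unchanged object under a row lock (as SELECT FOR UPDATE would take), each then needs the row the other already holds. ReentrantLocks stand in for database row locks, and the tryLock timeout stands in for the database picking a deadlock victim.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Two transactions read-lock each other's rows in opposite order; with
// row locks backing the version checks, neither can acquire its second
// lock. Names and structure are invented for this sketch.
public class ReadLockDeadlock {
    static boolean bothStuck() {
        ReentrantLock rowA = new ReentrantLock();
        ReentrantLock rowB = new ReentrantLock();
        CountDownLatch firstLocksHeld = new CountDownLatch(2);
        CountDownLatch attemptsDone = new CountDownLatch(2);
        boolean[] gotSecond = new boolean[2];

        Thread t1 = new Thread(() ->
                runTx(rowA, rowB, firstLocksHeld, attemptsDone, gotSecond, 0));
        Thread t2 = new Thread(() ->
                runTx(rowB, rowA, firstLocksHeld, attemptsDone, gotSecond, 1));
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            return false;
        }
        return !gotSecond[0] && !gotSecond[1]; // neither tx could proceed
    }

    // One transaction: lock its own row (the version check), then try to
    // lock the other row, which the other transaction already holds.
    private static void runTx(ReentrantLock mine, ReentrantLock theirs,
                              CountDownLatch firstLocksHeld,
                              CountDownLatch attemptsDone,
                              boolean[] gotSecond, int i) {
        mine.lock();
        firstLocksHeld.countDown();
        try {
            firstLocksHeld.await();       // both row locks are now held
            gotSecond[i] = theirs.tryLock(100, TimeUnit.MILLISECONDS);
            if (gotSecond[i]) theirs.unlock();
        } catch (InterruptedException ignored) {
        } finally {
            attemptsDone.countDown();
            try { attemptsDone.await(); } catch (InterruptedException ignored) { }
            mine.unlock();                // "commit" finally releases the row
        }
    }
}
```

The second latch keeps both first-stage row locks held until both second-stage attempts have finished, so the mutual blocking is deterministic rather than a timing fluke.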
My take is that the tradeoff is worthwhile, especially since LockModeType.WRITE will give the consistency desired. SFAIK, there are not a lot of implementation options to make a read lock hold as defined. However, the expert group, currently discussing lock mode types, should make clear exactly what can be expected for all lock modes, and have TCK tests to ensure compliance. Intentional ambiguity in a spec is like infidelity in a marriage: it's a knife in the heart of reasonable expectations.
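The reason LockModeType.WRITE holds, in the sense defined above, can be shown with a small sequential replay (an illustration, not provider code): a WRITE lock not only checks the locked object's version at commit, it also increments it, so a later committer that locked the same object fails its own check.

```java
// Two transactions WRITE-lock the same object and neither modifies it.
// Because a WRITE lock increments the version at commit, whichever
// transaction commits second fails its version check. The int fields
// model a version column; this is a sketch, not OpenJPA internals.
public class WriteLockHolds {
    static boolean secondCommitFails() {
        int version = 0;            // version column of the locked row

        int tx1Read = version;      // both transactions read version 0
        int tx2Read = version;

        // tx1 commits first: its check passes, and the WRITE lock
        // increments the version even though tx1 changed no data.
        boolean tx1Ok = (tx1Read == version);
        if (tx1Ok) version++;

        // tx2 commits second: its check now fails, which a provider
        // would surface as an OptimisticLockException. With READ locks
        // and the interleaving discussed in this thread, both checks
        // could pass.
        boolean tx2Ok = (tx2Read == version);

        return tx1Ok && !tx2Ok;
    }
}
```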
Cheers,
David

Pinaki Poddar wrote:
The expressed view relates to Philip's use case by his own observation: "It 'works' if I run with non-enhanced classes, since then there is no change detection and all rows get written and version checked." The point is that what OpenJPA decides to flush in a commit is not the entire set {A,B,C} but only the dirty subset {B,C}. So 'transaction consistency' is ensured but not 'database consistency', because another transaction may have committed {A,B}. And that breaks the parity invariance of the entire set {A,B,C}.

dezzio wrote:
Hi Pinaki, Actually, much as I like your concepts, I don't yet see how they illuminate the issue. Cheers, David

Pinaki Poddar wrote:
Hi David & Philip, I have not had the time to pay this use case the attention it deserves -- but reading it brings up certain aspects that I would like to share. This interesting use case can fail and is failing. But the issue it reveals goes beyond locking semantics. The behavior of a lock is described at the datum level -- the level of warranty for shared access to, or mutation of, a *single datum* in a consistent way. A transaction goes to the next stage and describes the level of warranty for a set of data as an atomic 'unit of work'. But this test case demands an even higher level of warranty -- consistency or invariance of a set-based property (in this case, the odd-even parity of 3 instances), which is neither the property of an individual datum nor the property of a unit of work. Of course, the optimism of the optimistic transaction model results in a weaker warranty of set-based invariance.
To ensure set-based property invariance, a transaction must commit all 3 instances (with consistent odd-even parity) as a unit of work, but what it does is read {A,B,C} and write only {B,C}. I will refrain from describing which flags of which OpenJPA configuration property can be tweaked to get there; let me first hear your comments on the views expressed in this post.

Philip Aston wrote:
Hi David, Thanks for confirming this. So to summarise where we are, we have:
1. A reasonable use case that can fail with some unlucky timing.
2. A technical test case demonstrating the problem that does not rely on unlucky timing.
3. A disagreement in our readings of whether 1 and 2 are spec. compliant.
Personally, I don't share your reading of the spec. In my reading, read locks are safe and provide a concrete guarantee that if a locked entity is changed by another transaction, the locking transaction will not complete. (This is a different QoS compared to a write lock: if a write lock is obtained and the pc flushed, the transaction knows that it will not fail due to another transaction updating the locked entity. Read locks are "more optimistic" and can support higher concurrency if there is minimal contention; many transactions can hold read locks, while only one can hold a write lock.) How can I convince you to change your interpretation of the spec? Anyone else have an opinion? FWIW, EclipseLink passes the test case. - Phil

dezzio (via Nabble) wrote:
Hi Philip, Let's take a closer look. We have two bank accounts, Account[1] and Account[2], shared jointly by customers Innocent[1] and Innocent[2]. The bank's business rule is that no withdrawal can be made that draws the combined total of the accounts below zero. This rule is enforced in the server-side Java application that customers use. At the start of the banking day, the accounts stand at:
Account[1]: balance 100.
Account[2]: balance 50.
Innocent[1] wants to draw out all the money, and asks the application to take 150 from Account[1]. Innocent[2] also wants to draw out all the money, and asks the application to take 150 from Account[2]. By itself, either transaction would conform to the bank's business rule. The application implements the withdrawal logic by doing the following for each transaction.

For Innocent[1]: read Account[1] and Account[2]. Obtain a read lock on Account[2]. Refresh Account[2]. Deduct 150 from Account[1]. Verify the business rule (result: sum of balances = 0). Call JPA commit.

For Innocent[2]: read Account[1] and Account[2]. Obtain a read lock on Account[1]. Refresh Account[1]. Deduct 150 from Account[2]. Verify the business rule (result: sum of balances = 0). Call JPA commit.

Within JPA commit, as seen over the JDBC connections, the following time sequence occurs. (Other time sequences can yield the same result.)

Innocent[1]: Check version of Account[2]: passes.
Innocent[2]: Check version of Account[1]: passes.
Innocent[2]: Update balance of Account[2], withdrawing 150 and setting the balance to -100: does not block.
Innocent[2]: Commit: successful.
Innocent[2]: Receives 150.
Innocent[1]: Update balance of Account[1], withdrawing 150 and setting the balance to -50: does not block.
Innocent[1]: Commit: successful.
Innocent[1]: Receives 150.

After the two transactions:
Account[1]: balance -50.
Account[2]: balance -100.

Clearly the bank would not be happy. What's a developer to do? I think the developer needs an education about what is meant by the JPA spec. What JPA guarantees is that when JPA commit is called, objects with read locks will have their versions checked; objects with write locks will have their versions checked and changed; and objects that have been modified will have their versions checked, their information updated, and their versions changed. Clearly all of these rules were enforced in the above example.
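The time sequence above can be replayed against an in-memory model to see why both version checks pass. This is an illustration only (the names and the version-column modeling are invented, not OpenJPA code): each check compares the version read at lock time with the current version, takes no row lock, and is never repeated.

```java
// Replays the JDBC-level time sequence against an in-memory "database"
// with version columns. Index 0 = Account[1], index 1 = Account[2].
public class ReadLockInterleaving {
    static int[] run() {
        int[] balance = {100, 50};
        int[] version = {0, 0};

        // Versions each transaction saw when it obtained its read lock.
        int innocent1ReadV2 = version[1]; // Innocent[1] locked Account[2]
        int innocent2ReadV1 = version[0]; // Innocent[2] locked Account[1]

        // Innocent[1]: check version of Account[2]: passes (no row lock).
        boolean check1 = (innocent1ReadV2 == version[1]);
        // Innocent[2]: check version of Account[1]: passes.
        boolean check2 = (innocent2ReadV1 == version[0]);

        // Innocent[2]: update Account[2] and commit.
        if (check2) { balance[1] -= 150; version[1]++; }
        // Innocent[1]: update Account[1] and commit. Its check of
        // Account[2] already happened and is not repeated, so the
        // earlier commit by Innocent[2] goes undetected.
        if (check1) { balance[0] -= 150; version[0]++; }

        return balance; // Account[1] = -50, Account[2] = -100
    }
}
```

Both commits succeed and the combined balance ends at -150, exactly the outcome the bank's business rule was supposed to prevent.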
If the developer had used write locks, the two transactions could not both have succeeded. In fact, for the above example and a similar time sequence, if write locks had been used in place of read locks, there would have been deadlock. Now, if in fact I'm wrong about my interpretation of the JPA spec (and it wouldn't be the first time), then you have a case. I'd be curious to know whether other JPA implementations pass your elegant test case, and what they are doing differently that makes it so. Also, if I am wrong about my interpretation, then the JPA TCK needs a test case that will snag this failure, because OpenJPA passes the current JPA TCK.

Cheers,
David

Philip Aston wrote:
Oh yeah - my bad. Try this one instead: Suppose there are a set of Accounts, and a business rule that says that the net balance must be positive. Innocent wants to draw down on Account 1 as far as possible. It read locks the set of Accounts, sums up the balances, and subtracts the positive total from Account 1. Innocent begins its commit, and its read locks are validated. Meanwhile, InnocentToo does the same for Account 2, and commits. Innocent updates Account 1 and finishes its commit. The total in the account summary is now negative, violating the business rule. If read locks worked as I think they should, Innocent would have received an OptimisticLockException.

dezzio wrote:
Hi Philip, When two transactions read the same version of AccountSummary, both cannot successfully update its sum. Only one will successfully commit.

David
