On Wed, Jun 21, 2006 at 07:30:30AM +1000, Tim Churches wrote:
> > 1) read the data and display it
> > 2) do any amount of user interaction
> > 3) only then lock the row and write it
>
> My point is that there is no need to explicitly lock the row at step 3.
> Just put a transaction around your update at step 3 and let the PG MVCC
> features look after the atomicity of the transaction.

I see. It seems you are right. The reason I initially put in
the "select for update" is that I was worried about *other*
code not "properly" using transactions. Hence I lock the row
just for good measure.
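For the archives, here is a rough sketch of the two patterns we are
talking about (table and column names are invented, this is not
actual GNUmed code):

    -- what I currently do "for good measure": lock first, then write
    BEGIN;
    SELECT narrative FROM clin_narrative WHERE pk = 1 FOR UPDATE;
    UPDATE clin_narrative SET narrative = 'new text' WHERE pk = 1;
    COMMIT;

    -- what Tim points out is already sufficient - the UPDATE itself
    -- takes a row-level lock for the duration of the transaction anyway:
    BEGIN;
    UPDATE clin_narrative SET narrative = 'new text' WHERE pk = 1;
    COMMIT;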
> Yes, no argument there. The issue is what you do when you detect that
> the data has been updated since you read it. That becomes a user
> interface issue which is difficult to solve generically - but unless
> you worry about what the user sees when you detect this condition, there
> is little point in trying to detect it.

Well, we can't pretend nothing went wrong either.

> Remember the old medical maxim:
> Never do a test if you don't know how to interpret or treat the results.

But if I know the test is indicated, I had better get the patient
to someone who does know what to do. Ideally, I'd send along
up-to-date test results.

> >> I am pretty sure that Postgres offers more fine-grained concurrency
> >> control than this.
> > No. Better-than-row-level was introduced in 8.1 or so.
> Yeah, but MVCC has been in Postgres since the beginning.

Sure, but - see above - I wasn't sure it would entirely suffice in
the face of misbehaving *other* code.

> >> Doing this row-level locking is ignoring the MVCC
> >> feature of Postgres,
> > Absolutely not.
> I am not a Postgres expert, but I think you may be mistaken on this point.

I may well be, given the way you mean it.

> >> I thought that SELECT FOR UPDATE is only supported in Postgres in order to
> >> allow applications designed for less sophisticated database backends to
> >> be ported to PG...
> > Well, yes and no. Thinking about it we just *might* shove
> > the XMIN checking into the UPDATE query itself and detect
> > things that way. Let me think about this.
> Or get rid of SELECT FOR UPDATE entirely...

Well, yeah, that would be the point of putting the XMIN check into
the update :-) (a rough sketch of what I mean follows at the end of
this mail)

I am still not sure I can trust transactions *alone* in the face of
misbehaving code. I'd be happy to be proven wrong.

>> Writers only block each other when updating the same row.
> Yeah, and PG looks after this automatically - no need to manually issue
> a row lock as you are doing.

When I start a transaction doing updates, can I be sure that other
code *not* using transactions would be safe? But wait, other code
*always* uses transactions, even if only as single-statement
auto-commit. Hm. Perhaps it should work after all.

> > Serializable is logically cleaner. (but produces more work)
> >
> > Exactly what we do. We run serializable transactions. No
> > bullshit in our database.
>
> Yeah, and you can set that behaviour as a config option in PG...

But we cannot rely on it being set to on. That would be like relying
on MySQL having InnoDB tables by default. Hence we set this behaviour
ourselves whenever opening a new connection. That, however, is
unrelated to whether we need "select for update".

> >>> - 3) checking XMIN
> > It cannot because at the time the second transaction runs
> > the first one DOES NOT HOLD A LOCK ANYMORE on the row in
> > question. Because it has already committed. And that is what
> > we need to detect - that something else was committed
> > between our data retrieval and our attempt at storing
> > modifications thereof.
>
> Karsten, read what I said. I did not say that you did not need to use
> XMIN or some other mechanism to check for modified data before you
> update, just that you don't need to worry about issuing an explicit row
> lock before updating.

I understand now, after more careful reading.

> OK, it's your computer science project - I'm just an outside observer
> offering unsolicited comments. But if people as smart as Ian are
> bamboozled by GNUmed's complexity,

This complexity is actually quite hidden and can be pretty much ignored.
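To make the XMIN idea concrete, it would go roughly like this (again,
table and column names are invented, and I am including the
serializable setting we apply per connection - this is a sketch, not
our actual middleware code):

    -- on every new connection, rather than trusting the server default:
    SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    -- at read time, remember the row version along with the data:
    SELECT xmin, narrative FROM clin_narrative WHERE pk = 1;

    -- at write time, the WHERE clause only matches if no other
    -- transaction has committed a change to this row in the meantime
    -- (such a change would have given the row a new xmin); a result of
    -- "UPDATE 0" is how the middleware detects the concurrent
    -- modification and can then ask the user what to do:
    BEGIN;
    UPDATE clin_narrative
       SET narrative = 'edited text'
     WHERE pk = 1
       AND xmin = 12345;   -- the value remembered at read time
    COMMIT;

No SELECT FOR UPDATE anywhere, and a concurrent committer is detected
even if that other code never bothers with explicit transactions,
since its committed update will have changed the row's xmin.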
And BTW I was just implementing Horst's original concept on this one.
When he said/suggested that we want to be absolutely sure no crap
enters the database, I went out and did it. I can't help that I can't
do it any better - I don't have an education in such matters. I can
only write it the way I understand it. Someone else could have done
it, but no one did. I am always happy to simplify when someone can
show me that doing so does not sacrifice data integrity.

Ian doesn't really seem to like the whole *concept* of how the
middleware is currently put together and I cannot blame him for it. I
wish we had better middleware but no one has written it for me.

> (and to write some unit tests while you're at it)?

Oh, we do have some test code, particularly for this detection stuff :)

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
