Thanks for the reply.

On Wed, Dec 14, 2016 at 9:26 AM, Kevin Grittner <kgri...@gmail.com> wrote:
> considered.  Essentially, the position Ian has been taking is that
> PostgreSQL should provide the guarantee of (2) above.  As far as I
> can see, that would require using S2PL -- something the community
> ripped out of PostgreSQL because of its horrible performance and
> has refused to consider restoring (for good reason, IMO).
I'm not sure Ian is intentionally taking that position.  Not all of us
are as familiar with the ramifications of every serializability
behavior we may want as you are.

> Huh, I had to think about that for a minute, but you are right
> about never rolling back a read-only transaction at commit time ...

Yeah, I had to think about it for about an hour ... and look at the
code and README-SSI.  So if you only had to think about it for a
minute, you win.  :-)

>> if either transaction had executed before the other in a
>> serial schedule, the second transaction in the schedule would have had
>> to have seen (A, B') or (A', B) rather than (A, B), but that's not
>> what happened.  But what if each of T1 and T2 did the reads in a
>> subtransaction, rolled it back, and then did the write in the main
>> transaction and committed?  The database system has two options.
>> First, it could assume that the toplevel transaction may have relied
>> on the results of the aborted subtransaction.
>
> This is what we do in our implementation of SSI.  Predicate locks
> from reads within subtransactions are not discarded, even if the
> work of the subtransaction is otherwise discarded.

Oh, interesting.  Just to be clear, I'm not lobbying to change that; I
was guessing (very late at night) what decision you probably made and
it seems I was incorrect.  But doesn't that imply that if a read fails
in a subtransaction with serialization_failure, the parent MUST also
be killed with serialization_failure?  If not, then I don't see how
you can escape having results that, overall, are not serializable.

> What we missed is that, while we took action to try to ensure that
> a serialization failure could not be discarded, we didn't consider
> that a constraint violation exception which was preventing an
> anomaly *could* be discarded.
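Just so we're picturing the same failure mode, here's roughly the kind
of schedule I have in mind (table and values are made up for
illustration): two concurrent SERIALIZABLE transactions each probe for
a key, try the insert inside a subtransaction, and discard the
unique_violation -- the very error that was standing in for the
undetected anomaly:

```sql
-- Hypothetical schema, for illustration only:
CREATE TABLE pk_test (id int PRIMARY KEY);

-- Each of two concurrent sessions runs this:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT 1 FROM pk_test WHERE id = 1;  -- both sessions see no row
SAVEPOINT probe;
INSERT INTO pk_test VALUES (1);      -- the slower session gets
                                     -- unique_violation here, not
                                     -- serialization_failure
ROLLBACK TO SAVEPOINT probe;         -- the exception is caught and
                                     -- discarded with the subtransaction
-- the session then acts on its earlier "no row exists" read and
-- commits, with no serialization failure ever raised
COMMIT;
```

(The same discard happens implicitly when the INSERT sits inside a
PL/pgSQL EXCEPTION block, which rolls back to an internal savepoint.)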
> Effectively, this has escalated
> detection of serialization failures involving reads which are part
> of enforcing declarative constraints from the level of a feature
> request to cure a significant annoyance for those using them, to a
> bug fix necessary to prevent serialization anomalies.

Hmm.  I see.

> Fortunately, Thomas Munro took an interest in the problem as it
> related to duplicates on primary keys, unique constraints, and
> unique indexes, and put forward a patch that cured the defect in
> the common cases, and provided an easy workaround for the one case
> he was unable to fix in that initial patch.  To finish the work for
> these constraints and indexes, I think we need to add predicate
> locking while descending to the insertion point during the check
> for an existing duplicate.

I suggest adding something about this to README-SSI as a known issue.

> I'm not sure about foreign key constraints and exclusion
> constraints.  I have neither seen a failure related to either of
> these, nor proven that there cannot be one.  Without having
> assessed the scope of the problems (if any) in those constraints,
> it's hard to say what needs to be done or how much work it is.

I think it's a natural result of implementing techniques from academic
research papers that there will sometimes be open research questions.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers