Kevin Grittner writes ("Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]"):
> On Thu, Dec 15, 2016 at 6:09 AM, Ian Jackson <ian.jack...@eu.citrix.com> wrote:
> > [...] Are there other reasons, besides previously suppressed
> > serialisation failures, why commit of a transaction that did only
> > reads might fail ?
>
> I'm pretty confident that if you're not using prepared transactions
> the answer is "no".  [...]  I fear that [for now], if "pre-crash"
> prepared transactions are still open, some of the deductions above
> may not hold.
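Thanks; that matches what I had hoped.  For concreteness, the
client-side pattern I want to be able to promise works is something
like the sketch below (Python with psycopg2, purely illustrative;
the "jobs" table is invented).  The important detail is that the
COMMIT sits inside the retried region, because, as you say, even a
read only serialisable transaction can raise a serialisation failure
at commit, and its results must not be trusted until the commit has
succeeded:

    # Sketch only.  Assumes psycopg2; "jobs" is a made-up table.
    import psycopg2

    SERIALIZATION_FAILURE = '40001'  # SQLSTATE for such failures

    def fetch_consistent_snapshot(conn):
        # Read-only, serialisable; the session settings persist
        # across the retries below.
        conn.set_session(isolation_level='SERIALIZABLE',
                         readonly=True)
        while True:
            try:
                with conn.cursor() as cur:
                    cur.execute("SELECT job, status FROM jobs")
                    rows = cur.fetchall()
                conn.commit()   # may itself raise a serialisation
                                # failure
                return rows     # only now may the caller rely on them
            except psycopg2.Error as e:
                if e.pgcode != SERIALIZATION_FAILURE:
                    raise
                conn.rollback() # discard everything, retry from
                                # scratch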
I think it is reasonable to write in the documentation "if you use
prepared transactions, even read only serialisable transactions might
throw a serialisation failure during commit, and they might do so
after returning data which is not consistent with any global
serialisation".  Prepared transactions are a special-purpose feature
intended for use by external transaction management software, which I
hope could cope with a requirement not to trust data from a read only
transaction until it had been committed.  (Also, frankly, the promise
that a prepared transaction can be committed successfully with "very
high probability" is not sufficiently precise to be of use when
building robust software at the next layer up.)

> One other situation in which I'm not entirely sure, and it would
> take me some time to review code to be sure, is if
> max_pred_locks_per_transaction is not set high enough to
> accommodate tracking all serializable transactions in allocated RAM
> (recognizing that they must often be tracked after commit, until
> overlapping serializable transactions commit), we have a mechanism
> to summarize some of the committed transactions and spill them to
> disk (using an internal SLRU module).  The summarized data might
> not be able to determine all of the above as precisely as the
> "normal" data tracked in RAM.  To avoid this, be generous when
> setting max_pred_locks_per_transaction; not only will it avoid this
> summarization, but it will reduce the amount of summarization of
> multiple page locks in the predicate locking system to relation
> locks.  Coarser locks increase the "false positive" rate of
> serialization failures, reducing performance.

I don't think "set max_pred_locks_per_transaction generously" is a
practical thing to write in the documentation, because the
application programmer, or admin, has no sensible way to calculate
what a sufficiently generous value is.  (An illustrative setting is
sketched in the postscript below.)

You seem to be implying that code relying on the summarised data
might make over-optimistic decisions.  That seems dangerous to me,
but (with my very dim view of database innards) I can't immediately
see how to demonstrate that this possibility is in any case excluded.

But I think this is only tolerable (that is, the application can only
be protected from acting on un-serialisable results returned within
such a transaction) if, after such a spill, COMMIT would still
recalculate the proper answer, in full, and thus be able to belatedly
report the serialisation failure.  Is that the case ?

> > If so presumably it always throws a serialisation failure at that
> > point.  I think that is then sufficient.  There is no need to
> > tell the application programmer they have to commit even
> > transactions which only read.
>
> Well, if they don't explicitly start a transaction there is no need
> to explicitly commit it, period.  [...]

Err, yes, I meant multi-statement transactions.  (Or alternatively,
by "have to commit" I meant to include the implicit commit of an
implicit transaction.)

> If you can put together a patch to improve the documentation, that
> is always welcome!

Thanks.  I hope I will be able to do that.  Right now I am still
trying to figure out what guarantees the application programmer can
be offered.

Regards,
Ian.
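P.S.  For concreteness, the knob in question is an ordinary server
setting, so "being generous" amounts to something like the fragment
below.  The value is invented purely for illustration, which is
rather my point: I know of no principled way to derive it.  Note
also that it only takes effect after a server restart.

    # postgresql.conf fragment -- illustrative value only;
    # the default is 64, and a restart is needed to change it.
    max_pred_locks_per_transaction = 256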