Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-16 Thread Kevin Grittner
I guess the preceding posts leave us with these guarantees about
read-only transactions which we might want to make explicit in the
documentation:

(1)  A serialization failure cannot be initially thrown on a COMMIT
attempt for a read-only transaction; however, if a subtransaction
catches a serialization failure exception on a statement within the
transaction, and doesn't re-throw it or throw any other error which
terminates the transaction, the serialization failure can show up
on the COMMIT attempt.  (NOTE:  We may want to check whether the
"doomed" flag is set on a write conflict for a serializable
transaction.  It seems to me that it should be, but that might have
been missed.  If so, that should be treated as a bug and fixed.)

(2)  A read-only transaction cannot show results inconsistent with
already-committed transactions, so there is no chance of needing to
discard results from a read-only transaction due to failure of the
transaction to commit.

Both of these should hold for both explicit read-only transactions
(which are set to READ ONLY after a BEGIN or START, or due to
default_transaction_read_only being set to true and not
overridden), and implicit read-only transactions.  It is still
worthwhile to explicitly set serializable transactions to read-only
whenever possible, for performance reasons.
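
To illustrate guarantee (1), here is a minimal sketch.  The table
name "accounts" and the plpgsql block are assumptions made for
illustration, not code from this thread; the EXCEPTION block is a
subtransaction, so catching the failure there only delays it:

BEGIN ISOLATION LEVEL SERIALIZABLE, READ ONLY;
DO $$
BEGIN
  PERFORM sum(balance) FROM accounts;  -- may raise 40001 during the read
EXCEPTION WHEN serialization_failure THEN
  RAISE NOTICE 'suppressed 40001 inside the subtransaction';
END$$;
COMMIT;  -- the transaction is already flagged as doomed, so the
         -- serialization failure can show up here instead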

The idea that a serialization failure is not possible on the first
(or only) statement of a read-only transaction was in error, and
should be disregarded.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-16 Thread Robert Haas
On Fri, Dec 16, 2016 at 9:39 AM, Kevin Grittner  wrote:
> Good catch!

Thanks!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-16 Thread Kevin Grittner
On Fri, Dec 16, 2016 at 8:24 AM, Robert Haas  wrote:
> On Thu, Dec 15, 2016 at 9:01 AM, Kevin Grittner  wrote:
>> I also realized some other properties of read-only transactions
>> that might interest you (and that I should probably document).
>> The only way for a read-only transaction to be the one
>> experiencing a serialization failure is if Tout commits before the
>> read-only transaction (which is always Tin) acquires its snapshot,
>> Tpivot is still running when Tin acquires its snapshot, Tpivot
>> commits before a serialization failure involving Tin is detected,
>> and *then* Tin reads a data set affected by the writes of Tpivot.
>> Since a snapshot is only acquired when a statement is run which
>> requires a snapshot, that means that a query run in an implicit
>> transaction (i.e., there is no START or BEGIN statement to
>> explicitly start it; the SELECT or other data-accessing statement
>> automatically starts the transaction so it has a valid context in
>> which to run) that does not write data can never return bad results
>> nor receive a serialization failure.  Nor can those things happen
>> on the *first* or *only* non-writing statement in an explicit
>> transaction.
>
> I don't understand this argument.  Every statement in a read-only,
> serializable transaction runs with the same snapshot, so I don't see
> how it can make a difference whether we're in the middle of running
> the first statement or the tenth.  Tpivot might commit in the middle
> of executing the first statement of the transaction, which might then
> -- later on during the execution of that same statement -- do
> something that causes it to acquire a bunch more SIREAD locks.

Good point.  For the read-only transaction to be the one to receive
a serialization failure, it must acquire a snapshot while Tpivot is
still running, and read a data set which was affected by Tpivot,
and it must do so after Tpivot has successfully committed.  However,
if the commit of Tpivot comes after Tin has parsed the statement,
determined that it is one that requires a snapshot, and acquired
its snapshot, but before it reads the modified data set, Tin could
get the serialization failure.  It was muddled thinking on my part
to treat acquiring the snapshot as atomic with running the
statement which caused the snapshot to be acquired.

Good catch!

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-16 Thread Robert Haas
On Thu, Dec 15, 2016 at 9:01 AM, Kevin Grittner  wrote:
> I also realized some other properties of read-only transactions
> that might interest you (and that I should probably document).
> The only way for a read-only transaction to be the one
> experiencing a serialization failure is if Tout commits before the
> read-only transaction (which is always Tin) acquires its snapshot,
> Tpivot is still running when Tin acquires its snapshot, Tpivot
> commits before a serialization failure involving Tin is detected,
> and *then* Tin reads a data set affected by the writes of Tpivot.
> Since a snapshot is only acquired when a statement is run which
> requires a snapshot, that means that a query run in an implicit
> transaction (i.e., there is no START or BEGIN statement to
> explicitly start it; the SELECT or other data-accessing statement
> automatically starts the transaction so it has a valid context in
> which to run) that does not write data can never return bad results
> nor receive a serialization failure.  Nor can those things happen
> on the *first* or *only* non-writing statement in an explicit
> transaction.

I don't understand this argument.  Every statement in a read-only,
serializable transaction runs with the same snapshot, so I don't see
how it can make a difference whether we're in the middle of running
the first statement or the tenth.  Tpivot might commit in the middle
of executing the first statement of the transaction, which might then
-- later on during the execution of that same statement -- do
something that causes it to acquire a bunch more SIREAD locks. For
example, suppose the query involves calling a function which is
defined like this:

create or replace function getval(t text) returns integer as $$
declare q int;
begin
  execute 'select aid from ' || t || ' limit 1;' into q;
  return q;
end $$ language plpgsql;

Obviously, every call to this function may grab an SIREAD lock on a new object.

Even without recourse to nested queries, I think we don't know which
index or heap pages will be locked at the start of execution.  We
acquire them as we go along.  At any point in that we could acquire
one which creates an rw-conflict with Tpivot, couldn't we?
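
A hedged aside: one way to watch that from a second session is to
poll pg_locks while the serializable query runs; the predicate locks
show up incrementally as SIReadLock entries:

SELECT locktype, relation::regclass, page, tuple
  FROM pg_locks
 WHERE mode = 'SIReadLock';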

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-15 Thread Kevin Grittner
On Thu, Dec 15, 2016 at 9:53 AM, Ian Jackson  wrote:

> I don't think "set max_pred_locks_per_transaction generously" is a
> practical thing to write in the documentation, because the application
> programmer, or admin, has no sensible way to calculate what a
> sufficiently generous value is.

ok

> You seem to be implying that code relying on the summarised data might
> make over-optimistic decisions.  That seems dangerous to me, but (with
> my very dim view of database innards) I can't immediately see how to
> demonstrate that it must in any case be excluded.

No; under any of these conditions, the information on which a
decision to generate a serialization failure is based is summarized
into something less granular, and in all cases we turn any "in
doubt" situations into serialization failures; that is, you can get
false positives (a serialization failure exception where complete
information could have avoided it) but not false negatives (a
serialization anomaly appearing in the database or in the query
results of a transaction which commits).  Based on that alone, I
think it is fair to say that this does not weaken the guarantee that
serialization failures for read-only transactions are not possible
on commit unless the initial exception is suppressed in a
subtransaction, nor the guarantee that anomalous results are not
possible in a
read-only transaction.  The granularity promotion of predicate
locks could not affect the guarantees about never seeing a
serialization failure on the first statement that reads data in a
read-only transaction, but I would need to take a very close look
at how the SLRU summarization of committed transactions might
affect that one -- we lose some of the detail about the relative
order of the commits and snapshot acquisitions, and that might be
enough to allow a false positive on that first statement.  That
would require more study than I can give it this month.

I do remember that Dan ran some saturation workloads to stress this
feature for days and weeks at a time pushing things to the point of
using the SLRU summarization, and I remember thinking it odd that
certain tests generated zero errors on the read-only transactions,
which I'm pretty sure were single-statement transactions.  It was
only during this week's discussion that I had the epiphany that
such an error was only possible if the read-only transaction had multiple
statements; but the fact that such long saturation runs with SLRU
summarization showed no errors on read-only transactions supports
the idea that such summarization doesn't compromise that guarantee.
Unfortunately, it falls short of proof.  :-(

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-15 Thread Ian Jackson
Kevin Grittner writes ("Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry 
on constraint violation [and 2 more messages] [and 1 more messages]"):
> On Thu, Dec 15, 2016 at 6:09 AM, Ian Jackson <ian.jack...@eu.citrix.com> 
> wrote:
> > [...]  Are there other reasons,
> > besides previously suppressed serialisation failures, why commit of a
> > transaction that did only reads[1] might fail ?
> 
> I'm pretty confident that if you're not using prepared transactions
> the answer is "no".  [...] I fear that [for now], if "pre-crash"
> prepared transactions are still open, some of the deductions above
> may not hold.

I think it is reasonable to write in the documentation "if you use
prepared transactions, even read-only serialisable transactions might
throw a serialisation failure during commit, and they might do so
after returning data which is not consistent with any global
serialisation".

Prepared transactions are a special purpose feature intended for use
by external transaction management software which I hope could cope
with a requirement to not trust data from a read-only transaction
until it had been committed.  (Also, frankly, the promise that a
prepared transaction can be committed successfully with "very high
probability" is not sufficiently precise to be of use when building
robust software at the next layer up.)

> One other situation in which I'm not entirely sure, and it would
> take me some time to review code to be sure, is if
> max_pred_locks_per_transaction is not set high enough to
> accommodate tracking all serializable transactions in allocated RAM
> (recognizing that they must often be tracked after commit, until
> overlapping serializable transactions commit), we have a mechanism
> to summarize some of the committed transactions and spill them to
> disk (using an internal SLRU module).  The summarized data might
> not be able to determine all of the above as precisely as the
> "normal" data tracked in RAM.  To avoid this, be generous when
> setting max_pred_locks_per_transaction; not only will it avoid this
> summarization, but it will reduce the amount of summarization of
> multiple page locks in the predicate locking system to relation
> locks.  Coarser locks increase the "false positive" rate of
> serialization failures, reducing performance.

I don't think "set max_pred_locks_per_transaction generously" is a
practical thing to write in the documentation, because the application
programmer, or admin, has no sensible way to calculate what a
sufficiently generous value is.

You seem to be implying that code relying on the summarised data might
make over-optimistic decisions.  That seems dangerous to me, but (with
my very dim view of database innards) I can't immediately see how to
demonstrate that it must in any case be excluded.

But, I think this can only be a problem (that is, it can only cause a
return of un-serialisable results within such a transaction) if, after
such a spill, COMMIT would recalculate the proper answers, in full,
and thus be able to belatedly report the serialisation failure.  Is
that the case ?

> > If so presumably it always throws a serialisation failure at that
> > point.  I think that is then sufficient.  There is no need to tell the
> > application programmer they have to commit even transactions which
> > only read.
> 
> Well, if they don't explicitly start a transaction there is no need
> to explicitly commit it, period.  [...]

Err, yes, I meant multi-statement transactions.  (Or alternatively by
"have to commit" I meant to include the implicit commit of an implicit
transaction.)

> If you can put together a patch to improve the documentation, that
> is always welcome!

Thanks.  I hope I will be able to do that.  Right now I am still
trying to figure out what guarantees the application programmer can be
offered.

Regards,
Ian.




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-15 Thread Kevin Grittner
On Thu, Dec 15, 2016 at 6:09 AM, Ian Jackson  wrote:

> However, in that example, as you seem to allude to, there is still a
> complete serialisation of all the transactions, even the failed T3:
> T1,T2,T3.  The database has detected the problem before returning data
> in T3 that would contradict the serialisation order.

In that case, yes.  Thinking about it, I suspect that a read-only
transaction will never actually return results that need to be
ignored.  Let me take a quick run at an argument to support that.
I'm working from the previous conclusion about read-only
transactions: that the read-only transaction will only be the one
to experience serialization failure if the other two transactions
involved in the "dangerous structure" have already committed
without developing a serialization failure, and the failure will be
detected during a read of the data by the read-only transaction --
never during commit.  Catching the initial exception and trying to
suppress it can cause it to resurface on the commit, but it would
have been initially detected, and a serialization error thrown, on
the read; so even if it is re-thrown on the commit, the initial
exception will already have prevented the data which contradicted
already-committed state from being returned.

I also realized some other properties of read-only transactions
that might interest you (and that I should probably document).
The only way for a read-only transaction to be the one
experiencing a serialization failure is if Tout commits before the
read-only transaction (which is always Tin) acquires its snapshot,
Tpivot is still running when Tin acquires its snapshot, Tpivot
commits before a serialization failure involving Tin is detected,
and *then* Tin reads a data set affected by the writes of Tpivot.
Since a snapshot is only acquired when a statement is run which
requires a snapshot, that means that a query run in an implicit
transaction (i.e., there is no START or BEGIN statement to
explicitly start it; the SELECT or other data-accessing statement
automatically starts the transaction so it has a valid context in
which to run) that does not write data can never return bad results
nor receive a serialization failure.  Nor can those things happen
on the *first* or *only* non-writing statement in an explicit
transaction.

> The thing that was puzzling me, after having slept on it, and before I
> read your mail, was how it could happen that the serialisation failure
> (of a transaction that did only reads) would only be detected at
> commit.  The point about attempts to suppress the serialisation
> failure is part of the answer to that.  Are there other reasons,
> besides previously suppressed serialisation failures, why commit of a
> transaction that did only reads[1] might fail ?

I'm pretty confident that if you're not using prepared transactions
the answer is "no".  Our initial implementation of serializable
prepared transactions was found to have a bug after crash and when
dealing with the persisted data found during recovery.  The safest
way to fix that on stable branches was, until those prepared
transactions which were found during recovery were committed or
rolled back, to be *very* eager to throw serialization failures for
any new transactions which developed a rw-dependency with them.
This can be improved, but I fear that until and unless that
happens, if "pre-crash" prepared transactions are still open, some
of the deductions above may not hold.  If you don't use prepared
transactions, or promptly clean up any that were pending when a
server crashes, that should not be a problem, but it's probably
worth mentioning.
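
As a hedged sketch of that cleanup (the gid below is a made-up
placeholder), prepared transactions left pending from before a crash
can be listed and resolved like this:

SELECT gid, prepared, owner, database FROM pg_prepared_xacts;
ROLLBACK PREPARED 'example-gid';  -- or COMMIT PREPARED 'example-gid'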

One other situation in which I'm not entirely sure, and it would
take me some time to review code to be sure, is if
max_pred_locks_per_transaction is not set high enough to
accommodate tracking all serializable transactions in allocated RAM
(recognizing that they must often be tracked after commit, until
overlapping serializable transactions commit), we have a mechanism
to summarize some of the committed transactions and spill them to
disk (using an internal SLRU module).  The summarized data might
not be able to determine all of the above as precisely as the
"normal" data tracked in RAM.  To avoid this, be generous when
setting max_pred_locks_per_transaction; not only will it avoid this
summarization, but it will reduce the amount of summarization of
multiple page locks in the predicate locking system to relation
locks.  Coarser locks increase the "false positive" rate of
serialization failures, reducing performance.
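
By way of illustration only -- the number below is an arbitrary
assumption, not a recommendation -- raising the setting in
postgresql.conf looks like this:

# postgresql.conf; changing this requires a server restart
max_pred_locks_per_transaction = 256   # default is 64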

> [1] I mean to include transactions which don't update even if they're
> not explicitly declared `read only', so that the application retained
> (until it said to commit) the option to try to make changes.

There is an attempt to recognize, at commit time, *implicit*
read-only transactions -- those that, in spite of not being
*declared* as READ ONLY, never wrote any data.  Although these 

Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-15 Thread Ian Jackson
Kevin Grittner writes ("Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry 
on constraint violation [and 2 more messages] [and 1 more messages]"):
> As Robert pointed out, a read-only transaction cannot normally be
> aborted by a serialization failure on COMMIT.  Under exceptional
> conditions, like an attempt to suppress the serialization failure,
> you might see the commit aborted, though.

Thanks for the detailed explanation.  Sorry for missing the 2nd
example in that page, which does indeed show a read-only transaction
failing:

However, in that example, as you seem to allude to, there is still a
complete serialisation of all the transactions, even the failed T3:
T1,T2,T3.  The database has detected the problem before returning data
in T3 that would contradict the serialisation order.

The thing that was puzzling me, after having slept on it, and before I
read your mail, was how it could happen that the serialisation failure
(of a transaction that did only reads) would only be detected at
commit.  The point about attempts to suppress the serialisation
failure is part of the answer to that.  Are there other reasons,
besides previously suppressed serialisation failures, why commit of a
transaction that did only reads[1] might fail ?

[1] I mean to include transactions which don't update even if they're
not explicitly declared `read only', so that the application retained
(until it said to commit) the option to try to make changes.

Supposing I understand your `doomed' flag correctly, I think it is
then probably possible to construct an argument that proves that
allowing the application to trap and suppress serialisation failures
does not make it harder to provide coherency guarantees.

Or to put it another way: does pgsql already detect serialisation
problems (in transactions which only read) at the point where it would
otherwise return data not consistent with any serialisation order ?
(As it does in the `Rollover' example.)

If so presumably it always throws a serialisation failure at that
point.  I think that is then sufficient.  There is no need to tell the
application programmer they have to commit even transactions which
only read.

If my supposition is right then I will try to develop this argument
more formally.  I think that would be worthwhile because the converse
property is very surprising to non-database programmers, and would
require very explicit documentation by pgsql, and careful attention by
application programmers.  It would be nice to be able to document a
stronger promise.

Ian.




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-14 Thread Kevin Grittner
On Wed, Dec 14, 2016 at 11:12 AM, Ian Jackson <ian.jack...@eu.citrix.com> wrote:
> Kevin Grittner writes ("Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: 
> Retry on constraint violation [and 2 more messages] [and 1 more messages]"):
>> On Wed, Dec 14, 2016 at 10:20 AM, Ian Jackson <ian.jack...@eu.citrix.com> 
>> wrote:

>> I would alter that slightly to:
>>
>> There is a consistent serialization of all serializable
>> transactions which successfully commit.
>
> Here `serializable' means SERIALIZABLE ?

I'm not entirely sure what you mean to convey by the
capitalization, so I'll just say that 'serializable' there referred
to the transaction isolation level.  (I *think* that was what you
were getting at.)

>> For examples, please see this Wiki page.  You might be particularly
>> interested in the examples in the "Read Only Transactions" section:
>>
>> https://wiki.postgresql.org/wiki/SSI
>
> Thanks.  I read that part of the wiki page.  But in that example, we
> are told that T1 will be aborted, not T3.

That is true in the first ("Deposit Report") example in that
section.  The second ("Rollover") example shows the read-only
transaction (T3) being the one which is aborted and retried.

> Can it happen that a transaction which does not make any update
> attempts, will see "impossible" data, and that this is only detected
> at COMMIT ?  Does that apply even to READ ONLY transactions ?

As Robert pointed out, a read-only transaction cannot normally be
aborted by a serialization failure on COMMIT.  Under exceptional
conditions, like an attempt to suppress the serialization failure,
you might see the commit aborted, though.

Also as pointed out by Robert, the state seen by a read-only
transaction doesn't lack internal consistency, but it will be
rolled back with a serialization failure exception if it can show a
state which is inconsistent with some successfully-committed state
of the database.

In the "Rollover" example, the first time T3 is attempted its
SELECT it would have shown rows containing 100 and 11, were it not
canceled.  That could have been consistent with the earlier state
of 100 and 10 and the business rules that the first number can only
change by having the second number added to it, and the second
number can only change by being incremented; but that state and
those rules don't fit with the *later* state of 110, 11, because
that requires that the second number be added to the first before
it was incremented, and if we allow the result of the first T3
transaction attempt to be seen, it would tell us that the increment
happened first.  Since we've already allowed successful commit of
transactions putting things into a state only consistent with
adding 10 to 100 before incrementing 10 to 11, we cancel the
read-only transaction and start over.  This time it will show something
consistent with the apparent order of execution of the other
transactions.
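
A rough SQL sketch of that schedule (the table name, ids, and exact
statements are assumptions made for illustration; the wiki page has
the canonical version), with each labelled block run in its own
session:

-- setup
CREATE TABLE rollover (id int PRIMARY KEY, n int);
INSERT INTO rollover VALUES (1, 100), (2, 10);

-- Session T1: add the second number to the first (do not commit yet)
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE rollover
   SET n = n + (SELECT n FROM rollover WHERE id = 2)
 WHERE id = 1;                      -- reads 10, writes 110 (uncommitted)

-- Session T2: increment the second number and commit first
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE rollover SET n = n + 1 WHERE id = 2;
COMMIT;

-- Session T3 (read only): its snapshot here sees 100 and 11
BEGIN ISOLATION LEVEL SERIALIZABLE, READ ONLY;
SELECT n FROM rollover WHERE id = 2;

-- Session T1: now commit; the durable state becomes 110, 11
COMMIT;

-- Session T3: reading the first number would expose the inconsistent
-- 100, 11 view, so T3 is cancelled with a serialization failure and
-- shows a consistent result when retried
SELECT n FROM rollover WHERE id = 1;
COMMIT;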

Note that neither the order in which the first two transactions
started (T1->T2) nor the order in which they committed (T2->T1)
determines the apparent order of execution.  It is the
rw-dependencies that control: T1 reads a version of the data from
before T2's work is applied, so T1 *appears* to have run before T2
in the apparent order of execution.
Since both are successfully committed with that apparent order of
execution, a third transaction (T3), which sees the work of T2
(since it had committed when the snapshot for T3 was taken) but not
T1 (since it had not committed when the snapshot for T3 was taken)
cannot be allowed to proceed.

I know an example like that can cause one's head to hurt a bit
(been there), but even if you don't fight your way to a full grasp
of that case, it will hopefully give you some idea of both why we
can have high concurrency with this approach, and why it is
necessary to discard results from failed transactions.

>> Once a serialization failure occurs the transaction is flagged as
>> "doomed" and will not, under any circumstances be allowed to
>> commit.  If you find any exception to that, please report it as a
>> bug.
>
> Right.  I think this prevents any exception-catching arrangements from
> suppressing the serialisation failure.  Since AIUI it is not possible
> to run the outer COMMIT from within an exception trapping context.

Right.

> If it /is/ possible to run that outer COMMIT in a way which swallows
> the exception then [...]

That is not possible, as I understand things.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-14 Thread Robert Haas
On Wed, Dec 14, 2016 at 11:20 AM, Ian Jackson  wrote:
>> I'm not sure Ian is intentionally taking that position.  Not all of us
>> are as familiar with the ramifications of every serializability
>> behavior we may want as you are.
>
> Indeed.  I think it's fair to say that I'm totally unfamiliar with the
> ramifications.  You might also fairly characterise me as naive; I had
> certainly made some assumptions which it seems are known (around here)
> to be both false and difficult to make true.

We can't all be database gurus...

> Let me try to summarise my understanding of what the developers think
> they can and intend to promise, about SERIALIZABLE transactions:
>
>  There is a consistent serialisation of all transactions which
>  successfully commit; or which do not attempt to make any changes.

I think we've figured out that it is limited to transactions which
successfully commit plus read-only transactions that roll back at the
top level but never roll back a subtransaction.  And I'm not sure
there aren't other exceptions.  Basically, be very wary about relying
on information extracted from a transaction that rolled back: there
might be dragons there.

>  A "consistent serialisation" means that there is an order in which
>  the same transactions might have been executed giving exactly the
>  same answers.  This includes, if applicable, the same errors.  (The
>  database is free to generate synchronisation failure errors 40P01 and
>  40001 whenever it chooses.)

Seems right.

>  A transaction which attempts to make any changes, and which does not
>  commit (whether because the application never asks for COMMIT, or
>  because of reported synchronisation failure) might see internally
>  inconsistent data, or an internally-consistent view which is not
>  compatible with any serialisation of other transactions.  An
>  application which needs a coherent view should not rely on any of the
>  information from such a transaction.

I think it will see an internally-consistent view which is not
compatible with any global serial ordering.  I don't see why it would
see an internally-inconsistent view; inconsistent how?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-14 Thread Ian Jackson
Kevin Grittner writes ("Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry 
on constraint violation [and 2 more messages] [and 1 more messages]"):
> On Wed, Dec 14, 2016 at 10:20 AM, Ian Jackson <ian.jack...@eu.citrix.com> 
> wrote:
> 
> > Let me try to summarise my understanding of what the developers think
> > they can and intend to promise, about SERIALIZABLE transactions:
> >
> >  There is a consistent serialisation of all transactions which
> >  successfully commit; or which do not attempt to make any changes.
> >
> >  A "consistent serialisation" means that there is an order in which
> >  the same transactions might have been executed giving exactly the
> >  same answers.  This includes, if applicable, the same errors.  (The
> >  database is free to generate synchronisation failure errors 40P01 and
> >  40001 whenever it chooses.)
> 
> I would alter that slightly to:
> 
> There is a consistent serialization of all serializable
> transactions which successfully commit.

Here `serializable' means SERIALIZABLE ?

> >  A transaction which attempts to make any changes, and which does not
> >  commit (whether because the application never asks for COMMIT, or
> >  because of reported synchronisation failure) might see internally
> >  inconsistent data, or an internally-consistent view which is not
> >  compatible with any serialisation of other transactions.  An
> >  application which needs a coherent view should not rely on any of the
> >  information from such a transaction.
> 
> Even a read-only transaction can see a state that is not consistent
> with business rules (as enforced in the software) given that some
> particular later state is reached.
> 
> For examples, please see this Wiki page.  You might be particularly
> interested in the examples in the "Read Only Transactions" section:
> 
> https://wiki.postgresql.org/wiki/SSI

Thanks.  I read that part of the wiki page.  But in that example, we
are told that T1 will be aborted, not T3.

Can it happen that a transaction which does not make any update
attempts, will see "impossible" data, and that this is only detected
at COMMIT ?  Does that apply even to READ ONLY transactions ?

> >  Serialisation failures in subtransactions might cause the parent
> >  transaction to experience a serialisation failure too.
> 
> There is currently at least one bug

Right.  I was trying to capture the intent, modulo bugs.

> Once a serialization failure occurs the transaction is flagged as
> "doomed" and will not, under any circumstances be allowed to
> commit.  If you find any exception to that, please report it as a
> bug.

Right.  I think this prevents any exception-catching arrangements from
suppressing the serialisation failure.  Since AIUI it is not possible
to run the outer COMMIT from within an exception trapping context.

If it /is/ possible to run that outer COMMIT in a way which swallows
the exception then this is not a practical problem but the wording
ought to be changed to refer to the success of the COMMIT statement.

Ian.




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-14 Thread Kevin Grittner
On Wed, Dec 14, 2016 at 10:20 AM, Ian Jackson  wrote:

> Let me try to summarise my understanding of what the developers think
> they can and intend to promise, about SERIALIZABLE transactions:
>
>  There is a consistent serialisation of all transactions which
>  successfully commit; or which do not attempt to make any changes.
>
>  A "consistent serialisation" means that there is an order in which
>  the same transactions might have been executed giving exactly the
>  same answers.  This includes, if applicable, the same errors.  (The
>  database is free to generate synchronisation failure errors 40P01 and
>  40001 whenever it chooses.)

I would alter that slightly to:

There is a consistent serialization of all serializable
transactions which successfully commit.

>  A transaction which attempts to make any changes, and which does not
>  commit (whether because the application never asks for COMMIT, or
>  because of reported synchronisation failure) might see internally
>  inconsistent data, or an internally-consistent view which is not
>  compatible with any serialisation of other transactions.  An
>  application which needs a coherent view should not rely on any of the
>  information from such a transaction.

Even a read-only transaction can see a state that is not consistent
with business rules (as enforced in the software) given that some
particular later state is reached.

For examples, please see this Wiki page.  You might be particularly
interested in the examples in the "Read Only Transactions" section:

https://wiki.postgresql.org/wiki/SSI

>  Serialisation failures in subtransactions might cause the parent
>  transaction to experience a serialisation failure too.

There is currently at least one bug which may allow serialization
anomalies into the database if a constraint violation error is
thrown in a subtransaction and that subtransaction catches and
suppresses that exception and rolls back its work without throwing
an error.  I expect that any bugs of this type will be fixed in
a minor release set soon -- probably the next one that is released.
Note that I don't think that an exception from any source other
than a declarative constraint can cause this type of problem, and
that other conditions must exist in combination with this to create
a serialization anomaly.

A serialization failure within any subtransaction will ensure the
top level transaction will fail, even if there is an attempt to
catch this exception and commit the top level transaction.  It
would be possible to catch a serialization failure exception and
throw some *other* exception to terminate the transaction; however
(to step into very convoluted territory), if that other exception is
caught and suppressed, the serialization failure error would still
occur on the commit attempt.
Once a serialization failure occurs the transaction is flagged as
"doomed" and will not, under any circumstances be allowed to
commit.  If you find any exception to that, please report it as a
bug.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

2016-12-14 Thread Ian Jackson
Robert Haas writes ("Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on 
constraint violation [and 2 more messages]"):
> On Wed, Dec 14, 2016 at 9:26 AM, Kevin Grittner <kgri...@gmail.com> wrote:
> > considered.  Essentially, the position Ian has been taking is that
> > PostgreSQL should provide  the guarantee of (2) above.  As far as I
> > can see, that would require using S2PL -- something the community
> > ripped out of PostgreSQL because of its horrible performance and
> > has refused to consider restoring (for good reason, IMO).
> 
> I'm not sure Ian is intentionally taking that position.  Not all of us
> are as familiar with the ramifications of every serializability
> behavior we may want as you are.

Indeed.  I think it's fair to say that I'm totally unfamiliar with the
ramifications.  You might also fairly characterise me as naive; I had
certainly made some assumptions which it seems are known (around here)
to be both false and difficult to make true.

Robert Haas writes ("Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on 
constraint violation [and 2 more messages]"):
>  For example, imagine a transaction that queries pg_stat_activity or
> pg_locks and then makes decisions based on the contents thereof.

I agree that such behaviour is unreasonable and should be excluded from
consistency guarantees!  I don't think (even very naive) application
programmers would disagree.  From my point of view, those tables are
`part of the innards', and expecting transactional behaviour from them
is clearly too optimistic.  (I guess that should be made clear somewhere
near where these kinds of system tables are mentioned in the docs.)


Let me try to summarise my understanding of what the developers think
they can and intend to promise, about SERIALIZABLE transactions:

 There is a consistent serialisation of all transactions which
 successfully commit; or which do not attempt to make any changes.

 A "consistent serialisation" means that there is an order in which
 the same transactions might have been executed giving exactly the
 same answers.  This includes, if applicable, the same errors.  (The
 database is free to generate synchronisation failure errors 40P01 and
 40001 whenever it chooses.)

 A transaction which attempts to make any changes, and which does not
 commit (whether because the application never asks for COMMIT, or
 because of reported synchronisation failure) might see internally
 inconsistent data, or an internally-consistent view which is not
 compatible with any serialisation of other transactions.  An
 application which needs a coherent view should not rely on any of the
 information from such a transaction.

 "Transactions which do not attempt to make any changes" excludes any
 transactions whose subtransactions try to make changes.
 Serialisation failures in subtransactions might cause the parent
 transaction to experience a serialisation failure too.  "Try to make
 changes" includes even DML statements which are bound to fail.

Is that an accurate statement of the current thinking ?

Ian.

