> laurenz.a...@cybertec.at wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> …Internet Search turned up this 2019 post by Laurenz Albe—but nothing else 
>> at all.
>> 
>> https://www.cybertec-postgresql.com/en/triggers-to-enforce-constraints 
>> 
>> (This is why I CC'd you, Laurenz.)
> 
> So I guess I should answer.

Thanks for replying to my original post with this subject line, Laurenz. Sorry 
to bring up notions that you wrote about three years ago. I judged that, 
because I referred to those notions, it would be polite to copy you—especially 
because I interpreted (maybe incorrectly) what you had written.

> About the starting paragraph of your mail: Constraint triggers are a 
> syntactic leftover from the way that triggers are implemented in PostgreSQL. 
> There is different syntax now, but it was decided to leave constraint 
> triggers, since they may have some use.

If constraint triggers are to have sound uses in the hands of an informed application 
programmer, then their semantics must be clearly defined. And you do say that they are 
supported. David Johnson argues that, as long as you read the right snippets from 
various parts of the doc and synthesize their joint meaning, the semantics are defined. 
Yes, David, I can accept that, with a loud caveat about the answer to my (a)-or-(b) 
question below.

I re-read the penultimate paragraph in Laurenz's post:

«
By making the trigger INITIALLY DEFERRED, we tell PostgreSQL to check the 
condition at COMMIT time.
»
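
For concreteness, here is a minimal sketch of the kind of deferred constraint trigger 
that I have in mind. The orders table, the "at most five orders per customer" rule, and 
all of the names are my own invention for this discussion, not taken from Laurenz's post:

CREATE TABLE orders (
   order_id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
   customer_id bigint NOT NULL
);

CREATE FUNCTION check_max_orders() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
   -- Query the trigger's own base table from a "for each row" trigger.
   IF (SELECT count(*) FROM orders WHERE customer_id = NEW.customer_id) > 5 THEN
      RAISE EXCEPTION 'customer % would have more than five orders', NEW.customer_id;
   END IF;
   RETURN NULL;   -- the return value of an AFTER ROW trigger is ignored
END;
$$;

CREATE CONSTRAINT TRIGGER check_max_orders
   AFTER INSERT OR UPDATE ON orders
   DEFERRABLE INITIALLY DEFERRED
   FOR EACH ROW
   EXECUTE FUNCTION check_max_orders();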

I have always understood that (in Postgres and any respectable RDBMS) commits 
in a multi-session environment are always strictly serialized—irrespective of 
the transaction's isolation level. Am I correct to assume this is the case for 
Postgres? I took "at COMMIT time" to mean "as part of the strictly serialized 
operations that implement a session's COMMIT". But I see (now) that you argue 
that this is not the case, thus:

«
This will reduce the window for the race condition a little, but the problem is 
still there. If concurrent transactions run the trigger function at the same 
time, they won’t see each other’s modifications.
»

I take what you say in your post to mean that each session executes its 
deferred constraint check (by extension, not just for constraint triggers but 
for all deferred constraint cases) momentarily *before* COMMIT so that the 
effect is only to reduce the duration of the race condition window rather than 
to eliminate it.
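
Here is the interleaving that I understand you to be describing, using my hypothetical 
orders example from above and assuming that customer 42 already has four orders:

-- session 1:
BEGIN;
INSERT INTO orders (customer_id) VALUES (42);

-- session 2:
BEGIN;
INSERT INTO orders (customer_id) VALUES (42);

-- sessions 1 and 2, at effectively the same moment:
COMMIT;

-- If the two deferred checks run concurrently, each sees only the four
-- committed rows plus its own new row (five in total), so both checks pass
-- and customer 42 ends up with six orders. If one COMMIT completes before
-- the other session's check takes its snapshot, the second check sees six
-- rows and raises the exception. So the window is narrow, but it is there.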

So it all depends on a lawyerly reading of the wording "at COMMIT time". The 
current CREATE TABLE doc says this:

«
If the constraint is INITIALLY DEFERRED, it is checked only at the end of the 
transaction.
»

The wording "at the end of the transaction" is not precise enough to 
adjudicate—and so the key question remains: Is a deferred constraint checked:

(a) as part of the strictly serialized operations that implement a session's 
COMMIT?

or

(b) momentarily *before* COMMIT and not within the serialized COMMIT execution?

So… (asking the wider audience) is the answer (a) or (b)? And if it's (b), why? 
After all, (b) brings the race condition risk. Is (a) simply not feasible?

> [Lots of ruminations and wandering thoughts]
> 
> Sorry, that was too much for me to comment on - that would require a 
> mid-sized article.

Oh… I'm sorry to hear that I rambled and lost clarity. I find it impossible to say 
what I want to say, with the precision I strive for, without sacrificing brevity. I 
always find that I can improve my wording with successive reads. But life is short 
and, eventually, I just have to publish and be damned.

>> Is my entire concept (and Laurenz's too) fundamentally flawed? Specifically, 
>> is querying a trigger's base table in a "for each row" trigger fundamentally 
>> unsound and not supported?
> 
> My post claims that constraint triggers alone are *not* a sufficient solution 
> to validate constraints - you need additional locking or SERIALIZABLE 
> isolation to make that work reliably.

This doesn't seem to be what you wrote. These two headings

> Solving the problem with “optimistic locking” (which you explain means using 
> SERIALIZABLE)

and

> What about these “constraint triggers”?

read as if they describe orthogonal schemes, where the constraint trigger approach 
does not rely on SERIALIZABLE.

As I reason it, if you use the SERIALIZABLE approach, then an ordinary 
immediate AFTER EACH STATEMENT trigger will work fine—precisely because of how 
that isolation level is defined. So here, a deferred constraint trigger isn't 
needed and brings no value.
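
This, for example, is what I mean, reusing my hypothetical orders table. The rule is 
enforced by an ordinary, non-deferrable statement-level trigger and relies entirely on 
SERIALIZABLE to catch the interleaving:

CREATE FUNCTION check_max_orders_stmt() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
   IF EXISTS (SELECT 1 FROM orders
              GROUP BY customer_id
              HAVING count(*) > 5) THEN
      RAISE EXCEPTION 'a customer would have more than five orders';
   END IF;
   RETURN NULL;
END;
$$;

CREATE TRIGGER check_max_orders_stmt
   AFTER INSERT OR UPDATE ON orders
   FOR EACH STATEMENT
   EXECUTE FUNCTION check_max_orders_stmt();

-- Every writing transaction then runs like this:
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO orders (customer_id) VALUES (42);
COMMIT;

If two such transactions race, each reads rows that the other writes, so (as I 
understand the serializable snapshot isolation machinery) one of them is aborted with 
a serialization failure and must be retried; the rule cannot be silently violated.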

This implies that if a deferred constraint trigger is to have any utility, it must be 
safe to use it (as I tested it) at the READ COMMITTED level. I do see, though, that 
even if I appear to be testing this, my test cannot be reliable: in application code I 
cannot open up, and then exploit, a race condition window after COMMIT has been issued. 
(I *am* able to do this to expose the fact that "set constraints all immediate" is 
unsafe.)
