Re: error in trigger creation

2024-04-21 Thread Adrian Klaver

On 4/21/24 14:21, Tom Lane wrote:

Adrian Klaver  writes:

On 4/21/24 11:20, yudhi s wrote:

So in this case I was wondering whether an "event trigger" can pose any
additional threat, and whether that is why no privilege like "create
trigger" exists for it in Postgres, so that it should be treated cautiously?



An event trigger runs as a superuser and executes a function that in
turn can do many things; you do the math on the threat level.


As a trivial example: an event trigger could prevent the legitimate
superuser(s) from doing anything at all in that database, just by
blocking all their commands.  This might not even require malicious
intent, merely faulty coding --- but the opportunity for malicious
intent is staggeringly large.


As an FYI to the above:

https://www.postgresql.org/docs/current/sql-createeventtrigger.html

"Event triggers are disabled in single-user mode (see postgres). If an 
erroneous event trigger disables the database so much that you can't 
even drop the trigger, restart in single-user mode and you'll be able to 
do that."





regards, tom lane


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: error in trigger creation

2024-04-21 Thread Tom Lane
Adrian Klaver  writes:
> On 4/21/24 11:20, yudhi s wrote:
>> So in this case I was wondering whether an "event trigger" can pose any
>> additional threat, and whether that is why no privilege like "create
>> trigger" exists for it in Postgres, so that it should be treated cautiously?

> An event trigger runs as a superuser and executes a function that in
> turn can do many things; you do the math on the threat level.

As a trivial example: an event trigger could prevent the legitimate
superuser(s) from doing anything at all in that database, just by
blocking all their commands.  This might not even require malicious
intent, merely faulty coding --- but the opportunity for malicious
intent is staggeringly large.
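As a sketch of how little it takes (names here are illustrative), an event
trigger like this would reject DDL commands across the database, superusers
included:

CREATE FUNCTION block_everything()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
BEGIN
  -- Fires before any DDL command runs and unconditionally aborts it.
  RAISE EXCEPTION 'DDL is disabled in this database';
END;
$$;

CREATE EVENT TRIGGER block_all ON ddl_command_start
EXECUTE FUNCTION block_everything();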

regards, tom lane




Re: error in trigger creation

2024-04-21 Thread Adrian Klaver

On 4/21/24 11:20, yudhi s wrote:


On Sun, Apr 21, 2024 at 8:13 PM Tom Lane wrote:






So do you mean we should not create the event trigger using "security
definer", but rather have the superuser do this each time we have to
create an event trigger?


Actually, I am not very aware of the security implications, but is it
fine to give the superuser privilege to the application user (say
app_user), from which scripts/procedures normally get executed by the
application, but which no individual person can log in as?


Additionally, in other databases triggers are driven by specific
privileges (for example, in Oracle the "create trigger" privilege). It
doesn't need any superuser, and we had many


Which Postgres has

https://www.postgresql.org/docs/current/ddl-priv.html

TRIGGER

Allows creation of a trigger on a table, view, etc.
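For a plain trigger that grant would look something like this (a sketch;
table and role names are illustrative):

GRANT TRIGGER ON TABLE tab TO app_user;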


but you are talking about event triggers

https://www.postgresql.org/docs/current/sql-createeventtrigger.html

where

"Only superusers can create event triggers."

To paraphrase Henry Ford, you can have any user for an event trigger as 
long as the user is a superuser.




applications in which the application user (used for app-to-app login)
had these privileges, similar to the "create table" privilege that comes
by default with owning the schema's objects, etc. So in this case I was
wondering whether an "event trigger" can pose any additional threat, and
whether that is why no privilege like "create trigger" exists for it in
Postgres, so that it should be treated cautiously?


An event trigger runs as a superuser and executes a function that in 
turn can do many things; you do the math on the threat level.



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: error in trigger creation

2024-04-21 Thread yudhi s
On Mon, 22 Apr, 2024, 1:34 am Ron Johnson,  wrote:

> On Sun, Apr 21, 2024 at 2:58 PM yudhi s 
> wrote:
>
>> the partition drop from parent is taking longer as it scans all the
>> partitions of the child table
>>
>
> Does the relevant supporting index exist on the child table?
>

Yes, all the child tables have their foreign keys indexed.

Again, I don't want to divert the main topic (trigger creation). I will
provide an exact test case showing how it takes an exclusive lock and runs
longer as it scans all child partitions (which must be using indexes,
though); as the number of partitions increases, the time to drop a
partition increases.

>


Re: error in trigger creation

2024-04-21 Thread Ron Johnson
On Sun, Apr 21, 2024 at 2:58 PM yudhi s  wrote:

> the partition drop from parent is taking longer as it scans all the
> partitions of the child table
>

Does the relevant supporting index exist on the child table?


Re: error in trigger creation

2024-04-21 Thread yudhi s
On Mon, Apr 22, 2024 at 12:02 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

>
> I suggest you share a script that demonstrates exactly what you are trying
> to accomplish.  Which event triggers you need to create from the
> application and what the functions those triggers call do.
>
>
We are using pg_partman for automatic partition maintenance. However, as we
have foreign keys created on the tables, dropping a partition from the parent
takes longer because it scans all the partitions of the child table, and it
also locks the full child table for that duration (even SELECT queries are
not allowed during that period). So we are thinking of creating the foreign
keys on the partitions rather than on the tables, but there is no direct
option for that through pg_partman.

So we are thinking of first creating the table without any foreign keys,
creating the partitions using pg_partman, and then creating the event
trigger below, which will add the foreign key to each newly created
partition. We plan to create such an event trigger for every partitioned
child table that has FKs.

CREATE OR REPLACE FUNCTION add_partition_foreign_key()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
obj record;
parent_table TEXT;
BEGIN
-- An event trigger function has no NEW or TG_TABLE_NAME; the tables created
-- by the command are reported by pg_event_trigger_ddl_commands().
FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()
           WHERE command_tag = 'CREATE TABLE'
LOOP
-- pg_partman names partitions with a _pYYYY_MM_DD suffix; point the FK at
-- the matching partition of the parent (referenced) table.
parent_table := 'parent_table' ||
  substring(obj.object_identity from '_p\d{4}_\d{2}_\d{2}$');

EXECUTE format('ALTER TABLE %s ADD CONSTRAINT fk_%s FOREIGN KEY
(partition_key, id) REFERENCES %s (partition_key, id)',
obj.object_identity, obj.objid, parent_table);
END LOOP;
END;
$$;

CREATE EVENT TRIGGER add_partition_foreign_key_trigger
ON ddl_command_end
WHEN TAG IN ('CREATE TABLE')
EXECUTE FUNCTION add_partition_foreign_key();


Re: error in trigger creation

2024-04-21 Thread David G. Johnston
On Sun, Apr 21, 2024 at 11:20 AM yudhi s 
wrote:

>
> On Sun, Apr 21, 2024 at 8:13 PM Tom Lane  wrote:
>
>> "David G. Johnston"  writes:
>> > On Sunday, April 21, 2024, yudhi s  wrote:
>> >> Are you saying something like below, in which we first create the
>> >> function as the superuser and then execute the grant? But doesn't that
>> >> mean that each time we want to create a new event trigger we are again
>> >> dependent on the superuser to modify the security definer function?
>>
>> > Dynamic SQL.  See “execute” in plpgsql.
>>
>> You might as well just give that user superuser and be done with it.
>> It's foolish to imagine that you have any shred of security left
>> if you're letting a user that's not 100.00% trusted write event
>> triggers.  (Much less execute any SQL command whatsoever, which
>> is what it sounds like David is suggesting you create a function
>> to do.)
>>
>>
> So do you mean we should not create the event trigger using "security
> definer", but rather have the superuser do this each time we have to
> create an event trigger?
>

I suggest you share a script that demonstrates exactly what you are trying
to accomplish.  Which event triggers you need to create from the
application and what the functions those triggers call do.


> Actually, I am not very aware of the security implications, but is it
> fine to give the superuser privilege to the application user (say app_user),
> from which scripts/procedures normally get executed by the application, but
> which no individual person can log in as?
>

app_user should not be superuser nor own objects in the database.  The role
that performs schema migrations for the database should be able to become
superuser via SET ROLE, so that when doing migrations, if there is a need to
do something as superuser, it is possible but explicit.
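A sketch of that arrangement (role names are illustrative; "postgres" stands
in for whatever superuser role exists):

CREATE ROLE migrator LOGIN;   -- the role that runs schema migrations
GRANT postgres TO migrator;   -- lets migrator run "SET ROLE postgres"

-- Within a migration, escalation is explicit and scoped:
SET ROLE postgres;
-- ... CREATE EVENT TRIGGER and other superuser-only statements ...
RESET ROLE;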

It is during schema migrations that event triggers are expected to be
installed, not in response to some user hitting your website and having
your middleware execute some SQL while connected as the app_user role.

David J.


Re: error in trigger creation

2024-04-21 Thread yudhi s
On Sun, Apr 21, 2024 at 8:13 PM Tom Lane  wrote:

> "David G. Johnston"  writes:
> > On Sunday, April 21, 2024, yudhi s  wrote:
> >> Are you saying something like below, in which we first create the
> >> function as the superuser and then execute the grant? But doesn't that
> >> mean that each time we want to create a new event trigger we are again
> >> dependent on the superuser to modify the security definer function?
>
> > Dynamic SQL.  See “execute” in plpgsql.
>
> You might as well just give that user superuser and be done with it.
> It's foolish to imagine that you have any shred of security left
> if you're letting a user that's not 100.00% trusted write event
> triggers.  (Much less execute any SQL command whatsoever, which
> is what it sounds like David is suggesting you create a function
> to do.)
>
>
So do you mean we should not create the event trigger using "security
definer", but rather have the superuser do this each time we have to create
the event trigger?

Actually, I am not very aware of the security implications, but is it fine
to give the superuser privilege to the application user (say app_user), from
which scripts/procedures normally get executed by the application, but which
no individual person can log in as?

Additionally, in other databases triggers are driven by specific privileges
(for example, in Oracle the "create trigger" privilege). It doesn't need any
superuser, and we had many applications in which the application user (used
for app-to-app login) had these privileges, similar to the "create table"
privilege that comes by default with owning the schema's objects, etc. So in
this case I was wondering whether an "event trigger" can pose any additional
threat, and whether that is why no privilege like "create trigger" exists
for it in Postgres, so that it should be treated cautiously?


Re: error in trigger creation

2024-04-21 Thread David G. Johnston
On Sun, Apr 21, 2024 at 11:10 AM yudhi s 
wrote:

>
> On Sun, Apr 21, 2024 at 7:55 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Sunday, April 21, 2024, yudhi s  wrote:
>>
>>> On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
>>> david.g.johns...@gmail.com> wrote:
>>>
 On Sunday, April 21, 2024, yudhi s  wrote:

> so that it will be able to assign the privilege, so we will be able to
> create the event trigger without needing to run the event trigger script
> as the superuser itself?
>

 Write a security-definer function owned by superuser and grant app_user
 permission to execute it.

 David J.


>>>
>>> Thank You David.
>>>
>>> Are you saying something like below, in which we first create the
>>> function as the superuser and then execute the grant? But doesn't that
>>> mean that each time we want to create a new event trigger we are again
>>> dependent on the superuser to modify the security definer function?
>>>
>>
>> Dynamic SQL.  See “execute” in plpgsql.
>>
>> David J.
>>
>>
>
> Even if we create the event trigger using a "security definer" function
> embedding the "create event trigger" within its body using dynamic SQL
> (something like below), then in future, if we need to create another event
> trigger, we need to update and recompile the function again, and for that
> we will need it to be compiled as the "super user"; is my understanding
> correct here?
> Or
> will it just need the "super user" to create the function the first time,
> after which the user who has been granted execute (say app_user) will be
> able to update and recompile the function body?
>
> CREATE OR REPLACE FUNCTION create_event_trigger_func()
>   RETURNS void
>   LANGUAGE plpgsql
>   SECURITY DEFINER
>   AS $$
>   BEGIN
> EXECUTE 'CREATE EVENT TRIGGER event_trigger_name ON schema_name ...';
>   END;
> $$;
>
> GRANT EXECUTE ON FUNCTION create_event_trigger_func() TO app_user;
>

If you don't allow the caller to pass in parameters then no, you likely
gain nothing from using a security definer function.  It is a tool and I
don't have enough info or desire to write the internals of said function(s)
for your need.  As Tom says, it very well may be impossible to accomplish
your goal even with a security definer function.  But absent a predefined
role there is no other mechanism for the owners of objects or superusers to
delegate their non-grantable abilities to ordinary users.
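To make the preceding concrete: a parameterized variant would look something
like the sketch below (the function name is illustrative), and anyone who can
call it can run arbitrary SQL as the superuser who owns it, which is exactly
Tom's objection.

CREATE OR REPLACE FUNCTION run_ddl(ddl text)
  RETURNS void
  LANGUAGE plpgsql
  SECURITY DEFINER
  AS $$
  BEGIN
    -- Executes caller-supplied text with the privileges of the function
    -- owner (a superuser).
    EXECUTE ddl;
  END;
$$;

GRANT EXECUTE ON FUNCTION run_ddl(text) TO app_user;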

David J.


Re: error in trigger creation

2024-04-21 Thread yudhi s
On Sun, Apr 21, 2024 at 7:55 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Sunday, April 21, 2024, yudhi s  wrote:
>
>> On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Sunday, April 21, 2024, yudhi s  wrote:
>>>
 so that it will be able to assign the privilege, so we will be able to
 create the event trigger without needing to run the event trigger
 script as the superuser itself?

>>>
>>> Write a security-definer function owned by superuser and grant app_user
>>> permission to execute it.
>>>
>>> David J.
>>>
>>>
>>
>> Thank You David.
>>
>> Are you saying something like below, in which we first create the
>> function as the superuser and then execute the grant? But doesn't that
>> mean that each time we want to create a new event trigger we are again
>> dependent on the superuser to modify the security definer function?
>>
>
> Dynamic SQL.  See “execute” in plpgsql.
>
> David J.
>
>

Even if we create the event trigger using a "security definer" function
embedding the "create event trigger" within its body using dynamic SQL
(something like below), then in future, if we need to create another event
trigger, we need to update and recompile the function again, and for that we
will need it to be compiled as the "super user"; is my understanding correct
here?
Or
will it just need the "super user" to create the function the first time,
after which the user who has been granted execute (say app_user) will be
able to update and recompile the function body?

CREATE OR REPLACE FUNCTION create_event_trigger_func()
  RETURNS void
  LANGUAGE plpgsql
  SECURITY DEFINER
  AS $$
  BEGIN
EXECUTE 'CREATE EVENT TRIGGER event_trigger_name ON schema_name ...';
  END;
$$;

GRANT EXECUTE ON FUNCTION create_event_trigger_func() TO app_user;


Re: error in trigger creation

2024-04-21 Thread Tom Lane
"David G. Johnston"  writes:
> On Sunday, April 21, 2024, yudhi s  wrote:
>> Are you saying something like below, in which we first create the
>> function as the superuser and then execute the grant? But doesn't that mean
>> that each time we want to create a new event trigger we are again
>> dependent on the superuser to modify the security definer function?

> Dynamic SQL.  See “execute” in plpgsql.

You might as well just give that user superuser and be done with it.
It's foolish to imagine that you have any shred of security left
if you're letting a user that's not 100.00% trusted write event
triggers.  (Much less execute any SQL command whatsoever, which
is what it sounds like David is suggesting you create a function
to do.)

regards, tom lane




Re: error in trigger creation

2024-04-21 Thread David G. Johnston
On Sunday, April 21, 2024, yudhi s  wrote:

> On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Sunday, April 21, 2024, yudhi s  wrote:
>>
>>> so that it will be able to assign the privilege, so we will be able to
>>> create the event trigger without needing to run the event trigger script
>>> as the superuser itself?
>>>
>>
>> Write a security-definer function owned by superuser and grant app_user
>> permission to execute it.
>>
>> David J.
>>
>>
>
> Thank You David.
>
> Are you saying something like below, in which we first create the
> function as the superuser and then execute the grant? But doesn't that mean
> that each time we want to create a new event trigger we are again
> dependent on the superuser to modify the security definer function?
>

Dynamic SQL.  See “execute” in plpgsql.

David J.


Re: error in trigger creation

2024-04-21 Thread yudhi s
On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Sunday, April 21, 2024, yudhi s  wrote:
>
>> so that it will be able to assign the privilege, so we will be able to
>> create the event trigger without needing to run the event trigger script
>> as the superuser itself?
>>
>
> Write a security-definer function owned by superuser and grant app_user
> permission to execute it.
>
> David J.
>
>

Thank You David.

Are you saying something like below, in which we first create the function
as the superuser and then execute the grant? But doesn't that mean that each
time we want to create a new event trigger we are again dependent on
the superuser to modify the security definer function?

CREATE OR REPLACE FUNCTION create_event_trigger_func()
  RETURNS void
  LANGUAGE plpgsql
  SECURITY DEFINER
  AS $$
  BEGIN
EXECUTE 'CREATE EVENT TRIGGER event_trigger_name ON schema_name ...';
  END;
$$;

GRANT EXECUTE ON FUNCTION create_event_trigger_func() TO app_user;


Re: error in trigger creation

2024-04-21 Thread David G. Johnston
On Sunday, April 21, 2024, yudhi s  wrote:

> so that it will be able to assign the privilege, so we will be able to
> create the event trigger without needing to run the event trigger script
> as the superuser itself?
>

Write a security-definer function owned by superuser and grant app_user
permission to execute it.

David J.


Re: Question on trigger

2024-04-16 Thread Adrian Klaver

On 4/16/24 12:39, veem v wrote:



On Sat, 13 Apr 2024 at 21:44, Adrian Klaver wrote:


On 4/13/24 00:03, veem v wrote:
 > Thank you Adrian.
 >
 > So it seems heavy-DML tables will see an impact if triggers (mainly
 > for-each-row triggers) are created on them.
 >
 > And as for bulk DML/array-based inserts (which insert multiple rows in
 > one shot, or one batch), it seems the trigger will not let that happen
 > as is: it will force it to happen row by row, since the trigger is
 > row-based. Will test anyway, though.

You said you have triggers in the Oracle database and I assumed they
worked and were not a show-stopping issue there. What makes you think
that would be different in Postgres?

What type of triggers were there in Oracle: per row, per statement, or a
mix?


Actually, we have row-level triggers in Oracle which run for smaller-volume 
DML and which turn direct-path inserts into conventional row-by-row inserts 
when a trigger is present. So I was wondering 


Not sure what the above means; you will need to provide a more detailed 
description. Though any DML you are doing on a table that has any sort of 
constraint, index, trigger, foreign key, default values, etc. is going to 
have more overhead than on an unencumbered table. FYI, some of the 
preceding are system triggers, for example foreign keys.


whether in Postgres we will encounter a similar issue and batch inserts 
may be converted back to row-by-row automatically. And here we are going 
to process higher-volume DML in the PostgreSQL database.




Hard to say with the information provided. The easiest way to find out is 
to create a test setup and run the code. Though I guess, as I have not 
actually tried this, you could have a per-row trigger and a per-statement 
trigger for the same action, then disable the per-row trigger and enable 
the per-statement trigger for batch operations, as sketched below. Once 
the batch operation is done, reverse the process. Again, something to 
test to verify.
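Something like this (a sketch; trigger names are illustrative):

ALTER TABLE tab DISABLE TRIGGER tab_aud_row_trg;
ALTER TABLE tab ENABLE TRIGGER tab_aud_stmt_trg;

-- ... run the batch operation ...

ALTER TABLE tab ENABLE TRIGGER tab_aud_row_trg;
ALTER TABLE tab DISABLE TRIGGER tab_aud_stmt_trg;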



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Question on trigger

2024-04-16 Thread veem v
On Sat, 13 Apr 2024 at 21:44, Adrian Klaver 
wrote:

> On 4/13/24 00:03, veem v wrote:
> > Thank you Adrian.
> >
> > So it seems heavy-DML tables will see an impact if triggers (mainly
> > for-each-row triggers) are created on them.
> >
> > And as for bulk DML/array-based inserts (which insert multiple rows in
> > one shot, or one batch), it seems the trigger will not let that happen
> > as is: it will force it to happen row by row, since the trigger is
> > row-based. Will test anyway, though.
>
> You said you have triggers in the Oracle database and I assumed they
> worked and were not a show-stopping issue there. What makes you think
> that would be different in Postgres?
>
> What type of triggers were there in Oracle: per row, per statement, or a
> mix?
>
>
Actually, we have row-level triggers in Oracle which run for smaller-volume
DML and which turn direct-path inserts into conventional row-by-row inserts
when a trigger is present. So I was wondering whether in Postgres we will
encounter a similar issue and batch inserts may be converted back to
row-by-row automatically. And here we are going to process higher-volume DML
in the PostgreSQL database.


Re: Question on trigger

2024-04-13 Thread Adrian Klaver

On 4/13/24 00:03, veem v wrote:

Thank you Adrian.

So it seems heavy-DML tables will see an impact if triggers (mainly 
for-each-row triggers) are created on them.


And as for bulk DML/array-based inserts (which insert multiple rows in 
one shot, or one batch), it seems the trigger will not let that happen 
as is: it will force it to happen row by row, since the trigger is 
row-based. Will test anyway, though.


You said you have triggers in the Oracle database and I assumed they 
worked and were not a show-stopping issue there. What makes you think 
that would be different in Postgres?


What type of triggers were there in Oracle: per row, per statement, or a 
mix?





On Thu, 11 Apr 2024 at 22:00, Adrian Klaver wrote:


On 4/11/24 07:31, veem v wrote:
 > Hi, we used to use an Oracle database in which we had audit
 > triggers (something like below) mandated for all tables by the control
 > team. Now we are going to use a PostgreSQL 15.4 database for one of
 > our applications. So I wanted to understand whether there is any
 > downside to such an audit trigger setup for all tables. Will it impact
 > bulk data insert/update/delete, or slow down any of the DML operations
 > significantly (and thus not be advisable for all tables but only
 > selected ones)?

Triggers are overhead in Postgres as they were in Oracle. If they
didn't cause an issue in Oracle I would suspect that would also be the
case in Postgres. To confirm you would need to create a test setup and
run some common operations and see what the overhead is.

Some potential performance improvements:

https://www.postgresql.org/docs/current/sql-createtrigger.html


"...a trigger that is marked FOR EACH STATEMENT only executes once for
any given operation, regardless of how many rows it modifies (in
particular, an operation that modifies zero rows will still result in
the execution of any applicable FOR EACH STATEMENT triggers)."

<...>

"The REFERENCING option enables collection of transition relations,
which are row sets that include all of the rows inserted, deleted, or
modified by the current SQL statement. This feature lets the trigger see
a global view of what the statement did, not just one row at a time.
This option is only allowed for an AFTER trigger that is not a
constraint trigger; also, if the trigger is an UPDATE trigger, it must
not specify a column_name list. OLD TABLE may only be specified once,
and only for a trigger that can fire on UPDATE or DELETE; it creates a
transition relation containing the before-images of all rows updated or
deleted by the statement. Similarly, NEW TABLE may only be specified
once, and only for a trigger that can fire on UPDATE or INSERT; it
creates a transition relation containing the after-images of all rows
updated or inserted by the statement."


As example:

https://www.postgresql.org/docs/current/plpgsql-trigger.html


Example 43.7. Auditing with Transition Tables

 >
 > CREATE OR REPLACE TRIGGER TAB_AUD_TRG
 >   BEFORE DELETE OR INSERT OR UPDATE
 >   ON tab
 >   FOR EACH ROW
 > BEGIN
 >   IF inserting THEN
 >     :NEW.create_timestamp := systimestamp;
 >     :NEW.create_userid := sys_context('USERENV','SESSION_USER');
 >     :NEW.update_timestamp := systimestamp;
 >     :NEW.update_userid := sys_context('USERENV','SESSION_USER');
 >   ELSIF updating THEN
 >     IF updating('create_userid') OR updating('create_timestamp') THEN
 >       :new.create_userid := :old.create_userid;
 >       :new.create_timestamp := :old.create_timestamp;
 >     END IF;
 >     :NEW.update_timestamp := systimestamp;
 >     :NEW.update_userid := sys_context('USERENV','SESSION_USER');
 >   END IF;
 > END;
 > /
 >
 > Regards
 > Veem

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Question on trigger

2024-04-13 Thread veem v
Thank you Adrian.

So it seems heavy-DML tables will see an impact if triggers (mainly
for-each-row triggers) are created on them.

And as for bulk DML/array-based inserts (which insert multiple rows in
one shot, or one batch), it seems the trigger will not let that happen as
is: it will force it to happen row by row, since the trigger is
row-based. Will test anyway, though.

On Thu, 11 Apr 2024 at 22:00, Adrian Klaver 
wrote:

> On 4/11/24 07:31, veem v wrote:
> > Hi, we used to use an Oracle database in which we had audit
> > triggers (something like below) mandated for all tables by the control
> > team. Now we are going to use a PostgreSQL 15.4 database for one of
> > our applications. So I wanted to understand whether there is any downside
> > to such an audit trigger setup for all tables. Will it impact bulk
> > data insert/update/delete, or slow down any of the DML operations
> > significantly (and thus not be advisable for all tables but only
> > selected ones)?
>
> Triggers are overhead in Postgres as they were in Oracle. If they
> didn't cause an issue in Oracle I would suspect that would also be the
> case in Postgres. To confirm you would need to create a test setup and
> run some common operations and see what the overhead is.
>
> Some potential performance improvements:
>
> https://www.postgresql.org/docs/current/sql-createtrigger.html
>
> "...a trigger that is marked FOR EACH STATEMENT only executes once for
> any given operation, regardless of how many rows it modifies (in
> particular, an operation that modifies zero rows will still result in
> the execution of any applicable FOR EACH STATEMENT triggers)."
>
> <...>
>
> "The REFERENCING option enables collection of transition relations,
> which are row sets that include all of the rows inserted, deleted, or
> modified by the current SQL statement. This feature lets the trigger see
> a global view of what the statement did, not just one row at a time.
> This option is only allowed for an AFTER trigger that is not a
> constraint trigger; also, if the trigger is an UPDATE trigger, it must
> not specify a column_name list. OLD TABLE may only be specified once,
> and only for a trigger that can fire on UPDATE or DELETE; it creates a
> transition relation containing the before-images of all rows updated or
> deleted by the statement. Similarly, NEW TABLE may only be specified
> once, and only for a trigger that can fire on UPDATE or INSERT; it
> creates a transition relation containing the after-images of all rows
> updated or inserted by the statement."
>
>
> As example:
>
> https://www.postgresql.org/docs/current/plpgsql-trigger.html
>
> Example 43.7. Auditing with Transition Tables
>
> >
> > CREATE OR REPLACE TRIGGER TAB_AUD_TRG
> >BEFORE DELETE OR INSERT OR UPDATE
> >ON tab
> >FOR EACH ROW
> > BEGIN
> >IF inserting THEN
> >  :NEW.create_timestamp := systimestamp;
> >  :NEW.create_userid  := sys_context('USERENV','SESSION_USER');
> >  :NEW.update_timestamp := systimestamp;
> >  :NEW.update_userid := sys_context('USERENV','SESSION_USER');
> >ELSIF updating THEN
> >  IF updating('create_userid') OR updating('create_timestamp') THEN
> >  :new.create_userid   := :old.create_userid;
> >  :new.create_timestamp  := :old.create_timestamp;
> >  END IF;
> >  :NEW.update_timestamp := systimestamp;
> >  :NEW.update_userid := sys_context('USERENV','SESSION_USER');
> >END IF;
> >END;
> > /
> >
> > Regards
> > Veem
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Question on trigger

2024-04-11 Thread Adrian Klaver

On 4/11/24 07:31, veem v wrote:
Hi, we used to use an Oracle database in which we had audit 
triggers (something like below) mandated for all tables by the control 
team. Now we are going to use a PostgreSQL 15.4 database for one of 
our applications. So I wanted to understand whether there is any downside 
to such an audit trigger setup for all tables. Will it impact bulk 
data insert/update/delete, or slow down any of the DML operations 
significantly (and thus not be advisable for all tables but only 
selected ones)?


Triggers are overhead in Postgres as they were in Oracle. If they 
didn't cause an issue in Oracle I would suspect that would also be the 
case in Postgres. To confirm you would need to create a test setup and 
run some common operations and see what the overhead is.
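A rough comparison can be done with psql's timing (a sketch; table, column,
and trigger names are illustrative):

\timing on
INSERT INTO tab(v) SELECT g FROM generate_series(1, 100000) g;  -- trigger on
ALTER TABLE tab DISABLE TRIGGER tab_aud_trg;
INSERT INTO tab(v) SELECT g FROM generate_series(1, 100000) g;  -- trigger off
ALTER TABLE tab ENABLE TRIGGER tab_aud_trg;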


Some potential performance improvements:

https://www.postgresql.org/docs/current/sql-createtrigger.html

"...a trigger that is marked FOR EACH STATEMENT only executes once for 
any given operation, regardless of how many rows it modifies (in 
particular, an operation that modifies zero rows will still result in 
the execution of any applicable FOR EACH STATEMENT triggers)."


<...>

"The REFERENCING option enables collection of transition relations, 
which are row sets that include all of the rows inserted, deleted, or 
modified by the current SQL statement. This feature lets the trigger see 
a global view of what the statement did, not just one row at a time. 
This option is only allowed for an AFTER trigger that is not a 
constraint trigger; also, if the trigger is an UPDATE trigger, it must 
not specify a column_name list. OLD TABLE may only be specified once, 
and only for a trigger that can fire on UPDATE or DELETE; it creates a 
transition relation containing the before-images of all rows updated or 
deleted by the statement. Similarly, NEW TABLE may only be specified 
once, and only for a trigger that can fire on UPDATE or INSERT; it 
creates a transition relation containing the after-images of all rows 
updated or inserted by the statement."



As example:

https://www.postgresql.org/docs/current/plpgsql-trigger.html

Example 43.7. Auditing with Transition Tables
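In that spirit, a per-statement Postgres counterpart of the row-level audit
trigger below might look like this (a sketch; table and column names are
illustrative):

CREATE TABLE tab_audit(
  stamp   timestamptz,
  userid  text,
  row_key int
);

CREATE FUNCTION tab_audit_stmt()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
  -- Runs once per statement; new_rows holds every row the statement inserted.
  INSERT INTO tab_audit
  SELECT now(), current_user, n.k FROM new_rows n;
  RETURN NULL;
END;
$$;

CREATE TRIGGER tab_audit_ins
AFTER INSERT ON tab
REFERENCING NEW TABLE AS new_rows
FOR EACH STATEMENT
EXECUTE FUNCTION tab_audit_stmt();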



CREATE OR REPLACE TRIGGER TAB_AUD_TRG
  BEFORE DELETE OR INSERT OR UPDATE
  ON tab
  FOR EACH ROW
BEGIN
  IF inserting THEN
    :NEW.create_timestamp := systimestamp;
    :NEW.create_userid := sys_context('USERENV','SESSION_USER');
    :NEW.update_timestamp := systimestamp;
    :NEW.update_userid := sys_context('USERENV','SESSION_USER');
  ELSIF updating THEN
    IF updating('create_userid') OR updating('create_timestamp') THEN
      :new.create_userid := :old.create_userid;
      :new.create_timestamp := :old.create_timestamp;
    END IF;
    :NEW.update_timestamp := systimestamp;
    :NEW.update_userid := sys_context('USERENV','SESSION_USER');
  END IF;
END;
/

Regards
Veem


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: wiki.postgres Tighten trigger permission checks already resolved

2023-04-24 Thread Tom Lane
jian he  writes:
> The following Todo item seems  already resolved in pg15.
> https://wiki.postgresql.org/wiki/Todo#Triggers
>> Tighten trigger permission checks
>> - Security leak with trigger functions?
>> 

> But it seems to not appear in the pg15 release notes. (I searched for the
> keywords "trigger" and "function").

The case shown at the head of that thread was fixed more than a decade
ago, cf commit 891e6e7bf (CVE-2012-0866).  However, the followup questions
discussed in the thread are still live: should there be a run-time, not
only trigger-creation-time, privilege check, and if so what should it
check exactly?  And is a separate TRIGGER privilege even reasonable,
rather than just saying you must be table owner to create a trigger?

regards, tom lane




Re: Deferred constraint trigger semantics

2022-05-16 Thread Bryn Llewellyn
> laurenz.albe@cybertec.at wrote:
> 
>> b...@yugabyte.com wrote:
> 
>> 
>> …I tried this:
>> 
>> create table t(
>>   k serial primary key,
>>   v int not null,
>>   constraint t_v_unq unique(v) initially deferred);

Here's a better test:

-- BLUE session
start transaction isolation level read committed;
insert into t(v) values (1), (2);

-- RED session
start transaction isolation level read committed;
insert into t(v) values (1), (3);

-- BLUE session
set constraints all immediate;

-- RED session (hangs until BLUE commits).
-- Then, when it does, gets ERROR... "Key (v)=(1) already exists"
set constraints all immediate;

-- BLUE session
commit;

-- RED session
-- There are no changes to commit 'cos they were rolled back.
commit;

-- "select k, v from t order by k" (in each session) now shows that both 
sessions meet the constraint.

>> Where, in the PG doc, can I read the account of the proper mental model for 
>> the application programmer?
> 
> [See https://www.postgresql.org/docs/current/index-unique-checks.html.]

Thanks for referring me to the account "62.5. Index Uniqueness Checks". It's in 
the section "Part VII. Internals" (…contains assorted information that might be 
of use to PostgreSQL developers). I wouldn't expect to read this because I 
don't intend to write code that might become part of PG's implementation.

> I'd say that the proper mental model is that you don't need to care… The 
> exact sequence of what happens during COMMIT is interesting, but irrelevant 
> to the programmer. All that counts is "a deferred constraint is checked 
> between the time that COMMIT starts processing and the time that it returns".

Yes, I very much like this stance. It seems that, for built-in constraints 
(like "unique" or "foreign key") it's enough to understand that PG implements 
these at the "read committed" isolation level by using methods (that aren't 
exposed via SQL) to peep below the application programmer's MVCC view of the 
world to check the uncommitted state of other, concurrent, sessions.

This explains why, in the (new) test that I used above, the conflict is 
detected when the second session issues "set constraints all immediate" after 
the first already did this (i.e. long before COMMIT). In this case, the second 
session hangs until the first commits—at which point the second sees the 
uniqueness violation error.

In other words, the automagic implementation of the enforcement of built-in 
constraints allows the safe use of "set constraints all immediate" to provoke a 
possible early error that can, very usefully, be handled in PL/pgSQL code. This 
is the clue to understanding why the check of a built-in constraint, when it's 
performed as an implicit consequence of "commit", doesn't need to be within the 
small part of the operations that "commit" causes that are strictly serialized. 
(You've explained how this helps performance in multi-session scenarios.)

Critically, the special methods that implement the enforcement of built-in 
constraints aren't accessible in PL/pgSQL code and therefore not accessible in 
the "ordinary" implementation of trigger functions. This is the point that I 
failed to grasp. (Though I do see, now, that Laurenz's post says this clearly.)

I was able to demonstrate this by implementing a unique constraint with a 
deferred constraint trigger (and no use of "set constraints all immediate"). I 
simply introduced "pg_sleep(5)" between the trigger function's actual check and 
its final "return null". I copied the code below for completeness.

The "BLUE" session, because it reaches its serialized "commit" actions first, 
sees an outcome that meets the constraint. But the "RED" session has enough 
time to do its check before "BLUE" does its serialized "commit" actions. So its 
test passes too. This leaves the final database in conflict with the intended 
constraint.

I see now that the only robust use of an ordinarily (i.e. not using C) 
implemented constraint trigger (deferred or otherwise) is to enforce a single 
row-constraint. (There's a caveat that maybe, after careful analysis, you can 
work out a cunning lockings scheme to allow the safe implementation of an 
entity-level constraint without using C. But the "exactly one or two admins in 
a department" example shows that this isn't generally possible.) So it's 
reasonable that a constraint trigger must be AFTER EACH ROW. Further, it would  
make no sense to do SQL from its implementation function because the only 
values that you might defensibly use are available simply via "old" and "new".

So all that stuff I was concerned about where the deferred constraint fires 
many times when once is enough falls away because the larger endeavor makes no 
sense. (I suppose that it might matter if you implemented the trigger function 
robustly in C.)

It does strike me that the ordinary application programmer—who reads just the 
sections "CREATE TRIGGER", "Chapter 39. Triggers", "CREATE TABLE", and "SET 

Re: Deferred constraint trigger semantics

2022-05-13 Thread Bryn Llewellyn
> laurenz.a...@cybertec.at wrote:
> 
> Be "seeing" I didn't mean "show to the user". I mean that the code that 
> implements PostgreSQL constraints takes uncommitted data into account.
> 
> The documentation describes that for the case of uniqueness in some detail:
> 
> https://www.postgresql.org/docs/current/index-unique-checks.html
> 
> I'd say that the proper mental model is that you don't need to care. The ACID 
> properties are guarantees that the database makes, and these guarantees are 
> usually negative: "no sequence of actions can result in the violation of a 
> unique constraint" or "you don't get to see uncommitted data".
> 
> The exact sequence of what happens during COMMIT is interesting, but 
> irrelevant to the programmer.  All that counts is "a deferred constraint is 
> checked between the time that COMMIT starts processing and the time that it 
> returns".
> 
> If you want to know more, you have to start reading the code. It is open 
> source and well documented.

Thanks for the quick reply, Laurenz. I'm signing off now (US/Pacific) for a 
long weekend. I'll try to digest what you wrote on Monday.

Re: Deferred constraint trigger semantics

2022-05-13 Thread Laurenz Albe
On Thu, 2022-05-12 at 22:06 -0700, Bryn Llewellyn wrote:
> 
> > In the case of constraint triggers, yes. But there is no race condition for 
> > primary key,
> > unique and foreign key constraints, because they also "see" uncommitted 
> > data.
> 
> I can't follow you here, sorry. I tried this:
> 
> create table t(
>   k serial primary key,
>   v int not null,
>   constraint t_v_unq unique(v) initially deferred);
> 
> -- RED
> start transaction isolation level read committed;
> insert into t(v) values (1), (2);
> select k, v from t order by k;
> 
> -- BLUE
> start transaction isolation level read committed;
> insert into t(v) values (1), (3);
> select k, v from t order by k;
> 
> -- RED
> commit;
> select k, v from t order by k;
> 
> -- BLUE
> select k, v from t order by k;
> commit;
> 
> select k, v from t order by k;
> 
> The first "select" from the "BLUE" session at the very end produces this:
> 
>  k | v 
> ---+---
>  1 | 1
>  2 | 2
>  3 | 1
>  4 | 3
> 
> This doesn't surprise me. It's ultimately illegal. But not yet. (Before "RED" 
> committed, "BLUE"
> didn't see the rows with "k = 1" and "k = 2". So it isn't seeing any other 
> session's uncommitted 
> data—but only its own uncommitted data.)

Be "seeing" I didn't mean "show to the user".
I mean that the code that implements PostgreSQL constraints takes uncommitted 
data into account.

The documentation describes that for the case of uniqueness in some detail:
https://www.postgresql.org/docs/current/index-unique-checks.html

> Then, when "BLUE" commits, it (of course) gets this:
> 
> ERROR:  duplicate key value violates unique constraint "t_v_unq"
> DETAIL:  Key (v)=(1) already exists.
> 
> Then it sees (of course, again) only the rows with "k = 1" and "k = 2"—the 
> same as what "RED" saw.
> 
> It seems to be impossible to do a test in slow motion where "RED" and "BLUE" 
> each issues "commit"
> at the exact same moment. So thinking about this scenario doesn't tell me if:
> 
> (a) Each session runs its constraint check and the rest of what "commit" 
> entails in a genuinely serialized fashion.
> 
> OR
> 
> (b) Each session first runs its constraint check (and some other stuff) 
> non-serializedly—and only
> then runs the small part of the total "commit" action (the WAL part) 
> serializedly.
> (This would result in bad data in the database at rest—just as my 
> contrived misuse of
> "set constraints all immediate" left things in my "one or two admins" 
> scenario.)

I'd say that (b) is a more accurate description.

> The (a) scheme sounds correct. And the (b) scheme sounds wrong. Why would PG 
> prefer to implement (b) rather than (a)?
> 
> I'm clearly missing something.

Because (a) would result in terrible performance if there are many concurrent 
transactions.

I don't see why (b) is wrong - as your example shows, the behavior is correct.

Perhaps you have to understand what a PostgreSQL "snapshot" is and that the 
exact moment at
which a row was created is not important - it is the transaction numbers in 
"xmin" and "xmax"
that count.

> > 
> Where, in the PG doc, can I read the account of the proper mental model for 
> the application programmer?
> It seems to be impossible to conduct an experiment that would disprove the 
> hypothesis that one,
> or the other, of these mental models is correct.

I'd say that the proper mental model is that you don't need to care.
The ACID properties are guarantees that the database makes, and these 
guarantees are usually
negative: "no sequence of actions can result in the violation of a unique 
constraint" or
"you don't get to see uncommitted data".

The exact sequence of what happens during COMMIT is interesting, but irrelevant 
to the
programmer.  All that counts is "a deferred constraint is checked between the 
time that
COMMIT starts processing and the time that it returns".

If you want to know more, you have to start reading the code.  It is open 
source and
well documented.

> > > > 
> Your post's testcase used the condition "at least one guard on duty" [...]
> 
> My testcase used a stricter rule: the table of staff must have exactly one or 
> two rows where
> the job is "Admin". So, here, concurrent sessions can break the rule (when 
> the txn starts
> with one "Admin") by updating different rows to make them "Admin" or by 
> inserting different
> new "Admin" rows. I've convinced myself by experiment that an ordinary 
> trigger can enforce
> this rule when contending sessions use "serializable" isolation. Am I right 
> that you'd say
> that no pessimistic locking scheme can enforce the rule at lower isolation 
> levels except the
> brute-force "lock table"?

If the transaction can insert two rows, I'd agree.
If the transaction only inserts a single row, then it could SELECT ... FOR
NO KEY UPDATE the one existing row, thus serializing the concurrent
transactions.
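For the single-row case, that lock would be something like (a sketch; table
and column names are illustrative):

SELECT * FROM staff WHERE job = 'Admin' FOR NO KEY UPDATE;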

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: Deferred constraint trigger semantics

2022-05-12 Thread Bryn Llewellyn
> laurenz.a...@cybertec.at wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> …I have always understood that (in Postgres and any respectable RDBMS) 
>> commits in a multi-session environment are always strictly 
>> serialized—irrespective of the transaction's isolation level. Am I correct 
>> to assume this is the case for Postgres? I took "at COMMIT time" to mean "as 
>> part of the strictly serialized operations that implement a session's 
>> COMMIT".
> 
> I am not sure what you mean by serialized commits. Transactions are 
> concurrent, and so are commits. COMMIT takes some time, during which several 
> things happen, among them executing deferred constraints, writing a WAL 
> record and flushing the WAL. The only thing that is necessarily serialized is 
> writing the WAL record.

Oh. I was wrong, then. I'll say more on this below.

>> …I take what you say in your post to mean that each session executes its 
>> deferred constraint check (by extension, not just for constraint triggers 
>> but for all deferred constraint cases) momentarily *before* COMMIT so that 
>> the effect is only to reduce the duration of the race condition window 
>> rather than to eliminate it.
> 
> In the case of constraint triggers, yes. But there is no race condition for 
> primary key, unique and foreign key constraints, because they also "see" 
> uncommitted data.

I can't follow you here, sorry. I tried this:

create table t(
  k serial primary key,
  v int not null,
  constraint t_v_unq unique(v) initially deferred);

-- RED
start transaction isolation level read committed;
insert into t(v) values (1), (2);
select k, v from t order by k;

-- BLUE
start transaction isolation level read committed;
insert into t(v) values (1), (3);
select k, v from t order by k;

-- RED
commit;
select k, v from t order by k;

-- BLUE
select k, v from t order by k;
commit;
select k, v from t order by k;

The first "select" from the "BLUE" session at the very end produces this:

 k | v 
---+---
 1 | 1
 2 | 2
 3 | 1
 4 | 3

This doesn't surprise me. It's ultimately illegal. But not yet. (Before "RED" 
committed, "BLUE" didn't see the rows with "k = 1" and "k = 2". So it isn't 
seeing any other session's uncommitted data—but only its own uncommitted data.)

Then, when "BLUE" commits, it (of course) gets this:

ERROR:  duplicate key value violates unique constraint "t_v_unq"
DETAIL:  Key (v)=(1) already exists.

Then it sees (of course, again) only the rows with "k = 1" and "k = 2"—the same 
as what "RED" saw.

It seems to be impossible to do a test in slow motion where "RED" and "BLUE" 
each issues "commit" at the exact same moment. So thinking about this scenario 
doesn't tell me if:

(a) Each session runs its constraint check and the rest of what "commit" 
entails in a genuinely serialized fashion.

OR

(b) Each session first runs its constraint check (and some other stuff) 
non-serializedly—and only then runs the small part of the total "commit" action 
(the WAL part) serializedly. (This would result in bad data in the database at 
rest—just as my contrived misuse of "set constraints all immediate" left things 
in my "one or two admins" scenario.)

I appreciate that this just is a different wording of what I wrote before—but 
now w.r.t. the system-implemented unique constraint use-case.

The (a) scheme sounds correct. And the (b) scheme sounds wrong. Why would PG 
prefer to implement (b) rather than (a)?

I'm clearly missing something.

>> So it all depends on a lawyerly reading of the wording "at COMMIT time". The 
>> current CREATE TABLE doc says this:
>> 
>> «
>> If the constraint is INITIALLY DEFERRED, it is checked only at the end of 
>> the transaction.
>> »
>> 
>> The wording "at the end of the transaction" is not precise enough to 
>> adjudicate—and so the key question remains: Is a deferred constraint checked:
>> 
>> (a) as part of the strictly serialized operations that implement a session's 
>> COMMIT?
>> 
>> or
>> 
>> (b) momentarily *before* COMMIT and not within the serialized COMMIT 
>> execution?
>> 
>> So… (asking the wider audience) is the answer (a) or (b)? An if it's (b), 
>> why? After all, (b) brings the race condition risk. Is (a) simply not 
>> feasible?
> 
> COMMITs are not serialized. You seem to think that as soon as one 
> transaction's COMMIT starts processing, no other transaction may COMMIT at 
> the same time. That is not the case.

Yes, I most certainly did think this.

Where, in the PG doc, can I read the account of the proper mental model for the 
application programmer? It seems to be impossible to conduct an experiment that 
would disprove the hypothesis that one, or the other, of these mental models is 
correct.

 Is my entire concept (and Laurenz's too) fundamentally flawed? 
 Specifically, is querying a trigger's base table in a "for each row" 
 trigger fundamentally unsound and not supported?
>>> 
>>> My post claims that constraint triggers alone are *not* a sufficient 
>>> solution to validate 

Re: Deferred constraint trigger semantics

2022-05-12 Thread Bryn Llewellyn
> postgres.ro...@gmail.com wrote:
> 
> It adds the "initially deferred" decoration to the "create constraint 
> trigger" statement. This is (still) the result:
> 
> INFO:  trg fired. new.v = 10, n = 5
> INFO:  trg fired. new.v = 20, n = 5
> INFO:  trg fired. new.v = 30, n = 5
> INFO:  trg fired. new.v = 40, n = 5
> INFO:  trg fired. new.v = 50, n = 5
> INFO:  trg fired. new.v = 60, n = 8
> INFO:  trg fired. new.v = 70, n = 8
> INFO:  trg fired. new.v = 80, n = 8
> Because you can do
> create constraint trigger trg
> after insert on t2
> deferrable initially deferred
> for each row
> execute function trg_fn();
> 
> You didn't explicitly defer the trigger trg on t1! That means after you
> insert on t1, the trigger trg on t1 is invoked immediately rather than at
> commit time. If you
> create constraint trigger trg
> after insert on t1
> deferrable initially deferred
> for each row
> execute function trg_fn();
> 
> create constraint trigger trg
> after insert on t2
> deferrable initially deferred
> for each row
> execute function trg_fn();
> then you will get 
> INFO:  0: trg fired. new.v = 10, n = 8
> INFO:  0: trg fired. new.v = 20, n = 8
> INFO:  0: trg fired. new.v = 30, n = 8
> INFO:  0: trg fired. new.v = 40, n = 8
> INFO:  0: trg fired. new.v = 50, n = 8
> INFO:  0: trg fired. new.v = 60, n = 8
> INFO:  0: trg fired. new.v = 70, n = 8
> INFO:  0: trg fired. new.v = 80, n = 8 

Er… yes. David Johnston pointed that out too. I'm embarrassed beyond belief. 
Sorry to have wasted folks' time because of my mistake.



Re: Deferred constraint trigger semantics

2022-05-12 Thread alias
>
>
> It adds the "initially deferred" decoration to the "create constraint
> trigger" statement. This is (still) the result:
>
> INFO:  trg fired. new.v = 10, n = 5
> INFO:  trg fired. new.v = 20, n = 5
> INFO:  trg fired. new.v = 30, n = 5
> INFO:  trg fired. new.v = 40, n = 5
> INFO:  trg fired. new.v = 50, n = 5
> INFO:  trg fired. new.v = 60, n = 8
> INFO:  trg fired. new.v = 70, n = 8
> INFO:  trg fired. new.v = 80, n = 8
>
Because you can do
create constraint trigger trg
after insert on t2
deferrable initially deferred
for each row
execute function trg_fn();

You didn't explicitly defer the trigger trg on t1! That means after you
insert on t1, the trigger trg on t1 is invoked immediately rather than at
commit time.
If you
create constraint trigger trg
after insert on t1
deferrable initially deferred
for each row
execute function trg_fn();

create constraint trigger trg
after insert on t2
deferrable initially deferred
for each row
execute function trg_fn();
then you will get

> INFO:  0: trg fired. new.v = 10, n = 8
> INFO:  0: trg fired. new.v = 20, n = 8
> INFO:  0: trg fired. new.v = 30, n = 8
> INFO:  0: trg fired. new.v = 40, n = 8
> INFO:  0: trg fired. new.v = 50, n = 8
> INFO:  0: trg fired. new.v = 60, n = 8
> INFO:  0: trg fired. new.v = 70, n = 8
> INFO:  0: trg fired. new.v = 80, n = 8
>




On Thu, May 12, 2022 at 4:13 AM Bryn Llewellyn  wrote:

> david.g.johns...@gmail.com wrote:
>
> b...@yugabyte.com wrote:
>
>
> Thanks for the point-by-point reply, David.
>
> ...makes no mention of what you might expect to see in an AFTER EACH
> ROW trigger.
>
>
> ...the absence of a comment declaring a guarantee of order means that,
> like the comment for the row-level BEFORE trigger, the row-level AFTER row
> ordering is not guaranteed (even if one cannot produce a counter-example in
> today's codebase).
>
>
> Got it!
>
> ...unless anybody contradicts me.
>
>
> Caveat emptor...? I wouldn't be surprised that doing so is technically
> possible in all cases - as to whether a particular algorithm is sound, to
> some extent, isn't something we try to predict. We do try to describe all
> the known interactions though - and let the user work within what those
> mean for them.
>
>
> Got it again!
>
> ...implies that there's no such unpredictability in the AFTER EACH ROW
> cases.
>
>
> I would not read it that way. In general, absence of mention of
> predictability like this means there is none - that some other sentence
> goes into more detail doesn't change that.
>
>
> OK.
>
> But there has to be a subtle caveat here for the deferred constraint
> trigger when the txn changes two or more tables, all of which participate
> in the query that the trigger function issues… The "raise info"
> output below illustrates my point (n changes from 5 to 8).
>
>
> I'm failing to see the deferral aspect of that example. First statement
> finishes, sees the 5 inserts, next statement finishes, sees 3 more inserts.
> Not, both statements finish, triggers fire, triggers see all 8 inserts
> (which I suspect they will if you actually perform deferral).
>
>
> Oops. I made a copy-and-paste error going from my test env to email and
> missed out the "deferral" that I'd intended. For completeness, here's the
> test that I meant:
>
> create table t1(k serial primary key, v int not null);
> create table t2(k serial primary key, v int not null);
>
> create function trg_fn()
>   returns trigger
>   language plpgsql
> as $body$
> declare
>   n int not null := 0;
> begin
>   n := (select count(*) from t1) + (select count(*) from t2);
>   raise info 'trg fired. new.v = %, n = %', new.v, n;
>   return new;
> end;
> $body$;
>
> create constraint trigger trg
> after insert on t1
> for each row
> execute function trg_fn();
>
> create constraint trigger trg
> after insert on t2
> initially deferred
> for each row
> execute function trg_fn();
>
> set default_transaction_isolation = 'read committed';
> do $body$
> begin
>   insert into t1(v)
>   values (10), (20), (30), (40), (50);
>
>   insert into t2(v)
>   values (60), (70), (80);
> end;
> $body$;
>
> It adds the "initially deferred" decoration to the "create constraint
> trigger" statement. This is (still) the result:
>
> INFO:  trg fired. new.v = 10, n = 5
> INFO:  trg fired. new.v = 20, n = 5
> INFO:  trg fired. new.v = 30, n = 5
> INFO:  trg fired. new.v = 40, n = 5
> INFO:  trg fired. new.v = 50, n = 5
> INFO:  trg fired. new.v = 60, n = 8
> INFO:  trg fired. new.v = 70, n = 8
> INFO:  trg fired. new.v = 80, n = 8
>
> Even though both inserts have completed by commit time, only the trigger
> firing caused by the second statement sees the final state that obtains the
> moment before commit. The first statement sees only the state after it
> finishes and before the second statement executes. You said « I suspect
> [that both statements will see the final state] if you actually perform
> deferral ». My test shows that this is not the case.
>
> *Did I 

Re: Deferred constraint trigger semantics

2022-05-12 Thread Laurenz Albe
On Wed, 2022-05-11 at 15:54 -0700, Bryn Llewellyn wrote:
> I re-read the penultimate paragraph in Laurenz's post:
> 
> «
> By making the trigger INITIALLY DEFERRED, we tell PostgreSQL to check the 
> condition at COMMIT time.
> »
> 
> I have always understood that (in Postgres and any respectable RDBMS) commits 
> in a multi-session
> environment are always strictly serialized—irrespective of the transaction's 
> isolation level.
> Am I correct to assume this is the case for Postgres? I took "at COMMIT time" 
> to mean "as part
> of the strictly serialized operations that implement a session's COMMIT".

I am not sure what you mean by serialized commits.  Transactions are 
concurrent, and so are
commits.  COMMIT takes some time, during which several things happen, among 
them executing
deferred constraints, writing a WAL record and flushing the WAL.  The only 
thing that is
necessarily serialized is writing the WAL record.

> But I see (now) that you argue that this is not the case, thus:
> 
> «
> This will reduce the window for the race condition a little, but the problem 
> is still there.
> If concurrent transactions run the trigger function at the same time, they 
> won’t see each other’s modifications.
> »
> 
> I take what you say in your post to mean that each session executes its 
> deferred constraint
> check (by extension, not just for constraint triggers but for all deferred 
> constraint cases)
> momentarily *before* COMMIT so that the effect is only to reduce the duration 
> of the race condition
> window rather than to eliminate it.

In the case of constraint triggers, yes.
But there is no race condition for primary key, unique and foreign key 
constraints, because
they also "see" uncommitted data.

> So it all depends on a lawyerly reading of the wording "at COMMIT time". The 
> current CREATE TABLE doc says this:
> 
> «
> If the constraint is INITIALLY DEFERRED, it is checked only at the end of the 
> transaction.
> »
> 
> The wording "at the end of the transaction" is not precise enough to 
> adjudicate—and so the key
> question remains: Is a deferred constraint checked:
> 
> (a) as part of the strictly serialized operations that implement a session's 
> COMMIT?
> 
> or
> 
> (b) momentarily *before* COMMIT and not within the serialized COMMIT 
> execution?
> 
> So… (asking the wider audience) is the answer (a) or (b)? And if it's (b), 
> why? After all, (b) brings
> the race condition risk. Is (a) simply not feasible?

COMMITs are not serialized.  You seem to think that as soon as one 
transaction's COMMIT starts
processing, no other transaction may COMMIT at the same time.  That is not the 
case.

> 
> > > Is my entire concept (and Laurenz's too) fundamentally flawed? 
> > > Specifically, is querying
> > > a trigger's base table in a "for each row" trigger fundamentally unsound 
> > > and not supported?
> > 
> > My post claims that constraint triggers alone are *not* a sufficient 
> > solution to validate
> > constraints - you need additional locking or SERIALIZABLE isolation to make 
> > that work reliably.
> 
> This doesn't seem to be what you wrote. These two headings [...]

Then I must have been unclear.  Or you only looked at the headings.

> As I reason it, if you use the SERIALIZABLE approach, then an ordinary 
> immediate AFTER EACH
> STATEMENT trigger will work fine—precisely because of how that isolation 
> level is defined.
> So here, a deferred constraint trigger isn't needed and brings no value.

Now that is absolutely true.  If you use the big hammer of SERIALIZABLE, there 
can be no
anomaly, and it is unnecessary to keep the window for a race condition small.
Deferred triggers and constraints still have a value, because they see the state
of the database at the end of the whole transaction.
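For illustration, a minimal sketch of the SERIALIZABLE approach (the table and
the capacity rule here are hypothetical, not taken from the earlier posts):

create table bookings(k serial primary key, room int not null, guests int not null);

create function check_capacity()
  returns trigger
  language plpgsql
as $body$
begin
  -- cross-row rule: no more than 10 guests in room 1
  if (select coalesce(sum(guests), 0) from bookings where room = 1) > 10 then
    raise exception 'room 1 is over capacity';
  end if;
  return null;
end;
$body$;

create trigger bookings_capacity
after insert or update on bookings
for each statement
execute function check_capacity();

Each session must then run at "serializable" isolation and be prepared to retry
on a serialization failure (SQLSTATE 40001); given that, the ordinary
statement-level trigger is enough, and no deferral is needed.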

> This implies that if a deferred constraint trigger is to have any utility, it 
> must be safe
> to use it (as I tested it) at the READ COMMITTED level. I do see that, though 
> I appear to
> be testing this, I cannot do a reliable test because I cannot, in application 
> code, open up,
> and exploit, a race condition window after COMMIT has been issued. (I *am* 
> able to do this
> to expose the fact that "set constraints all immediate" is unsafe.)

This sentence lacks the definition of what you mean by "safe", on which all 
hinges.

If "safe" means that you can use them to make sure that a certain condition is 
always
satisfied (like in a constraint), they are not safe.  But that is not the only 
use for
a trigger.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: Deferred constraint trigger semantics

2022-05-11 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> Oops. I did a copy-and-paste error on going from my test env. to email and 
>> missed out the "deferral" that I'd intended. For completeness, here's the 
>> test that I meant:
>> 
>> create constraint trigger trg
>> after insert on t1
>> for each row
>> execute function trg_fn();
>> 
>> create constraint trigger trg
>> after insert on t2
>> initially deferred
>> for each row
>> execute function trg_fn();
>> 
>> It adds the "initially deferred" decoration to the "create constraint 
>> trigger" statement. This is (still) the result:
> 
> You only added it to the uninteresting trigger on t2.  It's the t1 trigger 
> where I'd expect the behavior to change.  I'm assuming your test does both 
> (not in a position to test it myself at the moment).

Damn. I'm horrified that, despite my best intentions, I still managed to do a 
typo. How embarrassing… With the correction in place, I now get this output:

INFO:  trg fired. new.v = 10, n = 8
INFO:  trg fired. new.v = 20, n = 8
INFO:  trg fired. new.v = 30, n = 8
INFO:  trg fired. new.v = 40, n = 8
INFO:  trg fired. new.v = 50, n = 8
INFO:  trg fired. new.v = 60, n = 8
INFO:  trg fired. new.v = 70, n = 8
INFO:  trg fired. new.v = 80, n = 8

This is exactly what you predicted. I'm delighted (finally) to see this outcome.

>> [What I wrote here was rubbish, given that my test code was not what I 
>> claimed it was.]
> 
> [David's response here is now moot.]
>  
>> With respect to « having to keep around a working set of what are the 
>> changed records » I think that the complexity that you envisaged is avoided 
>> by the (emergent) rule that an AFTER EACH STATEMENT trigger cannot see "old" 
>> and "new" values. In other words, all you can sensibly do in its function is 
>> ordinary SQL that sees the current state at the moment it fires.
>> 
>> To my surprise, it *is* legal to write code that accesses "old" and "new" 
>> values. But, because many rows can be affected by a single statement, and 
>> the trigger fires just once, the meanings of "old" and "new" are undefined. 
>> I've seen that, in any test that I do, both are always set to NULL (which 
>> seems reasonable).
> 
> I was thinking more about transition tables - though I admit it's not a 
> complete thought given their opt-in nature.

Ah… I hadn't considered transition tables. However, they don't seem to be 
valuable for a constraint trigger. So your concern could be removed at a stroke 
by adding a semantic rule to the account of the CREATE TRIGGER syntax 
specification thus:

« The SQL that creates a statement-level AFTER constraint trigger may not 
specify using transition tables. »

Might this device give me hope?

Finally, it seems that a NOT DEFERRABLE constraint trigger has no value—except 
in that using the keyword CONSTRAINT is a nice bit of self-documentation. Did I 
miss something? Is this trigger flavor valuable for a use-case that I haven't 
spotted?



Re: Deferred constraint trigger semantics

2022-05-11 Thread David G. Johnston
On Wed, May 11, 2022 at 3:43 PM Bryn Llewellyn  wrote:

>
> Oops. I did a copy-and-paste error on going from my test env. to email and
> missed out the "deferral" that I'd intended. For completeness, here's the
> test that I meant:
>
> create constraint trigger trg
> after insert on t1
> for each row
> execute function trg_fn();
>
> create constraint trigger trg
> after insert on t2
> initially deferred
> for each row
> execute function trg_fn();
>
> It adds the "initially deferred" decoration to the "create constraint
> trigger" statement. This is (still) the result:
>

You only added it to the uninteresting trigger on t2.  It's the t1 trigger
where I'd expect the behavior to change.  I'm assuming your test does both
(not in a position to test it myself at the moment).


> Even though both inserts have completed by commit time, only the trigger
> firing caused by the second statement sees the final state that obtains the
> moment before commit. The first statement sees only the state after it
> finishes and before the second statement executes. You said « I suspect
> [that both statements will see the final state] if you actually perform
> deferral ». My test shows that this is not the case.
>
> *Did I misunderstand you? Or does this result surprise you? If it does, do
> you think that this is a bug*?
>

It both surprises me and fails to surprise me at the same time.  I have no
opinion on whether the observed behavior constitutes a bug or not.  I'd
lean toward not - just maybe a chance to improve the documentation.


>
> With respect to « having to keep around a working set of what are the
> changed records » I think that the complexity that you envisaged is avoided
> by the (emergent) rule that an AFTER EACH STATEMENT trigger cannot see
> "old" and "new" values. In other words, all you can sensibly do in its
> function is ordinary SQL that sees the current state at the moment it fires.
>
> To my surprise, it *is* legal to write code that accesses "old" and "new"
> values. But, because many rows can be affected by a single statement, and
> the trigger fires just once, the meanings of "old" and "new" are undefined.
> I've seen that, in any test that I do, both are always set to NULL (which
> seems reasonable).
>

I was thinking more about transition tables - though I admit it's not a
complete thought given their opt-in nature.

David J.


Re: Deferred constraint trigger semantics

2022-05-11 Thread Bryn Llewellyn
> laurenz.a...@cybertec.at wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> …Internet Search turned up this 2019 post by Laurenz Albe—but nothing else 
>> at all.
>> 
>> https://www.cybertec-postgresql.com/en/triggers-to-enforce-constraints 
>> 
>> 
>> (This is why I CC'd you, Laurenz.)
> 
> So I guess I should answer.

Thanks for replying to my original post with this subject line, Laurenz. Sorry 
to bring up notions that you wrote about three years ago. I judged that, 
because I referred to those notions, it would be polite to copy you—especially 
because I interpreted (maybe incorrectly) what you had written.

> About the starting paragraph of your mail: Constraint triggers are a 
> syntactic leftover from the way that triggers are implemented in PostgreSQL. 
> There is different syntax now, but it was decided to leave constraint 
> triggers, since they may have some use.

If constraint triggers are, in the hands of an informed application programmer, 
to have some sound uses, then the semantics must be clearly defined. And you do 
say that they are supported. David Johnston argues that, as long as you read the 
right snippets from various parts of the doc and synthesize their joint 
meaning, then the semantics are defined. Yes, David, I can accept that—with a 
loud caveat about the answer to my (a) or (b) question below.

I re-read the penultimate paragraph in Laurenz's post:

«
By making the trigger INITIALLY DEFERRED, we tell PostgreSQL to check the 
condition at COMMIT time.
»

I have always understood that (in Postgres and any respectable RDBMS) commits 
in a multi-session environment are always strictly serialized—irrespective of 
the transaction's isolation level. Am I correct to assume this is the case for 
Postgres? I took "at COMMIT time" to mean "as part of the strictly serialized 
operations that implement a session's COMMIT". But I see (now) that you argue 
that this is not the case, thus:

«
This will reduce the window for the race condition a little, but the problem is 
still there. If concurrent transactions run the trigger function at the same 
time, they won’t see each other’s modifications.
»

I take what you say in your post to mean that each session executes its 
deferred constraint check (by extension, not just for constraint triggers but 
for all deferred constraint cases) momentarily *before* COMMIT so that the 
effect is only to reduce the duration of the race condition window rather than 
to eliminate it.

So it all depends on a lawyerly reading of the wording "at COMMIT time". The 
current CREATE TABLE doc says this:

«
If the constraint is INITIALLY DEFERRED, it is checked only at the end of the 
transaction.
»

The wording "at the end of the transaction" is not precise enough to 
adjudicate—and so the key question remains: Is a deferred constraint checked:

(a) as part of the strictly serialized operations that implement a session's 
COMMIT?

or

(b) momentarily *before* COMMIT and not within the serialized COMMIT execution?

So… (asking the wider audience) is the answer (a) or (b)? And if it's (b), why? 
After all, (b) brings the race condition risk. Is (a) simply not feasible?

> [Lots of ruminations and wandering thoughts]
> 
> Sorry, that was too much for me to comment on - that would require a 
> mid-sized article.

Oh… I'm sorry to hear that I rambled and lost clarity. I find it impossible to 
say what I want to, striving for precision, without sacrificing brevity. I 
always find that I can improve my wording with successive reads. But life is 
short and I have, eventually, just to publish and be damned.

>> Is my entire concept (and Laurenz's too) fundamentally flawed? Specifically, 
>> is querying a trigger's base table in a "for each row" trigger fundamentally 
>> unsound and not supported?
> 
> My post claims that constraint triggers alone are *not* a sufficient solution 
> to validate constraints - you need additional locking or SERIALIZABLE 
> isolation to make that work reliably.

This doesn't seem to be what you wrote. These two headings

> Solving the problem with “optimistic locking” (which you explain means using 
> SERIALIZABLE)

and

> What about these “constraint triggers”?

read as if they are orthogonal schemes where the constraint trigger approach 
does not rely on SERIALIZABLE.

As I reason it, if you use the SERIALIZABLE approach, then an ordinary 
immediate AFTER EACH STATEMENT trigger will work fine—precisely because of how 
that isolation level is defined. So here, a deferred constraint trigger isn't 
needed and brings no value.

This implies that if a deferred constraint trigger is to have any utility, it 
must be safe to use it (as I tested it) at the READ COMMITTED level. I do see 
that, though I appear to be testing this, I cannot do a reliable test because I 
cannot, in application code, open up, and exploit, a race condition window 
after COMMIT has been issued. (I *am* able to do this to expose the fact that 
"set constraints all immediate" is unsafe.)

Re: Deferred constraint trigger semantics

2022-05-11 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:

Thanks for the point-by-point reply, David.

>> ...makes no mention of what you might expect to see in an AFTER EACH ROW 
>> trigger.
> 
> ...the absence of a comment declaring a guarantee of order means that, like 
> the comment for the row-level BEFORE trigger, the row-level AFTER row 
> ordering is not guaranteed (even if one cannot produce a counter-example in 
> today's codebase).

Got it!

>> ...unless anybody contradicts me.
> 
> Caveat emptor...? I wouldn't be surprised that doing so is technically 
> possible in all cases - as to whether a particular algorithm is sound, to 
> some extent, isn't something we try to predict. We do try to describe all the 
> known interactions though - and let the user work within what those mean for 
> them.

Got it again!

>> ...implies that there's no such unpredictability in the AFTER EACH ROW cases.
> 
> I would not read it that way. In general, absence of mention of 
> predictability like this means there is none - that some other sentence goes 
> into more detail doesn't change that.

OK.

>> But there has to be a subtle caveat here for the deferred constraint trigger 
>> when the txn changes two or more tables, all of which participate in the 
>> query that the trigger function issues… The "raise info" output below 
>> illustrates my point (n changes from 5 to 8).
> 
> I'm failing to see the deferral aspect of that example. First statement 
> finishes, sees the 5 inserts, next statement finishes, sees 3 more inserts. 
> Not, both statements finish, triggers fire, triggers see all 8 inserts (which 
> I suspect they will if you actually perform deferral).

Oops. I did a copy-and-paste error on going from my test env. to email and 
missed out the "deferral" that I'd intended. For completeness, here's the test 
that I meant:

create table t1(k serial primary key, v int not null);
create table t2(k serial primary key, v int not null);

create function trg_fn()
  returns trigger
  language plpgsql
as $body$
declare
  n int not null := 0;
begin
  n := (select count(*) from t1) + (select count(*) from t2);
  raise info 'trg fired. new.v = %, n = %', new.v, n;
  return new;
end;
$body$;

create constraint trigger trg
after insert on t1
for each row
execute function trg_fn();

create constraint trigger trg
after insert on t2
initially deferred
for each row
execute function trg_fn();

set default_transaction_isolation = 'read committed';
do $body$
begin
  insert into t1(v)
  values (10), (20), (30), (40), (50);

  insert into t2(v)
  values (60), (70), (80);
end;
$body$;

It adds the "initially deferred" decoration to the "create constraint trigger" 
statement. This is (still) the result:

INFO:  trg fired. new.v = 10, n = 5
INFO:  trg fired. new.v = 20, n = 5
INFO:  trg fired. new.v = 30, n = 5
INFO:  trg fired. new.v = 40, n = 5
INFO:  trg fired. new.v = 50, n = 5
INFO:  trg fired. new.v = 60, n = 8
INFO:  trg fired. new.v = 70, n = 8
INFO:  trg fired. new.v = 80, n = 8

Even though both inserts have completed by commit time, only the trigger firing 
caused by the second statement sees the final state that obtains the moment 
before commit. The first statement sees only the state after it finishes and 
before the second statement executes. You said « I suspect [that both 
statements will see the final state] if you actually perform deferral ». My 
test shows that this is not the case.

Did I misunderstand you? Or does this result surprise you? If it does, do you 
think that this is a bug?

> ...Your test case for the deferred constraint, that supposedly allows for the 
> insertion of invalid data per the specification of the constraint trigger, 
> isn't something I've worked through yet; and as written reads like a bug 
> report.

It's not a report of a PG bug. Rather, it shows how an application programmer 
might write a bug in *their* code. When "set constraints all immediate" is 
used, it opens a race condition window between its execution and the commit. 
I'd speculated on that earlier. So I felt that I should show a self-contained 
demo of this possible trap for the application programmer. As long as  "set 
constraints all immediate" is not used, the demo shows proper behavior. Of 
course, it moves the constraint violation error to commit time—and this means 
that PL/pgSQL code cannot handle it (as discussed at length elsewhere).

I appreciate that using  "set constraints all immediate" is useful in many 
scenarios to allow handling the violation error in PL/pgSQL. (Thanks again for 
that tip, David.) For example, and as I reason it, the famous "mandatory 
one-to-one relationship" use-case is safe when you use this technique. This 
use-case needs mutual FK constraints between the two tables. But you have to 
insert one of the two rows (call it "main") that are so related before the 
other (call it "extra"). And so the mutual FK constraints cannot be satisfied 
until both new rows are in place. 
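For concreteness, a sketch of that pattern (the tables "main" and "extra" and
their mutually deferrable FK constraints are assumed to exist; the names are
made up):

do $body$
begin
  insert into main(k) values (1);
  insert into extra(k, main_k) values (1, 1);
  set constraints all immediate;  -- the deferred checks fire here, inside the block
exception
  when foreign_key_violation then
    raise info 'caught: %', sqlerrm;
end;
$body$;

Without the "set constraints all immediate", the same violation would surface
only at commit time, outside the reach of the exception handler.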

Re: Deferred constraint trigger semantics

2022-05-11 Thread Laurenz Albe
On Tue, 2022-05-10 at 17:46 -0700, Bryn Llewellyn wrote:
> I looked at the sections "CREATE TRIGGER" and "Chapter 39. Triggers" in the 
> Current PG doc.
> But I failed to find any information about the semantics of the deferred 
> constraint trigger
> or about the use cases that motivated this feature. Nor could I find any code 
> examples.
> Internet Search turned up this 2019 post by Laurenz Albe—but nothing else 
> at all.
> 
> https://www.cybertec-postgresql.com/en/triggers-to-enforce-constraints/
> 
> (This is why I CC'd you, Laurenz.)

So I guess I should answer.

About the starting paragraph of your mail: Constraint triggers are a syntactic 
leftover
from the way that triggers are implemented in PostgreSQL.  There is different 
syntax now,
but it was decided to leave constraint triggers, since they may have some use.

> [Lots of ruminations and wandering thoughts]

Sorry, that was too much for me to comment on - that would require a mid-sized
article.

> Is my entire concept (and Laurenz's too) fundamentally flawed? Specifically, 
> is
> querying a trigger's base table in a "for each row" trigger fundamentally 
> unsound
> and not supported? (In Oracle Database, it causes the notorious "mutating 
> table"
> runtime error.)

My post claims that constraint triggers alone are *not* a sufficient solution to
validate constraints - you need additional locking or SERIALIZABLE isolation to
make that work reliably.

That does not mean that using constraint triggers is unsound or unsupported,
and the fact that Oracle's implementation of transaction isolation is somewhat
shoddy has little impact on that.

Yours,
Laurenz Albe




Re: Deferred constraint trigger semantics

2022-05-10 Thread David G. Johnston
On Tue, May 10, 2022 at 7:52 PM Bryn Llewellyn  wrote:

>
> *david.g.johns...@gmail.com  wrote:*
>
> * However, SQL commands executed in a row-level BEFORE trigger will see
> the effects of data changes for rows previously processed in the same outer
> command. This requires caution, since the ordering of these change events
> is not in general predictable; an SQL command that affects multiple rows
> can visit the rows in any order.
> »
>
> Strangely, the wording here, explicit as it is, makes no mention of what
> you might expect to see in an AFTER EACH ROW trigger.
>

Because the previous, quoted, paragraph did that.  The absence of a comment
declaring a guarantee of order means that, like the comment for the row-level
BEFORE trigger, the row-level AFTER row ordering is not guaranteed (even if
one cannot produce a counter-example in today's codebase).

> I'll take this to be the case unless anybody contradicts me.

Caveat emptor...?  I wouldn't be surprised that doing so is technically
possible in all cases - as to whether a particular algorithm is sound, to
some extent, isn't something we try to predict.  We do try to describe all
the known interactions though - and let the user work within what those
mean for them.


> The mention of unpredictable results in the third bullet in the BEFORE
> case implies that there's no such unpredictability in the AFTER EACH ROW
> cases.
>

I would not read it that way.  In general, absence of mention of
predictability like this means there is none - that some other sentence
goes into more detail doesn't change that.

> But there has to be a subtle caveat here for the deferred constraint
> trigger when the txn changes two or more tables, all of which participate
> in the query that the trigger function issues. I'll assume that you (all)
> know what I mean. The "raise info" output below illustrates my point (n
> changes from 5 to 8).
>

I'm failing to see the deferral aspect of that example.  First statement
finishes, sees the 5 inserts, next statement finishes, sees 3 more
inserts.  Not, both statements finish, triggers fire, triggers see all 8
inserts (which I suspect they will if you actually perform deferral).


> the actual order in which I see the "raise info" output is unpredictable
> while the values shown in each *are* predictable. Can I rely on this rule?
>

Absent a deferred constraint I would say yes.  Your test case for the
deferred constraint, that supposedly allows for the insertion of invalid
data per the specification of the constraint trigger, isn't something I've
worked through yet; and as written reads like a bug report.


> Is there a fundamental reason why a deferred AFTER EACH STATEMENT
> constraint trigger is not allowed? Nothing in what is explained in the
> "Overview of Trigger Behavior" and "Visibility of Data Changes" sections
> lets me see why the present restriction is needed.
>
>
I imagine having to keep around a working set of what are the changed
records is both memory intensive and also problematic should a future
statement make yet more changes to the table.  This is also an area that
the SQL Standard does make rules in.  And given that constraints are
defined per-row everywhere else there is a pull to not push the envelope of
our extension too far.
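For what it's worth, the non-deferrable statement-level form that does keep the
changed rows around already exists, via a transition table (a sketch using the
t1 table from the thread's test case; trg_stmt_fn is a made-up function name):

create trigger trg_stmt
after insert on t1
referencing new table as new_rows
for each statement
execute function trg_stmt_fn();

Inside trg_stmt_fn(), "select count(*) from new_rows" sees exactly the rows
inserted by the triggering statement, which hints at the bookkeeping a deferred
variant would have to carry all the way to commit time.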

David J.


Re: Deferred constraint trigger semantics

2022-05-10 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> INFO:  trg fired. new.v = 80, n = 8
>> 
>> It shows the expected "new" value each time it fires. And yet the query 
>> reflects the table content on statement completion. This seems to be very 
>> strange.
> 
> From the documentation:
> 
> "Row-level BEFORE triggers fire immediately before a particular row is 
> operated on, while row-level AFTER triggers fire at the end of the statement 
> (but before any statement-level AFTER triggers)."
> 
> https://www.postgresql.org/docs/current/trigger-definition.html

Thanks, David. Those sentences were too deeply buried, in a page with no 
subsections, for me to spot by ordinary searching in the page. The term 
"Row-level BEFORE" trigger subverted my search for "BEFORE EACH ROW" 
trigger—which is the term that I'm used to. This is another lesson for me to 
read every word in what looks like relevant doc, from start to finish, like a 
book. I'm chastened.

The next section, "39.2. Visibility of Data Changes"
https://www.postgresql.org/docs/current/trigger-datachanges.html

says this:

«
* Statement-level triggers follow simple visibility rules: none of the changes 
made by a statement are visible to statement-level BEFORE triggers, whereas all 
modifications are visible to statement-level AFTER triggers.

* The data change (insertion, update, or deletion) causing the trigger to fire 
is naturally not visible to SQL commands executed in a row-level BEFORE 
trigger, because it hasn't happened yet.

* However, SQL commands executed in a row-level BEFORE trigger will see the 
effects of data changes for rows previously processed in the same outer 
command. This requires caution, since the ordering of these change events is 
not in general predictable; an SQL command that affects multiple rows can visit 
the rows in any order.
»

Strangely, the wording here, explicit as it is, makes no mention of what you 
might expect to see in an AFTER EACH ROW trigger. It's a bit of a stretch, but 
putting the sentences from the previous section that you quoted together with 
these three bullets leads to this conclusion: querying the trigger's base 
table's content *is* allowed from the trigger's function at all five timing 
points: BEFORE and AFTER EACH ROW (not deferred), BEFORE and AFTER EACH 
STATEMENT (not deferred), and AFTER EACH ROW (deferred to commit time). 
I'll take this to be the case unless anybody contradicts me.

The mention of unpredictable results in the third bullet in the BEFORE case 
implies that there's no such unpredictability in the AFTER EACH ROW cases. But 
there has to be a subtle caveat here for the deferred constraint trigger when 
the txn changes two or more tables, all of which participate in the query that 
the trigger function issues. I'll assume that you (all) know what I mean. The 
"raise info" output below illustrates my point (n changes from 5 to 8). But 
this seems to be sound inference from the rules that were stated. I'll take 
this, too, to be the case unless anybody contradicts me.

I assume, though, that considering this output that I showed in my original 
mail:

INFO:  trg fired. new.v = 10, n = 5
INFO:  trg fired. new.v = 20, n = 5
INFO:  trg fired. new.v = 30, n = 5
INFO:  trg fired. new.v = 40, n = 5
INFO:  trg fired. new.v = 50, n = 5
INFO:  trg fired. new.v = 60, n = 8
INFO:  trg fired. new.v = 70, n = 8
INFO:  trg fired. new.v = 80, n = 8

the actual order in which I see the "raise info" output is unpredictable while 
the values shown in each *are* predictable. Can I rely on this rule?

Is there a fundamental reason why a deferred AFTER EACH STATEMENT constraint 
trigger is not allowed? Nothing in what is explained in the "Overview of 
Trigger Behavior" and "Visibility of Data Changes" sections lets me see why the 
present restriction is needed.



Re: Deferred constraint trigger semantics

2022-05-10 Thread David G. Johnston
On Tue, May 10, 2022 at 5:47 PM Bryn Llewellyn  wrote:

>
> INFO:  trg fired. new.v = 80, n = 8
>
> It shows the expected "new" value each time it fires. And yet the query
> reflects the table content on statement completion. This seems to be very
> strange.
>

From the documentation:

"Row-level BEFORE triggers fire immediately before a particular row is
operated on, while row-level AFTER triggers fire at the end of the
statement (but before any statement-level AFTER triggers)."

https://www.postgresql.org/docs/current/trigger-definition.html

David J.


Re: [Extern] Re: postgres event trigger workaround

2022-01-15 Thread Дмитрий Иванов
The goal was to limit access to the schema to users who manage roles, in
all likelihood to a specific group of administrators. This can be done with
this solution. I have no problem with managing roles through plpgsql
functions. There has been no need to create databases until now. In my
solution, the schema is a data element, for flexibility.
--
Regards, Dmitry!


Sat, 15 Jan 2022 at 10:01, Julien Rouhaud :

> Hi,
>
> On Sat, Jan 15, 2022 at 08:36:21AM +0500, Дмитрий Иванов wrote:
> > In my solution, all users don't need direct access to the schema because
> > you have to use the functional API to access it. If you can manage users
> > with functions, you can close the schema in the same way.
> > Usually the function is executed with the permissions of the calling
> user,
> > which requires permissions for all affected entities. However, if you
> > specify the "SECURITY DEFINER" parameter at creation, the function will
> be
> > executed with the owner's permissions. The owner of the function has no
> > login permissions but has permissions on the affected entities. In this
> way
> > you will close the schema from the roles that have rights to the role
> > management functions.
>
> Sure you can solve most problems with that.  But you can't create a
> database
> (or a tablespace) from a function so this approach wouldn't cover all of
> OP's
> needs, as different approach would be needed for role and db creation.
>


Re: [Extern] Re: postgres event trigger workaround

2022-01-14 Thread Julien Rouhaud
Hi,

On Sat, Jan 15, 2022 at 08:36:21AM +0500, Дмитрий Иванов wrote:
> In my solution, all users don't need direct access to the schema because
> you have to use the functional API to access it. If you can manage users
> with functions, you can close the schema in the same way.
> Usually the function is executed with the permissions of the calling user,
> which requires permissions for all affected entities. However, if you
> specify the "SECURITY DEFINER" parameter at creation, the function will be
> executed with the owner's permissions. The owner of the function has no
> login permissions but has permissions on the affected entities. In this way
> you will close the schema from the roles that have rights to the role
> management functions.

Sure you can solve most problems with that.  But you can't create a database
(or a tablespace) from a function so this approach wouldn't cover all of OP's
needs, as different approach would be needed for role and db creation.




Re: [Extern] Re: postgres event trigger workaround

2022-01-14 Thread Дмитрий Иванов
Hi
In my solution, all users don't need direct access to the schema because
you have to use the functional API to access it. If you can manage users
with functions, you can close the schema in the same way.
Usually the function is executed with the permissions of the calling user,
which requires permissions for all affected entities. However, if you
specify the "SECURITY DEFINER" parameter at creation, the function will be
executed with the owner's permissions. The owner of the function has no
login permissions but has permissions on the affected entities. In this way
you will close the schema from the roles that have rights to the role
management functions.
--
Regards, Dmitry!


Fri, 14 Jan 2022 at 15:24, Julien Rouhaud :

> Hi,
>
> On Fri, Jan 14, 2022 at 09:01:12AM +, Zwettler Markus (OIZ) wrote:
> >
> > We have the need to separate user (role) management from infrastructure
> (database) management.
> >
> > Granting CREATEROLE to any role also allows this role to create other
> roles having CREATEDB privileges and therefore also getting CREATEDB
> privileges.
> >
> > My use case would have been to grant CREATEROLE to any role while still
> restricting "create database".
>
> I see, that's indeed a problem.  You could probably enforce that using some
> custom module to enforce additional rules on top of CREATE ROLE
> processing, but
> it would have to be written in C.
>
>
>


Re: [Extern] Re: postgres event trigger workaround

2022-01-14 Thread Julien Rouhaud
Hi,

On Fri, Jan 14, 2022 at 09:01:12AM +, Zwettler Markus (OIZ) wrote:
> 
> We have the need to separate user (role) management from infrastructure 
> (database) management.
> 
> Granting CREATEROLE to any role also allows this role to create other roles 
> having CREATEDB privileges and therefore also getting CREATEDB privileges.
> 
> My use case would have been to grant CREATEROLE to any role while still 
> restricting "create database".

I see, that's indeed a problem.  You could probably enforce that using some
custom module to enforce additional rules on top of CREATE ROLE processing, but
it would have to be written in C.




Re: [Extern] Re: postgres event trigger workaround

2022-01-14 Thread Dominique Devienne
On Fri, Jan 14, 2022 at 10:01 AM Zwettler Markus (OIZ) <
markus.zwett...@zuerich.ch> wrote:

> We have the need to separate user (role) management from infrastructure
> (database) management.
> Granting CREATEROLE to any role also allows this role to create other
> roles having CREATEDB privileges and therefore also getting CREATEDB
> privileges.
> My use case would have been to grant CREATEROLE to any role while still
> restricting "create database".
>

I also wish for more granular privileges around ROLEs.
Like a CREATEROLE role that can only DROP the ROLEs it created (or created
by other ROLEs it's a member of).
Or a (NOLOGIN) ROLE that's restricted to have table privileges in some
SCHEMAs only, or in SCHEMAs owned by a given ROLE only. Or ROLEs local to a
given DATABASE only. These kinds of things.

An idea I'm toying with is having a SCHEMA with (DEFINER RIGHTS) functions
that acts as a wrapper around CREATE/DROP ROLE to impose custom
restrictions.
It would record in private tables more context at creating times, and use
that context to restrict the DROPs.
Could also solve your CREATEROLE vs CREATEDB conundrum maybe.
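A rough sketch of that wrapper idea (everything here is hypothetical: the
"admin" schema, the registry table, and the validation rule; the function's
owner would need CREATEROLE or superuser):

create table admin.role_registry(
  rolename   text primary key,
  created_by text not null default current_user,
  created_at timestamptz not null default now()
);

create function admin.create_app_role(p_name text)
  returns void
  language plpgsql
  security definer
  set search_path = admin, pg_catalog
as $body$
begin
  -- impose whatever custom restrictions are wanted before creating the role
  if p_name !~ '^[a-z_][a-z0-9_]*$' then
    raise exception 'invalid role name: %', p_name;
  end if;
  execute format('create role %I nologin', p_name);
  insert into role_registry(rolename) values (p_name);
end;
$body$;

A matching admin.drop_app_role() would consult role_registry and refuse to drop
roles that the caller (or a group it belongs to) did not create.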

I have no time to develop that idea ATM though... Nor am I sure it would
work.
And it would force my code to rip out its current direct SQL DDLs, replacing
them with equivalent functions from that mediator "admin" schema.
It would also not solve all my issues, like some ROLEs being restricted to
GRANTs from a given SCHEMA.
(but maybe event trigger would allow to intercept that to check those too?)

Just thinking aloud :). --DD


AW: [Extern] Re: postgres event trigger workaround

2022-01-14 Thread Zwettler Markus (OIZ)
> 
> Hi,
> 
> On Wed, Jan 12, 2022 at 11:57:45AM +, Zwettler Markus (OIZ) wrote:
> >
> > PG event triggers are not firing on CREATE ROLE, CREATE DATABASE,
> > CREATE TABLESPACE by definition (would be nice if they do).
> >
> > Is there any workaround to react with ddl_command_start behavior on
> > such an event?
> 
> That's not possible.  The limitation exists because those objects are shared 
> objects
> and therefore could be created from any database in the cluster.
> 
> What is your use case?  Maybe you could rely on logging all DDL instead for
> instance.
> 


We have the need to separate user (role) management from infrastructure 
(database) management.

Granting CREATEROLE to any role also allows this role to create other roles 
having CREATEDB privileges and therefore also getting CREATEDB privileges.

My use case would have been to grant CREATEROLE to any role while still 
restricting "create database".





Re: postgres event trigger workaround

2022-01-12 Thread Julien Rouhaud
Hi,

On Wed, Jan 12, 2022 at 11:57:45AM +, Zwettler Markus (OIZ) wrote:
> 
> PG event triggers are not firing on CREATE ROLE, CREATE DATABASE, CREATE
> TABLESPACE by definition (would be nice if they do).
> 
> Is there any workaround to react with ddl_command_start behavior on such an
> event?

That's not possible.  The limitation exists because those objects are shared
objects and therefore could be created from any database in the cluster.

What is your use case?  Maybe you could rely on logging all DDL instead for
instance.




Re: Using a TRIGGER with window functions.

2021-08-17 Thread Pól Ua Laoínecháin
Hi all,

> If you want to update many rows after updating N rows,


Thanks to those who helped me with this - I think that Michael Lewis
has given me some good ideas on how I should go about this. Also,
thanks to those who took the time to explain why some of my ideas were
not well grounded (wouldn't be the first time! :-) ).

Rgs,


Pól...




Re: Using a TRIGGER with window functions.

2021-08-16 Thread Tom Lane
I wrote:
> "David G. Johnston"  writes:
>> On Sun, Aug 15, 2021 at 1:24 AM Pól Ua Laoínecháin  wrote:
>>> Why are window functions not allowed in UPDATEs

> I suspect the error check was just copied from the aggregate-function
> case.  It's clear why we can't put aggregates in UPDATE: there'd no
> longer be a one-to-one correspondence with original rows.  But that
> argument doesn't hold for window functions, so at least in principle
> it seems like we could allow it.  The utility doesn't seem very high
> though, so if it takes more work than "delete the error check" I'm
> not sure anyone will care to bother.

Out of curiosity, I did spend a few minutes poking at this, and as
I feared it's not all that trivial.  I think that the planner and
executor might mostly Just Work, but there are various gaps in the
parser.  One interesting point is that the UPDATE syntax has no
provision for a WINDOW clause, so there'd be no way to share window
definitions across different window functions.  While that's not exactly
a deal-breaker, it'd be weird compared to how things work in SELECT.
Would we be willing to go to the trouble of adding such a clause to the
syntax?  I dunno; it'd certainly enlarge the footprint of a patch for
this by a lot.

regards, tom lane




Re: Using a TRIGGER with window functions.

2021-08-16 Thread Tom Lane
"David G. Johnston"  writes:
> On Sun, Aug 15, 2021 at 1:24 AM Pól Ua Laoínecháin  wrote:
>> Why are window functions not allowed in UPDATEs

> You can get it to work via a subquery/FROM clause computation.  That it
> doesn't work directly in the SET clause I don't know off-hand, but most
> likely the development and runtime cost of making it work isn't worth the
> benefit.

I suspect the error check was just copied from the aggregate-function
case.  It's clear why we can't put aggregates in UPDATE: there'd no
longer be a one-to-one correspondence with original rows.  But that
argument doesn't hold for window functions, so at least in principle
it seems like we could allow it.  The utility doesn't seem very high
though, so if it takes more work than "delete the error check" I'm
not sure anyone will care to bother.

>> Why aren't window functions allowed in GENERATED columns?

> Because the expressions allowed in GENERATED can only immutably reference
> other columns in the same row.  The underlying rationale is probably quite
> similar to the UPDATE comment above.

Yeah --- GENERATED is supposed to be immutable, and there's about 0%
chance that anything involving a subquery or window function would
really be immutable.  I think there are implementation issues too,
but they're not worth getting into given that point.

regards, tom lane




Re: Using a TRIGGER with window functions.

2021-08-16 Thread David G. Johnston
On Sun, Aug 15, 2021 at 1:24 AM Pól Ua Laoínecháin  wrote:

>
> ERROR: window functions are not allowed in UPDATE
> LINE 2: SET sort_order = activity_id - FIRST_VALUE(activity_id)
>
> Why are window functions not allowed in UPDATEs
>

You can get it to work via a subquery/FROM clause computation.  That it
doesn't work directly in the SET clause I don't know off-hand, but most
likely the development and runtime cost of making it work isn't worth the
benefit.
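A sketch of that workaround, assuming the t1 (user_id, activity_id, sort_order)
table quoted below, and that activity_id identifies a row:

UPDATE t1 AS t
SET sort_order = s.rn - 1
FROM (
  SELECT activity_id,
         ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY activity_id) AS rn
  FROM t1
) AS s
WHERE s.activity_id = t.activity_id;

The window function runs in the FROM-clause subquery, where it is allowed, and
the SET clause merely consumes its result.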

> Why aren't window functions allowed in GENERATED columns?


Because the expressions allowed in GENERATED can only immutably reference
other columns in the same row.  The underlying rationale is probably quite
similar to the UPDATE comment above.

>   INSERT INTO t1 (user_id, activity_id, sort_order)
>   VALUES (NEW.user_id, NEW.activity_id, (SELECT so FROM cte));
>
>
Yes, an insert trigger that performs a literal insert into the same table
is fundamentally broken due to exhibiting infinite loop behavior.  Same
goes for update/delete - during trigger execution you are already in the
middle of doing the required thing.


> If triggers aren't the ideal solution, I'm open to other avenues
>

Off the top of my head - I'd have a second table that is dedicated to
dealing with ordering.  It would have a one-to-one relationship with the
main table.  Upon executing DML against the main table, ideally for a
single user_id at a time, remove all of the records from the ordering table
for that user_id and then insert them anew.  I would probably do this
within functions and disallow direct access to the main and ordering tables
generally - but triggers can probably be made to work.

Or just discard the idea of pre-computing this data and wrap the ordering
logic in a view.
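A sketch of that view, under the same assumptions about t1:

CREATE VIEW t1_ordered AS
SELECT user_id,
       activity_id,
       ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY activity_id) - 1 AS sort_order
FROM t1;

No trigger and no stored sort_order column to maintain; the ordering is
computed at read time.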

David J.


Re: Using a TRIGGER with window functions.

2021-08-16 Thread Michael Lewis
If you want to update many rows after updating N rows, you want an after
STATEMENT trigger which is executed after insert, update or delete. You
also want to ensure that only that function maintains the sort_order field and
that you don't update recursively, perhaps by executing that function when
NEW.sort_order IS NOT DISTINCT FROM old.sort_order to prevent an infinite
loop, and executing a different function when NEW.sort_order IS DISTINCT
FROM OLD.sort_order such that you ensure all other fields have not changed.

By the way, your window function could be row_number() - 1 instead of
activity_id - FIRST_VALUE(activity_id).
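A sketch of that recursion guard (the function name is made up; note that a
WHEN clause may reference NEW/OLD only on a row-level trigger, so the guard is
shown at row level):

CREATE TRIGGER t1_maintain_order
AFTER UPDATE ON t1
FOR EACH ROW
WHEN (NEW.sort_order IS NOT DISTINCT FROM OLD.sort_order)
EXECUTE FUNCTION resequence_user_activities();

The maintenance function's own UPDATE changes sort_order, so the WHEN condition
is false for it and the trigger does not re-fire.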


Re: Schema in trigger in logical replication

2021-03-04 Thread Tom Lane
Adrian Klaver  writes:
> On 3/3/21 3:58 PM, Fontana Daniel C. (Desartec S.R.L.) wrote:
>> When the update is manual, it works.
>> It does not work when the update is done using logical replication.
>> It is as if the logical replication worker did not use the search_path

> Replication would imply at least two database instances in use. If they 
> both don't have the same search_path set then there would be a problem. 

I'm fairly sure that replication workers run with search_path set
to just "pg_catalog" for security reasons.  Any user-written code
that needs to execute in a replication worker *must* schema-qualify
all references to non-system objects.

As a general rule, code that runs in expression indexes, check
constraints, and the like needs to be written to not make assumptions
about what search path it's invoked with.  Just fix it; your life
will be less painful.
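For example, rather than editing every statement in the trigger function's
body, the search path can be pinned to the function itself (shown here for the
function from this thread):

ALTER FUNCTION dba.ft_pos_sync_eco_tx() SET search_path = dba, pg_catalog;

With that in place, the body resolves pos_sync_eco_rx the same way whether it
runs interactively or inside a replication worker.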

regards, tom lane




Re: Schema in trigger in logical replication

2021-03-04 Thread Adrian Klaver

On 3/3/21 3:58 PM, Fontana Daniel C. (Desartec S.R.L.) wrote:
Please reply to list also.
Ccing list.
Also please do not top post, use inline and/or bottom posting.


When the update is manual, it works.
It does not work when the update is done using logical replication.
It is as if the logical replication worker did not use the search_path.


Replication would imply at least two database instances in use. If they 
both don't have the same search_path set then there would be a problem. 
In psql, does:


SHOW search_path;

return the same thing on both sides of the replication?

FYI, your life will be easier if you schema qualify objects whenever 
possible in any case.




-Mensaje original-
De: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Enviado el: miércoles, 3 de marzo de 2021 20:19
Para: Fontana Daniel C. (Desartec S.R.L.); pgsql-general@lists.postgresql.org
Asunto: Re: Schema in trigger in logical replication

On 3/3/21 2:35 PM, Fontana Daniel C. (Desartec S.R.L.) wrote:

Using postgres 12.5 in DBA schema, this trigger is executed when the
table is updated through logical replication. Why is it necessary to
name the schema for it to work?


Because the search_path does not include the schema?


When I update the table manually, it does work.

Example.

This trigger function does not work

CREATE FUNCTION dba.ft_pos_sync_eco_tx()
  RETURNS trigger
  LANGUAGE 'plpgsql'
  COST 100
  VOLATILE NOT LEAKPROOF
AS $BODY$
begin
INSERT INTO pos_sync_eco_rx
  ( id_terminales,
    fecha_y_hora_tx,
    fecha_y_hora_rx,
    dato,
    usuario,
    fecha_y_hora_proceso )
  VALUES ( new.id_terminales,
           localtimestamp,
           localtimestamp,
           new.dato,
           new.usuario,
           localtimestamp )
ON CONFLICT (id_terminales)
DO UPDATE SET (fecha_y_hora_tx,
               fecha_y_hora_rx,
               dato,
               usuario,
               fecha_y_hora_proceso ) =
              (new.fecha_y_hora_tx,
               localtimestamp,
               new.dato,
               new.usuario,
               new.fecha_y_hora_proceso );

return  new;

end
$BODY$;

This trigger function, however, does work

CREATE FUNCTION dba.ft_pos_sync_eco_tx()
  RETURNS trigger
  LANGUAGE 'plpgsql'
  COST 100
  VOLATILE NOT LEAKPROOF
AS $BODY$
begin
INSERT INTO dba.pos_sync_eco_rx
  ( id_terminales,
    fecha_y_hora_tx,
    fecha_y_hora_rx,
    dato,
    usuario,
    fecha_y_hora_proceso )
  VALUES ( new.id_terminales,
           localtimestamp,
           localtimestamp,
           new.dato,
           new.usuario,
           localtimestamp )
ON CONFLICT (id_terminales)
DO UPDATE SET (fecha_y_hora_tx,
               fecha_y_hora_rx,
               dato,
               usuario,
               fecha_y_hora_proceso ) =
              (new.fecha_y_hora_tx,
               localtimestamp,
               new.dato,
               new.usuario,
               new.fecha_y_hora_proceso );

return  new;

end
$BODY$;





--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Schema in trigger in logical replication

2021-03-03 Thread Adrian Klaver

On 3/3/21 2:35 PM, Fontana Daniel C. (Desartec S.R.L.) wrote:

Using postgres 12.5 in DBA schema, this trigger is executed when the table
is updated through logical replication. Why is it necessary to name the
schema for it to work?


Because the search_path does not include the schema?


When I update the table manually, it does work.

Example.

This trigger function does not work

CREATE FUNCTION dba.ft_pos_sync_eco_tx()
 RETURNS trigger
 LANGUAGE 'plpgsql'
 COST 100
 VOLATILE NOT LEAKPROOF
AS $BODY$
begin
INSERT INTO pos_sync_eco_rx
  ( id_terminales,
    fecha_y_hora_tx,
    fecha_y_hora_rx,
    dato,
    usuario,
    fecha_y_hora_proceso )
  VALUES ( new.id_terminales,
           localtimestamp,
           localtimestamp,
           new.dato,
           new.usuario,
           localtimestamp )
ON CONFLICT (id_terminales)
DO UPDATE SET (fecha_y_hora_tx,
               fecha_y_hora_rx,
               dato,
               usuario,
               fecha_y_hora_proceso ) =
              (new.fecha_y_hora_tx,
               localtimestamp,
               new.dato,
               new.usuario,
               new.fecha_y_hora_proceso );

return  new;

end
$BODY$;

This trigger function, however, does work

CREATE FUNCTION dba.ft_pos_sync_eco_tx()
 RETURNS trigger
 LANGUAGE 'plpgsql'
 COST 100
 VOLATILE NOT LEAKPROOF
AS $BODY$
begin
INSERT INTO dba.pos_sync_eco_rx
  ( id_terminales,
    fecha_y_hora_tx,
    fecha_y_hora_rx,
    dato,
    usuario,
    fecha_y_hora_proceso )
  VALUES ( new.id_terminales,
           localtimestamp,
           localtimestamp,
           new.dato,
           new.usuario,
           localtimestamp )
ON CONFLICT (id_terminales)
DO UPDATE SET (fecha_y_hora_tx,
               fecha_y_hora_rx,
               dato,
               usuario,
               fecha_y_hora_proceso ) =
              (new.fecha_y_hora_tx,
               localtimestamp,
               new.dato,
               new.usuario,
               new.fecha_y_hora_proceso );

return  new;

end
$BODY$;





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Problem with trigger function

2021-02-11 Thread David G. Johnston
On Thursday, February 11, 2021, Steve Baldwin 
wrote:

> David, from what I can see of the docs, for 9.6 it is PROCEDURE.  It seems
> FUNCTION didn't appear until 11.
>

Indeed.  I didn’t pay attention to the version.

David J.


Re: Problem with trigger function

2021-02-11 Thread Steve Baldwin
David, from what I can see of the docs, for 9.6 it is PROCEDURE.  It seems
FUNCTION didn't appear until 11.

Steve

On Fri, Feb 12, 2021 at 7:05 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

>
> On Thursday, February 11, 2021, Steve Baldwin 
> wrote:
>
>> Try ... EXECUTE PROCEDURE customer_num_informix()
>>
>
>
> FUNCTION, not PROCEDURE
>
> David J.
>


Re: Problem with trigger function

2021-02-11 Thread David G. Johnston
On Thursday, February 11, 2021, Steve Baldwin 
wrote:

> Try ... EXECUTE PROCEDURE customer_num_informix()
>


FUNCTION, not PROCEDURE

David J.


Re: Problem with trigger function

2021-02-11 Thread Dave Cramer
FWIW, messing with serial numbers like this is pretty risky. Sequences have
transactional semantics for a reason.

Dave Cramer
www.postgres.rocks


On Thu, 11 Feb 2021 at 14:57, Steve Baldwin  wrote:

> Try ... EXECUTE PROCEDURE customer_num_informix()
>
> Steve
>
> On Fri, Feb 12, 2021 at 6:47 AM James B. Byrne 
> wrote:
>
>> I am trying to implement a trigger in a PostgreSQL-9.6.17 database:
>>
>> CREATE OR REPLACE FUNCTION customer_num_informix()
>>   RETURNS trigger AS $$
>> BEGIN
>>   -- An Aubit/Informix 4GL INSERT statement passes the value 0 to Informix
>>   -- DBMS for columns that have the SERIAL data type.  Informix will then
>>   -- use the incremented serial number in place of 0. PostgreSQL instead
>>   -- will simply take the value 0 and replace the incremented serial
>> number.
>>   -- This trigger function emulates the Informix DBMS behaviour.
>>   --
>>   -- The NEW variable contains the data for the row to be INSERTed or
>>   -- UPDATEd. It, and its complement OLD used for UPDATE and DELETE,
>>   -- are automatically created and populated by PostgreSQL whenever
>>   -- a data-change function is called.
>>   --
>>   IF NEW.customer_num = 0 THEN
>>     SELECT nextval('customer_customer_num_seq') INTO NEW.customer_num;
>>   ELSE
>>     IF NEW.customer_num > 0 THEN
>>       PERFORM setval('customer_customer_num_seq', NEW.customer_num);
>>     END IF;
>>   END IF;
>>   RETURN NEW;
>> END;
>> $$ LANGUAGE plpgsql;
>>
>> ALTER FUNCTION public.customer_num_informix() OWNER TO hll_4gl_testing;
>>
>> CREATE TRIGGER customer_num_serial
>>   BEFORE INSERT ON customer
>> FOR EACH ROW EXECUTE customer_num_informix();
>>
>> The problem is that I am getting a syntax error on the CREATE TRIGGER
>> statement:
>>
>> ERROR:  syntax error at or near "customer_num_informix"
>> LINE 3: FOR EACH ROW EXECUTE customer_num_informix();
>>
>> I do not see what the error is. What is wrong with the syntax I used?
>>
>> --
>> ***  e-Mail is NOT a SECURE channel  ***
>> Do NOT transmit sensitive data via e-Mail
>>Unencrypted messages have no legal claim to privacy
>>  Do NOT open attachments nor follow links sent by e-Mail
>>
>> James B. Byrne        mailto:byrn...@harte-lyne.ca
>> Harte & Lyne Limited  http://www.harte-lyne.ca
>> 9 Brockley Drive  vox: +1 905 561 1241
>> Hamilton, Ontario fax: +1 905 561 0757
>> Canada  L8E 3C3
>>
>>
>>
>>


Re: Problem with trigger function

2021-02-11 Thread Steve Baldwin
Try ... EXECUTE PROCEDURE customer_num_informix()
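i.e., the statement in full, for 9.6:

CREATE TRIGGER customer_num_serial
  BEFORE INSERT ON customer
FOR EACH ROW EXECUTE PROCEDURE customer_num_informix();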

Steve

On Fri, Feb 12, 2021 at 6:47 AM James B. Byrne 
wrote:

> I am trying to implement a trigger in a PostgreSQL-9.6.17 database:
>
> CREATE OR REPLACE FUNCTION customer_num_informix()
>   RETURNS trigger AS $$
> BEGIN
>   -- An Aubit/Informix 4GL INSERT statement passes the value 0 to Informix
>   -- DBMS for columns that have the SERIAL data type.  Informix will then
>   -- use the incremented serial number in place of 0. PostgreSQL instead
>   -- will simply take the value 0 and replace the incremented serial
> number.
>   -- This trigger function emulates the Informix DBMS behaviour.
>   --
>   -- The NEW variable contains the data for the row to be INSERTed or
>   -- UPDATEd. It, and its complement OLD used for UPDATE and DELETE,
>   -- are automatically created and populated by PostgreSQL whenever
>   -- a data-change function is called.
>   --
>   IF NEW.customer_num = 0 THEN
>     SELECT nextval('customer_customer_num_seq') INTO NEW.customer_num;
>   ELSE
>     IF NEW.customer_num > 0 THEN
>       PERFORM setval('customer_customer_num_seq', NEW.customer_num);
>     END IF;
>   END IF;
>   RETURN NEW;
> END;
> $$ LANGUAGE plpgsql;
>
> ALTER FUNCTION public.customer_num_informix() OWNER TO hll_4gl_testing;
>
> CREATE TRIGGER customer_num_serial
>   BEFORE INSERT ON customer
> FOR EACH ROW EXECUTE customer_num_informix();
>
> The problem is that I am getting a syntax error on the CREATE TRIGGER
> statement:
>
> ERROR:  syntax error at or near "customer_num_informix"
> LINE 3: FOR EACH ROW EXECUTE customer_num_informix();
>
> I do not see what the error is. What is wrong with the syntax I used?
>
> --
> ***  e-Mail is NOT a SECURE channel  ***
> Do NOT transmit sensitive data via e-Mail
>Unencrypted messages have no legal claim to privacy
>  Do NOT open attachments nor follow links sent by e-Mail
>
> James B. Byrne        mailto:byrn...@harte-lyne.ca
> Harte & Lyne Limited  http://www.harte-lyne.ca
> 9 Brockley Drive  vox: +1 905 561 1241
> Hamilton, Ontario fax: +1 905 561 0757
> Canada  L8E 3C3
>
>
>
>


Re: Can I trigger an action from a coalesce ?

2020-02-24 Thread Peter J. Holzer
On 2020-02-24 05:20:49 +, sivapostg...@yahoo.com wrote:
> It could also be done by putting those values in square brackets, if
> substituted with default values.
> eg. [0]
>       4
>       45
>     [100]
> Values within square brackets are default values.

This would also work in many cases (especially if the values only have
to be displayed and not processed further).

In this case the OP wrote that "the default value is reasonable and
could actually come from the source table". I assumed that he had a
reason for this choice and wanted to preserve it.

hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"




Re: Can I trigger an action from a coalesce ?

2020-02-23 Thread sivapostg...@yahoo.com
It could also be done by putting those values in square brackets, if
substituted with default values.
eg. [0]
      4
      45
    [100]
Values within square brackets are default values.


On Sunday, 23 February, 2020, 04:52:11 pm IST, Peter J. Holzer wrote:
 On 2020-02-22 16:02:06 -0500, stan wrote:
> I have a case where if a value does not exist, I am going to use a default,
> which is easy with coalesce. But I would like to warn the user that a
> default has been supplied. The default value is reasonable, and could
> actually come from the source table, so I can't just check the value.
> I'd like to do a raise NOTICE, if the default portion of the coalesce fires.

Might I suggest a different approach?

Instead of raising a notice, add an additional column. Something like
this:

wds=> create table mytable (id serial, value int);
CREATE TABLE
Time: 127.124 ms
wds=> insert into mytable (value) values (2), (23), (null), (42), (78);
INSERT 0 5
Time: 48.223 ms
wds=> select * from mytable; 
╔════╤═══════╗
║ id │ value ║
╟────┼───────╢
║  1 │     2 ║
║  2 │    23 ║
║  3 │   (∅) ║
║  4 │    42 ║
║  5 │    78 ║
╚════╧═══════╝
(5 rows)

Time: 0.657 ms
wds=> select id, coalesce(value, 42) as value, value is null as value_was_null
wds-> from mytable;
╔════╤═══════╤════════════════╗
║ id │ value │ value_was_null ║
╟────┼───────┼────────────────╢
║  1 │     2 │ f              ║
║  2 │    23 │ f              ║
║  3 │    42 │ t              ║
║  4 │    42 │ f              ║
║  5 │    78 │ f              ║
╚════╧═══════╧════════════════╝
(5 rows)

Time: 0.247 ms

This lets the user distinguish the real 42 with id 4 from the
substituted 42 with id 3. I think this would be awkward with a notice.
At most you could tell the user "some values were substituted", but not
which ones (at least not if the query can return a large number of
rows).

        hp


-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"

Re: Can I trigger an action from a coalesce ?

2020-02-23 Thread Peter J. Holzer
On 2020-02-22 16:02:06 -0500, stan wrote:
> I have a case where if a value does not exist, I am going to use a default,
> which is easy with coalesce. But I would like to warn the user that a
> default has been supplied. The default value is reasonable, and could
> actually come from the source table, so I can't just check the value.
> I'd like to do a raise NOTICE, if the default portion of the coalesce fires.

Might I suggest a different approach?

Instead of raising a notice, add an additional column. Something like
this:

wds=> create table mytable (id serial, value int);
CREATE TABLE
Time: 127.124 ms
wds=> insert into mytable (value) values (2), (23), (null), (42), (78);
INSERT 0 5
Time: 48.223 ms
wds=> select * from mytable; 
╔════╤═══════╗
║ id │ value ║
╟────┼───────╢
║  1 │     2 ║
║  2 │    23 ║
║  3 │   (∅) ║
║  4 │    42 ║
║  5 │    78 ║
╚════╧═══════╝
(5 rows)

Time: 0.657 ms
wds=> select id, coalesce(value, 42) as value, value is null as value_was_null
wds-> from mytable;
╔════╤═══════╤════════════════╗
║ id │ value │ value_was_null ║
╟────┼───────┼────────────────╢
║  1 │     2 │ f              ║
║  2 │    23 │ f              ║
║  3 │    42 │ t              ║
║  4 │    42 │ f              ║
║  5 │    78 │ f              ║
╚════╧═══════╧════════════════╝
(5 rows)

Time: 0.247 ms

This lets the user distinguish the real 42 with id 4 from the
substituted 42 with id 3. I think this would be awkward with a notice.
At most you could tell the user "some values were substituted", but not
which ones (at least not if the query can return a large number of
rows).

hp


-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"




Re: Can I trigger an action from a coalesce ?

2020-02-22 Thread Christophe Pettus



> On Feb 22, 2020, at 14:36, Tom Lane  wrote:
> The problem that I'm worried about is premature evaluation of the
> "immutable" function, causing the NOTICE to come out once during
> query planning, independently of whether/how many times it should
> come out during execution.

Ah, good point.  My solution also does assume that a DEFAULT expression is only 
evaluated if the default is required, and that behavior isn't (afaik) a promise.

--
-- Christophe Pettus
   x...@thebuild.com





Re: Can I trigger an action from a coalesce ?

2020-02-22 Thread Tom Lane
Christophe Pettus  writes:
>> On Feb 22, 2020, at 14:02, Tom Lane  wrote:
>> It's a really bad idea to mark a function that has side-effects
>> (i.e., emitting a NOTICE) as immutable, especially if the occurrence
>> of the side-effect at well-defined times is exactly what you're
>> desirous of.

> True, and it doesn't actually need to be immutable here; just cut and pasted 
> from the wrong example.

> (That being said, I'm not coming up with a specific bad thing that a RAISE 
> NOTICE in an immutable function will cause.  Is there one?)

The problem that I'm worried about is premature evaluation of the
"immutable" function, causing the NOTICE to come out once during
query planning, independently of whether/how many times it should
come out during execution.

regards, tom lane




Re: Can I trigger an action from a coalesce ?

2020-02-22 Thread Christophe Pettus



> On Feb 22, 2020, at 14:02, Tom Lane  wrote:
> It's a really bad idea to mark a function that has side-effects
> (i.e., emitting a NOTICE) as immutable, especially if the occurrence
> of the side-effect at well-defined times is exactly what you're
> desirous of.

True, and it doesn't actually need to be immutable here; just cut and pasted 
from the wrong example.

(That being said, I'm not coming up with a specific bad thing that a RAISE 
NOTICE in an immutable function will cause.  Is there one?)

--
-- Christophe Pettus
   x...@thebuild.com





Re: Can I trigger an action from a coalesce ?

2020-02-22 Thread Tom Lane
Christophe Pettus  writes:
> Something like this:

> create function supply_default() returns int as $$
> begin
>raise notice 'Supplied default';
>return 1;
> end;
> $$ immutable language plpgsql;

It's a really bad idea to mark a function that has side-effects
(i.e., emitting a NOTICE) as immutable, especially if the occurrence
of the side-effect at well-defined times is exactly what you're
desirous of.

> xof=# create table t ( i integer default supply_default(), t text );
> CREATE TABLE
> xof=# insert into t(i, t) values (2, 'text');
> INSERT 0 1
> xof=# insert into t(t) values ('text');
> NOTICE:  Supplied default
> INSERT 0 1

Other than the mislabeled volatility, I think this will mostly work.

Another possibility is to use a before-row-insert trigger
that does something like

if new.i is null then
begin
  new.i := whatever;
  raise notice 'Supplied default';
end if;

This seems cleaner in principle, but a problem is that it can't tell
an inserted-by-default NULL from one that was intentionally supplied.
That might be OK if you never want the field to be null anyway.

regards, tom lane
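
For completeness, a minimal runnable sketch of the trigger approach described
above, assuming the mytable(id, value) example from earlier in the thread, a
default of 42, and made-up function and trigger names:

create function warn_default_value() returns trigger as $$
begin
    if new.value is null then
        new.value := 42;  -- the chosen default (an assumption for this sketch)
        raise notice 'Supplied default for value';
    end if;
    return new;
end;
$$ language plpgsql;  -- volatile by default; deliberately not marked immutable

create trigger mytable_default_value
    before insert on mytable
    for each row execute function warn_default_value();

As noted, this cannot tell an explicitly supplied NULL from an omitted column,
so it only fits cases where the field should never be null anyway.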




Re: Can I trigger an action from a coalesce ?

2020-02-22 Thread Christophe Pettus



> On Feb 22, 2020, at 13:33, stan  wrote:
> I suppose you are suggesting that the function try the original SELECT, and
> if it returns a NULL then return the default AND do the raise NOTICE?

Something like this:

create function supply_default() returns int as $$
begin
   raise notice 'Supplied default';
   return 1;
end;
$$ immutable language plpgsql;

xof=# create table t ( i integer default supply_default(), t text );
CREATE TABLE
xof=# insert into t(i, t) values (2, 'text');
INSERT 0 1
xof=# insert into t(t) values ('text');
NOTICE:  Supplied default
INSERT 0 1

--
-- Christophe Pettus
   x...@thebuild.com





Re: Can I trigger an action from a coalesce ?

2020-02-22 Thread stan
On Sat, Feb 22, 2020 at 01:06:57PM -0800, Christophe Pettus wrote:
> 
> 
> > On Feb 22, 2020, at 13:05, Adrian Klaver  wrote:
> > 
> > On 2/22/20 1:02 PM, stan wrote:
> >> I have a case where if a value does not exist, I am going to use a default,
> >> which is easy with coalesce. But I would like to warn the user that a
> >> default has been supplied. The default value is reasonable, and could
> >> actually come from the source table, so I can't just check the value.
> >> I'd like to do a raise NOTICE, if the default portion of the coalesce 
> >> fires.
> >> Anyone have a good way to accomplish this?
> > 
> > No.
> 
> You can, of course, create a PL/pgSQL function and use that as the default.
I suppose you are suggesting that the function try the original SELECT, and
if it returns a NULL then return the default AND do the raise NOTICE?

Or is there a simpler way?


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Can I trigger an action from a coalesce ?

2020-02-22 Thread Christophe Pettus



> On Feb 22, 2020, at 13:05, Adrian Klaver  wrote:
> 
> On 2/22/20 1:02 PM, stan wrote:
>> I have a case where if a value does not exist, I am going to use a default,
>> which is easy with coalesce. But I would like to warn the user that a
>> default has been supplied. The default value is reasonable, and could
>> actually come from the source table, so I can't just check the value.
>> I'd like to do a raise NOTICE, if the default portion of the coalesce fires.
>> Anyone have a good way to accomplish this?
> 
> No.

You can, of course, create a PL/pgSQL function and use that as the default.

--
-- Christophe Pettus
   x...@thebuild.com





Re: Can I trigger an action from a coalesce ?

2020-02-22 Thread Adrian Klaver

On 2/22/20 1:02 PM, stan wrote:

I have a case where if a value does not exist, I am going to use a default,
which is easy with coalesce. But I would like to warn the user that a
default has been supplied. The default value is reasonable, and could
actually come from the source table, so I can't just check the value.
I'd like to do a raise NOTICE, if the default portion of the coalesce fires.

Anyone have a good way to accomplish this?


No.


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: with and trigger

2019-05-29 Thread Tom Lane
Ron  writes:
> On 5/29/19 8:26 AM, Tom Lane wrote:
>> This doesn't explicitly talk about triggers, but I think our attitude
>> about the case you're discussing is that the results are unspecified.
>> If a trigger fired in one WITH arm tries to look at the table(s)
>> modified by other WITH arms, it might or might not see those changes.

> Are CTEs still optimization fences?
> https://www.2ndquadrant.com/en/blog/postgresql-ctes-are-optimization-fences/

Yes, if they contain data-modifying statements ... but that's not
relevant to the point at hand, which is that the run-time behavior
is not specified.

regards, tom lane




Re: with and trigger

2019-05-29 Thread Andreas Kretschmer


>
>Are CTEs still optimization fences?
>https://www.2ndquadrant.com/en/blog/postgresql-ctes-are-optimization-fences/

Yes, but not in 12.


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company
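
For context, what changed in 12 is that a side-effect-free CTE referenced only
once can now be inlined, and the old fence behavior has to be requested
explicitly with MATERIALIZED. A small sketch (table and column names made up):

-- PostgreSQL 12+: this CTE may be inlined into the outer query
with w as (select * from orders where customer_id = 42)
select * from w;

-- restore the pre-12 optimization fence explicitly
with w as materialized (select * from orders where customer_id = 42)
select * from w;

Data-modifying CTEs remain fences, as noted above.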




Re: with and trigger

2019-05-29 Thread Ron

On 5/29/19 8:26 AM, Tom Lane wrote:

PegoraroF10  writes:

We like to use WITH to insert, update and return some value to the user. But some
information from those related tables is not available at that time. Is that
a bug?

No, see the "WITH Clause" section of the SELECT reference page:

 The primary query and the WITH queries are all (notionally) executed
 at the same time. This implies that the effects of a data-modifying
 statement in WITH cannot be seen from other parts of the query, other
 than by reading its RETURNING output. If two such data-modifying
 statements attempt to modify the same row, the results are
 unspecified.

This doesn't explicitly talk about triggers, but I think our attitude
about the case you're discussing is that the results are unspecified.
If a trigger fired in one WITH arm tries to look at the table(s)
modified by other WITH arms, it might or might not see those changes.


Are CTEs still optimization fences?
https://www.2ndquadrant.com/en/blog/postgresql-ctes-are-optimization-fences/

--
Angular momentum makes the world go 'round.




Re: with and trigger

2019-05-29 Thread Tom Lane
PegoraroF10  writes:
> We like to use WITH to insert, update and return some value to the user. But
> some information from those related tables is not available at that time. Is
> that a bug?

No, see the "WITH Clause" section of the SELECT reference page:

The primary query and the WITH queries are all (notionally) executed
at the same time. This implies that the effects of a data-modifying
statement in WITH cannot be seen from other parts of the query, other
than by reading its RETURNING output. If two such data-modifying
statements attempt to modify the same row, the results are
unspecified.

This doesn't explicitly talk about triggers, but I think our attitude
about the case you're discussing is that the results are unspecified.
If a trigger fired in one WITH arm tries to look at the table(s)
modified by other WITH arms, it might or might not see those changes.

regards, tom lane




Re: with and trigger

2019-05-29 Thread PegoraroF10
Well, I think it is not a transaction problem, because if you do the same thing
in a DO block it will work.
DO $$
  declare vMaster_ID integer;
begin
  insert into Master(Customer_ID, Field2) values(1, 'BlaBla') returning
Master_ID into vMaster_ID;
  insert into Detail(Master_ID, Product_ID, ProductValue) values(vMaster_ID,
5, 50);
end $$

As you can see, it works exactly the same way and the Detail trigger works as
expected. Why?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Re: with and trigger

2019-05-29 Thread Fabrízio de Royes Mello
On Wed, 29 May 2019 at 08:52, PegoraroF10 wrote:
>
> We like to use WITH to insert, update and return some value to the user. But
> some information from those related tables is not available at that time. Is
> that a bug?
>
> with
>   Master(Master_ID) as (insert into Master(Customer_ID, Field2) values(1,
> 'BlaBla') returning Master_ID),
>   Detail as (insert into Detail(Master_ID, Product_ID, ProductValue) select
> Master_ID, 5, 50 from Master)
> select Master_ID from Master;
>
> This code works, but not as expected, because we have a trigger which does
> not see that data yet.
> Suppose a trigger on Detail which needs to find some info from a table which
> was inserted in this WITH. That info is not available, like ...
>
> create function DetailOfDetail() returns trigger as -- this trigger
> function is before insert on Detail
> begin
>   new.Discount = (select discount from Customer inner join Master
> using(Customer_ID) where Master_ID = new.Master_ID)
> end;
>
> This trigger will not work because the Master record was not inserted yet.
>
> If we change it to a DO block it would work, but we would like that result,
> so ...
> This trigger is obviously an example; our tables have more complex
> structures, but here we want only to understand the way Postgres works, or
> whether it is not working properly.
>

To see updated data using a trigger in the same transaction you should
create the trigger on an AFTER event.

Regards,

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
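
A sketch of that suggestion against the Master/Detail example in this thread:
move the lookup into an AFTER ROW trigger, which is queued and fired once the
whole statement (including the other WITH arms) has completed. Column names
follow the original post; the separate UPDATE is an assumption about how the
discount would be applied, since an AFTER trigger cannot modify NEW directly:

create function DetailSetDiscount() returns trigger as $$
begin
  update Detail d
     set Discount = (select c.Discount
                       from Customer c
                       join Master m using (Customer_ID)
                      where m.Master_ID = new.Master_ID)
   where d.Master_ID = new.Master_ID
     and d.Product_ID = new.Product_ID;
  return null;  -- the return value is ignored for AFTER row triggers
end;
$$ language plpgsql;

create trigger Detail_a_i after insert on Detail
for each row execute function DetailSetDiscount();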


Re: with and trigger

2019-05-29 Thread Geoff Winkless
On Wed, 29 May 2019 at 12:52, PegoraroF10  wrote:

> This trigger will not work because Master record was not inserted yet.
>

That seems reasonable. Since the transaction is meant to be atomic, any
select within the query should return data from tables as they are at the
start of the transaction; the insert won't happen (as far as the rest of
the query is concerned) until it commits.

Or have I misunderstood what you're saying?

Geoff


Re: Statement-level trigger results in recursion

2019-02-19 Thread Jitendra Loyal
Thanks for all your efforts. I appreciate it.

Let us wait and see if someone can enlighten us, or you locate the
conversation.

Thanks once again

Regards,
Jiten

On Tue 19 Feb, 2019, 3:19 AM Adrian Klaver wrote:

> On 2/18/19 9:07 AM, Jitendra Loyal wrote:
> > I do understand that the statement-level trigger will be executed once
> > before the operation. My point is: if one does not know the rows, what
> > kind of use can it be put to? What is the use case? In AFTER
> > triggers, one gets the rows in transition tables; how does one do that
> > with a BEFORE trigger?
>
> Use FOR EACH ROW.
>
> Why you cannot use a FOR EACH STATEMENT trigger is something I thought I
> remember being discussed on the list before. Unfortunately I cannot find
> that conversation at the moment. Someone else will need to weigh in on
> this.
>
> >
> > Thanks and regards,
> > Jiten
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Statement-level trigger results in recursion

2019-02-19 Thread Jitendra Loyal
I do understand that the statement-level trigger will be executed once
before the operation. My point is: if one does not know the rows, what
kind of use can it be put to? What is the use case? In AFTER triggers,
one gets the rows in transition tables; how does one do that with a
BEFORE trigger?

Thanks and regards,
Jiten


Re: Statement-level trigger results in recursion

2019-02-19 Thread Jitendra Loyal
My bad!

It is a transition table. Consider the following revised definition of
trigger:


CREATE TRIGGER storage_locations_b_u_AS_DML
AFTER UPDATE
ON storage_locations
REFERENCING NEW TABLE AS new_table
OLD TABLE AS old_table
FOR EACH STATEMENT EXECUTE FUNCTION TRG_storage_locations_b_u_AS_DML ();

Thanks and regards,
Jiten

On Mon 18 Feb, 2019, 9:32 PM Adrian Klaver wrote:

> On 2/18/19 4:11 AM, Jitendra Loyal wrote:
> >
> > The AFTER Statement-level Trigger runs into infinite execution when
> > another set of rows are affected for the same table through this
> > trigger. Consider this use case where a table storage_locations that
> > manages a hierarchy of storage_locations in stores, and thus having
> > following columns (for simplicity):
> >
> >
> >
> >
> > storage_location_id SERIAL NOT NULL PRIMARY KEY,
> > store_id INTEGER NOT NULL, -- REFERENCES stores
> > storage_location_nm VARCHAR (25) NOT NULL,
> > parent_storage_location_id INTEGER NULL REFERENCES storage_locations,
> > -- NULL for root storage locations
> > storage_location_path TEXT NOT NULL
> >
> >
> >
> >
> >
> > I have a BEFORE ROW trigger, which updates the storage_location_path with
> > parent's storage_location_path, if any, concatenated with its
> > storage_location_name. This works fine - no issues.
> >
> > I have another AFTER UPDATE STATEMENT-level Trigger and function
> definitions
> > as below (which updates the storage_path of the children):
> >
> >
> >
> >
> > CREATE FUNCTION TRG_storage_locations_b_u_AS_DML ()
> > RETURNS TRIGGER
> > AS $$
> > DECLARE
> > v_separator VARCHAR (1) = '/';
> > v_cnt INT;
> > BEGIN
> > -- [ -- Required to prevent infinite recursion
> > SELECT COUNT (*) INTO v_cnt
> > FROM new_table;
>
> Where is new_table coming from?
>
> >
> > IF (v_cnt > 0) THEN
> > -- ] -- Required to prevent infinite recursion
> > UPDATE storage_locations
> > SET storage_location_path = COALESCE (i.storage_location_path ||
> > v_separator, '') || storage_locations.storage_location_nm
> > FROM inserted i
> > JOIN deleted d
> > ON ( i.storage_location_id = d.storage_location_id
> > AND i.storage_location_path != d.storage_location_path
> > )
> > WHERE storage_locations.parent_storage_location_id =
> i.storage_location_id;
> > END IF;
> > RETURN NULL;
> > END
> > $$ LANGUAGE plpgsql;
> >
> > CREATE TRIGGER storage_locations_b_u_AS_DML
> > AFTER UPDATE
> > ON storage_locations
> > REFERENCING NEW TABLE AS inserted
> > OLD TABLE AS deleted
> > FOR EACH STATEMENT EXECUTE FUNCTION TRG_storage_locations_b_u_AS_DML ();
> >
> > Notice that the Trigger is getting called endlessly (if the number of
> > rows in the NEW TABLE is NOT checked). I reckon if there are no rows,
> > what is the need to call the trigger? Or maybe I am missing
> > something, which I need to learn.
>
> Yes:
>
> https://www.postgresql.org/docs/10/sql-createtrigger.html
>
> "... In contrast, a trigger that is marked FOR EACH STATEMENT only
> executes once for any given operation, regardless of how many rows it
> modifies (in particular, an operation that modifies zero rows will still
> result in the execution of any applicable FOR EACH STATEMENT triggers)."
>
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > Thanks,
> >
> >
> >
> >
> >
> > Jiten
> >
> >
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Statement-level trigger results in recursion

2019-02-19 Thread Jitendra Loyal
Unfortunately no! Where can I see those? Will I find my answer there? I
have referred to the documentation and tried various things.

Thanks and regards,
Ken i

On Mon 18 Feb, 2019, 9:59 PM Adrian Klaver wrote:

> On 2/18/19 8:23 AM, Jitendra Loyal wrote:
> > My bad!
> >
> > It is a transition table. Consider the following revised definition of
> > trigger:
> >
> >
> > CREATE TRIGGER storage_locations_b_u_AS_DML
> > AFTER UPDATE
> > ON storage_locations
> > REFERENCING NEW TABLE AS new_table
> > OLD TABLE AS old_table
> > FOR EACH STATEMENT EXECUTE FUNCTION TRG_storage_locations_b_u_AS_DML ();
>
> Alright I understand now.
>
> Did you see the rest of my previous post about AFTER STATEMENT running
> regardless of number of rows affected?
>
> >
> > Thanks and regards,
> > Jiten
> >
> > On Mon 18 Feb, 2019, 9:32 PM Adrian Klaver wrote:
> >
> > On 2/18/19 4:11 AM, Jitendra Loyal wrote:
> >  >
> >  > The AFTER Statement-level Trigger runs into infinite execution
> when
> >  > another set of rows are affected for the same table through this
> >  > trigger. Consider this use case where a table storage_locations
> that
> >  > manages a hierarchy of storage_locations in stores, and thus
> having
> >  > following columns (for simplicity):
> >  >
> >  >
> >  >
> >  >
> >  > storage_location_id SERIAL NOT NULL PRIMARY KEY,
> >  > store_id INTEGER NOT NULL, -- REFERENCES stores
> >  > storage_location_nm VARCHAR (25) NOT NULL,
> >  > parent_storage_location_id INTEGER NULL REFERENCES
> > storage_locations,
> >  > -- NULL for root storage locations
> >  > storage_location_path TEXT NOT NULL
> >  >
> >  >
> >  >
> >  >
> >  >
> >  > I have a BEFORE ROW trigger, which updates the
> > storage_location_path with
> >  > parent's storage_location_path, if any, concatenated with its
> >  > storage_location_name. This works fine - no issues.
> >  >
> >  > I have another AFTER UPDATE STATEMENT-level Trigger and function
> > definitions
> >  > as below (which updates the storage_path of the children):
> >  >
> >  >
> >  >
> >  >
> >  > CREATE FUNCTION TRG_storage_locations_b_u_AS_DML ()
> >  > RETURNS TRIGGER
> >  > AS $$
> >  > DECLARE
> >  > v_separator VARCHAR (1) = '/';
> >  > v_cnt INT;
> >  > BEGIN
> >  > -- [ -- Required to prevent infinite recursion
> >  > SELECT COUNT (*) INTO v_cnt
> >  > FROM new_table;
> >
> > Where is new_table coming from?
> >
> >  >
> >  > IF (v_cnt > 0) THEN
> >  > -- ] -- Required to prevent infinite recursion
> >  > UPDATE storage_locations
> >  > SET storage_location_path = COALESCE (i.storage_location_path ||
> >  > v_separator, '') || storage_locations.storage_location_nm
> >  > FROM inserted i
> >  > JOIN deleted d
> >  > ON ( i.storage_location_id = d.storage_location_id
> >  > AND i.storage_location_path != d.storage_location_path
> >  > )
> >  > WHERE storage_locations.parent_storage_location_id =
> > i.storage_location_id;
> >  > END IF;
> >  > RETURN NULL;
> >  > END
> >  > $$ LANGUAGE plpgsql;
> >  >
> >  > CREATE TRIGGER storage_locations_b_u_AS_DML
> >  > AFTER UPDATE
> >  > ON storage_locations
> >  > REFERENCING NEW TABLE AS inserted
> >  > OLD TABLE AS deleted
> >  > FOR EACH STATEMENT EXECUTE FUNCTION
> > TRG_storage_locations_b_u_AS_DML ();
> >  >
 >      > Notice that the Trigger is getting called endlessly (if the
 >      > number of rows in the NEW TABLE is NOT checked). I reckon if
 >      > there are no rows, what is the need to call the trigger? Or
 >      > maybe I am missing something, which I need to learn.
> >
> > Yes:
> >
> > https://www.postgresql.org/docs/10/sql-createtrigger.html
> >
> > "... In contrast, a trigger that is marked FOR EACH STATEMENT only
> > executes once for any given operation, regardless of how many rows it
> > modifies (in particular, an operation that modifies zero rows will
> > still
> > result in the execution of any applicable FOR EACH STATEMENT
> triggers)."
> >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  > Thanks,
> >  >
> >  >
> >  >
> >  >
> >  >
> >  > Jiten
> >  >
> >  >
> >  >
> >  >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Statement-level trigger results in recursion

2019-02-18 Thread Adrian Klaver

On 2/18/19 9:07 AM, Jitendra Loyal wrote:
I do understand that the statement-level trigger will be executed once
before the operation. My point is: if one does not know the rows, what
kind of use can it be put to? What is the use case? In AFTER triggers,
one gets the rows in transition tables; how does one do that with a
BEFORE trigger?


Use FOR EACH ROW.

Why you cannot use a FOR EACH STATEMENT trigger is something I thought I 
remember being discussed on the list before. Unfortunately I cannot find 
that conversation at the moment. Someone else will need to weigh in on this.




Thanks and regards,
Jiten



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Statement-level trigger results in recursion

2019-02-18 Thread Adrian Klaver

On 2/18/19 8:38 AM, Jitendra Loyal wrote:
Unfortunately no! Where can I see those? Will I find my answer there? I
have referred to the documentation and tried various things.


Please do not top post. The style on this list is to use inline posting.

https://www.postgresql.org/docs/10/sql-createtrigger.html

"...In contrast, a trigger that is marked FOR EACH STATEMENT only 
executes once for any given operation, regardless of how many rows it 
modifies (in particular, an operation that modifies zero rows will still 
result in the execution of any applicable FOR EACH STATEMENT triggers).

"




Thanks and regards,
Ken i

On Mon 18 Feb, 2019, 9:59 PM Adrian Klaver wrote:


On 2/18/19 8:23 AM, Jitendra Loyal wrote:
 > My bad!
 >
 > It is a transition table. Consider the following revised
definition of
 > trigger:
 >
 >
 > CREATE TRIGGER storage_locations_b_u_AS_DML
 > AFTER UPDATE
 > ON storage_locations
 > REFERENCING NEW TABLE AS new_table
 > OLD TABLE AS old_table
 > FOR EACH STATEMENT EXECUTE FUNCTION
TRG_storage_locations_b_u_AS_DML ();

Alright I understand now.

Did you see the rest of my previous post about AFTER STATEMENT running
regardless of number of rows affected?

 >
 > Thanks and regards,
 > Jiten
 >
 > On Mon 18 Feb, 2019, 9:32 PM Adrian Klaver wrote:
 >
 >     On 2/18/19 4:11 AM, Jitendra Loyal wrote:
 >      >
 >      > The AFTER Statement-level Trigger runs into infinite
execution when
 >      > another set of rows are affected for the same table
through this
 >      > trigger. Consider this use case where a table
storage_locations that
 >      > manages a hierarchy of storage_locations in stores, and
thus having
 >      > following columns (for simplicity):
 >      >
 >      >
 >      >
 >      >
 >      > storage_location_id SERIAL NOT NULL PRIMARY KEY,
 >      > store_id INTEGER NOT NULL, -- REFERENCES stores
 >      > storage_location_nm VARCHAR (25) NOT NULL,
 >      > parent_storage_location_id INTEGER NULL REFERENCES
 >     storage_locations,
 >      > -- NULL for root storage locations
 >      > storage_location_path TEXT NOT NULL
 >      >
 >      >
 >      >
 >      >
 >      >
 >      > I have a BEFORE ROW trigger, which updates the
 >     storage_location_path with
 >      > parent's storage_location_path, if any, concatenated with its
 >      > storage_location_name. This works fine - no issues.
 >      >
 >      > I have another AFTER UPDATE STATEMENT-level Trigger and
function
 >     definitions
 >      > as below (which updates the storage_path of the children):
 >      >
 >      >
 >      >
 >      >
 >      > CREATE FUNCTION TRG_storage_locations_b_u_AS_DML ()
 >      > RETURNS TRIGGER
 >      > AS $$
 >      > DECLARE
 >      > v_separator VARCHAR (1) = '/';
 >      > v_cnt INT;
 >      > BEGIN
 >      > -- [ -- Required to prevent infinite recursion
 >      > SELECT COUNT (*) INTO v_cnt
 >      > FROM new_table;
 >
 >     Where is new_table coming from?
 >
 >      >
 >      > IF (v_cnt > 0) THEN
 >      > -- ] -- Required to prevent infinite recursion
 >      > UPDATE storage_locations
 >      > SET storage_location_path = COALESCE
(i.storage_location_path ||
 >      > v_separator, '') || storage_locations.storage_location_nm
 >      > FROM inserted i
 >      > JOIN deleted d
 >      > ON ( i.storage_location_id = d.storage_location_id
 >      > AND i.storage_location_path != d.storage_location_path
 >      > )
 >      > WHERE storage_locations.parent_storage_location_id =
 >     i.storage_location_id;
 >      > END IF;
 >      > RETURN NULL;
 >      > END
 >      > $$ LANGUAGE plpgsql;
 >      >
 >      > CREATE TRIGGER storage_locations_b_u_AS_DML
 >      > AFTER UPDATE
 >      > ON storage_locations
 >      > REFERENCING NEW TABLE AS inserted
 >      > OLD TABLE AS deleted
 >      > FOR EACH STATEMENT EXECUTE FUNCTION
 >     TRG_storage_locations_b_u_AS_DML ();
 >      >
 >      > Notice that the Trigger is getting called endlessly (if the
 >      > number of rows in the NEW TABLE is NOT checked). I reckon if
 >      > there are no rows, what is the need to call the trigger? Or
 >      > maybe I am missing something, which I need to learn.
 >
 >     Yes:
 >
 > https://www.postgresql.org/docs/10/sql-createtrigger.html
 >
 >     "... In contrast, a trigger that is marked FOR EACH STATEMENT
only
 >     executes once 

Re: Statement-level trigger results in recursion

2019-02-18 Thread Adrian Klaver

On 2/18/19 8:23 AM, Jitendra Loyal wrote:

My bad!

It is a transition table. Consider the following revised definition of 
trigger:



CREATE TRIGGER storage_locations_b_u_AS_DML
AFTER UPDATE
ON storage_locations
REFERENCING NEW TABLE AS new_table
OLD TABLE AS old_table
FOR EACH STATEMENT EXECUTE FUNCTION TRG_storage_locations_b_u_AS_DML ();


Alright I understand now.

Did you see the rest of my previous post about AFTER STATEMENT running 
regardless of number of rows affected?




Thanks and regards,
Jiten

On Mon 18 Feb, 2019, 9:32 PM Adrian Klaver wrote:


On 2/18/19 4:11 AM, Jitendra Loyal wrote:
 >
 > The AFTER Statement-level Trigger runs into infinite execution when
 > another set of rows are affected for the same table through this
 > trigger. Consider this use case where a table storage_locations that
 > manages a hierarchy of storage_locations in stores, and thus having
 > following columns (for simplicity):
 >
 >
 >
 >
 > storage_location_id SERIAL NOT NULL PRIMARY KEY,
 > store_id INTEGER NOT NULL, -- REFERENCES stores
 > storage_location_nm VARCHAR (25) NOT NULL,
 > parent_storage_location_id INTEGER NULL REFERENCES
storage_locations,
 > -- NULL for root storage locations
 > storage_location_path TEXT NOT NULL
 >
 >
 >
 >
 >
 > I have a BEFORE ROW trigger, which updates the
storage_location_path with
 > parent's storage_location_path, if any, concatenated with its
 > storage_location_name. This works fine - no issues.
 >
 > I have another AFTER UPDATE STATEMENT-level Trigger and function
definitions
 > as below (which updates the storage_path of the children):
 >
 >
 >
 >
 > CREATE FUNCTION TRG_storage_locations_b_u_AS_DML ()
 > RETURNS TRIGGER
 > AS $$
 > DECLARE
 > v_separator VARCHAR (1) = '/';
 > v_cnt INT;
 > BEGIN
 > -- [ -- Required to prevent infinite recursion
 > SELECT COUNT (*) INTO v_cnt
 > FROM new_table;

Where is new_table coming from?

 >
 > IF (v_cnt > 0) THEN
 > -- ] -- Required to prevent infinite recursion
 > UPDATE storage_locations
 > SET storage_location_path = COALESCE (i.storage_location_path ||
 > v_separator, '') || storage_locations.storage_location_nm
 > FROM inserted i
 > JOIN deleted d
 > ON ( i.storage_location_id = d.storage_location_id
 > AND i.storage_location_path != d.storage_location_path
 > )
 > WHERE storage_locations.parent_storage_location_id =
i.storage_location_id;
 > END IF;
 > RETURN NULL;
 > END
 > $$ LANGUAGE plpgsql;
 >
 > CREATE TRIGGER storage_locations_b_u_AS_DML
 > AFTER UPDATE
 > ON storage_locations
 > REFERENCING NEW TABLE AS inserted
 > OLD TABLE AS deleted
 > FOR EACH STATEMENT EXECUTE FUNCTION
TRG_storage_locations_b_u_AS_DML ();
 >
 > Notice that the Trigger is getting called endlessly (if the number of
 > rows in the NEW TABLE is NOT checked). I reckon if there are no rows,
 > what is the need to call the trigger? Or maybe I am missing
 > something, which I need to learn.

Yes:

https://www.postgresql.org/docs/10/sql-createtrigger.html

"... In contrast, a trigger that is marked FOR EACH STATEMENT only
executes once for any given operation, regardless of how many rows it
modifies (in particular, an operation that modifies zero rows will
still
result in the execution of any applicable FOR EACH STATEMENT triggers)."

 >
 >
 >
 >
 >
 >
 >
 >
 >
 >
 >
 >
 >
 >
 >
 > Thanks,
 >
 >
 >
 >
 >
 > Jiten
 >
 >
 >
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Statement-level trigger results in recursion

2019-02-18 Thread Adrian Klaver

On 2/18/19 4:11 AM, Jitendra Loyal wrote:


The AFTER Statement-level Trigger runs into infinite execution when 
another set of rows are affected for the same table through this 
trigger. Consider this use case where a table storage_locations that 
manages a hierarchy of storage_locations in stores, and thus having 
following columns (for simplicity):





storage_location_id SERIAL NOT NULL PRIMARY KEY,
store_id INTEGER NOT NULL, -- REFERENCES stores
storage_location_nm VARCHAR (25) NOT NULL,
parent_storage_location_id INTEGER NULL REFERENCES storage_locations, 
-- NULL for root storage locations

storage_location_path TEXT NOT NULL





I have a BEFORE ROW trigger, which updates the storage_location_path with
parent's storage_location_path, if any, concatenated with its
storage_location_name. This works fine - no issues.

I have another AFTER UPDATE STATEMENT-level Trigger and function definitions
as below (which updates the storage_path of the children):




CREATE FUNCTION TRG_storage_locations_b_u_AS_DML ()
RETURNS TRIGGER
AS $$
DECLARE
v_separator VARCHAR (1) = '/';
v_cnt INT;
BEGIN
-- [ -- Required to prevent infinite recursion
SELECT COUNT (*) INTO v_cnt
FROM new_table;


Where is new_table coming from?



IF (v_cnt > 0) THEN
-- ] -- Required to prevent infinite recursion
UPDATE storage_locations
SET storage_location_path = COALESCE (i.storage_location_path || 
v_separator, '') || storage_locations.storage_location_nm

FROM inserted i
JOIN deleted d
ON ( i.storage_location_id = d.storage_location_id
AND i.storage_location_path != d.storage_location_path
)
WHERE storage_locations.parent_storage_location_id = i.storage_location_id;
END IF;
RETURN NULL;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER storage_locations_b_u_AS_DML
AFTER UPDATE
ON storage_locations
REFERENCING NEW TABLE AS inserted
OLD TABLE AS deleted
FOR EACH STATEMENT EXECUTE FUNCTION TRG_storage_locations_b_u_AS_DML ();

Notice that the Trigger is getting called endlessly (if the number of 
rows in the NEW TABLE is NOT checked). I reckon if there are no rows, 
what is the need to call the trigger? Or maybe I am missing 
something, which I need to learn.


Yes:

https://www.postgresql.org/docs/10/sql-createtrigger.html

"... In contrast, a trigger that is marked FOR EACH STATEMENT only 
executes once for any given operation, regardless of how many rows it 
modifies (in particular, an operation that modifies zero rows will still 
result in the execution of any applicable FOR EACH STATEMENT triggers)."


















Thanks,





Jiten







--
Adrian Klaver
adrian.kla...@aklaver.com
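
For the archives: besides bailing out when the transition table is empty, as
above, the re-fire can be avoided altogether by guarding the trigger with
pg_trigger_depth(), which returns 0 only for statements issued directly rather
than from inside another trigger. A sketch against the definitions above
(statement-level WHEN conditions are allowed as long as they do not reference
row values):

CREATE TRIGGER storage_locations_b_u_AS_DML
AFTER UPDATE ON storage_locations
REFERENCING NEW TABLE AS inserted OLD TABLE AS deleted
FOR EACH STATEMENT
WHEN (pg_trigger_depth() = 0)  -- skip the firing caused by the trigger's own UPDATE
EXECUTE FUNCTION TRG_storage_locations_b_u_AS_DML ();

The same test can instead be done inside the function body
(IF pg_trigger_depth() > 1 THEN RETURN NULL; END IF;), which keeps the
trigger definition unchanged.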



Re: Create DDL trigger to catch which column was altered

2018-07-10 Thread Łukasz Jarych
Is it not possible?

Jacek

On Mon, 9 Jul 2018 at 13:38 Łukasz Jarych wrote:

> Hi Guys,
>
> i am using sqls like below to track ddl changes:
>
> CREATE TABLE track_ddl
>> (
>>   event text,
>>   command text,
>>   ddl_time timestamptz,
>>   usr text
>> );
>> CREATE OR REPLACE FUNCTION track_ddl_function()
>> RETURNS event_trigger
>> AS
>> $$
>> BEGIN
>>   INSERT INTO track_ddl values(tg_tag, tg_event, now(), session_user);
>>   RAISE NOTICE 'DDL logged';
>> END
>> $$ LANGUAGE plpgsql SECURITY DEFINER;
>>
>
>
>> CREATE EVENT TRIGGER track_ddl_event ON ddl_command_start
>> WHEN TAG IN ('CREATE TABLE', 'DROP TABLE', 'ALTER TABLE')
>> EXECUTE PROCEDURE track_ddl_function();
>> CREATE TABLE event_check(i int);
>> SELECT * FROM track_ddl;
>
>
> Create and drop table are ok. But when I am altering I would like to know the
> new values and old values, like when I am catching DML changes:
>
> CREATE OR REPLACE FUNCTION change_trigger() RETURNS trigger AS $$
>> BEGIN
>>   IF TG_OP = 'INSERT' THEN
>>     INSERT INTO logging.t_history (tabname, schemaname, operation, who, new_val)
>>     VALUES (TG_RELNAME, TG_TABLE_SCHEMA, TG_OP, current_user, row_to_json(NEW));
>>     RETURN NEW;
>>   ELSIF TG_OP = 'UPDATE' THEN
>>     INSERT INTO logging.t_history (tabname, schemaname, operation, who, new_val, old_val)
>>     VALUES (TG_RELNAME, TG_TABLE_SCHEMA, TG_OP, current_user, row_to_json(NEW), row_to_json(OLD));
>>     RETURN NEW;
>>   ELSIF TG_OP = 'DELETE' THEN
>>     INSERT INTO logging.t_history (tabname, schemaname, operation, who, old_val)
>>     VALUES (TG_RELNAME, TG_TABLE_SCHEMA, TG_OP, current_user, row_to_json(OLD));
>>     RETURN OLD;
>>   END IF;
>> END;
>> $$ LANGUAGE 'plpgsql' SECURITY DEFINER;
>
> Is it possible?
> Or can I write a function which will tell me all the new values in the new columns?
>
> I was trying to change sqls like here:
>
> CREATE TABLE track_ddl
>> (
>>   event text,
>>   command text,
>>   ddl_time timestamptz,
>>   usr json
>> );
>> CREATE OR REPLACE FUNCTION track_ddl_function()
>> RETURNS event_trigger
>> AS
>> $$
>> BEGIN
>>   INSERT INTO track_ddl values(tg_tag, tg_event, now(), row_to_json(NEW));
>>   RAISE NOTICE 'DDL logged';
>> END
>> $$ LANGUAGE plpgsql SECURITY DEFINER;
>
>
> but this is not working.
>
> Please help,
> Jacek
>
>
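
On the question itself: a ddl_command_start trigger only sees the tag, but at
ddl_command_end the function pg_event_trigger_ddl_commands() exposes
per-command detail such as the command tag, object type and object identity.
It does not hand you old/new column values the way row_to_json(NEW) does for
DML, but it does tell you exactly which object an ALTER touched. A minimal
sketch along the lines of the track_ddl example (table and names made up; for
DROPs, the sql_drop event and pg_event_trigger_dropped_objects() are the
analogous tools):

CREATE TABLE track_ddl_detail
(
  command_tag     text,
  object_type     text,
  object_identity text,
  ddl_time        timestamptz,
  usr             text
);

CREATE OR REPLACE FUNCTION track_ddl_detail_function()
RETURNS event_trigger
AS $$
DECLARE
  r record;
BEGIN
  FOR r IN SELECT * FROM pg_event_trigger_ddl_commands()
  LOOP
    INSERT INTO track_ddl_detail
    VALUES (r.command_tag, r.object_type, r.object_identity, now(), session_user);
  END LOOP;
END
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE EVENT TRIGGER track_ddl_detail_event ON ddl_command_end
WHEN TAG IN ('CREATE TABLE', 'ALTER TABLE')
EXECUTE PROCEDURE track_ddl_detail_function();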


Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread Adrian Klaver

On 04/20/2018 01:30 PM, PegoraroF10 wrote:

Well, talking about responsibilities, I think one of the responsibilities of a
mature database is that it can only accept data it was configured for. If
you try to store a timestamp in an integer field or a huge numeric value in a


Actually there have been examples on this list where folks have stored a 
timestamp as seconds from an epoch in an integer field. Of course then 
someone has to know what that field really represents. This is not nit 
picking on my part so much as an example of end user inventiveness. To 
that end Postgres has many ways of coming to a solution for a problem. 
Unfortunately, there are paths to a solution can trip you up. This means 
there is often no simple answer to a problem. Basically, more choices 
means more pre-thinking, testing, re-thinking, repeat as needed.



smallint field, Postgres will block you because that operation is not
acceptable.
So, it's not acceptable to break referential integrity, is it ?


That is a maybe:

https://www.postgresql.org/docs/10/static/sql-altertable.html

"
DISABLE TRIGGER [ trigger_name | ALL | USER ]

ALL

Disable or enable all triggers belonging to the table. (This 
requires superuser privilege if any of the triggers are internally 
generated constraint triggers such as those that are used to implement 
foreign key constraints or deferrable uniqueness and exclusion constraints.)



ADD table_constraint [ NOT VALID ]

This form adds a new constraint to a table using the same syntax as 
CREATE TABLE, plus the option NOT VALID, which is currently only allowed 
for foreign key and CHECK constraints. If the constraint is marked NOT 
VALID, the potentially-lengthy initial check to verify that all rows in 
the table satisfy the constraint is skipped. The constraint will still 
be enforced against subsequent inserts or updates (that is, they'll fail 
unless there is a matching row in the referenced table, in the case of 
foreign keys; and they'll fail unless the new row matches the specified 
check constraints). But the database will not assume that the constraint 
holds for all rows in the table, until it is validated by using the 
VALIDATE CONSTRAINT option.

"

https://www.postgresql.org/docs/10/static/sql-createtrigger.html

"SQL specifies that BEFORE DELETE triggers on cascaded deletes fire 
after the cascaded DELETE completes. The PostgreSQL behavior is for 
BEFORE DELETE to always fire before the delete action, even a cascading 
one. This is considered more consistent. There is also nonstandard 
behavior if BEFORE triggers modify rows or prevent updates during an 
update that is caused by a referential action. This can lead to 
constraint violations or stored data that does not honor the referential 
constraint."


https://www.postgresql.org/docs/10/static/plpgsql-trigger.html

"Row-level triggers fired BEFORE can return null to signal the trigger 
manager to skip the rest of the operation for this row (i.e., subsequent 
triggers are not fired, and the INSERT/UPDATE/DELETE does not occur for 
this row). "


A certain amount of this came about because folks are dealing with messy 
data and want to get it into the database first, do the clean up there 
and then apply the RI constraints. Other folks have different ways of 
doing it. It comes down to personal choice. That means, though, you have 
to know that dangerous paths exist and how to avoid them. A lot of this 
is ingrained in the code and in use in the wild, so I would not expect 
there would be major changes in how things work. Instead, as has already 
been indicated, there may be better documentation on the way detailing all 
the above.






--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread PegoraroF10
Well, talking about responsibilities, I think one of the responsibilities of a
mature database is that it can only accept data it was configured for. If
you try to store a timestamp in an integer field or a huge numeric value in a
smallint field, Postgres will block you because that operation is not
acceptable.
So, it's not acceptable to break referential integrity, is it?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread Adrian Klaver

On 04/20/2018 07:21 AM, David G. Johnston wrote:

On 04/19/2018 06:49 PM, PegoraroF10 wrote:
On Fri, Apr 20, 2018 at 6:55 AM, Adrian Klaver wrote:


I know I did that trigger incorrectly but referential integrity is
obligatory.


I would agree if the FK relationship was entirely driven by the
system trigger e.g:

alter table Detail add constraint FKMasterDetail foreign key (Master_ID)
references Master(ID) on update cascade on delete cascade;

As soon as you added your UPDATE/DELETE trigger you took on
responsibility for how the data was passed around.


Such responsibility is an artifact of our specific implementation and 
not an inherent property of writing triggers in the presence of FK 
constraints.


https://en.wikipedia.org/wiki/Foreign_key#Triggers



We've left a foot-gun lying around and should not be surprised when 
less experienced users pick it up and shoot themselves in the foot.


IOW, I do agree with the OP - it's just an unfortunate reality that this 
isn't how things work today.  Whether one can accept and work within 
this reality is a personal decision.


This does reinforce that testing the restoration of one's backups is 
important.


David J.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread Adrian Klaver

On 04/19/2018 06:49 PM, PegoraroF10 wrote:

Correct, that delete did a partial commit. And this is absolutely
unacceptable.


Yet a known possible outcome. See the section on Triggers towards bottom 
of page:


https://en.wikipedia.org/wiki/Foreign_key


I know I did that trigger incorrectly but referential integrity is
obligatory.


I would agree if the FK relationship was entirely driven by the system 
trigger e.g:


alter table Detail add constraint FKMasterDetail foreign key (Master_ID)
references Master(ID) on update cascade on delete cascade;

As soon as you added your UPDATE/DELETE trigger you took on 
responsibility for how the data was passed around. I understand that 
this was not communicated as well as it should be in the docs.



Imagine if I have a database crash and need to restore as soon as possible.
How much time I'll spend removing those records from a backup to get the
entire database restored properly.


Myself, having written more than my fair share of poorly thought out 
trigger functions, I test new ones extensively before I release them 
into the wild.




Well, I´ll review all my triggers. And I have hundreds of them.







--
Adrian Klaver
adrian.kla...@aklaver.com
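
For what it's worth, a cheap way to do that kind of testing is to exercise the
trigger inside a transaction that is rolled back, so the checks run against
real data without leaving a trace. A sketch using the Master/Detail example
from this thread:

begin;
delete from Master where ID = 2;
select * from Detail where Master_ID = 2;  -- should return zero rows if the cascade behaved
rollback;  -- leaves the data exactly as it was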



Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread PegoraroF10
Correct, that delete did a partial commit. And this is absolutely
unacceptable.
I know I did that trigger incorrectly, but referential integrity is
obligatory.
Imagine if I have a database crash and need to restore as soon as possible.
How much time I'll spend removing those records from a backup to get the
entire database restored properly.

Well, I´ll review all my triggers. And I have hundreds of them.



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread Ken Tanzer
On Thu, Apr 19, 2018 at 12:21 PM, Tom Lane  wrote:

> =?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?=  writes:
> > 2018-04-19 15:57 GMT-03:00 Tom Lane :
> >> (I'm not sure that this issue is adequately documented, though.
> >> I'd have expected to find something about it in triggers.sgml and/or
> >> create_trigger.sgml, but in a quick look neither of them mentions
> foreign
> >> keys.)
>
> > We don't have it properly documented... at the time I answered this
> > question on pt-br stackoverflow I noticed the lack of documentation and
> > unfortunately I completely forgot to propose a small patch for it.
>
> It strikes me that there are basically two things a trigger could do to
> break FK consistency:
>
> 1. Turn an FK-commanded update into a no-op by returning NULL.
>
> 2. Change the content of the FK-related columns during an FK-commanded
> update.
>
> Both of these apply only to BEFORE ROW triggers, of course.
>
> It might not be unreasonable or unduly expensive to throw an error for
> case #1.  I don't think I want to get into the expense of checking
> for case #2, but covering case #1 would be enough to catch all of the
> reports of this type of problem that I can remember.
>
> IIRC, you can also break FK consistency with poorly-thought-out rules,
> but given that rules are close-to-deprecated, I'm not very concerned
> about sanding down rough edges in that case.
>
> (But if you feel like writing a documentation patch, please do, because
> we'd not be likely to back-patch a behavioral change like this even
> if we get around to making it.)
>
> regards, tom lane
>
>
I'm gonna chime in here from a simple user perspective.  I'm kinda shocked
reading this thread that any of this is possible.  I had always understood
and relied on foreign keys being a _guarantee_ of referential integrity.
I'd personally be in favor of at least an option to disallow this, even
with a performance cost.  Maybe you could even call it "Strict Mode." ;)

But regardless, I think some better documentation is in order, and not just
in the triggers section.  I'd suggest this be prominently mentioned as a
big asterisk in any places that talk about constraints.  This page seems
like an obvious candidate
(https://www.postgresql.org/docs/9.5/static/ddl-constraints.html), as it has
nothing qualifying lots of statements such as "If a user attempts to store
data in a column that would violate a constraint, an error is raised."

I do understand none of this happens unless you break it yourself, but it
might change both how I write and test triggers, and how I might look at
using other people's triggers or materials.  Knowing my referential
integrity can't be broken is a nice guard rail to have, but if you can't
necessarily count on it, some prominent signs saying "warning, no guard
rail ahead" seem like a good idea.

Thanks for listening!

Ken




-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.


Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread Tom Lane
=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?=  writes:
> 2018-04-19 15:57 GMT-03:00 Tom Lane :
>> (I'm not sure that this issue is adequately documented, though.
>> I'd have expected to find something about it in triggers.sgml and/or
>> create_trigger.sgml, but in a quick look neither of them mentions foreign
>> keys.)

> We don't have it properly documented... at the time I answered this
> question on pt-br stackoverflow I noticed the lack o documentation and
> unfortunately I completely forgot to propose a small patch for it.

It strikes me that there are basically two things a trigger could do to
break FK consistency:

1. Turn an FK-commanded update into a no-op by returning NULL.

2. Change the content of the FK-related columns during an FK-commanded
update.

Both of these apply only to BEFORE ROW triggers, of course.

It might not be unreasonable or unduly expensive to throw an error for
case #1.  I don't think I want to get into the expense of checking
for case #2, but covering case #1 would be enough to catch all of the
reports of this type of problem that I can remember.

IIRC, you can also break FK consistency with poorly-thought-out rules,
but given that rules are close-to-deprecated, I'm not very concerned
about sanding down rough edges in that case.

(But if you feel like writing a documentation patch, please do, because
we'd not be likely to back-patch a behavioral change like this even
if we get around to making it.)

regards, tom lane



Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread David G. Johnston
On Thu, Apr 19, 2018 at 11:57 AM, Tom Lane  wrote:

> (I'm not sure that this issue is adequately documented, though.
> I'd have expected to find something about it in triggers.sgml and/or
> create_trigger.sgml, but in a quick look neither of them mentions foreign
> keys.)
>

I'm leaning toward inadequate myself... though create_trigger.sgml does
have:

"""
SQL specifies that BEFORE DELETE triggers on cascaded deletes fire after
the cascaded DELETE completes. The PostgreSQL behavior is for BEFORE DELETE
to always fire before the delete action, even a cascading one. This is
considered more consistent. There is also nonstandard behavior if BEFORE
triggers modify rows or prevent updates during an update that is caused by
a referential action. This can lead to constraint violations or stored data
that does not honor the referential constraint.
"""

And triggers.sgml has:

"""
A row-level BEFORE trigger that does not intend to cause [a delete] must be
careful to return as its result the same row that was passed in (that is,
the NEW row for INSERT and UPDATE triggers, the OLD row for DELETE
triggers).
"""

There is a lot of surrounding text to sift through, though - and the former
is a "compatibility" comment.  A warning blurb after the quoted triggers.sgml
section, about how preventing the action from occurring can break FK
constraints, would be a reasonable response to this report.

I'd rather have a developer spend time coding up having an FK constraint
define an AFTER STATEMENT trigger using a transition table and ensure that
all FK constraints remain enforced for all changed records.  Correctly or
incorrectly written triggers do not have any liberty to violate FK
constraints and the fact that they can is reasonably considered by the user
base to be a bug.

David J.


Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread Fabrízio de Royes Mello
2018-04-19 15:57 GMT-03:00 Tom Lane :
>
> Adrian Klaver  writes:
> > On 04/19/2018 10:55 AM, PegoraroF10 wrote:
> >> Is this a bug, or is it my responsibility to check that trigger result?
>
> > Without seeing exactly what the trigger function on Detail is doing that
> > is not answerable.
>
> I think the OP is complaining because his misimplemented trigger can break
> the consistency of the foreign key constraint.  That is not a bug, it's
> an intentional design decision: triggers are lower-level than foreign key
> enforcement queries, and fire during such queries.  It's easy to construct
> examples where people would be very unhappy if this were not so, because
> then FK-driven updates would not be seen by the table's triggers.  It does
> mean that you have to be careful when writing a trigger.
>

Yeap... it was already mentioned on the pt-br Stack Overflow some time ago [1],
with a reproducible test case.

> (I'm not sure that this issue is adequately documented, though.
> I'd have expected to find something about it in triggers.sgml and/or
> create_trigger.sgml, but in a quick look neither of them mentions foreign
> keys.)
>

We don't have it properly documented... at the time I answered this
question on pt-br Stack Overflow I noticed the lack of documentation and
unfortunately I completely forgot to propose a small patch for it.

Regards,


[1]
https://pt.stackoverflow.com/questions/256115/postgresql-foreign-keys-falhando/256398#256398

--
   Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento


Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread Adrian Klaver

On 04/19/2018 11:52 AM, PegoraroF10 wrote:

My point of view is that there was a partial rollback, just on the detail
table. If I've done a delete from Master and I have a foreign key to it with
the cascade option, either all records should be deleted or none should; this
is my point.


Except you now have a trigger or possibly triggers that are altering the 
delete process and possibly counteracting the system trigger that is a 
FK. There have been enough instances of this show up on this list for me 
to know this is a distinct possibility.




Did you see that the Master table has no records and the Detail table has one
record?
I think you agree with me that we have a detail record with no master, so
it's unusable, right?


We have not seen the actual records, so I cannot say or agree/disagree.



I´m not able to do a backup/restore of this database because that record
doesn´t match.



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread Tom Lane
Adrian Klaver  writes:
> On 04/19/2018 10:55 AM, PegoraroF10 wrote:
>> Is this a bug, or is it my responsibility to check that trigger result?

> Without seeing exactly what the trigger function on Detail is doing that 
> is not answerable.

I think the OP is complaining because his misimplemented trigger can break
the consistency of the foreign key constraint.  That is not a bug, it's
an intentional design decision: triggers are lower-level than foreign key
enforcement queries, and fire during such queries.  It's easy to construct
examples where people would be very unhappy if this were not so, because
then FK-driven updates would not be seen by the table's triggers.  It does
mean that you have to be careful when writing a trigger.

(I'm not sure that this issue is adequately documented, though.
I'd have expected to find something about it in triggers.sgml and/or
create_trigger.sgml, but in a quick look neither of them mentions foreign
keys.)

regards, tom lane



Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread PegoraroF10
My point of view is that there was a partial rollback, just on the detail
table. If I've done a delete from Master and I have a foreign key to it with
the cascade option, either all records should be deleted or none should; this
is my point.

Did you see that the Master table has no records and the Detail table has one
record?
I think you agree with me that we have a detail record with no master, so
it's unusable, right?

I'm not able to do a backup/restore of this database because that record
doesn't match.



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread Adrian Klaver

On 04/19/2018 11:30 AM, PegoraroF10 wrote:

I know my trigger is incorrect. I know that I can use TG_OP to know what
operation is being done.
My question is ...

Is this a bug, or is it my responsibility to check that trigger result?


I think it's a bug because if something went wrong on the detail deletion and
it was rolled back, how could the parent record be deleted?


Another thought, are there are any other triggers on the Master and/or 
Detail tables?






--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread Adrian Klaver

On 04/19/2018 11:30 AM, PegoraroF10 wrote:

I know my trigger is incorrect. I know that I can use TG_OP to know what
operation is being done.
My question is ...

Is this a bug, or is it my responsibility to check that trigger result?


I think it's a bug because if something went wrong on the detail deletion and
it was rolled back, how could the parent record be deleted?


In your example I saw no rollback or error message:

"delete from Master where ID=2;
select * from Master; --will show no records.
select * from Detail; --will show one record pointing to Master_ID=2, that
doesn't exist anymore."

Was there an error message?

Then there is the fact that your trigger is doing something to the row 
BEFORE the delete or update and presumably modifying it. Without knowing 
what the function is doing or what it is actually returning, we are in 
full-on guessing mode.










--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread PegoraroF10
I know my trigger is incorrect. I know that I can use TG_OP to know what
operation is being done. 
My question is ...
Is this a bug or is it my responsibility to check that trigger result?

I think it's a bug because if something went wrong on detail deletion and it
was rolled back, how could the parent record be deleted?






Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread Adrian Klaver

On 04/19/2018 10:55 AM, PegoraroF10 wrote:

I'm using Postgres 10 on Ubuntu.

suppose a simple Master/Detail structure like this:

create table Master(ID integer primary key, name text);
create table Detail(ID integer primary key, Master_ID Integer, OtherInfo
text);
alter table Detail add constraint FKMasterDetail foreign key (Master_ID)
references Master(ID) on update cascade on delete cascade;

Then insert some records on it:
insert into Master(ID, Name) values (1,'First'), (2,'Second');
insert into Detail(ID, Master_ID, OtherInfo) values (1,1,'Detail
Information'), (2,2,'Detail Information2');

Then, if I delete on Master will delete on detail too. Fine.
delete from Master where ID=1;

But now, suppose I have a complex trigger before update or delete that runs
on the Detail table.
create function DoAComplexJobOnDetail() returns trigger as $$
begin
   -- Do lots of things then
   Return new; --This is the error, because I'm returning new even for
delete;


That can be dealt with using TG_OP value to conditionally change what is 
RETURNed:


https://www.postgresql.org/docs/10/static/plpgsql-trigger.html

"TG_OP

Data type text; a string of INSERT, UPDATE, DELETE, or TRUNCATE 
telling for which operation the trigger was fired.

"

See Example 42.4. A PL/pgSQL Trigger Procedure For Auditing at bottom of 
page.



end;$$ language plpgsql;
create trigger DetailDoAComplexJob before update or delete on Detail for
each row execute procedure DoAComplexJobOnDetail();

Then try to delete the other Master record. It will be deleted from Master but
the Detail record won't be, and it will obviously become invalid because of
the foreign key.
delete from Master where ID=2;
select * from Master; --will show no records.
select * from Detail; --will show one record pointing to Master_ID=2, that
doesn't exist anymore.

Is this a bug or is it my responsibility to check that trigger result?


Without seeing exactly what the trigger function on Detail is doing that 
is not answerable.



If that trigger responds incorrectly, I think no part of the operation should
be executed.







--
Adrian Klaver
adrian.kla...@aklaver.com



Re: AFTER UPDATE trigger updating other records

2018-01-26 Thread Ian Harding
On Thu, Jan 25, 2018 at 2:33 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Jan 25, 2018 at 3:06 PM, Ian Harding 
> wrote:
>
>>
>>   4 |3 | Top.Bar.Blah
>>   5 |4 | Top.Bar.Blah.Scooby
>> ​​
>>
>
>
>> barf$# UPDATE area SET areapath = (select areapath from area
>> a where areaid = area.parentid) || subpath(areapath, nlevel(areapath) - 1)
>> barf$# WHERE OLD.areapath @> areapath;
>>
>
> barf=# update area set parentid = 2 where areaid = 4;
>>
>>
> OLD.areapath = Top.Bar.Blah
>
> When it's Top.Bar.Blah.Scooby's turn to be updated, its parent is 4, which
> has a value of "Top.Bar.Blah", and so nothing happens.  It doesn't matter
> whether row 4 or row 5 occurs first - when multiple rows are updated, your
> subselect presents the exact same data to each row and it doesn't take into
> account updates applied to other rows made concurrently.
>
> Top.Bar.Blah's parent was changed to 2, so it does go from "Top.Bar.Blah"
> to "Top.Foo.Blah".
>
> When you then run your update manually, row 5 sees the newly committed
> areapath value for row 4 and now effects the change to
> "Top.Foo.Blah.Scooby"
>
> IOW, cascading updates are not possible (regardless of whether you are
> doing them in a trigger or manually).  If you had a three-deep parent-child
> hierarchy to change in your example you would have seen that your example
> script would only have corrected the first two levels.
>
> Probably your trigger will need to capture (SELECT areapath ​.. WHERE id =
> NEW.parentid) and then use that constant in your SET clause.
>
> ​David J.
> ​
>
For the record:

CREATE OR REPLACE FUNCTION trig_areapath_u() RETURNS trigger AS
$$
DECLARE
  ltree_parentpath ltree;
  ltree_mypath ltree;
  int_cnt int;
BEGIN
  IF TG_OP = 'UPDATE' THEN
  IF (COALESCE(OLD.parentid,0) != COALESCE(NEW.parentid,0)) THEN
-- Get the new parent path and save it
-- Get the old path for this item and save it
-- Replace the first X elements of the path for this and all
-- my children with the parent path

SELECT areapath
INTO ltree_parentpath
FROM area
WHERE areaid = NEW.parentid;

ltree_mypath := OLD.areapath;

-- Strip the old parent prefix: nlevel(ltree_mypath) - 1 labels. Using
-- nlevel(ltree_parentpath) here would only work when the old and new
-- parents happen to sit at the same depth.
UPDATE area SET areapath = ltree_parentpath ||
subpath(areapath, nlevel(ltree_mypath) - 1)
WHERE ltree_mypath @> areapath;

GET DIAGNOSTICS int_cnt = ROW_COUNT;
RAISE NOTICE 'Rows affected: %', int_cnt;
  END IF;
  END IF;

  RETURN NULL;
END
$$
LANGUAGE plpgsql VOLATILE;
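
The thread shows only the function; a plausible binding for it (the trigger
name is an assumption) fires once per row whenever parentid changes:

CREATE TRIGGER area_path_sync
AFTER UPDATE OF parentid ON area
FOR EACH ROW
EXECUTE PROCEDURE trig_areapath_u();

Since the inner UPDATE sets only areapath, the "OF parentid" column list and
the COALESCE guard in the function keep the trigger from re-firing on its own
update.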


Re: Deferrable constraint trigger

2018-01-26 Thread Laurenz Albe
Maciej Kołuda wrote:
> I am trying to write a constraint trigger to ensure that at any given point
> in time I have exactly one "active" record, based on the time.
> To give you some background, I would like to have exactly one active box per
> owner, into which items will be added.
> 
> And trigger itself:
> 
> CREATE OR REPLACE FUNCTION check_active_box() RETURNS trigger AS 
> $check_active_box$
> BEGIN
> IF (select count(*) from boxes where owner_id = NEW.owner_id and 
> validity_time >= now()) > 1 THEN
> RAISE EXCEPTION 'At least one valid box exists';
> END IF;
> RETURN NEW;
> END;
> $check_active_box$ LANGUAGE plpgsql;
> 
> CREATE CONSTRAINT TRIGGER check_active_box AFTER INSERT OR UPDATE ON boxes 
> DEFERRABLE INITIALLY DEFERRED FOR EACH ROW EXECUTE PROCEDURE 
> check_active_box();
> 
> Here is the part of the pg_log:
> 
> 2018-01-25 15:10:08 CET [15256-666] myuser@db LOG:  execute : SELECT 
> 1
> 2018-01-25 15:10:08 CET [15256-667] myuser@db LOG:  execute : BEGIN
> 2018-01-25 15:10:08 CET [15256-668] myuser@db LOG:  execute : select 
> nextval ('hibernate_sequence')
> 2018-01-25 15:10:09 CET [15255-2] myuser@db LOG:  execute : SELECT 1
> 2018-01-25 15:10:09 CET [15255-3] myuser@db LOG:  execute : BEGIN
> ...
> 2018-01-25 15:10:09 CET [15254-2] myuser@db LOG:  execute : SELECT 1
> 2018-01-25 15:10:09 CET [15254-3] myuser@db LOG:  execute : BEGIN
> 
> 2018-01-25 15:10:09 CET [15254-10] myuser@db LOG:  execute : insert 
> into boxes (inserted_at, owner_id, validity_time, version, id) values ($1, 
> $2, $3, $4, $5)
> 2018-01-25 15:10:09 CET [15254-11] myuser@db DETAIL:  parameters: $1 = 
> '2018-01-25 15:10:09.245', $2 = '10', $3 = '2018-01-26 15:10:09.244', $4 = 
> '0', $5 = '5082'
> 2018-01-25 15:10:09 CET [15256-676] myuser@db LOG:  execute : insert 
> into boxes (inserted_at, owner_id, validity_time, version, id) values ($1, 
> $2, $3, $4, $5)
> 2018-01-25 15:10:09 CET [15256-677] myuser@db DETAIL:  parameters: $1 = 
> '2018-01-25 15:10:09.244', $2 = '10', $3 = '2018-01-26 15:10:09.231', $4 = 
> '0', $5 = '5080'
> 2018-01-25 15:10:09 CET [15254-12] myuser@db LOG:  execute : insert 
> into box_messages (box_id, item_id) values ($1, $2)
> 2018-01-25 15:10:09 CET [15254-13] myuser@db DETAIL:  parameters: $1 = 
> '5082', $2 = '5072'
> 2018-01-25 15:10:09 CET [15255-10] myuser@db LOG:  execute : insert 
> into boxes (inserted_at, owner_id, validity_time, version, id) values ($1, 
> $2, $3, $4, $5)
> 2018-01-25 15:10:09 CET [15255-11] myuser@db DETAIL:  parameters: $1 = 
> '2018-01-25 15:10:09.246', $2 = '10', $3 = '2018-01-26 15:10:09.232', $4 = 
> '0', $5 = '5081'
> 2018-01-25 15:10:09 CET [15256-678] myuser@db LOG:  execute : insert 
> into box_items (box_id, item_id) values ($1, $2)
> 2018-01-25 15:10:09 CET [15256-679] myuser@db DETAIL:  parameters: $1 = 
> '5080', $2 = '5070'
> 2018-01-25 15:10:09 CET [15255-12] myuser@db LOG:  execute : insert 
> into box_items (box_id, item_id) values ($1, $2)
> 2018-01-25 15:10:09 CET [15255-13] myuser@db DETAIL:  parameters: $1 = 
> '5081', $2 = '5071'
> 2018-01-25 15:10:09 CET [15256-680] myuser@db LOG:  execute S_2: COMMIT
> 2018-01-25 15:10:09 CET [15254-14] myuser@db LOG:  execute S_1: COMMIT
> 2018-01-25 15:10:09 CET [15255-14] myuser@db LOG:  execute S_1: COMMIT
> 2018-01-25 15:10:09 CET [15255-15] myuser@db ERROR:  At least one valid box 
> exists
> 2018-01-25 15:10:09 CET [15255-16] myuser@db STATEMENT:  COMMIT
> 2018-01-25 15:10:09 CET [15255-17] myuser@db LOG:  execute : BEGIN
> 
> If I read the above log correctly, it looks like the last transaction (box id
> = 5081) has been rolled back properly, but the previous one
> (2018-01-25 15:10:09 CET [15254-14] myuser@db LOG:  execute S_1: COMMIT) has
> been committed without raising an error, which results in creating a second
> unwanted box.
> 
> Could you point out what might be wrong with the proposed constraint trigger?

There is a race condition.

The trigger runs right before the transaction commits, but if both 15256 and
15254 commit at the same time, the trigger functions may run concurrently, and
then neither will see the results of the other transaction, which has not yet
committed.

The window for the race condition grows with the run time of your trigger
function, but it will never go away.

You could make your function faster if you use
   IF EXISTS (SELECT 1 FROM boxes WHERE ...) THEN

Using SERIALIZABLE transactions would be a simple way to make sure your 
constraint is
not violated.

Yours,
Laurenz Albe
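
A minimal sketch of both suggestions combined, with column names taken from
the INSERT statements in the log (everything else is an assumption); under
SERIALIZABLE, one of two concurrent inserters fails with SQLSTATE 40001
(serialization_failure) and the application retries the whole transaction:

CREATE OR REPLACE FUNCTION check_active_box() RETURNS trigger AS
$check_active_box$
BEGIN
    -- EXISTS stops at the first matching row instead of counting them all
    IF EXISTS (SELECT 1 FROM boxes
               WHERE owner_id = NEW.owner_id
                 AND validity_time >= now()
                 AND id <> NEW.id) THEN
        RAISE EXCEPTION 'At least one valid box exists';
    END IF;
    RETURN NEW;
END;
$check_active_box$ LANGUAGE plpgsql;

-- client side, per attempt:
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO boxes (inserted_at, owner_id, validity_time, version, id)
VALUES (now(), 10, now() + interval '1 day', 0,
        nextval('hibernate_sequence'));
COMMIT;  -- on serialization_failure, roll back and retry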


