Re: [HACKERS] triggers on prepare, commit, rollback... ?
On May 19, 2008, at 6:53 PM, Tom Lane wrote:
> Another response I've heard is "but I don't want to make inside-the-database changes, I want to propagate the state to someplace external". Of course that's completely broken too, because there is *absolutely no way* you will ever make such changes atomic with the inside-the-database transaction commit. We discourage people from making triggers cause outside-the-database side effects already --- it's not going to be better to do it in an on-commit trigger.

Isn't this close to what NOTIFY is? An on-commit trigger that causes only outside-the-database effects.

Cheers,
Steve

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
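For reference, the LISTEN/NOTIFY machinery Steve mentions behaves like a commit-gated side channel: a NOTIFY issued inside a transaction is delivered only if and when that transaction commits (channel name here is illustrative):

```sql
-- Session 1: subscribe to a notification channel
LISTEN orders_changed;

-- Session 2: notify inside a transaction
BEGIN;
-- ... make some database changes ...
NOTIFY orders_changed;
COMMIT;   -- the notification is delivered only now;
          -- a ROLLBACK would have discarded it silently
```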
Re: [HACKERS] triggers on prepare, commit, rollback... ?
On Mon, 2008-05-19 at 21:53 -0400, Tom Lane wrote:
> Bruce Momjian [EMAIL PROTECTED] writes:
>>>> trigger on prepare, commit, rollback, savepoint, ...
>>> This is a sufficiently frequently asked question that I wish someone would add an entry to the FAQ about it, or add it to the TODO list's "Features we don't want" section.
>> OK, remind me why we don't want it again?
> I'm sure I've ranted on this several times before, but a quick archive search doesn't find anything. So, here are a few points to chew on:
>
> * Trigger on rollback: what's that supposed to do?

Tell others that this trx failed, maybe log a failure? OTOH, this can be implemented by a daemon that sits on "tail -f logfile | grep ROLLBACK".

> The current transaction is already aborted, so the trigger has no hope of making any database changes that will ever be visible to anyone.

After reading your explanations, a trigger on rollback/failure is the only one that still makes sense, but it may be that after a failure it does not have enough usable state to report anything useful.

- Hannu
Re: [HACKERS] Installation of Postgres 32Bit on 64 bit machine
Hi,

Thanks for the reply. When you say that I need to install the 32-bit versions of the libraries, are you referring to the libraries specified in the error message? I did in fact search for the libraries and got them installed, but some of the libraries are not available for SUSE Linux, and installing other libraries does not resolve the error. Please let me know what has to be done to resolve this issue.

Thanks in advance,
Regards,
Cinu

--- On Tue, 20/5/08, Douglas McNaught [EMAIL PROTECTED] wrote:
> On Mon, May 19, 2008 at 1:40 PM, cinu [EMAIL PROTECTED] wrote:
>> Hi All, I am trying to install PostgreSQL (postgresql-8.2.4-1PGDG.i686.rpm) on a 64-bit machine. When I try to install I get the following error message:
>>
>> :/home/dump/postgres32bit # rpm -ivh postgresql-server-8.2.4-1PGDG.i686.rpm postgresql-8.2.4-1PGDG.i686.rpm postgresql-libs-8.2.4-1PGDG.i686.rpm
>> warning: postgresql-server-8.2.4-1PGDG.i686.rpm: Header V3 DSA signature: NOKEY, key ID 20579f11
>> error: Failed dependencies:
>>     libcrypto.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686
>>     libreadline.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686
>>     libssl.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686
>>     initscripts is needed by postgresql-8.2.4-1PGDG.i686
>>     libcrypto.so.4 is needed by postgresql-8.2.4-1PGDG.i686
>>     libreadline.so.4 is needed by postgresql-8.2.4-1PGDG.i686
>>     libssl.so.4 is needed by postgresql-8.2.4-1PGDG.i686
>>     libcrypto.so.4 is needed by postgresql-libs-8.2.4-1PGDG.i686
>>     libssl.so.4 is needed by postgresql-libs-8.2.4-1PGDG.i686
>>
>> This installation is being done on SUSE Linux 10, please let me know if there is any alternative with which I can bypass these errors and make the installation successful.
>
> You need to install 32-bit versions of the libraries PG depends on. They can coexist alongside the 64-bit versions.
>
> -Doug
[HACKERS] Installation of Postgres 32Bit on 64 bit machine
Hello All,

I am trying to install PostgreSQL (postgresql-8.2.4-1PGDG.i686.rpm) on a 64-bit machine. When I try to install, I get the following error message:

:/home/dump/postgres32bit # rpm -ivh postgresql-server-8.2.4-1PGDG.i686.rpm postgresql-8.2.4-1PGDG.i686.rpm postgresql-libs-8.2.4-1PGDG.i686.rpm
warning: postgresql-server-8.2.4-1PGDG.i686.rpm: Header V3 DSA signature: NOKEY, key ID 20579f11
error: Failed dependencies:
    libcrypto.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686
    libreadline.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686
    libssl.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686
    initscripts is needed by postgresql-8.2.4-1PGDG.i686
    libcrypto.so.4 is needed by postgresql-8.2.4-1PGDG.i686
    libreadline.so.4 is needed by postgresql-8.2.4-1PGDG.i686
    libssl.so.4 is needed by postgresql-8.2.4-1PGDG.i686
    libcrypto.so.4 is needed by postgresql-libs-8.2.4-1PGDG.i686
    libssl.so.4 is needed by postgresql-libs-8.2.4-1PGDG.i686

This installation is being done on SUSE Linux 10. Please let me know if there is any alternative with which I can bypass these errors and make the installation successful.

Regards
Suresh
Re: [HACKERS] [GSoC08]some detail plan of improving hash index
Greg Smith wrote:
> On Fri, 16 May 2008, Josh Berkus wrote:
>> For a hard-core benchmark, I'd try EAStress (SpecJAppserver Lite)
> This reminds me... Jignesh had some interesting EAStress results at the East conference I was curious to try and replicate more publicly one day. Now that there are some initial benchmarking servers starting to become available, it strikes me that this would make a good test case to run on some of those periodically. I don't have a spare $2K for a commercial license right now, but there's a cheap ($250) non-profit license for EAStress around. That might be a useful purchase for one of the PG non-profits to make one day though.

I (an individual, not me ex-cathedra) could pony up the geld for such a license if it is useful; let me know if so and where to do so.

Greg Williamson
Senior DBA
DigitalGlobe
Re: [HACKERS] triggers on prepare, commit, rollback... ?
Dear Tom,

>> trigger on prepare, commit, rollback, savepoint, ...
> I'm sure I've ranted on this several times before, but a quick archive search doesn't find anything. So, here are a few points to chew on:

I agree that not all of them may make sense, but some may do, and I do not know which of them would definitely be of any use in any case for anyone. In fact, I just know that I do not know :-)

> * Trigger on rollback: what's that supposed to do? The current transaction is already aborted, so the trigger has no hope of making any database changes that will ever be visible to anyone.

For some synchronous replication implementation, it may be to forward the ROLLBACK to the twin transaction in another database?

> * Trigger on commit: what do you do if the transaction fails after calling the trigger? The reductio ad absurdum for this is to consider having two on-commit triggers, where obviously the second one could fail.

Obviously, an AFTER xxx cannot make the xxx fail. I would use BEFORE COMMIT or BEFORE PREPARE to check global consistency constraints that would not make sense to check at the simple row or statement level in a complex transaction.

> The basic problem is that the transaction commit sequence is very

Indeed. The BEFORE should be before anything else in the commit sequence, and may make it fail, but the AFTER is after everything else, and could not change the outcome.

> the user transaction commits or aborts. But that hardly seems like a usable basis for replication, since you're just plain out of luck if the secondary transaction fails.

As for replication, I was naively thinking of using BEFORE PREPARE to forward a prepare and possibly fail in the master if the slave fails, and then an AFTER PREPARE or BEFORE COMMIT/ROLLBACK PREPARE would forward the final COMMIT/ROLLBACK depending on the success of the prepare operation in the master and slaves. The function could also wait for something to finish on the slave.

If something fails in between, then there are several state combinations to detect and handle, but this is just the point of 2PC. I'm not claiming this is easy, on the contrary, but ISTM that the 2PC semantics is all that is fundamentally needed to achieve synchronous replication, and once hooks are available there should be a way to prototype something at the user level (Slony style). The efficiency of the stuff is another issue, but some applications don't need high throughput and low latency, just to know that the data are not lost.

Well, just my usual 0.02 EUR :-)

-- Fabien.
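The hooks sketched above would be orchestrating PostgreSQL's existing two-phase commit commands, which already provide the crash-safe intermediate state; a minimal sequence looks like this (the transaction identifier is illustrative):

```sql
-- On each participating node:
BEGIN;
-- ... do the transaction's work ...
PREPARE TRANSACTION 'txn_42';   -- state survives a crash; locks are held

-- Coordinator, once every node has prepared successfully:
COMMIT PREPARED 'txn_42';

-- Or, if any node failed to prepare:
ROLLBACK PREPARED 'txn_42';
```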
[HACKERS] plpgsql: penalty due to double evaluation of parameters
Hi,

Within exec_prepare_plan there are calls to exec_eval_datum to identify the argtypes of the involved parameters. However, exec_eval_datum actually fills up value/isnull entries in these cases, causing unnecessary additional calls when all we need is the datum type. Such unnecessary evaluation of values might prove to be very costly later, since this quirk of exec_eval_datum usage is not so visible. Worse still, it could cause bugs if some evaluations have side effects across multiple evals.

It might make sense to introduce a new function exec_eval_datum_type to address this, or exec_eval_datum could itself be modified for cases where we just need the datum type. Should I cook up a patch for this? I am inclined towards introducing a new function (but that means that any new datum-related changes need to be carried out in two functions instead of one).

Regards,
Nikhils
--
EnterpriseDB http://www.enterprisedb.com
Re: [HACKERS] triggers on prepare, commit, rollback... ?
On Tue, May 20, 2008 at 06:38:17AM +0100, James Mansion wrote:
> Tom Lane wrote:
>> Another response I've heard is "but I don't want to make inside-the-database changes, I want to propagate the state to someplace external". Of course that's completely broken too, because there is
> Some things are idempotent and are effectively hints - that they are not transacted can be well understood and accommodated.

The idempotent comment makes sense; "hints" I'm not so sure about, as they sound as though they've got very complicated semantics.

One operation I can think of that fits the transaction BEGIN/END (either COMMIT or ROLLBACK) semantics well is the now() function. It can be modelled in at least a couple of ways using transaction begin/end. If you have a BEGIN trigger, then when a transaction is run it would save the time in a transaction-level variable. If you have an END trigger, then this variable would be reset; the now() function would need to check on every call to see if it's missing and set it to the current time if so, otherwise it should return its value. Could this be modelled without using transaction-level triggers?

Sam
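As it happens, PostgreSQL's now() already behaves like the model described above without any triggers: its value is fixed at the start of the transaction and stays stable until commit or rollback, which is easy to observe:

```sql
BEGIN;
SELECT now() = transaction_timestamp();   -- true: now() is frozen per transaction
SELECT pg_sleep(1);
SELECT now() = transaction_timestamp();   -- still true after the sleep
SELECT clock_timestamp() >= now();        -- clock_timestamp() keeps advancing
COMMIT;
```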
Re: [HACKERS] triggers on prepare, commit, rollback... ?
On Tue, 2008-05-20 at 12:09 +0200, Fabien COELHO wrote:
> As for replication, I was naively thinking of using BEFORE PREPARE to forward a prepare and possibly fail in the master if the slave fails, and then an AFTER PREPARE or BEFORE COMMIT/ROLLBACK PREPARE would forward the final COMMIT/ROLLBACK depending on the success of the prepare operation in the master and slaves. The function could also wait for something to finish on the slave. If something fails in between, then there are several state combinations to detect and handle, but this is just the point of 2PC. I'm not claiming this is easy, on the contrary, but ISTM that the 2PC semantics is all that is fundamentally needed to achieve synchronous replication, and once hooks are available there should be a way to prototype something at the user level (Slony style). The efficiency of the stuff is another issue, but some applications don't need high throughput and low latency, just to know that the data are not lost.

There are probably easier solutions for achieving this, like using two connections at client level, or doing async replication, or running data modifications through pl/proxy functions where the partitioning function always returns two partitions.

--- Hannu
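The pl/proxy variant Hannu mentions might look roughly like this (the cluster and function names are illustrative, not from the thread):

```sql
-- Sketch: a writer function that runs on every partition of a
-- two-node cluster, so each modification lands on both replicas.
-- A function of the same name must exist on each partition.
CREATE FUNCTION upsert_user(username text, pwd text)
RETURNS void AS $$
    CLUSTER 'replicated_pair';
    RUN ON ALL;
$$ LANGUAGE plproxy;
```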
Re: [HACKERS] [rfc,patch] PL/Proxy in core
On Wed, 2008-05-14 at 23:29 +0300, Marko Kreen wrote:
> There are few code beautification ideas on which I'd like to get feedback from wider audience:
>
> - Drop CONNECT statement. This was added to make creating simple dblink style calls easier, but actually its not maintainable compared to CLUSTER, it can be used only for occasional hacks. OTOH, if we want to deprecate dblink and replace it with PL/Proxy, it should probably stay.
>
> - Drop SELECT statement. This was added as it was simple to do and also to be straightforward replacement for dblink. But it's conceptually ugly. Also it gives impression that there will be UPDATE, DELETE and IF... which will not happen.

I'd also suggest one feature request:

- Add COPY FROM/TO support

The way it could be done is similar to the (now deprecated) SELECT:

CREATE FUNCTION copy_users_to_partitions() RETURNS VOID AS $$
    CLUSTER 'userdb';
    RUN ON hashtext(text::$1);
    COPY users FROM stdin;
$$ LANGUAGE plproxy;

and it should be used like COPY is currently:

proxydb# SELECT copy_users_to_partitions();
bob bobspwd [EMAIL PROTECTED]
ben benspwd [EMAIL PROTECTED]
...
amy amyspwd [EMAIL PROTECTED]
\.
 copy_users_to_partitions
--------------------------

(1 row)

I am not sure how easy it is to get access to stdin from inside a function, but the version of COPY which copies from the filesystem can surely be done. On slaves it will always be plain COPY with STDIN/STDOUT. As this reintroduces the direct-access-to-partition-tables feature removed with SELECT, it should be a feature that can be used by superusers only. Notice that the field type should be given, as it can't be deduced from arguments. COPY .. TO would usually (but not necessarily) be RUN ON ALL.

Hannu
Re: [HACKERS] triggers on prepare, commit, rollback... ?
>> I'm not claiming this is easy, on the contrary, but ISTM that the 2PC semantics is all that is fundamentally needed to achieve synchronous replication
> There are probably easier solutions for achieving this, like using two connections at client level

Sure, but that means doing the implementation in the client, although I think that a system solution is better and independent of the client code, and I may make the solution evolve in time without direct client-side effect. So I'm more interested in the generic implementation.

> or doing async replication,

The problem of async is the *A*: you cannot guarantee that the data will not be lost if the failure occurs between the commit and the later replication. For some applications, this is not an option.

> or running data modifications through pl/proxy functions where partitioning function always returns two partitions

I don't think that pl/proxy takes care of 2PC semantics in any useful way. Possibly something like pgpool could take care somehow of the replication by executing queries on two backends, but there are issues with such an approach (say, a SEQUENCE may not return the same result on both sides, some functions may have side effects...), and on commit it must use prepared statements on both sides, and I don't think this is the case for now with the current pgpool.

Anyway, I do not think that there is a simple high availability / high throughput / low latency / guaranteed replication / easy to administrate / load balanced silver bullet... My point is more about exploration, and for that user-visible hooks would help.

-- Fabien.
Re: [HACKERS] triggers on prepare, commit, rollback... ?
On Tue, 2008-05-20 at 13:36 +0200, Fabien COELHO wrote:
>> or running data modifications through pl/proxy functions where partitioning function always returns two partitions
> I don't think that pl/proxy takes care of 2PC semantics in any useful way. Possibly something like pgpool could take care somehow of the replication by executing queries on two backends, but there are issues with such an approach (say, a SEQUENCE may not return the same result on both sides, some functions may have side effects...), and on commit it must use prepared statements on both sides, and I don't think this is the case for now with the current pgpool.
>
> Anyway, I do not think that there is a simple high availability / high throughput / low latency / guaranteed replication / easy to administrate / load balanced silver bullet... My point is more about exploration, and for that user-visible hooks would help.

2PC will never be any of (high throughput / low latency / easy to administrate).

- Hannu
Re: [HACKERS] triggers on prepare, commit, rollback... ?
Tom Lane wrote:
> Bruce Momjian [EMAIL PROTECTED] writes:
>>>> trigger on prepare, commit, rollback, savepoint, ...
>>> This is a sufficiently frequently asked question that I wish someone would add an entry to the FAQ about it, or add it to the TODO list's "Features we don't want" section.
>> OK, remind me why we don't want it again?
> I'm sure I've ranted on this several times before, but a quick archive search doesn't find anything. So, here are a few points to chew on:
>
> * Trigger on rollback: what's that supposed to do? The current transaction is already aborted, so the trigger has no hope of making any database changes that will ever be visible to anyone.
>
> * Trigger on commit: what do you do if the transaction fails after calling the trigger? The reductio ad absurdum for this is to consider having two on-commit triggers, where obviously the second one could fail.

I fully agree that having triggers on rollback and on commit of 2PC transactions is broken by design. Triggers on COMMIT (for non-2PC transactions) and PREPARE (for 2PC transactions) seem workable though - I'd expect such a trigger to be executed *before* any actual commit handling takes place. Essentially, doing
  BEGIN; <some work>; COMMIT;
in the presence of an on-commit trigger would be equivalent to doing
  BEGIN; <some work>; SELECT my_trigger_function(); COMMIT;

A possible use-case for that is aggregating some statistics collected during a transaction. One could e.g. maintain a cache of table rowcounts by summing up the number of inserted and deleted records per table with some per-row ON INSERT and ON DELETE (presumably C-language) triggers, and then update a global cache at transaction end.

regards, Florian Pflug
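The rowcount idea can be sketched with per-row triggers alone; this simplified version (table and function names are illustrative) updates the cache on every row rather than once at transaction end, since plain triggers offer no on-commit hook for the final aggregation step:

```sql
CREATE TABLE rowcount_cache (tablename name PRIMARY KEY, n bigint NOT NULL);

-- Per-row bookkeeping; the mail suggests C triggers for speed,
-- plpgsql is used here for brevity.
CREATE FUNCTION track_rowcount() RETURNS trigger AS $$
BEGIN
    UPDATE rowcount_cache
       SET n = n + CASE TG_OP WHEN 'INSERT' THEN 1 ELSE -1 END
     WHERE tablename = TG_TABLE_NAME;
    RETURN NULL;   -- the return value of an AFTER row trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER foo_rowcount
    AFTER INSERT OR DELETE ON foo
    FOR EACH ROW EXECUTE PROCEDURE track_rowcount();
```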
Re: [HACKERS] Installation of Postgres 32Bit on 64 bit machine
Hi,

On Mon, 2008-05-19 at 17:10 +0530, cinu wrote:
> This installation is being done on SUSE linux 10

This is not a list for PostgreSQL installation issues; please use -general or -admin. ...and those packages are *not* for SuSE.

Regards,
--
Devrim GÜNDÜZ, RHCE
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/
Re: [HACKERS] triggers on prepare, commit, rollback... ?
On Tue, 20 May 2008, Hannu Krosing wrote:
> Tell others that this trx failed, maybe log a failure? OTOH, this can be implemented by a daemon that sits on "tail -f logfile | grep ROLLBACK"

In order to follow the log files like that successfully in many environments, you need to stay in sync as the underlying log file changes (it might rotate every day, for example). Unfortunately it's not as simple as just using tail.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
[HACKERS] idea: storing view source in system catalogs
I wonder if there is any merit to the idea of storing the 'create view' statement that created the view in an appropriate place. There are basically two reasons for this:

*) preserve initial formatting, etc. Database functions, when viewed with \df+ in psql, appear nice and clean as I wrote them. Much better than \d view.

*) store what the view intends to do, not what it does. If I do:

create view v as select * from foo;

the view definition as understood by the database expands the column list. This unfortunately means that the view definition is no longer valid if the underlying type changes. Understandably, select * is often considered bad style, but it nevertheless often comes up when writing the 'advanced' type manipulation that PostgreSQL is so good at. This also comes up when expanding a composite type, for example.

The idea here is to provide a stepping stone towards allowing the view to be redefined against source objects during invalidation events. ISTM the easiest and best way to do that is to try to reapply the original definition against the altered dependent objects and throw the resultant error, if any. Views can be a real pain to deal with compared to functions in terms of DDL operations.

merlin
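The expansion is easy to see from psql (table and view names are illustrative):

```sql
CREATE TABLE foo (a int, b text);
CREATE VIEW v AS SELECT * FROM foo;

SELECT pg_get_viewdef('v'::regclass);
-- The stored definition names the columns explicitly,
-- something like: SELECT foo.a, foo.b FROM foo;

ALTER TABLE foo ADD COLUMN c int;
SELECT * FROM v;   -- still only a and b; the view does not pick up c
```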
Re: [HACKERS] triggers on prepare, commit, rollback... ?
On Tue, May 20, 2008 at 01:20:48PM -0400, Greg Smith wrote:
> On Tue, 20 May 2008, Hannu Krosing wrote:
>> Tell others that this trx failed, maybe log a failure? OTOH, this can be implemented by a daemon that sits on "tail -f logfile | grep ROLLBACK"
> In order to follow the log files like that successfully in many environments, you need to stay in sync as the underlying log file changes (it might rotate every day for example). Unfortunately it's not as simple as just using tail.

GNU tail provides the -F (or --follow=name) option for just this reason.

Sam
Re: [HACKERS] idea: storing view source in system catalogs
On Tue, May 20, 2008 at 02:03:17PM -0400, Merlin Moncure wrote:
> I wonder if there is any merit to the idea of storing the 'create view' statement that created the view in an appropriate place. There are basically two reasons for this:

+1 for DDL in general, including the original CREATE and appending all subsequent ALTERs. DROP would have to make the thing go away. I suppose CREATE OR REPLACE would also wipe the earlier versions, but I'm not married to that idea.

Cheers,
David.
--
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: [EMAIL PROTECTED]

Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate
Re: [HACKERS] idea: storing view source in system catalogs
Merlin Moncure [EMAIL PROTECTED] writes:
> I wonder if there is any merit to the idea of storing the 'create view' statement that created the view in an appropriate place.

No, there isn't. As counterexamples look at pg_constraint.consrc and pg_attrdef.adsrc, both of which were mistakes from the day they were put in, and have been deprecated for a long time. Source doesn't have any reasonable way to track table/column renames, to point out one problem.

> If I do: create view v as select * from foo; The view definition as understood by the database expands the column list.

Indeed, exactly as is *required* by the SQL spec.

regards, tom lane
Re: [HACKERS] idea: storing view source in system catalogs
David Fetter wrote:
> On Tue, May 20, 2008 at 02:03:17PM -0400, Merlin Moncure wrote:
>> I wonder if there is any merit to the idea of storing the 'create view' statement that created the view in an appropriate place. There are basically two reasons for this:
> +1 for DDL in general, including the original CREATE and appending all subsequent ALTERs. DROP would have to make the thing go away. I suppose CREATE OR REPLACE would also wipe the earlier versions, but I'm not married to that idea.

The $1000 question is how to deal with renames, though. Not of the view itself, but of the tables it depends on. Currently, the view tracks those renames (which is an important feature, IMHO), and you get the correct SQL (using the new names) when dumping the view. Anything that stores the original statement but fails to track renames is more confusing than it's worth, I think...

But maybe you could store the whitespace appearing before (or after?) a token in the parse tree that is stored for a view. That might not allow reconstructing the *precise* statement, but at least the reconstructed statement would preserve newlines and indentation - which probably is the whole reason for wanting to store the original statement in the first place, no? I have no idea how hard it'd be to carry that information from the lexer into the parser, and then into whatever representation we use for storing a view, though...

regards, Florian Pflug
Re: [HACKERS] triggers on prepare, commit, rollback... ?
Tom Lane wrote:
> Bruce Momjian [EMAIL PROTECTED] writes:
>>>> trigger on prepare, commit, rollback, savepoint, ...
>>> This is a sufficiently frequently asked question that I wish someone would add an entry to the FAQ about it, or add it to the TODO list's "Features we don't want" section.
>> OK, remind me why we don't want it again?
> I'm sure I've ranted on this several times before, but a quick archive search doesn't find anything.

I know of this very short rant:
http://archives.postgresql.org/pgsql-hackers/2008-04/msg01200.php

Florian G. Pflug wrote:
> A possible use-case for that is aggregating some statistics collected during a transaction. One could e.g. maintain a cache of table rowcounts by summing up the number of inserted and deleted records per table with some per-row ON INSERT and ON DELETE (presumably C-language) triggers, and then update a global cache at transaction end.

This is possible now using deferred constraint triggers (PL/pgSQL is sufficient), though better described IMHO as "before prepare" or "before commit" rather than "on commit". Any FAQ addition should mention deferred constraint triggers. I would expect problems with "after commit" and "after rollback" triggers.

I think that the documentation of when the existing deferred constraint triggers run in 2PC/non-2PC could be clarified, and the effects on the transaction state that are possible within such trigger functions documented.

http://www.postgresql.org/docs/current/interactive/sql-createconstraint.html says "They can be fired either at the end of the statement causing the triggering event, or at the end of the containing transaction". It refers to http://www.postgresql.org/docs/current/interactive/sql-createtable.html, which says "Checking of constraints that are deferrable can be postponed until the end of the transaction" and "If the constraint is INITIALLY DEFERRED, it is checked only at the end of the transaction."

I'd also contest this statement: "Only foreign key constraints currently accept this clause. All other constraint types are not deferrable."

Regards,
Stephen Denne.
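The deferred constraint trigger technique Stephen describes can be sketched as follows (the table, function, and invariant are illustrative): the trigger's checks run at transaction end rather than per statement, and raising an error there still aborts the commit, giving an approximation of a "before commit" hook.

```sql
-- Fires at COMMIT (or at PREPARE TRANSACTION for 2PC) instead of
-- immediately after each statement.
CREATE FUNCTION check_global_invariant() RETURNS trigger AS $$
BEGIN
    IF (SELECT sum(balance) FROM accounts) < 0 THEN
        RAISE EXCEPTION 'global invariant violated';  -- aborts the commit
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER accounts_invariant
    AFTER INSERT OR UPDATE OR DELETE ON accounts
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_global_invariant();
```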