Tom Lane <[EMAIL PROTECTED]> writes:
> Simon Riggs <[EMAIL PROTECTED]> writes:
>> A much better objective would be to remove duplicate trigger calls, so
>> there isn't any build up of trigger data in the first place. That would
>> apply only to immutable functions. RI checks certainly fall into that
>> category.
>
> They're hardly "duplicates": each event is for a different tuple.
>
> For RI checks, once you get past a certain percentage of the table it'd
> be better to throw away all the per-tuple events and do a full-table
> verification a la RI_Initial_Check(). I've got no idea about a sane
> way to make that happen, though.
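For reference, the full-table verification that RI_Initial_Check() issues
boils down to a query roughly of this shape (fktable, pktable, and the key
columns here are placeholders for the real FK/PK definitions):

    SELECT fk.fkcol
      FROM ONLY fktable fk
      LEFT OUTER JOIN ONLY pktable pk ON pk.pkcol = fk.fkcol
     WHERE pk.pkcol IS NULL AND fk.fkcol IS NOT NULL;

Any row that comes back is a violation, and the planner is free to choose
whatever join strategy suits the table sizes.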
One idea I had was to accumulate the data in something like a tuplestore
and then perform the RI check as a join between a Materialize node and the
target table (a rough sketch of the equivalent SQL is below). Then we could
use any join type, whether a hash join, nested loop, or merge join,
depending on how many rows there are on each side and how many of them are
distinct values.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!
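A rough sketch of the kind of check that idea amounts to, using a temp
table to stand in for the in-memory tuplestore (pending_fk, pktable, and
the column names are made up purely for illustration):

    -- keys touched by the statement, collected in one place instead of
    -- queueing a separate trigger event per row
    CREATE TEMP TABLE pending_fk (fk_id int);

    -- a single join verifies every pending key; the planner can pick a
    -- hash, merge, or nested-loop join based on the row counts
    SELECT p.fk_id
      FROM pending_fk p
      LEFT JOIN ONLY pktable pk ON pk.id = p.fk_id
     WHERE pk.id IS NULL AND p.fk_id IS NOT NULL;

Any row coming back from that query would represent an RI violation.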