On Wed, 2008-05-28 at 18:17 -0400, Gregory Stark wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> 
> > AFAICS we must aggregate the trigger checks. We would need a special
> > property of triggers that allowed them to be aggregated when two similar
> > checks arrived. We can then use hash aggregation to accumulate them. We
> > might conceivably need to spill to disk also, since the aggregation may
> > not always be effective. But in most cases the tables against which FK
> > checks are made are significantly smaller than the tables being loaded.
> > Once we have hash aggregated them, that is then the first part of a hash
> > join to the target table.
> 
> Well we can't aggregate them as they're created because later modifications
> could delete or update the original records. The SQL spec requires that FK
> checks be effective at the end of the command. 

Well, that's what we need to do. We just need to find a way...

Currently, we store trigger entries by htid (heap tuple id). I guess we
need to aggregate them on the actual values being looked up instead.
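
To make that concrete, a sketch (the tables here are hypothetical):

  CREATE TABLE customer (id int PRIMARY KEY);
  CREATE TABLE orders (customer_id int NOT NULL
                         REFERENCES customer (id));
  INSERT INTO customer SELECT g FROM generate_series(1, 100) g;

  -- Loads a million rows but only 100 distinct FK values. Today
  -- this queues a million per-tuple trigger events; aggregated on
  -- the value looked up, the queue collapses to 100 entries.
  INSERT INTO orders
  SELECT (g % 100) + 1 FROM generate_series(1, 1000000) g;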

The SQL spec also says that the contents of the referenced table should
be taken as of the start of the command, so we should be safe to
aggregate the values prior to the check.
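
Once aggregated, the accumulated checks are equivalent to a single
anti-join against the referenced table, along these lines (same
hypothetical tables as above; since the load was the only insert,
scanning all of orders stands in for the new tuples):

  -- hash aggregate feeding a hash join; any row returned is an
  -- FK violation
  SELECT k.customer_id
    FROM (SELECT DISTINCT customer_id FROM orders) k
    LEFT JOIN customer c ON c.id = k.customer_id
  WHERE c.id IS NULL;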

As already suggested in the work on Read Only Tables, we could optimise
such FK checks away to being simple constraint checks.

-- 
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support

