> It seems in this case the time needed for a single deferred trigger somehow
> depends on the number of dead tuples in the table, because a vacuum of the
> table will 'reset' the query-times. However, even if I wanted to, vacuum is
> not allowed from within a function.
> What is happening here? And more importantly, what can I do to prevent
> this?

I'm not clear on all of the work you're doing in the trigger.  However, it 
seems obvious that you're deleting and/or updating a large number of rows.  
The escalating execution times would be consistent with that.
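That escalation is exactly what MVCC dead tuples look like. A minimal illustration (table and data here are invented, not from your post):

```sql
-- Hypothetical example: repeated updates bloat a table with dead tuples.
CREATE TABLE base_record (id integer PRIMARY KEY, payload text);

-- Under MVCC, every UPDATE or DELETE leaves the old row version behind
-- as a dead tuple; later scans must still visit (and skip) those versions.
UPDATE base_record SET payload = payload || 'x';

-- EXPLAIN ANALYZE will show the actual time climbing as dead tuples pile up.
EXPLAIN ANALYZE SELECT count(*) FROM base_record;

-- VACUUM marks the dead tuples reusable, which is why it 'resets' the
-- query times.  It cannot be called from inside a function or trigger
-- because it cannot run inside a transaction block.
VACUUM base_record;
```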

> NB. My real-world application 'collects' id's in need for deferred work,
> but this work is both costly and only needed once per base record. So I use
> an 'update' table whose content I join with the actual tables in order to
> do the work for _all_ the base records involved upon the first execution of
> the deferred trigger. At the end of the trigger, this 'update' table is
> emptied so any additional deferred triggers on the same table will hardly
> lose any time. Or at least, that was the intention....
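If I'm reading the description right, the pattern is roughly the following. This is only a sketch: the table, column, and function names are my guesses, and expensive_recompute() stands in for whatever the costly work actually is.

```sql
-- Hypothetical reconstruction of the 'update'-table pattern described above.
CREATE TABLE pending_updates (base_id integer PRIMARY KEY);

CREATE OR REPLACE FUNCTION process_pending() RETURNS trigger AS '
BEGIN
    -- Do the costly work once for every collected base record.
    UPDATE base_record b
       SET derived = expensive_recompute(b.id)   -- hypothetical function
      FROM pending_updates p
     WHERE b.id = p.base_id;

    -- Empty the collection table so later firings of the deferred
    -- trigger find nothing to do.  Note that this DELETE is exactly
    -- what bloats pending_updates: every row deleted here stays on
    -- disk as a dead tuple until the next VACUUM, and each subsequent
    -- trigger firing still has to scan past them.
    DELETE FROM pending_updates;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

-- Deferred triggers are declared as constraint triggers, e.g.:
-- CREATE CONSTRAINT TRIGGER ... DEFERRABLE INITIALLY DEFERRED ...
```

The DELETE at the end is the likely culprit for the slowdown you describe, since its dead tuples accumulate across every statement in the transaction.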

I think you're doing a lot more than is wise to do in triggers.  Deferrable 
triggers aren't really intended for running long procedures that create 
types and temporary tables (your post got a bit garbled, so pardon me if 
I'm misreading it).  I'd suggest reconsidering your approach to this 
application problem.

At the very least, increase max_fsm_pages (and perhaps max_fsm_relations) to 
some high value, so the free space map can keep track of all those dead 
tuples between vacuums.  That may help (or not).
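For reference, those settings live in postgresql.conf; the values below are only placeholders, and changing them requires a server restart:

```
# postgresql.conf -- free space map sizing (placeholder values)
max_fsm_relations = 1000     # relations tracked in the free space map
max_fsm_pages = 100000       # disk pages tracked; raise this if tables
                             # accumulate many dead tuples between vacuums
```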


__Aglio Database Solutions_______________
Josh Berkus                    Consultant
Ph: 415-752-2500        Fax: 415-752-2387
2166 Hayes Suite 200    San Francisco, CA
