Amit Khandekar <> wrote:
> On 15 August 2014 04:04, Kevin Grittner <> wrote:

>> The identifiers in the trigger function are not resolved to
>> particular objects until there is a request to fire the trigger.
> Ah ok, you are talking about changes specific to the PL language
> handlers. Yes, I agree that in the plpgsql parser (and in any PL
> handler), we need to parse such table references in the SQL construct,
> and transform it into something else.
>> At that time the parse analysis
>> needs to find the name defined somewhere.  It's not defined in the
>> catalogs like a table or view, and it's not defined in the query
>> itself like a CTE or VALUES clause.  The names specified in trigger
>> creation must be recognized as needing to resolve to the new
>> TuplestoreScan, and it needs to match those to the tuplestores
>> themselves.

For now I have added the capability to register named tuplestores
(with the associated TupleDesc) to SPI.  This seems like a pretty
useful place to be able to do this, although I tried to arrange for
it to be reasonably easy to add other registration mechanisms

> One approach that comes to my mind is by transforming such transition
> table references into a RangeFunction reference while in plpgsql
> parser/lexer. This RangeFunction would point to a set-returning
> catalog function that would return rows from the delta tuplestore. So
> the tuplestore itself would remain a blackbox. Once we have such a
> function, any language handler can re-use the same interface.

plpgsql seems entirely the wrong place for that.  What I have done
is for trigger.c to add the tuplestores to the TriggerData
structure, and not worry about it beyond that.  I have provided a
way to register named tuplestores with SPI, and for the subsequent
steps to recognize those names as relation names.  The changes to
plpgsql are minimal, since it only needs to take the tuplestores
from TriggerData and register them with SPI before making the SPI
calls to plan and execute.
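That division of labor could be modeled roughly as below. This is a standalone C mock; the field names on TriggerData and the registration function are assumptions for illustration, not necessarily what the patch uses:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for PostgreSQL's tuplestore. */
typedef struct Tuplestorestate { int dummy; } Tuplestorestate;

/* trigger.c fills these in; the names come from CREATE TRIGGER.
 * (Hypothetical field names.) */
typedef struct TriggerData
{
    const char      *tg_oldtable_name;
    Tuplestorestate *tg_oldtable;
    const char      *tg_newtable_name;
    Tuplestorestate *tg_newtable;
} TriggerData;

/* Mock of the SPI registration entry point. */
static int registered = 0;

static int
SPI_register_tuplestore(const char *name, Tuplestorestate *ts)
{
    (void) name;
    (void) ts;
    registered++;
    return 0;
}

/*
 * All plpgsql has to do: pull the tuplestores out of TriggerData and
 * hand them to SPI before planning or executing any queries.
 */
static void
plpgsql_register_transition_tables(TriggerData *tdata)
{
    if (tdata->tg_oldtable)
        SPI_register_tuplestore(tdata->tg_oldtable_name,
                                tdata->tg_oldtable);
    if (tdata->tg_newtable)
        SPI_register_tuplestore(tdata->tg_newtable_name,
                                tdata->tg_newtable);
}
```

trigger.c never needs to know who consumes the tuplestores, and the PL handler never needs to know how they were built.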

>> Row counts, costing, etc. needs to be provided so the
>> optimizer can pick a good plan in what might be a complex query
>> with many options.
> I am also not sure about the costing, but I guess it may be possible
> to supply some costs to the FunctionScan plan node.

I went with a bogus row estimate for now.  I think we can arrange 
for a tuplestore to keep a count of rows added, and a function to 
retrieve that, and thereby get a better estimate, but that is not 
yet done.
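The idea for the better estimate is simply that the tuplestore keeps a running count as tuples are added, and costing asks for it rather than using a constant. A toy sketch (function names hypothetical, types mocked):

```c
#include <assert.h>
#include <stddef.h>

/* Toy tuplestore that tracks how many tuples have been added. */
typedef struct Tuplestorestate
{
    long tuples_added;
} Tuplestorestate;

/* Stand-in for the routine that appends a tuple to the store. */
static void
tuplestore_put_tuple(Tuplestorestate *ts)
{
    ts->tuples_added++;
}

/* Accessor the planner could use instead of a bogus constant. */
static double
named_tuplestore_row_estimate(const Tuplestorestate *ts)
{
    if (ts == NULL)
        return 1000.0;      /* fall back to a bogus default */
    return (double) ts->tuples_added;
}
```

Since the tuplestores are fully populated before the AFTER trigger fires, the count is exact by the time planning of the trigger's queries happens, which is unusually good information for the optimizer.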

I think this is approaching a committable state, although I think 
it should probably be broken down to four separate patches.  Two of 
them are very small and easy to generate: the SPI changes and the 
plpgsql changes are in files that are distinct from all the other 
changes.  What remains is to separate the parsing of the CREATE 
TRIGGER statement and the trigger code that generates the 
tuplestores for the transition tables from the analysis, planning, 
and execution phases which deal with a more generic concept of 
named tuplestores.  Some of the changes for those two things touch
the same parsing-related files -- for different reasons, but in the
same files.

To avoid polluting parsing code with SPI and tuplestore includes, I 
created a very thin abstraction layer for parse analysis to use.  I 
didn't do the same for the executor yet, but it may be a good idea 
there, too.
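One common shape for such a thin layer is a resolver callback hung off the parse state, so the parser sees only an opaque handle and never includes SPI or tuplestore headers. A standalone sketch with illustrative names (not the patch's actual hook):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Opaque to the parser; SPI knows what is really behind it. */
typedef void *NamedTuplestoreMetadata;

/* Resolver hook type: map an unresolved relation name to metadata. */
typedef NamedTuplestoreMetadata
    (*ParserResolveNamedTuplestore)(const char *refname, void *arg);

/* Minimal mock of ParseState carrying the hook. */
typedef struct ParseState
{
    ParserResolveNamedTuplestore p_tsr_resolver;
    void   *p_tsr_arg;
} ParseState;

/* Parse analysis: after catalogs/CTEs fail, fall back to the hook. */
static NamedTuplestoreMetadata
resolve_relation_name(ParseState *pstate, const char *refname)
{
    if (pstate->p_tsr_resolver)
        return pstate->p_tsr_resolver(refname, pstate->p_tsr_arg);
    return NULL;
}

/* SPI side: the callback it would install, closed over its registry
 * (here reduced to a single registered name for illustration). */
static NamedTuplestoreMetadata
spi_resolver(const char *refname, void *arg)
{
    const char *registered_name = arg;

    if (strcmp(refname, registered_name) == 0)
        return (NamedTuplestoreMetadata) arg;
    return NULL;
}
```

The parser file then needs only the hook typedef, and the SPI-aware code installs the callback; the same pattern would work for the executor side.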

It probably still needs more documentation and definitely needs 
more regression tests.  I have used these transition tables in 
plpgsql in simple tests, but there may be bugs still lurking.

New patch attached.

Kevin Grittner
The Enterprise PostgreSQL Company

Attachment: after-trigger-delta-relations-v3.patch.gz
