Marthin Laubscher <postg...@lobeshare.co.za> writes:
> Essentially the aggregate functions would still be front and centre as 
> defined for the user defined type, and though the user defined type itself 
> would be largely unaware of it, all the individual functions that manipulate 
> values of the UDT would go through the same process of getting access to the 
> value in decoded form if it already exists, before calling the decoding 
> routines if it doesn't. If I choose the right memory context, would that 
> simply age out when the session, transaction, query or aggregate is done, 
> or what else would know we're done with the memory so we can let go of it?

Well, yeah, that's the problem.  You can certainly maintain your own
persistent data structure somewhere, but then it's entirely on your
head to manage it and avoid memory leakage/bloating as you process
more and more data.  The mechanisms I pointed you at provide a
structure that makes sure space gets reclaimed when there's no longer
a reference to it, but if you go the roll-your-own route then it's
a lot messier.

A mechanism that might work well enough is a transaction-lifespan
hash table.  You could look at, for example, uncommitted_enum_types
in pg_enum.c for sample code.
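
Roughly, the shape of it might be something like the sketch below (untested,
and the names decoded_cache, DecodedEntry and decode_value are placeholders
for your own code, not anything that exists in pg_enum.c).  The two key points
are putting the dynahash and the decoded values in TopTransactionContext, so
the memory is reclaimed automatically at transaction end, and remembering to
forget your static pointer at that point so the next transaction doesn't
dereference freed memory.

#include "postgres.h"

#include "access/xact.h"
#include "utils/hsearch.h"
#include "utils/memutils.h"

/* Placeholder for your own decoding routine. */
extern void *decode_value(uint32 key);

/* One cache entry: a key identifying the stored value, plus its decoded form. */
typedef struct DecodedEntry
{
	uint32		key;
	void	   *decoded;
} DecodedEntry;

static HTAB *decoded_cache = NULL;
static bool decoded_cache_cb_registered = false;

/*
 * At end of transaction the hash table's memory vanishes along with
 * TopTransactionContext; all we must do is forget our pointer to it.
 */
static void
decoded_cache_xact_cb(XactEvent event, void *arg)
{
	if (event == XACT_EVENT_COMMIT || event == XACT_EVENT_ABORT)
		decoded_cache = NULL;
}

static void
decoded_cache_init(void)
{
	HASHCTL		ctl;

	memset(&ctl, 0, sizeof(ctl));
	ctl.keysize = sizeof(uint32);
	ctl.entrysize = sizeof(DecodedEntry);
	ctl.hcxt = TopTransactionContext;	/* reclaimed automatically at xact end */

	decoded_cache = hash_create("UDT decoded-value cache", 64, &ctl,
								HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);

	if (!decoded_cache_cb_registered)
	{
		RegisterXactCallback(decoded_cache_xact_cb, NULL);
		decoded_cache_cb_registered = true;
	}
}

/* Fetch the decoded form for "key", decoding it only on first use. */
static void *
decoded_cache_lookup(uint32 key)
{
	DecodedEntry *entry;
	bool		found;

	if (decoded_cache == NULL)
		decoded_cache_init();

	entry = (DecodedEntry *) hash_search(decoded_cache, &key,
										 HASH_ENTER, &found);
	if (!found)
	{
		/* allocate the decoded form in the same long-lived context */
		MemoryContext oldcxt = MemoryContextSwitchTo(TopTransactionContext);

		entry->decoded = decode_value(key);
		MemoryContextSwitchTo(oldcxt);
	}
	return entry->decoded;
}

One difference from the pg_enum.c precedent: core code gets its end-of-xact
cleanup called directly from xact.c, whereas extension code has to ask for
the equivalent notification with RegisterXactCallback (or something similar).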

                        regards, tom lane

