Sorry for top posting - darn phone...
1) As I mentioned when I reviewed the patch in the commitfest, I don't see the point of the manual memory management. palloc/repalloc has the same kind of doubling behaviour behind the scenes anyway, so just call repalloc before adding each new element.
2) Look at the transition function for count(), which avoids pallocing a new bigint on every row using a similar trick. It protects against the bug you describe by checking the fcinfo context to verify that it is being called as an aggregate function and not as a regular function. So as long as the aggregate has the right initial state, it should be fine.
Come to think of it, though... do we require creators of new aggregates to own the state transition function? If not, we have a problem...
greg
On 15 Oct 2008, at 07:08 PM, "Ian Caulfield" <[EMAIL PROTECTED]> wrote:
2008/10/14 Robert Treat <[EMAIL PROTECTED]>
On Monday 13 October 2008 04:53:44 Markus Wanner wrote:
Having reviewed the last commit fest's intagg patch as well, I thought we agreed that a more general functionality is wanted for core. But as long as we don't have that, I'd like intagg to stay in contrib.
While I agree that the "right" solution would be to make this code work more generally for other aggregates, I also think that until someone is willing to do that work, this needs to stay in contrib, and that we ought to accept patches improving it.
I started to look at implementing array_agg by making the existing intagg stuff more generic, but I came across an issue (which might also be a bug in intagg). intagg relies on being able to stomp on the input transition value, and basically allocates the working array as a block, reallocating it when it becomes too small. The lower bound of the array is (ab)used to keep track of how many items have been allocated.
For a generic aggregate, which accepts any types and NULL inputs, it would be useful to store the offset of the next free byte in the lower-bound field instead, in order to avoid a linear search through the data to find where to put the next element.
The problem is that I can't see any way to guarantee that someone wouldn't create another aggregate using the same transition function, but giving an initial value whose lower bound causes the transition function to do naughty things. (As far as I can tell, this is also true of intagg: giving an initial state value of '[200000:200000]{1}' will cause it to happily write up to 200000 integers off the end of that one-element array without allocating any extra storage.)
I'm not sure what the best way around this is. Performance-wise, it seems that avoiding an array_seek() call in the transition function would be good. Is there some way that the transition function can tell which memory context the state value was allocated in, and thus whether it was supplied as an initial value or not?
Ian
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers