mlw <[EMAIL PROTECTED]> writes:

> I had written a piece of code about two years ago that used the aggregate
> feature of PostgreSQL to create an array of integers from an aggregate, as:
>
> select int_array_aggregate( column ) from table group by column
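For readers following along, an aggregate of this shape can be declared in plain SQL. The sketch below assumes array_append is available as the transition function (contrib/intagg shipped a C implementation instead); the aggregate name matches the one quoted above, but the table and column names are made up:

```sql
-- Hypothetical declaration: collect int4 values into an int4[].
CREATE AGGREGATE int_array_aggregate (int4) (
    sfunc    = array_append,   -- appends each input value to the state array
    stype    = int4[],         -- state (and result) type is an integer array
    initcond = '{}'            -- start from an empty array
);

-- Usage, with hypothetical names t/key/val:
-- SELECT key, int_array_aggregate(val) FROM t GROUP BY key;
```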
I found this and am using it extensively. It's extremely helpful, thank
you. It goes well with either the *= operators in contrib/array or the
GiST indexing in contrib/intarray.

One problem I've found, though, is that the optimizer doesn't have any
good statistics for estimating the number of matches of such operators.
It seems that fixing that would require substantial changes to the
statistics gathered.

> create table fast_lookup as select reference, aggregate_array( field ) from
> table group by field
>
> Where the function aggregate_array takes any number of data types.

Sure, that seems logical. Actually, I already bumped into a situation
where I wanted an array of char(1). I just kludged it to use ascii() of
the first character, but it would be cleaner, and perhaps better for
Unicode later, to use the actual character.

Someone else on the list already asked for a function that produced an
array of varchar. I think they were pointed at a general-purpose
function from PL/R.

-- 
greg

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster
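To make the contrib/intarray pairing concrete, here is a sketch of indexing such a lookup table with GiST and querying it with the overlap operator. The table and column names are invented; the gist__int_ops operator class and the && operator come from contrib/intarray:

```sql
-- Hypothetical lookup table built from an array aggregate.
CREATE TABLE fast_lookup (reference int, ids int[]);

-- GiST index using the intarray operator class.
CREATE INDEX fast_lookup_ids_idx
    ON fast_lookup USING gist (ids gist__int_ops);

-- Find rows whose array overlaps the given set of ids.
SELECT reference FROM fast_lookup WHERE ids && '{42,7}'::int[];
```

It is exactly selectivity estimates for operators like && that the planner currently lacks good statistics for, as noted above.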