"Itagaki Takahiro" <itagaki.takah...@gmail.com> writes: > I encountered "out of memory" error in large > GROUP BY query with array_agg(). The server log > was filled by the following messages:
> accumArrayResult: 8192 total in 1 blocks; 7800 free (0 chunks); 392 used

> Should we choose a smaller initial memory size in accumArrayResult()?

That's not really going to help much, considering that the planner's
estimated memory use per hash aggregate is only a few dozen bytes.  We
have to get that estimate in sync with reality or the problem will
remain.

Eventually it might be nice to have some way to specify the estimate to
use for any aggregate function --- but for a near-term fix maybe we
should just hard-wire a special case for array_agg in
count_agg_clauses_walker().  I'd be inclined to leave the array_agg code
as-is and teach the planner to assume ALLOCSET_DEFAULT_INITSIZE per
array_agg aggregate.

			regards, tom lane
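
For illustration, here is a rough sketch of the kind of special case
being suggested.  It assumes the general shape of
count_agg_clauses_walker() in src/backend/optimizer/util/clauses.c at
the time: a walker that has already looked up the aggregate's
transition type (aggtranstype) and accumulates a per-group memory
estimate in some struct field, called counts->transitionSpace below.
Those names are assumptions for the sketch, not the exact code.

    /*
     * Sketch only, not a tested patch.  INTERNALOID comes from
     * catalog/pg_type.h and ALLOCSET_DEFAULT_INITSIZE (8 kB) from
     * utils/memutils.h.  "counts" stands for whatever struct the
     * walker uses to accumulate per-aggregate totals.
     */
    if (aggtranstype == INTERNALOID)
    {
        /*
         * An INTERNAL transition value is typically a pointer to a
         * large private state -- array_agg()'s accumArrayResult()
         * allocates its own memory context, for instance -- so charge
         * one initial context block instead of a few dozen bytes.
         */
        counts->transitionSpace += ALLOCSET_DEFAULT_INITSIZE;
    }

Keying on the INTERNAL transition type rather than hard-wiring
array_agg's function OID would also cover other aggregates that keep
their state in a private memory context, at the cost of possibly
overestimating for INTERNAL-state aggregates that do not.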