Tomas Vondra wrote on 25.01.2017 at 22:46:
>> I guess this is based on the column statistics stored in pg_stats, but I
>> am not sure:
>
> It is based on the average length of values in that column, yes.

Thanks for confirming that.

I assume this is taken from pg_stats.avg_width?

> I'm not sure what you mean by 'dynamically resize'. The above
> decision is pretty much how the planner decides whether to use hash
> aggregate or group aggregate. If we estimate that the hash aggregate
> will fit into work_mem, the planner will consider both possibilities.
> If the estimate says the hash aggregate would not fit into work_mem,
> we'll only consider group aggregate, because that can work with very
> little memory.
>
> At execution time we'll only use as much memory as actually needed.
> The trouble is that if we under-estimated the amount of memory,
> there's no way back.

The "under-estimation" is what I am referring to with "dynamically resize".

What happens if the planner assumes 100 kB but in reality it needs 100 MB?
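To make the question concrete, here is a rough sketch (in Python, not PostgreSQL source code) of the decision Tomas describes: the estimated hash table size is compared against work_mem at plan time, and once a hash aggregate is chosen, execution cannot fall back to a sort-based plan. The function names and the per-group overhead constant are illustrative assumptions, not actual planner internals.

```python
# Illustrative model of the plan-time decision, NOT PostgreSQL's real code.
# work_mem default is 4 MB; the per-group overhead constant is hypothetical.
WORK_MEM_KB = 4096
PER_GROUP_OVERHEAD = 64  # assumed per-entry bookkeeping bytes

def estimated_hash_size_kb(n_groups, avg_width):
    """Estimate hash table size from the expected number of groups and
    the average value width (as reported in pg_stats.avg_width)."""
    return n_groups * (avg_width + PER_GROUP_OVERHEAD) / 1024

def plan_aggregate(n_groups, avg_width, work_mem_kb=WORK_MEM_KB):
    """If the estimate fits work_mem, both strategies are considered;
    otherwise only the sort-based group aggregate is planned."""
    if estimated_hash_size_kb(n_groups, avg_width) <= work_mem_kb:
        return "HashAggregate or GroupAggregate (both considered)"
    return "GroupAggregate only"

# An estimated 1,000 groups (~100 kB) fits easily, so hash aggregation
# is on the table:
print(plan_aggregate(n_groups=1_000, avg_width=40))

# But if the real group count turns out 1000x larger (~100 MB), the
# same plan would have been rejected -- and at execution time there is
# no switching back:
print(plan_aggregate(n_groups=1_000_000, avg_width=40))
```

The point of the sketch is only that the choice is made once, from estimates; a mis-estimate of the group count or avg_width changes the memory actually used at execution time, not the plan.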

Thomas

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
