On 9 July 2014 18:54, Tomas Vondra <t...@fuzzy.cz> wrote:

> (1) size the buckets for NTUP_PER_BUCKET=1 (and use whatever number
>     of batches this requires)

If we start off by assuming NTUP_PER_BUCKET = 1, how much memory would
we save by recalculating the bucket array for NTUP_PER_BUCKET = 10 instead?
Resizing sounds like it will only be useful if we only just overflow our limit.
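
For scale, here is a back-of-the-envelope sketch of the bucket-array
arithmetic. This is my own illustration, not the patch: it assumes
8-byte bucket pointers and the round-up-to-a-power-of-two sizing that
ExecChooseHashTableSize does, and ignores skew buckets and per-batch
effects.

#include <stdio.h>

/*
 * Rough bucket-array overhead for a hash join: nbuckets is
 * ntuples / NTUP_PER_BUCKET rounded up to a power of two,
 * and each bucket header is one 8-byte pointer.
 */
static long
nbuckets_for(long ntuples, int ntup_per_bucket)
{
    long        nbuckets = 1;

    while (nbuckets < ntuples / ntup_per_bucket)
        nbuckets <<= 1;
    return nbuckets;
}

int
main(void)
{
    long        ntuples = 10 * 1000 * 1000;    /* 10M inner tuples */
    long        b1 = nbuckets_for(ntuples, 1);
    long        b10 = nbuckets_for(ntuples, 10);

    printf("NTUP_PER_BUCKET=1:  %ld buckets, %ld MB of pointers\n",
           b1, b1 * 8 / (1024 * 1024));
    printf("NTUP_PER_BUCKET=10: %ld buckets, %ld MB of pointers\n",
           b10, b10 * 8 / (1024 * 1024));
    return 0;
}

With 10M inner tuples that prints 128 MB of bucket pointers at
NTUP_PER_BUCKET = 1 against 8 MB at 10, so resizing back to 10 only
recovers the pointer array; the tuples themselves dominate and are
unaffected either way.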

If we release the next version with this as a hardcoded change, my
understanding is that memory usage for hash joins will leap upwards,
even if query run times go down. It sounds like we need some kind of
parameter to control this; a sketch of what that might look like is
below. "We made it faster" might not be true on servers that are
already experiencing high memory pressure.
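
Just to make "some kind of parameter" concrete, this is the shape such
a knob could take, written extension-style for illustration only. In
reality NTUP_PER_BUCKET is a compile-time constant in
src/backend/executor/nodeHash.c, so a real knob would have to be a core
GUC wired into ExecChooseHashTableSize; the name hash_ntup_per_bucket
and the limits here are hypothetical.

#include "postgres.h"
#include "fmgr.h"
#include "utils/guc.h"

PG_MODULE_MAGIC;

void        _PG_init(void);

/* hypothetical knob; nothing in core actually reads this */
static int  hash_ntup_per_bucket = 1;

void
_PG_init(void)
{
    DefineCustomIntVariable("hash_ntup_per_bucket",
                            "Target number of tuples per hash join bucket.",
                            "Lower values spend memory on bucket headers "
                            "but shorten bucket chains.",
                            &hash_ntup_per_bucket,
                            1,      /* boot value */
                            1,      /* min */
                            100,    /* max */
                            PGC_USERSET,
                            0,
                            NULL, NULL, NULL);
}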

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

