Alvaro Herrera <alvhe...@commandprompt.com> writes:
> Excerpts from Tom Lane's message of Tue Jul 27 20:05:02 -0400 2010:
>> Well, the issue you're hitting is that the executor is dividing the
>> query into batches to keep the size of the in-memory hash table below
>> work_mem.  The planner should expect that and estimate the cost of
>> the hash technique appropriately, but seemingly it's failing to do so.

> Hmm, I wasn't aware that hash joins worked this way wrt work_mem.  Is
> this visible in the explain output?

As of 9.0, any significant difference between "Hash Batches" and
"Original Hash Batches" would be a cue that the planner blew the
estimate.  For Peter's problem, we're just going to have to look
to see if the estimated cost changes in a sane way between the
small-work_mem and large-work_mem cases.
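
For illustration only (the table names and numbers below are invented,
not taken from Peter's case), a Hash node that had to split at run
time looks roughly like this in text-format output:

    SET work_mem = '1MB';
    EXPLAIN ANALYZE
      SELECT * FROM orders o JOIN customers c USING (customer_id);
    ...
       ->  Hash  (cost=.. rows=10000 ..) (actual .. rows=500000 ..)
             Buckets: 1024  Batches: 64 (originally 1)  Memory Usage: 1024kB

"Batches ... (originally ...)" is the text-format spelling of the
"Hash Batches" / "Original Hash Batches" pair you'd see in the
structured (XML/JSON) formats.  In this sketch the executor planned
for one batch on the strength of the 10000-row estimate, then had to
split to 64 batches when half a million rows actually arrived.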

                        regards, tom lane
