Excerpts from Tom Lane's message of Tue Jul 27 20:05:02 -0400 2010:
> Peter Hussey <pe...@labkey.com> writes:

> > 2) How is work_mem used during query execution?
> 
> Well, the issue you're hitting is that the executor is dividing the
> query into batches to keep the size of the in-memory hash table below
> work_mem.  The planner should expect that and estimate the cost of
> the hash technique appropriately, but seemingly it's failing to do so.
> Since you didn't provide EXPLAIN ANALYZE output, though, it's hard
> to be sure.

Hmm, I wasn't aware that hash joins worked this way with respect to
work_mem.  Is this visible in the EXPLAIN output?  If it only shows up as
something subtle (like an increased total cost), may I suggest making it
explicit somehow in the machine-readable outputs?
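
For anyone who wants to see it, a minimal sketch (the table names here are
invented for illustration) would be to force a small work_mem and EXPLAIN
ANALYZE a join that has to hash a larger relation:

    SET work_mem = '1MB';
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM   orders o
    JOIN   customers c ON c.id = o.customer_id;

With ANALYZE, the Hash node reports a line roughly like
"Buckets: 4096  Batches: 16  Memory Usage: 1025kB" (numbers invented);
Batches > 1 means the inner relation was split up and spilled to temp
files so that each batch's hash table stays under work_mem.  Plain
EXPLAIN without ANALYZE only shows the planner's cost estimates, so the
batching isn't obvious from it.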

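As for the machine-readable formats: if memory serves, the same runtime
information comes out of EXPLAIN (ANALYZE, FORMAT JSON) as separate fields
on the Hash node, something along the lines of

    "Hash Buckets": 4096,
    "Hash Batches": 16,
    "Original Hash Batches": 1,
    "Peak Memory Usage": 1025

(again, numbers invented), so a tool could flag Hash Batches > 1, or a
final batch count differing from the originally planned one, without
parsing the text output.
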