Richard Huxton <[EMAIL PROTECTED]> writes:
> Let's see if that hash-join is really the culprit. Can you run EXPLAIN
> and then EXPLAIN ANALYSE on the query, but first issue:
>   SET enable_hashjoin=off;
> If that makes little difference, try the same with enable_hashagg.
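
(Spelled out, the test being proposed would look something like this in a
psql session; "SELECT ..." stands in for the problem query, and the settings
only affect the current session:)

    SET enable_hashjoin = off;
    EXPLAIN ANALYZE SELECT ...;     -- does the plan or memory behavior change?

    -- if that makes little difference, restore hashjoin and disable hashagg
    RESET enable_hashjoin;
    SET enable_hashagg = off;
    EXPLAIN ANALYZE SELECT ...;
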
It seems like it must be the hashagg step --- hashjoin spills to disk in
an orderly fashion when it reaches work_mem, but hashagg doesn't (yet).
However, if we know that there're only going to be 60K hashagg entries,
how could the memory get blown out by that?  Perhaps there's a memory
leak here somewhere.

Please restart your postmaster under a reasonable ulimit setting, so that
it will get ENOMEM rather than going into swap hell, and then try the
query again.  When it runs up against the ulimit it will give an "out of
memory" error and dump some per-context memory usage info into the
postmaster log.  That info is what we need to see.

			regards, tom lane
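
(A rough sketch of that restart-under-ulimit step, assuming a Linux box
with bash and pg_ctl on the PATH; the data directory, log file path, and
the ~256MB limit are placeholders to adjust for your installation:)

    pg_ctl -D /usr/local/pgsql/data stop
    ulimit -v 262144        # cap virtual memory for this shell and its children; value is in kB
    pg_ctl -D /usr/local/pgsql/data start -l /usr/local/pgsql/postmaster.log
    # re-run the query; when a backend hits the limit it should fail with
    # "out of memory" and write per-context usage stats to the postmaster log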