"Joost Kraaijeveld" <[EMAIL PROTECTED]> writes:
> I have a query that has run fine on 3 other *identical* machines
> (hardware, software, and postgresql.conf all identical, just different
> data in the database), but here it gives me an "out of memory" error
> every time I try it (see below).

> Does anyone have any idea where or how to look for the problem or a
> solution?

What have you got work_mem set to?
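
For reference, you can see what the server is actually running with from
any psql session:

    SHOW work_mem;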

The problem evidently is that a hash join table has gotten too large:

> HashBatchContext: 533741652 total in 76 blocks; 1376 free (74 chunks); 
> 533740276 used

Now that's not supposed to grow much beyond work_mem (plus or minus some
slop), and 533741652 bytes is about 509 MB. So either you're trying to
run with work_mem set to half a gig or more (answer: don't do that), or
you've found some kind of memory leak (answer: send a reproducible test
case to pgsql-bugs).
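
If it's the former, keep in mind that work_mem is a per-sort/per-hash
limit, not a per-backend one, so a large value multiplies fast under
concurrency. You can lower it for just one session while testing, without
touching postgresql.conf (recent releases accept unit suffixes; older
ones take the value in kilobytes):

    SET work_mem = '16MB';   -- recent releases
    SET work_mem = 16384;    -- older releases: value is in KB

The 16MB figure here is just an illustration; pick something your RAM can
sustain across all concurrent operations.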

                        regards, tom lane
