Re: [GENERAL] work_mem greater than 2GB issue

2009-05-15 Thread wickro
> HashAggregate doesn't have any ability to spill to disk.  The planner
> will not select a HashAggregate if it thinks the required hash table
> would be larger than work_mem.  What you've evidently got here is a
> misestimate of the required hash table size, which most likely is
> stemming from a
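A minimal sketch of checking which aggregation strategy the planner picked, and the group-count estimate behind that choice (the keyword and clicks columns are assumed for illustration, not taken from the thread):

    -- EXPLAIN shows whether the planner chose HashAggregate or
    -- GroupAggregate, along with the estimated number of output rows
    -- (groups) it based that choice on.
    EXPLAIN
    SELECT keyword, sum(clicks)
    FROM partner_country_keyword
    GROUP BY keyword;

    -- If the group estimate is badly off, refreshing statistics can
    -- change the plan:
    ANALYZE partner_country_keyword;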

Re: [GENERAL] work_mem greater than 2GB issue

2009-05-14 Thread wickro
Seq Scan on partner_country_keyword  (cost=0.00..2310878.80 rows=126170880 width=28)

So this is a planning mistake? Should a hash be allowed to grow larger
than work_mem before it starts to use the disk?

On May 14, 4:11 pm, [email protected] (Gregory Stark) wrote:
> wickro writes:
>
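One way to confirm a misestimate like this is to compare the planner's row estimate against the actual counts, and to raise the per-column statistics target on the grouping column if the distinct-value estimate is far off. A sketch, with the grouping column name assumed:

    -- EXPLAIN ANALYZE runs the query and reports estimated vs. actual
    -- row counts at each plan node, exposing the misestimate directly.
    EXPLAIN ANALYZE
    SELECT keyword, count(*)
    FROM partner_country_keyword
    GROUP BY keyword;

    -- If the distinct-value estimate for the grouping column is off,
    -- a larger statistics target may help (column name assumed):
    ALTER TABLE partner_country_keyword
        ALTER COLUMN keyword SET STATISTICS 1000;
    ANALYZE partner_country_keyword;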

[GENERAL] work_mem greater than 2GB issue

2009-05-14 Thread wickro
Hi everyone, I have a largish table (> 8GB). I'm doing a very simple single GROUP BY on it. I am the only user of this database. If I set work_mem to anything under 2GB (e.g. 1900MB), the postmaster process's memory usage stops at that value while it's performing its GROUP BY. There is only one hash operation so tha
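For reference, the setup described can be reproduced at the session level like this (the query is a sketch with assumed column names; 1900MB is the value from the report above):

    -- Set work_mem for the current session only, then verify it.
    SET work_mem = '1900MB';
    SHOW work_mem;

    -- The kind of single GROUP BY described (column names assumed):
    SELECT keyword, sum(clicks)
    FROM partner_country_keyword
    GROUP BY keyword;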