Hi,

On 06/11/15 16:20, Jan Wieck wrote:
> On 06/11/2015 09:53 AM, Kouhei Kaigai wrote:
>>> curious: what was work_mem set to?
>>
>> work_mem=48GB
>>
>> My machine mounts 256GB physical RAM.
>
> work_mem can be allocated several times per backend. Nodes like sort
> and hash_aggregate may each allocate that much. You should set
> work_mem to a fraction of physical-RAM / concurrent-connections
> depending on the complexity of your queries. 48GB does not sound
> reasonable.

That's true, but there are cases where values like this may be useful (e.g. for a particular query). We do allow such work_mem values, so I consider this failure to be a bug.
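
For scale, a back-of-envelope version of Jan's heuristic (the connection
and node counts here are made up purely for illustration):

    256GB physical RAM
      * 0.25      (leave room for shared_buffers and OS cache)
      / 100       (max_connections)
      / ~4        (sort/hash nodes that may run concurrently per query)
      = ~160MB    per work_mem allocation

which is nowhere near 48GB - but setting work_mem that high in a single
session, for one carefully supervised query, is still a legitimate thing
to do.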

The problem probably existed in the past too, but was amplified by the hash join improvements I did for 9.5, because those use NTUP_PER_BUCKET=1 instead of NTUP_PER_BUCKET=10. So the arrays of buckets are much larger, and we also use much more memory for them than we did in the past.
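
For illustration, a simplified sketch of the bucket sizing logic
(paraphrased from ExecChooseHashTableSize in nodeHash.c; variable names
and rounding details are from memory, so treat it as an approximation,
not the actual code):

    #define NTUP_PER_BUCKET 1   /* was 10 before 9.5 */

    long    hash_table_bytes = work_mem * 1024L;    /* work_mem is in kB */
    long    max_pointers = hash_table_bytes / sizeof(HashJoinTuple);
    double  dbuckets;
    int     nbuckets;

    /* one bucket per tuple, capped by what fits into work_mem */
    dbuckets = ceil(ntuples / (double) NTUP_PER_BUCKET);
    dbuckets = Min(dbuckets, (double) max_pointers);

    /*
     * With work_mem = 48GB, max_pointers is ~6.4 billion, so the cast
     * below can overflow int - and even a "valid" nbuckets can make
     * the palloc below exceed MaxAllocSize (just under 1GB).
     */
    nbuckets = (int) dbuckets;
    nbuckets = 1 << my_log2(nbuckets);  /* round to a power of two */

    hashtable->buckets = (HashJoinTuple *)
        palloc(nbuckets * sizeof(HashJoinTuple));

With the old NTUP_PER_BUCKET=10, the same work_mem produced a bucket
array one tenth the size, which is why this rarely surfaced before.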

Interestingly, the hash code checks for INT_MAX overflows in a number of places, but does not check for this one ...
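
A minimal sketch of the missing guard, as I'd imagine it (this is my
assumption about a possible fix, not an actual patch):

    /*
     * Hypothetical guard: clamp the bucket count so the bucket array
     * can never exceed MaxAllocSize bytes or INT_MAX entries, the same
     * way other paths in the hash code already guard against INT_MAX.
     */
    max_pointers = Min(max_pointers,
                       (long) (MaxAllocSize / sizeof(HashJoinTuple)));
    max_pointers = Min(max_pointers, INT_MAX / 2);  /* room to double nbuckets */

    dbuckets = Min(dbuckets, (double) max_pointers);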

regards
--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

