On Wed, Nov 26, 2014 at 7:24 PM, Andrew Dunstan <and...@dunslane.net> wrote:
> On 11/26/2014 05:00 PM, Andrew Dunstan wrote:
>> Attached is some anonymized DDL for a fairly complex schema from a
>> PostgreSQL Experts client. Also attached is an explain query that runs
>> against the schema. The client's problem is that in trying to run the
>> explain, Postgres simply runs out of memory. On my untuned 9.3 test rig
>> (Scientific Linux 6.4 with 24GB of RAM and 24GB of swap), vmstat clearly
>> shows the explain chewing up about 7GB of memory. When it's done, the
>> free memory jumps back to where it was. In a similar case on the client's
>> test rig we saw memory use jump a lot more.
>>
>> The client's question is whether this is a bug. It certainly seems like
>> it should be possible to plan a query without chewing up this much
>> memory, or at least to be able to limit the amount of memory that can be
>> grabbed during planning. Going from humming along happily to OOM
>> conditions just by running "explain <somequery>" is not very friendly.
>
> A further data point - thanks to Andrew Gierth (a.k.a. RhodiumToad) for
> pointing this out. The query itself grabs about 600MB to 700MB to run,
> whereas the EXPLAIN takes vastly more - on my system, 10 times more.
> Surely that's not supposed to happen?
Hmm. So you can run the query but you can't EXPLAIN it? That sounds like it could well be a bug, but I'm thinking you might have to instrument palloc() to find out where all of that space is being allocated in order to figure out why it's happening - or maybe connect gdb to the server while the EXPLAIN is chewing up memory and pull some backtraces to figure out what section of code it's stuck in.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
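[Editor's note: a minimal sketch of the gdb approach suggested above, assuming gdb and PostgreSQL debug symbols are installed on the test rig. `<PID>` is a placeholder for the backend process id of the session running the EXPLAIN (obtainable via pg_backend_pid() in that session, or from pg_stat_activity).]

```shell
# Attach gdb to the backend in batch mode while the EXPLAIN is running:
# grab a backtrace to see where planning is stuck, then dump PostgreSQL's
# per-context memory accounting to see which memory context holds the bloat.
gdb -p <PID> --batch \
    -ex 'bt' \
    -ex 'call MemoryContextStats(TopMemoryContext)'

# MemoryContextStats() writes its report to the backend's stderr, so the
# context breakdown appears in the postmaster log rather than gdb's output.
```

Repeating the attach a few times while memory is growing gives a crude sampling profile of where the planner is spending its time and allocations, without having to rebuild with an instrumented palloc().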