Hi, we still get an out-of-memory error during pg_dump execution:
bin$ ./initdb -D /tmp/test
The files belonging to this database system will be owned by user "jsoler".
This user must also own the server process.

The database cluster will be initialized with locale "es_ES.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "spanish".

Data page checksums are disabled.

creating directory /tmp/test ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    ./pg_ctl -D /tmp/test -l logfile start

bin$ ./pg_ctl -D /tmp/test -l logfile start
waiting for server to start.... done
server started
bin$ ./psql -p 10000 -d postgres
psql (11devel)
Type "help" for help.
postgres=# select version();
                                              version
----------------------------------------------------------------------------------------------------
 PostgreSQL 11devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.2.0-8ubuntu3.2) 7.2.0, 64-bit
(1 row)

postgres=# create table lo_table (id numeric, lo oid);
CREATE TABLE
postgres=# insert into lo_table select a.i, lo_from_bytea(0, E'\\xffffff0000000000') from generate_series(1,32775000) as a(i);
INSERT 0 32775000
postgres=# \q

bin$ ./pg_dump -v --port=10000 -Fc -d postgres -f /tmp/dump
pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined access methods
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading transforms
pg_dump: reading table inheritance information
pg_dump: reading event triggers
pg_dump: finding extension tables
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "public.lo_table"
pg_dump: flagging inherited columns in subtables
pg_dump: reading indexes
pg_dump: flagging indexes in partitioned tables
pg_dump: reading extended statistics
pg_dump: reading constraints
pg_dump: reading triggers
pg_dump: reading rewrite rules
pg_dump: reading policies
pg_dump: reading row security enabled for table "public.lo_table"
pg_dump: reading policies for table "public.lo_table"
pg_dump: reading publications
pg_dump: reading publication membership
pg_dump: reading publication membership for table "public.lo_table"
pg_dump: reading subscriptions
pg_dump: reading large objects
out of memory

2018-03-19 7:55 GMT+01:00 Amit Khandekar <amitdkhan...@gmail.com>:

> On 17 March 2018 at 00:47, Tom Lane <t...@sss.pgh.pa.us> wrote:
> > Amit Khandekar <amitdkhan...@gmail.com> writes:
> >> If the SELECT target list expression is a join subquery, and if the
> >> subquery does a hash join, then the query keeps on consuming more and
> >> more memory. Below is such a query :
> >
> > Thanks for the report!
> >
> > I dug into this with valgrind, and found that the problem is that
> > ExecHashTableCreate allocates some memory that isn't freed by
> > ExecHashTableDestroy, specifically the per-hash-key function
> > information. This is just dumb. We can keep that stuff in the
> > hashtable's hashCxt instead, where it will get freed at the right time.
> > The attached patch seems to fix it just by reordering the code.
>
> I saw that you have now committed the fix and also backported it to
> all supported branches.
>
> Thanks!
>
> -Amit Khandekar
> EnterpriseDB Corporation
> The Postgres Database Company