Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tomas Vondra
On Sun, Apr 21, 2019 at 11:40:22AM -0500, Justin Pryzby wrote: On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote: Jeff Janes writes: > The growEnabled stuff only prevents infinite loops. It doesn't prevent > extreme silliness. > If a single 32 bit hash value has enough tuples by

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Gunther
Now to Justin's patch. First undo Tomas' patch and apply: $ mv src/include/executor/hashjoin.h.orig src/include/executor/hashjoin.h $ mv src/backend/executor/nodeHash.c.orig src/backend/executor/nodeHash.c $ mv src/backend/executor/nodeHashjoin.c.orig src/backend/executor/nodeHashjoin.c $ patch
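The revert-and-reapply workflow quoted above depends on `.orig` copies of the source files saved before the first patch was applied. A minimal self-contained sketch of that convention, using scratch files in place of the real PostgreSQL sources:

```shell
#!/bin/sh
# Demonstrate the ".orig" backup/restore convention from the quoted commands,
# in a throwaway directory standing in for the PostgreSQL source tree.
set -e
dir=$(mktemp -d)
mkdir -p "$dir/src/backend/executor"
f="$dir/src/backend/executor/nodeHash.c"

echo "original code" > "$f"
cp "$f" "$f.orig"           # keep a pristine copy before patching
echo "patched code" > "$f"  # simulate applying the first patch

# Undo the first patch by restoring the .orig copy, then the tree is
# clean and ready for "patch" to apply the next patch file.
mv "$f.orig" "$f"
result=$(cat "$f")
echo "$result"              # original code
rm -rf "$dir"
```

`patch` itself can create these backups with `--backup --suffix=.orig`, which is presumably how the copies in the quoted session came to exist.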

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Gunther
After applying Tomas' patch 0002 as corrected, over 0001, same thing: integrator=# set enable_nestloop to off; SET integrator=# explain analyze select * from reports.v_BusinessOperation; WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32 WARNING:

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tomas Vondra
On Sun, Apr 21, 2019 at 07:25:15PM -0400, Gunther wrote: After applying Tomas' corrected patch 0001, and routing HJDEBUG messages to stderr: integrator=# set enable_nestloop to off; SET integrator=# explain analyze select * from reports.v_BusinessOperation; ... WARNING: 

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Gunther
After applying Tomas' corrected patch 0001, and routing HJDEBUG messages to stderr: integrator=# set enable_nestloop to off; SET integrator=# explain analyze select * from reports.v_BusinessOperation; WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32 WARNING:

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tom Lane
Tomas Vondra writes: > On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote: >> Jeff Janes writes: >>> If a single 32 bit hash value has enough tuples by itself to not fit in >>> work_mem, then it will keep splitting until that value is in a batch by >>> itself before shutting off >> Right,

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Justin Pryzby
On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote: > Jeff Janes writes: > > The growEnabled stuff only prevents infinite loops. It doesn't prevent > > extreme silliness. > > > If a single 32 bit hash value has enough tuples by itself to not fit in > > work_mem, then it will keep

Re: Backup and Restore (pg_dump & pg_restore)

2019-04-21 Thread Tomas Vondra
On Sat, Apr 20, 2019 at 06:50:47PM +, Daulat Ram wrote: Hello Team, We are getting below error while migrating pg_dump from Postgresql 9.6 to Postgresql 11.2 via pg_restore in docker environment. 90d4c9f363c8:~$ pg_restore -d kbcn "/var/lib/kbcn_backup19" pg_restore:

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tomas Vondra
On Sun, Apr 21, 2019 at 10:36:43AM -0400, Tom Lane wrote: Jeff Janes writes: The growEnabled stuff only prevents infinite loops. It doesn't prevent extreme silliness. If a single 32 bit hash value has enough tuples by itself to not fit in work_mem, then it will keep splitting until that

Backup and Restore (pg_dump & pg_restore)

2019-04-21 Thread Daulat Ram
Hello Team, We are getting below error while migrating pg_dump from Postgresql 9.6 to Postgresql 11.2 via pg_restore in docker environment. 90d4c9f363c8:~$ pg_restore -d kbcn "/var/lib/kbcn_backup19" pg_restore: [archiver (db)] Error while PROCESSING TOC: pg_restore: [archiver (db)] Error from
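A common cause of TOC errors when moving a 9.6 dump into an 11.2 server is taking the dump with the old binaries. The usual advice (hedged here, since the full error text is truncated above) is to dump with the *target* version's tools in custom format, then restore with the matching pg_restore. A sketch, with hypothetical host and database names:

```shell
# Dump the 9.6 database using the version-11 client tools (custom format),
# then restore into the 11.2 server. Host/db names are placeholders.
pg_dump -Fc -h old-host -d kbcn -f /var/lib/kbcn_backup19
pg_restore -d kbcn -h new-host --no-owner --verbose /var/lib/kbcn_backup19
```

`pg_restore --exit-on-error` (or examining the first "Error from TOC entry" line) narrows down which object actually failed.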

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tom Lane
Jeff Janes writes: > The growEnabled stuff only prevents infinite loops. It doesn't prevent > extreme silliness. > If a single 32 bit hash value has enough tuples by itself to not fit in > work_mem, then it will keep splitting until that value is in a batch by > itself before shutting off
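The point in this exchange is that batch splitting assigns a tuple to a batch from its hash value, so tuples that share one 32-bit hash can never be separated no matter how many times the batch count doubles. A small sketch (not PostgreSQL's actual code) of that dead end:

```python
# Sketch: why doubling the batch count cannot shrink a batch whose tuples
# all share a single 32-bit hash value. The batch is derived from the hash,
# so identical hashes always land together.

def batch_of(hashval: int, nbatch: int) -> int:
    # Batch chosen from the hash modulo nbatch (nbatch a power of two),
    # loosely modeled on how the executor partitions tuples.
    return hashval % nbatch

# 1000 tuples that all hash to the same 32-bit value.
tuples = [0xDEADBEEF] * 1000

for nbatch in (16, 32, 64, 128):
    biggest = max(
        sum(1 for h in tuples if batch_of(h, nbatch) == b)
        for b in range(nbatch)
    )
    print(nbatch, biggest)  # the fullest batch stays at 1000 tuples
```

This is why the thread converges on `growEnabled` shutting off further splitting once a split fails to move any tuples: past that point, more batches only add overhead.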

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Jeff Janes
On Sat, Apr 20, 2019 at 4:26 PM Tom Lane wrote: > Tomas Vondra writes: > > Considering how rare this issue likely is, we need to be looking for a > > solution that does not break the common case. > > Agreed. What I think we need to focus on next is why the code keeps > increasing the number of

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tomas Vondra
On Sun, Apr 21, 2019 at 03:08:22AM -0500, Justin Pryzby wrote: On Sun, Apr 21, 2019 at 01:03:50AM -0400, Gunther wrote: On 4/20/2019 21:14, Tomas Vondra wrote: >Maybe. But before wasting any more time on the memory leak investigation, >I suggest you first try the patch moving the BufFile
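The BufFile angle discussed here is that the batch files themselves cost memory: each batch keeps a buffered temp file per side, so at very high batch counts those buffers dwarf work_mem. A back-of-envelope sketch, assuming the usual 8 kB buffer per BufFile (PostgreSQL's default BLCKSZ; the exact per-file overhead is slightly larger):

```python
# Rough estimate of memory consumed by BufFile buffers alone as the
# hash join's batch count grows. Assumes 8 kB per file, two files per
# batch (inner + outer side); real overhead is a bit higher.
BUF_KB = 8
for nbatch in (32, 1024, 32768, 262144):
    overhead_mb = nbatch * 2 * BUF_KB / 1024
    print(f"{nbatch:>7} batches -> ~{overhead_mb:.0f} MB in BufFile buffers")
```

At a quarter-million batches the buffers alone approach several GB, which is why moving those allocations into a separate, observable memory context (as the patch under discussion does) makes the leak-vs-overhead question answerable.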

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Justin Pryzby
I was able to reproduce in a somewhat contrived way: sh -c 'ulimit -v 1024000 ; /usr/local/pgsql/bin/postgres -D ./pg12dev5 -cport=1234' & postgres=# SET work_mem='64kB';SET client_min_messages =debug1;SET log_statement_stats=on;explain(analyze) WITH v AS MATERIALIZED (SELECT * FROM
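The repro above caps the server's virtual memory before starting it. `ulimit` applies to the current shell and everything it spawns, so wrapping the command in `sh -c '...'` confines the cap to that one process tree:

```shell
# The child shell sets and reports its own address-space limit (in kB);
# the parent shell's limit is untouched.
limit=$(sh -c 'ulimit -v 1024000; ulimit -v')
echo "$limit"   # 1024000
```

With the limit in place, allocations inside the postgres backend fail with ENOMEM at a predictable point, which is what makes the hash-join OOM reproducible without exhausting the whole machine.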

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Justin Pryzby
On Sun, Apr 21, 2019 at 01:03:50AM -0400, Gunther wrote: > On 4/20/2019 21:14, Tomas Vondra wrote: > >Maybe. But before wasting any more time on the memory leak investigation, > >I suggest you first try the patch moving the BufFile allocations to a > >separate context. That'll either confirm or