On Sat, Jul 16, 2016 at 2:47 PM, Jan Wieck <j...@wi3ck.info> wrote:
> On Tue, Jul 12, 2016 at 3:29 PM, Merlin Moncure <mmonc...@gmail.com> wrote:
>> I've noticed that pl/pgsql functions/do commands do not behave well
>> when the statement resolves and frees memory. To be clear:
>> FOR i IN 1..1000000 LOOP
>>   INSERT INTO foo VALUES (i);
>> END LOOP;
>> ...runs just fine, while
>> BEGIN
>>   INSERT INTO foo VALUES (1);
>>   INSERT INTO foo VALUES (2);
>>   ...
>>   INSERT INTO foo VALUES (1000000);
>> END;
>> ...does not.
> This sounds very much like what led to commit
> 25c539233044c235e97fd7c9dc600fb5f08fe065.
> It seems that patch was only applied to master and never backpatched to 9.5
> or earlier.

You're right; thanks (my bad for missing that).  For those following
along, the case that turned this up was:
<create temp table stuff>
<insert into stuff>

Where the insertion step was a large number of standalone insert statements.

(temp table creation isn't necessary to turn up this bug, but it's a
common pattern when sending batch updates to a server).
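As an aside, scripts of that shape are typically machine-generated; a hypothetical sketch (not from the original report, with made-up table and column names) of the kind of batch script that turns this up:

```python
# Sketch: generate a DO block containing n standalone INSERT statements,
# the pattern that triggered the cleanup slowdown.  "stuff" and its
# column are illustrative only.
def make_batch_script(n):
    """Build an anonymous DO block with n separate INSERTs."""
    lines = ["do $d$", "begin", "  create temp table stuff (i int);"]
    lines += ["  insert into stuff values (%d);" % i
              for i in range(1, n + 1)]
    lines += ["end;", "$d$;"]
    return "\n".join(lines)

print(make_batch_script(3))
```

With n on the order of the 315k statements mentioned below, each statement gets its own plan and the anonymous block becomes very expensive to clean up on versions without the fix.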

The workaround I would recommend is to do this:

do $d$
begin
  <create temp table stuff>
  create function doit() returns void as $$
    <insert into stuff>
  $$ language sql;
  perform doit();
end;
$d$;
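For generated batches, the same wrapping can be produced mechanically; a minimal sketch, reusing the hypothetical table/function names above (the inner body uses a distinct $f$ dollar-quote tag so it nests cleanly inside the outer $d$ block):

```python
# Sketch: emit the workaround form -- the INSERTs live in a throwaway
# SQL function that the anonymous block PERFORMs once, rather than
# sitting directly in the DO block.  Names are illustrative.
def make_workaround_script(n):
    body = "\n".join("    insert into stuff values (%d);" % i
                     for i in range(1, n + 1))
    return "\n".join([
        "do $d$",
        "begin",
        "  create temp table stuff (i int);",
        "  create function doit() returns void as $f$",
        body,
        "  $f$ language sql;",
        "  perform doit();",
        "end;",
        "$d$;",
    ])

print(make_workaround_script(3))
```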

BTW, while the fix does address the cleanup performance issue, it's
still the case that anonymous code blocks burn up a lot of resident
memory when run like this (the 315k-statement example I tested with ate
around 8GB, IIRC).  My question is: if the pl/pgsql code block is
anonymous and not inside some kind of loop, why bother caching the
plans at all?


Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)