On 20.12.2012 02:29, Tom Lane wrote:
> Tomas Vondra <t...@fuzzy.cz> writes:
>> What it does:
> 
>> 1) creates a simple table called "test" with one text column.
> 
>> 2) creates a plpgsql function with one parameter, and all that function
>>    does is passing the parameter to EXECUTE
> 
>> 3) calls the function with a string containing many INSERTs into the
>>    test table
> 
>> The way the EXECUTE is used is a bit awkward, but the failures seem a
>> bit strange to me. The whole script is ~500kB and most of that is about
>> 11k of very simple INSERT statements:
> 
>>    insert into test(value) values (''aaaaaaaaaa'');
> 
> The reason this fails is that you've got a half-megabyte source string,
> and each of the 11000 plans that are due to be created from it saves
> its own copy of the source string.  Hence, 5500 megabytes needed just
> for source strings.
> 
> We could possibly fix this by inventing some sort of reference-sharing
> arrangement (which'd be complicated and fragile) or by not storing the
> source strings with the plans (which'd deal a serious blow to our
> ability to provide helpful error messages).  Neither answer seems
> appealing.

Thanks for the explanation, it didn't occur to me that each plan keeps a
copy of the whole source string.
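
For the archives, here's roughly the pattern the app uses (a sketch only,
the function name is made up, not the app's actual code):

   create table test (value text);

   create or replace function exec_string(src text) returns void as $$
   begin
       -- the whole ~500kB script, with ~11k INSERTs, arrives as one string,
       -- and every plan created from it keeps its own copy of that string
       execute src;
   end;
   $$ language plpgsql;

   -- called once with the entire script concatenated into a single literal:
   -- select exec_string('insert into test(value) values (''aaaaaaaaaa''); ...');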

> I think it would be a better idea to adopt a less brain-dead way of
> processing the data.  Can't you convert this to a single INSERT with a
> lot of VALUES rows?  Or split it into multiple EXECUTE chunks?

Well, it's not my app, but I'll recommend it to them. Actually, I already
did, but I didn't have an explanation of why it behaves like this.
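
For the record, what I suggested was along these lines (a sketch of the
multi-row VALUES form, not their actual code):

   -- one statement (one plan, one copy of the source) instead of ~11k INSERTs:
   insert into test(value) values
       ('aaaaaaaaaa'),
       ('aaaaaaaaaa'),
       ('aaaaaaaaaa');   -- ... and so on for the remaining rows

   -- or keep the EXECUTE approach, but call it once per statement (or per
   -- small chunk of statements) instead of once with the whole script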

The really annoying bit is that in 9.1 this works fine (it's just as crazy
an approach, but it does not end with an OOM error).

Tomas


