Circling back on this one: I had a look at our analyze code. I found
one place where we *maybe* weren't freeing memory and added a free
there, but analyzing a 2M-record table I see barely any bump in memory
usage during analyze (from 22MB up to 24MB at peak), and the change
didn't appear to alter that (though the objects were probably all
small enough that they were never being detoasted into copies in the
first place). Maybe it would show up with a really big table, or with
really big objects? Then again, doesn't analyze just pull a limited
sample (roughly 30K rows max), so why would table size make any
difference past a certain point?
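
For the record, the pattern I was looking for is roughly the one below.
This is just a minimal sketch, not the actual change, and
inspect_sample_value() is a made-up stand-in for the compute-stats
callback: PG_DETOAST_DATUM() palloc's a fresh copy whenever the datum is
compressed or stored out of line, and in a loop over the sample rows
those copies pile up unless you pfree() them.

    #include "postgres.h"
    #include "fmgr.h"

    static void
    inspect_sample_value(Datum value)
    {
        struct varlena *raw = (struct varlena *) DatumGetPointer(value);
        struct varlena *detoasted = PG_DETOAST_DATUM(value);

        /* ... examine the detoasted object, accumulate stats ... */

        /*
         * Free only when detoasting actually made a copy; for small,
         * uncompressed values PG_DETOAST_DATUM() just hands back the
         * original pointer, which we must not pfree here.
         */
        if (detoasted != raw)
            pfree(detoasted);
    }

As for the 30K figure: if I remember right, ANALYZE samples 300 rows per
point of statistics target, so 30,000 rows at the default target of 100,
which is why table size shouldn't matter much past a certain point,
though each sampled object still has to be detoasted to be examined.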

P.

On Tue, Mar 3, 2015 at 3:17 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> wambacher <wnordm...@gmx.de> writes:
>> My system has 24GB of real memory, but after some hours one autovacuum worker
>> is using 80-90% of memory; the OOM killer (out-of-memory killer) kills the
>> process with kill -9, and the PostgreSQL server restarts because of that
>> problem.
>
>> I changed the base configuration to use very small buffers and restarted the
>> server twice, but the problem still exists.
>
>> I think it's always the same table, and that table is huge: 111GB of data and
>> 3 indices of 4GB, 128GB and 12GB. It's the table planet_osm_ways from
>> OpenStreetMap; maybe that helps.
>
> Maybe you could reduce the statistics targets for that table.
>
> I think we've heard that the analyze functions for PostGIS data types are
> memory hogs, too --- maybe it's worth inquiring about that on the postgis
> mailing lists.
>
>                         regards, tom lane

