On 3/4/15 9:10 AM, Robert Haas wrote:
> On Wed, Feb 25, 2015 at 5:06 PM, Jim Nasby <jim.na...@bluetreble.com> wrote:
>> Could the large allocation[2] for the dead tuple array in lazy_space_alloc
>> cause problems with linux OOM? [1] and some other things I've read indicate
>> that a large mmap will count towards total system memory, including
>> producing a failure if overcommit is disabled.
> I believe that this is possible.
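
For reference, here's roughly what happens today (a simplified sketch, not
the exact code in vacuumlazy.c): the whole array is palloc'd up front based
on maintenance_work_mem, before we know how many dead tuples the table
actually has.

    long        maxtuples;

    maxtuples = (maintenance_work_mem * 1024L) / sizeof(ItemPointerData);
    maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));    /* ~1GB cap */

    vacrelstats->max_dead_tuples = (int) maxtuples;
    vacrelstats->num_dead_tuples = 0;
    vacrelstats->dead_tuples = (ItemPointerData *)
        palloc(maxtuples * sizeof(ItemPointerData));
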
>> Would it be worth avoiding the full size allocation when we can?
> Maybe. I'm not aware of any evidence that this is an actual problem
> as opposed to a theoretical one. vacrelstats->dead_tuples is limited
> to a 1GB allocation, which is not a trivial amount of memory, but it's
> not huge, either. But we could consider changing the representation
> from a single flat array to a list of chunks, with each chunk capped
> at say 64MB. That would not only reduce the amount of memory that we
I was thinking of the simpler route of just repalloc'ing... the memcpy
would suck, but much less so than an extra index pass. 64MB gets us ~11M
tuples, which probably isn't very common. Something like the sketch below.
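
A hypothetical sketch (alloc_dead_tuples as the currently-allocated capacity
is a made-up field; max_dead_tuples stays as the maintenance_work_mem-derived
hard cap, and the rest mirrors lazy_record_dead_tuple):

static void
lazy_record_dead_tuple(LVRelStats *vacrelstats, ItemPointer itemptr)
{
    /*
     * Assumes lazy_space_alloc did only a small initial palloc (a few
     * thousand TIDs) and set alloc_dead_tuples accordingly, instead of
     * allocating max_dead_tuples entries up front.
     */
    if (vacrelstats->num_dead_tuples >= vacrelstats->alloc_dead_tuples &&
        vacrelstats->alloc_dead_tuples < vacrelstats->max_dead_tuples)
    {
        /* double the array, but never grow past the existing cap */
        int         newalloc = Min(vacrelstats->alloc_dead_tuples * 2,
                                   vacrelstats->max_dead_tuples);

        vacrelstats->dead_tuples = (ItemPointerData *)
            repalloc(vacrelstats->dead_tuples,
                     newalloc * sizeof(ItemPointerData));
        vacrelstats->alloc_dead_tuples = newalloc;
    }

    if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples)
    {
        vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr;
        vacrelstats->num_dead_tuples++;
    }
}
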
> needlessly allocate, but would allow autovacuum to make use of more
> than 1GB of maintenance_work_mem, which it looks like it currently
> can't. I'm not sure that's a huge problem right now either, because
I'm confused... how is autovacuum special in this regard? Each worker
can use up to 1GB, just like a regular vacuum, right? Or are you just
saying that getting rid of the 1GB limit would be good?
> it's probably rare to vacuum a table with more than 1GB / 6 =
> 178,956,970 dead tuples in it, but it would certainly suck if you did
> and if the current 1GB limit forced you to do multiple vacuum passes.
Yeah, with 100+ GB machines not that uncommon today, perhaps it's worth
significantly upping this limit.
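
FWIW, a rough sketch of the chunked representation you describe (all names
here are made up): because each chunk is its own palloc, it would also
sidestep the MaxAllocSize (~1GB) cap on a single allocation, so the total
could be bounded by maintenance_work_mem alone.

#define DEAD_TUPLE_CHUNK_BYTES  (64 * 1024 * 1024)
#define DEAD_TUPLES_PER_CHUNK \
    (DEAD_TUPLE_CHUNK_BYTES / sizeof(ItemPointerData))  /* ~11M TIDs */

typedef struct DeadTupleChunk
{
    struct DeadTupleChunk *next;    /* next chunk in the list, or NULL */
    int         ntuples;            /* TIDs used in this chunk */
    ItemPointerData tids[FLEXIBLE_ARRAY_MEMBER];
} DeadTupleChunk;

/* Allocate a new 64MB chunk only once the previous one fills up. */
static DeadTupleChunk *
dead_tuple_chunk_alloc(void)
{
    DeadTupleChunk *chunk = (DeadTupleChunk *)
        palloc(offsetof(DeadTupleChunk, tids) +
               DEAD_TUPLES_PER_CHUNK * sizeof(ItemPointerData));

    chunk->next = NULL;
    chunk->ntuples = 0;
    return chunk;
}

The index-scan side (lazy_tid_reaped) would need adjusting: since TIDs are
recorded in heap order, the chunks cover disjoint TID ranges, so a lookup
could pick the right chunk first and then bsearch within it.
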
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com