On Wed, Dec 11, 2013 at 9:43 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 25 November 2013 21:51, Peter Geoghegan <p...@heroku.com> wrote:
>> On Sun, Nov 24, 2013 at 9:06 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>>> VACUUM uses 6 bytes per dead tuple. And autovacuum regularly removes
>>> dead tuples, limiting their numbers.
>>> In what circumstances will the memory usage from multiple concurrent
>>> VACUUMs become a problem? In those circumstances, reducing
>>> autovacuum_work_mem will cause more passes through indexes, dirtying
>>> more pages and elongating the problem workload.
>> Yes, of course, but if we presume that the memory for autovacuum
>> workers to do everything in one pass simply isn't there, it's still
>> better to do multiple passes.
> That isn't clear to me. It seems better to wait until we have the memory.
> My feeling is this parameter is a fairly blunt approach to the
> problems of memory pressure on autovacuum and other maintenance
> tasks. I am worried that it will not effectively solve the problem.
> I don't wish to block the patch; I wish to get to an effective
> solution to the problem.
> A better approach to handling memory pressure would be to globally
> coordinate workers so that we don't oversubscribe memory, allocating
> memory from a global pool.

This is doubtless true, but that project is at least two if not three
orders of magnitude more complex than what's being proposed here, and
I don't think we should make the perfect the enemy of the good.

Right now, maintenance_work_mem controls the amount of memory that
we're willing to use for either a vacuum operation or an index build.
Those things don't have much to do with each other, so it's not hard
for me to imagine that someone might want to configure different
memory usage for one than the other.  This patch would allow that, and
I think that's good.
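To make that concrete, a sketch of how the two settings could be tuned independently with the patch applied (values purely illustrative):

```
# postgresql.conf (illustrative values)
maintenance_work_mem = 1GB       # manual VACUUM, CREATE INDEX, etc.
autovacuum_work_mem = 256MB      # per-worker cap for autovacuum
```

With several autovacuum workers running concurrently, capping each worker separately keeps their combined footprint bounded without starving a one-off index build.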

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)