> More generally, Josh has made repeated comments that various proposed
> values/formulas for work_mem are too low, but obviously the people who
> suggested them didn't think so.  So I'm a bit concerned that we don't
> all agree on what the end goal of this activity looks like.

The counter-proposal to "auto-tuning" is just to raise the default for
work_mem to 4MB or 8MB.  Given that Bruce's current formula sets it at
6MB for a server with 8GB RAM, I don't really see the benefit of adding
a whole lot of code and formulas in order to end up at a figure only
incrementally different from a new static default.
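
To make that concrete -- and this is purely an illustrative sketch, not
Bruce's actual proposal; the scaling fraction and connection count below
are assumptions -- a RAM-scaled formula of the general shape under
discussion might look like:

    # Hypothetical sketch only; not the formula actually being proposed.
    # Scales work_mem with total RAM, spread across an assumed number of
    # concurrent work_mem consumers.
    def suggested_work_mem_mb(total_ram_mb, max_connections=100,
                              ram_fraction=0.075):
        return max(1, int(total_ram_mb * ram_fraction / max_connections))

    print(suggested_work_mem_mb(8192))   # ~6 (MB) for an 8GB server

With constants chosen to match the 6MB figure above, the output lands
within a couple of MB of a 4MB or 8MB static default anyway, which is
the point: the formula buys very little over a better default.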

The core issue here is that there aren't good "generic" values for these
settings for all users -- that's why we have the settings in the first
place.  Following a formula isn't going to change that.

If we're serious about autotuning, then we should look at:

a) admission control for non-shared resources (e.g. work_mem) -- see the
sketch after this list

b) auto-feedback tuning loops (a la Heikki's checkpoint_segments and the
bgwriter).
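
For (a), here is a minimal sketch of the idea, assuming a hypothetical
global memory budget checked before a backend gets to use its work_mem
grant; none of these names correspond to anything in PostgreSQL today:

    # Hypothetical admission control for work_mem-style memory.  Purely
    # illustrative: it gates concurrent memory grants against a global
    # budget instead of letting every backend assume work_mem is free.
    import threading

    class WorkMemAdmission:
        def __init__(self, total_budget_mb):
            self.available_mb = total_budget_mb
            self.cond = threading.Condition()

        def acquire(self, request_mb):
            # Block until the request fits in the remaining budget.
            with self.cond:
                while request_mb > self.available_mb:
                    self.cond.wait()
                self.available_mb -= request_mb

        def release(self, request_mb):
            # Return the grant and wake any waiting requesters.
            with self.cond:
                self.available_mb += request_mb
                self.cond.notify_all()

The point is that the total memory used by sorts and hashes stays
bounded no matter how many backends run at once, which a per-backend
work_mem setting can't guarantee.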

We could certainly create an autofeedback tuning loop for work_mem.
Just watch the database, record the amount of data spilled to disk by
work_mem consumers (pg_stat_sorts), and record the total RAM pinned by
backends.  *Then* apply a formula and maybe bump up work_mem a bit
depending on what comes out of it.  And keep monitoring and keep
readjusting.
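
A rough sketch of that loop, assuming a pg_stat_sorts-style view
exposing spill volume (still a proposal, so that query and its column
are placeholders) and a psycopg2 connection; the step size and ceiling
are arbitrary assumptions:

    # Hypothetical tuning loop.  pg_stat_sorts does not exist yet, so
    # that query is a placeholder.  Sampling total backend RSS, as
    # described above, is omitted here for brevity.
    import time
    import psycopg2

    def work_mem_tuning_loop(dsn, interval_s=300, step_mb=1, ceiling_mb=64):
        conn = psycopg2.connect(dsn)
        conn.autocommit = True
        while True:
            with conn.cursor() as cur:
                # Placeholder: total data spilled to disk by sorts/hashes.
                cur.execute("SELECT coalesce(sum(spilled_bytes), 0) "
                            "FROM pg_stat_sorts")
                spilled = cur.fetchone()[0]
                # Current work_mem in MB (pg_settings stores it in kB).
                cur.execute("SELECT setting::int / 1024 FROM pg_settings "
                            "WHERE name = 'work_mem'")
                current_mb = cur.fetchone()[0]
                # Bump work_mem a bit only if we are actually spilling
                # and still have headroom; otherwise leave it alone.
                if spilled > 0 and current_mb + step_mb <= ceiling_mb:
                    cur.execute("ALTER SYSTEM SET work_mem = %s",
                                ("%dMB" % (current_mb + step_mb),))
                    cur.execute("SELECT pg_reload_conf()")
            time.sleep(interval_s)

The real version would also watch the total RAM pinned by backends and
back off when it gets too high; the point is that the loop keeps
readjusting based on observed behavior rather than a one-shot formula.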

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

