On 10/14/13 8:18 AM, Robert Haas wrote:
> On Sat, Oct 12, 2013 at 3:07 AM, Magnus Hagander <mag...@hagander.net> wrote:
>> On Oct 11, 2013 10:23 PM, "Josh Berkus" <j...@agliodbs.com> wrote:
>>> On 10/11/2013 01:11 PM, Bruce Momjian wrote:
>>>> In summary, I think we need to:
>>>>
>>>> *  decide on new defaults for work_mem and maintenance_work_mem
>>>> *  add an initdb flag to allow users/packagers to set shared_buffers?
>>>> *  add an autovacuum_work_mem setting?
>>>> *  change the default for temp_buffers?
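
For reference, the compiled-in defaults for those are easy to eyeball with
something like this (boot_val is the built-in default, setting is what's
live on a given box; autovacuum_work_mem is omitted since it doesn't exist
yet):

select name, setting, boot_val, unit
  from pg_settings
 where name in ('work_mem', 'maintenance_work_mem',
                'temp_buffers', 'shared_buffers');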

>>> If we're changing defaults, bgwriter_lru_maxpages and vacuum_cost_limit
>>> could also use a bump; those thresholds were set for servers with < 1GB
>>> of RAM.

>> Uh, those are there to limit I/O, not memory, right? More memory isn't the
>> reason to increase them; more I/O is. For people deploying on modern server
>> hardware, yes, it's often low, but for all those deploying in virtualized
>> environments with I/O performance reminiscent of the 1990s, I'm not so
>> sure it is...

> bgwriter_lru_maxpages is clearly related to the size of
> shared_buffers, although confusingly it is expressed as a number of
> buffers, while shared_buffers is expressed as a quantity of memory.  I
> think we might have done better to call the GUC
> bgwriter_lru_maxpercent and make it a percentage of shared buffers.
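
As a rough illustration of that point, the default is a tiny fraction of
shared_buffers; something along these lines shows it (both values are
counted in buffers in pg_settings.setting, so the units cancel):

select (select setting::numeric from pg_settings
         where name = 'bgwriter_lru_maxpages') * 100
       / (select setting::numeric from pg_settings
           where name = 'shared_buffers') as max_percent_per_round;

With stock settings that comes out well under 1% per round.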


Also, more memory generally means more filesystem cache, which means you can
do more vacuum work per round.

FWIW, on our 512G servers...

cnuapp_p...@postgres11.obr=# select name, setting from pg_settings where name ~ 
'vacuum_cost';
             name             | setting
------------------------------+---------
 autovacuum_vacuum_cost_delay | 10
 autovacuum_vacuum_cost_limit | -1
 vacuum_cost_delay            | 10
 vacuum_cost_limit            | 2000
 vacuum_cost_page_dirty       | 10
 vacuum_cost_page_hit         | 1
 vacuum_cost_page_miss        | 10
(7 rows)

The page_dirty cost is intentionally the same as the page_miss cost because
writes to the SAN are generally far cheaper than reads that actually hit
spindles. Of course, given the amount of FS cache we have (512G minus at
most 8G of shared buffers), reads are very likely to hit the FS cache
anyway, but tuning these settings while watching I/O stats has shown them to
be minimally disruptive.
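
As a back-of-the-envelope ceiling on what those settings allow (it ignores
the time the work itself takes, so the real rate is lower):

select 2000 / 10      -- cost_limit / page_dirty: pages dirtied per round
       * (1000 / 10)  -- rounds per second at a 10ms cost_delay
       * 8 / 1024.0   -- 8kB pages -> MB
       as dirty_mb_per_sec;

That works out to about 156 MB/s at most.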
--
Jim C. Nasby, Data Architect                       j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net

