> Grzegorz Jaskiewicz  wrote:
> I guess that the systems could behave much better, but no one is
> going to tweak settings for 50 different installations over 50
> different type of data and 50 different sets of hardware.
> If there was even a tiny amount of automation provided in the
> postgresql, I would welcome it with open arms.
Hmmm...  Well, we have about 100 pieces of hardware with about 200
databases, and we *do* tune them individually, but it's not as
onerous as it might seem.  For our 72 production circuit court
servers, for example, we have one standard configuration which has as
its last line an include file for overrides.  For some counties that
override file is empty.  For many we override effective_cache_size
based on the RAM installed in the machine.  Since most of these
servers have the database fully cached, the "standard" file uses
equal, low settings for seq_page_cost and random_page_cost, but we
override that where necessary.  We don't generally tune anything else
differently among these servers.  (Maybe work_mem; I'd have to
check.)
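
The layered-configuration approach described above can be sketched
roughly like this (file names and values are illustrative, not our
actual settings):

```
# postgresql.conf -- the one standard file shared across servers
seq_page_cost = 1.0           # equal, low settings, since most of
random_page_cost = 1.0        # these databases are fully cached
include 'override.conf'       # last line: per-server overrides

# override.conf -- empty on many servers; on a machine with more
# RAM it might contain only:
effective_cache_size = '24GB'
```

Because the include is the last line, anything in the override file
wins, so per-server tuning stays confined to one small file.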
Which leads me to think that these might be the key items to
autotune.  It's not actually that hard for me to imagine timing a
small percentage of randomly selected page accesses and developing
costing factors for the page costs on the fly.  It might be a bit
trickier to autotune effective_cache_size, but I can think of two or
three heuristics which might work.  Automatically generating sane
values for these three things would eliminate a significant fraction
of problems posted to the performance list.
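
To make the sampling idea concrete, here is a minimal sketch in
Python (purely illustrative -- `read_page` and the function name are
hypothetical, and a real implementation would live inside the
backend in C):

```python
import random
import time

def estimate_page_cost_ratio(read_page, n_pages, sample_size=64):
    """Time a small sample of page reads, sequential then random,
    and derive a random/sequential cost ratio on the fly.
    `read_page` is whatever fetches one page by page number."""
    def avg_latency(pages):
        pages = list(pages)
        start = time.perf_counter()
        for p in pages:
            read_page(p)
        return (time.perf_counter() - start) / len(pages)

    seq = avg_latency(range(sample_size))
    rnd = avg_latency(random.sample(range(n_pages), sample_size))
    # On a fully cached database the two latencies converge, so the
    # ratio approaches 1.0 -- i.e., equal page costs, as in the
    # "standard" configuration described earlier.
    return rnd / max(seq, 1e-9)
```

The planner could then scale random_page_cost by this ratio instead
of relying on a hand-tuned constant.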