> On that note, can I raise the idea again of dropping the default
> value for random_page_cost in postgresql.conf? I think 4 is too
> conservative in this day and age. Certainly the person who will
> be negatively impacted by a default drop of 4 to 3 will be the
> exception and not the rule.

I don't agree.  The defaults are there for people who aren't going to read 
enough of the documentation to set them.  As such, conservative defaults are 
appropriate.

If we were going to change anything automatically, it would be to set 
effective_cache_size to 1/3 of RAM at initdb time.  However, I don't know any 
method to determine RAM size that works on all the platforms we support.


> Also, to the extent that we think these numbers mean anything at all,
> we should try to keep them matching the physical parameters we think
> they represent. 

Personally, what I would love to see is the system determining and caching 
some of these parameters automatically.   For example, in a database which 
has been running in production for a couple of days, it should be possible to 
determine the ratio of average random seek tuple cost to average seq scan 
tuple cost.

Other parameters should really work the same way.   Effective_cache_size, for 
example, is a blunt instrument to replace what the database should ideally do 
through automated interactive fine tuning.  Particularly since we have 2 
separate caches (or 3, if you count t1 and t2 from 2Q).   What the planner 
really needs to know is: is this table or index already in the t1 or t2 cache 
(can't we determine this)?   How likely is it to be in the filesystem cache?  
The latter question is not just one of size (table < memory), but one of 
frequency of access.

Of course, this stuff is really, really hard which is why we rely on the 
GUCs ...

Josh Berkus
Aglio Database Solutions
San Francisco
