>> If you model the costing to reflect the reality on your server, good
>> plans will be chosen.
>
> Wouldn't it be "better" to derive those costs from actual performance
> data measured at runtime?
>
> Say, pg could measure random/seq page cost, *per tablespace* even.
>
> Has that been tried?

FWIW, a while ago I wrote a simple script to measure this and found
that the *actual* random_page / seq_page cost ratio was much higher
than the default 4/1.
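
For reference, here is a minimal sketch of that kind of measurement
(not the script mentioned above, just an illustration of the idea).
It times N sequential 8 kB page reads against N random ones on a test
file; the path and sizes are placeholders, and unless the file is much
larger than RAM (or the OS page cache is dropped first) the caching
effects discussed below will dominate the numbers.

#!/usr/bin/env python3
# Rough random-vs-sequential page read timing sketch (illustrative only).
import os, random, time

TEST_FILE = "/tmp/pg_read_test.dat"   # placeholder: a large test file
PAGE = 8192                           # PostgreSQL block size
N_PAGES = 10000                       # pages to sample in each pattern

def timed_reads(fd, offsets):
    # Read one 8 kB page at each offset and return the elapsed time.
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, PAGE, off)
    return time.perf_counter() - start

fd = os.open(TEST_FILE, os.O_RDONLY)
try:
    total_pages = os.fstat(fd).st_size // PAGE
    n = min(N_PAGES, total_pages)

    # Sequential: n consecutive pages from the start of the file.
    seq_offsets = [i * PAGE for i in range(n)]
    # Random: n pages scattered across the whole file.
    rand_offsets = [random.randrange(total_pages) * PAGE for _ in range(n)]

    seq_t = timed_reads(fd, seq_offsets)
    rand_t = timed_reads(fd, rand_offsets)
    print("seq: %.3fs  random: %.3fs  ratio: %.1f"
          % (seq_t, rand_t, rand_t / seq_t))
finally:
    os.close(fd)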

The problem is that caching has a large effect on the time it takes to
access a random page, and caching behavior is very workload dependent.
So anything automated would probably need to optimize the parameter
values over a set of 'typical' queries, which is exactly what a good
DBA does when they set random_page_cost...
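
To make the "optimize over typical queries" idea concrete, a rough
sketch follows. It assumes psycopg2, a placeholder connection string,
and a stand-in query list (in practice you would substitute your own
representative workload); it times the same queries at several
candidate random_page_cost settings and reports the totals. Note that
cache warm-up between runs still skews the timings, for exactly the
reasons above.

# Time a set of "typical" queries at several random_page_cost values.
# Illustrative sketch only; DSN and query list are placeholders.
import time
import psycopg2   # assumes psycopg2 is installed

TYPICAL_QUERIES = [
    "SELECT count(*) FROM pg_class",   # stand-in for your real workload
]
CANDIDATES = [1.1, 2.0, 4.0, 8.0]

conn = psycopg2.connect("dbname=mydb")   # placeholder DSN
cur = conn.cursor()

results = {}
for rpc in CANDIDATES:
    # Session-level setting; affects plan choice for the queries below.
    cur.execute("SELECT set_config('random_page_cost', %s, false)",
                (str(rpc),))
    start = time.perf_counter()
    for q in TYPICAL_QUERIES:
        cur.execute(q)
        cur.fetchall()
    results[rpc] = time.perf_counter() - start

for rpc, elapsed in sorted(results.items(), key=lambda kv: kv[1]):
    print("random_page_cost=%s: %.2fs" % (rpc, elapsed))

conn.close()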

Best,
Nathan
