Tom Lane <t...@sss.pgh.pa.us> wrote:
 
> I've tried to avoid having the planner need to know the total size
> of the database cluster, but it's kind of hard to avoid that if
> you want to model this honestly.
 
Agreed.  Perhaps the cost could start escalating once the pages to
access exceed (effective_cache_size * relation_size /
database_size), which is the relation's expected share of the cache,
and climb linearly to the defaults (or some new GUCs) until the page
count reaches effective_cache_size?
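 
A minimal sketch of what that ramp might look like, in C.  All the
names here are hypothetical, not actual planner symbols, and
cached_page_cost is an assumed low cost for a fully-cached page
(there is no such GUC today):

/* Per-page cost that ramps linearly from a cached cost up to the
 * default once the pages to access outgrow the relation's expected
 * share of the cache.  All sizes are in pages. */
static double
ramped_page_cost(double pages_to_access,
                 double relation_size,
                 double database_size,
                 double effective_cache_size,
                 double cached_page_cost,   /* assumed cost when cached */
                 double default_page_cost)  /* seq_ or random_page_cost */
{
    /* the relation's expected share of the cache */
    double ramp_start = effective_cache_size * relation_size
                        / database_size;

    if (pages_to_access <= ramp_start)
        return cached_page_cost;
    if (pages_to_access >= effective_cache_size)
        return default_page_cost;

    /* linear interpolation between the two thresholds */
    return cached_page_cost
        + (pages_to_access - ramp_start)
          / (effective_cache_size - ramp_start)
          * (default_page_cost - cached_page_cost);
}

One nice degenerate case: when relation_size equals database_size
the two thresholds coincide, the ramp disappears, and you get the
default cost as soon as the scan exceeds effective_cache_size.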
 
> BTW, it seems that all these variants have an implicit assumption
> that if you're reading a small part of the table it's probably
> part of the working set
 
I would say the assumption should be that seq_page_cost and
random_page_cost model the costs for less extreme (and presumably
more common) queries, and that we're providing a way to handle the
exceptional, extreme cases.
 
-Kevin
