Robert Haas <robertmh...@gmail.com> wrote:
 
> Wow.  That's fascinating, and if you don't mind, I might mention
> this potential problem in a future talk at some point.
 
I don't mind at all.
 
> For example, in your case, it would be sufficient to estimate the
> amount of data that a given query is going to grovel through and
> then apply some heuristic to choose values for random_page_cost
> and seq_page_cost based on the ratio of that value to, I don't
> know, effective_cache_size.
 
That's where my day-dreams on the topic have been starting.
 
> Unfortunately, to know how much data we're going to grovel
> through, we need to know the plan; and to decide on the right
> plan, we need to know how much data we're going to grovel through.
 
And that's where they've been ending.
 
The only half-sane answer I've thought of is to apply a different
cost to full-table or full-index scans based on the ratio of the
relation's size to effective_cache_size.  A higher cost for such
scans is something I've already postulated might be worthwhile for
SSI, because of the increased risk of rw-conflicts which could
ultimately contribute to serialization failures -- an attempt to
model, at least in some crude way, the costs associated with
transaction retry.
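 
Just to make the shape of that idea concrete, here's a rough
sketch.  It doesn't correspond to anything in the actual planner
code; the function name, parameters, and the 4x cap are all
invented purely for illustration:
 
    /*
     * Hypothetical helper: penalize a full-table or full-index
     * scan in proportion to how far it overflows the assumed cache.
     *
     *   scan_pages     - pages the full scan is expected to read
     *   cache_pages    - effective_cache_size expressed in pages
     *   base_page_cost - seq_page_cost (or random_page_cost)
     */
    double
    full_scan_page_cost(double scan_pages, double cache_pages,
                        double base_page_cost)
    {
        double      overflow_fraction;
 
        if (scan_pages <= cache_pages)
            return base_page_cost;  /* assume it can stay cached */
 
        /* fraction of the scan we can't hope to serve from cache */
        overflow_fraction = (scan_pages - cache_pages) / scan_pages;
 
        /* cap the penalty at an arbitrary 4x of the base cost */
        return base_page_cost * (1.0 + 3.0 * overflow_fraction);
    }
 
The point being that the penalty only kicks in once the scan is
expected to blow past effective_cache_size, and grows with how
badly it does so.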
 
-Kevin
