>>> Greg Smith <[EMAIL PROTECTED]> wrote:
> I don't think random_page_cost actually corresponds with any real
> number anymore. I just treat it as an uncalibrated knob you can turn
> and benchmark the results at.

Same here. We have always found the best performance in our production
environments with this set somewhere between seq_page_cost and twice
seq_page_cost, depending on how much of the database is cached. As we
get toward more heavily cached databases we also reduce seq_page_cost,
so we range from (0.1, 0.1) to (1, 2). These settings have really
become abstractions with legacy names.

If I had to suggest how someone should choose a starting setting, I
would say that seq_page_cost should be the proportion of sequential
scans likely to need to go to the disk, and random_page_cost should be
two times the proportion of heap data which doesn't fit in cache space.
Add 0.1 to both numbers and then truncate to one decimal position.
This, of course, assumes a battery-backed caching RAID controller, a
RAID configuration reasonable for the data set, and one of the more
typical usage patterns.
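For illustration only, with made-up numbers: if roughly 20% of
sequential scans are likely to need to go to the disk, and roughly 40%
of the heap data doesn't fit in cache, that rule of thumb works out to:

    seq_page_cost    = 0.20 + 0.1        = 0.3
    random_page_cost = (2 * 0.40) + 0.1  = 0.9

which you could then try per session while benchmarking:

    SET seq_page_cost = 0.3;
    SET random_page_cost = 0.9;
    EXPLAIN ANALYZE SELECT ...;  -- compare plans and timings

As Greg says, treat that as a starting point for the knob, not a
calibrated answer.

-Kevin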