"Greg Sabino Mullane" <[EMAIL PROTECTED]> writes: > On that note, can I raise the idea again of dropping the default > value for random_page_cost in postgresql.conf? I think 4 is too > conservative in this day and age. Certainly the person who will > be negatively impacted by a default drop of 4 to 3 will be the > exception and not the rule.
The ones who'd be negatively impacted are the ones we haven't been hearing from ;-). To assume that they aren't out there is a logical fallacy. I still think that 4 is about right for large databases (where "large" means large in comparison to available RAM).

Also, to the extent that we think these numbers mean anything at all, we should try to keep them matching the physical parameters we think they represent. I think the "reduce random_page_cost" mantra is not an indication that that parameter is wrong, but that the cost models it feeds into need more work.

One thing we *know* is wrong is the costing of nestloop inner indexscans: there needs to be a correction for caching of index blocks across repeated scans. I've looked at this a few times but have not come up with anything that seemed convincing. Another thing I've wondered about more than once is whether we shouldn't discount fetching of higher-level btree pages, on the grounds that they're probably in RAM already even when the indexscan isn't inside a loop.

			regards, tom lane
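To make the caching correction concrete: one candidate for the kind of
adjustment described above is the Mackert-Lohman approximation (from
their 1989 paper on index scans with a finite LRU buffer), which
estimates how many distinct page fetches a run of repeated probes
actually costs once the cache starts absorbing re-reads. A minimal
sketch in C, with variable names of my own choosing and not taken from
the PostgreSQL sources:

    /*
     * Mackert-Lohman style estimate of page fetches incurred while
     * retrieving tuples_fetched tuples from a relation of T pages,
     * given an LRU cache of b pages. Once the relation no longer
     * fits in cache, additional probes miss at roughly (T-b)/T.
     */
    static double
    pages_fetched(double tuples_fetched,   /* total tuples touched */
                  double T,                /* pages in the relation */
                  double b)                /* cache size, in pages */
    {
        double pf;

        if (T <= b)
        {
            /* relation fits in cache: each page fetched at most once */
            pf = (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
            if (pf > T)
                pf = T;
        }
        else
        {
            /* point at which the cache saturates */
            double lim = (2.0 * T * b) / (2.0 * T - b);

            if (tuples_fetched <= lim)
                pf = (2.0 * T * tuples_fetched) /
                     (2.0 * T + tuples_fetched);
            else
                /* beyond saturation, extra probes miss at (T-b)/T */
                pf = b + (tuples_fetched - lim) * (T - b) / T;
        }
        return pf;
    }

Dividing such an estimate by the number of outer-loop iterations would
yield a per-scan fetch count that shrinks as the same index blocks are
revisited, which is the effect the current nestloop costing misses. A
discount for the upper btree levels would be in the same spirit:
treating the root and internal pages as already resident leaves
roughly one leaf-page fetch per descent.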