On 2/22/04 5:06 PM, Tom Lane wrote:
> John Siracusa <[EMAIL PROTECTED]> writes:
>> I want to do something that will convince Postgres that using the date
>> index is, by far, the best plan when running my queries, even when the
>> date column correlation stat drops well below 1.0.
> Have you tried experimenting with random_page_cost?  Seems like your
> results suggest that you need to lower it.

I don't want to do anything that "universal" if I can help it, because I
don't want to adversely affect any other queries that the planner currently
handles well.
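
That said, if I do end up experimenting with random_page_cost, I believe it
can be set just for a session (or a single transaction) rather than globally
in postgresql.conf, so the change wouldn't have to be universal.  Something
like this, where 2.0 is only a guess and the table/column names are stand-ins
for my real ones:

    SET random_page_cost = 2.0;  -- session-local; the default is 4.0
    EXPLAIN ANALYZE
        SELECT * FROM mytable
        WHERE datecol BETWEEN '2004-01-01' AND '2004-01-31';
    RESET random_page_cost;      -- back to the configured default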

I'm guessing that the reason using the date index is always so much faster
is that doing so reads only the rows in the date range (say, 1,000 of them)
instead of every single row in the table (1,000,000), as a seqscan does.
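
For what it's worth, here's roughly how I've been comparing the two plans
(again, stand-in names for the real schema); flipping enable_seqscan off is
just for testing, not something I'd leave set:

    -- The plan the planner picks on its own:
    EXPLAIN ANALYZE
        SELECT * FROM mytable
        WHERE datecol BETWEEN '2004-01-01' AND '2004-01-31';

    -- The same query with seqscans disabled, to see the index scan's cost:
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
        SELECT * FROM mytable
        WHERE datecol BETWEEN '2004-01-01' AND '2004-01-31';
    SET enable_seqscan = on;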

I think the key is to get the planner to correctly ballpark the number of
rows in the date range.  If it does, I can't imagine it ever deciding to
read 1,000,000 rows instead of 1,000 with any sane "cost" setting.  I'm
assuming the defaults are sane :)
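
One thing I may try along those lines (stand-in names again) is raising the
statistics target on the date column and re-analyzing, to see if the row
estimate for the range gets closer to reality:

    ALTER TABLE mytable ALTER COLUMN datecol SET STATISTICS 100;  -- default is 10, I believe
    ANALYZE mytable;

    -- Then compare estimated vs. actual row counts in the plan:
    EXPLAIN ANALYZE
        SELECT * FROM mytable
        WHERE datecol BETWEEN '2004-01-01' AND '2004-01-31';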

