On Thu, Aug 28, 2003 at 20:46:00 -0700, Ken Geis <[EMAIL PROTECTED]> wrote:
> A big problem is that the values I am working with are *only* the
> primary key and the optimizer is choosing a table scan over an index
> scan.  That is why I titled the email "bad estimates."  The table has
> (stock_id, price_date) as the primary key, and a bunch of other columns.
> What I *really* want to do efficiently is
>
>   select stock_id, min(price_date), max(price_date)
>   from day_ends
>   group by stock_id;
>
> It is not the table or the query that is wrong.  It is either the db
> parameters or the optimizer itself.
If you want both the max and the min, things are going to be a bit more
work. You are going to want to either do two separate selects, join two
selects, or use subselects. If there aren't enough prices per stock, the
sequential scan might be fastest, since you only need to go through the
table once and don't have to hit the index blocks.

It is still odd that you didn't get a big speedup for just the min,
though. Your example did have the stock id and the date as the primary
key, which makes sense, since the stock id and stock price on a day
wouldn't be guaranteed to be unique. Are you absolutely sure you have a
combined key on the stock id and the price date?

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

               http://www.postgresql.org/docs/faqs/FAQ.html
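A sketch of the subselect rewrite suggested above, using the table and
column names from the quoted query (this assumes a PostgreSQL version
where min()/max() cannot use an index, so each extreme is fetched with an
ORDER BY ... LIMIT 1 that can walk the (stock_id, price_date) primary key
index forwards or backwards):

```sql
-- For each stock, fetch the earliest and latest price_date via
-- correlated subselects; each subselect can be satisfied by a short
-- index scan on the (stock_id, price_date) primary key instead of a
-- sequential scan over the whole table.
SELECT s.stock_id,
       (SELECT price_date
          FROM day_ends d
         WHERE d.stock_id = s.stock_id
         ORDER BY d.price_date ASC
         LIMIT 1) AS min_date,
       (SELECT price_date
          FROM day_ends d
         WHERE d.stock_id = s.stock_id
         ORDER BY d.price_date DESC
         LIMIT 1) AS max_date
  FROM (SELECT DISTINCT stock_id FROM day_ends) s;
```

Whether this beats the sequential scan depends on how many rows there are
per stock: with only a handful of prices per stock_id, one pass over the
table can still win, as noted above.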