On Sun, Aug 29, 2004 at 03:38:00PM -0600, Scott Marlowe wrote:
>>> select somefield from sometable where timestampfield > now()-'60
>>> seconds'::interval
>>> 
>>> and count the number of returned rows.  If there are a lot, it won't be
>>> any faster; if there are only a few, it should be a win.
>> Why would this ever be faster? And how could postgres ever calculate that
>> without doing a sequential scan when count(*) would force it to do a
>> sequential scan?
> Because count(*) CANNOT use an index.  So, if you're hitting, say,
> 0.0001% of the table (let's say 20 out of 20,000,000 rows or something
> like that), then the second should be MUCH faster.

Of course count(*) can use an index:

images=# explain analyze select count(*) from images where event='test';
                                                            QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=168.97..168.97 rows=1 width=0) (actual time=68.211..68.215 rows=1 loops=1)
   ->  Index Scan using unique_filenames on images  (cost=0.00..168.81 rows=63 width=0) (actual time=68.094..68.149 rows=8 loops=1)
         Index Cond: ((event)::text = 'test'::text)
 Total runtime: 68.369 ms
(4 rows)

However, it cannot rely on the index _alone_; it still has to fetch the relevant
heap pages, but of course, so must the "select somefield from" query.
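
For the original timestamp query it's the same story. As a sketch (table and
column names taken from the quoted query, the index name is just an example,
and I haven't run this), an index on the timestamp column should let the
planner pick an index scan for the count as well, as long as only a few rows
match:

  -- index on the column used in the WHERE clause (hypothetical name)
  CREATE INDEX sometable_timestampfield_idx ON sometable (timestampfield);

  -- count only the rows from the last minute; with few matches the planner
  -- should choose an index scan on the index above rather than a seq scan
  EXPLAIN ANALYZE
    SELECT count(*)
    FROM sometable
    WHERE timestampfield > now() - '60 seconds'::interval;

Either way, whether you count(*) directly or select somefield and count the
rows client-side, the same index scan and the same heap fetches happen, so I
wouldn't expect the client-side variant to be any faster.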

/* Steinar */
-- 
Homepage: http://www.sesse.net/
