I was chasing down a query that ran fine in 8.1 but had a near-infinite
runtime in 9.2.  It turned out to stem from a bad filter estimate
that is surprisingly simple to reproduce:

postgres=# create table foo(i int);
CREATE TABLE
postgres=# insert into foo select 1000 + (v/200) from generate_series(1,5000) v;
INSERT 0 5000
postgres=# ANALYZE foo;
ANALYZE

The following query runs fine; the planner estimates the returned rows pretty well:
postgres=# explain analyze  select * from foo where i > 100 and i < 10000;
                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Seq Scan on foo  (cost=0.00..98.00 rows=4999 width=4) (actual time=0.018..1.071 rows=5000 loops=1)
   Filter: ((i > 100) AND (i < 10000))
 Total runtime: 1.425 ms

...but if you introduce a floating point cast, the row estimate drops
drastically (why?):

postgres=# explain analyze  select * from foo where i::float8 > 100
and i::float8 < 10000;
                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Seq Scan on foo  (cost=0.00..123.00 rows=25 width=4) (actual time=0.022..1.566 rows=5000 loops=1)
   Filter: (((i)::double precision > 100::double precision) AND ((i)::double precision < 10000::double precision))
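
For what it's worth, my working theory: with bare "i > 100 AND i < 10000"
the planner can use the column statistics gathered by ANALYZE, but once the
column is wrapped in a cast there are no statistics for the expression
(i)::float8, so the planner falls back to its hard-wired default range
selectivity (0.005 in the sources I looked at, which matches
5000 * 0.005 = 25 here).  If that's right, an expression index should make
ANALYZE collect statistics on the cast expression and repair the estimate --
a sketch I haven't verified on 9.2:

postgres=# create index foo_i_float8_idx on foo ((i::float8));
CREATE INDEX
postgres=# ANALYZE foo;
ANALYZE

After that, the same casted query should estimate close to rows=5000 again.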

merlin


-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs