"Peter J. Holzer" <hjp-pg...@hjp.at> writes:
>>> Merge Semi Join  (cost=316864.57..319975.79 rows=1 width=81) (actual time=7703.917..30948.271 rows=2 loops=1)
>>>   Merge Cond: ((t.term)::text = (f.berechnungsart)::text)
>>>   ->  Index Scan using term_term_idx on term t  (cost=0.00..319880.73 rows=636 width=81) (actual time=7703.809..7703.938 rows=3 loops=1)
>>>         Filter: (((facttablename)::text = 'facttable_stat_fta4'::text) AND ((columnname)::text = 'berechnungsart'::text))

> Just noticed that this is a bit strange, too: 

> This scans the whole index term_term_idx and, for every row found,
> checks the table for the filter condition. So it has to read the whole
> index and the whole table, right? But the planner estimates that it will
> return only 636 rows (out of 6.1E6), so using
> term_facttablename_columnname_idx to extract those 636 rows and then
> sorting them should be quite a bit faster (even a plain full table scan
> followed by a sort should be faster).
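
(For concreteness, here is roughly what is being compared. The index
names are taken from the plan above; the column list of "term" and the
query shape are guesses, so treat this as a sketch, not the original DDL:)

  -- Assumed index definitions (names from the plan, columns guessed):
  --   term_term_idx                      ON term (term)
  --   term_facttablename_columnname_idx  ON term (facttablename, columnname)

  -- The term side of the join as the plan above executes it: walk
  -- term_term_idx in full, apply the filter to every row, and emit the
  -- survivors already sorted on term (which is what the merge join wants).
  EXPLAIN ANALYZE
  SELECT t.term
  FROM term t
  WHERE t.facttablename = 'facttable_stat_fta4'
    AND t.columnname    = 'berechnungsart'
  ORDER BY t.term;

  -- The alternative described above: fetch the ~636 estimated rows through
  -- term_facttablename_columnname_idx and sort them afterwards.  Hiding the
  -- other index inside a transaction shows what that plan would cost
  -- (DROP INDEX takes a strong lock, so do this only on a test copy):
  BEGIN;
  DROP INDEX term_term_idx;
  EXPLAIN ANALYZE
  SELECT t.term
  FROM term t
  WHERE t.facttablename = 'facttable_stat_fta4'
    AND t.columnname    = 'berechnungsart'
  ORDER BY t.term;
  ROLLBACK;  -- undoes the DROP INDEX, nothing is changed permanently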

Hm.  I do not see that here with Tomas' sample data, on either HEAD or
9.1: I always get a scan using term_facttablename_columnname_idx.  I agree
your plan looks strange.  Can you create some sample data that reproduces
that particular misbehavior?
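
(In case a reproducer skeleton is useful as a starting point: the table
layout, row count, and value distribution below are guesses based on the
plan above, and there is no claim that this actually triggers the
term_term_idx plan.)

  -- Hypothetical sample data, sized to roughly match the 6.1E6 rows
  -- mentioned above; a few thousand rows match the filter values.
  CREATE TABLE term (
      facttablename text,
      columnname    text,
      term          text
  );

  INSERT INTO term (facttablename, columnname, term)
  SELECT CASE WHEN i % 2000 = 0 THEN 'facttable_stat_fta4'
              ELSE 'facttable_stat_' || (i % 20) END,
         CASE WHEN i % 2000 = 0 THEN 'berechnungsart'
              ELSE 'col_' || (i % 50) END,
         md5(i::text)
  FROM generate_series(1, 6000000) AS g(i);

  CREATE INDEX term_term_idx ON term (term);
  CREATE INDEX term_facttablename_columnname_idx
      ON term (facttablename, columnname);
  ANALYZE term;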

                        regards, tom lane

