Re,

After modifying the statistics parameters like this:

ALTER TABLE keywords ALTER keyword SET STATISTICS 100;
ALTER TABLE keywords ALTER k_id SET STATISTICS 100;
ALTER TABLE engine ALTER k_id SET STATISTICS 100;
ALTER TABLE engine ALTER f_id SET STATISTICS 100;

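(A note on the above: the new statistics targets only take effect at the next ANALYZE, which is why the tables are vacuumed below; a minimal sketch, assuming a plain VACUUM ANALYZE is used:)

```sql
-- Re-gather planner statistics so the new targets are actually used
VACUUM ANALYZE keywords;
VACUUM ANALYZE engine;
```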
vacuuming both tables,
and rewriting the query using sub-selects:

SELECT count(DISTINCT f.f_id) AS results
FROM fiches f
INNER JOIN (SELECT DISTINCT f_id
            FROM keywords, engine
            WHERE engine.k_id = keywords.k_id
              AND keyword LIKE 'exploitation%') AS e1 USING (f_id)
INNER JOIN (SELECT DISTINCT f_id
            FROM keywords, engine
            WHERE engine.k_id = keywords.k_id
              AND keyword LIKE 'maintenance%') AS e2 USING (f_id)
INNER JOIN (SELECT DISTINCT f_id
            FROM keywords, engine
            WHERE engine.k_id = keywords.k_id
              AND keyword LIKE 'numerique%') AS e3 USING (f_id);

The query time is now less than 600 ms, and it increases only slightly as more keywords are added.

Thanks to Tom Lane and Simon Riggs.

Best regards,
Antoine Bajolet

Antoine Bajolet wrote:

Hello,

Tom Lane wrote:

Antoine Bajolet <[EMAIL PROTECTED]> writes:
We are using PostgreSQL in a search engine on an intranet handling thousands of documents.
But we have a big problem when users use more than two search keys.


I think you need to increase the statistics targets for your keywords
table --- the estimates of numbers of matching rows are much too small:
What value do you think I could put into an ALTER TABLE ... SET STATISTICS statement?

Also, the solution given by Simon Riggs works well.
<quote>

Recode your SQL with an IN subselect that retrieves all possible keywords before it accesses the larger table.
</quote>
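(A sketch of that recoding for a single keyword, using the same tables and columns as the query above; the exact shape Simon Riggs had in mind is an assumption:)

```sql
-- Hypothetical IN-subselect form: resolve the matching keyword ids
-- first, then probe the larger engine table with the resulting set.
SELECT count(DISTINCT engine.f_id) AS results
FROM engine
WHERE engine.k_id IN (SELECT k_id
                      FROM keywords
                      WHERE keyword LIKE 'exploitation%');
```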

But I will also try the old queries with the increased statistics target and compare performance.


