We are using Postgres for the first time after being SQL Server users for a long
time, so forgive us for being noobs.

We are using a BI tool that generates a query with an unusually large number of
joins. My understanding is that with this many joins the Postgres query planner
can't do an exhaustive search of join orders, so it falls back to a heuristic
(genetic) search. Unfortunately, the query runs quite slowly (~35 seconds) and
seems to ignore the primary keys and indexes that are available.
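For reference, these are the planner limits we've been looking at in our
session; the defaults in the comments are just what we believe the stock values
to be:

  SHOW join_collapse_limit;  -- default 8: max FROM items the planner will reorder exhaustively
  SHOW from_collapse_limit;  -- default 8: limit for flattening subqueries into the parent query
  SHOW geqo_threshold;       -- default 12: FROM-item count at which the genetic optimizer (GEQO) kicks in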

Query plan here (sorry had to anonymize):
http://explain.depesz.com/s/Uml

Line 30 of the plan is one of the pain points: a sequential scan runs over 4.2
million rows even though there are indexes on oscar_bravo.foxtrot_four and
oscar_charlie.foxtrot_four.
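As a sanity check we were going to try something along these lines; the join
condition is only a guess based on the anonymized plan, and turning off
sequential scans is meant purely as a diagnostic, not a fix:

  -- Refresh planner statistics on the large table
  ANALYZE oscar_bravo;

  -- Diagnostic only: check whether the planner is able to use the index at all
  SET enable_seqscan = off;
  EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM oscar_bravo ob
    JOIN oscar_charlie oc ON oc.foxtrot_four = ob.foxtrot_four;  -- assumed join condition
  RESET enable_seqscan;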

We've tried playing around with the join_collapse_limit value by upping it from
the default of 8 to 10 or 12, but it doesn't seem to help much. Cranking it up
to an unreasonable value of 20 does shave a few seconds off the query time, but
not substantially (explain plan with the value set to 20:
http://explain.depesz.com/s/sW6).
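In case it matters, this is roughly how we've been testing it, per session,
before re-running the BI-generated query. Raising from_collapse_limit alongside
it is just something we've seen suggested, not something we've confirmed helps:

  -- Session-level only; does not touch postgresql.conf
  SET join_collapse_limit = 20;
  SET from_collapse_limit = 20;  -- assumption: often raised in tandem with join_collapse_limit
  -- ...then re-run EXPLAIN (ANALYZE, BUFFERS) on the generated query in the same session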

We haven't tried playing around with the geqo_threshold at this point.
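If that's worth exploring, we assume the experiment would look something like
the following; the values are guesses, and the idea (as we understand it) is to
keep geqo_threshold above join_collapse_limit so the exhaustive planner is
still used up to that many FROM items:

  SET geqo_threshold = 20;  -- assumed value; default is 12
  SET geqo_effort = 7;      -- range 1..10, default 5; more effort = longer planning, possibly better plans
  -- ...then re-run the query / EXPLAIN in the same session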

Any thoughts on how to reduce the run time of this query, or other Postgres
settings we should be aware of when dealing with such an unusually large number
of joins?

Thanks in advance



Marco Di Cesare

