Tom Lane wrote:
> Mike Mascari <[EMAIL PROTECTED]> writes:
> Uh ... dare I ask whether you think it's too high? Or too low?

Too low. In fact, after testing some of my queries which involve a
fairly large number of tables, I usually just wind up turning it off.

> Well, that's why it's configurable ;-). But don't you find that it
> takes a long time to plan the larger queries? How many tables are
> involved, exactly?

True! Well, this particular query uses a UNION where the first SELECT
is an 11-way join and the second a 5-way join, so it is not testing the
64-way join scenario. From memory, the most joins I execute in a single
query is in the low twenties. In those scenarios, I had used explicit
join syntax to improve planning time.
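By "explicit join syntax" I mean rewriting the comma-separated FROM
list as JOIN ... ON clauses, which the planner takes as a fixed join
order instead of searching every ordering itself. A minimal sketch of
the rewrite, using made-up table names rather than my real schema:

-- Implicit join: the planner considers every possible join order,
-- which blows up combinatorially as tables are added.
SELECT o.id, c.name
FROM orders o, customers c, items i
WHERE c.id = o.customer_id AND i.order_id = o.id;

-- Explicit join: the planner follows the order as written, so
-- planning stays cheap even with twenty-odd tables.
SELECT o.id, c.name
FROM orders o
  JOIN customers c ON c.id = o.customer_id
  JOIN items i ON i.order_id = o.id;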
I ran a crude script to measure the differences in planning time (EXPLAIN) and in planning-plus-execution time (EXPLAIN ANALYZE). I wanted the ANALYZE runs as well, since the generated plans were different and I feared the GEQO-generated plan, rather than the planning itself, might be the actual cause of the sluggishness. The script just feeds the SQL to psql, so I know it is also timing the client, connection costs, psql formatting, etc. -- in other words, crude. But as you can see, over ten runs for each category, GEQO loses big. I get (in seconds):
      label      | count |  avg   |       stddev
-----------------+-------+--------+---------------------
 NO GEQO PLAN    |    10 | 0.8809 | 0.00564604089409752
 NO GEQO ANALYZE |    10 | 1.1534 |  0.0093594871654564
 GEQO PLAN       |    10 | 3.0127 |   0.119783183757633
 GEQO ANALYZE    |    10 | 6.0446 |    2.50524499578163
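For reference, the sort of thing the script feeds to psql looks
roughly like the following; the two-table query is a hypothetical
stand-in for my real UNION (which has to exceed geqo_threshold for
the SET to matter at all), and each run is timed from the shell with
something like "time psql -f bench.sql":

SET geqo TO off;   -- the NO GEQO rows: standard exhaustive planner
EXPLAIN            -- PLAN rows: planning only
SELECT o.id FROM orders o, customers c WHERE c.id = o.customer_id;
EXPLAIN ANALYZE    -- ANALYZE rows: planning plus execution
SELECT o.id FROM orders o, customers c WHERE c.id = o.customer_id;

SET geqo TO on;    -- the GEQO rows: genetic query optimizer
EXPLAIN
SELECT o.id FROM orders o, customers c WHERE c.id = o.customer_id;
EXPLAIN ANALYZE
SELECT o.id FROM orders o, customers c WHERE c.id = o.customer_id;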
Mike Mascari