>>> I am using PostgreSQL (7.4) with a schema that was generated
>>> automatically (using hibernate). The schema consists of about 650
>>> relations. One particular query (also generated automatically)
>>> consists of left joining approximately 350 tables.
>One thought is that I am not sure I believe the conclusion that planning
>is taking only 36 ms; even realizing that the exclusive use of left
>joins eliminates options for join order, there are still quite a lot of
>plans to consider. You should try both EXPLAIN and EXPLAIN ANALYZE
>from psql and see how long each takes. It'd also be interesting to keep
>an eye on how large the backend process grows while doing this --- maybe
>it's being driven into swap.
Thanks for the suggestion. I've timed both the EXPLAIN and the EXPLAIN
ANALYZE: both operations took 1m 37s. The ANALYZE output indicates that the
actual query execution time was 950ms. This doesn't square with the JDBC
prepareStatement() call returning in 36ms; my guess is that the prepare was
actually a no-op on the driver side, but I haven't found anything confirming
this yet.
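One way to separate planning time from execution time server-side, without
involving the JDBC layer at all, is psql's \timing switch (a sketch of the
session; the actual 350-way left join query is elided):

```sql
-- In psql, \timing reports the wall-clock time of each statement.
-- Plain EXPLAIN parses and plans the query but does not run it, so its
-- elapsed time is (roughly) parse + plan cost alone:
\timing
EXPLAIN SELECT ... ;          -- elided: the generated 350-table left join
-- EXPLAIN ANALYZE plans *and* executes, so the difference between the
-- two elapsed times approximates pure execution cost:
EXPLAIN ANALYZE SELECT ... ;
```

If both take ~1m37s while the reported execution time is under a second,
essentially all of the elapsed time is going into planning, which would be
consistent with prepareStatement() not actually contacting the server.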
So, is it correct to interpret this as the query planner taking an awfully
long time? Is it possible to force the query planner to adopt a specific
strategy and not search for alternatives? (I'm aware of the enable_* = off
switches; it's the reverse logic that I'm thinking of here.) Alternatively,
is there some way to check whether the query planner is bottlenecking on a
specific resource?
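On the "force a specific strategy" question: since the generated query uses
explicit JOIN syntax, the join_collapse_limit setting (present since 7.4)
should do exactly this, pinning the join order to the one written in the
query instead of searching alternatives. A sketch, assuming the 7.4 GUC
names:

```sql
-- With explicit JOIN syntax, join_collapse_limit = 1 tells the planner
-- to keep the join order exactly as written in the FROM clause, skipping
-- the (expensive) search over alternative orderings:
SET join_collapse_limit = 1;

-- The other relevant knob: above geqo_threshold FROM items the genetic
-- query optimizer is used instead of exhaustive search, trading plan
-- quality for planning time:
SHOW geqo_threshold;
```

Worth trying per-session first (plain SET) before committing either value
to postgresql.conf, since pinning the join order also forfeits any better
orderings the planner might have found.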
Finally, PFC was asking about the nature of the application: it's not a
specific application, just a generic bit of infrastructure consisting of a
transformation of the UBL schema. Despite being fairly restricted in scope,
the schema is highly normalized, hence the large number of tables.
Thanks for all your help.