Re: [PERFORM] Query plan for very large number of joins

2005-06-04 Thread philb
Despite being fairly restricted in scope, the schema is highly denormalized hence the large number of tables. Do you mean normalized? Or do you mean you've pushed the superclass details down onto each of the leaf classes? Sorry, I meant normalized, typing faster than I'm thinking here:) The

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread philb
I am using PostgreSQL (7.4) with a schema that was generated automatically (using hibernate). The schema consists of about 650 relations. One particular query (also generated automatically) consists of left joining approximately 350 tables. [snip] One thought is that I am not sure I

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread Tom Lane
[EMAIL PROTECTED] writes: Thanks for the suggestion. I've timed both the EXPLAIN and the EXPLAIN ANALYZE operations. Both operations took 1m 37s. The analyze output indicates that the query execution time was 950ms. This doesn't square with the JDBC prepareStatement executing in 36ms. My
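The timings quoted above are diagnostic on their own: plain EXPLAIN never executes the query, so if EXPLAIN and EXPLAIN ANALYZE both take ~97 s while the reported execution time is only 950 ms, essentially all of the time is going into the planner. A minimal sketch of that check (the query text is a placeholder, not the actual 350-join statement):

```sql
-- In psql, with client-side timing enabled:
\timing on

-- Plans only; any time reported here is pure planner cost.
EXPLAIN SELECT ... ;

-- Plans AND runs the query; the "actual time" in the output is executor cost.
EXPLAIN ANALYZE SELECT ... ;
```

If the two wall-clock times are nearly identical, as in this thread, tuning execution (indexes, join methods) cannot help much; the planning step itself is the bottleneck.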

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread philb
Anyone following this thread might be interested to know that disabling the merge and hash joins (as suggested below) resulted in the execution time dropping from ~90 seconds to ~35 seconds. Disabling GEQO has brought about a marginal reduction (~1 second, pretty much within the margin of
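The settings being toggled here are ordinary session-level planner GUCs, all present in PostgreSQL 7.4. A sketch of what "disabling the merge and hash joins" and "disabling GEQO" amounts to (shown as session settings; the thread does not specify how they were applied):

```sql
-- Steer the planner away from merge and hash joins,
-- leaving nested-loop joins as the remaining strategy.
SET enable_mergejoin = off;
SET enable_hashjoin  = off;

-- Turn off genetic query optimization so the standard
-- exhaustive planner is used regardless of join count.
SET geqo = off;
```

These are per-session hints, not hard prohibitions: the planner can still pick a disabled method if no other plan exists, and `RESET enable_mergejoin;` (etc.) restores the defaults.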

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread Tom Lane
[EMAIL PROTECTED] writes: I've attached the schema and query text, hopefully it will be of some use to you. Note that both are taken from the HyperUBL project (https://hyperubl.dev.java.net/). Sadly, at this stage I think it's time for me to try alternatives to either Hibernate or

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread Simon Riggs
On Fri, 2005-06-03 at 13:22 +0100, [EMAIL PROTECTED] wrote: I am using PostgreSQL (7.4) with a schema that was generated automatically (using hibernate). The schema consists of about 650 relations. One particular query (also generated automatically) consists of left joining approximately

[PERFORM] Query plan for very large number of joins

2005-06-02 Thread philb
Hi, I am using PostgreSQL (7.4) with a schema that was generated automatically (using hibernate). The schema consists of about 650 relations. One particular query (also generated automatically) consists of left joining approximately 350 tables. At this stage, most tables are empty and those
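For a join tree this wide, the replies in this thread revolve around a handful of 7.4-era planner parameters. A sketch of the relevant knobs (the values shown are the 7.4 defaults as I recall them, given for illustration, not as recommendations):

```sql
-- Queries with at least this many relations are planned with GEQO
-- (randomized search) instead of the exhaustive planner.
SET geqo_threshold = 11;

-- Caps on how many items the planner will flatten into one FROM list:
SET from_collapse_limit = 8;  -- for subqueries pulled up into the outer query
SET join_collapse_limit = 8;  -- for reordering explicit JOIN syntax
```

With ~350 explicit LEFT JOINs, the interaction between these limits and GEQO largely determines planning time, which is why the later messages in the thread focus on planner cost rather than execution cost.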

Re: [PERFORM] Query plan for very large number of joins

2005-06-02 Thread Richard Huxton
[EMAIL PROTECTED] wrote: Hi, I am using PostgreSQL (7.4) with a schema that was generated automatically (using hibernate). The schema consists of about 650 relations. One particular query (also generated automatically) consists of left joining approximately 350 tables. May I be the first to

Re: [PERFORM] Query plan for very large number of joins

2005-06-02 Thread Tom Lane
Richard Huxton dev@archonet.com writes: [EMAIL PROTECTED] wrote: I am using PostgreSQL (7.4) with a schema that was generated automatically (using hibernate). The schema consists of about 650 relations. One particular query (also generated automatically) consists of left joining approximately

Re: [PERFORM] Query plan for very large number of joins

2005-06-02 Thread PFC
I am using PostgreSQL (7.4) with a schema that was generated automatically (using hibernate). The schema consists of about 650 relations. One particular query (also generated automatically) consists of left joining approximately 350 tables. At this Just out of curiosity, what

Re: [PERFORM] Query plan for very large number of joins

2005-06-02 Thread Sebastian Hennebrueder
Tom Lane schrieb: Richard Huxton dev@archonet.com writes: [EMAIL PROTECTED] wrote: I am using PostgreSQL (7.4) with a schema that was generated automatically (using hibernate). The schema consists of about 650 relations. One particular query (also generated automatically) consists