Re: [PERFORM] Query plan for very large number of joins

2005-06-04 Thread philb
>> Despite being fairly restricted in scope,
>> the schema is highly denormalized hence the large number of tables.
>
> Do you mean normalized? Or do you mean you've pushed the superclass
> details down onto each of the leaf classes?

Sorry, I meant normalized, typing faster than I'm thinking here :)
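For readers following the terminology: "pushing superclass details down onto each of the leaf classes" describes a table-per-concrete-class mapping, whereas the normalized (joined) mapping keeps one table per class in the hierarchy and reassembles objects with joins, which is how a deep class hierarchy multiplies join count. A hypothetical two-level sketch (table and column names invented for illustration):

    -- Normalized ("joined") inheritance mapping: one table per class
    CREATE TABLE document (id integer PRIMARY KEY, created date);
    CREATE TABLE invoice  (id integer PRIMARY KEY REFERENCES document,
                           total numeric);

    -- Loading an invoice costs one join per inheritance level
    SELECT d.created, i.total
    FROM invoice i
    LEFT JOIN document d ON d.id = i.id;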

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread Simon Riggs
On Fri, 2005-06-03 at 13:22 +0100, [EMAIL PROTECTED] wrote:
>>> I am using PostgreSQL (7.4) with a schema that was generated
>>> automatically (using hibernate). The schema consists of about 650
>>> relations. One particular query (also generated automatically)
>>> consists of left joining approximately 350 tables. …
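The reply itself is cut off above; for orientation, these are the PostgreSQL 7.4 planner settings that govern join-order search on queries of this size (a sketch of the relevant GUCs, not a quote from the reply):

    -- Above this many FROM items, the genetic optimizer (GEQO) takes
    -- over from exhaustive join-order search
    SHOW geqo_threshold;

    -- Limits on how many explicit JOINs / FROM items the planner will
    -- reorder; a 350-way left join far exceeds typical values
    SHOW join_collapse_limit;
    SHOW from_collapse_limit;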

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread Tom Lane
<[EMAIL PROTECTED]> writes:
> I've attached the schema and query text, hopefully it will be of some use
> to you. Note that both are taken from the HyperUBL project
> (https://hyperubl.dev.java.net/). Sadly, at this stage I think it's
> time for me to try alternatives to either Hibernate or PostgreSQL…

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread philb
Anyone following this thread might be interested to know that disabling the merge and hash joins (as suggested below) resulted in the execution time dropping from ~90 seconds to ~35 seconds. Disabling GEQO has brought about a marginal reduction (~1 second, pretty much within the margin of error)…
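A minimal sketch of the session settings involved, assuming a psql session; the thread names the features but does not show the exact commands:

    -- Planner GUCs available in PostgreSQL 7.4: steer the planner away
    -- from merge and hash joins for this session only (nested-loop
    -- joins remain available)
    SET enable_mergejoin = off;
    SET enable_hashjoin = off;

    -- Disable the genetic query optimizer in favour of exhaustive search
    SET geqo = off;

    -- ... run the problem query here, then restore the defaults
    RESET enable_mergejoin;
    RESET enable_hashjoin;
    RESET geqo;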

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread Tom Lane
<[EMAIL PROTECTED]> writes:
> Thanks for the suggestion. I've timed both the EXPLAIN and the EXPLAIN
> ANALYZE operations. Both operations took 1m 37s. The analyze output
> indicates that the query execution time was 950ms. This doesn't square
> with the JDBC prepareStatement executing in 36m…
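Since plain EXPLAIN plans the query without running it, identical timings for EXPLAIN and EXPLAIN ANALYZE mean nearly all of the 1m 37s is planning overhead. A sketch of the measurement, assuming psql ("SELECT ..." stands in for the generated query, which is not reproduced here):

    \timing on

    -- EXPLAIN only invokes the planner; its wall-clock time is pure
    -- planning cost (~1m 37s in the report above)
    EXPLAIN SELECT ... ;

    -- EXPLAIN ANALYZE plans *and* executes; per the report, execution
    -- itself accounts for only ~950 ms of the total
    EXPLAIN ANALYZE SELECT ... ;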

Re: [PERFORM] Query plan for very large number of joins

2005-06-03 Thread philb
>>> I am using PostgreSQL (7.4) with a schema that was generated
>>> automatically (using hibernate). The schema consists of about 650
>>> relations. One particular query (also generated automatically)
>>> consists of left joining approximately 350 tables.
[snip]
> One thought is that I am not s…

Re: [PERFORM] Query plan for very large number of joins

2005-06-02 Thread Sebastian Hennebrueder
Tom Lane wrote:
> Richard Huxton writes:
>> [EMAIL PROTECTED] wrote:
>>> I am using PostgreSQL (7.4) with a schema that was generated
>>> automatically (using hibernate). The schema consists of about 650
>>> relations. One particular query (also generated automatically)
>>> consists of left joining approximately 350 tables. …

Re: [PERFORM] Query plan for very large number of joins

2005-06-02 Thread PFC
> I am using PostgreSQL (7.4) with a schema that was generated
> automatically (using hibernate). The schema consists of about 650
> relations. One particular query (also generated automatically)
> consists of left joining approximately 350 tables. At this …

Just out of curiosity, what application…

Re: [PERFORM] Query plan for very large number of joins

2005-06-02 Thread Tom Lane
Richard Huxton writes:
> [EMAIL PROTECTED] wrote:
>> I am using PostgreSQL (7.4) with a schema that was generated
>> automatically (using hibernate). The schema consists of about 650
>> relations. One particular query (also generated automatically)
>> consists of left joining approximately 350 tables. …

Re: [PERFORM] Query plan for very large number of joins

2005-06-02 Thread Richard Huxton
[EMAIL PROTECTED] wrote:
> Hi, I am using PostgreSQL (7.4) with a schema that was generated
> automatically (using hibernate). The schema consists of about 650
> relations. One particular query (also generated automatically)
> consists of left joining approximately 350 tables.

May I be the first to offer…

[PERFORM] Query plan for very large number of joins

2005-06-02 Thread philb
Hi, I am using PostgreSQL (7.4) with a schema that was generated automatically (using hibernate). The schema consists of about 650 relations. One particular query (also generated automatically) consists of left joining approximately 350 tables. At this stage, most tables are empty and those…
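The generated SQL itself is not preserved in this archive; as a hedged illustration (all table and column names hypothetical), a Hibernate-generated query over such a schema has roughly this shape, with one LEFT JOIN per mapped association:

    SELECT t0.id, t1.name, t2.amount   -- ... hundreds more columns
    FROM order_header t0
    LEFT JOIN customer     t1 ON t1.id = t0.customer_id
    LEFT JOIN order_line   t2 ON t2.order_id = t0.id
    LEFT JOIN product      t3 ON t3.id = t2.product_id
    LEFT JOIN tax_category t4 ON t4.id = t3.tax_category_id
    -- ... continuing to roughly 350 joined relations
    WHERE t0.id = 1;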