Hi,

I am using PostgreSQL (7.4) with a schema that was generated automatically
(using Hibernate). The schema consists of about 650 relations. One particular
query (also generated automatically) consists of left joining approximately
350 tables. At this stage most of the tables are empty, and those with values
have fewer than 50 entries. The query takes about 90 seconds to execute (on a
P4, 2.6GHz).
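
To give a flavour of what the generated SQL looks like without attaching the
whole thing, here is a heavily abbreviated sketch; the table and column names
are made up for illustration (the real ones come from the Hibernate mapping),
but the statement is essentially one long chain of left joins of this shape:

    SELECT p.id, a.id, ph.id   -- plus several hundred more columns
    FROM person p
      LEFT JOIN address a  ON a.person_id = p.id
      LEFT JOIN phone   ph ON ph.person_id = p.id
      -- ... and so on, for roughly 350 relations in total ...
    ;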

All of the relations have an indexed primary key and all of the joins are on
explicitly declared foreign keys. I've checked the obvious tunables
(effective_cache_size, shared_buffers and sort_mem) but changing these has had
no effect. The system has a total of 750MB of RAM; I've varied shared_buffers
up to 256MB and sort_mem up to 128MB without affecting the performance.
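
For reference, the sort of thing I have been trying looks like the following
(7.4 parameter names; the figures shown are the largest values I tested, and
the effective_cache_size figure is only an illustration):

    -- sort_mem is in KB; effective_cache_size is in 8KB disk pages on 7.4
    SET sort_mem = 131072;              -- ~128MB per sort
    SET effective_cache_size = 65536;   -- ~512MB of the 750MB in the box
    -- shared_buffers can only be changed in postgresql.conf (and needs a
    -- restart); I took that up to 32768 pages, i.e. ~256MB.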

Running the query as a JDBC prepared statement indicates that the query
optimiser is spending a negligible amount of time on the task (~36 ms)
compared to the executor (~90 seconds). The output of EXPLAIN indicates
(AFAICT) that all of the joins are of type "Nested Loop Left Join" and all of
the scans are of type "Seq Scan". I have refrained from posting the query and
the query plan since these are 80K and 100K apiece, but if anyone wants to see
them I can certainly forward them on.
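
To give an idea of the shape of the plan without attaching the whole 100K,
this is a hand-abbreviated sketch (table names as in the made-up example
above, with the cost and row figures elided):

    EXPLAIN SELECT ... ;   -- the same ~350-way left join as before

     Nested Loop Left Join  (cost=... rows=...)
       ->  Nested Loop Left Join  (cost=... rows=...)
             ->  Seq Scan on person p  (cost=... rows=...)
             ->  Seq Scan on address a  (cost=... rows=...)
       ->  Seq Scan on phone ph  (cost=... rows=...)
     ... and so on, one Nested Loop Left Join per joined relation ...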

My (uninformed) suspicion is that the optimiser has fallen back to a default
plan because of the number of tables in the join. My question is: is anyone
out there using PostgreSQL with a schema of this size? Is there anything that
can be done to bring about the order-of-magnitude increase in speed that I
need?
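
In case it is relevant to that suspicion, the settings I had in mind are the
genetic optimiser threshold and the 7.4 collapse limits; a sketch of where I
would look first, assuming I have the 7.4 parameter names right:

    SHOW geqo_threshold;        -- GEQO takes over above this many FROM items
    SHOW join_collapse_limit;   -- explicit JOINs are not reordered past this
    SHOW from_collapse_limit;   -- subqueries are not flattened past this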

Thanks for your help,
 -phil 

