Bill Chandler wrote:


Using PostgreSQL 7.4.2 on Solaris. I'm trying to improve performance on some queries to my databases, so I wanted to try out various index structures.

Since I'm going to be running my performance tests repeatedly, I created some SQL scripts to delete and recreate various index configurations. One of the scripts contains the commands for recreating the 'original' index configuration (i.e. the one I've already got some numbers for). The only thing is, now when I delete and recreate the original indexes and then run the query, I'm finding the performance has gone completely down the tubes compared to what I originally had. A query that used to take 5 minutes to complete now takes hours.
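
For illustration, a delete-and-recreate script of the kind described might look like the sketch below. The index and column names here are hypothetical, since the actual index definitions were not posted:

```sql
-- Hypothetical recreate script; real index/column names unknown.
DROP INDEX idx_tbl_1_x;
CREATE INDEX idx_tbl_1_x ON tbl_1 (x);

-- After recreating indexes, refresh planner statistics so the
-- planner sees the new indexes with current stats:
ANALYZE tbl_1;
```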

For what it's worth, my query looks something like this (placeholders stand in for column names):

select * from tbl_1, tbl_2
where tbl_1.<join_col> = tbl_2.<join_col>
  and tbl_2.<text_col> like 'x%y%'
  and tbl_1.x > 1234567890123
order by tbl_1.x;

tbl_1 is very big (> 2 million rows)
tbl_2 is relatively small (7000 or so rows)
tbl_1.x is a numeric(13); the join columns are integers; the column in the LIKE condition is a varchar(64)

I've run 'VACUUM ANALYZE' on both tables involved in
the query. I also used 'EXPLAIN' and observed that
the query plan is completely changed from what it was originally.

Get an EXPLAIN ANALYZE. That gives actual vs. planned time spent at each plan node; see what is causing the difference. A discrepancy between planned and actual rows is usually an indication of out-of-date stats.
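
For example, using the shape of the query from the original message (column names are placeholders):

```sql
-- EXPLAIN ANALYZE actually runs the query and reports, for each
-- plan node, the planner's row estimate alongside the real row
-- count and elapsed time:
EXPLAIN ANALYZE
SELECT * FROM tbl_1, tbl_2
WHERE tbl_1.<join_col> = tbl_2.<join_col>
  AND tbl_2.<text_col> LIKE 'x%y%'
  AND tbl_1.x > 1234567890123
ORDER BY tbl_1.x;
```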

Which indexes are on these tables? You should list the indexed fields first in the WHERE clause. Also list the most selective field first, so that it eliminates as many rows as possible in the first scan.
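
One way to see the current index definitions on both tables (this works on 7.4; `\d tbl_1` in psql shows the same information):

```sql
-- pg_indexes is a system view listing every index and its
-- CREATE INDEX statement:
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('tbl_1', 'tbl_2');
```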

I hope you have read the tuning articles and applied some basic tuning.

And post the table schema, hardware config, PostgreSQL config (the important settings, of course), and EXPLAIN ANALYZE output for the queries. That would be something to start with.
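
The config settings most worth reporting can be read straight from a psql session. These are commonly checked parameters, not a definitive list (note that on 7.4 the sort-memory setting is sort_mem; it became work_mem in 8.0):

```sql
-- SHOW displays the current value of a configuration parameter:
SHOW shared_buffers;
SHOW sort_mem;              -- renamed work_mem in 8.0
SHOW effective_cache_size;
SHOW random_page_cost;
```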

