Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-08 Thread Jona




Sorry Tom, misread your mail! My bad :-(

I believe the following is the data you need?

   

Live Server:

  relname     | relpages
 -------------+----------
  ctp_statcon |       72
  statcon_pk  |      135

Test Server:

  relname     | relpages
 -------------+----------
  ctp_statcon |       34
  statcon_pk  |       28


I executed the following query to obtain that data:
SELECT relname, relpages
FROM pg_class
WHERE relname = 'statcon_pk' OR relname = 'sc2ctp_fk'
   OR relname = 'sc2mtp_fk' OR relname = 'sc2sc_fk'
   OR relname = 'ctp_statcon'

I think the size difference for the index is surprisingly big, considering
that there are only around 1,100 more rows in the table on the live server
than on the test server.
Count for Live Server: 12597
Count for Test Server: 11494
Any insight into this?
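
For what it's worth, if the index really is bloated, rebuilding it should
shrink it back down; a minimal sketch (standard commands, offered only as a
suggestion, and note that REINDEX takes an exclusive lock on the table
while it runs):

    REINDEX INDEX statcon_pk;
    -- or rebuild every index on the table in one go:
    REINDEX TABLE statcon_tbl;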

Cheers
Jona

PS: The meta data for the table is:
CREATE TABLE statcon_tbl
(
 id serial NOT NULL,
 data bytea,
 wm bool DEFAULT 'FALSE',
 created timestamp DEFAULT now(),
 modified timestamp DEFAULT now(),
 enabled bool DEFAULT 'TRUE',
 bitsperpixel int4 DEFAULT 0,
 mtpid int4,
 sctid int4,
 ctpid int4,
 CONSTRAINT statcon_pk PRIMARY KEY (id),
 CONSTRAINT sc2ctp_fk FOREIGN KEY (ctpid) REFERENCES contype_tbl (id)
   ON UPDATE CASCADE ON DELETE CASCADE,
 CONSTRAINT sc2mtp_fk FOREIGN KEY (mtpid) REFERENCES mimetype_tbl (id)
   ON UPDATE CASCADE ON DELETE CASCADE,
 CONSTRAINT sc2sct_fk FOREIGN KEY (sctid) REFERENCES statcontrans_tbl (id)
   ON UPDATE CASCADE ON DELETE CASCADE
)
WITHOUT OIDS;
CREATE INDEX ctp_statcon ON statcon_tbl USING btree (sctid, ctpid);


Tom Lane wrote:

  Jona [EMAIL PROTECTED] writes:
  anyway, here's the info for relpages:
  Live Server: 424
  Test Server: 338

I was asking about the indexes associated with the table, not the table
itself.

			regards, tom lane






Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-07 Thread Jona




Wouldn't the VACUUM have made them equivalent??

anyway, here's the info for relpages:
Live Server: 424
Test Server: 338

Please note, though, that there are more rows on the live server than on
the test server due to a recent upload.
Total row counts are as follows:
Live Server: 12597
Test Server: 11494

When the problems started the tables were identical in size, though.
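
For reference, a quick way to sanity-check index bloat (just a sketch using
the standard pg_class columns) is to compare each index's page count with
its tuple count on both servers:

    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('statcon_pk', 'ctp_statcon');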

Cheers
Jona

Tom Lane wrote:

  Jona [EMAIL PROTECTED] writes:
  
  
Test Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 1.98 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.
Total CPU 1.75s/0.23u sec elapsed 30.36 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM

Live Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
Total CPU 0.00s/0.01u sec elapsed 0.01 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
Total CPU 3.21s/0.47u sec elapsed 18.03 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM

  
  
Hm, the physical table sizes aren't very different, which suggests that
the problem must lie with the indexes.  Unfortunately, VACUUM in 7.3
doesn't tell you anything about indexes if it doesn't have any dead rows
to clean up.  Could you look at pg_class.relpages for all the indexes
of this table, and see what that shows?

			regards, tom lane







Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-07 Thread Tom Lane
Jona [EMAIL PROTECTED] writes:
 anyway, here's the info for relpages:
 Live Server: 424
 Test Server: 338

I was asking about the indexes associated with the table, not the table
itself.

regards, tom lane



Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Jona




Thank you for the swift reply.
The test server is hardly ever vacuumed as it generally sees very limited
traffic. VACUUM is only necessary if the server sees a lot of write
operations (i.e. UPDATE, DELETE, INSERT), right?
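
For what it's worth, a quick way (offered only as a sketch) to see how
stale the planner's statistics have become on a rarely-vacuumed box is to
compare the row estimate stored by the last VACUUM/ANALYZE with the real
count:

    SELECT reltuples AS planner_row_estimate
    FROM pg_class
    WHERE relname = 'statcon_tbl';

    SELECT count(*) AS actual_rows
    FROM statcon_tbl;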

What explains the different choice of query plans then?
As can be seen from the following snippets, the test server decides to use
an index twice in Query 2, whereas the live server decides to do a full
scan of tables holding 38.5k and 5.5k records.
In Query 3 it's vice versa.
Seems strange to me...

Query 2:
--- Bad idea, price_tbl holds 38.5k records

Test:
- Index Scan using aff_price_uq on price_tbl  (cost=0.00..6.01 rows=1
  width=4) (actual time=0.01..0.01 rows=1 loops=2838)

Live:
- Seq Scan on price_tbl  (cost=0.00..883.48 rows=2434 width=4)
  (actual time=0.86..67.25 rows=4570 loops=1)
    Filter: (affid = 8)


--- Bad idea, sct2subcattype_tbl holds 5.5k records

Test:
- Index Scan using subcat_uq on sct2subcattype_tbl  (cost=0.00..79.26
  rows=26 width=8) (actual time=0.01..0.17 rows=59 loops=48)

Live:
- Seq Scan on sct2subcattype_tbl  (cost=0.00..99.26 rows=5526 width=8)
  (actual time=0.01..30.16 rows=5526 loops=1)


Query 3:
--- Bad idea, sct2lang_tbl holds 8.6k records

Test:
- Seq Scan on sct2lang_tbl  (cost=0.00..150.79 rows=8679 width=8)
  (actual time=0.03..10.70 rows=8679 loops=1)

Live:
- Index Scan using sct2lang_uq on sct2lang_tbl  (cost=0.00..8.13
  rows=2 width=8) (actual time=1.10..2.39 rows=2 loops=69)
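
A diagnostic I could try (just a sketch, not something from this exchange)
is to temporarily disable sequential scans in one session on the live
server and re-run EXPLAIN ANALYZE, to see whether the index path is
genuinely slower there or merely costed wrong:

    SET enable_seqscan TO off;
    EXPLAIN ANALYZE
    SELECT sctid FROM price_tbl WHERE affid = 8;  -- hypothetical probe query
    SET enable_seqscan TO on;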


Will get a VACUUM VERBOSE of StatCon_Tbl

Cheers
Jona

PS: The query plans were extracted using pgAdmin on Windows; if you can
recommend a better cross-platform PostgreSQL client I'd be happy to try it
out.

Tom Lane wrote:

  Jona [EMAIL PROTECTED] writes:
  
  
I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the
difference in execution plans between our test server running PostgreSQL
7.3.6 and our production server running PostgreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB of
memory and an IDE disk.
The production server is a dual-CPU Xeon Pentium 4 with 2GB memory and
SCSI disks.
One should expect the production server to be faster, but apparently not,
as the query plans outlined below show.

  
  
I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.

  
  
1) How come the query plans between the 2 servers are different?

  
  
The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?

It'd be interesting to see the output of "vacuum verbose statcon_tbl"
on both servers ...

			regards, tom lane

PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...






Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Jona




Results of VACUUM VERBOSE from both servers

Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
 Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.
 Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
VACUUM

Live Server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
 Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, UnUsed 5487.
 Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
VACUUM

Cheers
Jona

Tom Lane wrote:

  Jona [EMAIL PROTECTED] writes:
  
  
I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the
difference in execution plans between our test server running PostgreSQL
7.3.6 and our production server running PostgreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB of
memory and an IDE disk.
The production server is a dual-CPU Xeon Pentium 4 with 2GB memory and
SCSI disks.
One should expect the production server to be faster, but apparently not,
as the query plans outlined below show.

  
  
I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.

  
  
1) How come the query plans between the 2 servers are different?

  
  
The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?

It'd be interesting to see the output of "vacuum verbose statcon_tbl"
on both servers ...

			regards, tom lane

PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...






Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Christopher Kings-Lynne
You didn't do analyze.
Chris
Jona wrote:
  Results of VACUUM VERBOSE from both servers
Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 0, 
UnUsed 0.
Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
VACUUM

Live Server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
VACUUM

Cheers
Jona
Tom Lane wrote:
Jona [EMAIL PROTECTED] writes:
I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the
difference in execution plans between our test server running PostgreSQL
7.3.6 and our production server running PostgreSQL 7.3.9.
The test server is an upgraded home machine, a Pentium 4 with 1GB of
memory and an IDE disk.
The production server is a dual-CPU Xeon Pentium 4 with 2GB memory and
SCSI disks.
One should expect the production server to be faster, but apparently not,
as the query plans outlined below show.
   

I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:
-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
     Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))
-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
     Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))
Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.
 

1) How come the query plans between the 2 servers are different?
   

The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?
It'd be interesting to see the output of vacuum verbose statcon_tbl
on both servers ...
regards, tom lane
PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...



Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Jona
Now with analyze
Test Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
   Total CPU 0.02s/0.00u sec elapsed 1.98 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.
   Total CPU 1.75s/0.23u sec elapsed 30.36 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM
Live Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
   Total CPU 0.00s/0.01u sec elapsed 0.01 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
   Total CPU 3.21s/0.47u sec elapsed 18.03 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM

I have done some sampling, running the same query a few times over the
past few hours, and it appears that the VACUUM has helped.
The following are the results after the vacuum:

After VACUUM VERBOSE:
Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..21.29 rows=5 width=4) (actual time=0.07..0.37 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

After VACUUM ANALYZE VERBOSE:
Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.03 rows=5 width=4) (actual time=0.09..0.37 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

The only question that remains is why one server uses its indexes and the
other doesn't, even though VACUUM ANALYZE has now been run on both servers.
And even more interesting: before the VACUUM ANALYZE it was the server
where no vacuum had taken place that used its index.
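
If the row estimates still differ after ANALYZE, one thing that might help
(offered only as a sketch, not something we have tried) is raising the
statistics target on the join columns and re-analyzing:

    ALTER TABLE statcon_tbl ALTER COLUMN sctid SET STATISTICS 100;
    ALTER TABLE statcon_tbl ALTER COLUMN ctpid SET STATISTICS 100;
    ANALYZE statcon_tbl;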

Cheers
Jona
Christopher Kings-Lynne wrote:
You didn't do analyze.
Chris
Jona wrote:
  Results of VACUUM VERBOSE from both servers
Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, 
UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 
0, UnUsed 0.
Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
VACUUM

Live Server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, 
UnUsed 6101.
Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
VACUUM

Cheers
Jona
Tom Lane wrote:
Jona [EMAIL PROTECTED] writes:
I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the
difference in execution plans between our test server running PostgreSQL
7.3.6 and our production server running PostgreSQL 7.3.9.
The test server is an upgraded home machine, a Pentium 4 with 1GB of
memory and an IDE disk.
The production server is a dual-CPU Xeon Pentium 4 with 2GB memory and
SCSI disks.
One should expect the production server to be faster, but apparently not,
as the query plans outlined below show.

I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:
-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
     Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
     Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.
 

1) How come the query plans between the 2 servers are different?
  
The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test 
server?

It'd be interesting to see the output of vacuum verbose statcon_tbl
on both servers ...
regards, tom lane
PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...
 



[PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9 part 1

2005-05-05 Thread Jona
Hi
I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the
difference in execution plans between our test server running PostgreSQL
7.3.6 and our production server running PostgreSQL 7.3.9.
The test server is an upgraded home machine, a Pentium 4 with 1GB of
memory and an IDE disk.
The production server is a dual-CPU Xeon Pentium 4 with 2GB memory and
SCSI disks.
One should expect the production server to be faster, but apparently not,
as the query plans outlined below show.

My questions can be summed up as:
1) How come the query plans between the 2 servers are different?
2) How come the production server in general estimates the cost of the
query plans so horribly wrong? (i.e. it chooses a bad query plan whereas
the test server chooses a good plan)
3) In Query 2, how come the production server refuses to use its indexes
(subcat_uq and aff_price_uq, both unique indexes), whereas the test server
determines that the indexes are the way to go?
4) In Query 3, how come the test server refuses to use its index
(sct2lang_uq) while the production server uses it? And why is the test
server still faster even though it makes a sequential scan of a table
with 8.5k records?

Please note, a VACUUM ANALYSE is run on the production server once a day
(it used to be once an hour, but that seemed to make no difference);
however, there are generally no writes to the tables used in the queries.
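
Since the hardware on the two machines differs quite a bit, it may also be
worth confirming that the planner cost settings match on both; a sketch
(the SET values below are purely illustrative, not recommendations):

    SHOW effective_cache_size;
    SHOW random_page_cost;
    -- per session, to test the effect on the plans:
    SET random_page_cost = 2;
    SET effective_cache_size = 50000;  -- measured in 8 kB pages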

If anyone could shed some light on these issues I would truly appreciate 
it.

Cheers
Jona
PS. Please refer to part 2 for the other queries and query plans
 

Query 1:
EXPLAIN ANALYZE
SELECT DISTINCT StatConTrans_Tbl.id, Code_Tbl.sysnm AS code, 
PriceCat_Tbl.amount AS price, Country_Tbl.currency,
 CreditsCat_Tbl.amount AS credits, Info_Tbl.title, Info_Tbl.description
FROM (SCT2SubCatType_Tbl
INNER JOIN SCT2Lang_Tbl ON SCT2SubCatType_Tbl.sctid = SCT2Lang_Tbl.sctid
INNER JOIN Language_Tbl ON SCT2Lang_Tbl.langid = Language_Tbl.id AND 
Language_Tbl.sysnm = UPPER('us') AND Language_Tbl.enabled = true
INNER JOIN Info_Tbl ON SCT2SubCatType_Tbl.sctid = Info_Tbl.sctid AND 
Language_Tbl.id = Info_Tbl.langid
INNER JOIN SubCatType_Tbl ON SCT2SubCatType_Tbl.subcattpid = 
SubCatType_Tbl.id AND SubCatType_Tbl.enabled = true
INNER JOIN CatType_Tbl ON SubCatType_Tbl.cattpid = CatType_Tbl.id AND 
CatType_Tbl.enabled = true
INNER JOIN SuperCatType_Tbl ON CatType_Tbl.spcattpid = 
SuperCatType_Tbl.id AND SuperCatType_Tbl.enabled = true
INNER JOIN StatConTrans_Tbl ON SCT2SubCatType_Tbl.sctid = 
StatConTrans_Tbl.id AND StatConTrans_Tbl.enabled = true
INNER JOIN Price_Tbl ON StatConTrans_Tbl.id = Price_Tbl.sctid AND 
Price_Tbl.affid = 8
INNER JOIN PriceCat_Tbl ON Price_Tbl.prccatid = PriceCat_Tbl.id AND 
PriceCat_Tbl.enabled = true
INNER JOIN Country_Tbl ON PriceCat_Tbl.cntid = Country_Tbl.id AND 
Country_Tbl.enabled = true
INNER JOIN CreditsCat_Tbl ON Price_Tbl.crdcatid = CreditsCat_Tbl.id AND 
CreditsCat_Tbl.enabled = true
INNER JOIN StatCon_Tbl ON StatConTrans_Tbl.id = StatCon_Tbl.sctid AND 
StatCon_Tbl.ctpid = 1
INNER JOIN Code_Tbl ON SuperCatType_Tbl.id = Code_Tbl.spcattpid AND 
Code_Tbl.affid = 8 AND Code_Tbl.cdtpid = 1)
WHERE SCT2SubCatType_Tbl.subcattpid = 79
ORDER BY StatConTrans_Tbl.id DESC
LIMIT 8 OFFSET 0

Plan on PostgreSQL 7.3.6 on Red Hat Linux 3.2.3-39
Limit  (cost=178.59..178.61 rows=1 width=330) (actual time=22.77..28.51 rows=4 loops=1)
  ->  Unique  (cost=178.59..178.61 rows=1 width=330) (actual time=22.77..28.50 rows=4 loops=1)
        ->  Sort  (cost=178.59..178.60 rows=1 width=330) (actual time=22.76..22.85 rows=156 loops=1)
              Sort Key: statcontrans_tbl.id, code_tbl.sysnm, pricecat_tbl.amount, country_tbl.currency, creditscat_tbl.amount, info_tbl.title, info_tbl.description
              ->  Hash Join  (cost=171.19..178.58 rows=1 width=330) (actual time=3.39..6.55 rows=156 loops=1)
                    Hash Cond: ("outer".cntid = "inner".id)
                    ->  Nested Loop  (cost=170.13..177.51 rows=1 width=312) (actual time=3.27..5.75 rows=156 loops=1)
                          Join Filter: ("inner".sctid = "outer".sctid)
                          ->  Hash Join  (cost=170.13..171.48 rows=1 width=308) (actual time=3.12..3.26 rows=4 loops=1)
                                Hash Cond: ("outer".crdcatid = "inner".id)
                                ->  Hash Join  (cost=169.03..170.38 rows=1 width=300) (actual time=3.00..3.11 rows=4 loops=1)
                                      Hash Cond: ("outer".spcattpid = "inner".spcattpid)
                                      ->  Hash Join  (cost=167.22..168.56 rows=1 width=253) (actual time=2.88..2.97 rows=4 loops=1)
                                            Hash Cond: ("outer".id = "inner".prccatid)
                                            ->  Seq Scan on pricecat_tbl  (cost=0.00..1.29 rows=12 width=12)

Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9 part 1

2005-05-05 Thread Tom Lane
Jona [EMAIL PROTECTED] writes:
 I'm currently experiencing problems with long query execution times.
 What I believe makes these problems particularly interesting is the
 difference in execution plans between our test server running PostgreSQL
 7.3.6 and our production server running PostgreSQL 7.3.9.
 The test server is an upgraded home machine, a Pentium 4 with 1GB of
 memory and an IDE disk.
 The production server is a dual-CPU Xeon Pentium 4 with 2GB memory and
 SCSI disks.
 One should expect the production server to be faster, but apparently not,
 as the query plans outlined below show.

I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
      Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
      Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.

 1) How come the query plans between the 2 servers are different?

The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?

It'd be interesting to see the output of vacuum verbose statcon_tbl
on both servers ...

regards, tom lane

PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...



[PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9 part 1b

2005-05-03 Thread Jona
Please refer to part 1a for questions and part 2 for more queries and 
query plans.
Why won't this list accept my questions and sample data in one mail???

/Jona
 

Query 1:
EXPLAIN ANALYZE
SELECT DISTINCT StatConTrans_Tbl.id, Code_Tbl.sysnm AS code, 
PriceCat_Tbl.amount AS price, Country_Tbl.currency,
 CreditsCat_Tbl.amount AS credits, Info_Tbl.title, Info_Tbl.description
FROM (SCT2SubCatType_Tbl
INNER JOIN SCT2Lang_Tbl ON SCT2SubCatType_Tbl.sctid = SCT2Lang_Tbl.sctid
INNER JOIN Language_Tbl ON SCT2Lang_Tbl.langid = Language_Tbl.id AND 
Language_Tbl.sysnm = UPPER('us') AND Language_Tbl.enabled = true
INNER JOIN Info_Tbl ON SCT2SubCatType_Tbl.sctid = Info_Tbl.sctid AND 
Language_Tbl.id = Info_Tbl.langid
INNER JOIN SubCatType_Tbl ON SCT2SubCatType_Tbl.subcattpid = 
SubCatType_Tbl.id AND SubCatType_Tbl.enabled = true
INNER JOIN CatType_Tbl ON SubCatType_Tbl.cattpid = CatType_Tbl.id AND 
CatType_Tbl.enabled = true
INNER JOIN SuperCatType_Tbl ON CatType_Tbl.spcattpid = 
SuperCatType_Tbl.id AND SuperCatType_Tbl.enabled = true
INNER JOIN StatConTrans_Tbl ON SCT2SubCatType_Tbl.sctid = 
StatConTrans_Tbl.id AND StatConTrans_Tbl.enabled = true
INNER JOIN Price_Tbl ON StatConTrans_Tbl.id = Price_Tbl.sctid AND 
Price_Tbl.affid = 8
INNER JOIN PriceCat_Tbl ON Price_Tbl.prccatid = PriceCat_Tbl.id AND 
PriceCat_Tbl.enabled = true
INNER JOIN Country_Tbl ON PriceCat_Tbl.cntid = Country_Tbl.id AND 
Country_Tbl.enabled = true
INNER JOIN CreditsCat_Tbl ON Price_Tbl.crdcatid = CreditsCat_Tbl.id AND 
CreditsCat_Tbl.enabled = true
INNER JOIN StatCon_Tbl ON StatConTrans_Tbl.id = StatCon_Tbl.sctid AND 
StatCon_Tbl.ctpid = 1
INNER JOIN Code_Tbl ON SuperCatType_Tbl.id = Code_Tbl.spcattpid AND 
Code_Tbl.affid = 8 AND Code_Tbl.cdtpid = 1)
WHERE SCT2SubCatType_Tbl.subcattpid = 79
ORDER BY StatConTrans_Tbl.id DESC
LIMIT 8 OFFSET 0

Plan on PostgreSQL 7.3.6 on Red Hat Linux 3.2.3-39
Limit  (cost=178.59..178.61 rows=1 width=330) (actual time=22.77..28.51 rows=4 loops=1)
  ->  Unique  (cost=178.59..178.61 rows=1 width=330) (actual time=22.77..28.50 rows=4 loops=1)
        ->  Sort  (cost=178.59..178.60 rows=1 width=330) (actual time=22.76..22.85 rows=156 loops=1)
              Sort Key: statcontrans_tbl.id, code_tbl.sysnm, pricecat_tbl.amount, country_tbl.currency, creditscat_tbl.amount, info_tbl.title, info_tbl.description
              ->  Hash Join  (cost=171.19..178.58 rows=1 width=330) (actual time=3.39..6.55 rows=156 loops=1)
                    Hash Cond: ("outer".cntid = "inner".id)
                    ->  Nested Loop  (cost=170.13..177.51 rows=1 width=312) (actual time=3.27..5.75 rows=156 loops=1)
                          Join Filter: ("inner".sctid = "outer".sctid)
                          ->  Hash Join  (cost=170.13..171.48 rows=1 width=308) (actual time=3.12..3.26 rows=4 loops=1)
                                Hash Cond: ("outer".crdcatid = "inner".id)
                                ->  Hash Join  (cost=169.03..170.38 rows=1 width=300) (actual time=3.00..3.11 rows=4 loops=1)
                                      Hash Cond: ("outer".spcattpid = "inner".spcattpid)
                                      ->  Hash Join  (cost=167.22..168.56 rows=1 width=253) (actual time=2.88..2.97 rows=4 loops=1)
                                            Hash Cond: ("outer".id = "inner".prccatid)
                                            ->  Seq Scan on pricecat_tbl  (cost=0.00..1.29 rows=12 width=12) (actual time=0.04..0.08 rows=23 loops=1)
                                                  Filter: (enabled = true)
                                            ->  Hash  (cost=167.21..167.21 rows=1 width=241) (actual time=2.80..2.80 rows=0 loops=1)
                                                  ->  Nested Loop  (cost=3.77..167.21 rows=1 width=241) (actual time=1.31..2.79 rows=4 loops=1)
                                                        Join Filter: ("inner".sctid = "outer".sctid)
                                                        ->  Nested Loop  (cost=3.77..161.19 rows=1 width=229) (actual time=1.19..2.60 rows=4 loops=1)
                                                              Join Filter: ("outer".sctid = "inner".sctid)
                                                              ->  Hash Join  (cost=3.77..155.17 rows=1 width=44) (actual time=1.07..2.37 rows=4 loops=1)
                                                                    Hash Cond: ("outer".langid = "inner".id)
                                                                    ->  Nested Loop  (cost=2.69..154.06 rows=7 width=40) (actual time=0.90..2.18 rows=8 loops=1)
                                                                          Join Filter: ("outer".sctid = "inner".sctid)
                                                                          ->  Nested Loop  (cost=2.69..21.30 rows=1 width=32) (actual time=0.78..1.94 rows=4 loops=1)