Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-08 Thread Jona




Sorry Tom, misread your mail! My bad :-(

I believe the following is the data you need?

   

Live Server:

  relname     | relpages
 -------------+----------
  ctp_statcon |       72
  statcon_pk  |      135

Test Server:

  relname     | relpages
 -------------+----------
  ctp_statcon |       34
  statcon_pk  |       28


Have executed the following query to obtain that data:
SELECT relname, relpages
FROM pg_class
WHERE relname = 'statcon_pk' OR relname = 'sc2ctp_fk' OR relname =
'sc2mtp_fk' OR relname = 'sc2sc_fk' OR relname = 'ctp_statcon'
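
For completeness, a variant of that query which finds every index on the table through pg_index, so none is missed by name, might look like this (just a sketch, not something that has been run on these servers):

SELECT c.relname, c.relpages
FROM pg_class c, pg_index i
WHERE c.oid = i.indexrelid
  AND i.indrelid = (SELECT oid FROM pg_class WHERE relname = 'statcon_tbl');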

The size difference for the index is surprisingly big, I think, considering
that there are only around 1000 rows more in the table on the live server
than on the test server.
Count for Live Server: 12597
Count for Test Server: 11494
Any insight into this?

Cheers
Jona

PS: The meta data for the table is:
CREATE TABLE statcon_tbl
(
  id serial NOT NULL,
  data bytea,
  wm bool DEFAULT 'FALSE',
  created timestamp DEFAULT now(),
  modified timestamp DEFAULT now(),
  enabled bool DEFAULT 'TRUE',
  bitsperpixel int4 DEFAULT 0,
  mtpid int4,
  sctid int4,
  ctpid int4,
  CONSTRAINT statcon_pk PRIMARY KEY (id),
  CONSTRAINT sc2ctp_fk FOREIGN KEY (ctpid) REFERENCES contype_tbl (id) ON UPDATE CASCADE ON DELETE CASCADE,
  CONSTRAINT sc2mtp_fk FOREIGN KEY (mtpid) REFERENCES mimetype_tbl (id) ON UPDATE CASCADE ON DELETE CASCADE,
  CONSTRAINT sc2sct_fk FOREIGN KEY (sctid) REFERENCES statcontrans_tbl (id) ON UPDATE CASCADE ON DELETE CASCADE
)
WITHOUT OIDS;
CREATE INDEX ctp_statcon ON statcon_tbl USING btree (sctid, ctpid);
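
If statcon_pk on the live server really has bloated (135 pages versus 28 for roughly the same row count), rebuilding the indexes would be one way to shrink them again. The following is only a sketch, not something that has been run here; note that REINDEX takes an exclusive lock on the table while it works:

-- sketch only: rebuild the two indexes on statcon_tbl
REINDEX INDEX statcon_pk;
REINDEX INDEX ctp_statcon;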


Tom Lane wrote:

  Jona <[EMAIL PROTECTED]> writes:
  
  
anyway, here's the info for relpages:
Live Server: 424
Test Server: 338

  
  
I was asking about the indexes associated with the table, not the table
itself.

			regards, tom lane
  






Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-07 Thread Tom Lane
Jona <[EMAIL PROTECTED]> writes:
> anyway, here's the info for relpages:
> Live Server: 424
> Test Server: 338

I was asking about the indexes associated with the table, not the table
itself.

regards, tom lane



Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-07 Thread Jona




Wouldn't the VACUUM have made them equivalent??

anyway, here's the info for relpages:
Live Server: 424
Test Server: 338

Please note though that there are more rows on the live server than on
the test server due to a recent upload.
Total row counts are as follows:
Live Server: 12597
Test Server: 11494

When the problems started, the tables were of identical size, though.

Cheers
Jona

Tom Lane wrote:

  Jona <[EMAIL PROTECTED]> writes:
  
  
Test Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 1.98 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.
Total CPU 1.75s/0.23u sec elapsed 30.36 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM

  
  
  
  
Live Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
Total CPU 0.00s/0.01u sec elapsed 0.01 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
Total CPU 3.21s/0.47u sec elapsed 18.03 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM

  
  
Hm, the physical table sizes aren't very different, which suggests that
the problem must lie with the indexes.  Unfortunately, VACUUM in 7.3
doesn't tell you anything about indexes if it doesn't have any dead rows
to clean up.  Could you look at pg_class.relpages for all the indexes
of this table, and see what that shows?

			regards, tom lane

  






Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Tom Lane
Jona <[EMAIL PROTECTED]> writes:
> Test Server:
> comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
> INFO:  --Relation public.statcon_tbl--
> INFO:  Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
> Total CPU 0.02s/0.00u sec elapsed 1.98 sec.
> INFO:  --Relation pg_toast.pg_toast_179851--
> INFO:  Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.
> Total CPU 1.75s/0.23u sec elapsed 30.36 sec.
> INFO:  Analyzing public.statcon_tbl
> VACUUM

> Live Server:
> comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
> INFO:  --Relation public.statcon_tbl--
> INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
> Total CPU 0.00s/0.01u sec elapsed 0.01 sec.
> INFO:  --Relation pg_toast.pg_toast_891830--
> INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
> UnUsed 5487.
> Total CPU 3.21s/0.47u sec elapsed 18.03 sec.
> INFO:  Analyzing public.statcon_tbl
> VACUUM

Hm, the physical table sizes aren't very different, which suggests that
the problem must lie with the indexes.  Unfortunately, VACUUM in 7.3
doesn't tell you anything about indexes if it doesn't have any dead rows
to clean up.  Could you look at pg_class.relpages for all the indexes
of this table, and see what that shows?

regards, tom lane



Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Jona
Now with analyze
Test Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
   Total CPU 0.02s/0.00u sec elapsed 1.98 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.
   Total CPU 1.75s/0.23u sec elapsed 30.36 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM
Live Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
   Total CPU 0.00s/0.01u sec elapsed 0.01 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
   Total CPU 3.21s/0.47u sec elapsed 18.03 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM

Have done some sampling, running the same query a few times over the 
past few hours, and it appears that the VACUUM has helped.
The following are the results after the VACUUM:

After VACUUM VERBOSE:
Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..21.29 rows=5 width=4) (actual time=0.07..0.37 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

After VACUUM ANALYZE VERBOSE:
Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.03 rows=5 width=4) (actual time=0.09..0.37 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

The only question that remains is why one server uses its indexes and the 
other doesn't, even though VACUUM ANALYZE has now been run on both servers.
And even more interesting, before the VACUUM ANALYZE it was the server 
where no vacuum had taken place that used its index.
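
One way to dig further might be to compare the statistics the two planners are actually working from; a sketch against the standard pg_stats view, not something run as part of this thread:

SELECT attname, null_frac, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'statcon_tbl';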

Cheers
Jona
Christopher Kings-Lynne wrote:
You didn't do analyze.
Chris
Jona wrote:
  Results of VACUUM VERBOSE from both servers
Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, 
UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 
0, UnUsed 0.
Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
VACUUM

Live Server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, 
UnUsed 6101.
Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
VACUUM

Cheers
Jona
Tom Lane wrote:
Jona <[EMAIL PROTECTED]>  writes:
 

I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the 
difference in execution plans between our test server running 
PostGreSQL 7.3.6 and our production server running PostGreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB 
of memory and IDE disk.
The production server is a dual CPU XEON Pentium 4 with 2GB memory 
and SCSI disks.
One should expect the production server to be faster, but 
apparently not, as the outlined query plans below show.
  
I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:
->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 
rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
 Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND 
(statcon_tbl.ctpid = 1))

->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 
rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
 Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND 
(statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.
 

1) How come the query plans between the 2 servers are different?
  
The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test 
server?

It'd be interesting to see the output of "vacuum verbose statcon_tbl"
on both servers ...
regards, tom lane
PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...
 


Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Christopher Kings-Lynne
You didn't do analyze.
Chris
Jona wrote:
  Results of VACUUM VERBOSE from both servers
Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 0, 
UnUsed 0.
Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
VACUUM

Live Server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
VACUUM

Cheers
Jona
Tom Lane wrote:
Jona <[EMAIL PROTECTED]>  writes:
 

I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the 
difference in execution plans between our test server running PostGreSQL 
7.3.6 and our production server running PostGreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB of 
memory and IDE disk.
The production server is a dual CPU XEON Pentium 4 with 2GB memory and 
SCSI disks.
One should expect the production server to be faster, but apparently 
not, as the outlined query plans below show.
   

I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:
->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 
width=4) (actual time=0.05..0.31 rows=39 loops=4)
 Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 
1))
->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 
width=4) (actual time=27.97..171.84 rows=39 loops=4)
 Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 
1))
Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.
 

1) How come the query plans between the 2 servers are different?
   

The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?
It'd be interesting to see the output of "vacuum verbose statcon_tbl"
on both servers ...
regards, tom lane
PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...
 



Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Jona




Results of VACUUM VERBOSE from both servers

Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0,
UnUsed 0.
    Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 0,
UnUsed 0.
    Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
VACUUM

Live Server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed
6101.
    Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0,
UnUsed 5487.
    Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
VACUUM

Cheers
Jona

Tom Lane wrote:

  Jona <[EMAIL PROTECTED]> writes:
  
  
I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the 
difference in execution plans between our test server running PostGreSQL 
7.3.6 and our production server running PostGreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB of 
memory and IDE disk.
The production server is a dual CPU XEON Pentium 4 with 2GB memory and 
SCSI disks.
One should expect the production server to be faster, but apparently 
not, as the outlined query plans below show.

  
  
I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:

->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.

  
  
1) How come the query plans between the 2 servers are different?

  
  
The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?

It'd be interesting to see the output of "vacuum verbose statcon_tbl"
on both servers ...

			regards, tom lane

PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...

  





Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Jona




Thank you for the swift reply.
The test server is hardly ever vacuumed as it generally sees very
limited traffic. VACUUM is only necessary if the server sees a lot of
write operations, i.e. UPDATE, DELETE, INSERT, right?
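
That said, even a read-mostly table needs an occasional ANALYZE to keep the planner statistics current; a one-line sketch, not something that has been run here:

ANALYZE statcon_tbl;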

What explains the different choice of query plans then?
As can be seen from the following snippets, the test server decides to
use an index twice in Query 2, whereas the live server decides to do a
full scan of tables with 38.5k and 5.5k records.
In Query 3 it's vice versa.
Seems strange to me...

Query 2:
--- Bad idea, price_tbl holds 38.5k records

Test:
->  Index Scan using aff_price_uq on price_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=2838)

Live:
->  Seq Scan on price_tbl  (cost=0.00..883.48 rows=2434 width=4) (actual time=0.86..67.25 rows=4570 loops=1)
      Filter: (affid = 8)

--- Bad idea, sct2subcattype_tbl holds 5.5k records

Test:
->  Index Scan using subcat_uq on sct2subcattype_tbl  (cost=0.00..79.26 rows=26 width=8) (actual time=0.01..0.17 rows=59 loops=48)

Live:
->  Seq Scan on sct2subcattype_tbl  (cost=0.00..99.26 rows=5526 width=8) (actual time=0.01..30.16 rows=5526 loops=1)

Query 3:
- Bad idea, sct2lang_tbl has 8.6k records

Test:
->  Seq Scan on sct2lang_tbl  (cost=0.00..150.79 rows=8679 width=8) (actual time=0.03..10.70 rows=8679 loops=1)

Live:
->  Index Scan using sct2lang_uq on sct2lang_tbl  (cost=0.00..8.13 rows=2 width=8) (actual time=1.10..2.39 rows=2 loops=69)
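
To see whether the live server merely prices the index scans too high or genuinely mis-estimates the row counts, sequential scans could be discouraged for a single session before re-running the plans. This is only a sketch and has not been tried here:

-- sketch only: discourage seq scans for this session, then re-run the EXPLAIN ANALYZE
SET enable_seqscan TO off;
-- ... re-run Query 2 and Query 3 here and compare the plans ...
SET enable_seqscan TO on;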


Will get a VACUUM VERBOSE of StatCon_Tbl

Cheers
Jona

PS: The query plans are extracted using pgAdmin on Windows; if you can
recommend a better cross-platform PostgreSQL client, I'd be happy to try it
out.
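
In the meantime, plain psql can capture a plan without the mangling; a sketch of such a session (the SELECT here is only a stand-in for the real query):

comm=# \o plan.txt
comm=# EXPLAIN ANALYZE SELECT 1;  -- stand-in for the problem query
comm=# \o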

Tom Lane wrote:

  Jona <[EMAIL PROTECTED]> writes:
  
  
I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the 
difference in execution plans between our test server running PostGreSQL 
7.3.6 and our production server running PostGreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB of 
memory and IDE disk.
The production server is a dual CPU XEON Pentium 4 with 2GB memory and 
SCSI disks.
One should expect the production server to be faster, but apparently 
not, as the outlined query plans below show.

  
  
I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:

->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.

  
  
1) How come the query plans between the 2 servers are different?

  
  
The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?

It'd be interesting to see the output of "vacuum verbose statcon_tbl"
on both servers ...

			regards, tom lane

PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...

  





Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9 part 1

2005-05-05 Thread Tom Lane
Jona <[EMAIL PROTECTED]> writes:
> I'm currently experiencing problems with long query execution times.
> What I believe makes these problems particularly interesting is the 
> difference in execution plans between our test server running PostGreSQL 
> 7.3.6 and our production server running PostGreSQL 7.3.9.
> The test server is an upgraded "home machine", a Pentium 4 with 1GB of 
> memory and IDE disk.
> The production server is a dual CPU XEON Pentium 4 with 2GB memory and 
> SCSI disks.
> One should expect the production server to be faster, but apparently 
> not, as the outlined query plans below show.

I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:

->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 
width=4) (actual time=0.05..0.31 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 
1))

->  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 
width=4) (actual time=27.97..171.84 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 
1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.

> 1) How come the query plans between the 2 servers are different?

The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?

It'd be interesting to see the output of "vacuum verbose statcon_tbl"
on both servers ...

regards, tom lane

PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...
