Thanks for the replies, guys,

Chris -
very cool feature, timing - didn't know about that one. It appears to be taking the following times in pulling up the page (adding up to the ~3.5 second total):
web browser: 1.15 sec
postgres: 1.52 sec
other: 0.83 sec


Andrew:
Query looks like the following:

explain analyse SELECT

job.*,
customer.*,
ubd.suburb, location.*,
street.street,
location.designation_no,
a1.initials as surveyor,
a2.initials as draftor,
prices.*,
plans.*

FROM

job,
customer,
login a1,
login a2,
prices,
location,
street,
ubd,
plans

WHERE

(
a1.code = job.surveyor_no AND
a2.code = job.draftor_no AND
job.customer_no = customer.customer_no AND
job.location_no = location.location_no AND
location.suburb_no = ubd.suburb_id AND
location.street_no = street.street_no AND
job.price_id = prices.pricelist_id AND
job.price_revision = prices.revision AND
location.plan_no = plans.number AND
location.plan_type = plans.plantype AND

( (job.jobbookflag <> 'flagged') AND ( job.status = 'normal' ) ))

ORDER BY job_no DESC;
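
(For reference, the same query in explicit JOIN syntax - a sketch only, with the same tables and join conditions as above; depending on the PostgreSQL version and settings, writing the joins explicitly can also pin down the planner's join order:)

SELECT job.*, customer.*, ubd.suburb, street.street, location.*,
       a1.initials AS surveyor, a2.initials AS draftor,
       prices.*, plans.*
FROM job
JOIN customer  ON customer.customer_no = job.customer_no
JOIN location  ON location.location_no = job.location_no
JOIN ubd       ON ubd.suburb_id = location.suburb_no
JOIN street    ON street.street_no = location.street_no
JOIN login a1  ON a1.code = job.surveyor_no
JOIN login a2  ON a2.code = job.draftor_no
JOIN prices    ON prices.pricelist_id = job.price_id
              AND prices.revision = job.price_revision
JOIN plans     ON plans.number = location.plan_no
              AND plans.plantype = location.plan_type
WHERE job.jobbookflag <> 'flagged'
  AND job.status = 'normal'
ORDER BY job.job_no DESC;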



QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=566.31..567.06 rows=298 width=2626) (actual time=1378.38..1380.08 rows=353 loops=1)
  Sort Key: job.job_no
  ->  Hash Join (cost=232.59..554.06 rows=298 width=2626) (actual time=124.96..1374.12 rows=353 loops=1)
        Hash Cond: ("outer".suburb_no = "inner".suburb_id)
        ->  Hash Join (cost=221.45..519.06 rows=288 width=2606) (actual time=118.60..1187.87 rows=353 loops=1)
              Hash Cond: ("outer".street_no = "inner".street_no)
              ->  Hash Join (cost=204.79..496.64 rows=287 width=2587) (actual time=108.16..997.57 rows=353 loops=1)
                    Hash Cond: ("outer".surveyor_no = "inner".code)
                    ->  Hash Join (cost=203.21..490.05 rows=287 width=2573) (actual time=106.89..823.47 rows=353 loops=1)
                          Hash Cond: ("outer".customer_no = "inner".customer_no)
                          ->  Hash Join (cost=159.12..440.93 rows=287 width=2291) (actual time=92.16..654.51 rows=353 loops=1)
                                Hash Cond: ("outer".draftor_no = "inner".code)
                                ->  Hash Join (cost=157.55..434.33 rows=287 width=2277) (actual time=90.96..507.34 rows=353 loops=1)
                                      Hash Cond: ("outer".price_id = "inner".pricelist_id)
                                      Join Filter: ("outer".price_revision = "inner".revision)
                                      ->  Hash Join (cost=142.95..401.01 rows=336 width=2150) (actual time=82.57..377.87 rows=353 loops=1)
                                            Hash Cond: ("outer".plan_no = "inner".number)
                                            Join Filter: ("outer".plan_type = "inner".plantype)
                                            ->  Hash Join (cost=25.66..272.20 rows=418 width=2110) (actual time=14.58..198.50 rows=353 loops=1)
                                                  Hash Cond: ("outer".location_no = "inner".location_no)
                                                  ->  Seq Scan on job (cost=0.00..238.18 rows=418 width=2029) (actual time=0.31..95.21 rows=353 loops=1)
                                                        Filter: ((jobbookflag <> 'flagged'::character varying) AND (status = 'normal'::character varying))
                                                  ->  Hash (cost=23.53..23.53 rows=853 width=81) (actual time=13.91..13.91 rows=0 loops=1)
                                                        ->  Seq Scan on "location" (cost=0.00..23.53 rows=853 width=81) (actual time=0.03..8.92 rows=853 loops=1)
                                            ->  Hash (cost=103.43..103.43 rows=5543 width=40) (actual time=67.55..67.55 rows=0 loops=1)
                                                  ->  Seq Scan on plans (cost=0.00..103.43 rows=5543 width=40) (actual time=0.01..36.89 rows=5544 loops=1)
                                      ->  Hash (cost=13.68..13.68 rows=368 width=127) (actual time=7.98..7.98 rows=0 loops=1)
                                            ->  Seq Scan on prices (cost=0.00..13.68 rows=368 width=127) (actual time=0.03..5.83 rows=368 loops=1)
                                ->  Hash (cost=1.46..1.46 rows=46 width=14) (actual time=0.57..0.57 rows=0 loops=1)
                                      ->  Seq Scan on login a2 (cost=0.00..1.46 rows=46 width=14) (actual time=0.02..0.31 rows=46 loops=1)
                          ->  Hash (cost=42.07..42.07 rows=807 width=282) (actual time=14.24..14.24 rows=0 loops=1)
                                ->  Seq Scan on customer (cost=0.00..42.07 rows=807 width=282) (actual time=0.03..9.03 rows=807 loops=1)
                    ->  Hash (cost=1.46..1.46 rows=46 width=14) (actual time=0.57..0.57 rows=0 loops=1)
                          ->  Seq Scan on login a1 (cost=0.00..1.46 rows=46 width=14) (actual time=0.02..0.31 rows=46 loops=1)
              ->  Hash (cost=14.53..14.53 rows=853 width=19) (actual time=9.79..9.79 rows=0 loops=1)
                    ->  Seq Scan on street (cost=0.00..14.53 rows=853 width=19) (actual time=0.01..5.12 rows=853 loops=1)
        ->  Hash (cost=9.91..9.91 rows=491 width=20) (actual time=5.73..5.73 rows=0 loops=1)
              ->  Seq Scan on ubd (cost=0.00..9.91 rows=491 width=20) (actual time=0.02..2.98 rows=491 loops=1)
Total runtime: 1383.99 msec
(39 rows)


Time: 1445.80 ms



I tried setting up 10-15 indexes yesterday, but couldn't see that they were doing anything, so I have since deleted them (on the premise that I didn't have a clue what I was doing).

I'm not actually using any keys in this database... would they be a simpler way of speeding up my queries? I only learnt postgres / unix from scratch a year ago, so my db setup and queries are probably pretty messy :)
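
From what I've read, the sort of thing I should probably be doing is along these lines - just a sketch, with the column names taken from the query above, and uniqueness assumptions I'd still have to check against the real schema:

-- Primary keys on the tables the big query joins to
-- (assumes these columns really are unique and not null):
ALTER TABLE customer ADD PRIMARY KEY (customer_no);
ALTER TABLE location ADD PRIMARY KEY (location_no);
ALTER TABLE street   ADD PRIMARY KEY (street_no);
ALTER TABLE ubd      ADD PRIMARY KEY (suburb_id);
ALTER TABLE login    ADD PRIMARY KEY (code);
ALTER TABLE prices   ADD PRIMARY KEY (pricelist_id, revision);
ALTER TABLE plans    ADD PRIMARY KEY (number, plantype);

-- Plain indexes on the job columns used in the joins and the sort:
CREATE INDEX job_customer_no_idx ON job (customer_no);
CREATE INDEX job_location_no_idx ON job (location_no);
CREATE INDEX job_job_no_idx      ON job (job_no);

-- Refresh the planner's statistics afterwards:
VACUUM ANALYZE;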

Thanks,
Dave
[EMAIL PROTECTED]





----- Original Message -----
From: "Andrei Reinus" <[EMAIL PROTECTED]>
To: "SpaceBallOne" <[EMAIL PROTECTED]>
Cc: <pgsql-performance@postgresql.org>
Sent: Monday, January 24, 2005 5:22 PM
Subject: Re: [PERFORM] poor performance of db?



SpaceBallOne wrote:

Hello everyone,

First time poster to the mailing list here.

We have been running pgsql for about a year now at a pretty basic
level (I guess) as a backend for custom web (intranet) application
software. Our database so far is a "huge" (note sarcasm) 10 Mb,
consisting of about 6 or so principal tables.

The 'test' screen we've been using loads a 600kb HTML document which
is basically a summary of our client's orders. It originally took 11.5
seconds to load in Internet Explorer (and 10.99 of those seconds were
taken up by postgres processes on a FreeBSD server).

I then rewrote the page to use a single select query to fetch all the
information needed by PHP to draw the screen. That managed to shave it
down to 3.5 seconds... but this, so far, is as fast as I can get the
page to load. I have tried vacuuming and creating indexes, but to no
avail (increasing shared memory buffers is yet to be done).
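
(For the record, the change I have in mind is the postgresql.conf line below - the value is an illustrative guess, not a recommendation:)

shared_buffers = 4096        # 4096 x 8kB pages = 32Mb; needs a server restart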

Now here's the funny bit...

Every time I tested an idea to speed it up, I got exactly the same
loading time on an Athlon 1800+, 256Mb RAM, 20Gb PATA computer as
on a dual Opteron 246, 1Gb RAM, 70Gb WD Raptor SATA server. Why a
dual Opteron machine can't perform any faster than a lowly Athlon
1800+ in numerous tests is completely beyond me... increased memory
and RAID 0 disc configurations so far have not resulted in any
significant performance gain on the Opteron server.

Do these facts sound right? If postgres is meant to be an
industrial-strength database that can handle 200Gb, should it really
be taking this long to pull 600kb worth of info from a 10Mb database?
And why is there no performance difference between two vastly
differently spec'd computers? Am I missing some vital postgres.conf
setting?

Any advice welcome.

Thanks,
Dave
[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>


Could you give us a bit more info on what you are trying to do? An EXPLAIN ANALYZE would be great. In my experience, the first problem with a first db app is that no indexes are used in the joins.
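
For example - hypothetical table names, adjust to your schema:

EXPLAIN ANALYZE
SELECT *
FROM orders, customers
WHERE orders.customer_id = customers.id;

If the join shows only "Seq Scan" nodes, the planner found no usable index on the join keys (although on very small tables a seq scan can still be the cheapest plan anyway).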

--
-- Andrei Reinus



