[ This was written offline yesterday.  Now I see that most of it
has already been covered.  I'm sending it anyway ... ]

On Tue, 24 Jun 2003 09:39:32 +0200, "Michael Mattox" wrote:
>Websites are monitored every 5 or 10 minutes (depends on client),
>there are 900 monitors which comes out to 7,800 monitorings per hour.

So your server load - at least INSERT, UPDATE, DELETE - is absolutely
predictable.  This is good.  It enables you to design a cron-driven
VACUUM strategy.

|INFO:  --Relation public.jdo_sequencex--
|INFO:  Pages 28: Changed 1, Empty 0; Tup 1: Vac 5124, Keep 0, UnUsed 0.
                                          ^      ^^^^
This table could stand more frequent VACUUMs, every 15 minutes or so.
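Since the write load is predictable, cron can drive this.  A crontab
entry along these lines would do it (database name and psql invocation
are assumptions, adjust to your setup):

```
# vacuum the hot single-row table every 15 minutes
*/15 * * * *  psql -c 'VACUUM ANALYSE public.jdo_sequencex;' yourdb
```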

BTW, from the name of this table and from the fact that there is only
one live tuple I guess that you are using it to keep track of a
sequence number.  By using a real sequence you could get what you need
with less contention; and you don't have to VACUUM a sequence.
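For example (a sketch -- the sequence name is my invention, and the
mapping to your JDO configuration is up to you):

```sql
-- A native sequence instead of a single-row counter table.
-- nextval() does not leave dead tuples behind and does not
-- block concurrent callers the way UPDATE on one row does.
CREATE SEQUENCE jdo_seq;

SELECT nextval('jdo_seq');
```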

|INFO:  --Relation public.monitorx--
|INFO:  Removed 170055 tuples in 6036 pages.
|        CPU 0.52s/0.81u sec elapsed 206.26 sec.
|INFO:  Pages 6076: Changed 0, Empty 0; Tup 2057: Vac 170055, Keep 568, UnUsed 356.
|        Total CPU 6.28s/13.23u sec elapsed 486.07 sec.

The Vac : Tup ratio for this table is more than 80.  You have to
VACUUM this table more often.  How long is "overnight"?  Divide that
duration by 80 and use the result as the interval between runs of
        VACUUM [VERBOSE] [ANALYSE] public.monitorx;

Thus you'd have approximately as many dead tuples as live tuples and
the table size should not grow far beyond 150 pages (after an initial
VACUUM FULL, of course).  Then VACUUM of this table should take no
more than 20 seconds.
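To make the arithmetic above concrete, here is the calculation with
the figures from your VACUUM VERBOSE output; the 8-hour "overnight"
window is my assumption, not something you stated:

```python
# Back-of-the-envelope VACUUM interval for public.monitorx,
# from the Vac and Tup counts in the VACUUM VERBOSE output.
dead_tuples = 170055            # "Vac" count reported by VACUUM
live_tuples = 2057              # "Tup" count reported by VACUUM
ratio = dead_tuples / live_tuples

overnight_minutes = 8 * 60      # assumed length of "overnight"
interval_minutes = overnight_minutes / ratio

print(round(ratio))             # roughly 83
print(round(interval_minutes))  # roughly 6 (minutes between VACUUMs)
```

With an interval in that range the dead-tuple count stays near the
live-tuple count instead of growing 80-fold between runs.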

Caveat:  Frequent ANALYSEs create dead rows in pg_statistic, so that
system catalog may then need VACUUMing, too.

>  The
>monitor table has columns "nextdate" and "status" which are updated with
>every monitoring, [...]
> updating the "nextdate" before the monitoring and inserting the
>status and status item records after.

Do you mean updating monitor.nextdate before the monitoring and
monitor.status after the monitoring?  Can you combine these two
UPDATEs into one?
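Under MVCC every UPDATE of a row leaves a dead tuple behind, so
combining the two would halve the VACUUM load on this table.  If both
values are known at the same point, something like this (table, column,
and key names assumed from your description):

```sql
-- One UPDATE instead of two: half the dead tuples per monitoring.
UPDATE monitorx
   SET nextdate = :nextdate,
       status   = :status
 WHERE id = :monitor_id;
```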

>  During the vacuum my application does slow down quite a bit

Yes, because VACUUM does lots of I/O.

> and afterwards is slow speeds back up.

... because the working set is slowly fetched into the cache after
having been flushed out by VACUUM.  Your five largest relations are
monitorstatus_statusitemsx, monitorstatusitemlistd8ea58a5x,
monitorstatusitemlistx, monitorstatusitemx, and monitorstatusx.  The
heap relations alone (without indexes) account for 468701 pages,
almost 4GB.  VACUUMing these five relations takes 23 minutes for
freeing less than 200 out of 6 million tuples for each relation.  This
isn't worth it.  Unless the same tuples are updated over and over,
scheduling a VACUUM after every half million deletions/updates should
be sufficient.

>shared_buffers = 3072           # min max_connections*2 or 16, 8KB each
>sort_mem = 8192         # min 64, size in KB
>vacuum_mem = 24576              # min 1024, size in KB
>The rest are left uncommented (using the defaults).

As has already been said, don't forget effective_cache_size.  I'm not
so sure about random_page_cost.  Try to find out which queries are too
slow.  EXPLAIN ANALYSE is your friend.
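For reference, a postgresql.conf sketch -- the value is an illustrative
assumption for a box that leaves about 1.5 GB to the OS cache, not a
recommendation for your hardware:

```
# effective_cache_size is measured in 8 KB pages (7.x):
effective_cache_size = 196608   # 196608 * 8 KB = 1.5 GB
# random_page_cost defaults to 4; lower it only after measuring.
#random_page_cost = 4
```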

One more thing:  I see 2 or 3 UPDATEs and 5 INSERTs per monitoring.
Are these changes wrapped into a single transaction?
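If not, wrapping them saves you a COMMIT (and its fsync) per
statement.  A sketch with placeholder column lists:

```sql
BEGIN;
UPDATE monitorx SET nextdate = ..., status = ... WHERE id = ...;
INSERT INTO monitorstatusx (...) VALUES (...);
-- ... the remaining INSERTs for the status items ...
COMMIT;
```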

