On Wed, 2003-07-02 at 09:46, Michael Mattox wrote:
> > Shared buffers is probably too high.  How much memory in this machine?
> > Is there anything else running aside from PostgreSQL?  What does top say
> > about cached / buffered data (number)
> I was using the 25% of RAM guideline posted recently.  The machine has
> 1.5gig but it also has a couple other java applications running on it
> including tomcat.

Seems you have a ton of cached data, so it's probably fine.  Only way to
really find out is a benchmark of sorts.

> > I see you reduced the random_page_cost to 1.5.  Why did you do this (how
> > is your disk subsystem configured)?
> Someone suggested I lower it to 1.5 or 1.0, not sure what the reasoning was.
> The disks are both SCSI 10,000 RPM.  My data directory is on one disk by

I'd be tempted to bump it up to 2.0 or 2.5 since data is on a single
disk (sequential scans *will* be faster than an index scan), but you
would need to run a benchmark on your disk to see if that is right.

This setting shouldn't affect this query, however, unless you change the
value by a few orders of magnitude.

> there's now 7693057 rows in monitorstatusx and monitorstatusitemx as well as
> the necessary rows for the join tables.

Looks to me like 50% of the database should be in memory already if
Linux is caching the right stuff.

> > During normal use, what is your query spread like?  Mostly selects with
> > some inserts?  Any updates or deletes?  How often to updates or deletes
> > come in, and how many rows do they effect?
> There is a query on monitorx by datex every 10 seconds (monitors are updated
> every 5 minutes, so every 10 seconds I get the monitors that are due for an
> update).  Each monitor is then saved with its status field modified, and a
> new status item is inserted.  This happens every 5 minutes.  There are 

Hence the vacuum every 5 minutes.  Sounds good.

Every monitor is updated every 5 minutes?

> Before I guess the index with monitorx,datex didn't do much because all the
> data had the same date.  But now that I have over 2 weeks of real data, it
> makes a difference.

Yeah, the more the date diverges, the more useful that index will be.

Your reports: are they always on the most recent dates, or do they vary
in time? Are queries distributed against old data as much as against the
new?
If not (the examples you've shown have been recent), you might try a
partial index on (datex, monitorx) plus a plain index on monitorx.

CREATE INDEX ON .... (datex, monitorx) WHERE datex >= '2003-07-01';

After data from the 1st is no longer useful (no longer being queried
frequently), drop that index and recreate it with WHERE datex >=
<relevant date here>.
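The drop-and-recreate cycle might look something like this (the index
name and cutoff date here are made up for illustration; monitorstatusx
is the table from your earlier message):

```sql
-- Hypothetical index name and cutoff; adjust to your schema.
DROP INDEX monitorstatus_recent_idx;
CREATE INDEX monitorstatus_recent_idx
    ON monitorstatusx (datex, monitorx)
    WHERE datex >= '2003-07-15';
-- Refresh statistics so the planner knows about the new index.
ANALYZE monitorstatusx;
```

One caveat: the planner will only use a partial index when it can prove
the query's WHERE clause implies the index predicate, so your report
queries need an explicit datex constraint at or above the cutoff.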

This can make a significant improvement, but it sacrifices performance
on queries over older timeframes.

Since you're doing frequent updates, don't forget to drop any non-useful
indexes. Maintaining indexes significantly lengthens the time each
update takes.
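One way to spot candidates for dropping (assuming the statistics
collector is enabled in postgresql.conf) is to check
pg_stat_user_indexes; indexes whose idx_scan counter never moves aren't
being used by any query:

```sql
-- Requires the statistics collector (stats_start_collector = true,
-- stats_row_level = true). idx_scan stays at 0 for unused indexes.
SELECT relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
 ORDER BY idx_scan;
```

Counters accumulate since the stats were last reset, so give it a day
or two of normal load before trusting a zero.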


PGP Key: http://www.rbt.ca/rbtpub.asc
