Hopefully, I am not steering this in a different direction, but is there a way to
find out how much sort memory each query is taking up, so that we can scale that up
with increasing users?
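(There is no direct per-query counter for this, but one indirect way is to watch the spill files that sorts create once they outgrow sort_mem: they land in per-database pgsql_tmp directories under the data directory. The block below is only a sketch; it fakes a data-directory layout so the command can be shown end to end, and the OID and file name are made up.)

```shell
# Illustrative layout only: real clusters keep spill files under
# $PGDATA/base/<database-oid>/pgsql_tmp. We fake one here so the
# command can be demonstrated without a running server.
PGDATA=$(mktemp -d)
mkdir -p "$PGDATA/base/16384/pgsql_tmp"
# Pretend a running sort has spilled 64 KB to disk.
dd if=/dev/zero of="$PGDATA/base/16384/pgsql_tmp/pgsql_tmp1234.0" \
   bs=1024 count=64 2>/dev/null
# Sum spill-file sizes in KB; on a real server, run this repeatedly
# while the query executes to see how far past sort_mem it goes.
du -ck "$PGDATA"/base/*/pgsql_tmp/* | tail -n 1
```

Anything that shows up there is memory the sort wanted but could not get within sort_mem, which is a usable proxy for per-query sort appetite.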
From: scott.marlowe [mailto:[EMAIL PROTECTED]
Sent: Tue 10/21/2003 1:33 PM
To: Josh Berkus
Cc: Anjan Dave; Richard Huxton; [EMAIL PROTECTED]
Subject: Re: [PERFORM] Tuning for mid-size server
On Tue, 21 Oct 2003, Josh Berkus wrote:
> > From what I know, there is a cache-row-set functionality that doesn't
> > exist with the newer postgres...
> What? PostgreSQL has always used the kernel cache for queries.
> > Concurrent users will start from 1 to a high of 5000 or more, and could
> > ramp up rapidly. So far, with increased users, we have gone up to
> > starting the JVM (resin startup) with 1024megs min and max (recommended
> > by Sun) - on the app side.
> Well, just keep in mind when tuning that your calculations should be based on
> *available* RAM, meaning RAM not used by Apache or the JVM.
> With that many concurrent requests, you'll want to be *very* conservative with
> sort_mem; I might stick to the default of 1024 if I were you, or even lower
> it to 512k.
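To see why conservatism matters here, a quick back-of-the-envelope check (the figures are the thread's own numbers, assuming one sort per query, which is the best case):

```shell
# Worst-case RAM that concurrent sorts alone could claim.
# Each backend may use up to sort_mem KB per sort it runs.
users=5000          # peak concurrent users mentioned above
sort_mem_kb=1024    # the default Josh suggests keeping
echo "worst case: $(( users * sort_mem_kb / 1024 )) MB just for sorts"
# prints: worst case: 5000 MB just for sorts
```

Five gigabytes, before counting Apache, the JVM, or shared_buffers, on a value that looks harmlessly small per connection.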
Exactly. Remember, Anjan, that if you have a single sort that can't
fit in RAM, it will use the hard drive for temp space, effectively
"swapping" on its own. If the concurrent sorts run the server out of
memory, the server will start swapping processes, quite possibly the sorts,
in a sort of hideous round-robin death spiral that will bring your machine
to its knees at the worst possible time: midday, under load. sort_mem is
one of the small "foot guns" in the postgresql.conf file that people tend
to pick up and go "huh, what's this do?" right before cranking it up.
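A conservative postgresql.conf fragment along the lines suggested in this thread (the value is Josh's lower suggestion, not a universal recommendation; pick it from your own concurrency arithmetic):

```
# postgresql.conf -- per-sort limit in KB. Every concurrent sort can
# claim this much RAM, so with thousands of users keep it small and
# let genuinely big sorts spill to disk individually instead of
# pushing the whole server into swap.
sort_mem = 512
```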