On Friday 23 April 2004 13:21, scott.marlowe wrote:
> On Fri, 23 Apr 2004, Chris Hoover wrote:
> > DB's on Powervaults 220S using raid 5 (over 6 disks)
> What controller is this, the adaptec? We've found it to be slower than
> the LSI megaraid based controller, but YMMV.
We are using the PERC3/Di controller. I believe it is megaraid based.
> > Running RH ES 2.1
> Are you running the latest kernel for ES 2.1? Early 2.4 kernels are
> pretty pokey and have some odd behaviour under load that later 2.4
> kernels seemed to fix.
I'm not sure we are at the latest and greatest for 2.1, but I am trying to get
there. Management won't let me do the upgrade w/o first testing/proving it
will not cause any more issues. Due to all of the current issues, and the
criticality of these systems to our bottom line, they are being very careful
with any change that may impact our users further.
We are waiting on our datacenter to plug in our test server and powervault so
that we can test the upgrade to the latest RH 2.1 kernel.
> > Here is the postgresql.conf from the server with the 11GB db:
> > max_connections = 64
> > shared_buffers = 32768 # 256MB=32768(buffs)*8192(bytes/buff)
> > max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes
> > max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes
> IF you're doing lots of updates and such, you might want these higher.
> Have you vacuumed full the databases since taking over?
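On the FSM point: with heavy update traffic the quoted defaults are easy to outgrow. A sketch of larger values I'm considering (the numbers are illustrative guesses, not tuned for this workload):

```conf
# Illustrative postgresql.conf FSM settings for an update-heavy ~11GB db;
# sizes are guesses, not measurements from these servers.
max_fsm_relations = 10000      # one slot per table/index, ~40 bytes each
max_fsm_pages = 200000         # ~6 bytes each; pages freed between vacuums
```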
> > sort_mem = 4096 # 256MB=4096(bytes/proc)*64(procs or conns)
> Sorry, that's wrong. sort_mem is measured in kbytes, i.e. 8192 means 8
> megs sort_mem. Try setting it a bit higher (you've got LOTS of ram in
> these boxes) to something like 16 or 32 meg.
> > checkpoint_segments = 16 # in logfile segments, min 1, 16MB each
> > checkpoint_timeout = 30 # range 30-3600, in seconds
> > effective_cache_size = 131072 # typically 8KB each
> This still looks low. On one machine you're showing kernel cache of about
> .7 gig, on the other it's 6 gig. 6 gigs of kernel cache would be a
> setting of 800000. It's more of a nudge factor than an exact science, so
> don't worry too much.
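The arithmetic behind that 800000 figure, sketched out (effective_cache_size is counted in 8 KB pages):

```shell
# 6 GB of kernel cache expressed in 8 KB PostgreSQL pages.
cache_kb=$(( 6 * 1024 * 1024 ))   # 6 GB in KB
echo $(( cache_kb / 8 ))          # → 786432, i.e. roughly 800000
```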
I believe changing this requires a restart of the cluster (correct?). If so,
I'll try bumping up the effective_cache_size over the weekend.
Also, with all of the memory available to these machines, should I be running
with larger shared_buffers? It seems like 256M is a bit small.
> If you've got fast I/O look at lowering random page cost to something
> between 1 and 2. We use 1.3 to 1.4 on most of our machines with fast
> drives under them.
> I'd use vmstat to see if you're I/O bound.
If we end up being I/O bound, should the random page cost be set higher?
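For the vmstat check suggested above, the column to watch is "wa" (percent of CPU time waiting on I/O). A sketch using a fabricated sample line, since live output varies; on a real box you would pipe `vmstat 5` through the same awk (note: the wa column is only present in newer procps vmstat, and the threshold of 20 is an arbitrary illustration):

```shell
# Fabricated vmstat output line (procps layout ends ... us sy id wa).
# $NF picks the last column, wa; a sustained high value suggests an I/O bind.
sample=" 1  0      0 712000  84000 700000    0    0   120   340  500  900 10  5 55 30"
echo "$sample" | awk '{ if ($NF > 20) print "likely I/O bound"; else print "not obviously I/O bound" }'
# → likely I/O bound
```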
> also, look for index bloat. Before 7.4 it was a serious problem. With
> 7.4 regular vacuuming should reclaim most lost space, but there are corner
> cases where you still might need to re-index.
Thanks for the help,
---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend