Sorry I wasn't clear. We do have nightly vacuum crons defined on all pg
servers. Apparently, this one had been taking many hours to finish
recently, so we ran an additional vacuum during the daytime when volume
was low, and that one finished quickly.
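For reference, the nightly job is just a crontab entry along these lines (the path, schedule, and use of vacuumdb here are illustrative placeholders, not our exact entry):

```shell
# Hypothetical crontab line for the postgres user: vacuum and analyze
# all databases at 2am using vacuumdb, which ships with PostgreSQL 7.4.
0 2 * * * /usr/bin/vacuumdb --all --analyze --quiet
```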

The platform I mentioned is Red Hat 9, PostgreSQL 7.4, on a Dell PowerEdge 2650.

Here's the output of 'top', taken just now. I can also capture the
stats during peak time in the afternoon:

68 processes: 67 sleeping, 1 running, 0 zombie, 0 stopped
CPU0 states:  3.1% user  4.4% system  0.0% nice  0.0% iowait  92.0% idle
CPU1 states:  0.0% user  3.2% system  0.0% nice  0.0% iowait  96.3% idle
CPU2 states:  0.4% user  0.3% system  0.0% nice  0.0% iowait  98.3% idle
CPU3 states:  0.3% user  1.0% system  0.0% nice  0.0% iowait  98.2% idle
Mem:  3874188k av, 3622296k used,  251892k free,       0k shrd,  322372k buff
                   2369836k actv,  454984k in_d,   44568k in_c
Swap: 4096532k av,   24552k used, 4071980k free                2993384k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 4258 postgres  16   0 88180  86M 85796 S     2.1  2.2  14:55   0 postmaster
 5260 postgres  15   0 85844  83M 84704 S     0.0  2.2   2:51   1 postmaster
14068 root      23   0 69240  67M  2164 S     3.9  1.7  59:44   2 wish
 3157 postgres  15   0 50364  49M 48484 S     0.0  1.2   0:02   3 postmaster
 2174 postgres  15   0 50196  48M 48380 S     0.1  1.2   0:00   0 postmaster
 3228 postgres  15   0 49292  48M 47536 S     0.0  1.2   0:00   3 postmaster
 3050 postgres  15   0 49184  47M 47364 S     0.5  1.2   0:00   2 postmaster
 2725 postgres  15   0  7788 7688  6248 S     0.0  0.1   0:00   3 postmaster
 3600 postgres  16   0  5812 5700  4784 S     0.0  0.1   0:00   3 postmaster
 1342 gdm       15   0 12988 5560  2056 S     0.0  0.1  19:36   3 gdmgreeter

According to top's man page:
RSS is: "The total amount of physical memory used by the task, in kilobytes..."
SHARE is: "The amount of shared memory used by the task is shown..."
SIZE is: "The size of the task's code plus data plus stack space, in kilobytes..."

I am not sure how to calculate from the above values whether the 80MB
shared_buffers setting (in postgresql.conf) should be increased or
decreased, because during higher loads the number of postmaster
instances goes up to 100 (limited by max_connections), each at an RSS
of about 87MB...
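If top really is counting the shared_buffers block into each backend's RSS, a rough sketch of the true footprint would subtract SHARE from RSS per backend. The figures below are taken from the PID 4258 row above and are only illustrative:

```shell
# Sketch, assuming top double-counts shared memory into each backend's RSS:
# per-backend private memory is roughly RSS - SHARE, not the full RSS.
rss_kb=88064     # RSS of 86M from the top output above, in kB
share_kb=85796   # SHARE column for the same backend
private_kb=$((rss_kb - share_kb))
echo "private memory per backend: ${private_kb} kB"

# With ~100 backends, total memory use is then closer to
# shared_buffers + 100 * private, not 100 * RSS:
shared_kb=$((80 * 1024))   # 80MB shared_buffers from postgresql.conf
echo "approx total: $((shared_kb + 100 * private_kb)) kB"
```

On those numbers the total works out to roughly 300MB rather than the 8GB a naive 100 x 87MB multiplication suggests.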

Thanks,
anjan

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 15, 2004 7:52 PM
To: Anjan Dave
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] shared_buffer value 


"Anjan Dave" <[EMAIL PROTECTED]> writes:
> Question is, does the 80MB buffer allocation correspond to ~87MB per 
> postmaster instance? (with about 100 instances of postmaster, that 
> will be about 100 x 80MB = 8GB??)

Most likely, top is counting some portion of the shared memory block
against each backend process.  This behavior is platform-specific,
however, and you did not tell us what platform you're on.

> Interestingly, at one point, we vacuumed the database, and the size 
> reported by 'df -k' on the pgsql slice dropped very 
> significantly...guess, it had been using a lot of temp files?

"At one point"?  If your setup doesn't include *routine* vacuuming, you
are going to have problems with file bloat.  This isn't something you
can do just when you happen to remember it --- it needs to be driven off
a cron job or some such.  Or use the contrib autovacuum daemon. You want
to vacuum often enough to keep the database size more or less constant.

                        regards, tom lane
