Turns out the man page of vmstat in procps was changed on Oct 8 2002:
http://cvs.sourceforge.net/viewcvs.py/procps/procps/vmstat.8?r1=1.1&r2=1.2
in reaction to a debian bug report:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=157935
--
Markus Bertheau [EMAIL PROTECTED]
and certainly anyone who's been around a computer more than a week or
two knows which direction in and out are customarily seen from.
regards, tom lane
Apparently not whoever wrote the man page that everyone copied ;-)
Interesting. I checked this on several machines. They actually
correctly then all
the above will make a massive difference in performance.
Rod
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rod Taylor
Sent: 25 October 2004 22:19
To: Anjan Dave
Cc: Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs
-anjan
-Original Message-
From: Rod Taylor [mailto:[EMAIL PROTECTED]
Sent: Monday, October 25, 2004 5:19 PM
To: Anjan Dave
Cc: Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs
On Mon, 2004-10-25 at 16:53, Anjan Dave wrote:
Hi,
I am dealing with an app here that uses pg to handle a few thousand
concurrent web users. It seems that under heavy load, the INSERT and
UPDATE statements to one or two specific tables keep queuing up, to
the count of 150+.
On Tue, 2004-10-26 at 13:42, Anjan Dave wrote:
It probably is a locking issue. I got a long list of locks held when we
ran select * from pg_locks during a peak time.
relation | database | transaction | pid | mode | granted
Anjan,
It probably is a locking issue. I got a long list of locks held when we
ran select * from pg_locks during a peak time.
Do the back-loaded tables have FKs on them? This would be a likely cause
of lock contention, and thus serializing inserts/updates to the tables.
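The thread never shows the full pg_locks listing, but the thing to look for is rows with granted = f, which mark sessions actually waiting on a lock. A throwaway sketch of that filter (the sample rows below are made up for illustration; the column layout follows the header quoted above):

```python
# Filter a saved "select * from pg_locks" dump for ungranted locks.
# Columns, per the header quoted in the thread:
# relation | database | transaction | pid | mode | granted
# The data rows here are invented for illustration.

sample = """\
relation | database | transaction | pid | mode | granted
17234 | 17142 | | 5441 | RowExclusiveLock | t
17234 | 17142 | | 5478 | RowExclusiveLock | f
"""

def ungranted(dump: str):
    rows = [line.split("|") for line in dump.strip().splitlines()[1:]]
    # granted = f means the lock is requested but not yet held
    return [[f.strip() for f in r] for r in rows if r[-1].strip() == "f"]

waiting = ungranted(sample)
for row in waiting:
    print("pid", row[3], "waiting for", row[4])
```

If every row shows granted = t (as Tom points out later in the thread), the queueing is not a lock wait at all.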
--Josh
I don't have iostat on that machine, but vmstat shows a lot of writes to
the drives, and the runnable processes are more than 1:
6 1 0 3617652 292936 2791928 0 0 0 52430 1347 4681 25 19 20 37
Assuming that's the output of 'vmstat 1' and not some other delay,
that's 50MB/second of writes to the drives.
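The 50MB/second figure follows from the bo column: vmstat counts 1024-byte blocks per second, so 52430 blocks/s comes out just over 51 MiB/s (assuming the one-second sampling interval, as above):

```python
# vmstat's bi/bo columns are in blocks per second; on Linux a "block"
# here is 1024 bytes. Using the bo value from the sample line above:
bo_blocks_per_sec = 52430
mb_per_sec = bo_blocks_per_sec * 1024 / (1024 * 1024)  # MiB/s
print(round(mb_per_sec, 1))  # roughly 51 MiB/s, i.e. the "50MB/second"
```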
[mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 26, 2004 2:29 PM
To: Anjan Dave
Cc: Rod Taylor; Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs
I don't have iostat on that machine, but vmstat shows a lot of writes
to the drives, and the runnable processes are more than 1:
[EMAIL PROTECTED]
Sent: Tuesday, October 26, 2004 5:53 PM
To: Anjan Dave
Cc: Rod Taylor; Postgresql Performance
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs
Anjan Dave [EMAIL PROTECTED] writes:
None of the locks are in state false actually.
In that case you don't have a locking problem.
Anjan Dave [EMAIL PROTECTED] writes:
One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly,
it's the 'bo' column that shows high numbers (reads from disk). With so
many INSERT/UPDATEs, I would expect it the other way around...
On Tue, 26 Oct 2004, Tom Lane wrote:
Anjan Dave [EMAIL PROTECTED] writes:
One thing I am not sure is why 'bi' (disk writes) stays at 0 mostly,
it's the 'bo' column that shows high numbers (reads from disk). With so
many INSERT/UPDATEs, I would expect it the other way around...
Er ... it *is* the other way around. bi is blocks in (to the CPU),
bo is blocks out (from the CPU).
Anjan,
Oct 26 17:26:25 vl-pe6650-003 postgres[14273]: [4-1] LOG: recycled transaction log file 000B0082
...
Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [2-1] LOG: recycled transaction log file 000B0083
Oct 26 17:31:27 vl-pe6650-003 postgres[14508]: [3-1] LOG:
about it if there's some info on it somewhere.
Thanks,
Anjan
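Those "recycled transaction log file" lines give a rough handle on WAL traffic: each segment is 16MB (the PostgreSQL default), and the two messages are about five minutes apart. A small sketch of that arithmetic, taking the timestamps from the log excerpt above and assuming one segment per message:

```python
# Estimate WAL turnover from the syslog "recycled transaction log file"
# messages. Each WAL segment is 16MB (PostgreSQL default build setting).
# Assumes one segment recycled per log line, which undercounts if a
# checkpoint recycles several files at once.
from datetime import datetime

t1 = datetime.strptime("Oct 26 17:26:25", "%b %d %H:%M:%S")
t2 = datetime.strptime("Oct 26 17:31:27", "%b %d %H:%M:%S")
interval_s = (t2 - t1).total_seconds()
wal_mb_per_min = 16 * 60 / interval_s
print(f"one 16MB segment per {interval_s:.0f}s ~ {wal_mb_per_min:.1f} MB/min of WAL")
```

A few MB/min of WAL is tiny next to the ~50MB/s the vmstat output shows, which is another hint the bulk of that traffic is data-file I/O, not WAL.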
-Original Message-
From: Josh Berkus [mailto:[EMAIL PROTECTED]
Sent: Tue 10/26/2004 8:42 PM
To: [EMAIL PROTECTED]
Cc: Anjan Dave; Tom Lane; Rod Taylor
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs
Curtis Zinzilieta [EMAIL PROTECTED] writes:
On Tue, 26 Oct 2004, Tom Lane wrote:
Er ... it *is* the other way around. bi is blocks in (to the CPU),
bo is blocks out (from the CPU).
Ummm.
[EMAIL PROTECTED] T2]$ man vmstat
bi: Blocks sent to a block device (blocks/s).
bo: Blocks received from a block device (blocks/s).
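For what it's worth, the corrected man page (post-2002, per the diff Markus found) reads the other way around: bi is blocks received from a block device (reads), bo is blocks sent to one (writes). A small sketch of reading a vmstat line under that interpretation, with the column order assumed from procps vmstat and the numbers from Anjan's sample earlier in the thread:

```python
# Parse one line of `vmstat 1` output. Column order assumed from
# procps vmstat: r b swpd free buff cache si so bi bo in cs us sy id wa
# Per the corrected man page: bi = blocks read from disk,
# bo = blocks written to disk (both in blocks/s).
fields = "r b swpd free buff cache si so bi bo in cs us sy id wa".split()
line = "6 1 0 3617652 292936 2791928 0 0 0 52430 1347 4681 25 19 20 37"
stats = dict(zip(fields, map(int, line.split())))
print("reads  (bi):", stats["bi"], "blocks/s")   # 0
print("writes (bo):", stats["bo"], "blocks/s")   # 52430 -- write-heavy
```

Under that reading, bi = 0 with bo high is exactly what an INSERT/UPDATE-heavy workload should look like.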
To: "Rod Taylor" [EMAIL PROTECTED]; "Postgresql Performance"
[EMAIL PROTECTED]
Sent: Wednesday, October 27, 2004 12:21 PM
Subject: Re: [PERFORM] can't handle large number of INSERT/UPDATEs

Curtis Zinzilieta [EMAIL PROTECTED] writes:
On Tue, 26 Oct 2004, Tom Lane wrote:
Hi,
I am dealing with an app here that uses pg to handle a few
thousand concurrent web users. It seems that under heavy load, the INSERT and
UPDATE statements to one or two specific tables keep queuing up, to the count
of 150+ (one table has about 432K rows, the other has about 2.6 million rows).
On Mon, 2004-10-25 at 16:53, Anjan Dave wrote:
Hi,
I am dealing with an app here that uses pg to handle a few thousand
concurrent web users. It seems that under heavy load, the INSERT and
UPDATE statements to one or two specific tables keep queuing up, to
the count of 150+ (one table has about 432K rows, the other about 2.6 million).
On Oct 25, 2004, at 13:53, Anjan Dave wrote:
I am dealing with an app here that uses pg to handle a few thousand
concurrent web users. It seems that under heavy load, the INSERT and
UPDATE statements to one or two specific tables keep queuing up, to
the count of 150+ (one table has about 432K rows, the other about 2.6 million).