Re-ran it 3 times on each host -
Sun:
-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 10
number of transactions per client: 3000
number of transactions actually processed: 30000/30000
tps = 827.810778 (including connections establishing)
tps = 828.410801 (excluding connections establishing)
real 0m36.579s
user 0m1.222s
sys 0m3.422s
Intel:
-bash-3.00$ time pgbench -s 10 -c 10 -t 3000 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 10
number of transactions per client: 3000
number of transactions actually processed: 30000/30000
tps = 597.067503 (including connections establishing)
tps = 597.606169 (excluding connections establishing)
real 0m50.380s
user 0m2.621s
sys 0m7.818s
Thanks,
Anjan
-----Original Message-----
From: Anjan Dave
Sent: Wed 12/7/2005 10:54 AM
To: Tom Lane
Cc: Vivek Khera; Postgresql Performance
Subject: Re: [PERFORM] High context switches occurring
Thanks for your input, Tom. I was going after a high number of
concurrent clients, but I should have read this more carefully -
  -s scaling_factor
         this should be used with -i (initialize) option.
         number of tuples generated will be multiple of the
         scaling factor. For example, -s 100 will imply 10M
         (10,000,000) tuples in the accounts table.
         default is 1. NOTE: scaling factor should be at least
         as large as the largest number of clients you intend
         to test; else you'll mostly be measuring update
         contention.
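If I understand that correctly, I need to rebuild the test tables at
the larger scale before running, something like this (using the same
pgbench database as in the runs above):

   pgbench -i -s 10 pgbench
   pgbench -c 10 -t 3000 pgbench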
I'll rerun the tests.
Thanks,
Anjan
-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 06, 2005 6:45 PM
To: Anjan Dave
Cc: Vivek Khera; Postgresql Performance
Subject: Re: [PERFORM] High context switches occurring
"Anjan Dave" <[EMAIL PROTECTED]> writes:
> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 1000
> number of transactions per client: 30
> number of transactions actually processed: 30000/30000
> tps = 45.871234 (including connections establishing)
> tps = 46.092629 (excluding connections establishing)
I can hardly think of a worse way to run pgbench :-(. These numbers
are about meaningless, for two reasons:
1. You don't want the number of clients (-c) much higher than the
scaling factor (-s in the initialization step). The number of rows in
the "branches" table will equal -s, and since every transaction updates
one randomly-chosen "branches" row, you will be measuring mostly
row-update contention overhead if there are more concurrent
transactions than there are rows. In the case of -s 1, which is what
you've got here, there is no actual concurrency at all --- all the
transactions stack up on the single branches row.
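(If you want to double-check what scale a database was actually built
at, counting the branches rows will show it, e.g. something like

   psql -c "SELECT count(*) FROM branches" pgbench

since there is one branches row per unit of scaling factor.)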
2. Running a small number of transactions per client means that
startup/shutdown transients overwhelm the steady-state data. You
should probably run at least a thousand transactions per client if you
want repeatable numbers.
Try something like "-s 10 -c 10 -t 3000" to get numbers reflecting
test conditions more like what the TPC council had in mind when they
designed this benchmark. I tend to repeat such a test 3 times to see
if the numbers are repeatable, and quote the middle TPS number as long
as they're not too far apart.
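Concretely, after re-initializing with "pgbench -i -s 10 pgbench", the
three runs could be scripted along these lines:

   for i in 1 2 3; do pgbench -s 10 -c 10 -t 3000 pgbench; done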
regards, tom lane