Matthew Wakeling wrote:
On Sun, 25 Jan 2009, M. Edward (Ed) Borasky wrote:
Actually, this isn't so much a 'pgbench' exercise as it is a source of
'real-world application' data for my Linux I/O performance visualization
tools. I've done 'iozone' tests, though not recently. But what I'm
M. Edward (Ed) Borasky wrote:
At the CMG meeting I asked the disk drive engineers, well, if the
drives are doing the scheduling, why does Linux go to all the trouble?
One big reason is that Linux knows more about the relative importance of
I/O operations than the individual drives do. Linux's
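One concrete example of priority information the kernel has but the drive's own NCQ reordering never sees is per-process I/O priority. A minimal sketch, assuming util-linux `ionice` and a scheduler that honors I/O classes (CFQ/BFQ; it is a no-op under noop or deadline):

```shell
# Sketch (not from the thread): run a bulk write in the "idle" I/O class
# so it yields to all other disk traffic -- the drive alone cannot do this.
if command -v ionice >/dev/null 2>&1; then
    ionice -c 3 dd if=/dev/zero of=/tmp/bulk.tmp bs=1M count=8 2>/dev/null
    IONICE_STATUS=ran
else
    IONICE_STATUS=missing
fi
rm -f /tmp/bulk.tmp
echo "ionice demo: $IONICE_STATUS"
```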
Craig Ringer wrote:
On Mon, 26 Jan 2009, M. Edward (Ed) Borasky wrote:
Is there a howto somewhere on disabling this on a Seagate Barracuda?
http://inferno.slug.org/cgi-bin/wiki?Western_Digital_NCQ is a good
discussion of disabling NCQ support under Linux (both in user-space and
directly in the kernel itself).
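For the kernel-side approach that page discusses, NCQ depth is exposed through sysfs for libata-driven SATA disks. A sketch only: `sda` is an assumption (substitute your drive), and writing the file requires root.

```shell
# Inspect (and optionally reduce) the NCQ queue depth via sysfs.
DEV=sda
QD_FILE=/sys/block/$DEV/device/queue_depth
if [ -r "$QD_FILE" ]; then
    echo "current queue depth: $(cat "$QD_FILE")"  # 31 typically means NCQ enabled
    # echo 1 > "$QD_FILE"   # depth 1 effectively disables NCQ (run as root)
else
    echo "no $QD_FILE here (not a libata SATA disk, or not Linux)"
fi
```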
Greg Smith wrote:
It's a tough time to be picking up inexpensive consumer SATA disks right
now. Seagate's drive reliability has been falling hard the last couple
of years, but all the WD drives I've started trying out instead have
just awful firmware. At least they're all cheap, I guess.
M. Edward (Ed) Borasky wrote:
At the CMG meeting I asked the disk drive engineers, well, if the
drives are doing the scheduling, why does Linux go to all the trouble?
Their answer was something like, smart disk drives are a relatively
recent invention. But
One more reason?
I imagine the disk
Ron Mayer wrote:
Greg Smith wrote:
On Thu, 22 Jan 2009, Alvaro Herrera wrote:
Also, I think you should set the scale in the prepare step (-i) at
least as high as the number of clients you're going to use. (I dimly
recall some recent development in this area that might mean I'm wrong.)
The idea behind
[snip]
I'm actually doing some very similar testing and getting very similar
results. My disk is a single Seagate Barracuda 7200 RPM SATA (160 GB).
The OS is openSUSE 11.1 (2.6.27 kernel) with the stock PostgreSQL
8.3.5 RPM. I started out running pgbench on the same machine but just
moved the
On Sun, 25 Jan 2009, M. Edward (Ed) Borasky wrote:
I started out running pgbench on the same machine but just
moved the driver to another one trying to get better results.
That normally isn't necessary until you get to the point where you're
running thousands of transactions per second. The
Greg Smith wrote:
I'm not sure what is going on with your system, but the advice showing
up earlier in this thread is well worth heeding here: if you haven't
thoroughly proven that your disk setup works as expected on simple I/O
tests such as dd and bonnie++, you shouldn't be running pgbench
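In that spirit, a minimal sequential-write sanity check. This assumes GNU dd for `conv=fsync`; the FreeBSD dd used earlier in this thread lacks it, hence the `(dd ...; sync)` form there. 16 MiB keeps it quick, but a serious test should use a file larger than RAM so the page cache can't hide the disk.

```shell
# Quick sequential-write check: time a small fsync'd write and report MiB/s.
TESTFILE=$(mktemp)
START=$(date +%s.%N)
dd if=/dev/zero of="$TESTFILE" bs=8192 count=2048 conv=fsync 2>/dev/null
END=$(date +%s.%N)
BYTES=$((8192 * 2048))                      # 16 MiB total
MIBS=$(awk -v b="$BYTES" -v s="$START" -v e="$END" \
    'BEGIN{printf "%.1f", b / (e - s) / 1048576}')
echo "wrote $BYTES bytes at $MIBS MiB/s"
rm -f "$TESTFILE"
```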
On Thu, Jan 22, 2009 at 10:52 PM, Ibrahim Harrani
ibrahim.harr...@gmail.com wrote:
On 1/23/09, Ibrahim Harrani ibrahim.harr...@gmail.com wrote:
Note, while sequential write speeds are a good indication of general
raid
On Fri, 23 Jan 2009, Merlin Moncure wrote:
Note, while sequential write speeds are a good indication of general
RAID crappiness, they are not the main driver of your low pgbench
results (but they may be involved with poor insert performance). That is
coming from your seek performance, which is
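The back-of-envelope arithmetic behind that point (the ~40 TPS figure comes from the thread; the rest is standard disk math): with no write cache, every commit waits for an fsync'd write, so a single spindle cannot exceed the low hundreds of commits per second, and seek time drags it lower still.

```shell
# Rotational arithmetic for a 7200 RPM disk.
RPM=7200
ROT_PER_SEC=$((RPM / 60))     # 120 platter rotations per second
# average rotational latency is half a rotation:
AVG_ROT_MS=$(awk -v r="$ROT_PER_SEC" 'BEGIN{printf "%.1f", 1000 / r / 2}')
echo "~$ROT_PER_SEC rotations/s, ${AVG_ROT_MS} ms avg rotational latency"
```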
Hi,
I am running PostgreSQL 8.3.5 on FreeBSD with a dual-core Intel(R)
Xeon(R) CPU 3065 @ 2.33GHz, 2GB RAM and a Seagate Technology
Barracuda 7200.10 SATA 3.0Gb/s (RAID 1).
I made several benchmark tests with pgbench; the TPS rate is almost 40 +/- 5.
$ pgbench -i pgbench -s 50 -U pgsql
[pg...@$
On Thu, 2009-01-22 at 17:47 +0200, Ibrahim Harrani wrote:
On 1/22/09, Ibrahim Harrani ibrahim.harr...@gmail.com wrote:
Is this rate is normal or not? What can I do to improve tps and insert
performance?
postgresql.conf
shared_buffers = 800MB # min 128kB or max_connections*16kB
work_mem = 2MB # min
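For comparison only, a commonly cited starting point for 8.3 on a 2 GB machine keeps shared_buffers nearer 25% of RAM. These values are illustrative assumptions, not settings taken from this thread:

```
shared_buffers = 512MB          # ~25% of 2GB RAM
work_mem = 4MB                  # per-sort budget; multiply by concurrent sorts
checkpoint_segments = 16        # fewer, larger checkpoints
effective_cache_size = 1GB      # planner hint, not an allocation
```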
Ibrahim Harrani wrote:
I made several benchmark tests with pgbench; the TPS rate is almost 40 +/- 5.
$ pgbench -i pgbench -s 50 -U pgsql
[pg...@$ pgbench -c 200 -t 2 -U pgsql -d pgbench
Try with 1000 transactions per client or more, instead of 2.
Also, I think you should set the scale in the
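The advice above as commands, for reference. A sketch: the block is guarded because pgbench (and the `pgsql` user, which simply mirrors the thread) may not exist on a given machine.

```shell
# Scale factor (-s) at least as large as the client count (-c), plus enough
# transactions (-t) per client for a stable number.
CLIENTS=100
SCALE=100   # >= CLIENTS: one branches row per client cuts update contention
if command -v pgbench >/dev/null 2>&1; then
    pgbench -i -s "$SCALE" -U pgsql pgbench
    pgbench -c "$CLIENTS" -t 1000 -U pgsql pgbench
else
    echo "pgbench not installed on this machine"
fi
```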
Hi Merlin,
Here are the bonnie++ and new pgbench results with higher transaction counts.
$ pgbench -i -s 30 -U pgsql pgbench
$ pgbench -c 100 -t 1000 -U pgsql -d pgbench
transaction type: TPC-B (sort of)
scaling factor: 30
number of clients: 100
number of transactions per client: 1000
number of
This is another bonnie++ test result, with version 1.03.
Delete files in random order...done.
Version 1.03e   --Sequential Output-- --Sequential Input- --Random-
                -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine    Size K/sec %CP K/sec %CP
On Thu, Jan 22, 2009 at 1:27 PM, Ibrahim Harrani
ibrahim.harr...@gmail.com wrote:
Version 1.93d   --Sequential Output-- --Sequential Input- --Random-
Concurrency 1   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine    Size K/sec %CP K/sec %CP K/sec %CP
Hi David,
I ran the test again with the following options. Also I added the
HTML output of the result.
$ bonnie++ -u pgsql -n 128 -r 2048 -s 4096 -x 1
Using uid:70, gid:70.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading
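The flags in that run, spelled out. A sketch, guarded since bonnie++ may not be installed; the one rule of thumb worth repeating is that `-s` should be about twice `-r` so the test file cannot fit in the page cache.

```shell
# bonnie++ flags: -u run-as user, -n file-creation count (x1024),
# -r RAM size in MB, -s test file size in MB, -x number of runs.
RAM_MB=2048
FILE_MB=$((RAM_MB * 2))   # 4096, matching the run above
if command -v bonnie++ >/dev/null 2>&1; then
    bonnie++ -u pgsql -n 128 -r "$RAM_MB" -s "$FILE_MB" -x 1
else
    echo "bonnie++ not installed; would run with -s $FILE_MB -r $RAM_MB"
fi
```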
On Thu, Jan 22, 2009 at 3:29 PM, Ibrahim Harrani
ibrahim.harr...@gmail.com wrote:
This is an Intel server with onboard RAID. I will check the RAID
configuration again tomorrow, especially the Write Cache and Read Ahead
values mentioned at
David Rees wrote:
Hi Craig,
Here is the result. It seems that disk write is terrible!
r...@myserver /usr]# time (dd if=/dev/zero of=bigfile bs=8192
count=1000000; sync)
1000000+0 records in
1000000+0 records out
8192000000 bytes transferred in 945.343806 secs (8665630 bytes/sec)
real    15m46.206s
user
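The reported rate and elapsed time can be cross-checked with awk: they imply roughly 8.19 GB written (about a million 8 kB blocks) at ~8.3 MiB/s, which really is terrible for sequential writes even on a single 7200 RPM SATA drive.

```shell
# Cross-check dd's own report: total bytes = rate x elapsed time.
BYTES=$(awk 'BEGIN{printf "%.0f", 8665630 * 945.343806}')
MIBS=$(awk 'BEGIN{printf "%.1f", 8665630 / 1048576}')
echo "~$BYTES bytes total at $MIBS MiB/s"
```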