Re: [CentOS] Benchmark Disk IO

2010-05-19 Thread Waleed Harbi
Try dbench.

http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=/liaag/journalingfilesystem/publicjournal12.htm

OR:

http://linuxhelp.150m.com/resources/fs-benchmarks.htm

http://www.phoronix-test-suite.com/?k=home
--
Best Wishes,
Waleed Harbi

Dream | Do | Be


On Wed, May 19, 2010 at 4:04 PM, Matt Keating  wrote:

> > I don't usually use iozone (I usually use bonnie++) so take this with a
> > grain of salt, but those speeds look suspiciously like cache speeds. Bump
> > the size (-s parameter) up to twice your real RAM size.
> >
> > --
> > Benjamin Franz
> > ___
> > CentOS mailing list
> > CentOS@centos.org
> > http://lists.centos.org/mailman/listinfo/centos
> >
>
> Will give that a try - 16gb file incoming :/


Re: [CentOS] Benchmark Disk IO

2010-05-19 Thread Jerry Franz
On 05/19/2010 06:14 AM, John Doe wrote:
> From: Matt Keating
>
>>> I don't usually use iozone (I usually use bonnie++) so take this with
>>> a grain of salt, but those speeds look suspiciously like cache speeds.
>>> Bump the size (-s parameter) up to twice your real RAM size.
>>>
>> Will give that a try - 16gb file incoming
>>  
> Or maybe do a:
>sync; echo 3 > /proc/sys/vm/drop_caches
> between the 2 tests...?
>
It wouldn't help. The problem is the tests were using file sizes small 
enough to easily fit completely into the system caches. So you end up 
benchmarking the performance of the I/O system caches - not the drives 
themselves.

-- 
Benjamin Franz


Re: [CentOS] Benchmark Disk IO

2010-05-19 Thread John Doe
From: Matt Keating 
>> I don't usually use iozone (I usually use bonnie++) so take this with 
>> a grain of salt, but those speeds look suspiciously like cache speeds.
>> Bump the size (-s parameter) up to twice your real RAM size.
> Will give that a try - 16gb file incoming 

Or maybe do a:
  sync; echo 3 > /proc/sys/vm/drop_caches
between the 2 tests...?

JD


  


Re: [CentOS] Benchmark Disk IO

2010-05-19 Thread Matt Keating
> I don't usually use iozone (I usually use bonnie++) so take this with a
> grain of salt, but those speeds look suspiciously like cache speeds. Bump
> the size (-s parameter) up to twice your real RAM size.
>
> --
> Benjamin Franz

Will give that a try - 16gb file incoming :/


Re: [CentOS] Benchmark Disk IO

2010-05-19 Thread Benjamin Franz
On 05/19/2010 02:44 AM, Matt Keating wrote:
> 2010/5/6 Matt Keating:
>
>> Thanks for all the updates. Will look into iozone and the advice given
>> about the rest.
>>  
> Either I'm doing/reading something wrong or a 1TB SATA 7200 RPM drive
> is faster than 4x300GB SCSI 10K RPM drives in raid 10.
> Both of the results below were from iozone, running the following command:
> $ iozone -R -l 5 -u 5 -r 4k -s 100m -F /tmp/F1 /tmp/F2 /tmp/F3 /tmp/F4 /tmp/F5
>
>
[...]
> Am I doing something wrong? Please advise.
>

I don't usually use iozone (I usually use bonnie++) so take this with a 
grain of salt, but those speeds look suspiciously like cache speeds. Bump 
the size (-s parameter) up to twice your real RAM size.
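As a rough sketch of that suggestion (assuming Linux, reading MemTotal from /proc/meminfo; the file list just mirrors the command quoted above):

```shell
# Derive an iozone -s value of roughly twice physical RAM, so the
# working set cannot fit in the page cache (MemTotal is in kB).
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
size_m=$(( ram_kb * 2 / 1024 ))   # twice RAM, expressed in megabytes
echo "iozone -R -l 5 -u 5 -r 4k -s ${size_m}m -F /tmp/F1 /tmp/F2 /tmp/F3 /tmp/F4 /tmp/F5"
```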

-- 
Benjamin Franz


Re: [CentOS] Benchmark Disk IO

2010-05-19 Thread Matt Keating
2010/5/6 Matt Keating :
> Thanks for all the updates. Will look into iozone and the advice given
> about the rest.

Either I'm doing/reading something wrong or a 1TB SATA 7200 RPM drive
is faster than 4x300GB SCSI 10K RPM drives in raid 10.
Both of the results below were from iozone, running the following command:
$ iozone -R -l 5 -u 5 -r 4k -s 100m -F /tmp/F1 /tmp/F2 /tmp/F3 /tmp/F4 /tmp/F5

SATA:
"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4 Kbytes "
"Output is in Kbytes/sec"

"  Initial write "  564135.95
"Rewrite " 2021499.52
"   Read " 5937227.44
"Re-read " 5898310.02
"   Reverse Read " 5652286.96
"Stride read " 5556376.58
"Random read " 5505582.00
" Mixed workload " 3570796.92
"   Random write " 1913500.58
" Pwrite "  580229.98
"  Pread " 5310776.62


RAID:
"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4 Kbytes "
"Output is in Kbytes/sec"

"  Initial write "  253099.59
"Rewrite "  915449.39
"   Read " 1911688.05
"Re-read " 1906603.72
"   Reverse Read " 1847584.97
"Stride read " 1772254.31
"Random read " 1550438.36
" Mixed workload " 1276847.84
"   Random write "  930307.99
" Pwrite "  206193.02
"  Pread " 2631370.07

Am I doing something wrong? Please advise.

Thanks,
Matt


Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread Chan Chung Hang Christopher
Les Mikesell wrote:
> On 5/5/2010 12:00 PM, Karanbir Singh wrote:
>>> Try to run the same IO operations as your production server is running.
>>> Bonnie++ could be a good application for benchmarking. Also run some
>>> parallel rsync, rm, find, etc. processes.
>>>
>> I am with John Pierce on this one, role and app will dictate benchmarks
>> that reflect reality.
>>
>> Having said that, I think iozone > bonnie++
> 
> If the job involves creating/deleting lots of little files like a mail 
> server with maildir format storage, you might try to dig up a copy of 
> postmark too.
> 

Les, you have got to be joking. There is not a single fsync/fsyncdata 
call in postmark. postmark is completely unsuitable to mimicking mail 
queues or deliveries to maildirs. I, for one, am glad that Netapp has 
stopped advertising and have pulled their 'fake' benchmarking utility. 
It might have been relevant on Linux when it did not have barriers and 
fsync/fsyncdata had zero guarantees unlike the BSDs and UNIX operating 
systems.

For delivery to maildirs, you want to use fsbench from Bruce Guenter, 
which does the right thing.
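One way to check that sort of claim yourself, as a sketch (assumes strace is installed; the benchmark binary named in the example is just a placeholder):

```shell
# Count the fsync/fdatasync calls a benchmark actually issues while it
# runs; a "mail server" benchmark that shows zero of either is not
# exercising durable writes at all.
audit_fsync() {
    strace -f -c -e trace=fsync,fdatasync "$@" 2>&1
}
# e.g.: audit_fsync postmark    (or fsbench, to compare)
```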


Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread Alan McKay
There is a dd test that floats around the PostgreSQL lists, so I wrote
this simple script to automate it - use at your own risk!

#!/bin/bash

# parse command-line parameters of the form NAME=value

DEFAULT_BLOCK=8
DEFAULT_PATH=/data/tmp
DEFAULT_FILE=ddfile

helpme()
{
    echo "Usage: $0 [RAM=x] [PATH=/foo/bar] [FILE=foo] [BLOCKSIZE=X]"
    echo "Runs a dd test on the disk subsystem to benchmark I/O speed"
    echo "  BLOCKSIZE - given in K.  Should be no need to change;"
    echo "              default is 8, which is what PostgreSQL uses"
    echo "  RAM - must be specified in gigs;"
    echo "        default is to use the output of the 'free' command"
    echo "  PATH - where do you want the files created, i.e. which filesystem do you want tested?"
    echo "         default is $DEFAULT_PATH"
    echo "  FILE - file name to create in the test.  Really makes no difference."
    echo "         default is $DEFAULT_FILE"
    exit 1
}

default_ram()
{
MYB=`free | grep Mem: | awk '{print $2}'`
((MYG=MYB/1000/1000))
export MYGIGS=$MYG
echo "RAM not specified - assuming ${MYB}/${MYGIGS}G"
}

default_path()
{
export  MYPATH=$DEFAULT_PATH
echo "PATH not specified - using default $MYPATH"
}

default_file()
{
export  MYFILE=$DEFAULT_FILE
echo "FILE not specified - using default $MYFILE"
}

default_block()
{
export  MYBLOCK=$DEFAULT_BLOCK
echo "BLOCKSIZE not specified - using default $MYBLOCK"
}

RAMGIVEN=0
PATHGIVEN=0
FILEGIVEN=0
BLOCKGIVEN=0

while [ $# -gt 0 ]
do
MYVAR=`echo $1 | awk -F= '{print $1}'`
MYVAL=`echo $1 | awk -F= '{print $2}'`

case $MYVAR in
BLOCKSIZE)
MYBLOCK=$MYVAL
BLOCKGIVEN=1
;;

RAM)
MYGIGS=$MYVAL
RAMGIVEN=1
;;

PATH)
MYPATH=$MYVAL
PATHGIVEN=1
;;

FILE)
MYFILE=$MYVAL
FILEGIVEN=1
;;

-h|-help|--h|--help)
helpme
;;

*)
echo "ERROR : unknown parameter [${MYVAR}]"
;;
esac

shift
done

[[ $RAMGIVEN -eq 0 ]]   && default_ram
[[ $PATHGIVEN -eq 0 ]]  && default_path
[[ $FILEGIVEN -eq 0 ]]  && default_file
[[ $BLOCKGIVEN -eq 0 ]]  && default_block

PATH_EXISTS=0
[[ -d $MYPATH ]]&& PATH_EXISTS=1

echo "[$MYGIGS][$MYPATH][$MYFILE][$PATH_EXISTS]"


mkdir -p $MYPATH
pushd $MYPATH

df .

FREESPACE=`df . | tail -1 | awk '{print $4}'`

# 125000 8 KB blocks per (decimal) gigabyte; scale if BLOCKSIZE was changed
((BLOCKSPERGIG=1000000/MYBLOCK))

((ONCERAM=BLOCKSPERGIG*MYGIGS))                # block count equal to 1x RAM
((TWICERAM=2*BLOCKSPERGIG*MYGIGS))             # block count equal to 2x RAM
((THRICERAM=3*BLOCKSPERGIG*MYGIGS*MYBLOCK))    # KB of disk needed (3x RAM)

echo "Have:$FREESPACE, Need:$THRICERAM (3x RAM)"

if [ $THRICERAM -ge $FREESPACE ]
then
    echo "ERROR: not enough room on disk"
    echo "Have:$FREESPACE, Need:$THRICERAM (3x RAM)"
    exit 2
fi

# Write a file of 2x RAM and time it, with the final sync included
DDCOM="dd if=/dev/zero of=${MYFILE} bs=${MYBLOCK}k count=${TWICERAM} && sync"
echo $DDCOM
time sh -c "$DDCOM"

# Write a second file of 1x RAM to push the first one out of the page cache
DDCOM="dd if=/dev/zero of=${MYFILE}2 bs=${MYBLOCK}k count=${ONCERAM}"
echo $DDCOM
$DDCOM

# Read the first file back and time it
DDCOM="dd if=${MYFILE} of=/dev/null bs=${MYBLOCK}k"
echo $DDCOM
time $DDCOM

rm -f ${MYFILE}
rm -f ${MYFILE}2
[[ $PATH_EXISTS -eq 0 ]]&& rmdir $MYPATH

popd




-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of "In Defense of Food"


Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread Matt Keating
Sorry for the top post - clicked send before looking


Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread Matt Keating
Thanks for all the updates. Will look into iozone and the advice given
about the rest.

2010/5/6  :
> On Thu, May 06, 2010 at 12:56:55AM -0700, John R Pierce wrote:
>> przemol...@poczta.fm wrote:
>> > The above numbers are true if we have random (!) IO pattern.
>> > In case of sequential (!) IO even SATA disks can deliver much, much higher 
>> > numbers.
>> >
>>
>>
>> sequential IO is remarkably rare in a typical server environment
>
> Yes, of course: Oracle's redo logs, which are a key performance factor for all
> transactions (inserts/updates), have a sequential IO pattern.
> And Oracle is not a typical server environment 
>
>> anyways, the IOPS numbers on sequential operations aren't much higher,
>> they are just transferring more data per operation.
>
> I didn't say that they _are_ much higher. I said that even SATA
> disks can deliver high IOPS when the IO is sequential.
>
>
> Regards
> Przemyslaw Bak (przemol)
> --
> http://przemol.blogspot.com/
>


Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread przemolicc
On Thu, May 06, 2010 at 12:56:55AM -0700, John R Pierce wrote:
> przemol...@poczta.fm wrote:
> > The above numbers are true if we have random (!) IO pattern.
> > In case of sequential (!) IO even SATA disks can deliver much, much higher 
> > numbers.
> >   
> 
> 
> sequential IO is remarkably rare in a typical server environment

Yes, of course: Oracle's redo logs, which are a key performance factor for all 
transactions (inserts/updates), have a sequential IO pattern.
And Oracle is not a typical server environment 

> anyways, the IOPS numbers on sequential operations aren't much higher, 
> they are just transferring more data per operation.

I didn't say that they _are_ much higher. I said that even SATA
disks can deliver high IOPS when the IO is sequential.
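The arithmetic behind that, as a sketch (the 100 MB/s and 4 KB figures are just illustrative):

```shell
# Sequential "IOPS" are throughput-limited, not seek-limited: divide
# streaming bandwidth by the request size and compare with the ~90
# truly random seeks/sec a 7200 RPM SATA disk can sustain.
seq_mb_per_s=100   # illustrative streaming rate for a SATA disk
req_kb=4           # request size
seq_iops=$(( seq_mb_per_s * 1024 / req_kb ))
echo "sequential: ~${seq_iops} ops/sec at ${req_kb}KB; random: ~90 ops/sec"
```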


Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/



Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread Евгений Килимчук
2010/5/6 John R Pierce 

> Евгений Килимчук wrote:
> > Use a simple test:
> > time dd if=/dev/zero of=/tmp/test-hd bs=1M count=1000
>
> sequential cached writes, yeah, that's useful. *not*
>
>
This is one of the steps.

You can use sysbench random read and random write tests with multiple threads.




-- 
Best regards,

Eugene Kilimchuk 


Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread John R Pierce
przemol...@poczta.fm wrote:
> The above numbers are true if we have random (!) IO pattern.
> In case of sequential (!) IO even SATA disks can deliver much, much higher 
> numbers.
>   


sequential IO is remarkably rare in a typical server environment

anyways, the IOPS numbers on sequential operations aren't much higher, 
they are just transferring more data per operation.




Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread John R Pierce
Евгений Килимчук wrote:
> Use a simple test:
> time dd if=/dev/zero of=/tmp/test-hd bs=1M count=1000

sequential cached writes, yeah, that's useful. *not*
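A hedged variant that at least forces the data to disk before the clock stops (conv=fdatasync includes the final flush in dd's timing; oflag=direct is another option where the filesystem supports O_DIRECT; the small count here is just for illustration):

```shell
# Include the final flush in dd's timing so the page cache alone
# cannot satisfy the benchmark.
dd if=/dev/zero of=/tmp/test-hd bs=1M count=100 conv=fdatasync
bytes=$(stat -c %s /tmp/test-hd)   # 100 MiB written and flushed
rm -f /tmp/test-hd
```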




Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread przemolicc
On Wed, May 05, 2010 at 09:47:19AM -0700, nate wrote:
> Matt Keating wrote:
> > What is the best way to benchmark disk IO?
> >
> > I'm looking to move one of my servers, which is rather IO intense. But
> > not without first benchmarking the current and new disk array, To make
> > sure this isn't a full waste of time.
> 
> You can do a pretty easy calculation based on the #/type of drives
> to determine the approx number of raw IOPS that are available. Since
> it's I/O intensive, you're probably best off with RAID 1+0, which further
> simplifies the calculation; parity-based RAID can make it really
> complicated.
> 
> 7200 RPM disk = ~90 IOPS
> 10000 RPM disk = ~150-180 IOPS
> 15000 RPM disk = ~230-250 IOPS
> ...

The above numbers are true if we have random (!) IO pattern.
In case of sequential (!) IO even SATA disks can deliver much, much higher 
numbers.


Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/



Re: [CentOS] Benchmark Disk IO

2010-05-06 Thread Евгений Килимчук
Hi!

Use a simple test:
time dd if=/dev/zero of=/tmp/test-hd bs=1M count=1000

Sysbench:
http://sysbench.sourceforge.net/docs/#fileio_mode

And this:
http://assets.en.oreilly.com/1/event/27/Linux%20Filesystem%20Performance%20for%20Databases%20Presentation.pdf

2010/5/5 Matt Keating 

> What is the best way to benchmark disk IO?
>
> I'm looking to move one of my servers, which is rather IO intense. But
> not without first benchmarking the current and new disk array, To make
> sure this isn't a full waste of time.
>
> thanks



-- 
Best regards,

Eugene Kilimchuk 


Re: [CentOS] Benchmark Disk IO

2010-05-05 Thread Ross Walker
On May 5, 2010, at 1:13 PM, Les Mikesell  wrote:

> On 5/5/2010 12:00 PM, Karanbir Singh wrote:
>>
>>> Try to run the same IO operations as your production server is running.
>>> Bonnie++ could be a good application for benchmarking. Also run some
>>> parallel rsync, rm, find, etc. processes.
>>>
>>
>> I am with John Pierce on this one, role and app will dictate benchmarks
>> that reflect reality.
>>
>> Having said that, I think iozone > bonnie++
>
> If the job involves creating/deleting lots of little files like a mail
> server with maildir format storage, you might try to dig up a copy of
> postmark too.

I've found iometer is a good tool for real-world benchmarking if you take 
the time to set up the tests according to the workload.

-Ross



Re: [CentOS] Benchmark Disk IO

2010-05-05 Thread Les Mikesell
On 5/5/2010 12:00 PM, Karanbir Singh wrote:
>
>> Try to run the same IO operations as your production server is running.
>> Bonnie++ could be a good application for benchmarking. Also run some
>> parallel rsync, rm, find, etc. processes.
>>
>
> I am with John Pierce on this one, role and app will dictate benchmarks
> that reflect reality.
>
> Having said that, I think iozone > bonnie++

If the job involves creating/deleting lots of little files like a mail 
server with maildir format storage, you might try to dig up a copy of 
postmark too.

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: [CentOS] Benchmark Disk IO

2010-05-05 Thread Karanbir Singh
On 05/05/2010 05:55 PM, Dominik Zyla wrote:
> Try to run the same IO operations as your production server is running.
> Bonnie++ could be a good application for benchmarking. Also run some
> parallel rsync, rm, find, etc. processes.
>

I am with John Pierce on this one, role and app will dictate benchmarks 
that reflect reality.

Having said that, I think iozone > bonnie++

- KB


Re: [CentOS] Benchmark Disk IO

2010-05-05 Thread Dominik Zyla
On Wed, May 05, 2010 at 05:17:53PM +0100, Matt Keating wrote:
> What is the best way to benchmark disk IO?
> 
> I'm looking to move one of my servers, which is rather IO intense. But
> not without first benchmarking the current and new disk array, To make
> sure this isn't a full waste of time.

Try to run the same IO operations as your production server is running.
Bonnie++ could be a good application for benchmarking. Also run some
parallel rsync, rm, find, etc. processes.

It's a good habit to monitor machines with Cacti or something like that.
After benchmarks, you can compare the Cacti graphs.

-- 
Dominik Zyla





Re: [CentOS] Benchmark Disk IO

2010-05-05 Thread nate
Matt Keating wrote:
> What is the best way to benchmark disk IO?
>
> I'm looking to move one of my servers, which is rather IO intense. But
> not without first benchmarking the current and new disk array, To make
> sure this isn't a full waste of time.

You can do a pretty easy calculation based on the #/type of drives
to determine the approx number of raw IOPS that are available. Since
it's I/O intensive, you're probably best off with RAID 1+0, which further
simplifies the calculation; parity-based RAID can make it really
complicated.

7200 RPM disk = ~90 IOPS
10000 RPM disk = ~150-180 IOPS
15000 RPM disk = ~230-250 IOPS
SSD = 

Otherwise, Iozone is a neat benchmark. SPC-1 is a great benchmark
for SQL-type apps, though it's very high end and designed for testing
full storage arrays, not a dinky server.
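As a sketch of that calculation for a small RAID 1+0 set (the spindle count and per-disk figure are illustrative; mirrored writes cost two physical IOs, so write capacity is roughly halved):

```shell
# Back-of-the-envelope raw IOPS for a 4-spindle RAID 1+0 array.
spindles=4
iops_per_disk=170   # midpoint of the ~150-180 range for a 10K RPM drive
read_iops=$(( spindles * iops_per_disk ))        # reads hit all spindles
write_iops=$(( spindles * iops_per_disk / 2 ))   # each write lands on a mirror pair
echo "RAID 1+0: ~${read_iops} read IOPS, ~${write_iops} write IOPS"
```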

nate




Re: [CentOS] Benchmark Disk IO

2010-05-05 Thread John R Pierce
Matt Keating wrote:
> What is the best way to benchmark disk IO?
>
> I'm looking to move one of my servers, which is rather IO intense. But
> not without first benchmarking the current and new disk array, To make
> sure this isn't a full waste of time.
>   


synthetic benchmarks only tell you what the synthetic benchmarks 
measure, which may or may not be of significance to your application.

