In regard to "Can anyone share some insight on how best to benchmark fs
performance?"

I'm not sure of the best way, but I will explain how we do it and some of
the study we have done with it.

We have an app that was written so we could generate different I/O patterns.
We can send different I/O sizes, specify the number of I/Os we send, the
number of processes spawned to perform the operation, and reads or writes,
and it can force directio on Solaris, bypassing the file system buffer.  The
app was written for Solaris, but we also use it on Linux via a hack.
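To give a feel for it, here is a minimal sketch in C of that kind of
generator (the name iogen.c, the arguments, and the single shared target
file are illustrative inventions, not our actual tool; a real generator
would at least give each process its own file or offset range).
directio(3C) is the Solaris call; O_DIRECT is the usual Linux hack:

    /*
     * iogen.c -- minimal sketch of an I/O pattern generator.
     * Build (Solaris or Linux): cc -o iogen iogen.c
     * Usage: ./iogen <file> <iosize_bytes> <count> <nprocs> <r|w> [direct]
     */
    #ifdef __linux__
    #define _GNU_SOURCE             /* exposes O_DIRECT, the "Linux hack" */
    #endif
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        size_t iosize;
        long count, i;
        int nprocs, writing, direct, p;

        if (argc < 6) {
            fprintf(stderr, "usage: %s file iosize count nprocs r|w [direct]\n",
                    argv[0]);
            return 1;
        }
        iosize  = (size_t)atol(argv[2]);
        count   = atol(argv[3]);
        nprocs  = atoi(argv[4]);
        writing = (argv[5][0] == 'w');
        direct  = (argc > 6);

        for (p = 0; p < nprocs; p++) {
            if (fork() == 0) {      /* each child runs its own I/O loop */
                int flags = writing ? (O_WRONLY | O_CREAT) : O_RDONLY;
                int fd;
                void *buf;
    #ifdef __linux__
                if (direct)
                    flags |= O_DIRECT;      /* bypass the page cache */
    #endif
                fd = open(argv[1], flags, 0644);
                if (fd < 0) { perror("open"); _exit(1); }
    #ifdef __sun
                if (direct)
                    directio(fd, DIRECTIO_ON);  /* bypass the fs buffer */
    #endif
                /* O_DIRECT needs an aligned buffer, and iosize should be
                 * a multiple of the device block size */
                if (posix_memalign(&buf, 4096, iosize)) _exit(1);
                memset(buf, 0xab, iosize);
                for (i = 0; i < count; i++) {
                    ssize_t n = writing ? write(fd, buf, iosize)
                                        : read(fd, buf, iosize);
                    if (n < 0) { perror(writing ? "write" : "read"); break; }
                }
                close(fd);
                _exit(0);
            }
        }
        while (wait(NULL) > 0)      /* wait for all children to finish */
            ;
        return 0;
    }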

The main reason we run it is to compare different system configurations.
For example: is a 7+1 LUN faster than a 2+1?  Are two 2+2 LUNs on different
controllers, striped with a volume manager, faster than a 4+4 on a single
controller?  The I/O generator apps I downloaded probably did this type of
thing, but dealing with their output was too confusing.  This app lines up
with iostat output perfectly.
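(On Solaris that means watching something like iostat -xn 1 in another
window while the generator runs; the per-device r/s, w/s, kr/s, and kw/s
columns match what the app is issuing.)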

More importantly, we need to tune file systems to match I/O patterns.  The
file system we build to handle 9 million small files is different than the
file system we build to write backups to.  With the arrays we use there is
the ability to specify the segment size (the chunk size), which is the
amount of data read/written to each disk in a RAID set.  The LUN's stripe
size is calculated as the segment size times the number of data disks.  We
have played with a few different combinations and done comparative tests
which yield dramatic performance variances.  We have by no means tried all
the combinations.  A more in-depth study of several of these is going to
begin someday.
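To make the arithmetic concrete (illustrative numbers, not our settings): a
7+1 RAID 5 LUN with a 128k segment size has a stripe size of 7 x 128k =
896k, so a single full-stripe write touches each data disk exactly once.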

So for a real-world example: we knew we could make our backup app write with
1MB I/Os.  That is a reasonably large I/O.  Using our app we wrote and read
1MB I/Os to a couple of different configs we felt mathematically put us in
the ballpark.  One config was much better than the others, and we stuck with
it.
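With the sketch above, that comparison would look something like running
./iogen /cfgA/bigfile 1048576 2048 1 w direct against each candidate layout
(numbers illustrative) and watching the throughput in iostat.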

Building a solid test base where only one variable changes at a time is the
most important part.  Recently we acquired one of those 32GB SSDs.  Ours
came from Sun, has a SATA interface, and has an Intel logo on it.  Our
particular need for the device was on our small-file samqfs file system,
where metadata reads and writes take forever.  The metadata is at about
25GB, so for a test the 32GB device was sufficient.  In samqfs the metadata
can reside on a separate LUN.  On a Sun T2000 we constructed two samqfs file
systems, each with exactly the same configuration, where only the metadata
devices differed.  One was the SSD; the other was a 1+1 (RAID 1, two disks,
with a 16k segment size/16k stripe size).  The first test was a 500,000
small-file write.  The file system with the SSD metadata took 2 hours and 13
minutes; the one with non-SSD metadata took 2:43.  It didn't impress any of
us, although there was improvement.

After passing around the results, several people had different ideas.  They
wanted to compare reads and ensure that the data disk I/O was not skewing
the results for the metadata.  So instead of generating fake files to build
metadata, we backed up the metadata of a production samqfs file system and
restored it to both test systems, then ran metadata dumps to /dev/null and
compared the times.  The read times were within seconds of each other.  I
was told SSDs are supposed to be comparatively very fast on reads.  We felt
our test may have been flawed by some samqfs background process or
something; this is why building a solid test base is important.

I could go on and on.  Here is an interesting I/O thing:

http://blogs.sun.com/HPC/entry/video_monitoring_i_o_performance

--
mike cannon
[email protected]
864.650.2577 (cell)
864.656.3809 (office)

computing & information technology
340 computer court
anderson, sc 29625
