Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-17 Thread Selim Daoud

filebench for example
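For what it's worth, a minimal interactive filebench session might look like the sketch below; the OLTP personality and the parameters shown are assumptions on my part, not something discussed in this thread:

# filebench
filebench> load oltp          # load the packaged OLTP workload personality
filebench> set $dir=/pool     # point the workload at the file system under test
filebench> run 600            # run for 600 seconds and report throughput/latency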

On 4/17/07, Torrey McMahon [EMAIL PROTECTED] wrote:

Tony Galway wrote:

 I had previously undertaken a benchmark that pits the "out of box"
 performance of UFS (via SVM), VxFS and ZFS against one another, but I was
 waylaid by some outstanding availability issues in ZFS. These have been
 taken care of, and I am once again undertaking this challenge on behalf
 of my customer. The idea behind this benchmark is to show:

 a. How ZFS might displace the current commercial volume and file
 system management applications being used.

 b. The learning curve of moving from current volume management
 products to ZFS.

 c. Performance differences across the different volume management
 products.

 VDBench is the test bed of choice, as it has been accepted by the
 customer as a telling and accurate indicator of performance. The last
 time I attempted this test, it was suggested that VDBench is not
 appropriate for testing ZFS. I cannot see that being a problem: VDBench
 is a tool, and if it highlights performance problems then it is a very
 effective tool for helping us fix those deficiencies.


First, VDBench is a Sun-internal and partner-only tool, so you might not
get much response on this list.
Second, VDBench is great for testing raw block I/O devices. I think a
tool that does file system testing will get you better data.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Erast Benson
Did you measure CPU utilization by any chance during the tests?
It's a T2000, and the CPU cores on this box are quite slow, so they might be
a bottleneck.

Just a guess.
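A simple way to capture that data is to run the stock Solaris observability tools alongside vdbench; the 30-second interval and output paths below are only illustrative choices:

# mpstat 30 > /var/tmp/mpstat.out &     # per-core utilization, 30-second samples
# prstat -mL 30 > /var/tmp/prstat.out & # per-LWP microstate accounting

On a T2000 this makes it easy to spot a single saturated hardware thread even when overall utilization looks low.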

On Mon, 2007-04-16 at 13:10 -0400, Tony Galway wrote:
 I had previously undertaken a benchmark that pits the “out of box”
 performance of UFS (via SVM), VxFS and ZFS against one another, but I was
 waylaid by some outstanding availability issues in ZFS. These have been
 taken care of, and I am once again undertaking this challenge on behalf
 of my customer. The idea behind this benchmark is to show:
 
  
 
 a.  How ZFS might displace the current commercial volume and file
 system management applications being used.
 
 b. The learning curve of moving from current volume management
 products to ZFS.
 
 c.  Performance differences across the different volume management
 products.
 
  
 
 VDBench is the test bed of choice, as it has been accepted by the
 customer as a telling and accurate indicator of performance. The last
 time I attempted this test, it was suggested that VDBench is not
 appropriate for testing ZFS. I cannot see that being a problem: VDBench
 is a tool, and if it highlights performance problems then it is a very
 effective tool for helping us fix those deficiencies.
 
  
 
 Now, to the heart of my problem!
 
  
 
 The test hardware is a T2000 connected to a 12-disk SE3510 (presented
 as JBOD) through a Brocade switch, and I am using Solaris 10 11/06.
 For Veritas, I am using Storage Foundation Suite 5.0. The systems were
 jumpstarted back to the same configuration before testing each volume
 management product, to ensure there were no artifacts remaining
 from any previous test.
 
  
 
 I present my vdbench definition below for your information:
 
  
 
 sd=FS,lun=/pool/TESTFILE,size=10g,threads=8
 
 wd=DWR,sd=FS,rdpct=100,seekpct=80
 
 wd=ETL,sd=FS,rdpct=0,  seekpct=80
 
 wd=OLT,sd=FS,rdpct=70, seekpct=80
 
 rd=R1-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R1-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R1-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R2-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R2-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R2-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R3-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R3-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 rd=R3-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
  
 
 As you can see, it is fairly straightforward, and I take the average
 of the three runs for each of the ETL, OLT and DWR workloads. As an aside,
 I am also repeating the test with various file system block sizes where
 applicable (one way of doing that is sketched below).
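For reference, one plausible way to vary the file system block/record size for the 8k case is sketched here; these exact commands are my assumption and were not part of Tony's setup description:

# zfs set recordsize=8k pool                            # ZFS: match records to the 8k I/O size
# newfs -b 8192 /dev/md/rdsk/d20                        # UFS: 8k block size, set at newfs time
# mkfs -F vxfs -o bsize=8192 /dev/vx/rdsk/testdg/pool   # VxFS: 8k block size, set at mkfs time

Note that ZFS recordsize can be changed on an existing dataset (it affects files written afterwards), whereas the UFS and VxFS block sizes are fixed when the file system is created.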
 
  
 
 I then ran this workload against a RAID-5 LUN created and mounted with
 each of the different volume managers. Please note that one of the
 test criteria is that the volume management software create the
 RAID-5 LUN, not the disk subsystem.
 
  
 
 1.  UFS via SVM
 
 # metainit d20 -r d1 … d8
 
 # newfs /dev/md/dsk/d20
 
 # mount /dev/md/dsk/d20 /pool
 
  
 
 2.  ZFS
 
 # zpool create pool raidz d1 … d8
 
  
 
 3.  VxFS – Veritas SF5.0
 
 # vxdisk init SUN35100_0 … SUN35100_7
 
 # vxdg init testdg SUN35100_0  … 
 
 # vxassist -g testdg make pool 418283m layout=raid5
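For completeness, the VxFS file system itself still has to be created on that volume and mounted; a minimal sketch, assuming the disk group, volume name and mount point shown above:

# mkfs -F vxfs /dev/vx/rdsk/testdg/pool        # create the VxFS file system on the raw volume
# mount -F vxfs /dev/vx/dsk/testdg/pool /pool  # mount it where vdbench expects /pool/TESTFILE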
 
  
 
  
 
 Now to my problem: performance! Given the test as defined above,
 VxFS absolutely blows the doors off of both UFS and ZFS during write
 operations. For example, during a single test with an 8k file system
 block size, I have the following average I/O rates (IOPS):
 
  
 
  
 
 
              ETL          OLTP           DWR
  UFS          390.00      1298.44      23173.60
  VxFS       15323.10     27329.04      22889.91
  ZFS         2122.23      7299.36      22940.63
 
 
 
  
 
  
 
 Looking at these numbers as percentages, with VxFS set to 100%: for ETL,
 UFS runs at 2.5% of VxFS's speed and ZFS at 13.8%; for OLTP, UFS is at
 4.8% and ZFS at 26.7%. In DWR, however, where the workload is 100% reads
 and no writes, performance is similar, with UFS at 101.2% and ZFS at
 100.2% of VxFS's speed.
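Those ratios follow directly from the table above; as a quick illustrative check (not part of the original post):

# awk 'BEGIN { printf "ETL:  UFS %.1f%%  ZFS %.1f%%\n", 100*390.00/15323.10,   100*2122.23/15323.10 }'
# awk 'BEGIN { printf "OLTP: UFS %.1f%%  ZFS %.1f%%\n", 100*1298.44/27329.04,  100*7299.36/27329.04 }'
# awk 'BEGIN { printf "DWR:  UFS %.1f%%  ZFS %.1f%%\n", 100*23173.60/22889.91, 100*22940.63/22889.91 }'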
 
  
 
 
  
 
  
 
 Given these performance problems, VxFS quite rightly deserves to be the
 file system of choice, even with a cost premium. If anyone has any insight
 into why I am consistently seeing these very disappointing numbers, I
 would very much appreciate your comments. The numbers are very disturbing,
 as they indicate that write

Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Frank Cusack

On April 16, 2007 1:10:41 PM -0400 Tony Galway [EMAIL PROTECTED] wrote:

I had previously undertaken a benchmark that pits out of box performance

...

The test hardware is a T2000 connected to a 12 disk SE3510 (presenting as

...

Now to my problem - Performance!  Given the test as defined above, VxFS
absolutely blows the doors off of both UFS and ZFS during write
operations. For example, during a single test on an 8k file system block,
I have the following average IO Rates:


Out-of-the-box performance of ZFS on T2000 hardware might suffer.

http://blogs.sun.com/realneel/entry/zfs_and_databases is the
only link I could find, but there is another article somewhere
about tuning for the T2000, related to PCI on the T2000, i.e. it is
T2000-specific.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


RE: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Tony Galway
The volume is 7+1. I have created the volume using both the default (DRL) as
well as 'nolog' to turn it off, with similar performance in both cases. Henk
looked over my data and noticed that the Veritas test seems to be running
almost entirely out of the file system cache. I will retest with a much larger
file to defeat this cache (I do not want to modify my mount options), and I
will also retest ZFS with the same file size. If that still shows similar
performance, then the question will probably have more to do with how ZFS
handles file system caching (see the sketch below).
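A minimal sketch of that change, reusing the sd line from the original vdbench definition; the 64g figure is only an illustrative guess at a size comfortably larger than the T2000's memory:

sd=FS,lun=/pool/TESTFILE,size=64g,threads=8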

-Tony

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: April 16, 2007 2:16 PM
To: [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

Is the VxVM volume 8-wide?  It is not clear from your creation commands.
  -- richard
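For reference, the column count can be read from the volume's RAID-5 plex record; a minimal sketch, assuming the disk group and volume names used earlier in the thread:

# vxprint -g testdg -ht pool   # the raid5 plex line shows the number of columns (NCOL/WID)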

Tony Galway wrote:
 
 
 I had previously undertaken a benchmark that pits the "out of box"
 performance of UFS (via SVM), VxFS and ZFS against one another, but I was
 waylaid by some outstanding availability issues in ZFS. These have been
 taken care of, and I am once again undertaking this challenge on behalf of
 my customer. The idea behind this benchmark is to show:
 
  
 
 a.   How ZFS might displace the current commercial volume and file 
 system management applications being used.
 
 b.  The learning curve of moving from current volume management 
 products to ZFS.
 
 c.   Performance differences across the different volume management 
 products.
 
  
 
 VDBench is the test bed of choice, as it has been accepted by the
 customer as a telling and accurate indicator of performance. The last
 time I attempted this test, it was suggested that VDBench is not
 appropriate for testing ZFS. I cannot see that being a problem: VDBench
 is a tool, and if it highlights performance problems then it is a very
 effective tool for helping us fix those deficiencies.
 
  
 
 Now, to the heart of my problem!
 
  
 
 The test hardware is a T2000 connected to a 12-disk SE3510 (presented
 as JBOD) through a Brocade switch, and I am using Solaris 10 11/06. For
 Veritas, I am using Storage Foundation Suite 5.0. The systems were
 jumpstarted back to the same configuration before testing each volume
 management product, to ensure there were no artifacts remaining from any
 previous test.
 
  
 
 I present my vdbench definition below for your information:
 
  
 
 sd=FS,lun=/pool/TESTFILE,size=10g,threads=8
 
 wd=DWR,sd=FS,rdpct=100,seekpct=80
 
 wd=ETL,sd=FS,rdpct=0,  seekpct=80
 
 wd=OLT,sd=FS,rdpct=70, seekpct=80
 

rd=R1-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)

rd=R1-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)

rd=R1-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)

rd=R2-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)

rd=R2-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)

rd=R2-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)

rd=R3-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)

rd=R3-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)

rd=R3-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
 
 
  
 
 As you can see, it is fairly straightforward, and I take the average of
 the three runs for each of the ETL, OLT and DWR workloads. As an aside, I am
 also repeating the test with various file system block sizes where
 applicable.
 
  
 
 I then ran this workload against a RAID-5 LUN created and mounted with
 each of the different volume managers. Please note that one of the
 test criteria is that the volume management software create the
 RAID-5 LUN, not the disk subsystem.
 
  
 
 1.   UFS via SVM
 
 # metainit d20 -r d1 … d8
 
 # newfs /dev/md/dsk/d20
 
 # mount /dev/md/dsk/d20 /pool
 
  
 
 2.   ZFS
 
 # zpool create pool raidz d1 … d8
 
  
 
 3.   VxFS - Veritas SF5.0
 
 # vxdisk init SUN35100_0 … SUN35100_7

 # vxdg init testdg SUN35100_0 …
 
 # vxassist -g testdg make pool 418283m layout=raid5
 
  
 
  
 
 Now to my problem: performance! Given the test as defined above, VxFS
 absolutely blows the doors off of both UFS and ZFS during write
 operations. For example, during a single test with an 8k file system
 block size, I have the following average I/O rates (IOPS):
 
  
 
  
 
   
 
              ETL          OLTP           DWR
  UFS          390.00      1298.44      23173.60
  VxFS       15323.10     27329.04      22889.91
  ZFS         2122.23      7299.36      22940.63
 
  
 
  
 
 If you look

Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Torrey McMahon

Tony Galway wrote:


I had previously undertaken a benchmark that pits the “out of box”
performance of UFS (via SVM), VxFS and ZFS against one another, but I was
waylaid by some outstanding availability issues in ZFS. These have been
taken care of, and I am once again undertaking this challenge on behalf
of my customer. The idea behind this benchmark is to show:


a. How ZFS might displace the current commercial volume and file 
system management applications being used.


b. The learning curve of moving from current volume management 
products to ZFS.


c. Performance differences across the different volume management 
products.


VDBench is the test bed of choice, as it has been accepted by the
customer as a telling and accurate indicator of performance. The last
time I attempted this test, it was suggested that VDBench is not
appropriate for testing ZFS. I cannot see that being a problem: VDBench
is a tool, and if it highlights performance problems then it is a very
effective tool for helping us fix those deficiencies.




First, VDBench is a Sun-internal and partner-only tool, so you might not
get much response on this list.
Second, VDBench is great for testing raw block I/O devices. I think a
tool that does file system testing will get you better data.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss