Re: [zfs-discuss] NFS and Tar/Star Performance

2007-06-14 Thread eric kustarz


On Jun 13, 2007, at 9:22 PM, Siegfried Nikolaivich wrote:



On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS  
filesystem would be a fair comparison.


What does your storage look like?


The storage looks like:

NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c0t0d0  ONLINE       0     0     0
    c0t1d0  ONLINE       0     0     0
    c0t2d0  ONLINE       0     0     0
    c0t4d0  ONLINE       0     0     0
    c0t5d0  ONLINE       0     0     0
    c0t6d0  ONLINE       0     0     0

All disks are local SATA/300 drives attached through the SATA
framework on a Marvell card.  The drives are consumer models with a
16MB cache.


I agree it's not a fair comparison, especially with raidz over 6  
drives.  However, a performance difference of 10x is fairly large.


I do not have a single drive available to test ZFS with and compare
it to UFS, but I have done similar tests in the past with one ZFS
drive (without write cache, etc.) vs. a UFS drive of the same brand
and size.  Over NFS, the ZFS drive was still on the order of 10x
slower.  What could cause such a large difference?  Is there a way
to measure the latency of NFS COMMIT operations?




You should do the comparison on a single drive.  For ZFS, enable the
write cache, as it's safe to do so.  For UFS, disable the write cache.
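
A minimal sketch of that setup, assuming format(1M)'s expert-mode
cache menu (the drive is picked interactively; the "format>" lines
are typed at its prompts):

    # Toggle the on-disk write cache before each run.
    format -e                # then select the drive under test
    # format> cache
    # cache> write_cache
    # write_cache> enable    (ZFS run: ZFS issues its own cache flushes)
    # write_cache> disable   (UFS run: UFS never flushes the cache)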


Make sure you're on non-debug bits.
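
As for measuring COMMIT latency: a sketch, assuming your server's
build has the nfsv3 DTrace provider (older builds would need fbt
probes instead):

    # Distribution of server-side NFSv3 COMMIT latency, in nanoseconds.
    dtrace -n 'nfsv3:::op-commit-start { self->ts = timestamp; }
               nfsv3:::op-commit-done /self->ts/ {
                   @["COMMIT ns"] = quantize(timestamp - self->ts);
                   self->ts = 0; }'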

eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS and Tar/Star Performance

2007-06-13 Thread Siegfried Nikolaivich


On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS  
filesystem would be a fair comparison.


What does your storage look like?


The storage looks like:

NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c0t0d0  ONLINE       0     0     0
    c0t1d0  ONLINE       0     0     0
    c0t2d0  ONLINE       0     0     0
    c0t4d0  ONLINE       0     0     0
    c0t5d0  ONLINE       0     0     0
    c0t6d0  ONLINE       0     0     0

All disks are local SATA/300 drives attached through the SATA
framework on a Marvell card.  The drives are consumer models with a
16MB cache.


I agree it's not a fair comparison, especially with raidz over 6  
drives.  However, a performance difference of 10x is fairly large.


I do not have a single drive available to test ZFS with and compare
it to UFS, but I have done similar tests in the past with one ZFS
drive (without write cache, etc.) vs. a UFS drive of the same brand
and size.  Over NFS, the ZFS drive was still on the order of 10x
slower.  What could cause such a large difference?  Is there a way
to measure the latency of NFS COMMIT operations?



Cheers,
Siegfried
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS and Tar/Star Performance

2007-06-12 Thread Roch - PAE

Hi Siegfried, just making sure you had seen this:

http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine 

Your NFS-to-non-ZFS runs are very fast.

That seems only possible if the hosting OS did not sync the
data when NFS required it, or if the drive in question had a
fast write cache.  If the drive did have a fast write cache
and ZFS was still slow using it, that would be the issue with
cache flushing mentioned in the blog entry.

But maybe there is also something to be learned from the
Samba and AFP results...

Takeaways:

ZFS and NFS just work together.

ZFS has an open issue with some storage arrays (the
issue is *not* related to NFS); it's being worked on
and will need collaboration from storage vendors.

NFS is slower than direct-attached storage, and can be
very much slower on single-threaded loads.

There are many ways to work around the slowness, but
most are just not safe for your data.
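
The best-known workaround of that unsafe kind, shown here only for
context: disabling the ZIL discards exactly the synchronous-write
guarantee NFS depends on, so a server crash can silently lose data
a client believes is committed.

    # UNSAFE: live toggle on 2007-era builds; reverts on reboot.
    echo zil_disable/W0t1 | mdb -kw
    # Persistent form, via /etc/system (takes effect on next boot):
    #   set zfs:zil_disable = 1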

-r



Siegfried Nikolaivich writes:
  This is an old topic, discussed many times at length.  However, I
  still wonder if there are any workarounds to this issue except
  disabling the ZIL, since it makes ZFS over NFS almost unusable (a
  whole order of magnitude slower).  My understanding is that the
  ball is in NFS's court due to ZFS's design.  The testing results
  are below.
  
  
  Solaris 10u3 AMD64 server with a Mac client over gigabit Ethernet.
  The filesystem is on a 6-disk raidz1 pool; the test is untarring
  (with bzip2) the Linux 2.6.21 source code.  The archive is stored
  locally and extracted remotely.
  
  Locally
  -------
  tar xfvj linux-2.6.21.tar.bz2
  real 4m4.094s,    user 0m44.732s,  sys 0m26.047s

  star xfv linux-2.6.21.tar.bz2
  real 1m47.502s,   user 0m38.573s,  sys 0m22.671s
  
  Over NFS
  --------
  tar xfvj linux-2.6.21.tar.bz2
  real 48m22.685s,  user 0m45.703s,  sys 0m59.264s

  star xfv linux-2.6.21.tar.bz2
  real 49m13.574s,  user 0m38.996s,  sys 0m35.215s

  star -no-fsync -x -v -f linux-2.6.21.tar.bz2
  real 49m32.127s,  user 0m38.454s,  sys 0m36.197s
  
  
  The performance seems pretty bad; let's see how other protocols fare.
  
  Over Samba
  ----------
  tar xfvj linux-2.6.21.tar.bz2
  real 4m34.952s,   user 0m44.325s,  sys 0m27.404s

  star xfv linux-2.6.21.tar.bz2
  real 4m2.998s,    user 0m44.121s,  sys 0m29.214s

  star -no-fsync -x -v -f linux-2.6.21.tar.bz2
  real 4m13.352s,   user 0m44.239s,  sys 0m29.547s
  
  Over AFP
  --------
  tar xfvj linux-2.6.21.tar.bz2
  real 3m58.405s,   user 0m43.132s,  sys 0m40.847s

  star xfv linux-2.6.21.tar.bz2
  real 19m44.212s,  user 0m38.535s,  sys 0m38.866s

  star -no-fsync -x -v -f linux-2.6.21.tar.bz2
  real 3m21.976s,   user 0m42.529s,  sys 0m39.529s
  
  
  Samba and AFP are much faster, except for the fsync'ed star over
  AFP.  Is this a ZFS or NFS issue?
  
  Over NFS to non-ZFS drive
  -------------------------
  tar xfvj linux-2.6.21.tar.bz2
  real 5m0.211s,    user 0m45.330s,  sys 0m50.118s

  star xfv linux-2.6.21.tar.bz2
  real 3m26.053s,   user 0m43.069s,  sys 0m33.726s

  star -no-fsync -x -v -f linux-2.6.21.tar.bz2
  real 3m55.522s,   user 0m42.749s,  sys 0m35.294s
  
  It looks like ZFS is the culprit here.  The untarring is much faster
  to a single 80 GB UFS drive than to a 6-disk raidz array over NFS.
  
  
  Cheers,
  Siegfried
  
  
  PS. Getting netatalk to compile on amd64 Solaris required some  
  changes since i386 wasn't being defined anymore, and somehow it  
  thought the architecture was sparc64 for some linking steps.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS and Tar/Star Performance

2007-06-12 Thread eric kustarz


Over NFS to non-ZFS drive
-------------------------
tar xfvj linux-2.6.21.tar.bz2
real 5m0.211s,    user 0m45.330s,  sys 0m50.118s

star xfv linux-2.6.21.tar.bz2
real 3m26.053s,   user 0m43.069s,  sys 0m33.726s

star -no-fsync -x -v -f linux-2.6.21.tar.bz2
real 3m55.522s,   user 0m42.749s,  sys 0m35.294s

It looks like ZFS is the culprit here.  The untarring is much
faster to a single 80 GB UFS drive than to a 6-disk raidz array
over NFS.




Comparing a ZFS pool made out of a single disk to a single UFS  
filesystem would be a fair comparison.


What does your storage look like?

eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS and Tar/Star Performance

2007-06-12 Thread eric kustarz


On Jun 12, 2007, at 12:57 AM, Roch - PAE wrote:



Hi Siegfried, just making sure you had seen this:

http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine

Your NFS-to-non-ZFS runs are very fast.

That seems only possible if the hosting OS did not sync the
data when NFS required it, or if the drive in question had a
fast write cache.  If the drive did have a fast write cache
and ZFS was still slow using it, that would be the issue with
cache flushing mentioned in the blog entry.

But maybe there is also something to be learned from the
Samba and AFP results...

Takeaways:

ZFS and NFS just work together.

ZFS has an open issue with some storage arrays (the
issue is *not* related to NFS); it's being worked on
and will need collaboration from storage vendors.

NFS is slower than direct-attached storage, and can be
very much slower on single-threaded loads.


Roch knows this, but just to point out for others following the  
discussion...


In this case (single-threaded file creates) NFS is slower.  However,
NFS can run at 1GbE wire speed, which can be faster than your disks
(depending on how many spindles you have and whether you've striped
them for performance).




There are many ways to work around the slowness, but most
are just not safe for your data.


Yeah, the Samba numbers were interesting... so I guess it's OK in
CIFS for the client to be out of sync with the server?  That is, I
wonder how they handle the case where the client creates a file, the
server replies OK without the data/metadata going to stable storage,
the server crashes and comes back up, and the created file is not on
stable storage, but the client (and its app) thinks it exists...


I really would like to know the details of CIFS behavior compared to  
NFS...


eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS and Tar/Star Performance

2007-06-12 Thread Neil . Perrin

eric kustarz wrote:


Over NFS to non-ZFS drive
-------------------------
tar xfvj linux-2.6.21.tar.bz2
real 5m0.211s,    user 0m45.330s,  sys 0m50.118s

star xfv linux-2.6.21.tar.bz2
real 3m26.053s,   user 0m43.069s,  sys 0m33.726s

star -no-fsync -x -v -f linux-2.6.21.tar.bz2
real 3m55.522s,   user 0m42.749s,  sys 0m35.294s

It looks like ZFS is the culprit here.  The untarring is much faster
to a single 80 GB UFS drive than to a 6-disk raidz array over NFS.




Comparing a ZFS pool made out of a single disk to a single UFS  
filesystem would be a fair comparison.


Right, and to be fairer you need to ensure the disk write cache is
disabled (format -e) when testing UFS, as UFS does no flushing of
the cache.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss