Have a gander below:

> Agreed - it sucks - especially for small file use.  Here's a 5,000 ft view
> of the performance while unzipping and extracting a tar archive.  First
> the test is run on a SPARC 280R running Build 51a with dual 900MHz USIII
> CPUs and 4Gb of RAM:
>
> $ cp emacs-21.4a.tar.gz /tmp
> $ ptime gunzip -c /tmp/emacs-21.4a.tar.gz |tar xf -
>
> real       13.092
> user        2.083
> sys         0.183

Here is the same test on my machine (Solaris 8, Ultra 2, 200 MHz):

# cd /tmp
# ptime /export/home/dclarke/star -x -time -z file=/tmp/emacs-21.4a.tar.gz
/export/home/dclarke/star: 7457 blocks + 0 bytes (total of 76359680 bytes =
74570.00k).
/export/home/dclarke/star: Total time 11.057sec (6744 kBytes/sec)

real       11.146
user        0.300
sys         1.762

And the same test on the same machine, extracting to a local UFS filesystem:

# cd /mnt/test
# ptime /export/home/dclarke/star -x -time -z file=/tmp/emacs-21.4a.tar.gz
/export/home/dclarke/star: 7457 blocks + 0 bytes (total of 76359680 bytes =
74570.00k).
/export/home/dclarke/star: Total time 92.378sec (807 kBytes/sec)

real     1:32.463
user        0.351
sys         3.658

Pretty much what I expect from an old, old Solaris 8 box.

Then I try an NFS filesystem shared from ZFS on snv_46:

# cat /etc/release
                           Solaris Nevada snv_46 SPARC
           Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                            Assembled 14 August 2006

# zfs set sharenfs=nosub,nosuid,rw=pluto,root=pluto zfs0/backup
# zfs get sharenfs zfs0/backup
NAME             PROPERTY       VALUE                      SOURCE
zfs0/backup      sharenfs       nosub,nosuid,rw=pluto,root=pluto  local
#

# tip hardwire
connected

pluto console login: root
Password:
Nov 22 18:41:50 pluto login: ROOT LOGIN /dev/console
Last login: Tue Nov 21 02:07:39 on console
Sun Microsystems Inc.   SunOS 5.8       Generic Patch   February 2004
# cat /etc/release
                       Solaris 8 2/04 s28s_hw4wos_05a SPARC
           Copyright 2004 Sun Microsystems, Inc.  All Rights Reserved.
                            Assembled 08 January 2004

# dfshares mars
RESOURCE                                  SERVER ACCESS    TRANSPORT
      mars:/export/zfs/backup               mars  -         -
      mars:/export/zfs/qemu                 mars  -         -
#

# mkdir /export/nfs
# mount -F nfs -o bg,intr,nosuid mars:/export/zfs/backup /export/nfs
#
# cd /export/nfs/titan
# ls -lap
total 142780
drwxr-xr-x   3 dclarke  other          8 Nov 22 19:08 ./
drwxr-xr-x   9 root     sys           12 Nov 15 20:14 ../
-rw-r--r--   1 phil     csw        13102 Jul 12 12:32 README.csw
-rw-r--r--   1 dclarke  csw       189389 Sep 14 19:33 ae-2.2.0.tar.gz
-rw-r--r--   1 dclarke  csw      91965440 Jul 25 12:56 dclarke.tar
-rw-r--r--   1 dclarke  csw      20403483 Nov 22 19:07 emacs-21.4a.tar.gz
-rw-r--r--   1 dclarke  csw      5468160 Jul 25 12:57 root.tar
drwxr-xr-x   5 dclarke  csw            5 May 24  2006 schily/
#

Now that my Solaris 8 box has the ZFS-backed NFS filesystem mounted, I run the test again:

# ptime /export/home/dclarke/star -x -time -z file=/tmp/emacs-21.4a.tar.gz
/export/home/dclarke/star: 7457 blocks + 0 bytes (total of 76359680 bytes =
74570.00k).
/export/home/dclarke/star: Total time 215.958sec (345 kBytes/sec)

real     3:36.048
user        0.397
sys         5.961
#

That was extracting onto the ZFS filesystem mounted over NFS.

What if I run the same test locally on the server, directly on ZFS?

# ptime /root/bin/star -x -time -z file=/tmp/emacs-21.4a.tar.gz
/root/bin/star: 7457 blocks + 0 bytes (total of 76359680 bytes = 74570.00k).
/root/bin/star: Total time 32.238sec (2313 kBytes/sec)

real       32.680
user        6.973
sys         9.945
#

So gee ... that's all pretty slow, but really, really slow with ZFS shared
out via NFS.

Wow ... good to know.  I *never* would have seen that coming.

Dennis

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
