Last week we got our new Sun Fire X4540 system with Solaris 10 x86 u6 
preinstalled. We installed the recommended patch cluster from 22 April 2009. 
The 46 disks have been set up as a single ZFS pool of RAIDZ2 vdevs plus 4 
spare disks. Each RAIDZ2 vdev spans all 6 controllers. The system has 3 
gigabit interfaces connected. We have the same setup as described in this 
recipe:

http://blogs.sun.com/timthomas/entry/recipe_for_a_zfs_raid 
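
For reference, the pool layout corresponds to a create command along these 
lines (reconstructed from the zpool status output below; not necessarily 
the exact command we ran):

```shell
# Sketch: one RAIDZ2 vdev per disk slot, each spanning controllers c0-c5,
# plus four hot spares -- reconstructed from the zpool status output.
zpool create export \
    raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
    raidz2 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0 \
    raidz2 c0t3d0 c1t3d0 c2t3d0 c3t3d0 c4t3d0 c5t3d0 \
    raidz2 c0t4d0 c1t4d0 c2t4d0 c3t4d0 c4t4d0 c5t4d0 \
    raidz2 c0t5d0 c1t5d0 c2t5d0 c3t5d0 c4t5d0 c5t5d0 \
    raidz2 c0t6d0 c1t6d0 c2t6d0 c3t6d0 c4t6d0 c5t6d0 \
    raidz2 c0t7d0 c1t7d0 c2t7d0 c3t7d0 c4t7d0 c5t7d0 \
    spare c2t0d0 c3t0d0 c4t0d0 c5t0d0
```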

PROBLEM DESCRIPTION
-------------------
 
Write access over NFS with many small files (e.g. a cvs checkout) to a ZFS 
filesystem is about 20x slower than to our NetApp FAS 3140 system. Writing 
large files (1 GB) is about the same speed. On the NFS server itself, writes 
local to the ZFS filesystem are about as fast as writes to the local UFS 
system volume. Writes to the same filesystem mounted over NFS via localhost, 
however, are also extremely slow.

 
COMPARISONS
-----------
 
NFS share on NetApp FAS 3140: 
bernd@nfs-server:/net/netapp-server/vol/tmp> timex cvs -Q checkout myProject 
 
real          37.95 
user           1.29 
sys            2.83 
 
Local disks with a ZFS filesystem (RAIDZ2): 
bernd@nfs-server:/export/tmp> timex cvs -Q checkout myProject 
 
real          18.66 
user           1.03 
sys            1.43 
 
Local system disk with mirrored UFS filesystem: 

bernd@nfs-server:/var/tmp> timex cvs -Q checkout myProject 
 
real          17.02 
user           1.09 
sys            2.15 
 
Locally via loopback-mounted NFS: 

bernd@nfs-server:/> mount -F nfs localhost:/export/tmp /mnt
bernd@nfs-server:/> cd /mnt
bernd@nfs-server:/mnt> timex cvs -Q checkout myProject 
 
real       12:37.80 
user           1.58 
sys            7.36 
 
Remotely from an M4000 via NFS to the ZFS pool: 

bernd@m4000-host:/net/nfs-server/export/tmp> timex cvs -Q checkout myProject
 
real       11:47.97 
user           3.37 
sys           14.45 
 
All writes without NFS are fast. With NFS in between, they are about 20x 
slower. There is no network bottleneck in between, and the slowdown also 
occurs on the system itself via loopback-mounted NFS. 
 
=> The problem is probably in the NFS sharing path for ZFS. 
 
The system is running fine so far, but users are complaining about slow 
access. 
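
To take cvs out of the picture entirely, the small-file pattern can be 
reproduced with a trivial script that creates many tiny files (the file 
count and default path are arbitrary):

```shell
#!/bin/sh
# Create many small files to mimic the metadata-heavy cvs checkout
# workload. Pass the target directory as an argument, e.g. /export/tmp
# (local ZFS) or /mnt (loopback NFS), to compare the two paths.
DIR=${1:-/tmp/smallfile-bench}
mkdir -p "$DIR"
i=0
while [ "$i" -lt 1000 ]; do
    printf 'hello' > "$DIR/file$i"
    i=$((i + 1))
done
# Report how many files were created.
ls "$DIR" | wc -l
```

Running this under timex against /export/tmp and then against the loopback 
mount /mnt should show the same gap if cvs itself is not the culprit.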
 
 
CONFIGURATION
-------------
 
bernd@nfs-server:~> /usr/sbin/zpool status 
  pool: export 
 state: ONLINE 
 scrub: none requested 
config: 
 
        NAME        STATE     READ WRITE CKSUM 
        export      ONLINE       0     0     0 
          raidz2    ONLINE       0     0     0 
            c0t1d0  ONLINE       0     0     0 
            c1t1d0  ONLINE       0     0     0 
            c2t1d0  ONLINE       0     0     0 
            c3t1d0  ONLINE       0     0     0 
            c4t1d0  ONLINE       0     0     0 
            c5t1d0  ONLINE       0     0     0 
          raidz2    ONLINE       0     0     0 
            c0t2d0  ONLINE       0     0     0 
            c1t2d0  ONLINE       0     0     0 
            c2t2d0  ONLINE       0     0     0 
            c3t2d0  ONLINE       0     0     0 
            c4t2d0  ONLINE       0     0     0 
            c5t2d0  ONLINE       0     0     0 
          raidz2    ONLINE       0     0     0 
            c0t3d0  ONLINE       0     0     0 
            c1t3d0  ONLINE       0     0     0 
            c2t3d0  ONLINE       0     0     0 
            c3t3d0  ONLINE       0     0     0 
            c4t3d0  ONLINE       0     0     0 
            c5t3d0  ONLINE       0     0     0 
          raidz2    ONLINE       0     0     0 
            c0t4d0  ONLINE       0     0     0 
            c1t4d0  ONLINE       0     0     0 
            c2t4d0  ONLINE       0     0     0 
            c3t4d0  ONLINE       0     0     0 
            c4t4d0  ONLINE       0     0     0 
            c5t4d0  ONLINE       0     0     0 
          raidz2    ONLINE       0     0     0 
            c0t5d0  ONLINE       0     0     0 
            c1t5d0  ONLINE       0     0     0 
            c2t5d0  ONLINE       0     0     0 
            c3t5d0  ONLINE       0     0     0 
            c4t5d0  ONLINE       0     0     0 
            c5t5d0  ONLINE       0     0     0 
          raidz2    ONLINE       0     0     0 
            c0t6d0  ONLINE       0     0     0 
            c1t6d0  ONLINE       0     0     0 
            c2t6d0  ONLINE       0     0     0 
            c3t6d0  ONLINE       0     0     0 
            c4t6d0  ONLINE       0     0     0 
            c5t6d0  ONLINE       0     0     0 
          raidz2    ONLINE       0     0     0 
            c0t7d0  ONLINE       0     0     0 
            c1t7d0  ONLINE       0     0     0 
            c2t7d0  ONLINE       0     0     0 
            c3t7d0  ONLINE       0     0     0 
            c4t7d0  ONLINE       0     0     0 
            c5t7d0  ONLINE       0     0     0 
        spares 
          c2t0d0    AVAIL    
          c3t0d0    AVAIL    
          c4t0d0    AVAIL    
          c5t0d0    AVAIL    
 
errors: No known data errors 
 
bernd@nfs-server:~> /usr/sbin/zpool get all export 
NAME    PROPERTY     VALUE       SOURCE 
export  size         19.0T       - 
export  used         3.39T       - 
export  available    15.6T       - 
export  capacity     17%         - 
export  altroot      -           default 
export  health       ONLINE      - 
export  guid         6329791088454615229  - 
export  version      10          default 
export  bootfs       -           default 
export  delegation   on          default 
export  autoreplace  off         default 
export  cachefile    -           default 
export  failmode     wait        default 
 
bernd@nfs-server:~> /usr/sbin/zfs get all export/tmp 
NAME        PROPERTY         VALUE                  SOURCE 
export/tmp  type             filesystem             - 
export/tmp  creation         Thu Apr 30 11:59 2009  - 
export/tmp  used             39.0G                  - 
export/tmp  available        61.0G                  - 
export/tmp  referenced       39.0G                  - 
export/tmp  compressratio    1.00x                  - 
export/tmp  mounted          yes                    - 
export/tmp  quota            100G                   local 
export/tmp  reservation      none                   default 
export/tmp  recordsize       128K                   default 
export/tmp  mountpoint       /export/tmp            default 
export/tmp  sharenfs         rw,anon=0              local 
export/tmp  checksum         on                     default 
export/tmp  compression      off                    default 
export/tmp  atime            on                     default 
export/tmp  devices          on                     default 
export/tmp  exec             on                     default 
export/tmp  setuid           on                     default 
export/tmp  readonly         off                    default 
export/tmp  zoned            off                    default 
export/tmp  snapdir          hidden                 default 
export/tmp  aclmode          groupmask              default 
export/tmp  aclinherit       restricted             default 
export/tmp  canmount         on                     default 
export/tmp  shareiscsi       off                    default 
export/tmp  xattr            on                     default 
export/tmp  copies           1                      default 
export/tmp  version          3                      - 
export/tmp  utf8only         off                    - 
export/tmp  normalization    none                   - 
export/tmp  casesensitivity  sensitive              - 
export/tmp  vscan            off                    default 
export/tmp  nbmand           off                    default 
export/tmp  sharesmb         off                    default 
export/tmp  refquota         none                   default 
export/tmp  refreservation   none                   default 
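
The only properties with SOURCE=local in the output above are quota and 
sharenfs, so the filesystem setup is roughly equivalent to (reconstructed, 
not the exact commands):

```shell
# Sketch: everything else is left at defaults; only quota and sharenfs
# are set locally, per the zfs get output.
zfs create export/tmp
zfs set quota=100G export/tmp
zfs set sharenfs=rw,anon=0 export/tmp
```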


Is this a known issue, or did I do something wrong? The setup is very simple 
and straightforward, so most likely we are not the only ones with this problem.

Thanks in advance for help and best regards,
Bernd
-- 
This message posted from opensolaris.org
