I've got an X4500/thumper that is mainly used as an NFS server.

It has been discussed in the past that NFS performance with ZFS can be slow 
(when running "tar" to expand an archive with lots of files, for example). My 
understanding is that zfs/nfs is slow in this case because it is doing the 
"correct/safe" thing: each synchronous NFS operation waits until the data is 
committed to stable storage before the server replies to the client.

I can (and have) improved nfs/zfs performance by about 15x by adding "set 
zfs:zil_disable=1" or "set zfs:zfs_nocacheflush=1" to /etc/system, but this is 
unsafe (though it seems to be a common workaround).
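
For reference, this is roughly what those /etc/system entries look like (a 
sketch; both settings require a reboot to take effect, and both trade data 
integrity for speed, so they are only reasonable if losing the last few seconds 
of acknowledged writes after a crash is acceptable):

    * Disable the ZFS intent log: synchronous writes are acknowledged
    * before they reach stable storage.
    set zfs:zil_disable=1
    *
    * Or: stop ZFS from issuing cache-flush commands to the drives
    * (unsafe if the drives have volatile write caches).
    set zfs:zfs_nocacheflush=1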

But, I have never understood why zfs/nfs is so much slower than ufs/nfs in the 
case of expanding a tar archive.  Is ufs/nfs not properly committing the data 
to disk?

Anyway, with the just-released Solaris 10 10/08, zpool has been upgraded to 
version 10, which includes the option of using a separate storage device for the 
ZIL. It had been my impression that you would need a flash disk/SSD to hold the 
ZIL in order to see a performance improvement, but Richard Elling mentioned in 
an earlier post that you can also use a regular disk slice for this (see 
http://www.opensolaris.org/jive/thread.jspa?threadID=80213&tstart=15)

On an X4500 server, I had a zpool of 8 disks arranged as striped mirrors 
(RAID 10). I installed a flash archive of s10u6 on the server, then ran 
"zpool upgrade". Next, I used "zpool add ... log" to add a 50GB slice on the 
boot disk as a separate ZFS intent log.

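For anyone who wants to reproduce this, the steps were roughly as follows (a 
sketch; the pool name "tank" and the slice c0t0d0s4 are placeholders, not the 
actual names on my system):

    # zpool upgrade tank              (brings the pool to version 10)
    # zpool add tank log c0t0d0s4     (attaches the slice as a separate intent log)
    # zpool status tank               (the slice should now appear under "logs")
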
But I didn't see any improvement in NFS performance when running "gtar zxf 
Python-2.5.2.tgz" (the Python language source code). It took 0.6 sec to run on 
the local system (no NFS) and 2 min 20 sec over NFS. If I disable the ZIL, the 
command runs in about 10 sec on the NFS client. (It runs in about 15 sec over 
NFS to a UFS slice on the same server.) The separate intent log didn't seem to 
help at all in this case.
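
In case it matters, the timings above came from a simple "time" wrapper run on 
the client and on the server (a sketch; /mnt/tank and /tank are placeholder 
mount points, not my actual paths):

    client$ cd /mnt/tank                        (ZFS filesystem mounted over NFS)
    client$ time gtar zxf Python-2.5.2.tgz

    server$ cd /tank                            (same filesystem, accessed locally)
    server$ time gtar zxf Python-2.5.2.tgz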