Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-12-01 Thread Roch Bourbonnais

On Nov 15, 2008, at 08:49, Nicholas Lee wrote:



 On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED] wrote:
 In short, separate logs with rotating rust may reduce sync write latency by
 perhaps 2-10x on an otherwise busy system.  Using write optimized SSDs
 will reduce sync write latency by perhaps 10x in all cases.  This is one of
 those situations where we can throw hardware at the problem to solve it.

 Are the SSD devices Sun is using in the 7000s available for general  
 use?  Are they OEM parts or special items?


Custom designed for the Hybrid Storage Pool.

-r


 Nicholas


Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-12-01 Thread Richard Elling
Nicholas Lee wrote:


 On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED] wrote:

 In short, separate logs with rotating rust may reduce sync write latency by
 perhaps 2-10x on an otherwise busy system.  Using write optimized SSDs
 will reduce sync write latency by perhaps 10x in all cases.  This is one of
 those situations where we can throw hardware at the problem to solve it.


 Are the SSD devices Sun is using in the 7000s available for general 
 use?  Are they OEM parts or special items?


Yes, they are OEMed. See:
http://www.marketwatch.com/news/story/STEC-Support-Suns-Unified-Storage/story.aspx?guid=%7B07043E00-7628-411D-B24A-2FFEC8B8F706%7D

The ZEUS product line makes a fine slog while the MACH8 product line
works nicely for L2ARC.
 -- richard



Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-11-14 Thread Richard Elling
Neil Perrin wrote:
 I wouldn't expect any improvement using a separate disk slice for the Intent Log
 unless that disk was much faster and was otherwise largely idle. If it was heavily
 used then I'd expect quite the performance degradation as the disk head bounces
 around between slices. Separate intent logs are really recommended for fast
 devices (SSDs or NVRAM).

Yes.  A simple test like this on an otherwise idle file system will not
show dramatic improvements because you are still dealing with rotations,
which incur an average latency of about 4.17 ms for a thumper disk (7,200
rpm).  When the file system is busy with other work, such as reads, the log
latency to the main pool may jump 2-10x, but it should remain fairly close
to 4 ms for an otherwise idle disk, such as the boot disk.  With SSDs, like
the ones we build into the Sun Storage 7000 family, the separate log latency
will be on the order of a few hundred microseconds, or about an order of
magnitude faster than the average case with rotating media.
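
For anyone checking the arithmetic, the 4.17 ms figure is simply the average
rotational latency of a 7,200 rpm drive:

  60,000 ms/min / 7,200 rpm = 8.33 ms per revolution
  8.33 ms / 2               = ~4.17 ms average wait for the target sector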

In short, separate logs with rotating rust may reduce sync write latency by
perhaps 2-10x on an otherwise busy system.  Using write optimized SSDs
will reduce sync write latency by perhaps 10x in all cases.  This is one of
those situations where we can throw hardware at the problem to solve it.
 -- richard

 When you're comparing against UFS, is the write cache disabled (use format -e)?
 Otherwise UFS is unsafe.

 To get an apples-to-apples perf comparison, you can compare either:

 Safe mode
 ---------
 ZFS with default settings (zil_disable=0 & zfs_nocacheflush=0)
 against UFS with write cache disabled, i.e. the safe mode.

 Unsafe mode (unless the device cache is non-volatile)
 -----------------------------------------------------
 ZFS with zil_disable=0 & zfs_nocacheflush=1
 against UFS with write cache enabled.

 From my reading of one of your comparisons, ZFS takes 10s vs 15s for UFS
 (unsafe mode).

 Neil.

 On 11/13/08 16:23, Doug wrote:
   
 I've got an X4500/thumper that is mainly used as an NFS server.

 It has been discussed in the past that NFS performance with ZFS can be slow 
 (when running tar to expand an archive with lots of files, for example.)  
 My understanding is that zfs/nfs is slow in this case because it is doing
 the correct/safe thing of waiting for the files to be written to disk.

 I can (and have) improved nfs/zfs performance by about 15x by adding set
 zfs:zil_disable=1 or zfs:zfs_nocacheflush=1 to /etc/system, but this is
 unsafe (though a common workaround?)

 But, I have never understood why zfs/nfs is so much slower than ufs/nfs in 
 the case of expanding a tar archive.  Is ufs/nfs not properly committing the 
 data to disk?

 Anyway, with the just-released Solaris 10 10/08, zpool has been upgraded to
 version 10, which includes the option of using a separate storage device for
 the ZIL.  It had been my impression that you would need to use a flash
 disk/SSD to store the ZIL to improve performance, but Richard Elling
 mentioned in an earlier post that you could use a regular disk slice for
 this also (see
 http://www.opensolaris.org/jive/thread.jspa?threadID=80213&tstart=15)

 On an X4500 server, I had a zpool of 8 disks arranged in RAID 10.  I 
 installed a flash archive of s10u6 on the server then ran zpool upgrade.  
 Next, I used 
 zpool add log to add a 50GB slice on the boot disk for the zfs intent log. 
  

 But, I didn't see any improvement in NFS performance in running gtar zxf
 Python-2.5.2.tgz (Python language source code).  It took 0.6sec to run on
 the local system (no NFS) and 2min20sec over NFS.  If I disable the ZIL, the 
 command runs in about 10sec on the NFS client.  (It runs in about 15 seconds 
 over NFS to a UFS slice on the NFS server.)  The separate intent log didn't 
 seem to do anything in this case.
 


Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-11-14 Thread Nicholas Lee
On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED] wrote:

 In short, separate logs with rotating rust may reduce sync write latency by
 perhaps 2-10x on an otherwise busy system.  Using write optimized SSDs
 will reduce sync write latency by perhaps 10x in all cases.  This is one of
 those situations where we can throw hardware at the problem to solve it.
 http://www.opensolaris.org/jive/thread.jspa?threadID=80213&tstart=15


Are the SSD devices Sun is using in the 7000s available for general use?
 Are they OEM parts or special items?

Nicholas


[zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-11-13 Thread Doug
I've got an X4500/thumper that is mainly used as an NFS server.

It has been discussed in the past that NFS performance with ZFS can be slow 
(when running tar to expand an archive with lots of files, for example.)  My 
understanding is that zfs/nfs is slow in this case because it is doing the 
correct/safe thing of waiting for the files to be written to disk.

I can (and have) improved nfs/zfs performance by about 15x by adding set 
zfs:zil_disable=1 or zfs:zfs_nocacheflush=1 to /etc/system, but this is 
unsafe (though a common workaround?)
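
For the record, the workaround amounts to lines like these in /etc/system,
followed by a reboot (a sketch only; either setting trades away sync-write
safety for NFS clients):

  set zfs:zil_disable = 1
  * or, alternatively:
  set zfs:zfs_nocacheflush = 1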

But, I have never understood why zfs/nfs is so much slower than ufs/nfs in the 
case of expanding a tar archive.  Is ufs/nfs not properly committing the data 
to disk?

Anyway, with the just-released Solaris 10 10/08, zpool has been upgraded to 
version 10, which includes the option of using a separate storage device for the 
ZIL.  It had been my impression that you would need to use a flash disk/SSD to 
store the ZIL to improve performance, but Richard Elling mentioned in an earlier 
post that you could use a regular disk slice for this also (see 
http://www.opensolaris.org/jive/thread.jspa?threadID=80213&tstart=15)

On an X4500 server, I had a zpool of 8 disks arranged in RAID 10.  I installed 
a flash archive of s10u6 on the server then ran zpool upgrade.  Next, I used 
zpool add log to add a 50GB slice on the boot disk for the zfs intent log.  
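
For concreteness, the sequence was roughly the following (a sketch; the pool
name tank and slice c0t0d0s7 are stand-ins for the actual pool and boot-disk
slice):

  zpool upgrade tank
  zpool add tank log c0t0d0s7
  zpool status tank    # the slice now shows up under a separate logs section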

But, I didn't see any improvement in NFS performance in running gtar zxf 
Python-2.5.2.tgz (Python language source code).  It took 0.6sec to run on the 
local system (no NFS) and 2min20sec over NFS.  If I disable the ZIL, the 
command runs in about 10sec on the NFS client.  (It runs in about 15 seconds 
over NFS to a UFS slice on the NFS server.)  The separate intent log didn't 
seem to do anything in this case.
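
The test itself was essentially the following, run on the NFS client (the
mount point and tarball path here are only illustrative):

  cd /mnt/thumper/scratch
  time gtar zxf /var/tmp/Python-2.5.2.tgz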


Re: [zfs-discuss] s10u6--will using disk slices for zfs logs improve nfs performance?

2008-11-13 Thread Neil Perrin
I wouldn't expect any improvement using a separate disk slice for the Intent Log
unless that disk was much faster and was otherwise largely idle. If it was heavily
used then I'd expect quite the performance degradation as the disk head bounces
around between slices. Separate intent logs are really recommended for fast
devices (SSDs or NVRAM).

When you're comparing against UFS, is the write cache disabled (use format -e)?
Otherwise UFS is unsafe.

To get an apples-to-apples perf comparison, you can compare either:

Safe mode
---------
ZFS with default settings (zil_disable=0 & zfs_nocacheflush=0)
against UFS with write cache disabled, i.e. the safe mode.

Unsafe mode (unless the device cache is non-volatile)
-----------------------------------------------------
ZFS with zil_disable=0 & zfs_nocacheflush=1
against UFS with write cache enabled.
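
Concretely, the two setups could look roughly like this (a sketch; c0t1d0
stands in for the disk holding the UFS file system, and /etc/system changes
take effect after a reboot):

  Safe mode:    leave ZFS at its defaults; disable the UFS disk's write cache
                with format -e -d c0t1d0 (cache -> write_cache -> disable)
  Unsafe mode:  enable the UFS write cache the same way, and for ZFS add
                set zfs:zfs_nocacheflush = 1 to /etc/system (zil_disable stays 0)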

From my reading of one of your comparisons, ZFS takes 10s vs 15s for UFS
(unsafe mode).

Neil.

On 11/13/08 16:23, Doug wrote:
 I've got an X4500/thumper that is mainly used as an NFS server.
 
 It has been discussed in the past that NFS performance with ZFS can be slow 
 (when running tar to expand an archive with lots of files, for example.)  
 My understanding is that zfs/nfs is slow in this case because it is doing
 the correct/safe thing of waiting for the files to be written to disk.
 
 I can (and have) improved nfs/zfs performance by about 15x by adding set
 zfs:zil_disable=1 or zfs:zfs_nocacheflush=1 to /etc/system, but this is
 unsafe (though a common workaround?)
 
 But, I have never understood why zfs/nfs is so much slower than ufs/nfs in 
 the case of expanding a tar archive.  Is ufs/nfs not properly committing the 
 data to disk?
 
 Anyway, with the just-released Solaris 10 10/08, zpool has been upgraded to
 version 10, which includes the option of using a separate storage device for
 the ZIL.  It had been my impression that you would need to use a flash
 disk/SSD to store the ZIL to improve performance, but Richard Elling
 mentioned in an earlier post that you could use a regular disk slice for
 this also (see
 http://www.opensolaris.org/jive/thread.jspa?threadID=80213&tstart=15)
 
 On an X4500 server, I had a zpool of 8 disks arranged in RAID 10.  I 
 installed a flash archive of s10u6 on the server then ran zpool upgrade.  
 Next, I used 
 zpool add log to add a 50GB slice on the boot disk for the zfs intent log.  
 
 But, I didn't see any improvement in NFS performance in running gtar zxf 
 Python-2.5.2.tgz (Python language source code).  It took 0.6sec to run on the 
 local system (no NFS) and 2min20sec over NFS.  If I disable the ZIL, the 
 command runs in about 10sec on the NFS client.  (It runs in about 15 seconds 
 over NFS to a UFS slice on the NFS server.)  The separate intent log didn't 
 seem to do anything in this case.