[EMAIL PROTECTED] said:
> And I did another performance test by copying a 512MB file into a zfs pool
> created from 1 LUN only, and the test result was the same - 12 sec !?
> 
> NOTE: server V240, Solaris 10 (11/06), 2GB RAM, connected to HDS storage type
> AMS500 with two HBAs, QLogic QLA2342.
> 
> Any explanation? It seems that the stripe didn't actually affect performance!

The AMS500 has 1GB of cache or more, so your file is likely not large enough
to cause any activity on the physical disks.  Also, the ZFS cache may be
flushed out to the array after your I/O test program has already returned
and reported its writes complete, so you may not have measured the actual
time it took to write everything out.

Here's a simple test that I've found captures the time taken to flush
the ZFS cache to storage (it writes 20GB):

  /bin/time -p /bin/ksh -c "/bin/rm -f testfile && \
      /bin/dd if=/dev/zero of=testfile bs=1024k count=20480 && /bin/sync"
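
The "real" figure printed by /bin/time -p is the one to use; dividing the
20480 MB written by it gives the sustained throughput to the array.  For
example (the numbers here are only illustrative, not from our array):

  real 163.84
  user 0.05
  sys 21.30

which would work out to 20480 / 163.84 = 125 MB/s.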

Lastly, and you are probably already aware of this, our HDS array turned
out to be quite sensitive to the FC queue-depth setting.  We got faster
results by setting [s]sd_max_throttle according to the HDS installation
guide for Solaris.  For example, the guide says 32 is the largest setting
you can use, and the right value depends on how many LUNs are active per
port on the array (the Solaris default is 256).
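
For what it's worth, on our setup that amounted to a couple of lines in
/etc/system followed by a reboot.  The sketch below uses 32 only because
that's the guide's upper bound; work out the right value for your LUN count
from the HDS guide:

  * Limit the FC queue depth per LUN.  The ssd driver covers FC devices
  * on this platform, sd covers parallel SCSI; set whichever applies.
  set ssd:ssd_max_throttle=32
  set sd:sd_max_throttle=32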

Regards,

Marion

