I'm not sure about the workload, but I did configure the volumes with the block
size in mind; it didn't seem to do much. It could be because I'm basically
layering ZFS RAID on top of HW RAID, and I just don't know the equation for
picking a smarter block size. It seems like if I have 2 arrays with 64k stripes
striped together, 128k would be ideal for my ZFS datasets, but again, my logic
isn't infinite when it comes to this fun stuff ;)
The 6120 has 2 volumes, each with a 64k stripe size. I then created a raidz
across the 2 volumes and tried both 64k and 128k; I do get a bit of a
performance gain on rewrite at 128k.
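For what it's worth, here's the equation I've been assuming (purely my own
back-of-envelope; I may be missing a term):

```shell
# Full-stripe write = per-volume stripe unit x number of striped volumes.
stripe_kb=64   # 6120 'blocksize' per volume
nvols=2        # two volumes under ZFS
echo "candidate recordsize: $((stripe_kb * nvols))k"   # -> candidate recordsize: 128k
```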
These are dd tests, by the way:
* This one is local, and works just great:
bash-3.00# date ; uname -a
Thu Apr 19 21:11:22 EDT 2007
SunOS yuryaku 5.10 Generic_125100-04 sun4u sparc SUNW,Sun-Fire-V210
bash-3.00# df -k
Filesystem kbytes used avail capacity Mounted on
...
se6120 697761792 26 666303904 1% /pool/se6120
se6120/rfs-v10 31457280 9710895 21746384 31% /pool/se6120/rfs-v10
bash-3.00# time dd if=/dev/zero of=/pool/se6120/rfs-v10/rw-test-1.loo bs=8192 count=131072
131072+0 records in
131072+0 records out
real 0m13.783s (second run: 0m14.136s)
user 0m0.331s
sys 0m9.947s
* This one is from an HP-UX 11i system with an NFS mount from the V210 listed above:
onyx:/rfs># date ; uname -a
Thu Apr 19 21:15:02 EDT 2007
HP-UX onyx B.11.11 U 9000/800 1196424606 unlimited-user license
onyx:/rfs># bdf
Filesystem kbytes used avail %used Mounted on
...
yuryaku.sol:/pool/se6120/rfs-v10
31457280 9710896 21746384 31% /rfs/v10
onyx:/rfs># time dd if=/dev/zero of=/rfs/v10/rw-test-2.loo bs=8192 count=131072
131072+0 records in
131072+0 records out
real 1m2.25s (other runs: 0m29.02s, 0m50.49s)
user 0m0.30s
sys 0m8.16s
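For scale, a quick back-of-envelope on those numbers (using the first `real`
time from each test; my own arithmetic, nothing measured beyond the runs above):

```shell
# dd wrote 131072 records x 8192 bytes = 1 GiB in each test.
bytes=$((8192 * 131072))
echo "total: $((bytes / 1048576)) MiB"
# Local: ~13.78s; NFS from the HP-UX box: ~62.25s.
awk 'BEGIN { printf "local: %.1f MiB/s  nfs: %.1f MiB/s\n", 1024/13.78, 1024/62.25 }'
```

So roughly 74 MiB/s locally versus 16 MiB/s over NFS on the same dataset.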
* My 6120 tidbits of interest:
6120 Release 3.2.6 Mon Feb 5 02:26:22 MST 2007 (xxx.xxx.xxx.xxx)
Copyright (C) 1997-2006 Sun Microsystems, Inc. All Rights Reserved.
daikakuji:/:<1>vol mode
volume mounted cache mirror
v1 yes writebehind off
v2 yes writebehind off
daikakuji:/:<5>vol list
volume capacity raid data standby
v1 340.851 GB 5 u1d01-06 u1d07
v2 340.851 GB 5 u1d08-13 u1d14
daikakuji:/:<6>sys list
controller : 2.5
blocksize : 64k
cache : auto
mirror : auto
mp_support : none
naca : off
rd_ahead : off
recon_rate : med
sys memsize : 256 MBytes
cache memsize : 1024 MBytes
fc_topology : auto
fc_speed : 2Gb
disk_scrubber : on
ondg : befit
----
Am I missing something? As for the rewrite test, I will tinker some more and
paste the results soon.
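Concretely, the tinkering I have in mind is just varying the recordsize
property and rerunning dd (dataset name from my setup above; note recordsize
only affects newly written files):

```shell
# Match recordsize to the dd block size, rerun, then compare with the default:
zfs set recordsize=8k se6120/rfs-v10
zfs get recordsize se6120/rfs-v10
# ...rerun the dd tests, then:
zfs set recordsize=128k se6120/rfs-v10
```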
Thanks in advance,
Andy Lubel
-----Original Message-----
From: Bill Moore [mailto:[EMAIL PROTECTED]
Sent: Fri 4/20/2007 5:13 PM
To: Andy Lubel
Cc: [email protected]
Subject: Re: [zfs-discuss] ZFS+NFS on storedge 6120 (sun t4)
When you say rewrites, can you give more detail? For example, are you
rewriting in 8K chunks, random sizes, etc? The reason I ask is because
ZFS will, by default, use 128K blocks for large files. If you then
rewrite a small chunk at a time, ZFS is forced to read 128K, modify the
small chunk you're changing, and then write 128K. Obviously, this has
adverse effects on performance. :) If your typical workload has a
preferred block size that it uses, you might try setting the recordsize
property in ZFS to match - that should help.
If you're completely rewriting the file, then I can't imagine why it
would be slow. The only thing I can think of is the forced sync that
NFS does on file close. But if you set zil_disable in /etc/system
and reboot, you shouldn't see poor performance in that case.
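For reference, the /etc/system line would look like this (the usual form of
that tunable on Solaris 10; note it's global, affecting every pool on the box,
so treat it as a test-only setting):

```
* /etc/system fragment -- disables the ZIL globally (test only; opens a
* data-loss window on power failure for synchronous clients such as NFS)
set zfs:zil_disable = 1
```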
Other folks have had good success with NFS/ZFS performance (while others
have not). If it's possible, could you characterize your workload in a
bit more detail?
--Bill
On Fri, Apr 20, 2007 at 04:07:44PM -0400, Andy Lubel wrote:
>
> We are having a really tough time accepting the performance of the ZFS
> and NFS interaction. I have tried so many different ways of trying to
> make it work (even zfs set:zil_disable 1) and I'm still nowhere near
> the performance of a standard NFS-mounted UFS filesystem -
> insanely slow, especially on file rewrites.
>
> We have been combing the message boards, and it looks like there was a
> lot of talk about this ZFS+NFS interaction back in November and
> before, but since then I have not seen much. It seems the only fix up
> to that date was to disable the ZIL; is that still the case? Did anyone
> ever get closure on this?
>
> We are running Solaris 10 (SPARC), latest patched 11/06 release,
> connecting directly via FC to a 6120 with 2 RAID-5 volumes, serving
> clients over a bge interface (gigabit). I tried raidz, mirror, and
> stripe with no appreciable difference in speed. The clients connecting
> to this machine are HP-UX 11i and OS X 10.4.9, and they both show
> similar performance characteristics.
>
> Any insight would be appreciated - we really like ZFS compared to any
> filesystem we have EVER worked with and don't want to revert if at all
> possible!
>
>
> TIA,
>
> Andy Lubel
>
> _______________________________________________
> zfs-discuss mailing list
> [email protected]
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss