This isn't a proper full response, but I did want to offer one
question/piece of advice.

You mentioned that iSCSI writes are at 15% for a couple of seconds and
then drop to 0%, in a cycle.  This suggests to me that you're monitoring
the physical disks of the zpool.  If that's the case, you're going to
see nonsense numbers.  Here's why:

Think of ZFS as a complete I/O subsystem.  You read and write into the
front of ZFS; inside, ZFS is working its magic; and on the back side it
is reading and writing to physical disks to keep the magic alive.  The
weird writes are ZFS transaction groups (TXGs) flushing out to physical
disk every 30 seconds.  ZFS bundles async writes together and then
pushes them all to physical disk as one optimized operation... so you
see almost 30 seconds of idle disk, then a big splat of I/O, then idle
again.  The same magic is happening for read I/O: the ARC is going to
answer _most_ of your read requests from memory, so the read I/O to
physical disk will look pretty minimal.
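
A quick way to see this (just a sketch; "tank" is a stand-in for your
own pool name) is to compare the pool-level view with the per-disk
view, and check how much the ARC is absorbing:

   zpool iostat tank 1                # pool-level I/O, 1-second samples
   iostat -xn 1                       # per-physical-disk view for comparison
   kstat -p zfs:0:arcstats:hits       # ARC read hits (served from memory)
   kstat -p zfs:0:arcstats:misses     # ARC read misses (went to disk)

You'll still see the TXG write bursts at the pool level, but the read
side will make the caching effect obvious.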

To put it another, more generic way: if you are looking at the physical
disks, you're seeing caching effects.

I would concentrate on the I/O performance as seen by your Windows
applications, or on raw network performance.
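
For the raw network side, a plain iperf run in the problem direction
(Windows -> Solaris) is a decent sanity check.  Rough sketch, assuming
iperf 2.x on both ends and the 400k window you're already using
("solaris-host" is just a placeholder):

   iperf -s -w 400k                          # on the Solaris box
   iperf -c solaris-host -w 400k -t 30       # on the Windows box

If throughput stays low without the explicit -w, that points at TCP
window scaling on the Windows side rather than at iSCSI or ZFS.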

Like I said, this isn't the answer you're looking for, but I wanted to
throw it in there, just in case.

benr.

PS: Isn't a PE1950 a little loud for a home storage server? :)


milosz wrote:
> hi all,
>
> i've posted about this before, but i've done some work since then and i'd 
> like to run it by people again.
>
> i'm having what seem to be two (maybe related, maybe not) issues with windows 
> -> snv_111 network/iscsi performance.
>
> hardware: snv_111 on homebrew box intended for san/nas use, intel server 
> gigabit nics, windows server 2003 on dell pe 1950, broadcom nics.
>
> (1) saturation on iscsi writes (from ms iscsi initiator -> snv_111) is 
> useless: getting around 12% of gigabit.
> (2a) iscsi write also suffers from periodic dips.  as in, i'll get 15% 
> saturation for a couple of seconds, then 0% for a second, repeat.
> (2b) i am able to get to around 50% utilization with: iscsi read, cifs read, 
> cifs write.  the cifs writes suffer from periodic dips as well, but they are 
> not as dramatic as the iscsi write dips (50% -> ~25%).
>
> here is what i have done to optimize:
>
> (1) set default mtu on e1000g interfaces to 9000 in e1000g.conf
> (2) turned on jumbo frames on the switch and broadcom nics on windows box.
> (3) turned off nagle's algorithm in windows.
> (4) turned off nagle's on solaris (ndd -set /dev/tcp tcp_naglim_def 1)
> (5) windows: set sackopts=1, tcp1323opts=3 (window scaling), 
> tcpreceivewindow=400k
> (6) solaris: set tcp_recv_hiwat to 400k
> (7) solaris: iscsitadm modify target -m 400000
> (8) turned off lso_enable and tx_hcksum_enable in e1000g.conf (this is 
> precautionary, since there have been issues in the past)
> (9) turned off lso and tco on broadcom nics (again, precautionary).
>
> iperf results:
>
> solaris -> windows: steady 99% network utilization, perfect.
> windows -> solaris: around .1% utilization (yep, less than 2 megabits max) 
> unless i specify 400k as the default window size for the client-side iperf; 
> in that case i get 98% with periodic dips to 40-80%
>
> notes:
>
> --netstat -f inet shows proper send & receive window sizes (400k).
> --not seeing anything unusual; snoop/tcpdump captures look fine; not seeing 
> any errors on nics or switchports.
> --results identical with lso & tco enabled/disabled on both sides
> --windows -> windows cifs reads & writes are steady at 58%
>
> so: any ideas?  based on the iperf evidence i think windows tcp window 
> scaling might not be working correctly (even though it seems to be working 
> more or less fine for cifs), which would maybe explain issue (1).  issue (2), 
> the periodic dips in transfer, i don't really know how to explain, but it is 
> specific to windows -> solaris operations.
>
> anyone else doing windows -> solaris iscsi?
>
> thanks,
>
> milosz
>   

