> From: [email protected] [mailto:discuss-
> [email protected]] On Behalf Of Jonathan B Bayer
> 
> I then did a simple test, copying a 4.3 gig ISO to each volume.  I did
> this two times, and timed the second copy; this way the overhead of
> allocating space was eliminated.
> 
> The results puzzled me:
> 
>     NFS:    5:08
>     iSCSI:  5:54

Unless I miss my guess, it takes the same amount of time either way, but
your NFS client is buffering the writes and reporting completion sooner than
the copy actually finishes.  One thing is extremely suspicious, though:
your NFS speed works out to 111 Mbit/sec, while your iSCSI speed works out
to 97 Mbit/sec.
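The arithmetic behind those figures is straightforward; a quick sketch
(treating 4.3 GB as decimal gigabytes, which is an assumption):

```python
# Back-of-the-envelope throughput check for the quoted copy times.
def mbit_per_sec(size_bytes, seconds):
    """Average throughput in Mbit/s for a copy of size_bytes over seconds."""
    return size_bytes * 8 / seconds / 1e6

iso = 4.3e9                                  # 4.3 GB ISO, decimal gigabytes assumed
nfs = mbit_per_sec(iso, 5 * 60 + 8)          # 5:08 -> roughly 112 Mbit/s
iscsi = mbit_per_sec(iso, 5 * 60 + 54)       # 5:54 -> roughly 97 Mbit/s
print(f"NFS:   {nfs:.1f} Mbit/s")
print(f"iSCSI: {iscsi:.1f} Mbit/s")
```

Both numbers land in 100 Mbit territory, which is what makes the result
suspicious on a gigabit link.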

Are you 100% certain you are getting Gigabit speeds on all your wires &
switches, client and server?  My explanation makes perfect sense as long as
there's something limiting your wire speed to 100Mbit.  

If you don't have something limiting you to 100Mbit ... Well ... you're only
getting 100Mbit, and that is unnaturally slow for either protocol.

PS.  I performed this exact same test before, and I found there was no
performance difference between NFS and iSCSI.  Interested to see if your
results agree.

PPS. With either NFS or iSCSI, you're likely to be limited by ZIL performance
on the ZFS side, because both perform sync-mode writes by default.  You
could disable your ZIL, or add SSD log devices to the server...  Or hang it
all, and just do read-performance testing instead.  I suspect you'll get a
more accurate comparison of iSCSI vs NFS by reading from the server rather
than writing to it.
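If you do stick with write testing, one way to take client-side buffering out
of the timing is to fsync the destination before stopping the clock.  A
minimal sketch, assuming you'd point dst_path at the mounted volume
(timed_copy is a hypothetical helper, not anything from the original test):

```python
import os
import time

def timed_copy(src_path, dst_path, bufsize=1 << 20):
    """Copy src to dst and return elapsed seconds, including an fsync()
    so buffered writes can't make the copy look faster than it is."""
    start = time.monotonic()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(bufsize):
            dst.write(chunk)
        dst.flush()                # flush Python's user-space buffer
        os.fsync(dst.fileno())     # force the data to stable storage
    return time.monotonic() - start
```

Note that fsync only covers the client side; on the server, ZFS sync-write
behavior (and hence the ZIL) still applies as described above.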

_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/