Folks,
I've tried finding a definitive answer to this, and have not been able to - 
please forgive if I'm posting in the wrong area...

In the last week I have tried 2008.11, 2008.05, and the most recent build of
the Community Edition.  In all cases, I have followed the many documents out on
the 'net for configuring an iSCSI target server for an ESX cluster I have.  In
addition, I have also tried an NFS server.
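
For reference, the targets were set up roughly as follows -- a sketch of the
2008.x-era commands, where "tank", the volume names, and the size are
placeholders for my actual pool, and I'm assuming the legacy shareiscsi
property rather than COMSTAR:

```shell
# Create a zvol and export it as an iSCSI target
# ("tank" and 100G are placeholders for the real pool/size).
zfs create -V 100G tank/esx-lun
zfs set shareiscsi=on tank/esx-lun
iscsitadm list target -v        # confirm the target is exported

# For the NFS variant, share a filesystem instead:
zfs create tank/esx-nfs
zfs set sharenfs=on tank/esx-nfs
```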

In all cases, network throughput is bottlenecked somewhere.  With iSCSI, 
it's between 1-4MB/sec.  With NFS, it's around 35MB/sec.  I've seen various 
posts from folks having similar problems, but no resolutions.

I have ZFS configured underneath, and have tried this on three different 
high-end Dell server systems, all with the same result.  I've swapped network 
cards, tried different switches, toggled flow control and jumbo frames, and 
tried at least 10 different changes to iSCSI recommended in the various 
community forums and mailing lists.

Is iSCSI performance just broken?  Is NFS really limited to 35MB/sec, even 
without ESX in the picture?  (I've tested with a local machine mounting the 
NFS share directly.)
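
For what it's worth, the local NFS numbers come from a simple sequential-write
test along these lines (paths are placeholders -- I'm writing to /tmp here so
the commands run anywhere, but in the real test the output file sat on the
NFS mount):

```shell
# Sequential-write throughput check: write 256 MiB of zeros and let
# dd report records copied.  In the real test the output path was on
# the NFS mount rather than /tmp.
TESTFILE=/tmp/nfs-throughput-test
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=256
sync
ls -l "$TESTFILE"
```

Dividing the byte count by the elapsed time gives the MB/sec figures quoted
above.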

I've isolated that it's not my hardware, not my network, and not my clients 
(they're performing far better against several other iSCSI implementations - 
both hardware and software, as well as other NFS implementations).

I really, really, really want to use ZFS and all of its other benefits for an 
8TB array that I need to deploy in a week or so, but not if I can't get iSCSI 
or NFS performing better.  Any resources are appreciated...

-s
_______________________________________________
opensolaris-discuss mailing list
[email protected]
