You've got a performance problem. Let's see if we can determine a
few things first.
I've got to ask the obvious question. You are running with 1GbE
NICs on both machines, correct? What about any switches in between
the machines? Are they also 1GbE switches?
The same machine is serving as both iSCSI target and client. The
interface used for the connection is a GbE NIC on a GbE network,
although since everything stays on the one host, the traffic is never
actually handed to the interface (the counters don't increase). If
this works to our satisfaction then yes, we will move the target to
another storage box.
Assuming that your network is indeed healthy and capable of pushing
100+ MB/s, the next thing is to define what poor performance means.
iSCSI will be slower than direct-attached storage; depending on the
workload, network latency can be an issue.
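If it has not been checked yet, a rough sketch like the following
(plain TCP sockets, nothing iSCSI-specific; the host name, port, and
transfer size are just placeholders) can confirm whether the link
itself can move 100+ MB/s:

# Rough TCP throughput check between the two boxes, independent of iSCSI.
# Run it with "recv" on one host and with no argument on the other;
# host, port, and sizes below are placeholders for your setup.
import socket, sys, time

HOST, PORT = "target-host", 5001    # placeholders
SIZE = 512 * 1024 * 1024            # 512 MB test transfer
BLOCK = 64 * 1024

def receive():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(BLOCK)
        if not data:
            break
        total += len(data)
    secs = time.time() - start
    print("received %.1f MB at %.1f MB/s" % (total / 1e6, total / 1e6 / secs))

def send():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    buf = b"\0" * BLOCK
    sent, start = 0, time.time()
    while sent < SIZE:
        s.sendall(buf)
        sent += len(buf)
    secs = time.time() - start
    s.close()
    print("sent %.1f MB at %.1f MB/s" % (sent / 1e6, sent / 1e6 / secs))

if __name__ == "__main__":
    receive() if "recv" in sys.argv else send()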
Agreed. Coming from "the networking side" I've seen many things
completely killed by latency. For instance, I'm very curious to see
what performance difference there will be between an MTU of 1500 and
jumbo frames of 9000 as a function of the number of simultaneous
sessions to the iSCSI target ;-) but that's a whole different topic.
I suspect we're hitting a case where data is only going to a single
spindle, and that's limited to around 30 MB/s. Another issue is that
I believe ZFS enables the write cache on devices and then performs a
SYNC_CACHE as it deems necessary; I could be wrong. Right now, the
iSCSI target issues an fsync() call after every write. I need to
implement the MODE_SELECT Cache Page control.
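As a rough illustration of why that matters, a sketch along these
lines (ordinary file I/O, not the target code itself; the path and
sizes are arbitrary placeholders) shows the gap between fsync()ing
after every write and syncing once at the end:

# Compare fsync() after every write versus a single sync at the end.
# Plain file I/O, not the iSCSI target code; path and sizes are placeholders.
import os, time

PATH = "/tank/fsync-test.dat"   # placeholder path on the pool under test
BLOCK = b"\0" * 8192
COUNT = 2000

def run(sync_every_write):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    start = time.time()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
        if sync_every_write:
            os.fsync(fd)
    os.fsync(fd)   # both runs end fully synced, so the comparison is fair
    os.close(fd)
    secs = time.time() - start
    mb = COUNT * len(BLOCK) / 1e6
    print("fsync per write=%-5s  %.1f MB in %.2fs (%.1f MB/s)" %
          (sync_every_write, mb, secs, mb / secs))

run(False)
run(True)
os.unlink(PATH)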
If you can be specific about the performance difference, that would
be most helpful.
I can do some tests; what would be a good way to do that besides
'time'?
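For example, would a simple sequential pass like this give useful
numbers, or is there a better tool? (The path and sizes below are
just placeholders, and the read pass may be served largely from
cache.)

# Sequential write and read throughput at a few block sizes, reporting MB/s.
# Path and sizes are placeholders; run it against a file on the pool under test.
import os, time

PATH = "/tank/throughput-test.dat"   # placeholder
TOTAL = 256 * 1024 * 1024            # 256 MB per pass

def one_pass(block_size, writing):
    if writing:
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    else:
        fd = os.open(PATH, os.O_RDONLY)
    buf = b"\0" * block_size
    done, start = 0, time.time()
    while done < TOTAL:
        if writing:
            os.write(fd, buf)
            done += block_size
        else:
            data = os.read(fd, block_size)
            if not data:
                break
            done += len(data)
    if writing:
        os.fsync(fd)
    secs = time.time() - start
    os.close(fd)
    print("%-5s bs=%-8d %6.1f MB/s" %
          ("write" if writing else "read", block_size, done / 1e6 / secs))

for bs in (8192, 65536, 1048576):
    one_pass(bs, writing=True)
    one_pass(bs, writing=False)
os.unlink(PATH)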
Regards,
Rutger