Jeff Anderson-Lee wrote:
Rick McNeal wrote:
On Jan 18, 2007, at 12:00 PM, Mark A. Carlson wrote:
I would imagine the argument is that software drivers
for the storage stack incur much less overhead by
cutting out the IP part - less CPU consumed, perhaps
better throughput. As far as cost, it leverages the NIC
commodity pricing curve without requiring TCP offload.
The concern about CPU consumption is really only valid for
underpowered machines. Any modern desktop has more than enough
horsepower to completely fill a 1GbE link with traffic at 4KB packet
sizes.
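(For a rough sense of scale: a saturated 1GbE link carries about
125 MB/s, which at 4KB per packet works out to roughly 30,000
packets per second.)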
I don't have OpenSolaris numbers at hand, but under Linux x86_64 on a
server-class motherboard that doesn't seem to be the case.
In a recent "echo"-style test, 4KiB UDP traffic pegged one CPU of a dual
3.6GHz Xeon EM64T yet obtained only 90MB/s for UDP and 63MB/s for TCP/IP
(with no IPsec). The network didn't saturate until we sent 16KiB packets
over UDP, and TCP never saturated it at all. Perhaps one might do better
with a TCP offload engine under Solaris, but... that's a lot of CPU power
devoted just to flinging the bits.
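For context, a minimal sketch of the kind of UDP sender used in such an
echo-style test might look like the following; the peer address, port,
and packet count are placeholders, not details from the actual test:

/* Minimal sketch of a one-way 4KiB UDP throughput test, assuming a
 * peer is already listening (and echoing or discarding) on PORT.
 * Address, port, and packet count below are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

#define PAYLOAD 4096            /* 4KiB datagrams, as in the test above */
#define COUNT   250000L         /* ~1 GB of payload in total */

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(9000);                        /* placeholder port */
    inet_pton(AF_INET, "192.168.1.2", &peer.sin_addr);  /* placeholder peer */

    char buf[PAYLOAD];
    memset(buf, 0xa5, sizeof(buf));

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (long i = 0; i < COUNT; i++) {
        if (sendto(fd, buf, sizeof(buf), 0,
                   (struct sockaddr *)&peer, sizeof(peer)) != PAYLOAD) {
            perror("sendto");
            break;
        }
    }
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    double mb   = (double)COUNT * PAYLOAD / (1024.0 * 1024.0);
    printf("%.1f MB in %.2f s = %.1f MB/s\n", mb, secs, mb / secs);

    close(fd);
    return 0;
}

Watching CPU utilisation (e.g. with mpstat) on both ends while something
like this runs is what gives the per-packet cost picture described above.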
Giving Linux network performance numbers and assuming things about
OpenSolaris network performance isn't actually helpful; the two have
very different network stack implementations.
It also isn't useful information until you give full details of the
hardware, especially the NIC being used and the switches involved
(unless this was back-to-back).
--
Darren J Moffat
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss