I have at least partially explained this: it's VMware's emulated Intel e1000 NIC. VMware also offers a paravirtualized NIC, vmxnet3, but it presently doesn't work under OpenSolaris, which is why I'm using the emulated one. Vmxnet3 does work under Solaris, where I tested it alongside e1000, and there is a substantial difference in throughput between them.
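For anyone who wants to compare the two NICs, the model is selected in the guest's .vmx file; the device name ethernet0 below is just an example and depends on the VM's configuration:

    # In the VM's .vmx file, with the guest powered off:
    ethernet0.virtualDev = "e1000"      # emulated Intel NIC (what I'm using)
    #ethernet0.virtualDev = "vmxnet3"   # paravirtualized NIC; much faster under
    #                                   # Solaris, but no working OpenSolaris
    #                                   # driver at present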

I am seeing slow network file I/O through both the built-in NFS server and Samba on an OpenSolaris snv_111 server running under the VMware hypervisor.

The NIC is e1000 and is set to full duplex with an MTU of 1500 across the board on a local subnet.
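The link settings can be double-checked on the OpenSolaris side with dladm (interface names and exact output vary by build):

    dladm show-phys    # link state, speed, and duplex per physical NIC
    dladm show-link    # MTU per link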

With ttcp on an independent Linux client, I see about 76 MB/second, which is already slow. (Between the Linux client and a Mac, I see about 114 MB/second.)
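A ttcp run of the sort I'm describing looks roughly like this (the buffer count and size shown are ttcp's defaults, not necessarily the exact flags I used):

    # On the receiving side:
    ttcp -r -s
    # On the transmitting side, sending 2048 buffers of 8 KB each:
    ttcp -t -s -l 8192 -n 2048 receiver-hostname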

The storage setup is a ZFS pool consisting of 8 SATA drives in a raidz vdev and 12 in a raidz2 vdev. Locally, with dd if=/dev/zero of=zerofile bs=1000M count=1, the write rate is about 184 MB/second.
With the Linux samba client, it's about 39 MB/second.
With the Linux NFS client, it's about 43 MB/second.
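For context, the pool layout is of this general shape (the pool and device names below are made up for illustration):

    # Illustrative only -- the real pool uses different names:
    zpool create tank \
        raidz  c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0  c2t5d0 \
               c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0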

Duplicating a 1100 MB file on the pool locally runs at about 656 MB/second.
With the Linux samba client, it's about 40 MB/second.
With the Linux NFS client, it's about 44 MB/second.
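If anyone wants to reproduce the copy numbers, the tests were essentially of this form (file and mount-point names are examples):

    # Locally on the server (ptime is Solaris's process timer):
    ptime cp bigfile bigfile.copy
    # From the Linux client, with the share mounted at /mnt/pool:
    time cp /mnt/pool/bigfile /mnt/pool/bigfile.copy
    # Rate = 1100 MB divided by the elapsed ("real") time.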

What I'm really interested in is having a Windows Server 2008 guest on the same hypervisor do the communicating, and that is about half the speed of the Linux client!

other keywords: poor horrible awful terrible performance

--

Maurice Volaski, [email protected]
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University