Ok, I've just created a new OpenSolaris virtual machine on our ESX server and done some testing with that. It appears it's the network card in our main server that's the bottleneck. When I do a send/receive from the virtual machine to our backup server I'm getting an average of 25MB/s, with peaks as high as 50MB/s.
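For anyone wanting to reproduce that test, it's just the standard snapshot-and-pipe pattern. The pool, dataset, snapshot, and host names below are placeholders, not the actual ones from this setup:

```shell
# Sketch of the send/receive test (all names are placeholders).
# Snapshot the dataset, then stream the snapshot to the backup box.
zfs snapshot tank/data@xfer1
zfs send tank/data@xfer1 | ssh backup-host zfs receive -F backuppool/data

# Follow-up runs only need the delta between two snapshots:
zfs snapshot tank/data@xfer2
zfs send -i tank/data@xfer1 tank/data@xfer2 | \
    ssh backup-host zfs receive backuppool/data
```

Watching `zpool iostat -v 5` on both ends while this runs is what shows whether the disks or the wire are the limit.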
And it looks to me like that transfer is being limited by the disks at both ends. On the sender, zpool iostat shows I'm hitting 150 IOPS on each disk, which is probably about the limit for a 7200 rpm SATA disk. The receiving end is a single virtual disk, stored on a single SATA drive on the ESX server; it's writing at 400 IOPS and 30MB/s, which is also likely to be pretty close to its physical limit.

Which unfortunately means it's the network card on our main server that's causing the problem. But that makes it even more confusing: that server is an NFS server for VMware, and we see speeds of 90MB/s reading and 50MB/s writing for synchronous NFS from ESX. For some reason, though, a zfs send from it is limited to 10MB/s, and if I try to run two sends in parallel, they each drop to 5MB/s.

There shouldn't be any significant network activity on this server, though. Is there any way I can check that?

-- This message posted from opensolaris.org
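To answer the last question: OpenSolaris can report per-datalink traffic directly, which would show whether something else is using the NIC during a send. The interface name `e1000g0` below is a placeholder; substitute whatever `dladm show-link` reports on the box:

```shell
# List the datalinks on the machine to find the interface name.
dladm show-link

# Per-link receive/transmit statistics, refreshed every 5 seconds
# (replace e1000g0 with the actual interface from the listing above).
dladm show-link -s -i 5 e1000g0

# Coarser per-interface packet counts at 5-second intervals.
netstat -i 5
```

If Brendan Gregg's `nicstat` tool is installed, it also reports link utilisation as a percentage, which makes a saturated NIC easy to spot at a glance.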