So tonight, we finally took the plunge and upgraded our zfs/kvm server to r151012 ... the results were terrible. The KVM guests booted very slowly and all networking felt really slow ... so I did a little test:
ubuntu-14.04-guest$ dd if=/dev/zero bs=1M count=20 | ssh omnios-r151012-host dd of=/dev/null
20971520 bytes (21 MB) copied, 6.27333 s, 3.3 MB/s

ubuntu-14.04-guest$ ssh omnios-r151012-host dd if=/dev/zero bs=1M count=20 | dd of=/dev/null
20971520 bytes transferred in 8.010208 secs (2618099 bytes/sec)

These numbers were obtained using the virtio net drivers, but switching to e1000 did not significantly change things.

So we booted back into r151010 ... the difference was immediately apparent, and there are also numbers to back this up:

ubuntu-14.04-guest$ dd if=/dev/zero bs=1M count=20 | ssh omnios-r151010-host dd of=/dev/null
20971520 bytes (21 MB) copied, 0.812479 s, 25.8 MB/s

ubuntu-14.04-guest$ ssh omnios-r151010-host dd if=/dev/zero bs=1M count=20 | dd of=/dev/null
20971520 bytes (21 MB) copied, 0.545423 s, 38.5 MB/s

As you can see, the difference in guest network performance is roughly an order of magnitude ... I have not tested disk performance explicitly, but even booting a Windows guest took ages ... so I suspect whatever is causing this affects all KVM guest I/O.

cheers
tobi

--
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15
CH-4600 Olten, Switzerland
www.oetiker.ch t...@oetiker.ch +41 62 775 9902

_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
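PS: for anyone who wants to repeat the dd-over-ssh measurement from the guest, here is a minimal sketch. HOST is a hypothetical placeholder (set it to your OmniOS box); when left empty the script just times a local pipe, which is only useful as a sanity check of the harness itself. It assumes GNU date with %N support, as found on the Ubuntu guest.

```shell
# Hypothetical HOST placeholder: set to the OmniOS host to test over ssh;
# empty means time a plain local pipe instead (harness sanity check only).
HOST=""
SINK="${HOST:+ssh $HOST }dd of=/dev/null"

start=$(date +%s.%N)   # GNU date, nanosecond resolution
dd if=/dev/zero bs=1M count=20 2>/dev/null | sh -c "$SINK" 2>/dev/null
end=$(date +%s.%N)

# 20 MB transferred; compute throughput from the wall-clock times.
result=$(awk -v s="$start" -v e="$end" 'BEGIN { printf "%.1f MB/s", 20 / (e - s) }')
echo "$result"
```

Run it once per host kernel (r151010 vs r151012) in each direction and compare the two figures; with a swapped `dd if=... | ssh ...` pipeline you get the host-to-guest number as well.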