I've temporarily got VMware Server running on my new "server", and intend to 
migrate over to KVM as soon as possible, provided it offers enough incentive 
(extra performance, features). Currently I'm waiting for full IOMMU support in 
the kernel, modules, and userspace. I hadn't planned to migrate until I had 
hardware that can do IOMMU; KVM fully supported IOMMU + DMA for "passed-through" 
devices; more than one device could be passed through per guest (I've seen 
hints that the Intel IOMMU implementation can only do one device per guest? 
Please tell me I'm wrong, it seems like an odd design choice to make); and 
full migration worked.

But if plain old KVM + virtio gives me a big enough performance gain over 
VMware Server 2, I'd happily migrate.

I saw a message late last year comparing the two, but I know how quickly 
things change in the OSS world. For what it's worth, I intend to use "raw" 
devices (possibly AoE) for guest disks (not qcow or anything like it), and 
virtio for networking.

So has anyone tested the two lately? Got any experiences you'd like to share?

-- 
Thomas Fjellstrom
[email protected]
