Hi all

I am participating in a project that is trying to port vhost_net to Xen.

By changing the memory copy and notification mechanisms, virtio-net with vhost_net can now run on Xen with good performance. TCP receive throughput of a single vnic went from 2.77 Gbps up to 6 Gbps. On the VM receive side, I replaced grant_copy with grant_map + memcpy, which effectively reduces the cost of the grant_table spin_lock in dom0, so whole-server TCP performance went from 5.33 Gbps up to 9.5 Gbps.
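
To make the receive-side change concrete, here is a rough sketch of the idea (not the actual patch): instead of asking Xen for a GNTTABOP_copy, the backend maps the guest's grant into dom0 with the standard gnttab_* helpers and does a plain memcpy. The function name and the pre-allocated ballooned page ("page"/"vaddr") are assumptions of this sketch; error handling is minimal.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <xen/grant_table.h>

static int copy_from_guest_grant(domid_t otherend, grant_ref_t ref,
                                 struct page *page, void *vaddr,
                                 void *dst, size_t len)
{
        struct gnttab_map_grant_ref map;
        struct gnttab_unmap_grant_ref unmap;
        int err;

        /* Map the granted guest page read-only at vaddr in dom0. */
        gnttab_set_map_op(&map, (unsigned long)vaddr,
                          GNTMAP_host_map | GNTMAP_readonly, ref, otherend);
        err = gnttab_map_refs(&map, NULL, &page, 1);
        if (err)
                return err;
        if (map.status != GNTST_okay)
                return -EFAULT;

        /* Plain memcpy in dom0 instead of the grant_copy hypercall path. */
        memcpy(dst, vaddr, min_t(size_t, len, PAGE_SIZE));

        gnttab_set_unmap_op(&unmap, (unsigned long)vaddr,
                            GNTMAP_host_map, map.handle);
        return gnttab_unmap_refs(&unmap, NULL, &page, 1);
}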

Now I am considering live migration of vhost_net on Xen. vhost_net uses vhost_log for live migration on KVM, but qemu on Xen does not manage the whole memory of the VM, so I am trying to fall the datapath back from vhost_net to qemu when doing live migration, and switch the datapath from qemu back to vhost_net after the VM has migrated to the new server.
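
The fallback itself can be done with the existing vhost interface: detaching the backend (fd = -1) via VHOST_NET_SET_BACKEND stops vhost processing the virtqueue so qemu's emulated datapath takes over, and re-attaching the tap fd hands it back after migration. A minimal userspace sketch, assuming an open /dev/vhost-net fd and the tap fd that currently backs the queue (the helper names are mine):

#include <sys/ioctl.h>
#include <linux/vhost.h>

static int vhost_detach_queue(int vhost_fd, unsigned int queue_index)
{
        struct vhost_vring_file file = {
                .index = queue_index,
                .fd = -1,               /* -1: unbind, qemu handles the ring */
        };
        return ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &file);
}

static int vhost_attach_queue(int vhost_fd, unsigned int queue_index, int tap_fd)
{
        struct vhost_vring_file file = {
                .index = queue_index,
                .fd = tap_fd,           /* hand the ring back to vhost_net */
        };
        return ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &file);
}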

My question is:
why doesn't vhost_net do the same fallback operation for live migration on KVM, instead of using vhost_log to mark dirty pages? Is there any flaw in the idea of falling the datapath back from vhost_net to qemu for live migration?
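
(By vhost_log I mean the usual dirty-page logging: for every guest address the device writes, set the corresponding bit in a bitmap shared with userspace so migration knows which pages to resend. A purely illustrative sketch of that idea, not the actual vhost code:)

#include <stdint.h>

#define LOG_PAGE_SHIFT 12       /* log granularity of one 4 KiB page */

static inline void log_mark_dirty(uint64_t *log, uint64_t gpa)
{
        uint64_t page = gpa >> LOG_PAGE_SHIFT;

        /* One bit per guest page; the real code does this atomically. */
        log[page / 64] |= 1ULL << (page % 64);
}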

Any questions about the details of vhost_net on Xen are welcome.

Thanks

