On Wed, Apr 22, 2015 at 10:23:04AM +0100, Daniel P. Berrange wrote:
> On Fri, Apr 17, 2015 at 04:53:02PM +0800, Chen Fan wrote:
> > Background:
> > Live migration is one of the most important features of virtualization
> > technology, and for recent virtualization workloads the performance of
> > network I/O is critical. Current network I/O virtualization (e.g.
> > para-virtualized I/O, VMDq) has a significant performance gap compared
> > with native network I/O. Pass-through network devices achieve near-native
> > performance, but they have thus far prevented live migration, and no
> > existing method solves the problem of live migration with pass-through
> > devices perfectly.
> > 
> > One idea for solving the problem is described in:
> > https://www.kernel.org/doc/ols/2008/ols2008v2-pages-261-267.pdf
> > Please refer to the above document for detailed information.
> > 
> > So I think this problem could perhaps be solved by combining existing
> > technologies. The steps we are considering for the implementation are:
> > 
> > -  Before booting the VM, specify two NICs in the XML for creating a
> >    bonding device (one passed-through and one virtual NIC). The NICs'
> >    MAC addresses can be given in the XML, which helps qemu-guest-agent
> >    find the corresponding network interfaces in the guest.
> > 
> > -  When qemu-guest-agent starts up in the guest, it sends a notification
> >    to libvirt, which then calls the previously registered initialization
> >    callbacks. Through those callbacks we can create the bonding device
> >    according to the XML configuration; here we use the netcf tool,
> >    which makes it easy to create the bonding device.
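For illustration, the two-NIC pairing proposed above might look like the
following libvirt domain XML fragment. This is only a sketch: the MAC and
PCI addresses are made-up placeholders, and the proposal would presumably
need additional (not yet existing) schema to mark the pair as a bond.

```xml
<!-- Hypothetical sketch: one passed-through NIC and one virtio NIC with
     fixed MAC addresses, for the guest to bond together. Addresses are
     illustrative placeholders. -->
<interface type='hostdev' managed='yes'>
  <mac address='52:54:00:aa:bb:01'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x06' slot='0x10' function='0x0'/>
  </source>
</interface>
<interface type='network'>
  <mac address='52:54:00:aa:bb:02'/>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```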
> 
> I'm not really clear on why libvirt/guest agent needs to be involved in this.
> I think configuration of networking is really something that must be left to
> the guest OS admin to control. I don't think the guest agent should be trying
> to reconfigure guest networking itself, as that is inevitably going to
> conflict with configuration attempted by things in the guest like
> NetworkManager or systemd-networkd.
> 
> IOW, if you want to do this setup where the guest is given multiple NICs
> connected to the same host LAN, then I think we should just let the guest
> admin configure bonding in whatever manner they decide is best for their
> OS install.

Thinking about it some more, I'm not even convinced this should need direct
support in libvirt or QEMU at all. We already have the ability to hotplug
and unplug NICs, and the guest OS can be set up to run appropriate scripts
when a PCI hot-add/remove event occurs (e.g. via udev rules). So I think
this functionality can be done entirely within the mgmt application (oVirt
or OpenStack) and the guest OS.
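To make the udev-based approach concrete, here is a minimal sketch of what
the guest-side pieces could look like. The rule file path, script path, and
bond name are illustrative assumptions, not a tested configuration; the
handler below only echoes the ifenslave command it would run, so the logic
can be exercised without a real bond device.

```shell
# Hypothetical udev rule (e.g. /etc/udev/rules.d/99-hotplug-bond.rules),
# reacting to NIC hotplug events inside the guest:
#
#   SUBSYSTEM=="net", ACTION=="add",    RUN+="/usr/local/sbin/hotplug-bond.sh add $env{INTERFACE}"
#   SUBSYSTEM=="net", ACTION=="remove", RUN+="/usr/local/sbin/hotplug-bond.sh remove $env{INTERFACE}"

# Handler logic: map the udev action to an enslave/release command for the
# bond. Dry run: the command is echoed rather than executed.
hotplug_bond() {
    action="$1"
    iface="$2"
    bond="${3:-bond0}"
    case "$action" in
        add)    echo "ifenslave $bond $iface" ;;
        remove) echo "ifenslave -d $bond $iface" ;;
        *)      echo "unknown action: $action" >&2; return 1 ;;
    esac
}

hotplug_bond add eth1      # -> ifenslave bond0 eth1
hotplug_bond remove eth1   # -> ifenslave -d bond0 eth1
```

With something like this in place, the mgmt application only has to unplug
the pass-through NIC before migration and re-plug one afterwards; the guest
reacts on its own, with no libvirt or QEMU changes.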


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
