Hi Simon,

We initially started with a fully autonomous execution model, as in the other drivers. However, it didn't work because the networking operations are tightly coupled with the VM operations. For example, in the case of a live migration you have to:

1.- Pre-start the network on host B
2.- Live-migrate the VM
3.- Clean up the network on host A
4.- Post-start the network on host B
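To make the ordering concrete, here is a minimal sketch of that orchestration. The `vnm_action` helper, the host names, and the plain-echo "migration" are all illustrative assumptions (a real driver would run the VNM action scripts, e.g. pre/clean/post, on the remote hosts); only the sequence of calls reflects the steps above, with the driver name taken from the VM's NET_DRV attribute:

```shell
#!/bin/sh
# Sketch only: vnm_action is a hypothetical helper standing in for the
# real invocation of the VNM driver's action scripts on a remote host.

VNM_DRIVER="dummy"   # in practice this comes from the VM template's NET_DRV
SRC_HOST="hostA"
DST_HOST="hostB"

vnm_action() {
    # In a real deployment this would execute the driver's action script
    # (pre, clean, or post) on the given host; here we just log the call.
    host="$1"; action="$2"
    echo "[$host] vnm/$VNM_DRIVER/$action"
}

# The live-migration orchestration described above:
vnm_action "$DST_HOST" pre                          # 1.- pre-start network on B
echo "live-migrating VM: $SRC_HOST -> $DST_HOST"    # 2.- live-migrate the VM
vnm_action "$SRC_HOST" clean                        # 3.- clean up network on A
vnm_action "$DST_HOST" post                         # 4.- post-start network on B
```

This is exactly the kind of interleaving that is awkward to drive from the core, which is why the VMM wrapper owns it.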
So it is much simpler to keep this orchestration at the VMM level and call the VNM from there. If you are using the VMM wrapper, the networking actions will be called for you. If you plan to talk to the OpenNebula core directly, you need to make the networking calls yourself in your new driver.

Hope this helps

Cheers
Ruben

On Fri, Nov 23, 2012 at 4:04 AM, Simon Boulet <[email protected]> wrote:
> Hi,
>
> I'm working on a custom set of drivers for OpenNebula. From my research, I
> understand that the OpenNebula VMM wrapper (one_vm_exec or one_vm_ssh) is
> responsible for doing the appropriate calls to the VNM (and to the VMM)
> when doing VM actions. Is there any particular reason why the VNM isn't
> spawned and called directly by the OpenNebula core, like the other drivers
> are (VMM, IM, Datastore, etc.).
>
> Am I right to assume that my custom VMM would need to spawn the VNM driver
> as per the VM template <NET_DRV> attribute?
>
> Thanks
>
> Simon
>
> _______________________________________________
> Users mailing list
> [email protected]
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>

--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | [email protected] | @OpenNebula
