Hi Mathieu,

The current train of thought is to have Neutron notify Nova via a callback when ports are ready. This model should scale better, since nova-compute will no longer need to poll Neutron for the port status. Dan Smith already has a patch out that adds an API to Nova for receiving external events: https://review.openstack.org/#/c/74565/ which Neutron can use for this.
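The callback model could be sketched roughly as follows. This is illustrative only: the client class, the send_external_event() method, and the event payload are stand-ins I'm assuming for this sketch, not the actual API from Dan's patch.

```python
# Sketch of the push model: when Neutron sees a port go ACTIVE, it
# notifies Nova instead of nova-compute polling. StubNovaClient and
# send_external_event() are hypothetical stand-ins for the real
# Nova external-events API.

class StubNovaClient:
    """Hypothetical Nova client that records pushed external events."""
    def __init__(self):
        self.events = []

    def send_external_event(self, event):
        self.events.append(event)


def notify_nova_port_active(nova, port):
    """Push a 'port is wired' event to Nova for the owning instance."""
    event = {
        "name": "network-vif-plugged",     # event name is an assumption
        "server_uuid": port["device_id"],  # instance that owns the port
        "tag": port["id"],                 # identifies which VIF
        "status": "completed",
    }
    nova.send_external_event(event)
    return event


nova = StubNovaClient()
evt = notify_nova_port_active(nova, {"id": "port-1", "device_id": "vm-1"})
```

The point of the design is that Neutron holds the ground truth about port wiring, so pushing an event at the moment the status changes removes both the polling load and the race window.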
I abandoned that patch as it takes the approach of polling Neutron from nova-compute, which we don't want.

Aaron

On Wed, Feb 19, 2014 at 12:58 AM, Mathieu Rohon <mathieu.ro...@gmail.com> wrote:
> Hi Aaron,
>
> You seem to have abandoned this patch:
> https://review.openstack.org/#/c/74218/
>
> You want Neutron to update the port in Nova; can you please tell us how
> you want to do that?
>
> I think we should use such a mechanism for live migration.
> Live migration should occur once the port is set up on the destination
> host. This could potentially resolve this bug:
>
> https://bugs.launchpad.net/neutron/+bug/1274160
>
> Best,
>
> Mathieu
>
> On Tue, Feb 18, 2014 at 2:55 AM, Aaron Rosen <aaronoro...@gmail.com> wrote:
> > Hi Maru,
> >
> > Thanks for getting this thread started. I've filed the following blueprint
> > for this:
> >
> > https://blueprints.launchpad.net/nova/+spec/check-neutron-port-status
> >
> > and have a prototype of it working here:
> >
> > https://review.openstack.org/#/c/74197/
> > https://review.openstack.org/#/c/74218/
> >
> > One part that threw me for a little while getting this working is that
> > when using OVS and the new libvirt VIF driver (LibvirtGenericVifDriver),
> > Nova no longer calls ovs-vsctl to set external_ids:iface-id; libvirt
> > does that automatically. Unfortunately, this data only seems to make it
> > to ovsdb when the instance is powered on. Because of this I needed to
> > add back those calls, as Neutron needs this data to be set in ovsdb
> > before it can start wiring the ports.
> >
> > I'm hoping this change will help out with
> > https://bugs.launchpad.net/neutron/+bug/1253896 but we'll see. I'm not
> > sure if it's too late to merge this in Icehouse, but it might be worth
> > considering if we find that it helps reduce gate failures.
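For reference, the ovs-vsctl call Aaron mentions records the Neutron port UUID on the tap interface via external_ids:iface-id. A minimal sketch that just builds the command line (the device name and port ID below are placeholders, and the helper itself is an illustration, not Nova's actual VIF driver code):

```python
def ovs_set_iface_id_cmd(dev_name, port_id):
    """Build the ovs-vsctl command that records the Neutron port UUID
    in ovsdb (external_ids:iface-id), so the Neutron agent can start
    wiring the port before the VM is powered on."""
    return [
        "ovs-vsctl", "--timeout=120", "set", "Interface", dev_name,
        "external_ids:iface-id=%s" % port_id,
    ]


# Placeholder device/port names for illustration.
cmd = ovs_set_iface_id_cmd("tap1234", "port-uuid-1")
```

The Neutron OVS agent watches ovsdb for interfaces carrying an iface-id; if the ID only lands in ovsdb at power-on, the agent cannot wire the port early, which is exactly the ordering problem described above.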
> >
> > Best,
> >
> > Aaron
> >
> >
> > On Thu, Feb 13, 2014 at 3:31 AM, Mathieu Rohon <mathieu.ro...@gmail.com> wrote:
> >> +1 for this feature, which could potentially resolve a race condition
> >> that could occur after the port-binding refactoring in ML2 [1].
> >> In ML2, the port could be ACTIVE once an MD has bound the port. The
> >> vif_type could then be known by Nova, and Nova could create the
> >> network correctly thanks to vif_type and vif_details (with
> >> vif_security embedded [2]).
> >>
> >> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-February/026750.html
> >> [2] https://review.openstack.org/#/c/72452/
> >>
> >> On Thu, Feb 13, 2014 at 3:13 AM, Maru Newby <ma...@redhat.com> wrote:
> >> > Booting a Nova instance when Neutron is enabled is often unreliable
> >> > due to the lack of coordination between Nova and Neutron apart from
> >> > port allocation. Aaron Rosen and I have been talking about fixing
> >> > this by having Nova perform a check for port 'liveness' after VIF
> >> > plug and before VM boot. The idea is to have Nova fail the instance
> >> > if its ports are not seen to be 'live' within a reasonable timeframe
> >> > after plug. Our initial thought is that the compute node would call
> >> > Nova's networking subsystem, which could query Neutron for the
> >> > status of the instance's ports.
> >> >
> >> > The open question is whether the port 'status' field can be relied
> >> > upon to become ACTIVE for all the plugins currently in the tree. If
> >> > this is not the case, please reply to this thread with an indication
> >> > of how one would be able to tell the 'liveness' of a port managed by
> >> > the plugin you maintain.
> >> >
> >> > In the event that one or more plugins cannot reliably indicate port
> >> > liveness, we'll need to ensure that the port liveness check can be
> >> > optionally disabled so that the existing behavior of racing VM boot
> >> > is maintained for plugins that need it.
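The liveness check Maru outlines (fail the boot if the ports don't go ACTIVE within a reasonable timeframe) could look roughly like this. The get_port_status callable is an assumed query helper standing in for a Neutron port lookup; this is a sketch of the idea, not the code from the reviews above:

```python
import time


def wait_for_ports_active(get_port_status, port_ids, timeout=300, interval=2):
    """Block until every port reports ACTIVE, or raise after `timeout`
    seconds so Nova can fail the instance instead of booting it with
    dead networking. get_port_status(port_id) -> str is assumed to
    query Neutron for the port's 'status' field."""
    deadline = time.time() + timeout
    pending = set(port_ids)
    while True:
        # Drop every port that has transitioned to ACTIVE.
        pending = {p for p in pending if get_port_status(p) != "ACTIVE"}
        if not pending:
            return True
        if time.time() >= deadline:
            raise RuntimeError(
                "ports never became ACTIVE: %s" % sorted(pending))
        time.sleep(interval)


# Illustrative use with canned statuses instead of a Neutron query.
statuses = {"a": "ACTIVE", "b": "ACTIVE"}
ok = wait_for_ports_active(lambda p: statuses[p], ["a", "b"],
                           timeout=1, interval=0)
```

Making the timeout (or the whole check) configurable covers the last paragraph above: plugins that cannot report liveness would disable the check and keep the current racing behavior.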
> >> >
> >> > Thanks in advance,
> >> >
> >> >
> >> > Maru
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev