Looking at the log from the server ([1], the same one you provided
in the first comment and in comment #3), and specifically lines 19 and
21, it's clear that sync_routers() is triggering
auto_schedule_routers(). Before the change in [2], the call from
sync_routers() to auto_schedule_routers() was made at line 96 of
neutron/api/rpc/handlers/l3_rpc.py, as can be seen in the log:

2016-10-09 17:03:52.366 144166 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/l3_rpc.py", line 96, 
in sync_routers
2016-10-09 17:03:52.366 144166 ERROR oslo_messaging.rpc.dispatcher     
self.l3plugin.auto_schedule_routers(context, host, router_ids)

In [2], it's evident that line 96 itself was removed. Thus, this
can't be reproduced on master or on stable/mitaka, and there is no
(upstream) bug to fix.
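For clarity, here is a minimal sketch of the pre-change flow the traceback points at. Only the call `self.l3plugin.auto_schedule_routers(context, host, router_ids)` comes from log [1]; the class and stub-plugin names below are illustrative, not neutron's actual code.

```python
# Sketch of the pre-[2] behavior in l3_rpc.py: sync_routers() itself
# triggered scheduling via auto_schedule_routers(). The stub plugin
# only records the call so the side effect is visible.

class StubL3Plugin:
    def __init__(self):
        self.scheduled = []

    def get_sync_data(self, context, router_ids):
        # Stand-in for the real router data fetch.
        return [{"id": rid} for rid in router_ids or []]

    def auto_schedule_routers(self, context, host, router_ids):
        # Record the scheduling call triggered from sync_routers().
        self.scheduled.append((host, tuple(router_ids or ())))


class L3RpcCallbackSketch:
    def __init__(self, l3plugin):
        self.l3plugin = l3plugin

    def sync_routers(self, context, host, router_ids=None):
        # Pre-[2]: syncing also auto-scheduled the routers, which is
        # exactly what lines 19 and 21 of log [1] show.
        self.l3plugin.auto_schedule_routers(context, host, router_ids)
        return self.l3plugin.get_sync_data(context, router_ids)


plugin = StubL3Plugin()
cb = L3RpcCallbackSketch(plugin)
routers = cb.sync_routers(context=None, host="agent-1", router_ids=["r1"])
```

After [2] removed that line, sync_routers() no longer schedules anything, which is why the race can't be hit on master.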

[1]: http://paste.openstack.org/show/585669/

** Changed in: neutron
       Status: New => Invalid

You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

  Partial HA network causing HA router creation failed (race condition)

Status in neutron:

Bug description:
  ENV: stable/mitaka,VXLAN
  Neutron API: two neutron-servers behind a HA proxy VIP.

  Exception log:
  [1] http://paste.openstack.org/show/585669/
  [2] http://paste.openstack.org/show/585670/

  Log [1] shows that the subnet of the HA network is deleted
  concurrently while a new HA router create API request arrives. It
  seems the race condition described in
  https://bugs.launchpad.net/neutron/+bug/1533440 still exists; that
  bug's description lists:

  Some known exceptions:
  2. IpAddressGenerationFailure: (HA port creation failed due to
     concurrent HA subnet deletion)
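  The race can be illustrated with a small, deterministic sketch (no real
  threads; the interleaving is replayed step by step). All names here are
  illustrative, not neutron's actual API:

  ```python
  # Check-then-act race from the bug: router create sees the HA subnet,
  # a concurrent delete removes it, then IP allocation fails -- the
  # IpAddressGenerationFailure pattern quoted above.

  class IpAddressGenerationFailureSketch(Exception):
      pass

  subnets = {"ha-subnet": {"cidr": "169.254.192.0/18"}}

  def ha_subnet_exists(subnet_id):
      return subnet_id in subnets

  def delete_subnet(subnet_id):
      subnets.pop(subnet_id, None)

  def allocate_ha_port_ip(subnet_id):
      if subnet_id not in subnets:
          raise IpAddressGenerationFailureSketch(
              "no usable subnet to allocate an HA port IP from")
      return "169.254.192.10"

  # 1. Create path: the existence check passes.
  assert ha_subnet_exists("ha-subnet")
  # 2. Concurrent delete (from the quickly-deleted router) wins.
  delete_subnet("ha-subnet")
  # 3. The create path's port allocation now fails.
  try:
      allocate_ha_port_ip("ha-subnet")
  except IpAddressGenerationFailureSketch:
      print("race reproduced: HA subnet gone before port allocation")
  ```

  With two neutron-servers behind a HA proxy VIP, the delete and the
  create can land on different servers, which is what makes this
  interleaving realistic.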

  Log [2] shows a very strange behavior: those 3 API calls share the
  same request-id [req-780b1f6e-2b3c-4303-a1de-a5fb4c7ea31e].

  Test scenario:
  Just create one HA router for a tenant, and then quickly delete it.

  For now, our mitaka environment uses VXLAN as the tenant network
  type, so there is a very large range of VNIs and no need to reclaim
  them. As a temporary local workaround, we added a new config option
  to decide whether to delete the HA network every time.
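  A sketch of what such an option might look like in neutron.conf; the
  option name is hypothetical, since the comment above does not give it:

  ```ini
  [DEFAULT]
  # Hypothetical option name, for illustration only.
  # When false, keep the tenant's HA network (and its VNI) around
  # instead of deleting it when the last HA router is removed,
  # sidestepping the delete/create race.
  delete_ha_network_on_last_router_removal = false
  ```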

To manage notifications about this bug go to:

Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
