Hi,

As you might have noticed, there has been some progress on parallel tests
for neutron.
In a nutshell:
* Armando fixed the issue with IP address exhaustion on the public network
[1]
* Salvatore now has a patch with a 50% success rate (the last failures are
because of me playing with it) [2]
* Salvatore is looking at putting back on track full isolation [3]
* All the bugs affecting parallel tests can be queried here [10]
* This blueprint tracks progress made towards enabling parallel testing [11]

---------
The long story is as follows:
Parallel testing is basically not working because parallelism means higher
contention for public IP addresses. This was made worse by the fact that
some tests created a router with a gateway set but never deleted it. As a
result, there were even fewer addresses left in the public range.
[1] has already been merged, and with [4] we shall make the public network
for neutron a /24 (the full tempest suite is still showing a lot of IP
exhaustion errors).
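
Just as a back-of-the-envelope check (the /28 below is only an assumption
for illustration, not a figure taken from the patches), moving to a /24
takes us from a handful of usable addresses to a couple of hundred:

    import ipaddress

    # Usable host addresses, excluding the network and broadcast addresses.
    # The concrete prefixes are purely illustrative.
    old = ipaddress.ip_network('172.24.4.0/28').num_addresses - 2  # 14
    new = ipaddress.ip_network('172.24.4.0/24').num_addresses - 2  # 254
    print(old, new)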

However, this was just one part of the issue. The biggest part actually lay
with the OVS agent and its interactions with the ML2 plugin. A few patches
([5], [6], [7]) have already been pushed to reduce the number of
notifications sent from the plugin to the agent. However, the agent is
organised in such a way that a notification is acted upon immediately, thus
preempting the main agent loop, which is the one responsible for wiring
ports into networks. Given the high volume of notifications currently sent
from the server, this becomes particularly wasteful if one considers that
security group membership updates for ports trigger global
iptables-save/restore commands, which are often executed in rapid
succession, thus resulting in long delays before VIFs are wired to the
appropriate network.
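
To make the general idea concrete, here is a minimal sketch (this is not
the code in [2]; class and method names are made up) of deferring
notifications so the main loop can handle them in batches, rather than
running an iptables-save/restore for every single event:

    import collections
    import threading

    class DeferredNotificationAgent(object):
        """Illustrative only: queue events instead of acting on them."""

        def __init__(self):
            self._pending = collections.deque()
            self._lock = threading.Lock()

        # RPC handlers just record the event and return immediately.
        def security_groups_member_updated(self, context, **kwargs):
            with self._lock:
                self._pending.append(('sg_member', kwargs))

        def port_update(self, context, **kwargs):
            with self._lock:
                self._pending.append(('port_update', kwargs))

        # The main loop drains whatever accumulated since the last
        # iteration and applies firewall changes once per batch.
        def rpc_loop_iteration(self):
            with self._lock:
                events = list(self._pending)
                self._pending.clear()
            if any(kind == 'sg_member' for kind, _ in events):
                self._refresh_firewall()  # single iptables-save/restore
            self._wire_ports(events)

        def _refresh_firewall(self):
            pass  # placeholder for the actual firewall refresh

        def _wire_ports(self, events):
            pass  # placeholder for the actual port wiring
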
With patch [2] we are refactoring the agent to make it more efficient. This
is not production code, but once we get close to a 100% pass rate for
parallel testing, this patch will be split into several smaller patches,
properly structured and hopefully easy to review.

It is worth noting there is still work to do: in some cases the loop still
takes too long, and OVS commands have been observed taking as long as 10
seconds to complete. To this end, it is worth considering the use of the
async processes introduced in [8], as well as leveraging ovsdb monitoring
[9] to limit queries to the OVS database.
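
As an illustration of the ovsdb monitoring idea (the exact ovsdb-client
invocation and column list below are just an example, not necessarily what
[9] proposes): instead of polling the OVS database on every loop iteration,
the agent could keep a monitor process open and only rescan when a change
is actually reported:

    import subprocess

    # Stream change notifications for the Interface table; each line of
    # output describes rows that were added, removed or modified.
    cmd = ['ovsdb-client', 'monitor', 'Interface',
           'name,ofport,external_ids', '--format=json']
    monitor = subprocess.Popen(cmd, stdout=subprocess.PIPE)

    for raw in iter(monitor.stdout.readline, b''):
        line = raw.decode('utf-8').strip()
        if line:
            # A change on the Interface table was reported; rescan ports
            # on the next loop iteration instead of polling every time.
            print('ovsdb change detected: %s' % line)
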
We're still unable to explain some failures where the network appears to be
correctly wired (floating IP, router port, DHCP port, and VIF port), but
the SSH connection fails. We're hoping to reproduce this failure pattern
locally.

Finally, the tempest patch for full isolation [3] should be made usable
soon. Having another experimental job for it is something worth
considering, as for some reason it is not always easy to reproduce the same
failure modes exhibited on the gate.

Regards,
Salvatore

[1] https://review.openstack.org/#/c/58054/
[2] https://review.openstack.org/#/c/57420/
[3] https://review.openstack.org/#/c/53459/
[4] https://review.openstack.org/#/c/58284/
[5] https://review.openstack.org/#/c/58860/
[6] https://review.openstack.org/#/c/58597/
[7] https://review.openstack.org/#/c/58415/
[8] https://review.openstack.org/#/c/45676/
[9] https://bugs.launchpad.net/neutron/+bug/1177973
[10] https://bugs.launchpad.net/neutron/+bugs?field.tag=neutron-parallel&field.tags_combinator=ANY
[11] https://blueprints.launchpad.net/neutron/+spec/neutron-tempest-parallel