From a driver's perspective, it would be simpler, and I think
sufficient, to change ML2 to call initialize() on drivers after the
forking, rather than requiring drivers to know about forking.
-Bob
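A minimal sketch of the ordering Bob suggests, with hypothetical names (a toy stand-in for ML2's driver manager, not actual Neutron code): fork the workers first, then call initialize() in each child, so a driver never sees a fork after its initialize() has run.

```python
import os

class ToyDriver:
    """Hypothetical stand-in for an ML2 mechanism driver (not Neutron code)."""
    def __init__(self):
        self.init_pid = None          # PID that ran initialize(), if any

    def initialize(self):
        self.init_pid = os.getpid()   # record which process initialized us

def serve(driver, workers=2):
    """Fork first, then initialize() in each worker process."""
    for _ in range(workers):
        if os.fork() == 0:            # child: now safe to initialize
            driver.initialize()
            os._exit(0)
    # parent: never calls initialize(); just reaps the workers
    for _ in range(workers):
        os.wait()
```

With this ordering, any green threads the driver spawns in initialize() are created post-fork and stay confined to their own worker.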
On 6/8/15 2:59 PM, Armando M. wrote:
Interestingly, [1] was filed a few moments ago:
[1] https://bugs.launchpad.net/neutron/+bug/1463129
On 2 June 2015 at 22:48, Salvatore Orlando <[email protected]> wrote:
I'm not sure you can test this behaviour on your own, because it requires the VMware plugin and the eventlet handling of backend responses. But the issue was manifesting and had to be fixed with this mega-hack [1]. The issue was not about several workers executing the same code - the loopingcall was always started on a single thread. The issue I witnessed was that the other API workers simply hung.
There's probably something we need to understand about how eventlet can work safely with os.fork (I just think they're not really made to work together!).
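The underlying hazard here is general, not eventlet-specific: a thread started before os.fork() simply does not exist in the child, which inherits the memory image but only the forking thread of execution. A hedged illustration using stdlib native threads as a stand-in for a pre-fork green thread (not eventlet, and not Neutron code):

```python
import os
import threading
import time

counter = {"ticks": 0}

def ticker():
    # Background thread, standing in for a thread started before the fork.
    while True:
        counter["ticks"] += 1
        time.sleep(0.01)

def child_loses_thread():
    """Return True if the forked child no longer has the ticker running."""
    threading.Thread(target=ticker, daemon=True).start()
    time.sleep(0.05)                      # let the ticker run in the parent
    pid = os.fork()
    if pid == 0:
        before = counter["ticks"]         # snapshot inherited at fork time
        time.sleep(0.05)
        # Only the forking thread survives fork(); the count stays frozen.
        os._exit(0 if counter["ticks"] == before else 1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) == 0
```

Eventlet adds a further wrinkle on top of this: its hub and any pending greenlets are part of the inherited memory image, which is one plausible source of the hangs described above.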
Regardless, I did not spend too much time on it, because I thought that the multiple-workers code might be rewritten anyway by the pecan switch activities you're doing.
Salvatore
[1] https://review.openstack.org/#/c/180145/
On 3 June 2015 at 02:20, Kevin Benton <[email protected]> wrote:
Sorry about the long delay.
> Even the LOG.error("KEVIN PID=%s network response: %s" % (os.getpid(), r.text)) line? Surely the server would have forked before that line was executed - so what could prevent it from executing once in each forked process, and hence generating multiple logs?
Yes, just once. I wasn't able to reproduce the behavior you ran into. Maybe eventlet has some protection for this? Can you provide a small code sample for the logging driver that does reproduce the issue?
On Wed, May 13, 2015 at 5:19 AM, Neil Jerram <[email protected]> wrote:
Hi Kevin,
Thanks for your response...
On 08/05/15 08:43, Kevin Benton wrote:
I'm not sure I understand the behavior you are seeing. When your mechanism driver gets initialized and kicks off processing, all of that should be happening in the parent PID. I don't know why your child processes start executing code that wasn't invoked. Can you provide a pointer to the code, or give a sample that reproduces the issue?
https://github.com/Metaswitch/calico/tree/master/calico/openstack
Basically, our driver's initialize method immediately kicks off a green thread to audit what is now in the Neutron DB, and to ensure that the other Calico components are consistent with that.
I modified the linuxbridge mech driver to try to reproduce it: http://paste.openstack.org/show/216859/
In the output, I never saw any of the init code output I added more than once, including from the function spawned using eventlet.
Interesting. Even the LOG.error("KEVIN PID=%s network response: %s" % (os.getpid(), r.text)) line? Surely the server would have forked before that line was executed - so what could prevent it from executing once in each forked process, and hence generating multiple logs?
Thanks,
Neil
The only time I ever saw anything executed by a child process was actual API requests (e.g. the create_port method).
On Thu, May 7, 2015 at 6:08 AM, Neil Jerram <[email protected]> wrote:
Is there a design for how ML2 mechanism drivers are supposed to cope with the Neutron server forking?
What I'm currently seeing, with api_workers = 2, is:
- my mechanism driver gets instantiated and initialized, and immediately kicks off some processing that involves communicating over the network
- the Neutron server process then forks into multiple copies
- multiple copies of my driver's network processing then continue, and interfere badly with each other :-)
I think what I should do is:
- wait until any forking has happened
- then decide (somehow) which mechanism driver is going to kick off that processing, and do that.
But how can a mechanism driver know when the Neutron server forking has happened?
Thanks,
Neil
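For the "decide (somehow) which one" step, one common post-fork pattern is for each worker to attempt a non-blocking exclusive flock: exactly one process wins and kicks off the processing. This is a hedged sketch of that general technique, not existing Neutron machinery; the lock path is a hypothetical, deployment-chosen value.

```python
import fcntl
import os

def try_become_leader(lock_path):
    """Try to become the single 'leader' worker via a non-blocking flock.

    Exactly one process can hold the exclusive lock; the winner keeps the
    fd open for its lifetime, and the kernel releases the lock automatically
    if that process exits, so another worker could take over on restart.
    """
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        return False      # another worker already owns the processing
    return True           # this worker kicks off the network processing
```

Each worker would call this once, post-fork, before starting its audit thread; losers skip the processing entirely.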
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Kevin Benton
--
Kevin Benton