Right now I'm leaning toward "parent always does nothing" + PluginWorker. Everything is forked, there is no special case for workers==0, and the only-once case is designated explicitly. Of course, it's still early in the day and I haven't had any coffee.
I have updated the patch.
On Wed, Jun 10, 2015 at 2:25 PM, Neil Jerram neil.jer...@metaswitch.com
wrote:
On 08/06/15 22:02, Kevin Benton wrote:
This depends on what initialize is supposed to be doing. If it's just a
one-time sync with a back-end, then I think calling it once in each
child process might not be what we want.
There are two classes of behavior that need to be handled:
1) Things that can only be done after forking, like setting up connections or spawning threads.
2) Things that should only be done once regardless of the number of forks, like syncing.
Even when you just want something to happen
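The two classes above can be sketched with plain os.fork() standing in for Neutron's api_workers. Everything here is illustrative, not Neutron code; the pipe is just a way to collect evidence from the children. The one-time sync runs in the parent before forking, while per-worker setup runs after the fork, once per child:

```python
import os

r, w = os.pipe()

os.write(w, b"sync\n")           # class 2: exactly once, pre-fork, parent only

children = []
for _ in range(2):               # stand-in for api_workers = 2
    pid = os.fork()
    if pid == 0:
        os.write(w, b"setup\n")  # class 1: per worker, post-fork
        os._exit(0)
    children.append(pid)

for pid in children:
    os.waitpid(pid, 0)
os.close(w)

words = os.read(r, 1024).decode().split()
print(words.count("sync"), words.count("setup"))  # 1 2
```

The "sync" work appears once and the "setup" work appears once per worker, which is exactly the split being argued for.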
Interestingly, [1] was filed a few moments ago:
[1] https://bugs.launchpad.net/neutron/+bug/1463129
On 2 June 2015 at 22:48, Salvatore Orlando sorla...@nicira.com wrote:
I'm not sure if you can test this behaviour on your own because it requires
the VMware plugin and the eventlet handling of backend response.
From a driver's perspective, it would be simpler, and I think
sufficient, to change ML2 to call initialize() on drivers after the
forking, rather than requiring drivers to know about forking.
-Bob
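That suggestion might look roughly like the sketch below. DriverManager, RecordingDriver and initialize_drivers are hypothetical stand-ins, not the real ML2 manager; the point is only that drivers are *loaded* once pre-fork, while initialize() runs post-fork in each worker:

```python
import os

class DriverManager:
    """Hypothetical manager: loads drivers once, initializes them post-fork."""
    def __init__(self, drivers):
        self.drivers = drivers        # drivers are loaded once, pre-fork

    def initialize_drivers(self):
        # Called in each worker, after the fork.
        for driver in self.drivers:
            driver.initialize()

class RecordingDriver:
    def __init__(self):
        self.pids = []
    def initialize(self):
        self.pids.append(os.getpid())

driver = RecordingDriver()
manager = DriverManager([driver])   # loading happens once, in the parent

r, w = os.pipe()
for _ in range(2):
    if os.fork() == 0:
        manager.initialize_drivers()   # post-fork, per worker
        os.write(w, ("%d\n" % driver.pids[0]).encode())
        os._exit(0)

for _ in range(2):
    os.wait()
os.close(w)

pids = os.read(r, 1024).decode().split()
print(len(pids), len(set(pids)))  # 2 2: initialize() ran once in each worker
```

Each worker sees initialize() exactly once, in its own PID, without the driver ever knowing that forking happened.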
On 6/8/15 2:59 PM, Armando M. wrote:
Interestingly, [1] was filed a few moments ago:
[1] https://bugs.launchpad.net/neutron/+bug/1463129
Right, I think there are use cases for both. I don't think it's a huge
burden to have to know about it. I think it's actually quite important
to understand when the initialization happens.
--
Russell Bryant
On 06/08/2015 05:02 PM, Kevin Benton wrote:
This depends on what initialize is supposed to be doing. If it's just a
one-time sync with a back-end, then I think calling it once in each child
process might not be what we want.
I left a comment on Terry's patch. I think we should just use the callback
manager to have a pre-fork and post-fork
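A minimal sketch of that callback-manager idea (the event names and the subscribe/notify API here are illustrative, not Neutron's actual callbacks registry): drivers subscribe to pre-fork and post-fork events instead of knowing about forking themselves.

```python
# Illustrative event names and registry; not Neutron's real callback manager.
PRE_FORK = "pre-fork"
POST_FORK = "post-fork"

_callbacks = {PRE_FORK: [], POST_FORK: []}

def subscribe(event, fn):
    _callbacks[event].append(fn)

def notify(event):
    for fn in _callbacks[event]:
        fn()

calls = []
subscribe(PRE_FORK, lambda: calls.append("one-time sync"))      # class 2 work
subscribe(POST_FORK, lambda: calls.append("open connection"))   # class 1 work

# Server side: fire PRE_FORK once in the parent, POST_FORK once per worker.
notify(PRE_FORK)
for _worker in range(2):     # stand-in for the real fork of each worker
    notify(POST_FORK)

print(calls)  # ['one-time sync', 'open connection', 'open connection']
```

The server decides *when* each event fires; drivers only declare *what* belongs in each phase.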
Sorry about the long delay.
Even the LOG.error("KEVIN PID=%s network response: %s" % (os.getpid(),
r.text)) line? Surely the server would have forked before that line was
executed - so what could prevent it from executing once in each forked
process, and hence generating multiple logs?
Yes, just
I'm not sure if you can test this behaviour on your own because it requires
the VMware plugin and the eventlet handling of backend response.
But the issue was manifesting and had to be fixed with this mega-hack [1].
The issue was not about several workers executing the same code - the
loopingcall
Hi Kevin,
Thanks for your response...
On 08/05/15 08:43, Kevin Benton wrote:
I'm not sure I understand the behavior you are seeing. When your
mechanism driver gets initialized and kicks off processing, all of that
should be happening in the parent PID. I don't know why your child
processes start executing code that wasn't invoked.
Hi Salvatore,
Thanks for your reply...
On 08/05/15 09:20, Salvatore Orlando wrote:
Just like the Neutron plugin manager, the ML2 driver manager ensures
drivers are loaded only once regardless of the number of workers.
What Kevin did proves that drivers are correctly loaded before forking
(I reckon).
I'm not sure I understand the behavior you are seeing. When your mechanism
driver gets initialized and kicks off processing, all of that should be
happening in the parent PID. I don't know why your child processes start
executing code that wasn't invoked. Can you provide a pointer to the code
or
Just like the Neutron plugin manager, the ML2 driver manager ensures
drivers are loaded only once regardless of the number of workers.
What Kevin did proves that drivers are correctly loaded before forking (I
reckon).
However, forking is something to be careful about especially when using
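One plain-threading illustration of why forking needs care (eventlet green threads raise analogous issues; this is a generic CPython demonstration, not Neutron code): a thread started before fork() simply does not exist in the child, so any work it was doing is silently lost there.

```python
import os
import threading
import time

# Start a worker thread in the parent before forking.
t = threading.Thread(target=time.sleep, args=(2,), daemon=True)
t.start()

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: only the thread that called fork() survives; report our count.
    os.write(w, str(threading.active_count()).encode())
    os._exit(0)

os.waitpid(pid, 0)
child_count = int(os.read(r, 16).decode())   # main thread only
parent_count = threading.active_count()      # main thread + sleeper
print(child_count, parent_count)  # 1 2
```

Any driver that spawns threads (or greenthreads) at load time therefore does its background work only in the parent, never in the forked API workers, unless that work is redone post-fork.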
Is there a design for how ML2 mechanism drivers are supposed to cope
with the Neutron server forking?
What I'm currently seeing, with api_workers = 2, is:
- my mechanism driver gets instantiated and initialized, and immediately
kicks off some processing that involves communicating over the