Hi,
I think the main issue is the behavior of the API
of oslo-incubator/openstack/common/service.py, specifically:
* ProcessLauncher.launch_service(MyService())
where MyService behaves like this:
class MyService:
    def __init__(self):
        ...  # code here runs BEFORE os.fork()

    def start(self):
        ...  # code here runs AFTER os.fork()
So if an application creates an FD inside MyService.__init__, or
before calling ProcessLauncher.launch_service, that FD will be shared
between processes, and we get this kind of issue...
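To illustrate, here is a small standalone sketch (a plain file standing in for a broker socket, not oslo code) showing that an FD opened before os.fork() is shared: parent and child share one underlying open file description, including its file offset, so their writes land in the same stream:

```python
import os
import tempfile

# An FD opened before os.fork() is shared between parent and child:
# both hold the same open file description, including the file offset.
fd, path = tempfile.mkstemp()
f = os.fdopen(fd, "w")

pid = os.fork()
if pid == 0:
    # child: inherits the very same open file description
    f.write("child ")
    f.flush()
    os._exit(0)
else:
    # parent: the child's write moved our *shared* offset, so this
    # write appends instead of overwriting from offset 0
    os.waitpid(pid, 0)
    f.write("parent")
    f.flush()
    with open(path) as check:
        content = check.read()
    os.unlink(path)
    print(content)  # "child parent"
```

With a broker connection instead of a file, the same sharing means two processes reading and writing one protocol stream, which is how things break.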
For the rabbitmq/qpid drivers, the first connection is created when the
rpc server is started, or when the first rpc call/cast/... is made.
So as long as the application doesn't do any of that inside
MyService.__init__ or before ProcessLauncher.launch_service,
everything works as expected.
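A minimal sketch of that lazy-creation pattern (a hypothetical RpcClient with a plain socket standing in for a broker connection, not the actual driver code): no FD exists until first use, so a fork before that point is safe:

```python
import socket

class RpcClient:
    """Hypothetical client that connects lazily, on first use."""

    def __init__(self):
        self._conn = None  # no FD yet: forking after this is safe

    def _ensure_connection(self):
        # The first call creates the socket, so each worker process
        # that gets here after os.fork() opens its own descriptor.
        if self._conn is None:
            self._conn = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        return self._conn

client = RpcClient()
before = client._conn            # None: nothing yet to share across a fork
conn = client._ensure_connection()
after = client._conn             # now a real, per-process socket
conn.close()
```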
But when the issue does appear, I think it is an application issue (rpc
work done before the os.fork()).
For the amqp1 driver, I think it is the same thing: it seems
to do lazy creation of the connection too.
I will take a look at the neutron code to see if I can find any rpc
usage before the os.fork().
Personally, I don't like this API: the behavioral difference between
'__init__' and 'start' is too implicit.
Cheers,
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht
On 2014-11-24 20:27, Ken Giusti wrote:
Hi all,
As far as oslo.messaging is concerned, should it be possible for the
main application to safely os.fork() when there is already an active
connection to a messaging broker?
I ask because I'm hitting what appear to be fork-related issues with
the new AMQP 1.0 driver. I think the same problems have been seen
with the older impl_qpid driver as well [0].
Both drivers utilize a background threading.Thread that handles all
async socket I/O and protocol timers.
In the particular case I'm trying to debug, rpc_workers is set to 4 in
neutron.conf. As far as I can tell, this causes neutron.service to
os.fork() four workers, but does so after it has created a listener
(and therefore a connection to the broker).
This results in multiple processes all select()'ing the same set of
network sockets, and stuff breaks :(
Even without the background thread, wouldn't this usage still result in
sockets being shared across the parent/child processes? Seems
dangerous.
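The sharing itself is easy to demonstrate: a socket created before os.fork() is a live descriptor in the child, so nothing stops both processes from doing I/O on it. A standalone sketch (socketpair as a stand-in, not neutron or driver code):

```python
import os
import socket

# A socket created before os.fork() stays live and shared in the child:
# this is what lets several worker processes do I/O on one connection.
parent_end, child_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # child: the inherited descriptor works exactly like the parent's
    child_end.sendall(b"hello from child")
    os._exit(0)
else:
    os.waitpid(pid, 0)
    data = parent_end.recv(64)
    parent_end.close()
    child_end.close()
    print(data)  # b'hello from child'
```

With an AMQP connection in place of the socketpair, every forked worker holds the same broker socket, and whichever process select()'s first steals bytes from the others.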
Thoughts?
[0] https://bugs.launchpad.net/oslo.messaging/+bug/1330199
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev