[issue21998] asyncio: support fork

2020-02-10 Thread STINNER Victor


STINNER Victor  added the comment:

There has been no activity for 2 years. Asyncio is mature now. It seems that 
users have learned how to work around this issue, or don't use fork() with 
asyncio.

I'm closing the issue.

--
resolution:  -> out of date
stage:  -> resolved
status: open -> closed




[issue21998] asyncio: support fork

2018-09-19 Thread Yury Selivanov


Yury Selivanov  added the comment:

I'll revisit this later.

--




[issue21998] asyncio: support fork

2018-09-19 Thread STINNER Victor


STINNER Victor  added the comment:

> I'm torn between 2 & 3.  Guido, Victor, Martin, what do you think?

Give up, document that fork() is not supported and close the issue :-) IMHO 
it's not worth it.

--




[issue21998] asyncio: support fork

2018-03-03 Thread Zac Medico

Zac Medico  added the comment:

I'm not sure about possible use cases that might conflict with this approach, 
but using a separate event loop for each pid seems very reasonable to me, as 
follows:

import asyncio
import os

_default_policy = asyncio.get_event_loop_policy()
_pid_loop = {}

class MultiprocessingPolicy(asyncio.AbstractEventLoopPolicy):
    """Keep one event loop per process ID."""

    def get_event_loop(self):
        # Look up (or lazily create) the loop owned by the current pid, so a
        # forked child never reuses its parent's loop.
        pid = os.getpid()
        loop = _pid_loop.get(pid)
        if loop is None:
            loop = self.new_event_loop()
            _pid_loop[pid] = loop
        return loop

    def new_event_loop(self):
        return _default_policy.new_event_loop()

asyncio.set_event_loop_policy(MultiprocessingPolicy())
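
A rough usage sketch of the policy above (illustrative only, not part of the 
proposal, and Unix-only since it calls os.fork() directly): the parent and the 
forked child each ask for "the" event loop and transparently get separate 
per-pid loops.

import asyncio
import os

async def work(tag):
    await asyncio.sleep(0.1)
    return (tag, os.getpid())

parent_loop = asyncio.get_event_loop()

pid = os.fork()
if pid == 0:
    # Child: get_event_loop() sees a new pid and hands back a fresh loop,
    # so the parent's loop (and its selector/self-pipe) is never touched.
    child_loop = asyncio.get_event_loop()
    assert child_loop is not parent_loop
    print(child_loop.run_until_complete(work("child")))
    os._exit(0)
else:
    print(parent_loop.run_until_complete(work("parent")))
    os.waitpid(pid, 0)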

--
nosy: +zmedico




[issue21998] asyncio: support fork

2017-06-28 Thread Guido van Rossum

Changes by Guido van Rossum :


--
nosy:  -gvanrossum




[issue21998] asyncio: support fork

2017-06-28 Thread Yury Selivanov

Yury Selivanov added the comment:

> I'm not sure why it would be debug-only.  You usually don't fork() often, and 
> you don't have many event loops around, so the feature sounds cheap.

I think you're right. If it's low or zero overhead we can have the check always 
enabled.

--




[issue21998] asyncio: support fork

2017-06-28 Thread Antoine Pitrou

Antoine Pitrou added the comment:

I'm not sure why it would be debug-only.  You usually don't fork() often, and 
you don't have many event loops around, so the feature sounds cheap.

In any case, I'm not directly affected by this issue, I'm merely suggesting 
options.

--




[issue21998] asyncio: support fork

2017-06-28 Thread Yury Selivanov

Yury Selivanov added the comment:

> A compromise for the short term would be to detect fork in debug mode
> and raise an exception, and explain how to fix such a bug. What do you
> think?

I'd prefer to fix it properly in 3.7.

--




[issue21998] asyncio: support fork

2017-06-28 Thread STINNER Victor

STINNER Victor added the comment:

A compromise for the short term would be to detect fork in debug mode
and raise an exception, and explain how to fix such a bug. What do you
think?

--




[issue21998] asyncio: support fork

2017-06-28 Thread Yury Selivanov

Yury Selivanov added the comment:

> Possible answer: have a global WeakSet of event loops.  In the child fork 
> handler, iterate over all event loops and "break" those that have already 
> been started.

We can do this but only in debug mode.

--




[issue21998] asyncio: support fork

2017-06-28 Thread Antoine Pitrou

Antoine Pitrou added the comment:

Possible answer: have a global WeakSet of event loops.  In the child fork 
handler, iterate over all event loops and "break" those that have already been 
started.
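
A minimal sketch of that idea, written as application-level code (the policy 
subclass, the hook function, and the use of a plain warning to "break" a loop 
are all illustrative assumptions; os.register_at_fork() only exists since 
Python 3.7, and asyncio itself does not work this way):

import asyncio
import os
import warnings
import weakref

_all_loops = weakref.WeakSet()

class TrackingEventLoopPolicy(asyncio.DefaultEventLoopPolicy):
    """Remember every loop created, so a fork hook can find them later."""
    def new_event_loop(self):
        loop = super().new_event_loop()
        _all_loops.add(loop)
        return loop

def _after_fork_in_child():
    # Forget the inherited default loop and flag every loop created before
    # the fork; a real implementation would invalidate their selectors and
    # self-pipes instead of merely warning.
    asyncio.set_event_loop(None)
    for loop in list(_all_loops):
        warnings.warn("event loop %r was inherited across fork(); "
                      "do not use it in the child" % loop)

os.register_at_fork(after_in_child=_after_fork_in_child)
asyncio.set_event_loop_policy(TrackingEventLoopPolicy())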

--




[issue21998] asyncio: support fork

2017-06-28 Thread STINNER Victor

STINNER Victor added the comment:

> The most reasonable IMHO would be for it to mark the event loop "broken" (or 
> closed?) in the child, to forbid any further use.

Hum, the problem is that Python cannot guess if the event loop will be
used in the child or the parent process :-/ The problem only occurs
when the event loop is used in both processes, no?

--




[issue21998] asyncio: support fork

2017-06-28 Thread Antoine Pitrou

Antoine Pitrou added the comment:

> Python 3.7 got a new os.register_at_fork() function. I don't know if it 
> could help:

The most reasonable IMHO would be for it to mark the event loop "broken" (or 
closed?) in the child, to forbid any further use.

By the way, on Python 3 (which is pretty much required by asyncio), I really 
suggest using the "forkserver" start method of multiprocessing; it removes a 
ton of hassle with inheritance through forking.
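
For reference, a minimal sketch of that suggestion (the worker function and 
pool size are illustrative; "forkserver" is only available on Unix): workers 
are spawned by a fork server that was started before any event loop existed, 
so nothing asyncio-related is inherited.

import asyncio
import multiprocessing as mp

def worker(n):
    # Runs in a fresh process served by the fork server; it can safely
    # create and own its own event loop.
    async def job():
        await asyncio.sleep(0.01)
        return n * n

    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(job())
    finally:
        loop.close()

if __name__ == "__main__":
    mp.set_start_method("forkserver")
    with mp.Pool(2) as pool:
        print(pool.map(worker, range(4)))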

--
nosy: +pitrou




[issue21998] asyncio: support fork

2017-06-27 Thread STINNER Victor

STINNER Victor added the comment:

Python 3.7 got a new os.register_at_fork() function. I don't know if it could 
help:
https://docs.python.org/dev/library/os.html#os.register_at_fork
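
A hedged sketch of how an application (not asyncio itself) could use that hook; 
the only goal is to make sure a forked child never reuses the parent's loop, 
and the helper name is made up for illustration:

import asyncio
import os

def _forget_inherited_loop():
    # Runs in the child right after fork(): drop the thread's reference to
    # the parent's loop.  The parent's loop object still exists in the
    # child's memory, but nothing will touch its file descriptors; the
    # child must create its own loop with asyncio.new_event_loop().
    asyncio.set_event_loop(None)

os.register_at_fork(after_in_child=_forget_inherited_loop)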

Can we close this issue? Sorry, I lost track of this issue and I see no 
activity since the end of 2015...

--




[issue21998] asyncio: support fork

2015-12-22 Thread Adam Bishop

Adam Bishop added the comment:

A note about this issue should really be added to the documentation - on OS X, 
it fails with the rather nonsensical "OSError: [Errno 9] Bad file descriptor", 
making this very hard to debug.

I don't have any specific requirement for fork support in asyncio, as it's 
trivial to move loop creation after the fork (see the sketch below), but having 
to run the interpreter through GDB to diagnose the problem is not a good state 
of affairs.
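
The workaround described above, sketched for clarity (the child's workload is 
illustrative, and this is Unix-only since it calls os.fork() directly):

import asyncio
import os

pid = os.fork()
if pid == 0:
    # Child: create the loop *after* the fork so it gets its own self-pipe
    # and kqueue/epoll selector instead of sharing the parent's file
    # descriptors, which is what produces the EBADF errors on OS X.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(asyncio.sleep(0))
    loop.close()
    os._exit(0)
else:
    os.waitpid(pid, 0)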

--
nosy: +Adam.Bishop




[issue21998] asyncio: support fork

2015-10-27 Thread Christian H

Changes by Christian H :


--
nosy: +Christian H




[issue21998] asyncio: support fork

2015-09-04 Thread Larry Hastings

Changes by Larry Hastings :


--
nosy:  -larry




[issue21998] asyncio: support fork

2015-09-04 Thread Larry Hastings

Larry Hastings added the comment:

I've re-marked it as "normal" priority and moved it to 3.6.  Not my problem 
anymore!  :D

--
priority: deferred blocker -> normal
versions: +Python 3.6 -Python 3.4, Python 3.5




[issue21998] asyncio: support fork

2015-09-04 Thread STINNER Victor

STINNER Victor added the comment:

> Surely this is too late for 3.5?

I'm not 100% convinced that asyncio must support fork, so it's too late :-) 
Anyway, we don't care, asyncio will be under provisional status for one more 
cycle (3.5) :-p

--




[issue21998] asyncio: support fork

2015-09-04 Thread Larry Hastings

Larry Hastings added the comment:

Surely this is too late for 3.5?

--
nosy: +larry




[issue21998] asyncio: support fork

2015-05-26 Thread Yury Selivanov

Yury Selivanov added the comment:

> I was thinking only in the child. The parent should be able to continue to 
> use the loop as if the fork didn't happen, right?

Yes, everything should be fine.

I'll rephrase my question: do you think there is a way (and need) to at least 
throw a warning in the master process that the fork has failed (without 
monkey-patching os.fork(), which is not an option)?

--




[issue21998] asyncio: support fork

2015-05-26 Thread Martin Richard

Martin Richard added the comment:

Hi,

My patch was a variation of haypo's patch. The goal was to duplicate the
loop and its internal objects (loop and self-pipes) without changing much
of its state as seen from the outside (keeping callbacks and active tasks). I
wanted to be conservative with this patch, but it is not the option I
prefer.

I think that raising a RuntimeError in the child is fine, but may not be
enough:

IMHO, saying "the loop can't be used anymore in the child" is fine, but "a
process in which an asyncio loop lives must not be forked" is too
restrictive (I'm not thinking of the fork+exec case, which is probably fine
anyway), because a library may rely on child processes, for instance.

Hence, we should allow a program to fork and eventually dispose of the
loop's resources by calling loop.close() - or any other mechanism that
you see fit (clearing all references to the loop is tedious because of the
global default event loop and the cycles between futures/tasks and the
loop).

However, the normal loop.close() sequence will unregister all the fds
registered to the selector, which will impact the parent. Under Linux with
epoll, it's fine if we only close the selector.

I would therefore, in the child after a fork, close the loop without
breaking the selector state (closing without unregister()'ing fds), unset
the default loop so get_event_loop() would create a new loop, then raise
RuntimeError.

I can elaborate on the use case I care about, but in a nutshell, doing so
would allow spawning worker processes able to create their own loop without
requiring an idle "blank" child process to be used as a base for the
workers. It adds the benefit, for instance, of allowing data to be shared
between the parent and the child by leveraging the OS's copy-on-write.



--




[issue21998] asyncio: support fork

2015-05-26 Thread Yury Selivanov

Yury Selivanov added the comment:

> That's really the problem of the code that calls fork(), not directly of
> the event loop. There are some very solid patterns around that (I've
> written several in the distant past, and Unix hasn't changed that much :-).

Alright ;)  I'll draft a patch sometime soon.

--
assignee:  -> yselivanov




[issue21998] asyncio: support fork

2015-05-26 Thread Guido van Rossum

Guido van Rossum added the comment:

I don't understand. If the fork fails nothing changes right? I guess I'm 
missing some context or use case.

--




[issue21998] asyncio: support fork

2015-05-26 Thread Yury Selivanov

Yury Selivanov added the comment:

> I don't understand. If the fork fails nothing changes right? I guess I'm 
> missing some context or use case.

Maybe I'm wrong about this.  My line of thought is: a failed fork() call is a 
bug in the program.  Now, the master process will continue operating as it was, 
no warnings, no errors.  The child process will crash with a RuntimeError 
exception.  Will it be properly reported/logged?

I guess the forked child will share the stderr, so the exception won't pass 
completely unnoticed, right?

--




[issue21998] asyncio: support fork

2015-05-26 Thread Yury Selivanov

Yury Selivanov added the comment:

> I would therefore, in the child after a fork, close the loop without 
> breaking the selector state (closing without unregister()'ing fds), unset 
> the default loop so get_event_loop() would create a new loop, then raise 
> RuntimeError.

> I can elaborate on the use case I care about, but in a nutshell, doing so
> would allow to spawn worker processes able to create their own loop without
> requiring an idle blank child process that would be used as a base for
> the workers. It adds the benefit, for instance, of allowing to share data
> between the parent and the child leveraging OS copy-on-write.

The only solution to safely fork a process is to fix loop.close() to
check if it's called from a forked process and to close the loop in
a safe way (to avoid breaking the master process).  In this case
we don't even need to throw a RuntimeError.  But we won't have a 
chance to guarantee that all resources will be freed correctly (right?)

So the idea is (I guess it's the 5th option):

1. If the forked child doesn't call loop.close() immediately after
forking we raise RuntimeError on first loop operation.

2. If the forked child calls (explicitly) loop.close() -- it's fine, 
we just close it, the error won't be raised.  When we close, we only 
close the selector (without unregistering or re-registering any FDs), 
and we clean up callback queues without trying to close anything.

Guido, do you still think that raising a RuntimeError in a child
process in an unavoidable way is a better option?
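
A minimal sketch of the detection half of this idea, assuming the loop records 
the PID it was created in (the mixin and attribute names are made up for 
illustration; this is not asyncio's actual implementation):

import os

class ForkAwareLoopMixin:
    """Illustrative only: refuse to operate in a process other than the creator."""

    def __init__(self):
        super().__init__()
        self._creator_pid = os.getpid()  # hypothetical attribute

    def _check_not_forked(self):
        # Would be called at the top of run_forever(), run_until_complete(),
        # call_soon(), etc.
        if os.getpid() != self._creator_pid:
            raise RuntimeError(
                "event loop was created in another process (before fork()); "
                "call loop.close() in the child or create a new loop")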

--




[issue21998] asyncio: support fork

2015-05-26 Thread Guido van Rossum

Guido van Rossum added the comment:

I think only (3) is reasonable -- raise RuntimeError. There are too many use 
cases to consider and the behavior of the selectors seems to vary as well. Apps 
should ideally not fork with an event loop open; the only reasonable thing to 
do after a fork with an event loop open is to exec another binary (hopefully 
closing FDs using close-on-exec).

*Perhaps* it's possible to safely release some resources used by a loop after a 
fork but I'm skeptical even of that. Opportunistically closing the FDs used for 
the self-pipe and the selector seems fine (whatever is safe could be done the 
first time the loop is touched after the fork, just before raising 
RuntimeError).

--




[issue21998] asyncio: support fork

2015-05-26 Thread Guido van Rossum

Guido van Rossum added the comment:

I don't actually know if the 5th option is possible. My strong requirement is 
that no matter what the child process does, the parent should still be able to 
continue using the loop. IMO it's better to leak a FD in the child than to 
close a resource owned by the parent. Within those constraints I'm okay with 
various solutions.

--




[issue21998] asyncio: support fork

2015-05-26 Thread Martin Richard

Martin Richard added the comment:

2015-05-26 20:40 GMT+02:00 Yury Selivanov rep...@bugs.python.org:


> Yury Selivanov added the comment:
> The only solution to safely fork a process is to fix loop.close() to
> check if it's called from a forked process and to close the loop in
> a safe way (to avoid breaking the master process).  In this case
> we don't even need to throw a RuntimeError.  But we won't have a
> chance to guarantee that all resources will be freed correctly (right?)


If all the tasks are cancelled and the loop's internal structures (callback
lists, task sets, etc.) are cleared, I believe that the garbage collector
will eventually be able to dispose of everything.

However, it's indeed not enough: resources created by other parts of
asyncio may leak (transports, subprocesses). For instance, I proposed to add
a detach() method to SubprocessTransport here:
http://bugs.python.org/issue23540 - in this case, I need to close the stdin,
stdout and stderr pipes without killing the subprocess.

> So the idea is (I guess it's the 5th option):
>
> 1. If the forked child doesn't call loop.close() immediately after
> forking we raise RuntimeError on first loop operation.
>
> 2. If the forked child calls (explicitly) loop.close() -- it's fine,
> we just close it, the error won't be raised.  When we close we only
> close the selector (without unregistering or re-registering any FDs),
> we clean up callback queues without trying to close anything.
>
> Guido, do you still think that raising a RuntimeError in a child
> process in an unavoidable way is a better option?




--




[issue21998] asyncio: support fork

2015-05-26 Thread Yury Selivanov

Yury Selivanov added the comment:

> How do other event loops handle fork? Twisted, Tornado, libuv, libev,
> libevent, etc.

It looks like using fork() while an event loop is running isn't recommended in 
any of the above.  If I understand the code correctly, libev & gevent 
reinitialize loops in the forked process (essentially, you have a new loop).

I think we have the following options:

1. Document that using fork() is not recommended.

2. Detect fork() and re-initialize event loop in the child process (cleaning-up 
callback queues, re-initializing selectors, creating new self-pipe).

3. Detect fork() and raise a RuntimeError.  Document that asyncio event loop 
does not support forking at all.

4. The most recent patch by Martin detects the fork() and reinitializes 
self-pipe and selector (although all FDs are kept in the new selector).  I'm 
not sure I understand this option.

I'm torn between 2 & 3.  Guido, Victor, Martin, what do you think?

--




[issue21998] asyncio: support fork

2015-05-26 Thread Yury Selivanov

Yury Selivanov added the comment:

 I think only (3) is reasonable -- raise RuntimeError.

Just to be clear -- do we want to raise a RuntimeError in the parent, in the 
child, or both processes?

--




[issue21998] asyncio: support fork

2015-05-26 Thread Christian Heimes

Changes by Christian Heimes li...@cheimes.de:


--
nosy: +christian.heimes




[issue21998] asyncio: support fork

2015-05-26 Thread Guido van Rossum

Guido van Rossum added the comment:

I was thinking only in the child. The parent should be able to continue to use 
the loop as if the fork didn't happen, right?

--




[issue21998] asyncio: support fork

2015-05-25 Thread Yury Selivanov

Changes by Yury Selivanov yseliva...@gmail.com:


--
priority: normal -> deferred blocker




[issue21998] asyncio: support fork

2015-02-17 Thread Martin Richard

Martin Richard added the comment:

In that case, I suggest a small addition to your patch that would do the trick:

in unix_events.py:

+    def _at_fork(self):
+        super()._at_fork()
+        self._selector._at_fork()
+        self._close_self_pipe()
+        self._make_self_pipe()
+

becomes:

+    def _at_fork(self):
+        super()._at_fork()
+        if not hasattr(self._selector, '_at_fork'):
+            return
+        self._selector._at_fork()
+        self._close_self_pipe()
+        self._make_self_pipe()

--




[issue21998] asyncio: support fork

2015-02-17 Thread STINNER Victor

STINNER Victor added the comment:

> It will (obviously) not work with python 3.4 since self._selector won't have 
> an _at_fork() method.

asyncio doc contains:
The asyncio package has been included in the standard library on a provisional 
basis. Backwards incompatible changes (up to and including removal of the 
module) may occur if deemed necessary by the core developers.

It's not the case for selectors. Even if it would be possible to implement 
selector._at_fork() in asyncio, it would make more sense to implement it in the 
selectors module.

@neologix: Would you be ok to add a *private* _at_fork() method to selectors 
classes in Python 3.4 to fix this issue?

I know that you are not a fan of fork, me neither, but users like to do crazy 
things with fork and then report bugs to asyncio :-)

--
nosy: +neologix




[issue21998] asyncio: support fork

2015-02-17 Thread Charles-François Natali

Charles-François Natali added the comment:

> @neologix: Would you be ok to add a *private* _at_fork() method to selectors 
> classes in Python 3.4 to fix this issue?

Not really: after fork(), you're hosed anyway:


   Q6  Will closing a file descriptor cause it to be removed from all epoll
       sets automatically?

   A6  Yes, but be aware of the following point.  A file descriptor is a
       reference to an open file description (see open(2)).  Whenever a
       descriptor is duplicated via dup(2), dup2(2), fcntl(2) F_DUPFD, or
       fork(2), a new file descriptor referring to the same open file
       description is created.  An open file description continues to exist
       until all file descriptors referring to it have been closed.  A file
       descriptor is removed from an epoll set only after all the file
       descriptors referring to the underlying open file description have
       been closed (or before if the descriptor is explicitly removed using
       epoll_ctl(2) EPOLL_CTL_DEL).  This means that even after a file
       descriptor that is part of an epoll set has been closed, events may
       be reported for that file descriptor if other file descriptors
       referring to the same underlying file description remain open.


What would you do with the selector after fork(): register the FDs in
a new epoll, remove them?

There's no sensible default behavior, and I'd rather avoid polluting
the code for this.
If asyncio wants to support this, it can create a new selector and
re-register everything it wants manually: there's a Selector.get_map()
exposing all that's needed.
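
A sketch of that manual re-registration (written here as standalone application 
code, assuming it runs in the child immediately after fork(); this is roughly 
what Martin's at_fork-3.patch later does inside asyncio):

import selectors

def clone_selector_after_fork(old_selector):
    # Build a brand new selector (i.e. a new epoll/kqueue fd) and register
    # the same file objects with the same events and data.  The old
    # selector is left untouched, so the parent's registrations survive.
    new_selector = selectors.DefaultSelector()
    for key in old_selector.get_map().values():
        new_selector.register(key.fileobj, key.events, key.data)
    return new_selector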

--




[issue21998] asyncio: support fork

2015-02-17 Thread STINNER Victor

STINNER Victor added the comment:

2015-02-17 20:16 GMT+01:00 Charles-François Natali rep...@bugs.python.org:
> What would you do with the selector after fork(): register the FDs in
> a new epoll, remove them?

See the patch:

+    def _at_fork(self):
+        # don't unregister file descriptors: epoll is still shared with
+        # the parent process
+        self._epoll = select.epoll()
+        for key in self._fd_to_key.values():
+            self._register(key)

EpollSelector._at_fork() does nothing to the current epoll object; it
creates a new epoll object and registers all file descriptors again.

Hum, maybe I should explicitly close the old epoll object.

> There's no sensible default behavior, and I'd rather avoid polluting
> the code for this.

What is wrong with the proposed patch?

> If asyncio wants to support this, it can create a new selector and
> re-register everything it wants manually: there's a Selector.get_map()
> exposing all that's needed.

If possible, I would prefer to implement "at fork" support in the selectors
module directly; the selectors module has better knowledge of
selectors. For example, asyncio is not aware of the selector._epoll
attribute.

--




[issue21998] asyncio: support fork

2015-02-17 Thread Martin Richard

Martin Richard added the comment:

The goal of the patch is to create a duplicate selector (a new epoll() 
structure with the same watched fds as the original epoll). It allows removing 
fds watched in the child's loop without impacting the parent process.

Actually, it's true that with the current implementation of the selectors 
module (using get_map()), we can achieve the same result as with Victor's 
patch without touching the selectors module. I attached a patch doing that, 
which also works with Python 3.4.

I thought about this at_fork() mechanism a bit more and I'm not sure of what we 
want to achieve with this. In my opinion, most of the time, we will want to 
recycle the loop in the child process (close it and create a new one) because 
we will not want to have the tasks and callbacks scheduled on the loop running 
on both the parent and the child (it would probably result in double writes on 
sockets, or double reads, for instance).

With the current implementation of asyncio, I can't recycle the loop for a 
single reason: closing the loop calls _close_self_pipe(), which unregisters the 
pipe from the selector (hence breaking the loop in the parent). Since the 
self-pipe is an object internal to the loop, I think it's safe to close the 
pipes without unregistering them from the selector. It is at least true with 
epoll() according to the documentation quoted by neologix, but I hope that we 
can expect it to be true on other Unix platforms too.

--
Added file: http://bugs.python.org/file38164/at_fork-3.patch




[issue21998] asyncio: support fork

2015-02-17 Thread STINNER Victor

STINNER Victor added the comment:

How do other event loops handle fork? Twisted, Tornado, libuv, libev,
libevent, etc.

--




[issue21998] asyncio: support fork

2015-02-17 Thread Martin Richard

Martin Richard added the comment:

I read the patch; it looks good to me for Python 3.5. It will (obviously) not 
work with Python 3.4, since self._selector won't have an _at_fork() method.

I ran the tests on my project with Python 3.5a1 and the patch; it seems to work 
as expected, i.e. when I close the loop of the parent process in the child, it 
does not affect the parent.

I don't have a case where the loop of the parent is still used in the child, 
though.

--




[issue21998] asyncio: support fork

2015-02-15 Thread STINNER Victor

Changes by STINNER Victor victor.stin...@gmail.com:


--
title: asyncio: a new self-pipe should be created in the child process after 
fork -> asyncio: support fork




[issue21998] asyncio: support fork

2015-02-15 Thread STINNER Victor

STINNER Victor added the comment:

Can someone review at_fork-2.patch?

Martin: Can you please try my patch?

--
