Thanks, Mark. Responses are in-line below. -eric
On Wed, Apr 29, 2020 at 6:08 AM Mark Shannon <m...@hotpy.org> wrote:
> You can already do CSP with multiprocessing, plus you get true parallelism.
> The question the PEP needs to answer is "what do sub-interpreters offer
> that other forms of concurrency don't offer".
>
> https://gist.github.com/markshannon/79cace3656b40e21b7021504daee950c
>
> This table summarizes the core features of various approaches to
> concurrency and compares them to "ideal" CSP. There are a lot of question
> marks in the PEP 554 column. The PEP needs to address those.
>
> As it stands, multiprocessing is a better fit for CSP than PEP 554.
>
> IMO, sub-interpreters only become a useful option for concurrency if
> they allow true parallelism and are not much more expensive than threads.

While I have a different opinion here, especially if we consider
trajectory, I really want to keep the discussion focused on the proposed
API in the PEP. Honestly, I'm considering taking up the recommendation to
add a new PEP about making subinterpreters official. I never meant for
that to be more than a minor point for PEP 554.

> > I think we can as well, but I'd like to hear more about what obstacles
> > you think we might run into.
>
> As an example, accessing common objects like `None` and `int` will need
> extra indirection.
> That *might* be an acceptable cost, or it might not. We don't know.

ack

> I can't tell you about the unknown unknowns :)

:)

> > I'm not sure I understand your objection. If a user calls the
> > function then they get a list. If that list becomes outdated in the
> > next minute or the next millisecond, it does not impact the utility of
> > having that list. For example, without that list how would one make
> > sure all other interpreters have been destroyed?
>
> Do you not see the contradiction?
> You say that it's OK if the list is outdated immediately, and then ask
> how one would make sure all other interpreters have been destroyed.
>
> With true parallelism, the list could be out of date before it is even
> completed.

I don't see why that would be a problem in practice. Folks already have
to deal with that situation in many other venues in Python (e.g.
threading.enumerate()). Not having the list at all would be more painful.

> > So "close" aligns with other similarly purposed methods out there,
> > while "finalize" aligns with the existing C-API and also elevates the
> > complex nature of what happens. If we change the name from "destroy"
> > then I'd lean toward "finalize".
>
> I don't see why C-API naming conventions would take precedence over
> Python naming conventions for naming a Python method.

Naming conventions aren't as important if we focus just on communicating
intent. Maybe it's just me, but "close" does not reflect the complexity
that "finalize" does. Regardless, if it is called "close" then folks can
use contextlib.closing() with it. That's enough to sell me on it.

> OK, let me rephrase. What does "is_shareable()" do?
> Is `None` shareable? What about `int`?

It's up to the Python implementation to decide if something is shareable
or not. In the case of CPython, PEP 554 says: "Initially this will
include None, bytes, str, int, and channels. Further types may be
supported later."

> It's not an implementation detail. The user needs to know the *exact*
> set of objects that can be communicated. Using marshal or pickle
> provides that information.

The point of is_shareable() is to expose that information, though not as
a list. Why would users want the full list? It could be huge, BTW. If
you are talking about documentation, then yeah, we would definitely
document which types CPython considers shareable.
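To illustrate the point about staleness: the snapshot semantics proposed
for listing interpreters are the same ones threading users already live
with via threading.enumerate(). This is just an analogy with today's
stdlib, not PEP 554 code; the snapshot can be stale the instant it is
returned, yet it is still useful for exactly the "wait for everything
else" use case.

```python
import threading
import time

def worker():
    time.sleep(0.1)

# Start a few threads, then take a point-in-time snapshot of live ones.
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

# threading.enumerate() returns a snapshot; threads may start or finish
# immediately after the call, so the list can be out of date right away.
snapshot = threading.enumerate()

# The snapshot is still useful, e.g. for waiting on everything else:
for t in snapshot:
    if t is not threading.current_thread():
        t.join()
```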
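On the contextlib.closing() point: anything with a close() method works
with it, which is the appeal of that name. A minimal sketch, using a
made-up FakeInterpreter stand-in (the real Interpreter object from the
PEP would finalize the subinterpreter instead of setting a flag):

```python
from contextlib import closing

class FakeInterpreter:
    """Hypothetical stand-in; only close() matters for this example."""
    def __init__(self):
        self.closed = False

    def close(self):
        # In the real proposal this would finalize the subinterpreter.
        self.closed = True

# contextlib.closing() calls close() on exit, even on error.
with closing(FakeInterpreter()) as interp:
    assert not interp.closed
assert interp.closed
```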
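For the is_shareable() discussion, a toy predicate shows the shape of the
API without pretending to be CPython's implementation. The type set below
is the one PEP 554 names as initially shareable (channels omitted, since
the channel types aren't available outside the proposed module):

```python
# Illustrative only: the actual check lives in the implementation and
# may consider more than an object's type.
SHAREABLE_TYPES = (type(None), bytes, str, int)

def is_shareable(obj):
    """Return True if obj could be sent between interpreters."""
    return isinstance(obj, SHAREABLE_TYPES)
```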
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at https://mail.python.org/archives/list/python-dev@python.org/message/IEMXNKSOZT23OEXFWF3VNJMYSRV7OCUU/
Code of Conduct: http://python.org/psf/codeofconduct/