On 9/17/06, Ivan Krstić <[EMAIL PROTECTED]> wrote:

At present, the Python approach to multi-processing sounds a bit like
"let's stick our collective hands in the sand and pretend there's no
problem". In particular, one oft-parroted argument says that it's not
worth changing or optimizing the language for the few people who can
afford SMP hardware. In the meantime, dual-core laptops are becoming the
standard, with Intel predicting quad-core will become mainstream in the
next few years, and the number of server orders for single-core, UP
machines is plummeting.
 
I agree with you, Ivan.
 
Even though I won't contribute code or even a design to the solution (concurrency isn't my area of expertise, and I'm still working on encodings stuff), I think there would be value in saying: "There's a big problem here and we intend to fix it in Python 3000."
 
When you state baldly that something is a problem, you encourage the community to experiment with and debate solutions. But I have been in the audience at Python conferences where the majority opinion was that Python had no problem with multi-processor apps because you could just roll your own IPC on top of processes.
 
If you have to roll your own, that's a problem. If you have to select between five solutions with really subtle tradeoffs, that's a problem too.
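To make the "roll your own" point concrete, here is roughly what every project ends up writing for itself: a hand-built protocol of forked processes, pipes, and pickled objects. This is a minimal sketch in modern terms (Unix-only, since it uses os.fork), not anything the stdlib provides today:

```python
import os
import pickle
import struct

def send_obj(fd, obj):
    """Length-prefix a pickled object so the reader knows where it ends."""
    data = pickle.dumps(obj)
    os.write(fd, struct.pack(">I", len(data)) + data)

def recv_exact(fd, n):
    """Read exactly n bytes, looping because os.read may return short."""
    buf = b""
    while len(buf) < n:
        chunk = os.read(fd, n - len(buf))
        if not chunk:
            raise EOFError("peer closed the pipe")
        buf += chunk
    return buf

def recv_obj(fd):
    (length,) = struct.unpack(">I", recv_exact(fd, 4))
    return pickle.loads(recv_exact(fd, length))

# Fork a worker, have it send one result back, and collect it.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                  # child: compute and send a result
    os.close(r)
    send_obj(w, {"squares": [n * n for n in range(5)]})
    os._exit(0)
os.close(w)                   # parent: read the child's answer
result = recv_obj(r)
os.waitpid(pid, 0)
print(result["squares"])      # [0, 1, 4, 9, 16]
```

Every detail here (framing, short reads, fd lifetime, child reaping) is a place to get it subtly wrong, which is exactly the burden being pushed onto users.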
 
Ivan: why don't you write a PEP about this?

 
* Bite the bullet; write and support a stdlib SHM primitive that works
wherever possible, and simply doesn't work on completely broken
platforms (I understand Windows falls into this category). Utilize it in
a lightweight fork-and-coordinate wrapper provided in the stdlib.
 
Such a low-level approach will not fly, not just because of Windows but also because of Jython and IronPython. But maybe I misunderstand it in general. Python does not really have an abstraction as low-level as "memory", and I don't see why we would want to add one.
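For what it's worth, the closest thing Python can express along these lines is an anonymous mmap shared between a parent and a forked child. A minimal sketch, assuming a Unix platform (os.fork) and sidestepping synchronization entirely, which is exactly the hard part such a primitive would have to solve:

```python
import mmap
import os
import struct

# Anonymous mapping: mmap(-1, ...) on Unix is a MAP_SHARED page,
# so a forked child sees the same bytes as the parent.
shared = mmap.mmap(-1, 8)

pid = os.fork()
if pid == 0:
    # child: write a value into the shared page and exit
    shared[:8] = struct.pack(">Q", 42)
    os._exit(0)

# parent: waitpid doubles as our only "synchronization"
os.waitpid(pid, 0)
(value,) = struct.unpack(">Q", shared[:8])
print(value)  # 42
```

Note that this shares raw bytes, not Python objects; bridging that gap (and doing it on Jython and IronPython) is where the proposal gets hard.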

* Introduce microthreads, declare that Python endorses Erlang's
no-sharing approach to concurrency, and incorporate something like
candygram into the stdlib.

* Introduce a fork-and-coordinate wrapper in the stdlib, and declare
that we're simply not going to support the use case that requires
sharing (as opposed to merely passing) objects between processes.
 
I'm confused on a few levels.
 
 1. "No sharing" seems to be a feature of both of these options, but the wording you use to describe it is very different.
 
 2. You're conflating API and implementation in a manner that is unclear to me. Why are microthreads important to the Erlang model and what would the API for fork-and-coordinate look like?
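For the Erlang model, the key idea is that processes share nothing and interact only through mailboxes; microthreads matter because Erlang makes such processes cheap enough to spawn by the thousand. A rough sketch of the mailbox style using ordinary threads and a Queue (purely illustrative; candygram's actual API differs):

```python
import queue
import threading

class Actor:
    """A tiny Erlang-flavoured actor: a mailbox plus a receive loop.

    Actors share no state; they interact only by sending messages.
    """

    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, args=(handler,))
        self._thread.start()

    def send(self, msg):
        self.mailbox.put(msg)

    def _run(self, handler):
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                break
            handler(msg)

    def join(self):
        self.send("stop")
        self._thread.join()

# Replies also travel by message, never by shared mutation.
replies = queue.Queue()
doubler = Actor(lambda n: replies.put(n * 2))
for n in (1, 2, 3):
    doubler.send(n)
doubler.join()
out = [replies.get() for _ in range(3)]
print(out)  # [2, 4, 6]
```

Because no state is shared, nothing here depends on threads specifically: the same API could sit on top of microthreads or OS processes, which is what makes the model attractive.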
 
Since you are fired up about this now, would you consider writing a PEP that at least outlines the problem persuasively and champions one of the (feasible) options? This issue has been discussed for more than a decade, and the artifacts of previous discussions can be quite hard to find.

 Paul Prescod
 
_______________________________________________
Python-3000 mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-3000
Unsubscribe: 
http://mail.python.org/mailman/options/python-3000/archive%40mail-archive.com