I'd like to use multiple CPU cores for selected time-consuming Python 
computations (incl. numpy/scipy) in a frictionless manner.

Interprocess communication is tedious and out of the question, so I thought 
about simply using more Python interpreter instances (Py_NewInterpreter), each 
with its own GIL, in the same process.
I expect to be able to directly push Python object trees back and forth between 
the 2 (or more) interpreters by doing some careful locking.
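Roughly, I imagine the C-API side to look something like this - just a minimal 
sketch of the Py_NewInterpreter / Py_EndInterpreter calls I have in mind, with 
no object sharing or locking yet:

    #include <Python.h>

    int main(void)
    {
        Py_Initialize();

        /* remember the main interpreter's thread state */
        PyThreadState *main_ts = PyThreadState_Get();

        /* create a second interpreter; it becomes the current one */
        PyThreadState *sub_ts = Py_NewInterpreter();
        if (sub_ts == NULL) {
            Py_Finalize();
            return 1;
        }

        /* this runs inside the second interpreter */
        PyRun_SimpleString("print('hello from the second interpreter')");

        /* tear the second interpreter down and switch back to the main one */
        Py_EndInterpreter(sub_ts);
        PyThreadState_Swap(main_ts);

        Py_Finalize();
        return 0;
    }

That only shows creating and destroying the extra interpreter; the part I'm 
unsure about is handing object trees created in one interpreter over to the 
other, and what locking that would need.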

Any hope of getting this to work? If so, what are the main dangers? Is there an 
example or guideline around for that task - using ctypes or so?

Or is there even a ready-made Python module which makes it easy to set up and 
deal with extra interpreter instances? 
If not, would it be an idea to create such a thing in the Python standard 
library to make Python multi-processor-ready? I guess Python will always have a 
GIL - otherwise it would lose a lot of the comfort of threaded programming.


robert
