Chris Lee wrote:

On Jun 15, 2008, at 10:33 PM, Jeff Senn wrote:


On Jun 15, 2008, at 3:57 PM, Simon Pickles wrote:

Hi NRB,

Neutral Robot Boy wrote:
Alright, so I'm still in 'beginner' mode with Stackless here. I did a bit of reading which suggested that Stackless should be able to distribute processing across multiple cores without trouble, so I decided to write a really simple script and look at how much of a load it puts on my CPU.
The Stackless scheduler, which you activate by calling stackless.run(), runs in only one thread. Each tasklet is added to that scheduler and called in turn; no other core will be used.
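
For example, something like this (a minimal sketch, assuming a Stackless Python build) interleaves two tasklets, and everything happens in the single thread that calls stackless.run():

    import stackless

    def worker(name):
        for i in range(3):
            print("%s %d" % (name, i))
            stackless.schedule()   # hand control back to the scheduler

    # Calling the tasklet object binds its arguments and puts it on the
    # scheduler's queue.
    stackless.tasklet(worker)("a")
    stackless.tasklet(worker)("b")

    stackless.run()   # runs both tasklets, round-robin, in this one thread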

I suppose one should point out that this is not merely a limitation of Stackless; running schedulers in more than one thread, for example, won't even help.

Python itself, even using multiple native threads, can only make use of one core at a time due to the GIL (Global Interpreter Lock). If you are interested in the whys and wherefores, a search through the archives of this list (and/or Google) will turn up plenty of discussion.
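
A quick way to see it for yourself (plain CPython, no Stackless involved; the numbers are just illustrative):

    import threading, time

    def burn():
        # CPU-bound work: the GIL lets only one thread execute Python
        # bytecode at a time, so two of these running "in parallel"
        # take about as long as running the loop twice in one thread.
        total = 0
        for i in range(10 ** 7):
            total += i

    start = time.time()
    threads = [threading.Thread(target=burn) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("two threads took %.2fs" % (time.time() - start))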

-Jas

Yes indeed, I run simulation code which can benefit from as many cores and processors as are available. To achieve this in Python I use parallelpython, which acts as a job server and pickles the parameters, modules, and functions for use by a new instance of Python. Using this, I can pretty much use all the processing power available on the computer. It can even run across multiple machines, if I go to the trouble of setting up the permissions on each machine.
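
Roughly, the usage looks like this (a sketch from memory of the parallelpython API; square() is just a stand-in for the real simulation function):

    import pp

    def square(x):
        return x * x

    # The job server autodetects the number of local cores by default.
    job_server = pp.Server()

    # submit() pickles the function and its arguments and hands them to
    # a separate python worker process, so each job has its own GIL.
    jobs = [job_server.submit(square, (i,)) for i in range(8)]

    # Calling a job object waits for and returns its result.
    print([job() for job in jobs])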

Really? Eek, I had misunderstood the GIL, I think. So Carlos's example is multithreaded but not actually parallel?

That's bad for me. My server had several interpreters running 'concurrently', using Twisted's Perspective Broker (twisted.spread.pb) to communicate. I guess this model works for clusters but not for SMPs...
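
For reference, the shape of that setup is roughly the following (only a hedged sketch using twisted.spread.pb; the remote_step method name is made up, not taken from my actual server):

    # One server-side interpreter, run as its own process:
    from twisted.spread import pb
    from twisted.internet import reactor

    class SimNode(pb.Root):
        def remote_step(self, dt):    # hypothetical method name
            # ... advance this interpreter's share of the work ...
            return dt

    reactor.listenTCP(8800, pb.PBServerFactory(SimNode()))
    reactor.run()

    # A client process connects with pb.PBClientFactory, gets the root
    # object via factory.getRootObject(), and then calls
    #     node.callRemote("step", 0.01)
    # on it.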

eek again!

Si

