Richard,

thanks a lot for your lightning fast answer!

> Stackless is a microthreading solution.  It is not a scalability
> solution in and of itself.

I guess the elegance Stackless brings to microthreads, i.e. tasklets, makes it easy to mistake it for something that it is not: a distributed design the way Erlang is at its core.

That is puzzling when approaching Stackless today, in an increasingly multi-core world. I simply, and wrongly, assumed that the microthreading architecture would span multiple cores. But it does not, which was the heart of my question about its 'nature'. It can be extended, though.

But regardless of how much the GIL (1) may or may not be an additional obstacle, the immediate reach of Stackless' tasklets seems to be one process, and there is no transparency across system processes, cores and boxes (as Erlang features). Is this correct?

> You can take the basic
> functionality and build up your own framework around this.  Tired of
> callbacks?  Make a function that wraps an asynchronous operation in a
> channel and whatever calls it will just read as a synchronous call.
> Of course, a programmer needs to be aware of the effect of blocking
> and when blocking might happen on the code they write, but in practice
> this is rarely much of a concern.
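If I read that right, the wrapper would look roughly like this minimal sketch, where `start_async_op(on_done)` is just a made-up stand-in for whatever callback-based API is being wrapped:

import stackless

def call_synchronously(start_async_op):
    """Hypothetical wrapper: turn a callback-style operation into a
    blocking call for the calling tasklet."""
    result_channel = stackless.channel()
    # Kick off the operation; when it completes, the callback pushes the
    # result into the channel from whichever tasklet finishes it.
    start_async_op(lambda result: result_channel.send(result))
    # Only this tasklet blocks here -- the scheduler keeps running all
    # other tasklets until the result arrives.
    return result_channel.receive()

Is that more or less the pattern you use, give or take error handling?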

Could I do this if I left single core behind ... ? To my eye that is part of 
the advantage you achieved with the very clear architectural decisions you 
opted for with EVE. The more flexible and complex approaches you had referred 
to might have turned out far more complicated in this regard.

> Stackless has a scheduler which runs on a real thread, and
> all microthreads created on that thread are run within that scheduler.
> You can have multiple threads each running their own scheduler, with
> their own tasklets running within them.
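Just to check my understanding of that, I picture something like this sketch, with two OS threads that each run their own scheduler and their own tasklets:

import threading
import stackless

def work(thread_name, number):
    # Runs as a tasklet inside the scheduler of the thread that created it.
    print("%s: tasklet %d" % (thread_name, number))

def run_scheduler(thread_name):
    # Tasklets created on this thread belong to this thread's scheduler.
    for number in range(3):
        stackless.tasklet(work)(thread_name, number)
    stackless.run()

threads = [threading.Thread(target=run_scheduler, args=("thread-%d" % n,))
           for n in range(2)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()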

Can channels reach out of their interpreter/scheduler? Or can a Stackless 
interpreter run across multiple cores, or even blades? Are there modules or 
extensions that provide for this, or for transparency in this regard?

That pickling works even across diverse OSes is an exciting feature (2). And I am still working to get my head around what happens to state when sending tasklets over to another box (2). It doesn't look quite trivial.
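My current, purely local mental model of what gets captured is the sketch below; the dumps/loads pair merely stands in for shipping the bytes to another box, and I'm assuming a pure-Python, soft-switchable tasklet:

import pickle
import stackless

def counter(limit):
    n = 0
    while n < limit:
        n += 1
        # Pause: remove this tasklet from the runnables queue so the main
        # tasklet regains control and can pickle us in mid-loop.
        stackless.schedule_remove()

paused = stackless.tasklet(counter)(10)
stackless.run()                 # runs until counter() pauses itself

blob = pickle.dumps(paused)     # captures the call stack and locals (n, limit)
restored = pickle.loads(blob)   # in principle this happens on the other box
restored.insert()               # put it back on (that node's) scheduler
stackless.run()                 # resumes right after schedule_remove()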

But is pickling fast enough to do more interactive things than load balancing (e.g. moving complete solar systems off to a different blade because it has better hardware, or because the current blade has more than one solar system mounted)? Is it fast enough to completely distribute entities?

That is what I can't yet fathom about Erlang: how its paradigm of not sharing state may work well for telecoms but not for games. Since that virtue is achieved by taking the liberty away from the programmer, it could be replicated by discipline in other languages. But Erlang's language-inherent features would have to be coded in Python almost every time they came into play, making the source more complicated and losing readability.
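By 'discipline' I mean something along these lines: every tasklet owns its state exclusively, and other tasklets can only send it messages over a channel, never touch the data directly (a sketch, nothing more):

import stackless

def accumulator(inbox, replies):
    # The running total lives only in this tasklet; nobody else can see it.
    total = 0
    while True:
        value = inbox.receive()
        if value is None:            # sentinel: report the result and stop
            replies.send(total)
            return
        total += value

def producer(inbox, replies):
    for value in (1, 2, 3):
        inbox.send(value)            # message passing instead of shared state
    inbox.send(None)
    print("total: %d" % replies.receive())

inbox, replies = stackless.channel(), stackless.channel()
stackless.tasklet(accumulator)(inbox, replies)
stackless.tasklet(producer)(inbox, replies)
stackless.run()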

Yet I'm not a Pythonista in this regard and don't hold readability to be decisive. With regard to multi-core processing I am mostly looking at:

- performance, the predictability of performance, and what might turn out to be 'out of reach' for optimisation because it is hardwired into the compiler/interpreter.

- where state is physically located, how its mobility is managed, and whether sharing state between microthreads across multiple blades can work in any scenario for near-realtime calculations.

> So, given you are willing to take the time to write a framework to
> take care of it, this allows you to move running logic to other
> 'nodes', to be resumed there.
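For my own orientation, the smallest version of such a framework I can imagine is sketched below; the helpers are hypothetical, there is no error handling, and the receiving node would need the same code importable for unpickling to work:

import pickle
import socket
import struct
import stackless

def ship_tasklet(paused_tasklet, host, port):
    """Serialize a paused tasklet and push it to another node (sketch)."""
    blob = pickle.dumps(paused_tasklet)
    connection = socket.create_connection((host, port))
    connection.sendall(struct.pack("!I", len(blob)) + blob)
    connection.close()

def resume_shipped_tasklet(listen_port):
    """On the receiving node: accept one tasklet and resume it locally."""
    server = socket.socket()
    server.bind(("", listen_port))
    server.listen(1)
    connection, _ = server.accept()
    (size,) = struct.unpack("!I", connection.recv(4))
    blob = b""
    while len(blob) < size:
        blob += connection.recv(size - len(blob))
    restored = pickle.loads(blob)
    restored.insert()       # reschedule on this node's scheduler
    stackless.run()         # the logic continues here where it left off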

To arrive at maximum control it may be best to code a fitting custom implementation. Given the complexity of the topic, though, I'm surely best advised to learn more by using what's already there first. I even expect to learn that what I am looking for simply can't work, specifically with regard to cross-blade state access. And I am far from fluent enough in Python to write a concurrency framework on my own. Does Pyro (3) work with Stackless? Does the processing module (4) work across multiple blades?

I am looking for these answers myself but maybe someone can add some information about them in the light of this discussion.

Thanks,
Henning

(1) GIL:  http://www.stackless.com/pipermail/stackless/2008-June/003546.html
         http://www.grouplens.org/node/244#comment-2493

(2) Pickling across OSes:
         http://holdenweb.blogspot.com/2006/11/stackless-python-continues-to-amaze.html
    Pickling and state:
         http://www.stackless.com/wiki/Pickling

(3) Pyro: http://pyro.sourceforge.net/

(4) Processing module (currently 0.52):
         http://pypi.python.org/pypi/processing



