Hi Arnar,

You said, "There have been discussions here occasionally about something
generic, like wrapping libevent or similar in an interface that "looks"
synchronous but in the background does async I/O and uses channels to make
it look synchronous." That is of course what this does. Points:
 - "There have been discussions" implies a blocking point ;-) This is a
design that is complete. simple, and proven, and could be put in the virtual
machine tomorrow.
 - The notion of "asynchronous IO" is vague and, in the typical manner,
therefore threatens to complicate matters with a huge palette (your
"myriad") of explicit tools that work at cross-purposes. By contrast, the
first phase of this design (Channel Communication plus Hardware Select)
would result in NO VISIBLE CHANGE, no libraries - just a clearing of the
"blocking" logjam and freedom for other tasklets to run efficiently while
the hardware wait is going on, just as David wanted.
 - There is also no visible change if you add the second phase, "Timer":
the sleeping tasklet (or several simultaneously sleeping tasklets) would
take no blocking time on the round robin; this too is sketched after the
list. (Your time.sleep code indicates this is not currently the case - am
I right, or does Py_BEGIN_ALLOW_THREADS set up a separate queue?)
 - The complication added by these phases is in the virtual machine only: a
couple of new queues (channel and timer) in addition to Stackless's round
robin scheduler. The channel queues have maximum length 1. I am assuming
that "tasklet" = "process" in the sense I defined in the note (which
certainly appears to be the case, with the examples you gave). I am also
assuming there is some memory dedicated to the tasklet for its whole life
(but I can't imagine anything being "stackless" without that!).
 - The only thing that would create a new visible interpreted-code option
would be the last phase, the user-level ALT or select. And that could be
done in any way people like: for instance, as a standard Unix-like select
call, or a transputer-like ALT branching on readiness of IO channels
(sketched after the list as well). In my experience, that is the central
key to any serious parallel coding (changing disorder into order, as it
were).
 - And all without a myriad: the only other such structure I've ever found
useful is a simple binary semaphore (which works just like the timer
queue, and avoids the ALT/select overhead in a common case); a sketch of
that follows the list too. It could be added later.
 - The only question is whether something about the object structure forbids
input from several channels in one tasklet.
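
To make the first phase concrete, here is a rough Python-level sketch of
the pattern I mean - only an illustration, since the real thing would live
inside the virtual machine rather than in interpreted code. It assumes
Stackless's channel and tasklet calls plus the standard select module; the
names io_wait and io_loop are mine, not anything that exists today.

    # Sketch only: in the real design this loop would sit inside the VM.
    import select
    import stackless

    _waiting = {}   # fd -> channel of the single tasklet blocked on that fd

    def io_wait(fd):
        """Looks synchronous to the caller: park until fd is readable."""
        ch = stackless.channel()
        _waiting[fd] = ch
        ch.receive()    # only this tasklet blocks; the round robin keeps turning

    def io_loop():
        """The hidden Hardware Select: one select() call serves every waiter."""
        while True:
            if _waiting:
                ready, _, _ = select.select(list(_waiting), [], [], 0)
                for fd in ready:
                    _waiting.pop(fd).send(None)   # wake exactly the tasklet that asked
            stackless.schedule()                  # give the other tasklets their turn

A tasklet calling io_wait(fd) sees an ordinary blocking call; everything
asynchronous is hidden behind the one select.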
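
The Timer phase looks much the same - again a sketch in interpreted Python
of what would really be a queue in the VM, using the same Stackless calls;
tasklet_sleep and timer_loop are illustrative names only.

    import heapq
    import itertools
    import time
    import stackless

    _timers = []               # heap of (deadline, sequence, channel)
    _seq = itertools.count()   # tie-breaker so equal deadlines never compare channels

    def tasklet_sleep(seconds):
        """Sleep without taking any blocking time on the round robin."""
        ch = stackless.channel()
        heapq.heappush(_timers, (time.time() + seconds, next(_seq), ch))
        ch.receive()           # park here until the timer queue wakes us

    def timer_loop():
        """Wake every tasklet whose deadline has passed, then yield."""
        while True:
            now = time.time()
            while _timers and _timers[0][0] <= now:
                _, _, ch = heapq.heappop(_timers)
                ch.send(None)
            stackless.schedule()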
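
For the user-level ALT or select, a crude picture of the call shape - a
polling approximation only (a proper version would block rather than
poll), written against Stackless's channel.balance attribute; "alt" is
just my name for it.

    import stackless

    def alt(*channels):
        """Return (channel, value) for the first channel with a sender ready."""
        while True:
            for ch in channels:
                if ch.balance > 0:   # a sender is already committed on this channel
                    return ch, ch.receive()
            stackless.schedule()     # nothing ready yet; let the others run

User code would then say something like "which, value = alt(c1, c2)" and
branch on which channel fired - the transputer ALT in miniature.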
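
And the binary semaphore really is tiny once channels exist - a sketch
using only the standard Stackless channel operations (the class is
illustrative, not an existing API).

    import stackless

    class BinarySemaphore:
        """Park waiters on one channel; release hands the token to one of them."""
        def __init__(self):
            self._free = True
            self._waiters = stackless.channel()

        def acquire(self):
            if self._free:
                self._free = False       # took the token without blocking
            else:
                self._waiters.receive()  # park until a release hands it over

        def release(self):
            if self._waiters.balance < 0:   # at least one tasklet is parked
                self._waiters.send(None)    # wake exactly one; the token passes directly
            else:
                self._free = True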

Larry


On 10/9/08, Arnar Birgisson <[EMAIL PROTECTED]> wrote:
>
> Hi Larry,
>
> On Fri, Oct 10, 2008 at 00:54, Larry Dickson <[EMAIL PROTECTED]>
> wrote:
> > On Fri, Sep 26 17:09:54 CEST 2008, Arnar Birgisson <arnarbi at gmail.com
> >
> >> > Surely there is a way around this?  Some kind of pooling select?  If
> >> > there is
> >> > no work around then I cannot see too much practical use for my thread
> >> > library
> >> > [except having to avoid learning tasklets for someone who is familiar
> >> > with
> >> > threads].  As I understand it, due to the GIL the only real practical
> >> > use for
> >> > threads is if one has blocking function calls (IO-type, etc)
> >>
> >> The solution would be asynchronous I/O. There have been discussions
> >> here occasionally about something generic, like wrapping libevent or
> >> similar in an interface that "looks" synchronous but in the background
> >> does async I/O and uses channels to make it look synchronous. I figure
> >> such a thing would be an excellent component of your thread library.
> >>
> >> > [Has the GIL restriction been fixed in 3k?  As far as I know Jython
> >> > does not have this limitation...]
> >>
> >> The GIL has not been removed in Py 3.0, nor will it be removed any
> >> time soon. Jython does not have such a thing.
> >
> > This design solves all these problems, using only C/Unix select (which you
> > pointed out is already used to do time.sleep) in the virtual machine; and
> > it runs in only one thread. There is no need to remove the GIL.
>
> What problems? If you read my message you can clearly see that I
> pointed to a _solution_, namely asynchronous I/O. :) What I meant in
> my last message is that I don't see how your suggestion improves on
> the myriad of async solutions out there already (which are based on
> select, poll, epoll, Windows ASIO, etc).
>
> cheers,
> Arnar
>
