On Nov 11, 2009, at 9:35 PM, Richard Tew wrote:
They have a nice example where they chain 100,000 microthreads, each
wrapping the same function that increases the value of a passed
argument by one, with channels in between.  Pumping a value through the
chain takes 1.5 seconds.  I can't imagine that Stackless will be
anything close to that, given the difference between scripting and
compiled code.

Did I mess up in my benchmark? I got about 0.07 seconds per value to go through 100,000 microthreads (0.714 s for the 10 values, as shown below).

output:

% spython 100000_tasklets.py
100000
100001
100002
100003
100004
100005
100006
100007
100008
100009
startup time: 0.345
10 elements in: 0.714



Benchmark:


import stackless

def source(ch):
    # Push ten integers into the head of the chain, then close the channel.
    ch.send_sequence(range(10))
    ch.close()

def chain(up, down):
    # Receive each value from upstream, add one, and pass it downstream.
    for item in up:
        down.send(item+1)
    down.close()

def sink(up):
    # Drain the tail of the chain and print each result.
    for item in up:
        print item

def startup():
    # Build the pipeline: source -> 100,000 chain tasklets -> sink,
    # with a channel between each pair of neighbours.
    down = stackless.channel()
    stackless.tasklet(source)(down)

    for i in range(100000):
        up = down
        down = stackless.channel()
        stackless.tasklet(chain)(up, down)

    stackless.tasklet(sink)(down)

import time
t1 = time.time()
startup()        # create all the tasklets and channels
t2 = time.time()
stackless.run()  # run the scheduler until the pipeline drains
t3 = time.time()
print "startup time:", "%.3f" % (t2-t1)
print "10 elements in:", "%.3f" % (t3-t2)


                                Andrew
                                [email protected]


