Yoav Goldberg wrote:

On Sun, Nov 15, 2009 at 12:10 AM, Terry Reedy <tjre...@udel.edu <mailto:tjre...@udel.edu>> wrote:

    Paul Rubin wrote:

        Mark Chu-Carroll has a new post about Go:

         
http://scienceblogs.com/goodmath/2009/11/the_go_i_forgot_concurrency_an.php


    In a couple of minutes, I wrote his toy prime filter example in
    Python, mostly from the text rather than the code, which I can
    barely stand to read. It ran the first time without error.
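Roughly like this, that is (a sketch of the same chained-filter idea, with names of my own choosing rather than from the original code):

```python
import itertools

def integers_from(n):
    """Yield n, n+1, n+2, ... forever."""
    while True:
        yield n
        n += 1

def filter_multiples(stream, prime):
    """Pass through only the values not divisible by prime."""
    for n in stream:
        if n % prime:
            yield n

def primes():
    """Chain a new filter onto the stream for each prime found."""
    stream = integers_from(2)
    while True:
        prime = next(stream)
        yield prime
        stream = filter_multiples(stream, prime)

print(list(itertools.islice(primes(), 10)))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Each generator here is lazy: nothing computes until the caller asks for the next prime.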


Yes, but the cool thing about the Go version is that it does each generator in a different thread, so in theory it could run twice as fast on a multi-core machine.

Which is why I added that, in my opinion, "It would be much better, for instance, to tweak Python, which it has had great success with, so that it runs better on multiple cores."

For instance, add a new keyword 'go' such that

go def f(): yield 1

runs the generator in a different thread, possibly on a different core.

To go further, restrict Python's dynamism, require 3.x annotations, and call the result GoPython ;-).

Actually, it is already possible to post-process code objects to, in effect, remove many dynamic assumptions:
http://code.activestate.com/recipes/277940/

Perhaps, with a different implementation, not even a keyword is needed, just a built-in decorator:

@go
def f(): yield 1

The go decorator would replace f with a wrapper that runs instances gf of f in threads (or whatever), calls next(gf) immediately for parallel operation, and caches the first yielded value until the calling function calls next(wrapper) to retrieve it.
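Something like this, perhaps (a sketch only -- the queue size, sentinel, and names are my own assumptions, and exceptions raised in the worker thread are not propagated):

```python
import threading
import queue

def go(genfunc, _maxahead=1):
    """Run genfunc's body in a worker thread, computing values eagerly."""
    def wrapper(*args, **kwargs):
        q = queue.Queue(maxsize=_maxahead)  # cache of pre-computed values
        _done = object()                    # sentinel marking exhaustion

        def worker():
            for value in genfunc(*args, **kwargs):
                q.put(value)                # blocks once the cache is full
            q.put(_done)

        # Start producing immediately, before the caller asks for anything.
        threading.Thread(target=worker, daemon=True).start()

        def consume():
            while True:
                value = q.get()
                if value is _done:
                    return
                yield value
        return consume()
    return wrapper

@go
def f():
    yield 1

print(list(f()))  # [1]
```

Because wrapper starts the thread at call time rather than at the first next(), the first value is computed and cached in parallel with whatever the caller does next.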

It seems to me that generators are already 'channels' that connect the calling code to the __next__ method, a semi-coroutine based on the body of the generator function. At present, the next method waits until an object is requested. Then it goes into action, yields an object, and rests again. For parallel operations, we need eager, anticipatory evaluation that produces things that *will* be needed rather than lazy evaluation of things that *are* needed and whose absence is holding up everything else.

I see no reason why we cannot have that with Python. I am not even sure we cannot have it with CPython, but I am not familiar enough with threads, processes, and CPython internals to say.

Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list
