On 04/04/2011 03:04 PM, Stefan Behnel wrote:
That's what I thought, yes. It looks unexpected, sure. That's the
clear advantage of using inner functions, which do not add anything
new at all. But if we want to add something that looks more like a
loop, we should at least make it behave like something that's easy to
explain.
Sorry for not taking the opportunity to articulate my scepticism in
the workshop discussion. Skimming through the CEP now, I think this
feature adds quite a bit of complexity to the language, and I'm not
sure it is worth it when compared to the existing closures. The
equivalent closure+decorator syntax is certainly easier to explain,
could translate into exactly the same code, and has the clear
advantage that the scope of local, nonlocal and thread-configuring
variables is immediately obvious.
Basically, your example would become
    def f(np.ndarray[double] x, double alpha):
        cdef double s = 0

        with cython.nogil:
            @cython.run_parallel_for_loop( range(x.shape[0]) )
            cdef threaded_loop(i):  # 'nogil' is inherited
                cdef double tmp = alpha * i
                nonlocal s
                s += x[i] * tmp

        s += alpha * (x.shape[0] - 1)
        return s
We likely agree that this is not beautiful. It's also harder to
implement than a "simple" for-in-prange loop. But I find it at least
easier to explain and semantically 'obvious'. And it would allow us to
write a pure mode implementation for this based on the threading module.
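For concreteness, here is a minimal pure Python sketch of what such a
decorator could look like on top of the threading module. The name
run_parallel_for_loop is taken from the example above; the chunking
scheme, thread count and all other details are illustrative
assumptions only, not anything the CEP specifies:

    import threading

    def run_parallel_for_loop(iterable, num_threads=4):
        # Hypothetical pure Python stand-in for the decorator in the
        # example above.  Applying it runs the decorated loop body once
        # per index, distributed over num_threads threads.
        def decorator(body):
            items = list(iterable)
            # One contiguous chunk of the iteration space per thread.
            chunk = (len(items) + num_threads - 1) // num_threads

            def run_chunk(part):
                for i in part:
                    body(i)

            threads = [threading.Thread(target=run_chunk,
                                        args=(items[t * chunk:(t + 1) * chunk],))
                       for t in range(num_threads)]
            for thread in threads:
                thread.start()
            for thread in threads:
                thread.join()
            return body
        return decorator

Note that this naive sketch does nothing clever about the shared
reduction variable "s" in the example; see the clarification below.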
Short clarification on this example: There is still magic going on here
in the reduction variable -- one must have a version of "s" for each
thread, and then reduce at the end.
(Stefan: I realize that you may know this; I'm just making sure
everything is stated clearly in this discussion.)
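To spell that out in plain Python with the threading module, here is a
rough sketch of the per-thread copies of "s" and the final reduce. The
function and variable names are purely illustrative, and only the loop
part of the example above is mirrored:

    import threading

    def parallel_weighted_sum(x, alpha, num_threads=4):
        partial = [0.0] * num_threads   # one private accumulator per thread
        chunk = (len(x) + num_threads - 1) // num_threads

        def work(tid):
            local_s = 0.0               # this thread's own version of "s"
            start = tid * chunk
            stop = min(start + chunk, len(x))
            for i in range(start, stop):
                tmp = alpha * i
                local_s += x[i] * tmp
            partial[tid] = local_s      # publish this thread's result

        threads = [threading.Thread(target=work, args=(t,))
                   for t in range(num_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return sum(partial)             # reduce the per-thread results at the end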
Dag Sverre