Well, actually, one usage scenario I had in mind was numpy and numerical code.
Think about the following code:

    for i in xrange(m):
        x[i] = y[i] + z[i]

In this case it is much better to have a parallel loop. First, if x, y, and z have proper type declarations, let's say they are ndarrays, then a parallel loop can easily be converted to C code with OpenMP. Since we are accessing ndarrays, and assuming the compiler knows how to turn that into direct array access, it can generate very efficient code, with no GIL acquire/release inside the loop. This scenario is not handled easily with the decorator approach. (A rough sketch of what such a typed loop could look like is included after the quoted thread below.)

thanks,
rahul

On Sat, Jun 21, 2008 at 1:02 PM, Gary Furnish <[EMAIL PROTECTED]> wrote:
> I think I agree... cython support for decorators would probably be the
> best approach.
>
> On Sat, Jun 21, 2008 at 11:51 AM, Dag Sverre Seljebotn
> <[EMAIL PROTECTED]> wrote:
> > rahul garg wrote:
> >> On Sat, Jun 21, 2008 at 12:28 PM, Stefan Behnel <[EMAIL PROTECTED]>
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> rahul garg wrote:
> >>> > I was thinking of providing a "prange" which defaults to xrange when
> >>> > running on the interpreter.
> >>> > The reason I like the prange construct is that we can easily add,
> >>> > let's say, thread-local variables or reduction variables:
> >>> >
> >>> >     for i in prange(n, threadlocal=[myvar1, myvar2], reduction=[red1, red2]):
> >>> >         # loop body
> >>>
> >>> You could easily do that with a
> >>>
> >>>     with thread_each(iterable, threadlocal=...):
> >>>         ...
> >>>
> >>> syntax, too, and IMHO it looks much better (minus a better name for
> >>> "thread_each" ;)
> >>>
> >>> And it might even be possible to support this in plain Python one day.
> >>
> >> Well, the problem is: how does it run as a loop on the interpreter?
> >> If it is to run as a loop on the interpreter, there must potentially be
> >> a loop statement somewhere.
> >> What about adapting it to:
> >>
> >>     for i in thread_each(iterator, threadlocal=...):
> >>
> >> This is uglier than "with thread_each()" but has a simple serial
> >> implementation. Thoughts?
> >
> > Jumping into this thread at a random spot...
> >
> > In dev1 days for SAGE a @parallel decorator for functions was
> > demonstrated. So:
> >
> >     P> @parallel
> >     P> def f(x): return 2*x
> >     ...
> >     P> f([1,2,3])
> >     [2, 4, 6]
> >
> > Where each of the multiplications would happen in parallel using
> > PyProcessing. However, I don't think any Cython support is really needed
> > for this (except for decorator support :-)).
> >
> > So: I like PyProcessing, but there's no need to build anything into Cython
> > for that. A parallel decorator could work for OpenMP, though (at least if
> > the function is also declared inline etc.).
> >
> > Dag Sverre
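To make the scenario above concrete: the construct discussed here is essentially the shape that a parallel range loop eventually took in Cython as cython.parallel.prange. The sketch below uses that later syntax; the function name add_arrays is illustrative, and it assumes the module is compiled with OpenMP enabled (e.g. -fopenmp).

    # cython: boundscheck=False, wraparound=False
    # Minimal sketch only; assumes the extension is built with OpenMP flags.
    from cython.parallel import prange

    def add_arrays(double[:] x, double[:] y, double[:] z):
        """x[i] = y[i] + z[i] over typed arrays, in parallel."""
        cdef Py_ssize_t i
        cdef Py_ssize_t m = x.shape[0]
        # Typed memoryview indexing compiles to direct C array access, so the
        # loop body runs without touching the GIL; prange maps the loop to an
        # OpenMP "parallel for" when OpenMP is enabled at build time.
        for i in prange(m, nogil=True):
            x[i] = y[i] + z[i]

Reduction variables, also mentioned in the quoted thread, are handled by prange by inferring them from in-place updates (e.g. total += x[i]) inside the loop body.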
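For contrast, the decorator approach from the quoted thread can be sketched in plain Python. This is not SAGE's actual @parallel decorator; it is a rough illustration using the standard-library multiprocessing module (the descendant of the PyProcessing package Dag mentions), with illustrative names throughout.

    # Rough illustration of the @parallel idea, not SAGE's implementation.
    from functools import wraps
    from multiprocessing import Pool

    def parallel(func):
        """Wrap func so that calling it on a list maps func over the
        elements using a pool of worker processes."""
        @wraps(func)
        def wrapper(args):
            with Pool() as pool:
                return pool.map(func, args)
        return wrapper

    def double(x):
        return 2 * x

    if __name__ == "__main__":
        # Applied explicitly rather than with "@" so the module-level name
        # "double" still refers to the plain function, which keeps it
        # picklable for the worker processes.
        parallel_double = parallel(double)
        print(parallel_double([1, 2, 3]))  # [2, 4, 6]

This needs no Cython support at all, which is Dag's point, but it also cannot give the tight, GIL-free inner loop of the typed OpenMP case above.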
_______________________________________________
Cython-dev mailing list
[email protected]
http://codespeak.net/mailman/listinfo/cython-dev
