> However, I'll also note that the usual way to get high performance with
> NumPy is to reformulate the computation so that you do it all in
> parallel. If you are evaluating this using a grid with n=1000 or similar,
> then as long as you code things so that all operations happen "in
> parallel" then you could code it in pure Python if you wished and it
> would still be at least comparable to Fortran.
> 
> I.e. to avoid all the call overheads etc., it pays off to rather do:
> 
> gridvalues = eval_func(gridpts)
> 
> than
> 
> for i in range(..):
>     gridvalues[i] = eval_func(gridpts[i])
> 
> and continue in the same fashion all the way down. Wherever your loop is,
> there's your problem...
> 
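For concreteness, here is a minimal NumPy sketch of the vectorized style the
quoted post describes. The names eval_func and gridpts are taken from the
quote; the grid size and the quadratic body are just stand-ins, not something
from the thread:

import numpy as np

gridpts = np.linspace(0.0, 1.0, 1000)   # n=1000 grid, as in the quoted example

def eval_func(x):
    # operates elementwise on the whole array, so there is no Python-level loop
    return x * x + 1.0

gridvalues = eval_func(gridpts)          # one call evaluates the entire grid
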
True, but I like loops :-) I find vectorized code quickly becomes unreadable
(and awkward to write). I would rather resort to C or Fortran if this becomes
too much of an issue. I find the best tradeoff is to use tools like f2py and
Cython to decrease the call overhead; I can live with the other slowdowns.
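As a rough illustration of that loop-based style, here is a minimal Cython
sketch (the function names, the quadratic body, and the buffer declarations
are hypothetical, not code from this thread): typing the loop index, the
callee, and the arrays removes most of the per-call and indexing overhead
while the loop itself stays readable.

# grid_eval.pyx -- hypothetical sketch, not from the thread
import numpy as np
cimport numpy as np
cimport cython

cdef double eval_func(double x):
    # stand-in for whatever is being evaluated at each grid point
    return x * x + 1.0

@cython.boundscheck(False)
def eval_grid(np.ndarray[np.double_t, ndim=1] gridpts):
    # plain loop, but with C-level calls and typed array access
    cdef Py_ssize_t i
    cdef np.ndarray[np.double_t, ndim=1] gridvalues = np.empty_like(gridpts)
    for i in range(gridpts.shape[0]):
        gridvalues[i] = eval_func(gridpts[i])
    return gridvalues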

Gabriel