Hi Robert,
Thanks so much for your insight.
> The question should be "is this pattern too generic to be sped up in
> Cython."
Oops, that's what I meant actually...
> If, as your experiments indicate, the empty loop is almost
> as expensive as the do_curvature loop, then Cython can help you
> there.
Correct, that is the case.
> However, it can't speed up the 250000 calls to do_curvature.
Of course, though these are nearly as quick as the C++ call, which is
why the looping overhead is so nasty.
Certainly when looping over reals rather than integers, things do get
slow.
Here's an example of a typical pattern.
Profiling shows that computing curvature and storing the results in a
wrapped C++ datatype is very quick.
# computing the curvature at a parametric coordinate (u, v):
def curvature( u, v ):
    return gaussian_curvature

# storing a result at a given index:
a_wrapped_datatype( indx, gaussian_curvature )
# naive, really slow version
from numpy import ogrid

def curvature_grid( curvature, a_wrapped_datatype, n_samples ):
    samples = ogrid[ 0:1:n_samples*1j ].tolist()
    n = 1
    for a in samples:
        for b in samples:
            curv = curvature( a, b )
            a_wrapped_datatype( n, curv )
            n += 1
    return a_wrapped_datatype
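For what it's worth, when the curvature function can be written to operate elementwise on NumPy arrays (which a wrapped C++ call usually can't, so this is only a sketch under that assumption; the `curvature` below is a toy stand-in), the whole double loop collapses into a single vectorized call:

```python
import numpy as np

# toy stand-in for the real curvature function; the point is that it
# works elementwise on whole arrays, not just on scalars (assumption)
def curvature(u, v):
    return u * u + v * v

def curvature_grid_vec(curvature, n_samples):
    # build both coordinate grids up front instead of nested Python loops
    u, v = np.meshgrid(np.linspace(0.0, 1.0, n_samples),
                       np.linspace(0.0, 1.0, n_samples),
                       indexing="ij")
    # one vectorized call replaces n_samples**2 Python-level calls
    return curvature(u, v).ravel()

curvs = curvature_grid_vec(curvature, 3)
```

This sidesteps the interpreter overhead entirely, but of course only when the per-sample call can be batched.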
I'm hoping Cython could help me build an efficient n-dimensional loop.
Such a pattern would be quite elegant: I already have fast function
calls, but Python introduces serious overhead in the looping itself.
Since you do this all the time, it would be really interesting to see
if it can be sped up.
Thanks to all the replies, though, I understand that this is in fact
pretty difficult to do in a generic way.
The following would be a wonderful function to have:
loop_fastnd( curvature, store_curvature, ( (0, n), (0, n) ) )
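A pure-Python sketch of such a function might look like the following. The signature (including the extra `n_samples` argument) is my guess, not an existing API, and the callables are toy stand-ins; being plain Python, it only removes the boilerplate, while a Cython version would be needed to remove the interpreter overhead itself:

```python
import itertools
import numpy as np

def loop_fastnd(func, store, ranges, n_samples):
    """Hypothetical generic n-dimensional loop: evaluate func at every
    point of the Cartesian product of the given (start, stop) ranges,
    sampled n_samples times per axis, and hand each result to
    store(index, value)."""
    axes = [np.linspace(start, stop, n_samples) for start, stop in ranges]
    n = 0
    for n, point in enumerate(itertools.product(*axes), start=1):
        store(n, func(*point))
    return n  # total number of samples evaluated

# toy usage with stand-in callables:
results = {}
total = loop_fastnd(lambda u, v: u + v,      # stand-in "curvature"
                    results.__setitem__,     # stand-in "store_curvature"
                    ((0.0, 1.0), (0.0, 1.0)), n_samples=2)
```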
-jelle
_______________________________________________
Cython-dev mailing list
[email protected]
http://codespeak.net/mailman/listinfo/cython-dev