On Tue, 2009-01-27 at 13:54 +0900, David Cournapeau wrote:
> Jochen wrote:
> > On Tue, 2009-01-27 at 13:28 +0900, David Cournapeau wrote:
> >   
> >> Jochen wrote:
> >>     
> >>> On Tue, 2009-01-27 at 12:49 +0900, David Cournapeau wrote:
> >>>   
> >>>       
> >>>> Jochen wrote:
> >>>>     
> >>>>         
> >>>>> Hi all,
> >>>>>
> >>>>> I just wrote ctypes bindings to fftw3 (see
> >>>>> http://projects.scipy.org/pipermail/scipy-user/2009-January/019557.html
> >>>>> for the post to scipy). 
> >>>>> Now I have a couple of numpy related questions:
> >>>>>
> >>>>> In order to be able to use SIMD instructions, I
> >>>>> create an ndarray subclass which uses fftw_malloc to allocate the
> >>>>> memory and fftw_free to free it when the array is deleted. This
> >>>>> works fine for in-place operations; however, if someone does something
> >>>>> like this:
> >>>>>
> >>>>> a = fftw3.AlignedArray(1024, complex)
> >>>>>
> >>>>> a = a+1
> >>>>>
> >>>>> a.ctypes.data points to a different memory location (this is actually
> >>>>> an even bigger problem when executing fftw plans); however,
> >>>>> type(a) still gives me <class 'fftw3.planning.AlignedArray'>.
> >>>>>   
> >>>>>       
> >>>>>           
> >>>> I can't comment about subclassing ndarrays, but I can give you a hint
> >>>> about the aligned-allocator problem: you could maintain two lists of
> >>>> cached plans, automatically detect whether your arrays are aligned or
> >>>> not, and use the appropriate list; one list is for aligned arrays, one
> >>>> for unaligned. Before removing support for fftw, I played with some C++
> >>>> code to do exactly that. You can tell fftw to create plans for unaligned
> >>>> arrays by using the FFTW_UNALIGNED flag:
> >>>>
> >>>> http://projects.scipy.org/scipy/scipy/browser/branches/refactor_fft/scipy/fftpack/backends/fftw3/src/zfft.cxx
> >>>>
> >>>> cheers,
> >>>>
> >>>> David
> >>>>     
> >>>>         
> >>> Hi David,
> >>>
> >>> I have actually kept more closely to the fftw way of doing things, i.e.
> >>> I create a plan Python object from two arrays; it also stores the two
> >>> arrays to prevent someone from deleting the original arrays and then
> >>> executing the plan.
> >>>       
> >> I am not sure I follow you when you say the "fftw way": you can avoid
> >> having to store the arrays altogether while still using fftw plans;
> >> there is nothing "unfftw" about using plans that way. I think trying to
> >> guarantee that your arrays' data buffers won't change is more complicated.
> >>
> >> David
> >>     
> >
> > Sorry maybe I wasn't quite clear, what I mean by the "fftw way" is
> > creating a plan and executing the plan, instead of doing x=fft(y).
> 
> I guess I was not very clear, because my suggestion has nothing to do
> with the above.
> 
> >  As
> > you say, the problem comes when you execute a plan on arrays which no
> > longer exist, which causes Python to segfault (I'm talking about using
> > fftw_execute(), not fftw_execute_dft()). So yes, essentially my problem
> > is trying to ensure that the buffer does not change.
> 
> I agree that if you just use fftw_execute, you will get segfaults: I am
> just not sure that your problem is to ensure that the buffer does not
> change :) You can instead handle relatively easily the case where the
> buffers are not aligned, while still using the fftw API. fftw_execute is
> just not an appropriate API for Python code, IMHO.

Ah ok, I think I understand you now :). I agree that using fftw_execute
is not the most Pythonic way of doing things. However, with every other
way I could think of, you would need to do a number of checks and
possibly create new plans, as well as keep old plans around when doing
an FFT. (I believe that's what you did in the old fftw bindings in
scipy?) So I decided to create fftw bindings first for maximum
performance (I know, doing the whole calculation in C is probably more
appropriate if you want maximum performance ;); it also fits my needs.
However, I've been thinking about ways to make the whole thing more
intuitive. At least now I can compare how much overhead I'm adding if I
start to do checks in order to make the handling easier :)
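For what it's worth, here is a minimal sketch of the kind of check and
plan caching I have in mind (hypothetical names throughout; it assumes
fftw's usual 16-byte SIMD alignment, and make_plan stands in for
whatever actually builds the plan, with unaligned=True mapping to the
FFTW_UNALIGNED flag David mentioned):

```python
import numpy as np

SIMD_ALIGNMENT = 16  # bytes; what fftw's SIMD codelets typically require


def is_aligned(arr, alignment=SIMD_ALIGNMENT):
    """Return True if the array's data buffer starts on an aligned address."""
    return arr.ctypes.data % alignment == 0


# Hypothetical plan cache keyed on (shape, dtype, aligned-or-not), so an
# unaligned input transparently falls back to an unaligned plan instead
# of segfaulting on a plan made for an aligned buffer.
_plan_cache = {}


def get_plan(arr, make_plan):
    """Fetch a cached plan matching this array's shape, dtype and
    alignment, creating one via make_plan on a cache miss."""
    key = (arr.shape, arr.dtype.str, is_aligned(arr))
    if key not in _plan_cache:
        _plan_cache[key] = make_plan(arr, unaligned=not is_aligned(arr))
    return _plan_cache[key]
```

The alignment test is a single modulo on the data pointer, so the
per-call overhead should be negligible next to the transform itself.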

> 
> Another solution would be to work on numpy itself, so that it uses
> aligned buffers, but that's obviously much longer term - that's just one
> of those things on my TODO list that I never took the time to do.
> 
I agree that would be nice; I'd think a couple of operations would
probably benefit from that.
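Until numpy itself allocates aligned buffers, one workaround that needs
no subclass (and no fftw_malloc) is to over-allocate a byte buffer and
slice it at an aligned offset; aligned_empty below is just a sketch of
that idea, not anything in numpy or my bindings:

```python
import numpy as np


def aligned_empty(shape, dtype, alignment=16):
    """Allocate an uninitialized array whose data buffer starts on an
    `alignment`-byte boundary, by over-allocating a uint8 buffer and
    slicing it at the first aligned address."""
    dtype = np.dtype(dtype)
    nbytes = int(np.prod(shape)) * dtype.itemsize
    # over-allocate by `alignment` bytes so an aligned start always exists
    buf = np.empty(nbytes + alignment, dtype=np.uint8)
    offset = (-buf.ctypes.data) % alignment
    # the returned view keeps `buf` alive through its .base chain
    return buf[offset:offset + nbytes].view(dtype).reshape(shape)
```

The resulting array does not own its memory, so plain fftw plans created
for it stay valid as long as the view (and hence the base buffer) is
referenced; the original problem of a = a + 1 rebinding `a` to a freshly
allocated, possibly unaligned array remains, of course.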

> David
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
