BLAS won't help for FFT. You would need a faster FFT library than the one
we use, and you would need to modify the perform() method of those ops to use that new library.

FFTW is one possible faster implementation; there are others. I can't
comment on which one would be best. Search the web; I recall seeing some
people comparing the FFT libraries available in Python.

If you modify those ops for that, pushing those changes upstream would be
great.
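
As a minimal sketch of that kind of swap, outside of Theano itself: pyFFTW (one FFTW wrapper, chosen here as an assumption; any library exposing a NumPy-compatible interface would work the same way) can be dropped in where the op's perform() currently calls numpy.fft, with a fallback to NumPy when it isn't installed.

```python
import numpy as np

# Sketch: pick a faster FFT backend if available, else fall back to
# NumPy. pyFFTW's numpy_fft interface mirrors numpy.fft, so an op's
# perform() could call rfft/irfft on whichever backend was selected.
try:
    import pyfftw
    import pyfftw.interfaces.numpy_fft as fft_backend
    pyfftw.interfaces.cache.enable()  # reuse FFTW plans across calls
except ImportError:
    fft_backend = np.fft  # NumPy fallback, same function signatures

def rfft(x):
    # real-to-complex forward transform
    return fft_backend.rfft(x)

def irfft(y, n):
    # complex-to-real inverse transform; n restores the original length
    return fft_backend.irfft(y, n=n)
```

A round trip irfft(rfft(x), len(x)) should recover x with either backend, which makes it easy to check the swap before pushing it into the ops.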

Fred



On Tue, Apr 18, 2017 at 12:16 PM <[email protected]> wrote:

> Hi,
>
> I have implemented a layer which uses functions theano.tensor.fft.rfft
> and theano.tensor.fft.irfft. What might be the best way to improve the
> speed of that layer on cpu? Installing FFTW, an optimized BLAS library?
>
> Thanks
> Cha.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>

