From tests I've done, the MKL FFT library is comparable to, and sometimes 
faster than, FFTW. Both are much faster than numpy's fft. It's available in 
the Accelerate package from Continuum (the mkl conda package only has BLAS).
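
If anyone wants to reproduce the comparison, something along these lines 
should do. It's a rough sketch that only assumes numpy; pyfftw and mkl_fft 
are timed only if they can be imported, and the mkl_fft function name is a 
guess since it has changed between versions.

import timeit

import numpy as np

x = np.random.rand(1 << 20)

def bench(label, fn, repeat=20):
    # average wall time over `repeat` forward real FFTs
    t = timeit.timeit(lambda: fn(x), number=repeat) / repeat
    print("%-10s %.5f s per rfft" % (label, t))

bench("numpy", np.fft.rfft)

try:
    import pyfftw
    pyfftw.interfaces.cache.enable()  # reuse FFTW plans between calls
    bench("pyfftw", pyfftw.interfaces.numpy_fft.rfft)
except ImportError:
    pass

try:
    import mkl_fft  # ships with Intel Python / Continuum Accelerate
    bench("mkl_fft", mkl_fft.rfft_numpy)  # exact name may differ per version
except (ImportError, AttributeError):
    pass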

It also looks like Intel has a free conda channel that provides numpy and 
scipy built against MKL:
https://software.intel.com/en-us/forums/intel-distribution-for-python/topic/713736
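
If that channel works for you, something like

    conda install -c intel numpy scipy

should pull in the MKL-backed builds (I haven't verified the exact package 
names on that channel, so treat this as a guess).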

On Wednesday, April 19, 2017 at 10:54:33 AM UTC-7, nouiz wrote:
>
> BLAS won't help for fft. So you would need a faster fft library than the 
> one we use, and to modify the perform() method of those ops to use that new lib.
>
> FFTW is one possible faster implementation. There are others. I can't 
> comment on which one would be better. Search the web; I recall seeing 
> people compare the fft libraries that are available in Python.
>
> If you modify those ops for that, pushing those changes upstream would be 
> great.
>
> Fred
>
>
>
> On Tue, Apr 18, 2017 at 12:16 PM <[email protected]> wrote:
>
>> Hi,
>>
>> I have implemented a layer which uses the functions theano.tensor.fft.rfft 
>> and theano.tensor.fft.irfft. What might be the best way to improve the 
>> speed of that layer on CPU? Installing FFTW, or an optimized BLAS library?
>>
>> Thanks
>> Cha.
>>
>
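
To expand on Fred's suggestion above: for the CPU ops the change would go in 
theano/tensor/fft.py. Below is a very rough, untested sketch of what patching 
RFFTOp.perform() to call pyfftw might look like. The packed real/imag output 
layout and the shape input `s` are from my reading of that file in Theano 
0.9, so please check it against your version; IRFFTOp would be handled the 
same way.

import numpy as np
import pyfftw
import theano.tensor.fft as tfft

pyfftw.interfaces.cache.enable()  # cache FFTW plans across calls


def rfft_perform_fftw(self, node, inputs, output_storage):
    a = inputs[0]  # real input; the first axis is the batch axis
    s = inputs[1]  # sizes of the transformed axes
    m = pyfftw.interfaces.numpy_fft.rfftn(
        a, s=tuple(s), axes=tuple(range(1, a.ndim)))
    # theano.tensor.fft.rfft returns a real tensor with a trailing axis of
    # length 2 that holds the real and imaginary parts.
    out = np.empty(m.shape + (2,), dtype=a.dtype)
    out[..., 0] = np.real(m)
    out[..., 1] = np.imag(m)
    output_storage[0][0] = out


tfft.RFFTOp.perform = rfft_perform_fftw  # monkey-patch for experimentation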

