Re: [theano-users] improving fft performance on cpu

2017-04-20 Thread Cha Dicle
Thanks all for the valuable feedback. I will first explore the fastest fft
independent of Theano. I have been playing with Mac's Accelerate
Framework. It seems pretty well tuned; I could not beat its performance
using MKL or FFTW. I will look into that more carefully and try to post a
systematic comparison of those libraries.

Thanks again,
Best,
Cha.
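A minimal sketch of the kind of comparison harness described above, assuming only numpy; `pyfftw` and `mkl_fft` are probed as optional backends and simply skipped if they are not installed (their exact interfaces are not shown here and would need checking against each package's docs):

```python
# Sketch of a small FFT benchmarking harness.
# Assumes numpy only; optional backends are probed, never required.
import importlib
import timeit

import numpy as np


def bench_numpy_rfft(n=4096, repeats=50):
    """Time numpy's real FFT on a random signal; returns seconds per call."""
    x = np.random.randn(n)
    total = timeit.timeit(lambda: np.fft.rfft(x), number=repeats)
    return total / repeats


def optional_backends():
    """Yield names of optional FFT packages that happen to be installed."""
    for name in ("pyfftw", "mkl_fft"):
        try:
            importlib.import_module(name)
            yield name
        except ImportError:
            pass


if __name__ == "__main__":
    print("numpy rfft: %.2e s/call" % bench_numpy_rfft())
    for name in optional_backends():
        print("also available for comparison:", name)
```

A systematic comparison would repeat this over a range of sizes (powers of two and awkward prime-factor sizes behave very differently across FFT implementations).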

On Wed, Apr 19, 2017 at 9:05 PM, Patric  wrote:

> btw, as @Jesse mentioned, the Intel Distribution for Python already includes
> Theano w/ performance improvements out of the box :)
>
> Intel-optimized Deep Learning: Update 2 incorporates two Intel-optimized
> Deep Learning frameworks, Caffe* and Theano*, into the distribution so
> Python users can take advantage of these optimizations out of the box.

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to theano-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [theano-users] improving fft performance on cpu

2017-04-19 Thread Frédéric Bastien
Now, MKL is available in conda freely for everybody. You don't need the
Accelerate packages, just a recent enough conda. On older conda, you can
update it; it doesn't need a new installation.

Fred




Re: [theano-users] improving fft performance on cpu

2017-04-19 Thread Jesse Livezey
From tests I've done, the MKL FFT library is comparable to, and sometimes
faster than, the FFTW package. Both are much faster than the numpy FFT. It's
available in the Accelerate package from Continuum (the mkl conda package only
has BLAS).

It also looks like Intel has a free conda channel, which builds scipy and
numpy against MKL:
https://software.intel.com/en-us/forums/intel-distribution-for-python/topic/713736




Re: [theano-users] improving fft performance on cpu

2017-04-19 Thread Frédéric Bastien
BLAS won't help for FFT. So you would need a faster FFT library than the one
we use, and to modify the perform() method of those ops to use that new lib.

FFTW is one possible faster implementation; there are others. I can't
comment on which one would be better. Search the web; I recall seeing
people compare the FFT libraries available in Python.

If you modify those ops for that, pushing those changes upstream would be
great.

Fred
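To illustrate the idea above: a Theano op's perform() method is plain Python, so swapping in a faster FFT is a one-call change. The standalone class below only mimics the shape of a perform() method (the real op lives in theano.tensor.fft and its internals may differ); the numpy call is the default backend, and the comment marks the single line a faster library would replace:

```python
# Sketch of a perform()-style method for a real-FFT op, numpy-backed.
# This is NOT the actual Theano implementation, just the swap point.
import numpy as np


class RFFTPerformSketch:
    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        # Default backend: numpy. A faster library would replace this one
        # call, e.g. with pyfftw's numpy-compatible rfft (hypothetical
        # swap -- check the chosen library's actual interface).
        y = np.fft.rfft(x, axis=-1)
        # Theano's rfft packs real/imag parts into a trailing axis of size 2.
        out = np.stack([y.real, y.imag], axis=-1)
        output_storage[0][0] = out


# Minimal exercise of the sketch:
op = RFFTPerformSketch()
storage = [[None]]
op.perform(None, [np.ones((2, 8))], storage)
print(storage[0][0].shape)  # (2, 5, 2): 8 // 2 + 1 frequency bins, re/im
```

Any replacement backend should first be checked for numerical agreement with the numpy result before the modified op is used for training.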






[theano-users] improving fft performance on cpu

2017-04-18 Thread cha
Hi,

I have implemented a layer which uses the functions theano.tensor.fft.rfft
and theano.tensor.fft.irfft. What might be the best way to improve the
speed of that layer on CPU? Installing FFTW, or an optimized BLAS library?

Thanks
Cha.
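As a baseline for any speed comparison: on CPU, an rfft/irfft pair like the one in the layer described above is, assuming the standard definitions, equivalent to a numpy real-FFT round trip, which recovers the input up to floating-point error. Timing this pair gives the number any faster library (FFTW, MKL, Accelerate) has to beat:

```python
# Baseline: numpy real-FFT round trip, the CPU reference for an
# rfft -> irfft layer. Any candidate backend should match this output.
import numpy as np

x = np.random.randn(4, 128)
spec = np.fft.rfft(x, axis=-1)             # forward real FFT
back = np.fft.irfft(spec, n=128, axis=-1)  # inverse; pass n explicitly
assert np.allclose(x, back)                # round trip recovers the signal
```

Passing `n` explicitly to irfft matters: without it, odd-length inputs cannot be recovered unambiguously from the half-spectrum.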
