That is correct as of Theano 0.8 (I think).

If you use the bleeding-edge version of Theano, you can let CorrMM use 
OpenMP to parallelize across batches. If you have more than two cores, this 
should give additional speedup. GPUs are generally going to be much faster 
than CPUs; if you have large batches and lots of cores, CPUs can catch up a 
bit, but GPUs are still going to be faster.

On Monday, March 20, 2017 at 11:59:52 PM UTC-7, C. Ng wrote:
>
> Hi,
>
> Just want to confirm that theano.tensor.nnet.conv2d uses CorrMM (not the 
> legacy convolution) by default in CPU mode?
>
> I was hoping that forward prop (doing inference only, no training) using 
> CPU for convolution might be as fast as GPU (using CorrMM), given my batch 
> size is only 10. But using GPU is still quite a bit faster.

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.