I don't understand what you mean by "after the training starts". You must
not include the first call to the Theano function in your timing. Did you
do that?
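(For anyone following along: the first call to a compiled Theano function can trigger lazy GPU initialization and allocation, so it should be excluded from any benchmark. A minimal sketch of that timing pattern, using a plain Python callable as a hypothetical stand-in for the compiled function:)

```python
import time

def time_compiled_fn(fn, n_iters=100, warmup=1):
    """Time a callable, discarding the first `warmup` calls.

    The first call to a compiled Theano function may pay one-time
    setup costs (GPU init, allocation), so it is not timed.
    """
    for _ in range(warmup):
        fn()                       # warm-up calls, excluded from timing
    start = time.perf_counter()
    for _ in range(n_iters):
        fn()
    return (time.perf_counter() - start) / n_iters  # avg seconds/call

# Stand-in workload (hypothetical; replace with your compiled function):
avg = time_compiled_fn(lambda: sum(range(1000)))
```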

Fred

On Fri, Feb 17, 2017 at 7:53 AM Adam Becker <[email protected]> wrote:

> I noticed this problem in late 2016 while investigating scan overhead. Check
> here
> <https://github.com/Theano/libgpuarray/issues/292#issuecomment-262696507>.
> The comment also has a link to a script that helps reproduce the problem.
>
>
> On Friday, February 17, 2017 at 7:43:39 PM UTC+8, Ozan Çağlayan wrote:
>
> Hi,
>
> With the current Theano HEAD + libgpuarray, I launched two RNN-based MT
> systems with the old backend and the new backend and apparently the new
> backend is a little bit slower than the old one.
>
> This is on CUDA 7.5, cuDNN 5.1, and a Tesla K40 GPU, with batch size 64:
> old backend: 166ms / batch
> new backend: 186ms / batch
>
> On a moderate 4M sample dataset, this would bring an overhead of ~20
> minutes per epoch.
>
> Is this expected? If yes, why would I prefer the new backend?
>
> Thanks.
>
> --
> Ozan Çağlayan
> Research Assistant
> Galatasaray University - Computer Engineering Dept.
> http://www.ozancaglayan.com
>
> #HayırdaHayırVar
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "theano-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>
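For reference, the ~20 minutes per epoch figure quoted above follows directly from the reported numbers (20 ms extra per batch, 4M samples, batch size 64):

```python
# Sanity-check the overhead estimate from the quoted message.
extra_per_batch = 0.186 - 0.166          # seconds: 186 ms - 166 ms
batches_per_epoch = 4_000_000 / 64       # 62,500 batches
overhead_s = extra_per_batch * batches_per_epoch
print(f"{overhead_s / 60:.1f} minutes per epoch")  # ~20.8 minutes
```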
