If you build a single model that corresponds to many RNNs at once, then yes,
this should be faster.

But your model won't be just one RNN; it will be a group of RNNs.
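
A minimal sketch of the idea, in case it helps (this is not your friend's actual
model: I use a single hidden layer, made-up names, and a squared-error cost to
keep it short; a second hidden layer is the same pattern repeated). Every weight
gets an extra leading "model" axis, so one graph trains n_models RNNs at once:

    import numpy as np
    import theano
    import theano.tensor as T

    n_models, n_in, n_hid = 1000, 14, 7   # 1000 RNNs batched together
    rng = np.random.RandomState(0)
    floatX = theano.config.floatX

    def shared(shape):
        return theano.shared(rng.normal(scale=0.1, size=shape).astype(floatX))

    W_in  = shared((n_models, n_in,  n_hid))   # W_in[m] belongs to model m
    W_hid = shared((n_models, n_hid, n_hid))
    W_out = shared((n_models, n_hid))
    b     = shared((n_models, n_hid))

    x = T.tensor3('x')   # (time, n_models, n_in): each model gets its own data
    y = T.matrix('y')    # (time, n_models): per-model target at each step

    def step(x_t, h_tm1):
        # Per-model matrix products via broadcasting and a sum over the input axis.
        h_t = T.tanh((x_t[:, :, None] * W_in).sum(axis=1)
                     + (h_tm1[:, :, None] * W_hid).sum(axis=1) + b)
        out_t = (h_t * W_out).sum(axis=1)      # one scalar output per model
        return h_t, out_t

    h0 = T.zeros((n_models, n_hid))
    (h_seq, out_seq), _ = theano.scan(step, sequences=x, outputs_info=[h0, None])

    # Each model only influences its own outputs, so one summed cost still
    # gives every model its own independent gradients.
    cost = ((out_seq - y) ** 2).mean()
    params = [W_in, W_hid, W_out, b]
    grads = T.grad(cost, params)
    lr = np.asarray(0.01, dtype=floatX)
    train = theano.function([x, y], cost,
                            updates=[(p, p - lr * g)
                                     for p, g in zip(params, grads)])

Then one call like train(x_batch, y_batch) updates all 1000 models at once, and
you loop over groups of models to cover the 30,000. For larger layers you could
replace the broadcast-and-sum products with T.batched_dot, which does the same
per-model matrix multiply more efficiently on the GPU.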

Fred

On Wed, Jul 13, 2016 at 3:03 PM, Vishal Ahuja <[email protected]> wrote:

> A friend of mine has designed an RNN for a certain problem using Theano.
> It has 14 input nodes, 2 hidden layers with 7 nodes each, and a single
> output node. We have around 30,000 such RNNs that need to be trained. I am
> a software engineer with very little exposure to machine learning. What I
> need to do is speed up the training process of these RNNs.
>
> Looking at the problem from a CS perspective, I don't think that anything
> can be done to speed up the training of a single RNN. Running such a small
> RNN on a GPU makes no sense. Instead, we can achieve a speed-up by batching
> the RNNs, say 1000 at a time, and sending them to the GPU. The nature of
> the problem is SIMD: each RNN is identical, but each trains on a different
> data set.
>
> Can someone please explain how this could be done using Theano?
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"theano-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
