It is fine. All the gradient paths will be summed together. If you take
different subsets, there is normally no problem. The problem is when
people take the same subset multiple times and expect Theano to detect
this and handle it as if it had been taken only once.
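To make the first point concrete, here is a minimal sketch (the names
emb and idx and the shapes are made up for illustration): theano.grad
simply adds up the contribution of every path in which the subtensor
appears, including repeated indices.

import numpy as np
import theano
import theano.tensor as T

emb = theano.shared(np.random.randn(5, 3).astype('float32'), name='emb')
idx = T.ivector('idx')

sub = emb[idx]                       # the same subtensor used in two places
cost = sub.sum() + (sub ** 2).sum()

g = theano.grad(cost, emb)           # both paths contribute; Theano sums them
f = theano.function([idx], g)
print(f(np.asarray([0, 0, 2], dtype='int32')))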

In your particular case, a (probably small) speed-up would be to do:


forw_embs = emb[x.flatten()]

bidir_emb = tensor.concatenate([forw_embs, forw_embs[::-1]], ...)

This would avoid one copy in memory. As usual, only profiling can confirm
that this is as fast or faster, but that is my guess.
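If you want to check, compiling each variant with profile=True and
comparing the per-op timings is the easiest way. A rough sketch (it
assumes x and emb from your snippet; x_val and the axis argument are
placeholders, not from your code):

forw_embs = emb[x.flatten()]
bidir_emb = tensor.concatenate([forw_embs, forw_embs[::-1]], axis=0)  # use the axis your real code needs

f = theano.function([x], bidir_emb, profile=True)
f(x_val)              # x_val: an integer index array matching x's type
f.profile.summary()   # per-op timings; compare the AdvancedSubtensor/Join entries between variants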

Fred

On Tue, Aug 23, 2016 at 10:04 AM, Ozan Çağlayan <[email protected]> wrote:

> Hmm. I have never received an error from theano.grad, but I have a usage
> like the one below, which takes two embeddings of the same vector_of_indices,
> one of them reversed. Does this cause any problem during backprop? Or,
> generally, is it OK to use the same tensor several times in the graph, either
> in its full form or as a subtensor of it?
>
>
> forw_embs = emb[x.flatten()]
> x_rev = x[::-1]
> back_embs = emb[x_rev.flatten()]
>
> bidir_emb = tensor.concatenate([forw_embs, back_embs], ...)
> ..
> ..
> ..
>
> grads = theano.grad(cost, [..., ..., ..., emb])
>
>

