Ok let me try again ;)

I have a model with a Theano variable u and a tensor v that depends on u. 
The problem is that I do not have a closed-form expression that would let 
me define v directly as a function of u. What I do have is an error term e 
that depends on u and v and that should be 0. So, each time I modify u 
during training of my model, I use a solver to calculate v. Now, dv/du can 
be expressed in closed form using the implicit function theorem, and I want 
to use this expression so that, while training my model, the gradient 
carries the information of how v is going to evolve each time u is updated 
and v is solved for.
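
For reference, since e(u, v(u)) = 0 along the solution path, differentiating 
with respect to u gives de/du + (de/dv) * dv/du = 0, so (assuming de/dv is 
invertible):

    dv/du = -(de/dv)^{-1} * (de/du)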

I can implement this in two ways:
1) v is a Theano variable -> this makes it easy to compute the error term 
and solve for v, but I cannot see any way to plug dv/du in (see the small 
sketch after this list).
2) v is the output of a custom Op.
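
To illustrate what I mean by 1), here is a minimal sketch (the cubic 
error_expr and the bracketing interval are just placeholders for my real 
problem):

    import numpy as np
    import theano
    from scipy.optimize import brentq

    # u and v as plain shared variables (method 1)
    u = theano.shared(np.float64(1.0), name='u')
    v = theano.shared(np.float64(0.5), name='v')

    e = v ** 3 + v - u          # stand-in error term e(u, v)
    f_e = theano.function([], e)

    def solve_v():
        # root-find e(u, v) = 0 in v after each update of u
        def resid(x):
            v.set_value(x)
            return f_e()
        v.set_value(brentq(resid, -10.0, 10.0))

The catch is that v is just a shared variable here, so theano.grad sees no 
dependence of v on u at all.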

Right now I use 2): the custom Op has grad and perform methods defined. 
perform does not actually do any computation; it only copies a self.data 
attribute to the output. When I use the solver to calculate v, I update 
this self.data attribute in the merit function. I don't like this approach 
too much because it forces me to keep track of all the instances of this 
particular Op, so I was wondering whether there is a way to use method 1) 
but feed the gradient expression in somewhere.
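
To make the setup concrete, here is a stripped-down sketch of roughly what 
my Op looks like (scalar case for simplicity; error_expr and the class name 
are just illustrative stand-ins for my actual code):

    import numpy as np
    import theano
    import theano.tensor as tt

    def error_expr(u, v):
        # stand-in for the real symbolic error term e(u, v)
        return v ** 3 + v - u

    class SolvedV(theano.Op):
        def __init__(self):
            self.data = None  # the outer solver loop writes the solved v here

        def make_node(self, u):
            u = tt.as_tensor_variable(u)
            return theano.Apply(self, [u], [u.type()])

        def perform(self, node, inputs, output_storage):
            # no real computation: just copy the externally solved value
            output_storage[0][0] = np.asarray(self.data,
                                              dtype=node.outputs[0].dtype)

        def grad(self, inputs, output_grads):
            u, = inputs
            g, = output_grads
            v = self(u)
            e = error_expr(u, v)
            # implicit function theorem: dv/du = -(de/du) / (de/dv)
            de_dv = theano.grad(e, v)
            de_du = theano.grad(e, u, consider_constant=[v])
            return [g * (-de_du / de_dv)]

The self.data attribute lives completely outside the graph, which is exactly 
why I end up having to track every instance of the Op by hand.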

