I think the known_grads part of the `grad` API is what you are looking 
for. I've written up a quick example 
here: https://gist.github.com/d5350169f8eb521b55bb2f63bb94b3b0
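
In case the gist goes stale, here is a minimal sketch of the idea (the 
variable names are just for illustration): pass cost=None to theano.grad 
and seed backpropagation through known_grads, which plays the same role 
as gradOutput in Torch's backward():

    import theano
    import theano.tensor as T

    x = T.vector('x')
    y = T.tanh(x)            # any intermediate expression
    g_y = T.vector('g_y')    # externally supplied gradient w.r.t. y

    # Start backprop from g_y instead of differentiating a scalar cost.
    g_x = theano.grad(cost=None, wrt=x, known_grads={y: g_y})

    f = theano.function([x, g_y], g_x)

Calling f(x_val, g_y_val) then returns the gradient with respect to x 
given the seed gradient g_y_val, much like 
gradInput = module:backward(input, gradOutput) in Torch.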

On Thursday, November 3, 2016 at 3:03:53 PM UTC-4, [email protected] wrote:
>
> Sorry about the unclear question. I have read the function you sent me, 
> thank you, but it is still a little confusing. What I want is actually 
> like the following backward function in Torch: 
> https://github.com/torch/nn/blob/master/doc/module.md, where the initial 
> gradient can be passed in as the parameter gradOutput.
> Can you help me figure it out? How would I use this in Theano? I 
> think it should be possible in Theano, right?
> Thanks a lot.
>
> [gradInput] backward(input, gradOutput)
>
> Performs a *backpropagation step* through the module, with respect to the 
> given input. In general, this method assumes forward(input) 
> <https://github.com/torch/nn/blob/master/doc/module.md#nn.Module.forward> 
> has been called before, *with the same input*. This is necessary for 
> optimization reasons. If you do not respect this rule, backward() will 
> compute incorrect gradients.
>
> In general, input, gradOutput, and gradInput are Tensors 
> <https://github.com/torch/torch7/blob/master/doc/tensor.md>. However, 
> some special sub-classes like table layers 
> <https://github.com/torch/nn/blob/master/doc/table.md#nn.TableLayers> might 
> expect something else. Please refer to each module's specification for 
> further information.
>
> A *backpropagation step* consists of computing two kinds of gradients at 
> input, given gradOutput (the gradient with respect to the output of the 
> module). This function simply performs this task using two function calls:
>
>    - A function call to updateGradInput(input, gradOutput) 
>      <https://github.com/torch/nn/blob/master/doc/module.md#nn.Module.updateGradInput>.
>    - A function call to accGradParameters(input, gradOutput, scale) 
>      <https://github.com/torch/nn/blob/master/doc/module.md#nn.Module.accGradParameters>.
>
> It is not advised to override this function call in custom classes. It is 
> better to override the updateGradInput(input, gradOutput) 
> <https://github.com/torch/nn/blob/master/doc/module.md#nn.Module.updateGradInput> 
> and accGradParameters(input, gradOutput, scale) 
> <https://github.com/torch/nn/blob/master/doc/module.md#nn.Module.accGradParameters> 
> functions.
>
>
> On Thursday, November 3, 2016 at 7:59:53 AM UTC-7, nouiz wrote:
>>
>> I'm not sure I understand correctly. Here is the doc for our grad; I'm 
>> pretty sure you can do what you want. Maybe the doc will help you. If not, 
>> can you clarify what you want?
>>
>>
>> http://deeplearning.net/software/theano/library/gradient.html#theano.gradient.grad
>>
>> Fred
>>
>> On Thu, Nov 3, 2016 at 2:10 AM, <[email protected]> wrote:
>>
>>> Hello, everyone. If I want to specify an input gradient for a function 
>>> f, how should I implement this in Theano?
>>> Any reply would be much appreciated...
>>>
