Hi Ryan,

Yes, that solution may work for some cases, but I am not sure it can handle 
more complicated ones.  For example, suppose there are three models, Q1, Q2, 
and V, and two loss functions, Loss1 and Loss2.  Loss1 is computed from the 
outputs of V and Q1 (RMSE or some other form of loss), but we only want to 
backpropagate its gradient to model V's parameters, not to model Q1's 
parameters.
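
To make the case concrete, here is a rough PyTorch-style sketch of what I 
mean (the networks, dimensions, and data below are only placeholders, not 
real SAC code or anything from mlpack):

# Hypothetical sketch: Loss1 depends on V and Q1, but only V gets a gradient.
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2
q1 = nn.Linear(state_dim + action_dim, 1)   # stands in for model Q1
q2 = nn.Linear(state_dim + action_dim, 1)   # stands in for model Q2
v  = nn.Linear(state_dim, 1)                # stands in for model V

states  = torch.randn(8, state_dim)          # dummy batch
actions = torch.randn(8, action_dim)

# Q1's output is detached, so backward() only produces gradients for V.
q1_target = q1(torch.cat([states, actions], dim=1)).detach()
loss1 = nn.functional.mse_loss(v(states), q1_target)
loss1.backward()

print(v.weight.grad is not None)   # True: V receives a gradient
print(q1.weight.grad is None)      # True: Q1 does not

The question is how to express the same "cut the gradient here" behavior 
inside mlpack's ANN module.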


This is a simplified abstraction of the SAC model; I am sure there are even 
more diverse demands on the ANN module.  I still need more time to figure out 
how to achieve this, and I hope our discussion will spark some ideas about it.
 


I am not sure I have explained the case clearly, so please let me know if you 
need more information.


Regards,
Xiaohong

At 2019-02-15 10:15:27, "Ryan Curtin" <[email protected]> wrote:
>On Fri, Feb 15, 2019 at 09:35:03AM +0800, problemset wrote:
>> Hi, all, 
>> 
>> Nowadays, as ML/DL/RL develop quickly, there are more diverse demands
>> on the flexibility of the ANN module. I am wondering whether there is
>> a way to stop gradient backprop through a particular layer in
>> mlpack, like PyTorch's detach() or TensorFlow's stop_gradient.
>
>Hey there Xiaohong,
>
>Could we create a layer we could add that just doesn't pass a gradient
>through, perhaps?
>
>That may not be the best solution (in fact I am sure it is not) but it
>could at least be a start.
>
>-- 
>Ryan Curtin    | "I know... but I really liked those ones."
>[email protected] |   - Vincent
_______________________________________________
mlpack mailing list
[email protected]
http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
