Hi all,

Nowadays, as ML/DL/RL develops quickly, there is growing demand for
flexibility in ANN modules. I am wondering whether there is a way to stop
gradient backpropagation through a particular layer in mlpack, like PyTorch's
detach() or TensorFlow's stop_gradient.
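
For reference, here is a minimal PyTorch sketch of the behavior I mean
(tf.stop_gradient being the TensorFlow counterpart); nothing here assumes
anything about mlpack's API, which is exactly what I am asking about:

    import torch

    x = torch.ones(3, requires_grad=True)
    y = x * 2              # ordinary node in the autograd graph
    z = y.detach()         # same values as y, but cut off from the graph
    loss = (y + z).sum()   # gradient flows back through the y path only
    loss.backward()
    print(x.grad)          # tensor([2., 2., 2.]); the detached path adds nothing

Without the detach() the gradient would be [4., 4., 4.], so the detached
branch really is treated as a constant during backprop.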


Regards,
Xiaohong


 