On Fri, Feb 15, 2019 at 09:35:03AM +0800, problemset wrote:
> Hi all,
>
> Nowadays, as ML/DL/RL develop quickly, there is growing demand for
> flexibility in ANN modules. I am wondering whether there is a way to
> stop gradient backpropagation through a particular layer in mlpack,
> like PyTorch's detach() or TensorFlow's stop_gradient.
Hey there Xiaohong,

Could we create a layer to add to the network that simply doesn't pass
a gradient through, perhaps? That may not be the best solution (in fact
I am sure it is not), but it could at least be a start.

--
Ryan Curtin    | "I know... but I really liked those ones."
[email protected] |   - Vincent
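[Editor's note: to make the suggestion above concrete, here is a minimal
standalone sketch of such a pass-through layer: Forward() is the identity
and Backward() hands an all-zero error to the previous layer, so
backpropagation stops there. The method names mirror mlpack's ANN layer
convention, but the signatures here are illustrative plain Armadillo, not
mlpack's actual layer API.]

#include <armadillo>
#include <iostream>

// A layer that forwards its input unchanged but blocks the gradient
// from flowing to anything earlier in the network.
class StopGradient
{
 public:
  // Forward pass: identity; just copy the input through.
  void Forward(const arma::mat& input, arma::mat& output)
  {
    output = input;
  }

  // Backward pass: swallow the incoming error gy.  The error g handed
  // to the previous layer is all zeros, so backpropagation stops here.
  void Backward(const arma::mat& /* input */,
                const arma::mat& gy,
                arma::mat& g)
  {
    g.zeros(gy.n_rows, gy.n_cols);
  }
};

int main()
{
  StopGradient layer;

  arma::mat input(3, 1, arma::fill::randu);
  arma::mat output, delta;

  layer.Forward(input, output);               // output == input
  arma::mat error = arma::ones<arma::mat>(3, 1); // pretend upstream error
  layer.Backward(input, error, delta);        // delta is all zeros

  std::cout << "gradient passed upstream:\n" << delta;
  return 0;
}

[Dropping such a layer between two parts of a network would effectively
freeze everything before it, since no earlier layer ever receives a
nonzero error signal.]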
