On Thu, Jan 18, 2018 at 02:57:20PM +0100, Marcus Edel wrote:
> Hello Eugene,
> 
> Currently, it's not possible to apply penalties on layer parameters or
> layer activity during the optimization process. However, it should be
> straightforward to implement a layer that can apply penalties. Another
> option would be to implement it as a decay policy so that it can be
> used inside the optimizer.

I don't know if this is a clean solution, but maybe it would be
possible to make an output layer (like NegativeLogLikelihood<>) that
simply adds the weight penalty in its Forward(), Backward(), and
Gradient() functions.  The only issue is that the output layer doesn't
seem to have access to the parameters, so some function signatures
might need to be modified slightly.
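
To make that concrete, here is a rough standalone sketch (not actual
mlpack code) of the kind of layer I'm picturing; the PenalizedNLL name,
the lambda parameter, and the extra `weights` argument to Forward() are
all hypothetical, and the class just stands in for a
NegativeLogLikelihood<>-style loss with an added L2 term:

#include <armadillo>

// Hypothetical output layer: negative log-likelihood plus an L2 penalty
// on the network parameters.  The `weights` argument is the signature
// change mentioned above, since output layers currently can't see the
// parameters.
class PenalizedNLL
{
 public:
  explicit PenalizedNLL(const double lambda = 0.01) : lambda(lambda) { }

  // Loss = NLL + lambda * ||weights||^2.  `input` holds
  // log-probabilities, one column per point; `targets` holds 0-based
  // class labels.
  double Forward(const arma::mat& input,
                 const arma::Row<arma::uword>& targets,
                 const arma::mat& weights) const
  {
    double loss = 0.0;
    for (arma::uword i = 0; i < input.n_cols; ++i)
      loss -= input(targets[i], i);
    return loss + lambda * arma::accu(arma::square(weights));
  }

  // The penalty doesn't depend on the layer's input, so the delta passed
  // backwards is the same as for plain NLL.
  void Backward(const arma::mat& input,
                const arma::Row<arma::uword>& targets,
                arma::mat& output) const
  {
    output.zeros(input.n_rows, input.n_cols);
    for (arma::uword i = 0; i < input.n_cols; ++i)
      output(targets[i], i) = -1.0;
  }

  // Gradient of the penalty term with respect to the weights themselves.
  void Gradient(const arma::mat& weights, arma::mat& gradient) const
  {
    gradient += 2.0 * lambda * weights;
  }

 private:
  double lambda;
};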

Alternatively, do you think it might be reasonable to add an extra
template parameter to FFN and RNN for a policy class that handles
penalties?  Either that could work, or the policy could instead go in
the optimizers (so you can have regularization for any problem, not
just neural networks)... maybe the latter is the better idea.
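
For the optimizer route, I'm imagining something like an SGD update
policy that folds an L2 penalty (weight decay) into every step.  The
Initialize()/Update() interface below mirrors what VanillaUpdate uses;
the L2DecayUpdate name, the lambda parameter, and the usage comment are
hypothetical:

#include <armadillo>

// Hypothetical SGD update policy that applies weight decay each step.
class L2DecayUpdate
{
 public:
  explicit L2DecayUpdate(const double lambda = 1e-4) : lambda(lambda) { }

  // Called once before optimization starts; nothing to set up here.
  void Initialize(const size_t /* rows */, const size_t /* cols */) { }

  // w <- w - stepSize * (gradient + lambda * w): the usual SGD step with
  // the L2 penalty's gradient folded in.
  void Update(arma::mat& iterate,
              const double stepSize,
              const arma::mat& gradient)
  {
    iterate -= stepSize * (gradient + lambda * iterate);
  }

 private:
  double lambda;
};

// Hypothetical usage: SGD<L2DecayUpdate> sgd; sgd.Optimize(f, coords);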

Anyway, I am just tossing ideas out there; hopefully at least one of
them is a good idea. :)

-- 
Ryan Curtin    | "I can't believe you like money too.  We should
[email protected] | hang out."  - Frito
