It seems like you are looking at a terribly outdated version of 
Boltzmann.jl; try updating to the latest master.

L2 regularization is essentially an additional term in the loss function 
that you try to minimize, e.g. `decay_rate / 2 * sum(W .^ 2)` added to the 
loss. You can add this term to the loss function itself, or add the 
*gradient* of this term to the *gradient* of the loss function. 
Boltzmann.jl uses the second approach, splitting the gradient calculation 
into 2 parts: 

1. Calculate the original gradient (the `gradient_classic` function).
2. Apply "updaters" such as learning rate, momentum, weight decay, etc. 
(the `grad_apply_*` functions); a rough sketch of this flow follows. 
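
Conceptually, the two phases fit together like this (a minimal standalone 
sketch with made-up stand-ins, not the library's actual code; only the 
two-phase split mirrors Boltzmann.jl, and `axpy!` here is the one from 
current Julia's LinearAlgebra standard library):

    using LinearAlgebra: axpy!

    # Phase 1: stand-in for the "classic" gradient computation
    # (done by `gradient_classic` in the library).
    toy_gradient(W) = 0.1 * randn(size(W)...)

    # Phase 2: updaters adjust the gradient in place; weight decay
    # subtracts decay_rate * W from it.
    function apply_weight_decay!(dW, W, decay_rate)
        axpy!(-decay_rate, W, dW)   # dW .-= decay_rate .* W
        return dW
    end

    W = randn(50, 100)                  # toy weight matrix
    dW = toy_gradient(W)                # phase 1
    apply_weight_decay!(dW, W, 0.01)    # phase 2
    W .+= dW                            # take the update step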

Regularization (both L1 and L2) is implemented in 
`grad_apply_weight_decay!` and boils down to the expression: 

axpy!(-decay_rate, rbm.W, dW)

where `decay_rate` is the L2 hyperparameter, `rbm.W` is the current set of 
parameters (excluding biases) and `dW` is the weight gradient just 
calculated. This subtracts `decay_rate * rbm.W` from the gradient, which 
is exactly the gradient of the `decay_rate / 2 * sum(W .^ 2)` penalty 
mentioned above. 
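
For reference, `axpy!(a, x, y)` (a BLAS routine, available from Julia's 
LinearAlgebra standard library in current Julia) overwrites `y` with 
`a * x + y`, so the call above is an in-place version of 
`dW -= decay_rate * rbm.W`. A toy check with made-up values:

    using LinearAlgebra: axpy!

    decay_rate = 0.01
    W  = [0.5 -0.2; 0.1 0.3]   # toy weights
    dW = [1.0  1.0; 1.0 1.0]   # toy gradient

    axpy!(-decay_rate, W, dW)  # in place: dW = dW - decay_rate * W
    @assert dW ≈ [1.0 1.0; 1.0 1.0] .- decay_rate .* W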

So to use L2 regularization you only need to pass the parameters 
`weight_decay_kind=:l2` and `weight_decay_rate=<your rate>` to the `fit` 
function (see my first post below for an example). 
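
For completeness, both variants would be called like this (reusing `rbm` 
and `X` from the example below; note that `:l1` is my assumption for the 
L1 kind symbol, I have only verified `:l2`):

    # L2 regularization (weight decay)
    fit(rbm, X; weight_decay_kind=:l2, weight_decay_rate=0.01)

    # L1 regularization -- assuming the kind symbol is :l1
    fit(rbm, X; weight_decay_kind=:l1, weight_decay_rate=0.01)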


On Wednesday, July 20, 2016 at 5:26:15 PM UTC+3, Ahmed Mazari wrote:
>
> Here are my weights between VISIBLE and HIDDEN units 
>
> # h: hidden units, v: visible units
>     gemm!('N', 'T', lr, h_neg, v_neg, 0.0, rbm.dW)   # dW = lr * h_neg * v_neg'
>     gemm!('N', 'T', lr, h_pos, v_pos, -1.0, rbm.dW)  # dW = lr * h_pos * v_pos' - dW
>
> this is the code for the standard weight update.
>
> Now I want to modify these two calls to add L2 regularization. How can 
> I do that efficiently? Any ideas?
> I think the change to make goes between the two calls:
>
> gemm!('N', 'T', lr, h_neg, v_neg, 0.0, rbm.dW)
> *# I think the regularization goes here*
>  gemm!('N', 'T', lr, h_pos, v_pos, -1.0, rbm.dW)
>
> thanks for the help, I'm new to these concepts
>
> On Tue, Jul 19, 2016 at 8:42 AM, Andrei Zh <[email protected]> wrote:
>
>> Boltzmann.jl <https://github.com/dfdx/Boltzmann.jl> supports both L1 
>> and L2 regularization (although it's not documented yet):
>>
>> # install if needed
>> Pkg.add("Boltzmann")
>>
>> using Boltzmann
>>
>> # create a random dataset scaled to [0, 1]
>> # (the shift works because minimum(X) < 0 for randn data)
>> X = randn(100, 2000)
>> X = (X + abs(minimum(X))) / (maximum(X) - minimum(X))
>> rbm = BernoulliRBM(100, 50)   # 100 visible units, 50 hidden
>>
>> # fit with L2 regularization (weight decay)
>> fit(rbm, X; weight_decay_kind=:l2, weight_decay_rate=0.9)
>>
>> Note that observations should be in columns, which matches many other 
>> machine learning packages but may differ from statistical packages, 
>> which often put observations in rows. 
>>
>>
>>
>> On Monday, July 18, 2016 at 6:22:19 PM UTC+3, Ahmed Mazari wrote:
>>>
>>> Hello;
>>>
>>> I'm looking for practical resources and code in Julia for a restricted 
>>> Boltzmann machine with L2 regularization.
>>>
>>> Thanks for your help
>>>
