I was going through the Mahout code. I have a couple of queries related to the 
OnlineRegression algorithm (the stochastic gradient descent implementation of 
logistic regression).

1. I saw that in the CrossFolder program the log-likelihood is computed as 

    logLikelihood += (Math.log(score) - logLikelihood) / Math.min(records, 
windowSize);

My query is: can't we instead use the formula 

    LogLikelihood = sum of log(p) or log(1 - p), 

depending on the value of y, where log(p) is computed online for each row?
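To make the contrast concrete, here is a minimal self-contained sketch (not 
Mahout's actual API; class and method names are illustrative) of the two 
accumulators side by side: the windowed moving average from the quoted code, 
and the plain cumulative sum proposed above:

```java
public class LogLikelihoodSketch {

    /** Windowed moving average of log p, as in the quoted CrossFolder code. */
    static double movingAverage(double[] logP, int windowSize) {
        double ll = 0.0;
        for (int i = 0; i < logP.length; i++) {
            // While fewer than windowSize records have been seen, this is an
            // exact running mean; afterwards it becomes an exponential decay.
            ll += (logP[i] - ll) / Math.min(i + 1, windowSize);
        }
        return ll;
    }

    /** Plain cumulative sum of log p, as proposed in the question. */
    static double cumulativeSum(double[] logP) {
        double ll = 0.0;
        for (double lp : logP) {
            ll += lp;
        }
        return ll;
    }

    public static void main(String[] args) {
        // Hypothetical scores: P(observed label) for four records.
        double[] p = {0.9, 0.8, 0.95, 0.1};
        double[] logP = new double[p.length];
        for (int i = 0; i < p.length; i++) {
            logP[i] = Math.log(p[i]);
        }
        System.out.printf("moving average  = %.4f%n", movingAverage(logP, 100));
        System.out.printf("cumulative sum  = %.4f%n", cumulativeSum(logP));
    }
}
```

Note that the two quantities answer different questions: the cumulative sum 
grows without bound as more records arrive, while the windowed average stays 
on a per-record scale, which makes it easier to compare runs of different 
lengths.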

2. The learning rate has been calculated as 

    currentLearningRate = mu0 * Math.pow(decayFactor, getStep()) 
        * Math.pow(getStep() + stepOffset, forgettingExponent)

Can we use 

    LearningRate(epoch) = initialLearningRate / (1 + epoch / annealingRate) 

instead? This inverse learning-rate schedule is guaranteed to converge 
to a limit.

Reference taken from: http://alias-i.com/lingpipe-
3.9.3/docs/api/com/aliasi/stats/AnnealingSchedule.html
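For comparison, here is a small sketch of both schedules as standalone 
functions. The first follows the quoted Mahout formula (with `step` passed in 
as a plain argument rather than via `getStep()`); the second is the inverse 
annealing schedule described above. All parameter values in `main` are 
illustrative, not Mahout defaults:

```java
public class LearningRateSketch {

    /** Mahout-style schedule, following the quoted formula term by term. */
    static double mahoutRate(double mu0, double decayFactor, long step,
                             double stepOffset, double forgettingExponent) {
        return mu0 * Math.pow(decayFactor, step)
                   * Math.pow(step + stepOffset, forgettingExponent);
    }

    /** Inverse (1/t-style) annealing, as in LingPipe's AnnealingSchedule. */
    static double inverseRate(double initialRate, long epoch,
                              double annealingRate) {
        return initialRate / (1.0 + (double) epoch / annealingRate);
    }

    public static void main(String[] args) {
        // Illustrative parameters only.
        double mu0 = 0.1, decayFactor = 0.999;
        double stepOffset = 1.0, forgettingExponent = -0.5;
        double initialRate = 0.1, annealingRate = 100.0;

        for (long step = 0; step <= 1000; step += 250) {
            System.out.printf("step %4d: mahout = %.6f, inverse = %.6f%n",
                step,
                mahoutRate(mu0, decayFactor, step, stepOffset,
                           forgettingExponent),
                inverseRate(initialRate, step, annealingRate));
        }
    }
}
```

Both schedules decay toward zero, but the inverse schedule decays like 1/t, 
which is the classical condition used in stochastic approximation convergence 
arguments, while the Mahout formula combines an exponential decay term with a 
power-law term.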

Thanks
Nabarun
