Github user dbtsai commented on the pull request:

    https://github.com/apache/spark/pull/2207#issuecomment-54002680
  
    @srowen @mengxr 
    
    I was working on OWLQN for L1 regularization at my company, and I hadn't 
followed the LBFGS code closely, so I was confused. The current code in MLlib 
actually gives the correct result.
    
    The Updater API is a little confusing. After rereading the notes I took 
when I implemented LBFGS, I see that I do obtain the current regularization 
loss correctly from the existing Updater API, using a trick: pass a zero 
vector as the gradient, a stepSize of zero, and an iteration of one.
    
    For SGD, we compute the regularization loss after the weights are updated, 
keep that value, and add it to the total loss in the next iteration. I now 
remember that I fixed a bug caused by this Updater design a couple of months 
ago: the regularization loss for the first iteration was not computed properly.
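    
    Roughly, the bookkeeping looks like the simplified sketch below (not the 
actual MLlib code; `computeGradientAndLoss` is a hypothetical stand-in for the 
mini-batch gradient/loss aggregation). The regVal carried from the previous 
update is what gets added to the recorded loss, so it has to be seeded 
correctly before the first iteration:

```scala
import scala.collection.mutable.ArrayBuffer
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.optimization.Updater

// Simplified sketch of the loss bookkeeping in the SGD loop.
def sgdLossSketch(
    computeGradientAndLoss: Vector => (Vector, Double),  // hypothetical helper
    initialWeights: Vector,
    updater: Updater,
    stepSize: Double,
    regParam: Double,
    numIterations: Int): (Vector, Array[Double]) = {

  val lossHistory = new ArrayBuffer[Double]()
  var weights = initialWeights

  // Seed regVal with the regularization loss of the initial weights, using
  // the zero-gradient / zero-stepSize / iter = 1 trick described above.
  var regVal = updater.compute(
    weights, Vectors.zeros(weights.size), 0.0, 1, regParam)._2

  for (i <- 1 to numIterations) {
    val (gradient, loss) = computeGradientAndLoss(weights)
    // Record this iteration's data loss plus the regularization loss of the
    // weights it was computed against.
    lossHistory += loss + regVal
    val (newWeights, newRegVal) =
      updater.compute(weights, gradient, stepSize, i, regParam)
    weights = newWeights
    regVal = newRegVal  // carried into the next iteration's recorded loss
  }
  (weights, lossHistory.toArray)
}
```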
    
    Hopefully the whole design issue will be addressed by #1518 [SPARK-2505][MLlib] 
Weighted Regularizer for Generalized Linear Model once it is finished.

