GitHub user tmyklebu commented on the pull request:

    https://github.com/apache/spark/pull/5005#issuecomment-83743835
  
    @dlwh: Intermediate states do not matter in ANNLS.  In ANNLS we allow 
ourselves to do a crappy job solving the least-squares problem at each 
iteration, because the result usually isn't very close to an equilibrium 
anyway, and the least-squares problems involved are always very small.  
(Chih-Jen Lin has a paper where he uses a simple projected gradient method 
instead and reports perfectly satisfactory results.)  So we can use a low 
iteration cap, *and* suboptimal solutions aren't too dangerous.  Polyak's 
projected CG trick (implemented in the MLlib NNLS code) accelerates 
convergence quite substantially when the active set does not change, making 
it rarer that we hit the iteration limit.  At this scale, function-call 
overhead when evaluating the matrix-vector product Mv can be noticeable.
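
    For reference, here is a minimal sketch (not the MLlib implementation; 
the object and method names are hypothetical) of the kind of projected 
gradient solver Lin describes, applied to the normal-equations form 
Q = A'A, c = A'b that each small ALS subproblem produces:

```scala
object ProjectedGradientNNLS {
  /** Solve min 0.5 * x'Qx - c'x subject to x >= 0, where Q = A'A (n x n,
    * row-major) and c = A'b.  iterMax is the low iteration cap discussed above. */
  def solve(Q: Array[Double], c: Array[Double], n: Int, iterMax: Int): Array[Double] = {
    val x = new Array[Double](n)
    val grad = new Array[Double](n)
    // Crude Lipschitz bound via the Frobenius norm of Q; 1/L is a safe step size.
    val l = math.sqrt(Q.map(q => q * q).sum)
    val step = if (l > 0.0) 1.0 / l else 1.0
    var iter = 0
    while (iter < iterMax) {
      // grad = Q x - c: this small dense Mv product dominates each iteration.
      var i = 0
      while (i < n) {
        var s = -c(i)
        var j = 0
        while (j < n) { s += Q(i * n + j) * x(j); j += 1 }
        grad(i) = s
        i += 1
      }
      // Projected gradient step: move against the gradient, then clip at zero.
      i = 0
      while (i < n) {
        x(i) = math.max(0.0, x(i) - step * grad(i))
        i += 1
      }
      iter += 1
    }
    x
  }
}
```

    The MLlib NNLS code improves on this plain gradient step with Polyak's 
projected CG trick, which, as noted above, accelerates convergence while the 
active set stays fixed.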

