Github user yanboliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/13262#discussion_r64226662
--- Diff: docs/ml-advanced.md ---
@@ -4,10 +4,85 @@ title: Advanced topics - spark.ml
displayTitle: Advanced topics - spark.ml
---
-# Optimization of linear methods
+* Table of contents
+{:toc}
+
+`\[
+\newcommand{\R}{\mathbb{R}}
+\newcommand{\E}{\mathbb{E}}
+\newcommand{\x}{\mathbf{x}}
+\newcommand{\y}{\mathbf{y}}
+\newcommand{\wv}{\mathbf{w}}
+\newcommand{\av}{\mathbf{\alpha}}
+\newcommand{\bv}{\mathbf{b}}
+\newcommand{\N}{\mathbb{N}}
+\newcommand{\id}{\mathbf{I}}
+\newcommand{\ind}{\mathbf{1}}
+\newcommand{\0}{\mathbf{0}}
+\newcommand{\unit}{\mathbf{e}}
+\newcommand{\one}{\mathbf{1}}
+\newcommand{\zero}{\mathbf{0}}
+\]`
+
+# Optimization of linear methods (developer)
+
+## Limited-memory BFGS (L-BFGS)
+[L-BFGS](http://en.wikipedia.org/wiki/Limited-memory_BFGS) is an optimization algorithm in the family of quasi-Newton methods for solving optimization problems of the form `$\min_{\wv \in\R^d} \; f(\wv)$`.
+The L-BFGS method approximates the objective function locally as a quadratic without evaluating the second partial derivatives of the objective function to construct the Hessian matrix.
+The Hessian matrix is approximated from previous gradient evaluations, so there is no vertical scalability issue (in the number of training features), unlike in Newton's method, where the Hessian matrix is computed explicitly.
+As a result, L-BFGS often achieves faster convergence than other first-order optimization methods.
-The optimization algorithm underlying the implementation is called [Orthant-Wise Limited-memory QuasiNewton](http://research-srv.microsoft.com/en-us/um/people/jfgao/paper/icml07scalable.pdf)
-(OWL-QN). It is an extension of L-BFGS that can effectively handle L1
-regularization and elastic net.
+[Orthant-Wise Limited-memory Quasi-Newton](http://research-srv.microsoft.com/en-us/um/people/jfgao/paper/icml07scalable.pdf) (OWL-QN) is an extension of L-BFGS that can effectively handle L1 regularization and elastic net.
+
+L-BFGS is used as a solver for [LinearRegression](api/scala/index.html#org.apache.spark.ml.regression.LinearRegression),
+[LogisticRegression](api/scala/index.html#org.apache.spark.ml.classification.LogisticRegression),
+[AFTSurvivalRegression](api/scala/index.html#org.apache.spark.ml.regression.AFTSurvivalRegression)
+and [MultilayerPerceptronClassifier](api/scala/index.html#org.apache.spark.ml.classification.MultilayerPerceptronClassifier).
+
+The `spark.ml` L-BFGS solver calls the corresponding implementation in [breeze](https://github.com/scalanlp/breeze/blob/master/math/src/main/scala/breeze/optimize/LBFGS.scala).
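+
+As a minimal usage sketch, assuming a DataFrame `training` with `label` and `features` columns (the DataFrame name is a placeholder, not part of any API):
+
+{% highlight scala %}
+import org.apache.spark.ml.classification.LogisticRegression
+
+// With elasticNetParam = 0.0 (pure L2) the plain L-BFGS solver is used;
+// a nonzero L1 component (regParam > 0 and elasticNetParam > 0) selects
+// the OWL-QN variant instead. `training` is an assumed input DataFrame.
+val lr = new LogisticRegression()
+  .setMaxIter(100)
+  .setRegParam(0.01)
+  .setElasticNetParam(0.5)
+val model = lr.fit(training)
+{% endhighlight %}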
+
+## Normal equation solver for weighted least squares (normal)
+
+`spark.ml` implements a normal equation solver for weighted least squares in [WeightedLeastSquares](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/optim/WeightedLeastSquares.scala).
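+
+A minimal sketch of requesting this solver through the Scala API, assuming a DataFrame `training` with `label`, `features` and `weight` columns (the DataFrame and column names are placeholders):
+
+{% highlight scala %}
+import org.apache.spark.ml.regression.LinearRegression
+
+// solver = "normal" requests the WeightedLeastSquares path; the default
+// "auto" may also choose it. `training` is an assumed input DataFrame.
+val lr = new LinearRegression()
+  .setSolver("normal")
+  .setWeightCol("weight")
+  .setRegParam(0.1)
+val model = lr.fit(training)
+{% endhighlight %}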
+
+Given $n$ weighted observations $(w_i, a_i, b_i)$:
+
+* $w_i$: the weight of the i-th observation
+* $a_i$: the feature vector of the i-th observation
+* $b_i$: the label of the i-th observation
+
+The number of features for each observation is $m$. We use the following weighted least squares formulation:
+`\[
+\min_{x}\frac{1}{2} \sum_{i=1}^n \frac{w_i(a_i^T x - b_i)^2}{\sum_{k=1}^n w_k} + \frac{1}{2}\frac{\lambda}{\delta}\sum_{j=1}^m(\sigma_{j} x_{j})^2
+\]`
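+
+Setting the gradient of this objective to zero yields a single $m \times m$ linear system, the normal equation the solver works with. As a derivation sketch from the formulation above (ignoring intercept handling, with $A$ stacking the $a_i^T$ as rows and $W = \mathrm{diag}(w_1, \ldots, w_n)$):
+`\[
+\left(\frac{A^T W A}{\sum_{k=1}^n w_k} + \frac{\lambda}{\delta}\,\mathrm{diag}(\sigma_1^2, \ldots, \sigma_m^2)\right) x = \frac{A^T W b}{\sum_{k=1}^n w_k}
+\]`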
--- End diff ---
