Hi

We have trained LogisticRegression with two different optimization methods, SGD
and LBFGS, in MLlib.
With the same dataset and the same training/test split, we get two
different weight vectors.

For example, we use
spark-1.1.0/data/mllib/sample_binary_classification_data.txt
as our training and test dataset, with LogisticRegressionWithSGD and
LogisticRegressionWithLBFGS as the training methods and all other parameters
the same.
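
Here is a sketch of roughly what we run (the split fractions, seed, step size,
and iteration count below are illustrative placeholders, not our exact
settings):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.classification.{LogisticRegressionWithLBFGS, LogisticRegressionWithSGD}
import org.apache.spark.mllib.util.MLUtils

val sc = new SparkContext(new SparkConf().setAppName("LRComparison"))

// Load the LIBSVM-format sample data shipped with Spark 1.1.0.
val data = MLUtils.loadLibSVMFile(sc,
  "spark-1.1.0/data/mllib/sample_binary_classification_data.txt")

// One fixed split, reused for both training runs (seed is arbitrary).
val Array(training, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)
training.cache()

// SGD: numIterations = 100, stepSize = 1.0, miniBatchFraction = 1.0
// (i.e. full-batch gradient descent).
val sgdModel = LogisticRegressionWithSGD.train(training, 100, 1.0, 1.0)

// L-BFGS on the same training set, default settings.
val lbfgsModel = new LogisticRegressionWithLBFGS().run(training)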

The precision of both methods is nearly 100%, and the AUCs are also close to
1.0; a snippet showing how we measure this is below.
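
This is roughly how we compute the metrics, assuming the two models and the
test split from the sketch above (BinaryClassificationMetrics is MLlib's
standard binary evaluator):

import org.apache.spark.mllib.classification.LogisticRegressionModel
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

// AUC on the held-out test set for a given model.
def auc(model: LogisticRegressionModel): Double = {
  model.clearThreshold()  // predict raw scores rather than 0/1 labels
  val scoreAndLabels = test.map(p => (model.predict(p.features), p.label))
  new BinaryClassificationMetrics(scoreAndLabels).areaUnderROC()
}

println("SGD AUC:    " + auc(sgdModel))
println("L-BFGS AUC: " + auc(lbfgsModel))

// Both AUCs come out near 1.0, yet the weight vectors differ:
println("SGD weights:    " + sgdModel.weights)
println("L-BFGS weights: " + lbfgsModel.weights)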
As far as I know, a convex optimization problem should converge to the global
minimum (we use SGD with a mini-batch fraction of 1.0), but we still get two
different weight vectors. Is this expected, or does it make sense?
