The test accuracy doesn't determine the total loss. Any decision boundary
between (-1, 1) separates the points -1 and +1 and gives you 1.0 accuracy,
but the corresponding losses are different. -Xiangrui
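
To make that concrete, here is a tiny standalone sketch (plain Scala, no
MLlib required; the points, labels, and weights are made up for
illustration). Every positive weight classifies both points perfectly, yet
the total logistic loss differs:

    object LossVsAccuracy {
      // Logistic loss of a 1-D point x with label y in {0, 1} under weight w
      // (no intercept): loss = log(1 + exp(-(2y - 1) * w * x)).
      def logisticLoss(w: Double, x: Double, y: Double): Double =
        math.log(1.0 + math.exp(-(2 * y - 1) * w * x))

      def main(args: Array[String]): Unit = {
        // Two separable points: x = -1 labeled 0 and x = +1 labeled 1.
        val data = Seq((-1.0, 0.0), (1.0, 1.0))
        for (w <- Seq(0.5, 1.0, 5.0)) {
          // Any w > 0 separates the points, so accuracy is always 1.0 ...
          val acc = data.count { case (x, y) =>
            (if (w * x > 0) 1.0 else 0.0) == y
          }.toDouble / data.size
          // ... but the total loss depends on where the optimizer stopped.
          val loss = data.map { case (x, y) => logisticLoss(w, x, y) }.sum
          println(f"w = $w%4.1f  accuracy = $acc%.1f  total loss = $loss%.4f")
        }
      }
    }

So two optimizers that stop at different weights can report identical
accuracy while sitting at different points on the loss curve.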

On Sun, Sep 28, 2014 at 2:48 AM, Yanbo Liang <yanboha...@gmail.com> wrote:
> Hi
>
> We have trained LogisticRegression in MLlib with two different
> optimization methods, SGD and LBFGS.
> With the same dataset and the same training/test split, we get two
> different weight vectors.
>
> For example, we use
> spark-1.1.0/data/mllib/sample_binary_classification_data.txt as our training
> and test dataset, with LogisticRegressionWithSGD and
> LogisticRegressionWithLBFGS as the training methods and all other
> parameters the same.
>
> The precisions of the two methods are both nearly 100%, and the AUCs are
> also near 1.0.
> As far as I know, a convex optimization problem should converge to its
> global minimum. (We use SGD with a mini-batch fraction of 1.0.)
> But we get two different weight vectors. Is this expected? Does it make
> sense?
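
For reference, a minimal sketch of the comparison described above, assuming
the Spark 1.1 MLlib Scala API (the split ratios, seed, and iteration count
here are illustrative choices, not the original settings):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.classification.{LogisticRegressionWithLBFGS, LogisticRegressionWithSGD}
    import org.apache.spark.mllib.util.MLUtils

    object CompareWeights {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("CompareWeights"))

        // Same dataset and the same train/test split for both optimizers.
        val data = MLUtils.loadLibSVMFile(
          sc, "data/mllib/sample_binary_classification_data.txt")
        val Array(training, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)
        training.cache()

        // train(input, numIterations) uses a mini-batch fraction of 1.0 by
        // default, i.e. full-batch gradient descent.
        val sgdModel = LogisticRegressionWithSGD.train(training, 100)
        val lbfgsModel = new LogisticRegressionWithLBFGS().run(training)

        // Both models can score near-perfectly on the test split while the
        // weight vectors themselves differ.
        println(s"SGD weights:   ${sgdModel.weights}")
        println(s"LBFGS weights: ${lbfgsModel.weights}")

        sc.stop()
      }
    }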
