Ben,

I can confirm your results with penalty='none' and C=1e9. In both cases you are effectively running an unpenalized logistic regression, which is usually less numerically stable than a lightly regularized one, depending on how collinear the data are.
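For reference, the two configurations look something like this (a minimal sketch; the solver choice here is just an example):

from sklearn.linear_model import LogisticRegression

# penalty='none' applies no regularization at all;
# C=1e9 still applies a (tiny) amount of L2 regularization
clf_none = LogisticRegression(penalty='none', solver='lbfgs')
clf_bigC = LogisticRegression(C=1e9, solver='lbfgs')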

Running that same code with
 - a larger penalty (i.e. smaller C values), or
 - a larger number of samples
gives me the same coefficients across solvers (up to some tolerance); see the sketch below.
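Something along these lines (a sketch with synthetic data, not your exact setup):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# synthetic data just for illustration
X, y = make_classification(n_samples=10000, random_state=0)

lbfgs = LogisticRegression(C=1.0, solver='lbfgs', max_iter=10000).fit(X, y)
saga = LogisticRegression(C=1.0, solver='saga', max_iter=10000).fit(X, y)

# with some regularization, both solvers reach the same optimum
print(np.allclose(lbfgs.coef_, saga.coef_, atol=1e-3))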

You can also see that SAGA's convergence is poor from the fact that it needs 196000 epochs/iterations to converge.
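You can inspect this via the n_iter_ attribute of the fitted estimator, e.g. (again a sketch with synthetic data):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)

# an unpenalized saga fit; n_iter_ reports the epochs actually used
clf = LogisticRegression(penalty='none', solver='saga',
                         max_iter=500000).fit(X, y)
print(clf.n_iter_)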

Actually, I have often seen convergence issues with SAG on small datasets (in unit tests); I'm not fully sure why.

--
Roman

On 09/10/2019 22:10, serafim loukas wrote:
The predictions across solvers are exactly the same when I run the code.
I am using version 0.21.3. What is yours?


In [13]: import sklearn

In [14]: sklearn.__version__
Out[14]: '0.21.3'


Serafeim



On 9 Oct 2019, at 21:44, Benoît Presles <benoit.pres...@u-bourgogne.fr> wrote:

(y_pred_lbfgs==y_pred_saga).all() == False


_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn

