Hi Andy,
As you can notice in the code, I fixed C=1e9, so the penalty is
negligible and the intercept penalization in liblinear has almost no
effect; therefore I get the same solutions with these solvers when
everything goes well.
How can I check the objective of the l-bfgs and liblinear solvers with
sklearn?
Best regards,
Ben
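For reference, one way to check the objective is to plug the fitted
coefficients into the formula from the LogisticRegression docs. A minimal
sketch, not a scikit-learn API: lr_objective is a hypothetical helper, and
clf, X, y are placeholders for a fitted binary estimator and its data.

import numpy as np

def lr_objective(clf, X, y):
    # Binary L2-penalized objective minimized by scikit-learn:
    #   0.5 * ||w||^2 + C * sum_i log(1 + exp(-y_i * (x_i . w + b)))
    # with y_i in {-1, +1}; note that liblinear also penalizes b.
    w = clf.coef_.ravel()
    b = clf.intercept_[0]
    y_pm = np.where(y == clf.classes_[1], 1.0, -1.0)
    margins = y_pm * (X @ w + b)
    loss = np.logaddexp(0.0, -margins).sum()  # stable log(1 + exp(-m))
    return 0.5 * w @ w + clf.C * loss

Comparing lr_objective(clf_lbfgs, X, y) against lr_objective(clf_liblinear,
X, y) then shows which solver actually reached the lower objective.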
Hi Ben.
Liblinear and l-bfgs might both converge, but to different solutions,
given that liblinear penalizes the intercept.
There are also issues with ill-conditioned problems that are hard to
detect.
My impression of SAGA was that the convergence checks are too loose and
we should improve them.
With lbfgs n_iter_ = 48, with saga n_iter_ = 326581, with liblinear
n_iter_ = 64.
On 08/01/2020 21:18, Guillaume Lemaître wrote:
We issue a convergence warning. Can you check n_iter_ to be sure that you
did converge to the stated tolerance?
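A minimal sketch of such a check (clf, X, y are placeholders;
ConvergenceWarning lives in sklearn.exceptions):

import warnings
from sklearn.exceptions import ConvergenceWarning

with warnings.catch_warnings():
    warnings.simplefilter("error", category=ConvergenceWarning)
    clf.fit(X, y)  # raises instead of merely warning on non-convergence
print(clf.n_iter_)  # hitting max_iter means the tolerance was not reached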
On Wed, 8 Jan 2020 at 20:53, Benoît Presles wrote:
Dear sklearn users,
I still have some issues concerning logistic regression.
I compared, on the same (simulated) data, sklearn with three different
solvers (lbfgs, saga, liblinear) and statsmodels.
When everything goes well, I get the same results between lbfgs, saga,
liblinear and statsmodels.
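A hedged sketch of this kind of comparison; all settings here (sample
size, the large C standing in for no penalty) are assumptions, not the
original benchmark:

import statsmodels.api as sm
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
for solver in ("lbfgs", "saga", "liblinear"):
    clf = LogisticRegression(C=1e9, solver=solver, max_iter=100000).fit(X, y)
    print(solver, clf.coef_.ravel(), clf.intercept_)
res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print("statsmodels", res.params)  # first entry is the intercept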
On 10/10/19 1:14 PM, Benoît Presles wrote:
Thanks for your answers.
On my real data, I do not have so many samples. I have a bit more
than 200 samples in total and I also would like to get some results
with unpenalized logistic regression.
What do you suggest? Should I switch to the lbfgs solver? Am I sure th
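For an unpenalized fit on a small dataset, something along these lines
should work (a sketch only; X, y are placeholders, and penalty=None is
spelled penalty="none" in scikit-learn 0.22-1.1):

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

model = make_pipeline(
    StandardScaler(),  # scaling keeps the solvers well conditioned
    LogisticRegression(penalty=None, solver="lbfgs", max_iter=10000),
)
model.fit(X, y)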
Oops, I did not see Roman's answer. Sorry about that. It comes back to
the same conclusion :)
On Wed, 9 Oct 2019 at 23:37, Guillaume Lemaître wrote:
Uhm, actually increasing the number of samples solves the convergence issue.
SAGA is most probably not designed to work with such a small sample size.
On Wed, 9 Oct 2019 at 23:36, Guillaume Lemaître wrote:
I slightly changed the benchmark so that it uses a pipeline, and plotted
the coefficients:
https://gist.github.com/glemaitre/8fcc24bdfc7dc38ca0c09c56e26b9386
I only see one of the 10 splits where SAGA is not converging, otherwise the
coefficients
look very close (I don't attach the figure here but they
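A sketch of that kind of per-split comparison (X, y are placeholders, and
the exact pipeline in the gist may differ):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

model = make_pipeline(StandardScaler(),
                      LogisticRegression(solver="saga", max_iter=100000))
coefs = []
for train_idx, _ in StratifiedKFold(n_splits=10).split(X, y):
    model.fit(X[train_idx], y[train_idx])
    coefs.append(model[-1].coef_.ravel())
print(np.std(coefs, axis=0))  # per-coefficient spread across the 10 splits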
Ben,
I can confirm your results with penalty='none' and C=1e9. In both cases,
you are running a mostly unpenalized logistic regression. Usually
that's less numerically stable than with a small regularization,
depending on the data collinearity.
Running that same code with
- larger penalty
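To illustrate the point about regularization (a sketch; the C values are
arbitrary and X, y are placeholders):

from sklearn.linear_model import LogisticRegression

for C in (1e9, 1.0, 0.1):  # from essentially unpenalized to mildly penalized
    clf = LogisticRegression(C=C, solver="lbfgs", max_iter=10000).fit(X, y)
    print(C, clf.coef_.ravel())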
The predictions across solvers are exactly the same when I run the code.
I am using version 0.21.3. What is yours?
In [13]: import sklearn
In [14]: sklearn.__version__
Out[14]: '0.21.3'
Serafeim
On 9 Oct 2019, at 21:44, Benoît Presles <benoit.pres...@u-bourgogne.fr> wrote:
Dear scikit-learn users,
I did what you suggested (see code below) and I still do not get the
same results between solvers. I do not have the same predictions and I
do not have the same coefficients.
Best regards,
Ben
Here is the new source code:
from sklearn.datasets import make_classification
Could you generate more samples, set the penalty to none, reduce the
tolerance, and check the coefficients instead of the predictions? This is
just to be sure that it is not only a numerical error.
Sent from my phone - sorry to be brief and for potential misspellings.
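A sketch of the suggested check; all settings (sample size, tolerance, the
large C standing in for no penalty, which liblinear requires) are
assumptions:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100000, n_features=10, random_state=0)
coefs = {}
for solver in ("lbfgs", "saga", "liblinear"):
    clf = LogisticRegression(C=1e12, solver=solver, tol=1e-8,
                             max_iter=100000).fit(X, y)
    coefs[solver] = clf.coef_.ravel()
print(np.abs(coefs["lbfgs"] - coefs["saga"]).max())  # shrinks as tol decreases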
Dear scikit-learn users,
Do you think it is a bug in scikit-learn?
Best regards,
Ben
On 08/10/2019 at 20:19, Benoît Presles wrote:
As you can notice in the code below, I do scale the data. I do not get any
convergence warning and moreover I always have n_iter_ < max_iter.
On 8 Oct 2019 at 19:51, Andreas Mueller wrote:
I'm pretty sure SAGA is not converging. Unless you scale the data, SAGA
is very slow to converge.
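A sketch of the scaling step (X, y are placeholders; StandardScaler is one
common choice):

from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X)
clf = LogisticRegression(solver="saga", max_iter=10000).fit(X_scaled, y)
print(clf.n_iter_)  # typically far fewer iterations than on unscaled data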
On 10/8/19 7:19 PM, Benoît Presles wrote:
Dear scikit-learn users,
I am using logistic regression to make some predictions. On my own
data, I do not get the same results between solvers. I manage