Sorry for the incomplete email.

Hi,

My question was that even after trying several solvers, I don't get convergence
for logistic regression. The loss value, as computed in the previous email,
was lower for max_iter=10 than for max_iter=30. So does the optimization
method diverge, and how do we monitor and store the loss (or any metric)
after each iteration?
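For reference, one way to record the loss after each pass is to refit with warm_start=True and max_iter=1, so each call to fit continues from the previous coefficients. This is a minimal sketch, not the only approach; it uses synthetic data from make_classification as a stand-in for your X_train and y_train:

```python
# Minimal sketch: track log-loss per lbfgs pass via warm_start.
# X_train/y_train are synthetic stand-ins here -- replace with your data.
import warnings

import numpy as np
from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Each 1-iteration fit intentionally "fails to converge", so silence the warning.
warnings.simplefilter('ignore', ConvergenceWarning)

X_train, y_train = make_classification(n_samples=500, n_features=20,
                                       random_state=0)

reg = 0.1
lr = LogisticRegression(C=1 / reg, solver='lbfgs', fit_intercept=True,
                        warm_start=True, max_iter=1)

losses = []
for _ in range(30):
    lr.fit(X_train, y_train)  # continues from the previous coefficients
    losses.append(log_loss(y_train, lr.predict_proba(X_train)))

print(losses)
```

Alternatively, passing verbose=1 makes the lbfgs solver print its own per-iteration progress, though that output is harder to capture programmatically than the list built above.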

Thanks
Mahesh

On Sat, Feb 11, 2017 at 3:18 PM, Mahesh Chandra <
[email protected]> wrote:

> from sklearn.linear_model import LogisticRegression
> from sklearn.metrics import log_loss
> from numpy import linalg as LA
>
> reg = 0.1
> lr = LogisticRegression(C=1 / reg, max_iter=100,
>                         fit_intercept=True, solver='lbfgs').fit(X_train, y_train)
> ytrain_hat = lr.predict_proba(X_train)
> loss = log_loss(y_train, ytrain_hat)
> print(loss)
> # The L2 penalty uses the squared norm:
> print(loss + 0.5 * reg * LA.norm(lr.coef_) ** 2)
>
> Maybe I am doing it wrong.
>
_______________________________________________
scikit-learn mailing list
[email protected]
https://mail.python.org/mailman/listinfo/scikit-learn
