2011/9/25  <[email protected]>:
> The predict_proba values are just nonlinear monotonic transformations
> of the parameters. So the difference is only in specifying the
> convergence tolerance.

That's what I thought, and I'd be so lazy as to just let the client
determine the tolerance parameter ;)

> However, the problem we just ran into is the complete
> (quasi-)separation case. In that case the predict_proba values
> converge to 0 and 1 while the parameters go off to infinity, so the
> boundary behavior can be messy.
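The separation behavior quoted above is easy to demonstrate. Below is a
minimal sketch (toy data and the C=1e6 setting are my own illustration, not
from this thread): on perfectly separable data, weakening the regularization
lets the coefficient grow very large, while the predicted probabilities
merely saturate towards 0 and 1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, perfectly separable 1-D toy data
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])

# Nearly unregularized fit: the coefficient heads off towards infinity
weak = LogisticRegression(C=1e6, max_iter=10000).fit(X, y)
# Default regularization keeps the coefficient finite for comparison
strong = LogisticRegression(C=1.0).fit(X, y)

print(weak.coef_, strong.coef_)          # weak's coefficient is much larger
print(weak.predict_proba(X)[:, 1])       # probabilities saturate near 0 and 1
```

So the parameters are numerically unstable near separation, while the
probabilities stay bounded in [0, 1].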

Right, so unless I map the parameters back from log-space to [0,1]
(which is exactly what NB's predict_proba does), predict_proba would
actually be a safer bet than coef_ + intercept_?
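For the binary LogisticRegression case, the mapping in question is just the
logistic sigmoid of the linear decision function, so predict_proba can be
reproduced from coef_ and intercept_ directly. A quick sketch (the toy data
is my own, assuming a fitted binary sklearn LogisticRegression):

```python
import numpy as np
from scipy.special import expit  # logistic sigmoid
from sklearn.linear_model import LogisticRegression

# Toy binary-classification data, assumed for illustration
rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = (X[:, 0] + 0.5 * rng.randn(100) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# predict_proba for the positive class is sigmoid(X @ coef_.T + intercept_)
manual = expit(X @ clf.coef_.T + clf.intercept_).ravel()
print(np.allclose(manual, clf.predict_proba(X)[:, 1]))
```

Either route gives the same numbers here; predict_proba just saves the
caller from doing the sigmoid by hand.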

-- 
Lars Buitinck
Scientific programmer, ILPS
University of Amsterdam

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general