On Tue, Nov 27, 2012 at 9:11 AM, Ak wrote:
> For OneVsRestClassifier, if separate classifiers are trained for every
> category, would online learning mean training only a particular
> classifier for
> the category that the new sample belongs to?
>
Remember that one-vs-rest trains each base classifier on all of the samples: every binary classifier sees every sample, either as a positive example of its own category or as a negative example of the rest. So an online update with a new sample would update all of the binary classifiers, not just the one for that sample's category.
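Something along these lines, as a rough sketch rather than tested code (it assumes a recent scikit-learn, >= 1.1, where OneVsRestClassifier exposes partial_fit and SGDClassifier accepts loss="log_loss"; the data are made up):

import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.multiclass import OneVsRestClassifier

classes = np.array([0, 1, 2])
X = np.random.rand(20, 4)                      # made-up training batch
y = np.random.choice(classes, size=20)

# loss="log_loss" gives SGDClassifier, and hence the wrapper, predict_proba.
ovr = OneVsRestClassifier(SGDClassifier(loss="log_loss"))

# The first partial_fit call must list all classes so that every binary
# subproblem can be set up in advance.
ovr.partial_fit(X, y, classes=classes)

# A single new sample with label 1 still updates *all three* binary
# classifiers: it is a positive example for class 1 and a negative example
# for classes 0 and 2.
x_new = np.random.rand(1, 4)
ovr.partial_fit(x_new, [1])

print(ovr.predict_proba(x_new))                # one probability per class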
For OneVsRestClassifier, if separate classifiers are trained for every
category, would online learning mean training only a particular classifier for
the category that the new sample belongs to?
E.g., if I train OneVsRestClassifier(SGDClassifier()), right now for
predicting the probability
Could you please paste the y_true and y_probas you use to compute the
roc_curve? Sometimes the curve can look very strange because the denominator
of precision is the number of retrieved documents, which varies with the
threshold. Could you please also check whether recall is decreasing in
your case?
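For reference, a made-up toy example (not your data) of what I mean:

import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                       # hypothetical labels
y_probas = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.55])   # hypothetical scores

# ROC curve: false positive rate vs. true positive rate over all thresholds.
fpr, tpr, roc_thresholds = roc_curve(y_true, y_probas)

# Precision-recall curve: the denominator of precision is the number of
# retrieved documents, which changes with the threshold, so this curve is
# not necessarily smooth.
precision, recall, pr_thresholds = precision_recall_curve(y_true, y_probas)

# recall, as returned here, should be monotonically non-increasing.
print(np.all(np.diff(recall) <= 0))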
2012/11/26 François Kawala :
Hello everybody,
I'm interacting with the scikit-learn people for the first time, and I
have to say that it's amazing work you've done here. I am very grateful
for the time you've spent so that beginners like me can play with such
great tools.
Having seen this example
http://scikit-lear
Webinar signup:
Advances in Gradient Boosting: the Power of Post-Processing
December 14, 10-11 a.m., PST
Webinar Registration:
http://2.salford-systems.com/gradientboosting-and-post-processing/
Course Outline:
*Gradient Boosting and Post-Processing:
o What is missing from Gradient Boosting
Funny: when I try to 'make test', I get the error
======================================================================
FAIL: sklearn.tests.test_common.test_classifiers_train
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/li