>> Yes, the score_func option is intended exactly for this purpose.
> The problem I have with it is that my score function is defined in
> terms of the probabilistic outcome of the classifier (i.e.
> predict_proba) whereas the score_func's caller passes it the predicted
> class (i.e. the outcome of
Hello,
I am trying to classify a large document set with LinearSVC. I get good
accuracy. However, I was wondering how to optimize the interface to this
classifier. For example, if I have a predict interface that accepts the raw
document and uses a precomputed classifier object, the time to predict
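For reference, a minimal sketch of one way to set this up (not from the original message; the vectorizer choice and the toy data are assumptions): bundling the feature extraction step and LinearSVC into a Pipeline, so that a single precomputed object accepts raw documents in predict.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for the real corpus.
train_docs = ["spam spam spam", "ham and eggs", "more spam here", "just eggs"]
y_train = [1, 0, 1, 0]

# The pipeline bundles vectorization and the classifier, so predict()
# can be called directly on raw document strings.
text_clf = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svm", LinearSVC()),
])
text_clf.fit(train_docs, y_train)
print(text_clf.predict(["spam and eggs"]))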
Hi Andreas,
> Yes, the score_func option is intended exactly for this purpose.
The problem I have with it is that my score function is defined in
terms of the probabilistic outcome of the classifier (i.e.
predict_proba) whereas the score_func's caller passes it the predicted
class (i.e. the outcome
Hi Christian.
Yes, the score_func option is intended exactly for this purpose.
If you are using GridSearchCV, you have to take care of whether you have
a score or a loss function, but if you overload ``score``, you have
the same problem.
Cheers,
Andy
On 09/21/2012 08:43 PM, Christian Jauvin wrote:
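For what it's worth, a minimal sketch of one way around this in later scikit-learn versions (the metric, the toy data, and the module paths are illustrative assumptions, not from the thread): the scoring parameter of cross_val_score also accepts a callable with the signature (estimator, X, y), which can call predict_proba itself; negating a loss keeps the greater-is-better convention mentioned above.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

def neg_log_loss_scorer(estimator, X, y):
    # Scores on the probabilistic output rather than hard predictions;
    # negated so that higher values are better, as the CV machinery expects.
    return -log_loss(y, estimator.predict_proba(X))

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         scoring=neg_log_loss_scorer)
print(scores)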
Hi Andreas,
You mean that I could use cross_val_score's score_func argument? I
tried it once, and it didn't work for some reason, so I stuck with
the inheritance solution, which is really a three-line modification
anyway. Is there another way?
Best,
Christian
On 21 September 2012 15:36, Andreas Mueller wrote:
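For reference, a sketch of the kind of inheritance-based workaround described here (the metric is a placeholder assumption; the thread does not say which score is actually used): overriding score so that cross-validation picks up a probability-based metric.

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss  # placeholder for the actual custom metric

class ProbaScoredRF(RandomForestClassifier):
    def score(self, X, y):
        # Score on the probabilistic output instead of hard predictions;
        # negated log-loss so that higher is still better.
        return -log_loss(y, self.predict_proba(X))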
Hi Gilles,
> Are you sure the RF classifier is the same in both cases? (have you set
> the random state to the same value?)
You're right, I forgot about that!
I just tested it, and both classifiers indeed produce identical
predictions with the same random_state value.
Thanks,
Christian
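A quick self-contained way to check this (the dataset is a toy assumption): with the same random_state, two otherwise identical forests produce identical predictions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)
a = RandomForestClassifier(random_state=0).fit(X, y)
b = RandomForestClassifier(random_state=0).fit(X, y)
# With the seed fixed, both forests make exactly the same predictions.
assert np.array_equal(a.predict(X), b.predict(X))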
---
Hi Christian.
Why do you need to inherit from the classifier to use a different
scoring function?
That should really not be necessary.
Cheers,
Andy
Hi Christian,
The score method does not play any role in fit.
Are you sure the RF classifier is the same in both cases? (have you set
the random state to the same value?)
Can you provide some code in any case?
Thanks,
Gilles
On 21 September 2012 20:45, Christian Jauvin wrote:
> I have a classifier
I have a classifier which derives from RandomForestClassifier, in
order to implement a custom "score" method. This obviously affects
scoring results obtained with cross-validation, but I observed that it
seems to also affect the actual predictions. In other words, the same
RF classifier with two di
On 09/21/2012 11:59 AM, Sheila the angel wrote:
> Thanks for the reply.
> metrics.confusion_matrix is what I was looking for... still I need to
> modify it a little.
>
> Moreover, it would be great if we had
> classifier.score_by_class()
This will quite probably not happen as it blows up the API.
> OR
Thanks for the reply.
metrics.confusion_matrix is what I was looking for... still I need to modify
it a little.
Moreover, it would be great if we had
classifier.score_by_class()
OR
classifier.score(X_test, y_test, score_by_class=True)
a method which would return the accuracy for each individual class. This will g
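There is no such method in scikit-learn, but a per-class accuracy can be derived from the confusion matrix; a minimal sketch, with score_by_class as a hypothetical helper name (not a library function):

import numpy as np
from sklearn.metrics import confusion_matrix

def score_by_class(y_true, y_pred):
    # Hypothetical helper: fraction of correctly classified samples per class,
    # i.e. the diagonal of the confusion matrix divided by the row sums.
    cm = confusion_matrix(y_true, y_pred)
    return np.diag(cm) / cm.sum(axis=1)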
2012/9/21 Andreas Mueller :
> On 09/20/2012 09:48 PM, Lars Buitinck wrote:
>> Below are some excerpts from the "Build failed" message that I got
>> after git rm'ing the sparse linear models code. The strange thing is
>> that it seems to start rebuilding in the middle of the tests. The same
>> thing
Hi Sheila.
For this you can use metrics.confusion_matrix and
metrics.classification_report.
You have to get the predictions using classifier.predict(X_test), then
feed them to the evaluation functions.
Btw, testing and training on the same data is usually not very
informative, because of overfitting.
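In concrete terms, something like the following (iris and the train/test split are just an illustration, not from the thread):

from sklearn.datasets import load_iris
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(*load_iris(return_X_y=True),
                                                    random_state=0)
y_pred = SVC().fit(X_train, y_train).predict(X_test)
print(confusion_matrix(y_test, y_pred))
# classification_report gives per-class precision, recall and F1.
print(classification_report(y_test, y_pred))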
On 09/20/2012 09:48 PM, Lars Buitinck wrote:
> Below are some excerpts from the "Build failed" message that I got
> after git rm'ing the sparse linear models code. The strange thing is
> that it seems to start rebuilding in the middle of the tests. The same
> thing happened when I tried nosetests s
I got it: there must be some files left on the server (these are files
that you just removed, right?) and coverage is trying to report some
coverage on them.
It could be .pyc, or something in the .coverage.
I seem to remember that @ogrisel already had this problem, and solved it.
Does it ring a bell?
Hello all,
Is there any method to get a separate classification accuracy score for
each class from any classifier?
The score method
>>> SVC().fit(X, y).score(X, y)
gives the overall classification accuracy, but not per class. I need an
individual score for each class!
Thanks
--
Sheila
On Fri, Sep 21, 2012 at 12:36:48PM +0200, Gael Varoquaux wrote:
> I haven't had time to look at this, but the problem may lie in that the
> common tests run a 'configure' step of the setup.py, to test the
> setup.py. This is probably where it fails.
OK, now that I am digging a bit more, this is no
On Thu, Sep 20, 2012 at 10:48:32PM +0200, Lars Buitinck wrote:
> Below are some excerpts from the "Build failed" message that I got
> after git rm'ing the sparse linear models code. The strange thing is
> that it seems to start rebuilding in the middle of the tests.
I haven't had time to look at this
Hello,
I was asking Olivier about CRF in sklearn and I ended up discussing my
experience with sklearn with him.
I am forwarding my email to this list (I hope it's the right one) on his
suggestion.
Thanks to the sklearn team (especially the text classification module
authors) for helping me win a c