I found this very nice JMLR paper that seems to address exactly the
problem I was raising: "How to Explain Individual Classification
Decisions" [0]. They provide a general framework allowing to analyse
the individual decisions of any classifier in terms of an "explanation
vector", which corresponds to the local gradient of the class
probability function: the value of its component d corresponds to the
influence of the feature d with respect to the chosen class. Given its
generality, I think it would make a very useful addition to the
sklearn framework (although for my taste, it is a bit heavy on the
math side). What do you think?

[0] http://jmlr.csail.mit.edu/papers/volume11/baehrens10a/baehrens10a.pdf
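To make the idea concrete, here is a minimal sketch of what such an
"explanation vector" could look like in sklearn terms. This is not the
paper's actual estimator (they fit a Parzen-window probability model and
differentiate it analytically); it just approximates the local gradient of
predict_proba by central finite differences, with a hypothetical helper
name and eps chosen for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def explanation_vector(clf, x, class_idx, eps=1e-4):
    """Finite-difference estimate of the local gradient of
    P(class_idx | x) with respect to each feature of x.
    Component d approximates the influence of feature d."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for d in range(x.size):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[d] += eps
        x_minus[d] -= eps
        p_plus = clf.predict_proba(x_plus.reshape(1, -1))[0, class_idx]
        p_minus = clf.predict_proba(x_minus.reshape(1, -1))[0, class_idx]
        grad[d] = (p_plus - p_minus) / (2 * eps)  # central difference
    return grad

# Toy example: explain one prediction of a logistic regression model.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)
chosen_class = int(clf.predict(X[:1])[0])
ev = explanation_vector(clf, X[0], class_idx=chosen_class)
print(ev)  # one local-influence value per feature
```

The same function works for any classifier exposing predict_proba, which
is what makes the framework attractive for a library like sklearn.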



On 2 October 2012 14:34, Christian Jauvin <[email protected]> wrote:
>> * "Advice for applying Machine Learning" [1] gives general recommendations
>> on how
>> to diagnose trained models
>
> Thanks Immanuel, I find this document in particular to be a great
> source of very practical advice and ideas.
>
> Christian

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general