Hi everyone,

I have a (rather vague) intuition that studying the "reasons" that
led a trained classifier to behave the way it did on particular
instances of a problem might be a good way to deepen our understanding
of it. For instance, on a very imbalanced problem it might be useful
to identify the few cases where the trained classifier answered
correctly (in terms of classification or probabilistic output) on the
least likely class, in order to determine which particular features
played a positive role and which didn't. The way I see it, this would
be a bit like "reverse engineering the features".
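To make it concrete, here is a minimal sketch of the kind of thing I
have in mind, assuming a linear model where each feature's
per-instance contribution to the decision function can be read off as
coefficient times feature value (the toy dataset and the 5% class
proportion below are just made-up illustrations):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy imbalanced problem: roughly 5% positive class (made-up numbers).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)

clf = LogisticRegression(class_weight='balanced').fit(X, y)

# Pick out the minority-class instances the model got right.
pred = clf.predict(X)
hits = np.flatnonzero((y == 1) & (pred == 1))

# For a linear model the decision function is w . x + b, so each
# feature's contribution on a given instance is simply coef * value.
for i in hits[:3]:
    contrib = clf.coef_[0] * X[i]
    top = np.argsort(contrib)[::-1][:3]
    print("instance", i, "-> most positive features:", top,
          "contributions:", contrib[top])

For a linear model this decomposition is exact; for non-linear models
(trees, ensembles, kernels) I imagine something analogous would be
needed, which is part of my question.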

So my question: is there a mechanism, or perhaps an existing framework
or theory, for doing this? And would something along these lines
currently be possible with scikit-learn?

Thanks,

Christian
