I think you can also use RFECV directly without doing any wrapping.
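For reference, a minimal sketch of that approach (illustrative only, not code from this thread), assuming a linear SVM and generic X / y; RFECV records a cross-validated score at every elimination step:

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFECV
    from sklearn.svm import SVC

    # Synthetic data, just for illustration
    X, y = make_classification(n_samples=200, n_features=50, random_state=0)

    selector = RFECV(
        estimator=SVC(kernel="linear"),  # RFE/RFECV need an estimator exposing coef_
        step=0.1,                        # remove 10% of the remaining features per iteration
        cv=5,
        scoring="accuracy",
        min_features_to_select=1,
    )
    selector.fit(X, y)

    # One mean CV score per candidate number of features
    # (scikit-learn >= 1.0; older versions expose selector.grid_scores_ instead)
    print(selector.cv_results_["mean_test_score"])
    print("Optimal number of features:", selector.n_features_)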
On 11/20/19 12:24 AM, Brown J.B. via scikit-learn wrote:
Dear Malik,
Your request to do performance checking of the steps of SVM-RFE is a
pretty common task.
Since the contributors to scikit-learn have done a great job of making the
interface to RFE easy to use, the only real work required from you
would be to build a small wrapper function that:
(a) computes the feature counts (step sizes) you want to output prediction
performances for, and
(b) loops over those step sizes, using each one as the n_features_to_select
parameter of RFE (built from the remaining features), making
predictions from an SVM retrained (and possibly optimized) on the
reduced feature set, and then outputting the metric(s) appropriate to
your problem.
Tracing the feature weights is then done by accessing the "coef_"
attribute of the trained linear SVM.
This can be output in loop step (b) as well.
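A rough sketch of such a wrapper (illustrative only, not from the original post), assuming a linear SVM, a single held-out split for scoring, and accuracy as the metric; the names rfe_trace and feature_sizes are placeholders:

    from sklearn.feature_selection import RFE
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def rfe_trace(X, y, feature_sizes, random_state=0):
        """For each requested feature count, fit RFE with a linear SVM,
        score on a held-out split, and record the weights of the kept features."""
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.25, random_state=random_state
        )
        results = []
        for n in feature_sizes:
            rfe = RFE(
                estimator=SVC(kernel="linear"),
                n_features_to_select=n,
                step=0.1,  # eliminate 10% of the remaining features per iteration
            )
            rfe.fit(X_tr, y_tr)
            accuracy = rfe.score(X_te, y_te)   # (b) performance on the reduced feature set
            weights = rfe.estimator_.coef_     # coef_ of the SVM refit on the kept features
            results.append((n, accuracy, rfe.support_, weights))
        return results

If you also want to optimize the SVM (e.g. its C value) at each step, that tuning would go inside the loop before the RFE fit.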
> where each time 10% of the features are removed.
> How can one get the accuracy over all the levels of the elimination
> stages? For example, I want to get performance over 1000 features,
> 900 features, 800 features, ..., 2 features, 1 feature.
Just a technicality, but with a 10% reduction at each step you would have
1000, 900, 810, 729, 656, ...
Either way, if you allow your wrapper function to take a pre-computed
list of feature sizes, you can flexibly switch between a systematic way
and a context-informed way of specifying the feature sizes (and
resulting weights) to trace.
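As a small illustration of that flexibility (a hypothetical helper, not from the thread), one way to generate either schedule and pass it to the wrapper sketched above:

    def geometric_sizes(n_features, keep_fraction=0.9, minimum=1):
        # Remove ~10% of the remaining features at each step, rounding down
        sizes = []
        n = n_features
        while n > minimum:
            sizes.append(n)
            n = int(n * keep_fraction)
        sizes.append(minimum)
        return sizes

    print(geometric_sizes(1000)[:5])   # [1000, 900, 810, 729, 656]

    # ...or a hand-picked, context-informed list of feature counts:
    custom_sizes = list(range(1000, 0, -100)) + [50, 20, 10, 5, 2, 1]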
Hope this helps.
J.B. Brown
Kyoto University Graduate School of Medicine
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn