2014-10-20 22:08 GMT+02:00 George Bezerra <gbeze...@gmail.com>:
> Not an expert, but I think the idea is that you remove (or add) features one
> by one, starting from the ones that have the least (or most) impact.
>
> E.g., try removing a feature, if performance improves, keep it that way and
> move on to the next feature. It's a greedy approach; not optimal, but avoids
> exponential complexity.

No. That would be backward stepwise selection. Neither that, nor its
forward cousin (find the most discriminative feature, then the
second-most, etc.), is implemented in scikit-learn.

The feature selection in sklearn.feature_selection computes a score
per feature (in practice always a univariate statistical test, but
the API is set up so that other scoring functions are possible), then
keeps the k best features, the best p%, or the ones whose p-value
does not exceed some threshold.
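A minimal sketch of that univariate API, using SelectKBest with the
ANOVA F-test on the iris data (SelectPercentile and SelectFpr cover
the percentile and p-value variants in the same way):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

iris = load_iris()
X, y = iris.data, iris.target

# Score each feature independently with the ANOVA F-test,
# then keep the 2 highest-scoring features.
selector = SelectKBest(score_func=f_classif, k=2)
X_new = selector.fit_transform(X, y)

print(X.shape)      # (150, 4)
print(X_new.shape)  # (150, 2)
```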

_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
