Hello All,
My dataset has 93 features and just under 62,000 observations (61,878 to be
exact). I'm running out of memory right after the mean sigma value is
computed and displayed. I've tried dimensionality reduction via TruncatedSVD
with n_components set at different levels (78, 50, and 2)...
Hi,
In general, I agree that we should at least add a way to compute feature
importances using permutations. This is an alternative, yet standard, way
to do it compared to what we currently compute (mean decrease in impurity,
which is also standard).
Assuming we provide permutation importances as a built-in...
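For concreteness, here is a hand-rolled sketch of the permutation approach under discussion (the helper below is illustrative, not an existing scikit-learn function): each column's importance is the drop in validation score after shuffling that column, which breaks its association with the target.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def permutation_importances(model, X, y, rng):
    # Baseline score on the untouched validation data.
    baseline = model.score(X, y)
    importances = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        saved = X[:, j].copy()
        rng.shuffle(X[:, j])           # shuffle column j in place
        importances[j] = baseline - model.score(X, y)
        X[:, j] = saved                # restore the column
    return importances

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print(permutation_importances(model, X_val, y_val, np.random.RandomState(0)))
print(model.feature_importances_)  # the impurity-based importances we expose today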