On Mon, 29 Nov 2010, Jakob Scherer wrote:

> > actually it depends... e.g. if the underlying classifier's regularization
> > is invariant to the transformation (e.g. margin width), then yeap -- there
> > should be no effect.  But if it is sensitive to it (e.g. feature selection,
> > like in SMLR), then you might get an advantage, since, as in the case of
> > SMLR, the goal of having fewer important features might be achieved with
> > higher generalization.
> A follow-up question: is the inverse true too, i.e. can having fewer
> important features lead to higher generalization?
It depends on which of the two you are asking:

* "can having fewer important features among a bulk of irrelevant features..." -- then I guess the answer is "No"
* "can having fewer features (just the important ones)..." -- then the answer is "oh Yes": that is the very goal of feature selection procedures, to distill the feature set so that only the important features are left (see the toy sketch below)

Or did I misunderstand entirely?

--
=------------------------------------------------------------------=
Keep in touch                                     www.onerussian.com
Yaroslav Halchenko                 www.ohloh.net/accounts/yarikoptic
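[Editorial note: a minimal, purely illustrative sketch of the second case -- a handful of informative features buried in a bulk of irrelevant ones, where restricting the classifier to the selected features tends to improve cross-validated generalization. It uses scikit-learn rather than PyMVPA, and the dataset, classifier, and parameters below are arbitrary assumptions, not anything from the original thread.]

    # Toy comparison: classifier on all features vs. classifier preceded by
    # univariate feature selection, evaluated with cross-validation.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # 20 informative features hidden among 480 irrelevant ones
    X, y = make_classification(n_samples=200, n_features=500,
                               n_informative=20, n_redundant=0,
                               random_state=0)

    full = LinearSVC()  # trained on all 500 features
    # Selection happens inside the pipeline, so it is refit on each
    # training fold and does not peek at the test data.
    selected = make_pipeline(SelectKBest(f_classif, k=20), LinearSVC())

    print("all features:      %.2f" % cross_val_score(full, X, y, cv=5).mean())
    print("selected features: %.2f" % cross_val_score(selected, X, y, cv=5).mean())

The point of the sketch is only the contrast between the two estimators; whether selection actually helps depends on how many truly informative features there are relative to the noise, which is exactly the distinction drawn in the two bullets above.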