To complicate the problem a bit: note that in the SVM/Lasso/... case the precomputed Gram matrix is np.dot(X, X.T), which means the cross-validation can be done with it alone, while for covariance estimation, like GraphLassoCV, the empirical covariance is np.dot(X.T, X), hence the fit needs X as input.
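A minimal sketch of the shape difference (illustrative only, with random data and plain NumPy rather than the actual estimator code):

```python
import numpy as np

rng = np.random.RandomState(0)
n_samples, n_features = 5, 3
X = rng.randn(n_samples, n_features)

# Gram / kernel matrix, as used with a precomputed kernel in SVM/Lasso:
# one entry per pair of samples.
gram = np.dot(X, X.T)   # shape (n_samples, n_samples)

# Empirical covariance (up to centering and 1/n scaling), as used by
# GraphLassoCV: one entry per pair of features.
cov = np.dot(X.T, X)    # shape (n_features, n_features)

print(gram.shape)  # (5, 5)
print(cov.shape)   # (3, 3)
```

The Gram matrix can be sliced along both axes by sample indices, which is why cross-validation can work from it directly; the covariance cannot, hence the need for X.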
So it seems to me we have 3 cases:
- kernel / similarity, shape (n_samples, n_samples)
- distance, shape (n_samples, n_samples)
- cov, shape (n_features, n_features)

HTH,
Alex

On Wed, Nov 9, 2011 at 6:06 PM, Gael Varoquaux <[email protected]> wrote:
> On Wed, Nov 09, 2011 at 11:43:40PM +0100, bthirion wrote:
>> > What do people think? Should I:
>> > 1. change graph_lasso to take the empirical covariance as an input
>> > 2. add an 'X_is_cov' parameter to the estimators
>
>> +1 for the second one.
>
> I actually was suggesting both, and 1 as a means to 2.
>
>> If we want to introduce some kind of automated guess of the
>> regularization parameter, we'll have to know the dimension I believe?
>
> You mean the number of samples? Actually, no, what is important is the
> number of degrees of freedom (I know that you know this). Things like the
> OAS try to estimate it from the covariance matrix.
>
> G

------------------------------------------------------------------------------
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
