On Tue, Dec 06, 2011 at 07:43:26PM -0500, David Warde-Farley wrote:
> I think that scaling by n_samples makes sense in the supervised learning
> context (we often do the equivalent thing by taking the mean, rather than
> the sum, over the unregularized training objective, which makes the
> regularization invariant to the size of the training set). However, there
> is a disconnect between the dictionary learning notion of n_samples and
> the supervised estimator notion of n_samples, and the two get conflated
> because one can be implemented in terms of the other.
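
To make the first point concrete: because the data-fit term is averaged
over n_samples, the penalty does not get drowned out as the training set
grows. A minimal sketch, assuming the Lasso objective
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1, showing that
duplicating every sample leaves the solution for a given alpha unchanged:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(50, 10)
y = np.dot(X, rng.randn(10)) + 0.1 * rng.randn(50)

# The data-fit term is a mean over n_samples, so stacking two copies of
# the training set leaves the objective, and hence the solution for a
# given alpha, unchanged.
w_once = Lasso(alpha=0.1, tol=1e-10).fit(X, y).coef_
w_twice = Lasso(alpha=0.1, tol=1e-10).fit(
    np.vstack([X, X]), np.concatenate([y, y])).coef_
print(np.abs(w_once - w_twice).max())  # ~0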

+1. We may need sparse_encode to follow a different convention from
Lasso; whichever convention we pick should be well documented.
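
A sketch of where the two notions of n_samples collide: encoding one
signal against a dictionary is a Lasso problem in which Lasso's
n_samples is really the signal length n_features, so Lasso's
1/n_samples averaging silently becomes a 1/n_features scaling of the
data-fit term. This assumes sparse_encode's lasso_cd path compensates
by dividing alpha by n_features internally; the alpha / n_features
below mirrors that assumption:

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.decomposition import sparse_encode

rng = np.random.RandomState(0)
n_components, n_features = 8, 20
D = rng.randn(n_components, n_features)  # one dictionary atom per row
x = rng.randn(1, n_features)             # a single signal to encode

# sparse_encode solves, per signal,
#     min_c  0.5 * ||x - c D||^2_2 + alpha * ||c||_1
alpha = 1.0
code = sparse_encode(x, D, algorithm='lasso_cd', alpha=alpha)

# The same problem via Lasso: the "samples" Lasso sees are the
# n_features coordinates of x, so alpha must be divided by n_features
# to undo the 1/n_samples averaging of the data-fit term.
lasso = Lasso(alpha=alpha / n_features, fit_intercept=False)
lasso.fit(D.T, x.ravel())
print(np.abs(code.ravel() - lasso.coef_).max())  # ~0, up to solver tol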

G
