On Wed, Dec 7, 2011 at 9:43 AM, David Warde-Farley
<[email protected]> wrote:

> To be precise, (and I hope I got this right lest I confuse things further), a
> sparse coding problem with K different training examples and L different
> input features and M sparse components corresponds to K independent lasso
> problems with L training examples each and M input features.

In the doc, it would probably be easier to understand if we made an
analogy with regression.

A regression problem consists in learning w given X and y:

argmin_w ||y - X w||^2

w: [n_features, ]
X: [n_samples, n_features]
y: [n_samples, ]
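
A minimal NumPy sketch of that problem (random data, purely for
illustration; the shapes match the ones above):

import numpy as np

rng = np.random.RandomState(0)
n_samples, n_features = 20, 5

X = rng.randn(n_samples, n_features)  # [n_samples, n_features]
y = rng.randn(n_samples)              # [n_samples, ]

# Ordinary least squares: argmin_w ||y - X w||^2
w, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
print(w.shape)  # (n_features,)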

A sparse coding problem consists in learning alpha given D and x:

argmin_alpha ||x - D alpha||^2 + lambda ||alpha||_1

(the l1 penalty on alpha is what makes the code sparse and each
problem a lasso, as David pointed out above)

alpha: [n_components, ]
D: [n_features, n_components]
x: [n_features, ]
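
As a sketch (hypothetical penalty alpha=0.1, random data), this is
literally a call to scikit-learn's Lasso with D as the design matrix
and x as the target:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
n_features, n_components = 8, 15

D = rng.randn(n_features, n_components)  # [n_features, n_components]
x = rng.randn(n_features)                # [n_features, ]

# The n_features rows of D play the role of samples and its
# n_components columns the role of features in an ordinary lasso.
code = Lasso(alpha=0.1).fit(D, x).coef_  # [n_components, ]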

Therefore, to encode the entire dataset X, we need to solve
n_samples regression problems, each with n_features instances and
n_components features.
When using the squared loss, each problem is independent (this is not
the case when using, e.g., the hinge loss).
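
For example, a sketch of this per-sample loop (same hypothetical D
and alpha=0.1 as above):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
n_samples, n_features, n_components = 30, 8, 15

D = rng.randn(n_features, n_components)  # shared dictionary
X = rng.randn(n_samples, n_features)     # dataset to encode

# n_samples independent lasso problems, one per row of X,
# each with n_features instances and n_components features.
codes = np.empty((n_samples, n_components))
for i in range(n_samples):
    codes[i] = Lasso(alpha=0.1).fit(D, X[i]).coef_

(sklearn.decomposition.sparse_encode does essentially this loop, with
the dictionary stored transposed, i.e. as [n_components, n_features].)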

Mathieu
