On 2011-12-05, at 5:50 PM, Ian Goodfellow wrote:
> 
> I think I was mostly confused by the terminology -- I don't consider the code
> to be part of a sparse coding model, nor to be estimated (I am aware that
> sparse coding involves iterative optimization, but I don't consider the
> optimizer to be solving an estimation problem).
> 
> I don't understand exactly which interface Alexandre is suggesting I use.
> 
> To use the sparse_encode interface, should I pass a dictionary of shape
> (num_data_features, num_code_elements) for X and a data matrix of shape
> (num_data_features, num_examples) for Y?
> 
> I have tried doing that, but for alpha = 1. or alpha = 0.1 it returns a
> matrix of all zeros, and for alpha = .01 it returns a code with NaNs in it.

This actually gets at something I've been meaning to fiddle with and report but
haven't had time to: I'm not sure I completely trust the coordinate descent
implementation in scikit-learn, because it gives me bogus answers a lot of the
time (i.e., the optimality conditions necessary for the returned coefficients
to be an actual solution are not even approximately satisfied). Are you guys
using something weird for the termination condition?
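
For concreteness, below is the sort of check I have in mind, written against
scikit-learn's Lasso objective (1 / (2 * n_samples)) * ||y - Xw||_2^2 +
alpha * ||w||_1 with fit_intercept=False. The data, alpha, and tolerances are
made up purely to illustrate the test, so take it as a sketch rather than the
exact experiment:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.RandomState(0)
    n_samples, n_features = 200, 50
    X = rng.randn(n_samples, n_features)
    y = X.dot(rng.randn(n_features)) + 0.1 * rng.randn(n_samples)

    alpha = 0.1
    w = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000,
              tol=1e-8).fit(X, y).coef_

    # Subgradient (KKT) conditions for (1/(2n)) ||y - Xw||^2 + alpha ||w||_1:
    #   g_j = -X_j^T (y - Xw) / n
    #   w_j != 0  =>  g_j = -alpha * sign(w_j)   (up to tolerance)
    #   w_j == 0  =>  |g_j| <= alpha
    g = -X.T.dot(y - X.dot(w)) / n_samples
    active = w != 0
    viol = np.zeros(n_features)
    # Violation on the active set: g_j should equal -alpha * sign(w_j).
    viol[active] = np.abs(g[active] + alpha * np.sign(w[active]))
    # Violation on the zero set: |g_j| should not exceed alpha.
    viol[~active] = np.maximum(np.abs(g[~active]) - alpha, 0.0)
    print("max KKT violation:", viol.max())

If the coordinate descent solution were correct, that maximum violation should
be on the order of the solver's tolerance.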

David