Excellent,
That makes 5 of us interested in the same topic. I think this is
going to be a great sprint.
--
Olivier
--
2011/12/7 Gael Varoquaux gael.varoqu...@normalesup.org:
On Tue, Dec 06, 2011 at 10:26:04AM -0500, Ian Goodfellow wrote:
I agree with David that it seems like the optimizer is broken, but I
disagree that the problem is the termination criterion. There should
not be any NaNs anywhere in the
On Wed, Dec 07, 2011 at 09:35:05AM +0100, Olivier Grisel wrote:
How about I address these issues in the pull request I opened earlier today?
+1, and also expose the ability to raise the max_iter of LARS from
the sparse_encode API, as reported by Ian.
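A minimal sketch of what forwarding the solver's iteration budget through a higher-level API could look like. All names here are hypothetical stand-ins for illustration, not sklearn's actual signatures:

```python
# Hypothetical sketch: threading a max_iter argument from a high-level
# encoding API down to the underlying LARS solver. Function names are
# illustrative stand-ins, not sklearn's real implementation.
def _lars_solver(X, max_iter=500):
    # Stand-in for the real solver; just records the budget it was given.
    return {"max_iter_used": max_iter}

def sparse_encode_sketch(X, max_iter=500):
    # The wrapper simply forwards the caller's max_iter to the solver.
    return _lars_solver(X, max_iter=max_iter)
```

The point is only that the wrapper accepts and forwards the keyword, so callers hitting convergence problems can raise the budget without touching solver internals.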
+1.
I just remember
2011/12/7 Gael Varoquaux gael.varoqu...@normalesup.org:
On Tue, Dec 06, 2011 at 07:43:26PM -0500, David Warde-Farley wrote:
I think that scaling by n_samples makes sense in the supervised learning
context (we often do the equivalent thing where we take the mean, rather than
the sum, over the
On Wed, Dec 07, 2011 at 09:38:32AM +0100, Olivier Grisel wrote:
Can people confirm that some other solvers (e.g. GLMnet) do not have the
same problem? If so, we need to figure out how they do it.
Are we still talking about the Lasso with Coordinate Descent
Yes.
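For context, the core per-coordinate update in coordinate-descent Lasso is the soft-thresholding operator. A minimal version (illustration only, not scikit-learn's or GLMnet's actual code):

```python
def soft_threshold(x, t):
    # Shrink x toward zero by t: the proximal operator of the L1
    # penalty, applied to each coordinate in coordinate-descent Lasso.
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0
```

If NaNs appear, they must come from the inputs fed into this update (e.g. a degenerate Gram matrix), since the operator itself is numerically benign.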
Would be great to
Virgile? I thought that you had almost isolated a simple test case.
Not really, I could make the graph_lasso crash but I was using a
configuration in which my n_features-dimensional observations actually lay
in an (n_features - 1)-dimensional subspace.
Currently, I am using the graph_lasso to
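A tiny made-up example of that degenerate setting: points confined to a line in R^2 give a singular empirical covariance, which solvers that assume a full-rank covariance can choke on:

```python
# Hypothetical data: points on the line y = 2x span only a
# 1-dimensional subspace of R^2.
xs = [1.0, 2.0, 3.0, 4.0]
pts = [(x, 2.0 * x) for x in xs]
n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n
cxx = sum((p[0] - mx) ** 2 for p in pts) / n
cyy = sum((p[1] - my) ** 2 for p in pts) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
# The determinant of the 2x2 empirical covariance is exactly zero, so
# the matrix is not invertible -- the kind of input that can break a
# solver expecting a positive-definite covariance.
det = cxx * cyy - cxy * cxy
```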
I would love to sit in, and learn, and contribute where I can.
Probably won't have time for this during the sprint -- but I want to
throw it out there:
The importance of locality in many manifold learning algos makes them good
candidates for distribution.
On Wed, Dec 7, 2011 at 3:22 AM, Olivier
2011/12/7 Timmy Wilson tim...@smarttypes.org:
I would love to sit in, and learn, and contribute where I can.
Probably won't have time for this during the sprint -- but I want to
throw it out there:
The importance of locality in many manifold learning algos makes them good
candidates for
I think I know what happened here.
An upstream change in scipy removed scipy.lena() and left only
scipy.misc.lena().
I wonder if this affects other examples as well. I will try to check
and patch this soon.
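A generic way to cope with such upstream moves is to try each candidate location in turn. A sketch (the scipy.lena / scipy.misc.lena names come from the thread; everything else is illustrative, demonstrated on stdlib names so it runs anywhere):

```python
import importlib

def resolve_attr(candidates):
    # Return the first attribute that can actually be imported from the
    # given (module_name, attr_name) pairs, e.g.
    # [("scipy.misc", "lena"), ("scipy", "lena")] to paper over the
    # upstream move discussed above.
    for mod_name, attr_name in candidates:
        try:
            module = importlib.import_module(mod_name)
            return getattr(module, attr_name)
        except (ImportError, AttributeError):
            continue
    raise ImportError("no candidate location worked: %r" % (candidates,))

# Demonstrated with stdlib names: the bogus first candidate is skipped
# and math.sqrt is returned.
sqrt = resolve_attr([("math", "no_such_attr"), ("math", "sqrt")])
```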
Vlad
On Wed, Dec 7, 2011 at 1:38 PM, Gael Varoquaux
gael.varoqu...@normalesup.org
I get the following error when running Hessian-based LLE::
    /home/timmyt/projects/smarttypes/smarttypes/scripts/reduce_twitter_graph.py
    in <module>()
         32 print "Passed our little test: following %s users!" % len(tmp_followies)
         33
    ---> 34 results = reduce_graph(adjacency_matrix,