…regularized NMF. I'm also interested in computing a minimum alpha (the
smallest alpha at which there are fewer nonzero coefficients than with
alpha=0).
Does anyone know how this could be done?
Thanks,
James Jensen
PhD student, Bioinformatics and Systems Biology
Trey Ideker lab
University of California, San Diego
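A rough sketch of one way to approach this (not an established method):
fit NMF without a penalty, then walk up a grid of alphas until the number
of nonzero entries in W first drops. This assumes a recent scikit-learn
where the L1 penalty on W is set via `alpha_W` and `l1_ratio`; the helper
name and the alpha grid are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

def smallest_sparsifying_alpha(V, n_components, alphas):
    """Hypothetical helper: smallest alpha whose L1 penalty yields
    fewer nonzero entries in W than the unpenalized fit."""
    W0 = NMF(n_components=n_components, init="nndsvd",
             max_iter=500, random_state=0).fit_transform(V)
    n0 = np.count_nonzero(W0)
    for a in sorted(alphas):
        W = NMF(n_components=n_components, init="nndsvd", max_iter=500,
                alpha_W=a, l1_ratio=1.0, random_state=0).fit_transform(V)
        if np.count_nonzero(W) < n0:  # sparsity has started to kick in
            return a
    return None
```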
Hi, everyone,
As you know, NMF finds two non-negative matrices W and H whose product
approximates a matrix V. One of the two matrices relates samples to
latent factors, and the other relates variables to latent factors. The
implementation of NMF in scikit-learn returns only one set of
components…
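For what it's worth, both factors are reachable in scikit-learn:
`fit_transform` returns W (samples × factors) and the fitted model
exposes H as `components_` (factors × variables). A minimal sketch with
toy data:

```python
import numpy as np
from sklearn.decomposition import NMF

V = np.abs(np.random.RandomState(0).randn(20, 10))  # toy non-negative matrix

model = NMF(n_components=3, init="nndsvd", random_state=0)
W = model.fit_transform(V)   # samples x latent factors
H = model.components_        # latent factors x variables

print(W.shape, H.shape)              # (20, 3) (3, 10)
print(np.linalg.norm(V - W @ H))     # reconstruction error
```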
I'm not sure how closely related MAB (multi-armed bandits) is to
Bayesian optimization, but I think something along those lines should
definitely be implemented for hyperparameter search, since
hyperparameter objectives are expensive functions almost by
definition.
Great idea! I certainly hope it gets implemented as well.
O
I usually hesitate to suggest a new feature in a library like this
unless I am in a position to work on it myself. However, given the
number of people who seem eager to find something to contribute, and
given the recent discussion about improving the Gaussian process module,
I thought I'd venture a suggestion…
Thanks to everyone for their help with this.
From your input, I now know how to compute the maximum regularization
strength for both lasso and elastic net. I thought my problem was
solved, but I'm realizing that it probably isn't, and I'll explain why.
If anyone has ideas of how to approach this…
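For reference, a sketch of the standard closed form for the maximum
regularization strength, assuming scikit-learn's scaling convention
(objective averaged over n_samples) and that X and y are already
centered, as `fit_intercept=True` would do internally:

```python
import numpy as np

def alpha_max(X, y, l1_ratio=1.0):
    """Smallest alpha with an all-zero solution, in scikit-learn's
    parameterization. l1_ratio=1.0 is the lasso; for elastic net
    only the L1 part sets the threshold, hence the division."""
    n_samples = X.shape[0]
    return np.max(np.abs(X.T @ y)) / (n_samples * l1_ratio)
```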
John, you're right about the difference in nomenclature. I've been using
scikit-learn's names for the parameters, so the alpha I've referred to
is the regularization strength and corresponds to lambda in glmnet. The
mixing parameter, referred to in glmnet as alpha, is the L1-ratio in
scikit-learn.
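To keep the two vocabularies straight, the mapping in code (the values
here are arbitrary):

```python
from sklearn.linear_model import ElasticNet

# scikit-learn           glmnet
# alpha      <------->   lambda  (regularization strength)
# l1_ratio   <------->   alpha   (L1/L2 mixing parameter)
model = ElasticNet(alpha=0.1, l1_ratio=0.5)
```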
Thanks, Alex. That is helpful. Looks like the glmnet documentation says
that this is how they do it as well. What they don't explain is how to
find alpha_max in the first place. The only thing I've thought of is
doing something like a binary search until you find the smallest alpha
yielding the all-zero solution.
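A sketch of that binary-search idea, for what it's worth; the bracket
[lo, hi] and the tolerance are arbitrary, and the closed-form alpha_max
above makes this unnecessary in practice:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def alpha_max_bisect(X, y, l1_ratio=0.5, lo=1e-6, hi=1e3, tol=1e-4):
    """Hypothetical bisection for the smallest alpha whose fit is
    all-zero; assumes that alpha lies inside [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        coef = ElasticNet(alpha=mid, l1_ratio=l1_ratio).fit(X, y).coef_
        if np.count_nonzero(coef) == 0:
            hi = mid   # mid already kills every coefficient; go lower
        else:
            lo = mid   # coefficients survive; need a stronger penalty
    return hi
```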
Thank you, Olivier.
Just to clarify: you say
You can control the centering with the `normalize=True` flag of the
ElasticNet class (or any other linear regression model).
I've noticed people use the term "normalize" in different ways. In the
case of the `normalize=True` flag of the linear models…
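My understanding of the distinction, sketched below: `preprocessing.scale`
standardizes each feature to zero mean and unit variance, whereas the
`normalize=True` flag (in the versions that had it) centered each feature
and then divided by its L2 norm, which is not the same scaling:

```python
import numpy as np
from sklearn.preprocessing import scale

X = np.random.RandomState(0).randn(30, 4)

# preprocessing.scale: zero mean and unit *variance* per feature
X_std = scale(X)

# what normalize=True did (my understanding): center each feature,
# then divide by its L2 *norm* (unit length, not unit variance)
X_centered = X - X.mean(axis=0)
X_unit = X_centered / np.linalg.norm(X_centered, axis=0)
```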
I've been applying preprocessing.scale() to my data prior to using
scikit-learn's elastic net, with the understanding that elastic net will
not work correctly if the features do not each have zero mean and unit
variance. scale() both centers the data and scales it to unit
variance. ElasticNet has an option to normalize…
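An explicit way to get the same preprocessing without relying on a flag,
as a sketch: put the standardization in a pipeline, so cross-validation
re-fits it on each training fold:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X, y = rng.randn(50, 5), rng.randn(50)

# StandardScaler centers and scales each feature before the fit
model = make_pipeline(StandardScaler(), ElasticNet(alpha=0.1, l1_ratio=0.5))
model.fit(X, y)
```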
How is the default grid of alphas and L1 ratios chosen for
scikit-learn's ElasticNetCV, and what is the reasoning behind it? What other
approaches exist for choosing this parameter grid, and what are they
based on?
I'm using elastic net to calculate regularized canonical correlation.
Given data matrices…
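On the alpha side, my understanding of the default grid, sketched here:
n_alphas values spaced logarithmically from alpha_max down to
eps * alpha_max. The L1 ratios, by contrast, have no automatic grid; the
default is the single value 0.5 unless you pass a list.

```python
import numpy as np

def default_alpha_grid(X, y, l1_ratio=0.5, eps=1e-3, n_alphas=100):
    """Sketch of the log-spaced default grid: from alpha_max down
    to eps * alpha_max, with n_alphas points."""
    n_samples = X.shape[0]
    alpha_max = np.max(np.abs(X.T @ y)) / (n_samples * l1_ratio)
    return np.logspace(np.log10(alpha_max), np.log10(alpha_max * eps),
                       num=n_alphas)
```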
Hello!
You may already be familiar with canonical correlation analysis (CCA).
Given two sets of variables, CCA yields the linear combinations with
maximum correlation between them. It is similar to PCA, which finds
projections with maximum variance for a single set of variables; in fact,
PCA can…
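A small illustration with scikit-learn's CCA implementation (toy data,
two components):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
Y = X[:, :3] + 0.5 * rng.randn(100, 3)   # Y shares structure with X

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)       # paired canonical variates

# the correlation between each pair of variates is what CCA maximizes
print([np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1] for k in range(2)])
```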
Thanks. You mentioned that I could "[add] positive to LassoCV and [pass]
it to the Lasso estimators used in the cross-val." In the directory of
my own installation of scikit-learn, I modified
sklearn/linear_model/coordinate_descent.py to add "positive=False"
to the parameter list of __init__…
I'm looking to do regularized regression with a non-negativity
constraint. Scikit-learn's Lasso method has a 'positive' option that
applies this constraint, so it seems like a good tool for the job. At
the same time, the automatic tuning of the regularization parameter that
is offered by LassoCV…
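For readers finding this thread later: in recent scikit-learn releases
LassoCV accepts the constraint directly, so no source modification should
be needed. A sketch with synthetic data:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.RandomState(0)
X = rng.randn(100, 10)
y = X @ np.abs(rng.randn(10)) + 0.1 * rng.randn(100)

# positive=True restricts coordinate descent to nonnegative coefficients
model = LassoCV(positive=True, cv=5).fit(X, y)
print(model.alpha_)                  # tuned regularization strength
print(np.all(model.coef_ >= 0))      # True
```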