Hi everyone,
I am using the sklearn.linear_model.ElasticNet class to fit some data. The
structure of the data is y = Xw, and I am trying to solve for w where
y.shape is (150,) and X.shape is (150,150), with a non-negativity
constraint. Both y and each column of X are mean-centered. Some of the
columns of X are quite correlated with each other. I have been playing
around a bit with different settings of inputs to the initialization of
ElasticNet and I am running into the following issue understanding alpha
and rho: for a given value of alpha (rather small, alpha=0.0075), when I
change rho from 0 to 0.5 to 1, I get a smaller L1 norm (np.sum(w), which
equals the L1 norm here since w is non-negative) and a larger squared L2
norm (np.sum(w**2)). This defies my intuition that larger values of rho
should make ElasticNet more and more averse to growing the L2 norm and
less and less averse to growing the L1 norm, so I was expecting the exact
opposite. What is the explanation for this behavior?
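For reference, here is a minimal sketch of the experiment I am describing. The data below is synthetic (random design with a pair of nearly identical columns, non-negative true weights), not my actual data, and I am assuming a current scikit-learn where the rho parameter is exposed as the l1_ratio constructor argument:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Illustrative stand-in for the setup described above: a 150x150 design
# with some highly correlated columns, non-negative true weights, and
# mean-centered y and columns of X.
rng = np.random.RandomState(0)
n = 150
X = rng.randn(n, n)
X[:, 1] = X[:, 0] + 0.01 * rng.randn(n)  # two nearly identical columns
w_true = np.abs(rng.randn(n))
y = X.dot(w_true)
X -= X.mean(axis=0)
y -= y.mean()

norms = {}
for rho in [0.01, 0.5, 1.0]:  # a near-zero value stands in for rho=0
    # In current scikit-learn, rho is called l1_ratio.
    model = ElasticNet(alpha=0.0075, l1_ratio=rho, positive=True,
                       max_iter=100000).fit(X, y)
    w = model.coef_
    # With positive=True, w >= 0, so np.sum(w) coincides with the L1 norm.
    norms[rho] = (np.sum(w), np.sum(w ** 2))
    print("rho=%.2f  L1=%.4f  L2^2=%.4f" % ((rho,) + norms[rho]))
```
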
Thanks!
Ariel
_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general