Oh, and as a follow up on the topic:

Given that A is not a single row but a matrix with multiple columns, the
correct way to solve the system is to treat each column of A as a
distinct linear problem.

Currently, I do this:

    # solve each row of B as a separate elastic-net problem,
    # collecting one column of A per fit
    A = []
    for idx in range(B.shape[0]):
        enet_pred = ElasticNet(rho=rho, fit_intercept=False,
                               tol=1e-6, alpha=alpha)
        y = np.array(B[idx, :]).flatten()
        enet_pred.fit(X.T, -y.T)
        A.append(enet_pred.coef_)
    A = np.array(A).T
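
As a side note, scikit-learn's coordinate-descent models also accept a
2-D target, which collapses the per-column loop into a single fit with
one shared alpha.  A minimal sketch, assuming a recent scikit-learn
(where the mixing parameter is called `l1_ratio` rather than `rho`) and
random toy arrays standing in for the real X and B:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(10, 30)   # toy stand-in: 10 features x 30 samples
B = rng.randn(5, 30)    # toy stand-in: one row per linear problem

# ElasticNet accepts a 2-D target of shape (n_samples, n_targets);
# each target is fit independently but with the same alpha, and
# coef_ comes back with shape (n_targets, n_features).
enet = ElasticNet(alpha=0.1, l1_ratio=0.5, fit_intercept=False, tol=1e-6)
enet.fit(X.T, -B.T)

A = enet.coef_.T        # same orientation as the loop version
print(A.shape)          # (10, 5)
```

Since the columns are still estimated independently, this gives the same
answer as the explicit loop, just without the Python-level iteration.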

If I am using ElasticNetCV to solve the problem, is there a way to do
the cross-validation of the ElasticNet across all columns of A, rather
than one column at a time?  My intuition is that using different alphas
for different columns is incorrect: the rationale for the penalty is
that we assume the underlying true solution is sparse, and alpha is the
way of enforcing that sparseness, so it would make sense for alpha to be
uniform across the whole problem.

Is the way to do that just to cross-validate manually, trying each
candidate alpha on all the columns of the matrix at once, and then pick
the alpha that gives the lowest cost-function value across the
cross-validation?

Federico

On Mon, Oct 22, 2012 at 4:51 PM, federico vaggi
<vaggi.feder...@gmail.com> wrote:
> Josef,
>
> could you explain that in slightly more detail?  I'm afraid I'm not
> familiar with the literature you are referencing at all.
>
> If you have some code I could take a look at, that would probably be
> the easiest thing for me.
>
> Thanks!
>
> Federico
>
> On Mon, Oct 22, 2012 at 4:40 PM,  <josef.p...@gmail.com> wrote:
>> On Mon, Oct 22, 2012 at 10:33 AM,  <josef.p...@gmail.com> wrote:
>>> On Mon, Oct 22, 2012 at 10:05 AM, federico vaggi
>>> <vaggi.feder...@gmail.com> wrote:
>>>> Hi Gael,
>>>>
>>>> I took the time to dig a little bit, and found some MATLAB code
>>>> written by Diego di Bernardo.
>>>>
>>>> http://dibernardo.tigem.it/wiki/index.php/Network_Inference_by_Reverse-engineering_NIR
>>>>
>>>> He has a closed form result (the formula is in calc_cov.m), but the
>>>> formula is really weird, and I can't really figure out where it comes
>>>> from.
>>>>
>>>> function covA = calc_cov(A,X,sX,P,sP,RIDGE,W);
>>>>
>>>> % covA = calc_cov(A,X,sX,P,sP [, RIDGE, W]);
>>>> % X,sX,P,SP are N x M, where N=number of genes, M=number of expts.
>>>> % RIDGE is an optional ridge regression parameter
>>>> % W is an optional weight parameter.
>>>>
>>>> [rows,N]=size(A);
>>>> covA=zeros(N,N,rows);
>>>> [N,M] = size(X);
>>>>
>>>> if 1~=exist('RIDGE')
>>>>     RIDGE = 0;
>>>> end
>>>> if 1~=exist('W')
>>>>     W = eye(M);
>>>> end
>>>>
>>>> Q = W*W';
>>>>
>>>> for g=1:rows
>>>>     idx = find(A(g,:)~=0);
>>>>     vEta = sP(g,:).^2 + A(g,:).^2 * sX.^2;
>>>> %     vEta = sP(g,g).^2 + A(g,:).^2 * sX.^2;  % Diego's way
>>>>     Z=X(idx,:);
>>>>     T=inv(Z*Q*Z'+RIDGE*eye(length(idx)))*Z*Q';
>>>>     covA(idx,idx,g) = T*diag(vEta)*T';
>>>> end
>>>>
>>>> I'm not really sure how exactly the formula was derived though.
>>>
>>> I don't know about L1 penalization, but for Ridge there is a
>>> literature on Ridge Regression with heteroscedastic or autocorrelated
>>> errors, or general error covariance matrix.
>>>
>>> W looks like it's just a weight matrix, combined with ridge, Q is the
>>> error covariance.
>>
>> Also calculating Q is not very efficient in this case.
>> If you know W, then you can just calculate new data by whitening
>> or transforming  x_new = W * X for the estimation.
>>
>> Josef
>>
>>>
>>> Your problem is a bit different from standard Ridge regression because
>>> you have several dependent/outcome variables.
>>>
>>> Josef
>>>
>>>
>>>>
>>>> Note - while the code is free to download, he specifically says that
>>>> he doesn't want the code used for commercial purposes without
>>>> permission.  I presume reproducing it is ok?
>>>>
>>>> Federico
>>>>
>>>>
>>>>
>>>> On Mon, Oct 22, 2012 at 3:34 PM, Gael Varoquaux
>>>> <gael.varoqu...@normalesup.org> wrote:
>>>>> On Fri, Oct 19, 2012 at 01:09:21PM +0200, federico vaggi wrote:
>>>>>> Assuming that X and B are experimentally measured values with
>>>>>> uncertainties, what's the correct way to transfer that uncertainty to
>>>>>> A?
>>>>>
>>>>> There exists to my knowledge no theoretical/closed form result. I would
>>>>> rely on bootstrap:
>>>>> http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29
>>>>>
>>>>> G
>>>>>
>>>>> ------------------------------------------------------------------------------
>>>>> Everyone hates slow websites. So do we.
>>>>> Make your web apps faster with AppDynamics
>>>>> Download AppDynamics Lite for free today:
>>>>> http://p.sf.net/sfu/appdyn_sfd2d_oct
>>>>> _______________________________________________
>>>>> Scikit-learn-general mailing list
>>>>> Scikit-learn-general@lists.sourceforge.net
>>>>> https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
>>>>
>>
