On Mon, Oct 22, 2012 at 10:33 AM,  <josef.p...@gmail.com> wrote:
> On Mon, Oct 22, 2012 at 10:05 AM, federico vaggi
> <vaggi.feder...@gmail.com> wrote:
>> Hi Gael,
>>
>> I took the time to dig a little bit, and found some MATLAB code
>> written by Diego di Bernardo.
>>
>> http://dibernardo.tigem.it/wiki/index.php/Network_Inference_by_Reverse-engineering_NIR
>>
>> He has a closed form result (the formula is in calc_cov.m), but the
>> formula is really weird, and I can't really figure out where it comes
>> from.
>>
>> function covA = calc_cov(A,X,sX,P,sP,RIDGE,W);
>>
>> % covA = calc_cov(A,X,sX,P,sP [, RIDGE, W]);
>> % X,sX,P,SP are N x M, where N=number of genes, M=number of expts.
>> % RIDGE is an optional ridge regression parameter
>> % W is an optional weight parameter.
>>
>> [rows,N]=size(A);
>> covA=zeros(N,N,rows);
>> [N,M] = size(X);
>>
>> if 1~=exist('RIDGE')
>>     RIDGE = 0;
>> end
>> if 1~=exist('W')
>>     W = eye(M);
>> end
>>
>> Q = W*W';
>>
>> for g=1:rows
>>     idx = find(A(g,:)~=0);
>>     vEta = sP(g,:).^2 + A(g,:).^2 * sX.^2;
>> %     vEta = sP(g,g).^2 + A(g,:).^2 * sX.^2;  % Diego's way
>>     Z=X(idx,:);
>>     T=inv(Z*Q*Z'+RIDGE*eye(length(idx)))*Z*Q';
>>     covA(idx,idx,g) = T*diag(vEta)*T';
>> end
>>
>> I'm not really sure exactly how the formula was derived, though.
>
> I don't know about L1 penalization, but for Ridge there is a
> literature on Ridge Regression with heteroscedastic or autocorrelated
> errors, or general error covariance matrix.
>
> W looks like it's just a weight matrix combined with ridge; Q is the
> error covariance.

Also, forming Q explicitly is not very efficient in this case.
If you know W, you can instead whiten the data, i.e. transform
x_new = W * X, and run the estimation on the transformed data.
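A minimal NumPy sketch of that equivalence (toy sizes and synthetic data, all names made up here; W is applied along the experiments axis to match the dimensions in calc_cov.m): with Q = W*W', the weighted ridge solve done directly matches the solve on whitened data, so Q never needs to be formed:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 10                           # regressors, experiments (toy sizes)
Z = rng.standard_normal((N, M))        # regressor data, like Z = X(idx,:)
p = rng.standard_normal(M)             # one row of the response P
W = np.diag(rng.uniform(0.5, 2.0, M))  # weight matrix
ridge = 0.1

# Direct weighted-ridge solve with Q = W*W' formed explicitly:
Q = W @ W.T
a_direct = np.linalg.solve(Z @ Q @ Z.T + ridge * np.eye(N), Z @ Q @ p)

# Same solve on whitened data (Z*W, p*W), no Q needed:
Zw, pw = Z @ W, p @ W
a_white = np.linalg.solve(Zw @ Zw.T + ridge * np.eye(N), Zw @ pw)

print(np.allclose(a_direct, a_white))  # True
```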

Josef
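P.S. For anyone reading along, here is an illustrative (not authoritative) NumPy translation of the per-gene "sandwich" covariance in calc_cov.m, covA = T*diag(vEta)*T' with T = inv(Z*Q*Z' + RIDGE*I)*Z*Q'; all data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 12                          # nonzero regressors for this gene, experiments
Z = rng.standard_normal((N, M))       # Z = X(idx,:) for one gene
a = rng.standard_normal(N)            # nonzero entries of A(g,:)
sP = rng.uniform(0.1, 0.3, M)         # std of the P measurements for this gene
sX = rng.uniform(0.1, 0.3, (N, M))    # std of the X measurements
W = np.eye(M)
ridge = 0.0

Q = W @ W.T
# Propagated error variance per experiment: sP.^2 + A(g,:).^2 * sX.^2
vEta = sP**2 + (a**2) @ (sX**2)
# T = inv(Z*Q*Z' + RIDGE*eye) * Z * Q'
T = np.linalg.solve(Z @ Q @ Z.T + ridge * np.eye(N), Z @ Q.T)
covA_g = T @ np.diag(vEta) @ T.T      # N x N covariance for this gene's row

# A covariance matrix should come out symmetric positive semidefinite:
print(np.allclose(covA_g, covA_g.T))  # True
```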

>
> Your problem is a bit different from standard Ridge regression because
> you have several dependent/outcome variables.
>
> Josef
>
>
>>
>> Note - while the code is free to download, he specifically says that
>> he doesn't want the code used for commercial purposes without
>> permission.  I presume reproducing it is ok?
>>
>> Federico
>>
>>
>>
>> On Mon, Oct 22, 2012 at 3:34 PM, Gael Varoquaux
>> <gael.varoqu...@normalesup.org> wrote:
>>> On Fri, Oct 19, 2012 at 01:09:21PM +0200, federico vaggi wrote:
>>>> Assuming that X and B are experimentally measured values with
>>>> uncertainties, what's the correct way to transfer that uncertainty to
>>>> A?
>>>
>>> To my knowledge there is no theoretical/closed-form result. I would
>>> rely on the bootstrap:
>>> http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29
>>>
>>> G
>>>
>>> ------------------------------------------------------------------------------
>>> Everyone hates slow websites. So do we.
>>> Make your web apps faster with AppDynamics
>>> Download AppDynamics Lite for free today:
>>> http://p.sf.net/sfu/appdyn_sfd2d_oct
>>> _______________________________________________
>>> Scikit-learn-general mailing list
>>> Scikit-learn-general@lists.sourceforge.net
>>> https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
>>

