Hi Jeremias.
I haven't thought that through, but shouldn't it be possible
to achieve the same effect by doing a linear transformation of your data and labels
and then shrinking to zero?
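A minimal sketch of that idea (the data, alpha, and the prior vector here are made up for illustration): substituting w = w_prior + delta turns the penalty alpha * ||w - w_prior||^2 into a standard ridge penalty alpha * ||delta||^2, so you can shift the labels by X @ w_prior, fit an ordinary Ridge on the residuals, and add the prior back afterwards:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
w_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ w_true + 0.1 * rng.randn(100)

# Hypothetical per-parameter prior we want to shrink toward.
w_prior = np.array([1.0, -2.0, 0.0, 3.0, 0.0])

# Shift the labels so the prior becomes the origin, then shrink to zero
# with an ordinary ridge fit (no intercept, so the algebra stays exact).
ridge = Ridge(alpha=1.0, fit_intercept=False)
ridge.fit(X, y - X @ w_prior)

# Recover the solution of  min ||y - Xw||^2 + alpha * ||w - w_prior||^2.
w_hat = w_prior + ridge.coef_
```

This should also work with scipy sparse matrices for X, since Ridge accepts sparse input; only the label shift X @ w_prior needs to be computed densely.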

Cheers,
Andy


On 03/21/2012 03:12 PM, Jeremias Engelmann wrote:
Hi

I'm using scikit-learn's ridge regression (sklearn.linear_model.Ridge) with large sparse matrices. I know that, by design, ridge regression penalizes parameters for moving away from zero. What I actually want is to penalize parameters for moving away from a certain prior (each parameter has a different prior). I was wondering if that sort of thing is coming in a newer version of scikit-learn, or what changes I would have to make to the code for this to work.

Thank you very much


Sincerely yours




_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general

