Here's a wee summary of the non-negative garrote (NG) I pieced together:
The original non-negative garrote from Breiman (1995) is basically a scaled
version of the least squares estimate: you take an OLS estimator and then
shrink it to obtain a sparser representation.
The shrinkage is done by multiplying each OLS coefficient by a shrinkage
factor, say `d`, where the `d`'s are found by minimising the sum of squared
residuals, under the restrictions that the `d`'s are non-negative and that
their sum is bounded by a shrinkage parameter.
The algorithm proposed in this paper is rather similar to the LARS
algorithm for the LASSO, but with the complication of a non-negativity
constraint on the shrinkage factors. (See eq. (2) in this paper
<http://www2.isye.gatech.edu/statistics/papers/05-25.pdf>)
Once you've computed the shrinkage factors you basically have your
regression coefficients, since NG coefficient = shrinkage factor * OLS
coefficient.
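Here's a rough sketch in Python of the plain garrote optimisation (not the
paper's LARS-style path algorithm); the function name and the budget `s`
are just placeholders:

import numpy as np
from scipy.optimize import minimize

def non_negative_garrote(X, y, s):
    # Step 1: the initial estimate, here ordinary least squares
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    # Step 2: absorb the OLS coefficients into the design, so the
    # problem becomes a least squares fit over the factors d
    Z = X * beta_ols
    # Step 3: minimise the residual sum of squares over d, subject
    # to d >= 0 and sum(d) <= s
    p = X.shape[1]
    res = minimize(lambda d: np.sum((y - Z @ d) ** 2),
                   x0=np.ones(p),
                   bounds=[(0.0, None)] * p,
                   constraints=[{'type': 'ineq',
                                 'fun': lambda d: s - d.sum()}],
                   method='SLSQP')
    # Step 4: NG coefficient = shrinkage factor * OLS coefficient
    return res.x * beta_ols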
Breiman showed it to be a stable selection method that often outperforms
its competitors, such as subset regression and ridge regression.
The solution path of the NG is piecewise linear, so its whole path can be
computed quickly.
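The paper's algorithm finds the path's breakpoints directly; as a crude
stand-in you could evaluate the sketch above on a grid of budgets (toy
data just for illustration):

# toy data, just for illustration
rng = np.random.RandomState(0)
X = rng.randn(50, 5)
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 1.0]) + 0.1 * rng.randn(50)

budgets = np.linspace(0.0, 3.0, 25)
path = np.array([non_negative_garrote(X, y, s) for s in budgets])
# path[i, j] is the NG coefficient of feature j at budget budgets[i]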
It is also path-consistent (i.e. the path contains at least one desirable
estimate) given an appropriate initial estimate. The path-consistency of
the NG is highlighted as being in contrast to the LASSO, which is not
always path-consistent (Peng Zhao & Hui Zou, personal communication). It is
argued that the NG can turn a consistent initial estimate into an estimate
that is consistent in terms of both estimation and variable selection.
A drawback is the NG's explicit reliance on the full least squares
estimate, which may perform poorly for small sample sizes. However, ridge
regression is suggested as an alternative initial estimate for defining the
NG estimate, instead of the least squares estimate.
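In the sketch above that would be a one-line change (`alpha` is just a
placeholder regularisation strength):

from sklearn.linear_model import Ridge

# replace Step 1 with a ridge initial estimate
beta_init = Ridge(alpha=1.0, fit_intercept=False).fit(X, y).coef_
# then proceed as before: Z = X * beta_init, solve for d,
# and return d * beta_init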
Hope this is of some help
Jaques
2012/4/10 Alexandre Gramfort <[email protected]>
> > Does it give it extra consistency properties? e.g. unbiased estimates?
>
> could be … Jaques will explain this to us tomorrow :)
>
> He's watching the talk on video-lectures :)
>
> Alex