On 11 April 2012 at 00:28, Michael Selik wrote:
Hello,
As per the docs' suggestion to ask around before starting my own work: is
anyone working on a weighted mean shift implementation?
The purpose of this is to account for some observations being more reliable
than others. Or perhaps I've misunderstood the current implementation and it
already …
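For discussion, here is a minimal sketch of what a sample-weighted mean shift update could look like — a Gaussian kernel where each observation contributes proportionally to its weight. The function name, signature, and kernel choice are my own assumptions; this is not the scikit-learn `MeanShift` API:

```python
import numpy as np

def weighted_mean_shift(X, weights, bandwidth, n_iter=50, tol=1e-6):
    """Mode-seeking mean shift with per-sample weights (hypothetical sketch).

    Each point is iteratively moved to the weighted kernel average of all
    observations, so reliable (high-weight) samples pull modes toward them.
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    modes = X.copy()
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for j, x in enumerate(modes):
            d2 = ((X - x) ** 2).sum(axis=1)
            # sample weight times Gaussian kernel value
            k = w * np.exp(-d2 / (2.0 * bandwidth ** 2))
            shifted[j] = (k[:, None] * X).sum(axis=0) / k.sum()
        converged = np.abs(shifted - modes).max() < tol
        modes = shifted
        if converged:
            break
    return modes
```

With uniform weights this reduces to plain Gaussian mean shift; increasing one sample's weight drags nearby modes toward it, which is the behavior the question asks about.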
On 04/01/2012 09:27 PM, Alexandre Gramfort wrote:
Afaik, it was with an l1-penalized logistic. In my experience,
l2-penalized models are less sensitive to the choice of the penalty
parameter, and hinge loss (aka SVM) is less sensitive than l2 or
logistic loss.
> indeed.
>
>> I thin
> Does it give it extra consistency properties? e.g. unbiased estimates?
could be … Jaques will explain this to us tomorrow :)
He's watching the talk on video-lectures :)
Alex
To continue with funny algorithm names, can we have the top moumoute online
natural gradient algorithm in scikit-learn :) ?
http://nicolas.le-roux.name/publications/LeRoux08_tonga.pdf
Mathieu
On Tue, Apr 10, 2012 at 02:52:04PM +0200, Alexandre Gramfort wrote:
> it has a rescaling step like Adaptive Lasso but using OLS. The
> positivity is just to impose in coef_ the same sign as the coef_
> obtained with OLS.
Does it give it extra consistency properties? e.g. unbiased estimates?
G
> Yes, basically the non-negative garrote is a non-negative Lasso, if I
> understand it correctly. Thus if your priors are that your model is
> sparse, and with only positive weights, the non-negative garrote is the
> right estimator.
no :)
it has a rescaling step like Adaptive Lasso but using OLS. The
positivity is just to impose in coef_ the same sign as the coef_
obtained with OLS.
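The rescaling step described here is, as far as I can tell, Breiman's non-negative garrote; a sketch of the optimization in my own notation, assuming OLS estimates \(\hat\beta_j\):

```latex
% Given OLS estimates \hat\beta_j, fit non-negative shrinkage factors c_j:
\min_{c_j \ge 0} \;
  \frac{1}{2} \Big\lVert y - \sum_j c_j \hat\beta_j x_j \Big\rVert_2^2
  + \lambda \sum_j c_j
% and report \beta_j^{\text{garrote}} = c_j \hat\beta_j .
```

Since \(c_j \ge 0\) and the final coefficient is \(c_j \hat\beta_j\), the estimate can only shrink or zero out the OLS solution, never flip its sign — which matches the "same sign as OLS" remark above.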
On Tue, Apr 10, 2012 at 02:44:56PM +0200, Jaques Grobler wrote:
> This paper mentions:
> "We also show that the nonnegative garrote has the nice property that
> with probability tending to one, the solution path contains an
> estimate that correctly identifies the set of important variables"
> What's the benefit of the non-negative Garotte?
unclear to me for now. I'm still reading on the topic. But if somebody
can pitch in I'm interested.
Alex
hahaha @Olivier 's Garotte cake
This paper mentions:
"We also show that the nonnegative garrote has the nice property that with
probability tending to one, the solution path contains an estimate that
correctly identifies the set of important variables and is consistent for
the coefficients of the important variables."
> > What's the benefit of the non-negative Garotte?
> To cook a non-negative Garotte cake?
Definitely tastier than a negative Garotte cake.
On 10 April 2012 at 14:39, Gael Varoquaux wrote:
> What's the benefit of the non-negative Garotte?
To cook a non-negative Garotte cake?
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
What's the benefit of the non-negative Garotte?
G
hi,
as soon as we have Immanuel's branch with positive lasso [1] merged we
could have a non-negative Garotte in the scikit. A quick gist (hopefully
not too buggy):
https://gist.github.com/2351057
Feedback welcome, and if someone is willing to cleanly merge this …
Alex
PS: I've added the snippet …
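For readers following along, here is a minimal sketch of that recipe — a positive Lasso fit on an OLS-rescaled design. The function name and alpha value are my own choices, and this is not Alexandre's gist verbatim:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

def nn_garrote(X, y, alpha=0.01):
    """Non-negative garrote via a positivity-constrained Lasso (sketch).

    1. Fit OLS to get beta_ols.
    2. Rescale each column of X by its OLS coefficient.
    3. Fit a Lasso with positive=True on the rescaled design, giving
       shrinkage factors c_j >= 0.
    4. Final coefficients are c_j * beta_ols_j, so each one keeps the
       sign of its OLS estimate (or is zeroed out).
    """
    beta_ols = LinearRegression(fit_intercept=False).fit(X, y).coef_
    X_scaled = X * beta_ols  # broadcasts beta_ols over the columns of X
    c = Lasso(alpha=alpha, positive=True,
              fit_intercept=False).fit(X_scaled, y).coef_
    return c * beta_ols
```

On sparse positive-sign-pattern problems this recovers the true support while shrinking the OLS solution, which is the behavior the quoted paper's consistency result is about.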