On Sat, Sep 29, 2012 at 10:39:46AM -0700, Ariel Rokem wrote:
> A sneakier (and probably not great) approach would be to have
> ElasticNet itself try calling LinearRegression when alpha is set to 0.
> In a way, that's what a user is asking for by setting alpha to 0 - give
> me the OLS solution.
>
> Ariel
>
I would rather forbid the user to set alpha to 0.
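For context, a minimal sketch of the equivalence behind this exchange (toy data, not Ariel's gist): with alpha=0 the elastic-net penalty is switched off, so the target is the plain OLS solution that LinearRegression computes directly.

    import numpy as np
    from sklearn.linear_model import ElasticNet, LinearRegression

    rng = np.random.RandomState(0)
    X = rng.randn(50, 5)
    y = np.dot(X, rng.randn(5)) + 0.01 * rng.randn(50)

    # alpha=0 turns the penalty off; the coordinate-descent solver may warn
    # and struggle to converge on this unpenalized problem.
    enet = ElasticNet(alpha=0.0, max_iter=10000).fit(X, y)

    # LinearRegression solves the same unpenalized problem directly.
    ols = LinearRegression().fit(X, y)

    print(enet.coef_)
    print(ols.coef_)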
On Sat, Sep 29, 2012 at 9:39 AM, Olivier Grisel wrote:
> 2012/9/29 Gael Varoquaux :
> > Hey Ariel,
> >
> > On Sat, Sep 29, 2012 at 08:54:46AM -0700, Ariel Rokem wrote:
> >> Sure - here's a minimal example based on what I'm trying to do with this (data
> >> at the top, calculations at the bottom)
Hi Gael,
On Sat, Sep 29, 2012 at 9:20 AM, Gael Varoquaux <
[email protected]> wrote:
> Hey Ariel,
>
> On Sat, Sep 29, 2012 at 08:54:46AM -0700, Ariel Rokem wrote:
> > Sure - here's a minimal example based on what I'm trying to do with this (data
> > at the top, calculations at the bottom):
2012/9/29 Gael Varoquaux :
>> BTW, while you are reviewing mergeable stuff, I think this one is
>> ready for a green button:
>
>> https://github.com/scikit-learn/scikit-learn/pull/1187
>
> OK, I'll try to review. But I'll have to run soon, to go to celebrate
> @NelleV's birthday :)
Happy birthday
> BTW, while you are reviewing mergeable stuff, I think this one is
> ready for a green button:
> https://github.com/scikit-learn/scikit-learn/pull/1187
OK, I'll try to review. But I'll have to run soon, to go to celebrate
@NelleV's birthday :)
G
2012/9/29 Gael Varoquaux :
> On Sat, Sep 29, 2012 at 06:41:19PM +0200, Olivier Grisel wrote:
>> You do it or shall I do it?
>
> OK, I'll do it.
BTW, while you are reviewing mergeable stuff, I think this one is
ready for a green button:
https://github.com/scikit-learn/scikit-learn/pull/1187
--
O
On Sat, Sep 29, 2012 at 06:41:19PM +0200, Olivier Grisel wrote:
> You do it or shall I do it?
OK, I'll do it.
G
2012/9/29 Gael Varoquaux :
> On Sat, Sep 29, 2012 at 06:39:14PM +0200, Olivier Grisel wrote:
>> I think the user warning could be improved by advising the user to
>> switch to sklearn.linear_model.LinearRegression instead.
>
> +1
You do it or shall I do it?
--
Olivier
http://twitter.com/ogrisel
On Sat, Sep 29, 2012 at 06:39:14PM +0200, Olivier Grisel wrote:
> I think the user warning could be improved by advising the user to
> switch to sklearn.linear_model.LinearRegression instead.
+1
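For illustration only, a sketch of the kind of advisory message being discussed; _warn_if_unpenalized is a hypothetical stand-in, not the actual scikit-learn code or the eventual patch.

    import warnings

    def _warn_if_unpenalized(alpha):
        # Hypothetical helper: emit the advice Olivier suggests when alpha == 0.
        if alpha == 0:
            warnings.warn(
                "With alpha=0 the coordinate descent solver is unreliable; "
                "consider sklearn.linear_model.LinearRegression instead.",
                UserWarning,
            )

    _warn_if_unpenalized(0.0)  # emits the advisory UserWarning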
2012/9/29 Gael Varoquaux :
> Hey Ariel,
>
> On Sat, Sep 29, 2012 at 08:54:46AM -0700, Ariel Rokem wrote:
>> Sure - here's a minimal example based on what I'm trying to do with this (data
>> at the top, calculations at the bottom):
>
>> https://gist.github.com/3804428
>
> I do believe that it's
Hey Ariel,
On Sat, Sep 29, 2012 at 08:54:46AM -0700, Ariel Rokem wrote:
> Sure - here's a minimal example based on what I'm trying to do with this (data
> at the top, calculations at the bottom):
> https://gist.github.com/3804428
I do believe that it's a convergence problem. I have updated your
Hi Gael,
On Thu, Sep 27, 2012 at 10:55 PM, Gael Varoquaux <
[email protected]> wrote:
> On Thu, Sep 27, 2012 at 06:18:45PM -0700, Ariel Rokem wrote:
> > Still, the r-squared between the fit and data is only about 0.95 (that's not
> > just 'numerical error', if you mean floating point
On Thu, Sep 27, 2012 at 06:18:45PM -0700, Ariel Rokem wrote:
> Still, the r-squared between the fit and data is only about 0.95 (that's not
> just 'numerical error', if you mean floating point rounding kind of stuff,
> right?). Again, this is for a case where OLS fitting gives a perfect
> correlation
Hi again,
On Wed, Sep 26, 2012 at 10:27 PM, Gael Varoquaux <
[email protected]> wrote:
> On Wed, Sep 26, 2012 at 09:53:36PM -0700, Ariel Rokem wrote:
> > I haven't tried this yet - I'll try it tomorrow. In a way it sounds like it's
> > inadvertently implementing an early stopping criterion,
> > In practice, it is not recommended to use coordinate descent with a very
> > small regularization.
> Isn't gradient boosting a form of coordinate descent?
OK, I should state that above, when I mentioned coordinate descent, I
was thinking of the vanilla coordinate descent as done in GLMnet. T
>> (Also, I believe that GB in sklearn is unregularized in its current
>> implementation?)
>>
>
> It doesn't have a regularization term but the learning rate parameter can be
> used to avoid taking overly big steps:
> http://scikit-learn.org/stable/auto_examples/ensemble/plot_gradient_boosting_regu
On Thu, Sep 27, 2012 at 2:57 PM, Joseph Turian wrote:
> Isn't gradient boosting a form of coordinate descent?
>
It's coordinate descent with greedy selection of the coordinates and early
stopping when n_estimators is reached.
>
> (Also, I believe that GB in sklearn is unregularized in its current
> implementation?)
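A small illustrative sketch of the two knobs mentioned here, on toy data (not the linked example): n_estimators caps the number of greedy boosting steps, and learning_rate shrinks each step, which is the regularization being referred to.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.RandomState(0)
    X = rng.randn(200, 4)
    y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(200)

    # Same number of greedy steps, with and without shrinkage:
    # the smaller learning_rate takes more cautious steps.
    for lr in (1.0, 0.1):
        gb = GradientBoostingRegressor(n_estimators=100, learning_rate=lr)
        gb.fit(X, y)
        print(lr, gb.train_score_[-1])  # training loss after the last stage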
> In practice, it is not recommended to use coordinate descent with a very
> small regularization.
Isn't gradient boosting a form of coordinate descent?
(Also, I believe that GB in sklearn is unregularized in its current
implementation?)
Best,
Joseph
--
On Wed, Sep 26, 2012 at 09:53:36PM -0700, Ariel Rokem wrote:
> I haven't tried this yet - I'll try it tomorrow. In a way it sounds like it's
> inadvertently implementing an early stopping criterion,
Yes: max_iter and tol
> which is also a form of regularization. That's confusing, considering
> tha
Hey Gael and Alex,
Thanks for getting back to me:
On Wed, Sep 26, 2012 at 12:42 AM, Alexandre Gramfort <
[email protected]> wrote:
> hi ariel,
>
> indeed coordinate descent (and all iterative solvers I know) will
> converge slowly for low regularization. So just increase max_iter and
>
hi ariel,
indeed coordinate descent (and all iterative solvers I know) will
converge slowly for low regularization. So just increase max_iter and
set tol to 1e-15
Best,
Alex
On Wed, Sep 26, 2012 at 7:36 AM, Gael Varoquaux
wrote:
> Hi Ariel,
>
> On Tue, Sep 25, 2012 at 05:44:21PM -0700, Ariel Rokem wrote:
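A sketch of Alex's suggestion on toy data (not Ariel's data): with a very small alpha, raising max_iter and tightening tol lets coordinate descent converge to essentially the OLS solution.

    import numpy as np
    from sklearn.linear_model import ElasticNet, LinearRegression

    rng = np.random.RandomState(0)
    X = rng.randn(100, 10)
    y = np.dot(X, rng.randn(10))

    # Default max_iter/tol: the solver may stop before fully converging.
    loose = ElasticNet(alpha=1e-6).fit(X, y)

    # Alex's advice: many more iterations and a much tighter tolerance.
    tight = ElasticNet(alpha=1e-6, max_iter=100000, tol=1e-15).fit(X, y)

    ols = LinearRegression().fit(X, y)

    # Training R^2: the tightly converged fit should match OLS very closely.
    print(loose.score(X, y), tight.score(X, y), ols.score(X, y))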
Hi Ariel,
On Tue, Sep 25, 2012 at 05:44:21PM -0700, Ariel Rokem wrote:
> Initially, I suspected that this has to do with the non-negativity
> constraint I applied, so I removed that.
Indeed, if you are imposing positivity, you no longer have a least-squares problem.
> Then, I was wondering whether it might
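A sketch of the positivity point, using ElasticNet's positive=True option on toy data (which may or may not match what Ariel actually did): the constrained problem is no longer ordinary least squares, so its fit can legitimately differ from the OLS one.

    import numpy as np
    from sklearn.linear_model import ElasticNet, LinearRegression

    rng = np.random.RandomState(0)
    X = rng.randn(80, 5)
    y = np.dot(X, np.array([1.0, -2.0, 0.5, 0.0, 3.0]))  # one truly negative weight

    # positive=True forces non-negative coefficients: not a plain least-squares fit.
    constrained = ElasticNet(alpha=1e-4, positive=True, max_iter=100000).fit(X, y)
    ols = LinearRegression().fit(X, y)

    print(constrained.coef_)  # the negative coefficient is pushed to (near) zero
    print(ols.coef_)          # recovers the signed weights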
Hi everyone,
I am still trying to understand ElasticNet. Here's my description (from a
previous thread) of the kind of problem I am trying to solve:
On Mon, Sep 17, 2012 at 9:56 AM, Ariel Rokem wrote:
> I am using the sklearn.linear_model.ElasticNet class to fit some data. The
> structure of th