On 07/10/2012 11:08 PM, Olivier Grisel wrote:
> 2012/7/10 Andreas Mueller :
>> Hi Emanuel.
>> Is there a reason not to train multinomial logistic regression
>> (other than that it is not finished yet)?
>> I think it would be more straightforward, and any help
>> on the multinomial logistic regression
On 07/10/2012 10:08 PM, Olivier Grisel wrote:
> 2012/7/10 Andreas Mueller :
>> Hi Emanuel.
>> Is there a reason not to train multinomial logistic regression
>> (other than that it is not finished yet)?
>> I think it would be more straightforward, and any help
>> on the multinomial logistic regression
2012/7/10 Andreas Mueller :
> Hi Emanuel.
> Is there a reason not to train multinomial logistic regression
> (other than that it is not finished yet)?
> I think it would be more straightforward, and any help
> on the multinomial logistic regression would be great
> (I'm very busy at the moment, unfortunately).
On 07/09/2012 01:32 PM, Philipp Singer wrote:
> On 09.07.2012 13:59, Vlad Niculae wrote:
>> Another (hackish) idea to try would be to keep the labels of the extra
>> data but give it a sample_weight low enough not to override your good
>> training data.
> That's actually a great and simple idea.
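For concreteness, here is a minimal sketch of that idea, assuming a
scikit-learn estimator whose fit() accepts sample_weight (SGDClassifier
here) and made-up toy data standing in for the "good" and "extra" sets:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    # Toy stand-ins: a small trusted training set and a larger, noisier
    # "extra" set (hypothetical data, just to show the mechanics).
    X_good, y_good = make_classification(n_samples=200, random_state=0)
    X_extra, y_extra = make_classification(n_samples=2000, flip_y=0.3,
                                           random_state=0)

    X = np.vstack([X_good, X_extra])
    y = np.concatenate([y_good, y_extra])

    # Full weight for the trusted samples, a small weight for the extra
    # ones, so they can help without overriding the good training data.
    sample_weight = np.concatenate([np.ones(len(y_good)),
                                    0.1 * np.ones(len(y_extra))])

    clf = SGDClassifier(loss="log_loss", random_state=0)  # "log" in older releases
    clf.fit(X, y, sample_weight=sample_weight)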
Hi Emanuel.
Is there a reason not to train multinomial logistic regression
(other than that it is not finished yet)?
I think it would be more straightforward, and any help
on the multinomial logistic regression would be great
(I'm very busy at the moment, unfortunately).
Cheers,
Andy
On 07/09/20
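For readers not familiar with the term: multinomial logistic regression
fits a single softmax over all K classes instead of K separate
one-vs-rest problems. A rough NumPy sketch of the model (plain batch
gradient descent, no regularization; illustrative only, not the code
under discussion):

    import numpy as np

    def softmax(Z):
        Z = Z - Z.max(axis=1, keepdims=True)      # numerical stability
        E = np.exp(Z)
        return E / E.sum(axis=1, keepdims=True)

    def fit_multinomial_logreg(X, y, n_classes, lr=0.1, n_iter=200):
        """Batch gradient descent on the multinomial (softmax) log-loss."""
        n_samples, n_features = X.shape
        W = np.zeros((n_features, n_classes))
        Y = np.eye(n_classes)[y]                  # one-hot labels
        for _ in range(n_iter):
            P = softmax(X.dot(W))                 # class probabilities
            W -= lr * X.T.dot(P - Y) / n_samples  # gradient step
        return W

    rng = np.random.RandomState(0)
    X = rng.randn(100, 5)
    y = rng.randint(0, 3, size=100)
    W = fit_multinomial_logreg(X, y, n_classes=3)
    pred = softmax(X.dot(W)).argmax(axis=1)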
2012/7/10 Lars Buitinck :
> 2012/7/10 Olivier Grisel :
>> When doing single-node, multi-CPU parallel machine learning (e.g. grid
>> search, one-vs-all SGD, random forests), it would be great to avoid
>> duplicating memory, especially for the input dataset, which is used as a
>> read-only resource in most of our common use cases.
2012/7/10 Olivier Grisel :
> 2012/7/10 Lars Buitinck :
>> 2012/7/10 Olivier Grisel :
>>> When doing single-node, multi-CPU parallel machine learning (e.g. grid
>>> search, one-vs-all SGD, random forests), it would be great to avoid
>>> duplicating memory, especially for the input dataset, which is used as a
>>> read-only resource in most of our common use cases.
2012/7/10 Lars Buitinck :
> 2012/7/10 Olivier Grisel :
>> When doing single-node, multi-CPU parallel machine learning (e.g. grid
>> search, one-vs-all SGD, random forests), it would be great to avoid
>> duplicating memory, especially for the input dataset, which is used as a
>> read-only resource in most of our common use cases.
2012/7/10 Olivier Grisel :
> When doing single-node, multi-CPU parallel machine learning (e.g. grid
> search, one-vs-all SGD, random forests), it would be great to avoid
> duplicating memory, especially for the input dataset, which is used as a
> read-only resource in most of our common use cases.
I may
Hi all,
When doing single-node, multi-CPU parallel machine learning (e.g. grid
search, one-vs-all SGD, random forests), it would be great to avoid
duplicating memory, especially for the input dataset, which is used as a
read-only resource in most of our common use cases.
This could be done either with
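One concrete way to do that (an assumed sketch, not necessarily the
approach Olivier has in mind) is to dump the array to disk once and have
each joblib worker open it as a read-only memory map instead of receiving
a pickled copy:

    import numpy as np
    from joblib import Parallel, delayed, dump, load

    X = np.random.RandomState(0).rand(10000, 100)
    dump(X, "/tmp/X_shared.joblib")               # write once

    def column_mean(path, j):
        X_mm = load(path, mmap_mode="r")          # read-only memory map
        return X_mm[:, j].mean()

    # Each of the 4 worker processes maps the same file; the OS shares the
    # pages between them instead of duplicating the dataset per worker.
    means = Parallel(n_jobs=4)(
        delayed(column_mean)("/tmp/X_shared.joblib", j) for j in range(10))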
Hi Federico,
No, not yet - I only approached them recently about this issue - I'll
let you know as soon as I hear from them.
best,
Peter
2012/7/10 federico vaggi :
> Peter - did you get any updates from Kaggle? If not, is there anything that
> we as a community can do to sway them?
>
>
> On Sa
> --
> GOAL: Efficiently support multiple regression targets (bidimensional Y) in
> all linear models, like ridge regression and orthogonal matching pursuit
> currently do.
>
> STATUS: Pull request under review.
Can you give the link so everyone can take a look and perhaps give
a hand to review it?
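For anyone wondering what "bidimensional Y" means in practice: Ridge
already accepts a target of shape (n_samples, n_targets) and fits one
coefficient vector per column, and the goal is the same calling
convention across the other linear models. A toy illustration:

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.RandomState(0)
    X = rng.randn(100, 10)
    W = rng.randn(10, 3)                       # three regression targets
    Y = X.dot(W) + 0.01 * rng.randn(100, 3)    # shape (n_samples, n_targets)

    ridge = Ridge(alpha=1.0).fit(X, Y)
    print(ridge.coef_.shape)                   # (3, 10): one row per target
    print(ridge.predict(X[:5]).shape)          # (5, 3)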
Exactly, thanks for the clarification.
2012/7/10 Olivier Grisel
> Ok so if I understand correctly the sentence:
>
> > The glmnet implementation is not yet competitive with the current
> > implementation.
>
> Should read:
>
> """
> The l1+l2-penalized least squares regression implemented with
> coordinate descent and covariance updates is not yet competitive with
> the current implementation.
As per Gael's request, here is my progress relative to the initially
stated mid-term goals.
Overall the project is a little behind schedule, and I am fairly confident
about its successful completion.
--
GOAL: Set up a running performance benchmark such as speed.pypy.org or Wes
McK
Dear all,
Since the start of the project I've been in continuous exchange with my
mentor (Alexandre Gramfort) via several pull-request comments. There, I've
been reporting my status and asking for feedback when needed. The prompt
feedback from Alexandre kept me going and assured me that I was on the
right track.
Ok so if I understand correctly the sentence:
> The glmnet implementation is not yet competitive with the current
> implementation.
Should read:
"""
The l1+l2-penalized least squares regression implemented with
coordinate descent and covariance updates is not yet competitive with
the current implementation.
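For context, the "current implementation" is scikit-learn's
coordinate-descent ElasticNet; a baseline comparison like the one being
discussed boils down to timing a fit such as this (toy data and
parameters assumed):

    import time
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import ElasticNet

    X, y = make_regression(n_samples=5000, n_features=500, random_state=0)

    start = time.time()
    model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
    print("coordinate-descent ElasticNet: %.3fs, %d nonzero coefficients"
          % (time.time() - start, np.sum(model.coef_ != 0)))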
"Extremely efficient procedures for fitting the entire lasso or elastic-net
regularization path for linear regression, logistic and multinomial
regression models.
The algorithm uses cyclical coordinate descent in a pathwise fashion, as
described in the paper:
[1] Regularized Paths for Generalized Linear Models via Coordinate Descent.
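In scikit-learn terms, fitting such a full regularization path can be
sketched with enet_path, which likewise runs cyclical coordinate descent
along a decreasing grid of alphas, warm-starting each solve from the
previous one (toy example, not the glmnet code itself):

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import enet_path

    X, y = make_regression(n_samples=200, n_features=50, random_state=0)

    # Coefficients for the whole elastic-net path in one call.
    alphas, coefs, _ = enet_path(X, y, l1_ratio=0.5, n_alphas=20)
    print(alphas.shape)   # (20,)
    print(coefs.shape)    # (n_features, n_alphas) -> (50, 20)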
2012/7/10 iBayer :
> Dear all,
>
> since the start of the project I've been in continuous exchange with my
> mentor (Alexandre Gramfort)
>
> via several pull-request comments. There, I've been reporting my status and
> asking for feedback when needed. The prompt feedback from Alexandre kept me
> going
Hello,
Early-bird registration for Euroscipy 2012 is soon coming to an end, with
the deadline on July 22nd. Don't forget to register soon! Reduced fees
are available for academics, students, and speakers. Registration takes
place online at http://www.euroscipy.org/conference/euroscipy2012.
Euroscip