Congrats Loïc! Looking forward to your comments :)
On 23 Jun 2016 19:09, "Manoj Kumar" wrote:
> Hi Loic,
>
> Congratulations!
>
> On Thu, Jun 23, 2016 at 4:11 AM, Joel Nothman
> wrote:
>
>> Thanks for some great work so far, Loic; I'm looking forward to more of
>> your well-considered comments and contributions!
Hi Loic,
Congratulations!
On Thu, Jun 23, 2016 at 4:11 AM, Joel Nothman
wrote:
> Thanks for some great work so far, Loic; I'm looking forward to more of
> your well-considered comments and contributions!
>
> On 23 June 2016 at 18:52, Arnaud Joly wrote:
>
>> Congratulations, Loic!
>>
>> Arnaud
>>
Reg. the "Needs Review" tag -
Could I request that the maintainers unlabel the PR once a review has been
completed and it is waiting on the author? (That should filter out a lot of
noise.) The use case I envision for this tag would be to serve as a
bookmark or a green flag to the maintainer who labels it s
On 23 June 2016 at 22:47, Raghav R V wrote:
> > "nag if needed"!
>
> I always assume it to be implicit advice ;P
>
I could tell.
___
scikit-learn mailing list
[email protected]
https://mail.python.org/mailman/listinfo/scikit-learn
> "nag if needed"!
I always assume it to be implicit advice ;P
On Wed, Jun 22, 2016 at 9:39 PM, Andreas Mueller wrote:
> Sorry, I've been off review duty for a while, should be back later this
> summer ;)
>
>
> On 06/21/2016 12:09 AM, olologin wrote:
>
>> Hi guys, I know scikit-learn may not
Thanks, Chris. I will look into your recommendations. I have tried an
artificial neural network and it was giving me good results on the test set
as well.
Regards
Waseem
On Thu, Jun 23, 2016 at 12:00 PM, chris brew wrote:
> It is probably a good idea to start by separating off part of your
> training
Thanks for some great work so far, Loic; I'm looking forward to more of
your well-considered comments and contributions!
On 23 June 2016 at 18:52, Arnaud Joly wrote:
> Congratulations, Loic!
>
> Arnaud
>
> > On 23 Jun 2016, at 07:57, Gael Varoquaux
> wrote:
> >
> > Hi,
> >
> > I'd like to welcome
It is probably a good idea to start by separating off part of your training
data into a held-out development set that is not used for training, which
you can use to create learning curves and estimate probable performance on
unseen data. I really recommend Andrew Ng's machine learning course
materials
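The held-out set and learning curves described above can be sketched as follows. This is a minimal illustration, not the poster's actual code: the data here is synthetic (make_regression) and stands in for their own X, y.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import learning_curve, train_test_split

# Synthetic stand-in data; substitute your own X, y.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Hold out a development set that training never touches.
X_train, X_dev, y_train, y_dev = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Learning curve: training vs cross-validated R^2 at growing training sizes.
sizes, train_scores, valid_scores = learning_curve(
    RandomForestRegressor(n_estimators=10, random_state=0),
    X_train, y_train, cv=5, train_sizes=np.linspace(0.1, 1.0, 5))

print(sizes)                      # absolute training-set sizes used
print(train_scores.mean(axis=1))  # mean training R^2 per size
print(valid_scores.mean(axis=1))  # mean cross-validated R^2 per size
```

A large persistent gap between the two curves suggests overfitting; the held-out X_dev, y_dev then gives an honest estimate of performance on unseen data.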
Hi Brian,
Thanks for your email. I did try
tree.export_graphviz(model, out_file='tree.dot'), but I got an error saying
AttributeError: 'RandomForestRegressor' object has no attribute 'tree_', which
I think is because this is a forest, not a single tree; is that why I can't
visualise it?
Also, do yo
Hi Muhammad,
If you've not yet read the documentation I would highly recommend starting
with the Decision Tree [1] and working your way through the examples on
your own data. You'll find an example [2] of how to generate a graphviz
compatible dot file and visualise it.
Once you're satisfied that y
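The documented graphviz export mentioned above looks roughly like this on a single decision tree (the iris dataset here is just a convenient placeholder for "your own data"):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    iris.data, iris.target)

# out_file=None returns the dot source as a string; feature and class
# names make the rendered graph readable.
dot = export_graphviz(clf, out_file=None,
                      feature_names=iris.feature_names,
                      class_names=list(iris.target_names))
```

The resulting dot source can be rendered with the graphviz `dot` command or the python-graphviz package.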
Hi All,
I am trying to use random forests for a regression problem, with 10 input
variables and one output variable. I am getting a very good fit even with
default parameters and a low n_estimators. Even with n_estimators = 10, I get
an R^2 value of 0.95 on the testing dataset (MSE = 23) and a value of 0.99 for
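A quick sanity check for a suspiciously high score like this is to compare train and test R^2 on a clean split. This sketch uses synthetic make_regression data in place of the poster's 10-variable dataset, so the exact numbers will differ:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Stand-in for the original 10-feature regression data.
X, y = make_regression(n_samples=1000, n_features=10, noise=5.0,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

rf = RandomForestRegressor(n_estimators=10, random_state=0).fit(
    X_train, y_train)
pred = rf.predict(X_test)
r2 = r2_score(y_test, pred)
mse = mean_squared_error(y_test, pred)
print(r2, mse)
```

If the test-set R^2 stays very high on such a split, also check that no feature leaks the target (e.g. a variable computed from the output), which is a common cause of near-perfect scores.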
Congratulations, Loic!
Arnaud
> On 23 Jun 2016, at 07:57, Gael Varoquaux
> wrote:
>
> Hi,
>
> I'd like to welcome Loic Esteve (@lesteve) as a new core contributor to
> the scikit-learn team.
>
> Loic has been reviewing a number of PRs very seriously, beyond his own
> contributions. It's great t