Hi folks,
I went ahead and made a POC for a more complete implementation of option #4:
https://github.com/staple/scikit-learn/commit/e76fa8887cd35ad7a249ee157067cd12c89bdefb
Aaron
On Tue, Oct 28, 2014 at 11:35 PM, Aaron Staple
wrote:
> Following up on Andy’s questions:
>
> The scorer implemen
hi all,
Please, I need help on two things:
First, where to learn/improve my Python skills, and secondly, how to apply
scikit-learn to pinpoint/extract features that are similar across all my data
sets.
Raphael
On 2 November 2014 21:52, Andy wrote:
> Why doesn't it help you?
> And why do you want
I'll definitely try it out! Thanks for all the work.
On Sun, Nov 2, 2014 at 9:17 PM, Robert McGibbon wrote:
> The feature set is pretty similar to spearmint. We have found that the MOE
> GP package is much more robust than the code in spearmint though (it was
> open sourced by yelp and is used
On 11/02/2014 04:15 PM, Lars Buitinck wrote:
> 2014-11-02 22:09 GMT+01:00 Andy :
>>> No. That would be backward stepwise selection. Neither that, nor its
>>> forward cousin (find most discriminative feature, then second-most,
>>> etc.) are implemented in scikit-learn.
>>>
>> Isn't RFE the backward
2014-11-02 22:09 GMT+01:00 Andy :
>> No. That would be backward stepwise selection. Neither that, nor its
>> forward cousin (find most discriminative feature, then second-most,
>> etc.) are implemented in scikit-learn.
>>
> Isn't RFE backward stepwise selection with a maximum number of features?
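For reference, RFE does do recursive backward elimination down to a fixed number of features. A minimal sketch (toy data and the LinearSVC ranker are my choices, not from the thread):

```python
# Minimal sketch: RFE repeatedly drops the lowest-weighted feature
# (per the estimator's coef_) until n_features_to_select remain.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=100, n_features=10,
                           n_informative=3, random_state=0)
selector = RFE(LinearSVC(), n_features_to_select=3)
selector.fit(X, y)
print(selector.support_)  # boolean mask of the 3 retained features
```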
On 10/15/2014 04:59 AM, Michael Eickenberg wrote:
> +1 for what Gaël and Arnaud say.
>
> In addition to that, I don't know if a distinction between two groups
> of metrics is necessarily straightforward. At which number of
> properties would one draw the line? Triangular inequality? Is KL
> dive
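On the KL question: KL divergence is asymmetric, so it already fails the usual metric axioms. A quick numeric check (the distributions are made up for illustration):

```python
# scipy.stats.entropy(p, q) computes KL(p || q); swapping the
# arguments gives a different value, so KL is not symmetric.
import numpy as np
from scipy.stats import entropy

p = np.array([0.8, 0.1, 0.1])
q = np.array([0.4, 0.3, 0.3])
print(entropy(p, q), entropy(q, p))  # the two directions differ
```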
On 10/20/2014 04:29 PM, Lars Buitinck wrote:
> 2014-10-20 22:08 GMT+02:00 George Bezerra :
>> Not an expert, but I think the idea is that you remove (or add) features one
>> by one, starting from the ones that have the least (or most) impact.
>>
>> E.g., try removing a feature, if performance impro
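The greedy backward procedure George describes is not implemented in scikit-learn, but it can be sketched with `cross_val_score` (the estimator, stopping rule, and toy data below are my assumptions, purely illustrative):

```python
# Sketch of greedy backward elimination: at each step, drop the
# feature whose removal hurts the CV score least; stop once every
# removal makes the score worse.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=8,
                           n_informative=3, random_state=0)
remaining = list(range(X.shape[1]))
best = cross_val_score(LogisticRegression(), X, y, cv=3).mean()
while len(remaining) > 1:
    scores = [(cross_val_score(LogisticRegression(),
                               X[:, [f for f in remaining if f != c]],
                               y, cv=3).mean(), c)
              for c in remaining]
    score, worst = max(scores)
    if score < best:
        break  # every removal hurts performance; stop
    best, remaining = score, [f for f in remaining if f != worst]
print(remaining)  # indices of the surviving features
```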
Why doesn't it help you?
And why do you want to port an SVM?
So this is more of a coding exercise?
There is a blog-post by Mathieu on implementing SVMs using CVXOPT
(http://www.mblondel.org/journal/2010/09/19/support-vector-machines-in-python/)
and there are some more educational implementations f
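If the goal is to *use* an SVM from Python rather than reimplement one, the scikit-learn equivalent of a basic Matlab svmtrain/svmclassify workflow is only a few lines (toy data below is my own, for illustration):

```python
# Minimal sketch: train an RBF-kernel SVC and score it on held-out
# data -- roughly what a ported Matlab SVM workflow would become.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel='rbf', C=1.0).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # test-set accuracy
```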
Hi Sturla,
yes I did - doesn't help me very much I fear.
Best
Philipp
On 01/11/14 12:16, Philipp Schiffer wrote:
> Would someone maybe know of a good tutorial, which could help porting a
> SVM written in Matlab to Python and Scikit-learn?
I suppose you have seen this?
http://scikit-learn.org/
The feature set is pretty similar to spearmint. We have found that the MOE
GP package is much more robust than the code in spearmint though (it was
open sourced by yelp and is used in production there).
In contrast to hyperopt, osprey is a little bit more geared towards ML, in
that ideas about cro
Looks neat, but how does it differ from hyperopt or spearmint?
On Fri, Oct 31, 2014 at 11:46 PM, Robert McGibbon
wrote:
> Hey,
>
> I started working on a project for hyperparameter optimization of sklearn
> models.
> The package is here: https://github.com/rmcgibbo/osprey. It's designed to
> be e
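For comparison, scikit-learn's own baseline for this is grid/randomized search, which tools like osprey, spearmint, and hyperopt aim to improve on with smarter (e.g. GP-based) search. A minimal sketch of the built-in randomized search (toy data and the `C` range are my choices):

```python
# Minimal sketch: RandomizedSearchCV samples hyperparameters from
# the given distributions and keeps the best cross-validated model.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)
search = RandomizedSearchCV(SVC(), {'C': loguniform(1e-2, 1e2)},
                            n_iter=10, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_['C'])  # best sampled value of C
```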