On Sun, Nov 20, 2011 at 3:56 PM, Alexandre Gramfort
<[email protected]> wrote:
>> 2. Gaussian process w. Expected Improvement global optimization.
>> This is an established technique for global optimization that has
>> about the right scaling properties to be good for hyper-parameter
>> optimization.  I think you probably can't do much better than a
>> Gaussian Process (GP) with Expected Improvement (EI) for optimizing
>> the parameters of, say, an SVM, but we can only try and see (and
>> compare with the variety of other techniques for global optimization).
>> The scikit already has GP fitting in it, scipy has good optimization
>> routines, so why not put them together to make a hyper-parameter
>> optimizer? I think this would be a good addition to the scikit, and
>> not too hard (the hard parts are already done).
>
> Can you point us to some PDFs? Or maybe write some kind of pseudo-code?

Eric Brochu's thesis: chapter 2 is very readable and gives lots of good
references as well.
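
To make the "put scikit's GP together with scipy's optimizers" idea
concrete, here is a minimal sketch of a GP + EI loop. The toy objective
and all parameter choices (Matern kernel, 10 iterations, the search
bounds) are illustrative assumptions, not anything from this thread, and
it uses the modern `GaussianProcessRegressor` API rather than the 2011
`GaussianProcess` class:

```python
# Sketch of GP + Expected Improvement (EI) hyper-parameter search.
# NOTE: toy objective and settings are assumptions for illustration only.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy stand-in for a cross-validation error (e.g. SVM error vs. log-C).
    return np.sin(3 * x) + x ** 2 - 0.7 * x

bounds = (-1.0, 2.0)
rng = np.random.default_rng(0)

# A few random initial evaluations of the expensive objective.
X = rng.uniform(*bounds, size=4).reshape(-1, 1)
y = np.array([objective(x[0]) for x in X])

# alpha adds jitter so near-duplicate proposals don't break the Cholesky.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                              alpha=1e-6, normalize_y=True)

def neg_ei(x, y_best):
    # Negative EI (we minimize), for a minimization objective.
    mu, sigma = gp.predict(np.atleast_2d(x), return_std=True)
    mu, sigma = mu[0], max(sigma[0], 1e-9)
    z = (y_best - mu) / sigma
    return -((y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z))

for _ in range(10):
    gp.fit(X, y)
    y_best = y.min()
    # Cheap inner loop: multi-start local optimization of EI.
    starts = rng.uniform(*bounds, size=10)
    best = min((minimize(neg_ei, s, args=(y_best,), bounds=[bounds])
                for s in starts), key=lambda r: r.fun)
    x_next = best.x.reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0, 0]))

best_x = X[np.argmin(y), 0]  # best hyper-parameter found so far
```

The expensive objective is only called once per outer iteration; all the
work of deciding where to look next happens on the cheap GP surrogate.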

> And as usual pull request / patch welcome :)

Let me work out the bugs in hyperopt's GP optimization first, and then
maybe we can talk more about it at NIPS.

- James

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general