On 10/06/2012 04:21 PM, James Bergstra wrote:
> Also on the subject of @amueller's recent PR on hyperparameter
> optimization, I was wondering if anyone is interested in packaging
> optimization algorithms for tree-structured spaces. There are several
> algorithms in the works by myself and others (which build on many of
> the other algorithms already in sklearn) so I think it's a good time
> for a discussion on this kind of interface. Do you think that
> algorithms for optimizing cost functions over this sort of search
> space should be in sklearn, or should they be in another package
> (scikit-bayesian-optimization, aka skbo?), which would certainly be
> designed to work well with sklearn?

OT: I didn't know about your newest work. Did you announce that somewhere?
I think tree-structured parameter spaces are the norm rather than the exception and fit well with large pipelines. Using dicts and lists as you proposed is a good idea and should be easy to implement for the random search.

The question is whether we want global optimization for that. I think this might be out of scope currently. I'd love to have global optimization over an R^n search space, but more than that seems like asking a bit much for the moment, imho. I'd also rather focus on the 1.0 release atm.

Cheers,
Andy

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
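P.S. For concreteness, here is a minimal sketch of what random sampling from a dict/list-encoded tree-structured space could look like. The encoding conventions (lists as mutually exclusive choice nodes, `("loguniform", lo, hi)` / `("randint", lo, hi)` tuples as continuous/integer ranges) and all names are assumptions for illustration, not an existing sklearn API:

```python
import math
import random

# Hypothetical encoding: dicts map parameter names to sub-spaces,
# lists mark mutually exclusive branches (e.g. which classifier to
# use), tuples mark ranges to sample from, and anything else is a
# literal leaf value.
space = {
    "classifier": [
        {"name": "svm",
         "C": ("loguniform", 1e-3, 1e3),
         "kernel": ["linear", "rbf"]},
        {"name": "random_forest",
         "n_estimators": ("randint", 10, 200)},
    ],
}

def sample(node, rng):
    """Recursively draw one configuration from the tree."""
    if isinstance(node, dict):
        return {k: sample(v, rng) for k, v in node.items()}
    if isinstance(node, list):
        # choice node: pick one branch, then recurse into it, so
        # parameters of unchosen branches never appear in the result
        return sample(rng.choice(node), rng)
    if isinstance(node, tuple):
        kind, low, high = node
        if kind == "loguniform":
            return math.exp(rng.uniform(math.log(low), math.log(high)))
        if kind == "randint":
            return rng.randrange(low, high)
    return node  # literal leaf

rng = random.Random(0)
config = sample(space, rng)
```

Because the recursion only descends into the chosen branch, each drawn `config` contains only the parameters relevant to that branch, which is the main point of a tree-structured space.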
