On Tue, Mar 24, 2015 at 9:38 PM, Christof Angermueller <
c.angermuel...@gmail.com> wrote:

> Thanks Andy! I replied to your comments:
>
> https://docs.google.com/document/d/1bAWdiu6hZ6-FhSOlhgH-7x3weTluxRfouw9op9bHBxs/edit?usp=sharing
> .
>
> In summary,
> * I will not mention parallelization as an extended feature,
> * I will suggest concrete data sets for benchmarking,
> * I will mention tasks for which I expect an improvement.
>
> Any further ideas?
> Where can I find the PR for gaussian_processes? I would like to know
> what will be implemented and to what extent I can contribute.
>

https://github.com/scikit-learn/scikit-learn/pull/4270/


>
> I will upload the final version to melange tomorrow.
>
>
> Cheers,
> Christof
>
>
> On 20150324 00:07, Andreas Mueller wrote:
> > Hi Christof.
> > I gave some comments on the google doc.
> >
> > Andy
> >
> > On 03/19/2015 05:12 PM, Christof Angermueller wrote:
> >> Hi All,
> >>
> >> you can find my proposal for the hyperparameter optimization topic here:
> >> * http://goo.gl/XHuav8
> >> *
> >>
> https://docs.google.com/document/d/1bAWdiu6hZ6-FhSOlhgH-7x3weTluxRfouw9op9bHBxs/edit?usp=sharing
> >>
> >> Please give feedback!
> >>
> >> Cheers,
> >> Christof
> >>
> >>
> >> On 20150310 15:27, Sturla Molden wrote:
> >>> Andreas Mueller <t3k...@gmail.com> wrote:
> >>>> Does emcee implement Bayesian optimization?
> >>>> What is the distribution you assume? GPs?
> >>>> I thought emcee was a sampler. I need to check in with Dan ;)
> >>> Just pick the mode :-)
> >>>
> >>> The distribution is whatever you want it to be.
> >>>
> >>> Sturla
> >>>
> >>>
> >>>
> >>>
> >>>> On 03/09/2015 09:27 AM, Sturla Molden wrote:
> >>>>> For Bayesian optimization with MCMC (which I believe spearmint also
> >>>>> does) I have found that emcee is very nice:
> >>>>>
> >>>>> http://dan.iel.fm/emcee/current/
> >>>>>
> >>>>> It is much faster than naïve MCMC methods, and all we need to do is
> >>>>> provide a callback that computes the log-likelihood given the
> >>>>> parameter set (which can just as well be hyperparameters).
> >>>>>
> >>>>> To do this computation in parallel, one can simply evaluate the
> >>>>> walkers in parallel and do a barrier synchronization after each step.
> >>>>> The contention due to the barrier can be reduced by increasing the
> >>>>> number of walkers as needed. Also, one should use something like DCMT
> >>>>> for random numbers to make sure there is no contention for the PRNG
> >>>>> and to ensure that each thread (or process) gets an independent
> >>>>> stream of random numbers.
> >>>>>
> >>>>> emcee implements this kind of optimization using multiprocessing,
> >>>>> but it passes parameter sets around using pickle and is therefore not
> >>>>> very efficient compared to just storing the current parameters for
> >>>>> each walker in shared memory. So there is a lot of room for
> >>>>> improvement here.
> >>>>>
> >>>>>
> >>>>> Sturla
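The walker-plus-callback contract Sturla describes can be made concrete with a toy, pure-NumPy sketch. To be clear, this is not emcee's implementation (emcee uses affine-invariant stretch moves, not plain Metropolis), and all names here are illustrative; it only demonstrates the interface idea: the user supplies a log-likelihood callback, and the per-step loop acts as the barrier, so the proposal evaluations between barriers could be farmed out to threads or processes.

```python
import numpy as np

# Toy sketch (NOT emcee itself): an ensemble of walkers advanced one
# Metropolis step at a time, given only a user-supplied log-likelihood
# callback.  The per-step loop is the barrier: every walker finishes
# step t before any walker starts step t+1, so the callback evaluations
# inside each step are the part that could run in parallel.

def ensemble_metropolis(log_prob, p0, n_steps, step_size=0.5, seed=0):
    """p0: (n_walkers, ndim) start positions; returns final positions."""
    rng = np.random.default_rng(seed)
    pos = np.array(p0, dtype=float)
    lp = np.array([log_prob(p) for p in pos])
    for _ in range(n_steps):
        prop = pos + step_size * rng.standard_normal(pos.shape)
        lp_prop = np.array([log_prob(p) for p in prop])  # parallelizable
        accept = np.log(rng.random(len(pos))) < lp_prop - lp
        pos[accept] = prop[accept]
        lp[accept] = lp_prop[accept]
    return pos

# Toy target: a standard normal log-density in 2D.
final = ensemble_metropolis(lambda p: -0.5 * np.sum(p ** 2),
                            np.zeros((20, 2)), n_steps=200)
```

In emcee itself the callback is handed to `EnsembleSampler`, which takes care of the move strategy and the multiprocessing pool.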
> >>>>>
> >>>>>
> >>>>>
> >>>>> On 07/03/15 15:06, Kyle Kastner wrote:
> >>>>>> I think finding one method is indeed the goal. Even if it is not the
> >>>>>> best every time, a 90% solution for 10% of the complexity would be
> >>>>>> awesome. I think GPs with parameter space warping are *probably* the
> >>>>>> best solution but only a good implementation will show for sure.
> >>>>>>
> >>>>>> Spearmint and hyperopt exist and work for more complex stuff, but
> >>>>>> with far more moving parts and complexity. Having a tool that is as
> >>>>>> easy to use as the grid search and random search modules currently
> >>>>>> are would be a big benefit.
> >>>>>>
> >>>>>> My .02c
> >>>>>>
> >>>>>> Kyle
> >>>>>>
> >>>>>> On Mar 7, 2015 7:48 AM, "Christof Angermueller"
> >>>>>> <c.angermuel...@gmail.com
> >>>>>> <mailto:c.angermuel...@gmail.com>> wrote:
> >>>>>>
> >>>>>>         Hi Andreas (and others),
> >>>>>>
> >>>>>>         I am a PhD student in Bioinformatics at the University of
> >>>>>>         Cambridge (EBI/EMBL), supervised by Oliver Stegle and Zoubin
> >>>>>>         Ghahramani. In my PhD, I apply and develop different machine
> >>>>>>         learning algorithms for analyzing biological data.
> >>>>>>
> >>>>>>         There are different approaches for hyperparameter
> >>>>>>         optimization, some of which you mentioned on the topics page:
> >>>>>>         * Sequential Model-Based Global Optimization (SMBO) ->
> >>>>>>         http://www.cs.ubc.ca/labs/beta/Projects/SMAC/
> >>>>>>         * Gaussian Processes (GP) -> Spearmint:
> >>>>>>         https://github.com/JasperSnoek/spearmint
> >>>>>>         * Tree-structured Parzen Estimator Approach (TPE) -> Hyperopt:
> >>>>>>         http://hyperopt.github.io/hyperopt/
> >>>>>>
> >>>>>>         And more recent approaches based on neural networks:
> >>>>>>         * Deep Networks for Global Optimization (DNGO) ->
> >>>>>>         http://arxiv.org/abs/1502.05700
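For concreteness, the loop that all of the approaches above share (fit a surrogate to the evaluations so far, pick the candidate the surrogate is most optimistic about, evaluate it, repeat) can be sketched in pure NumPy. This is a toy sketch with a tiny one-dimensional GP surrogate and a lower-confidence-bound acquisition; it is not the API of Spearmint, Hyperopt, or SMAC, and all names and constants are illustrative.

```python
import numpy as np

# Toy SMBO loop: GP surrogate + lower-confidence-bound acquisition,
# minimizing a 1-D objective over [0, 1].

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-4):
    # Standard GP regression equations with a small noise jitter.
    k = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_s = rbf(x_obs, x_new)
    sol = np.linalg.solve(k, k_s)
    mu = sol.T @ y_obs
    var = 1.0 - np.sum(k_s * sol, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def smbo(objective, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    x = list(rng.random(3))             # a few random initial points
    y = [objective(v) for v in x]
    cand = np.linspace(0.0, 1.0, 200)   # candidate hyperparameter values
    for _ in range(n_iter):
        mu, sd = gp_posterior(np.array(x), np.array(y), cand)
        nxt = cand[np.argmin(mu - 1.96 * sd)]  # lower confidence bound
        x.append(nxt)
        y.append(objective(nxt))
    return x[int(np.argmin(y))]

# Pretend the single hyperparameter's loss surface is (v - 0.7)**2,
# e.g. validation error as a function of a regularization strength.
best = smbo(lambda v: (v - 0.7) ** 2)
```

The approaches listed above differ mainly in the surrogate (random forests for SMAC, GPs for Spearmint, Parzen estimators for TPE, neural networks for DNGO) and in the acquisition function; the outer loop is the same.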
> >>>>>>
> >>>>>>         The idea is to implement ONE of these approaches, right?
> >>>>>>
> >>>>>>         Do you prefer a particular approach due to theoretical or
> >>>>>>         practical reasons?
> >>>>>>
> >>>>>>         Spearmint also supports distributing jobs on a cluster
> >>>>>>         (SGE). I imagine that this requires platform-specific code,
> >>>>>>         which could be difficult to maintain. What do you think?
> >>>>>>
> >>>>>>         Spearmint and hyperopt are already established Python
> >>>>>>         packages. Another sklearn implementation might be considered
> >>>>>>         redundant and hard to establish. Do you have a particular new
> >>>>>>         feature in mind?
> >>>>>>
> >>>>>>
> >>>>>>         Cheers,
> >>>>>>         Christof
> >>>>>>
> >>>>>>         --
> >>>>>>         Christof Angermueller
> >>>>>>         cangermuel...@gmail.com
> >>>>>> <mailto:cangermuel...@gmail.com>
> >>>>>>         http://cangermueller.com
> >>>>>>
> >>>>>>
> >>>>>>
>  
> ------------------------------------------------------------------------------
> >>>>>>         Dive into the World of Parallel Programming The Go Parallel
> Website,
> >>>>>>         sponsored
> >>>>>>         by Intel and developed in partnership with Slashdot Media,
> is your
> >>>>>>         hub for all
> >>>>>>         things parallel software development, from weekly thought
> leadership
> >>>>>>         blogs to
> >>>>>>         news, videos, case studies, tutorials and more. Take a look
> and join the
> >>>>>>         conversation now. http://goparallel.sourceforge.net/
> >>>>>>         _______________________________________________
> >>>>>>         Scikit-learn-general mailing list
> >>>>>>         Scikit-learn-general@lists.sourceforge.net
> >>>>>>         <mailto:Scikit-learn-general@lists.sourceforge.net>
> >>>>>>
> https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> >
> >
>
> --
> Christof Angermueller
> cangermuel...@gmail.com
> http://cangermueller.com
>
>
>
>