This is off-topic, but I should note that there is a patch at
https://github.com/scikit-learn/scikit-learn/pull/2784 awaiting review for
a while now...

On 20 March 2015 at 08:16, Charles Martin <charlesmarti...@gmail.com> wrote:

> I would like to propose extending the LinearSVC class
> by replacing the bundled liblinear version with a newer one that
>
> 1. allows setting instance weights
> 2. provides the dual variables / Lagrange multipliers
>
> This would facilitate research and development of transductive SVMs
> and related semi-supervised methods.
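Both requested features are already reachable today through the kernelized SVC with a linear kernel: slower than liblinear on large data, but its fit() accepts sample_weight and it exposes the dual variables as dual_coef_. A minimal sketch (the data and weights below are made up):

```python
import numpy as np
from sklearn.svm import SVC

# Toy binary problem.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) - 1, rng.randn(20, 2) + 1])
y = np.array([0] * 20 + [1] * 20)

# Per-instance weights: here we up-weight the second class.
w = np.where(y == 1, 2.0, 1.0)

# SVC with a linear kernel accepts sample_weight and exposes the
# dual variables (signed alphas) of the support vectors.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y, sample_weight=w)

print(clf.dual_coef_.shape)  # (1, n_support_vectors)
print(clf.support_)          # indices of the support vectors
```

Each alpha is bounded by C times that instance's weight, which is exactly the "instance weight" semantics the proposal asks liblinear to provide.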
>
>
> Charles H Martin, PhD
>
>
>
> On Thu, Mar 19, 2015 at 2:12 PM, Christof Angermueller
> <c.angermuel...@gmail.com> wrote:
> > Hi All,
> >
> > you can find my proposal for the hyperparameter optimization topic here:
> > * http://goo.gl/XHuav8
> > * https://docs.google.com/document/d/1bAWdiu6hZ6-FhSOlhgH-7x3weTluxRfouw9op9bHBxs/edit?usp=sharing
> >
> > Please give feedback!
> >
> > Cheers,
> > Christof
> >
> >
> > On 2015-03-10 15:27, Sturla Molden wrote:
> >> Andreas Mueller <t3k...@gmail.com> wrote:
> >>> Does emcee implement Bayesian optimization?
> >>> What is the distribution you assume? GPs?
> >>> I thought emcee was a sampler. I need to check in with Dan ;)
> >> Just pick the mode :-)
> >>
> >> The distribution is whatever you want it to be.
> >>
> >> Sturla
> >>
> >>>
> >>> On 03/09/2015 09:27 AM, Sturla Molden wrote:
> >>>> For Bayesian optimization with MCMC (which I believe spearmint also
> >>>> does) I have found that emcee is very nice:
> >>>>
> >>>> http://dan.iel.fm/emcee/current/
> >>>>
> >>>> It is much faster than naïve MCMC methods, and all we need to do is
> >>>> provide a callback that computes the log-likelihood given the
> >>>> parameter set (which can just as well be hyperparameters).
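As a sketch of how little is problem specific, here is a numpy-only random-walk Metropolis loop with the same callback interface. This is deliberately not emcee's affine-invariant ensemble algorithm, and the target, step size, and chain length are arbitrary choices:

```python
import numpy as np

def log_prob(theta):
    # The only problem-specific piece: a callback returning the
    # log-posterior of the (hyper)parameters. Here: standard normal.
    return -0.5 * np.sum(theta ** 2)

def metropolis(log_prob, theta0, n_steps=5000, step=0.5, seed=0):
    # Plain random-walk Metropolis: slower to mix than emcee's
    # ensemble moves, but the interface is the same.
    rng = np.random.RandomState(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_prob(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        prop = theta + step * rng.randn(theta.size)
        lp_prop = log_prob(prop)
        if np.log(rng.rand()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(log_prob, np.zeros(2))
print(chain[1000:].mean(axis=0))  # should be near [0, 0]
```

"Just pick the mode" then amounts to taking the best sample (or the highest-log_prob one) from the chain as the optimized parameter set.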
> >>>>
> >>>> To do this computation in parallel one can simply evaluate the walkers
> >>>> in parallel and do a barrier synchronization after each step. The
> >>>> contention due to the barrier can be reduced by increasing the number
> >>>> of walkers as needed. Also, one should use something like DCMT for
> >>>> random numbers to make sure there is no contention for the PRNG and to
> >>>> ensure that each thread (or process) gets an independent stream of
> >>>> random numbers.
> >>>>
> >>>> emcee implements this kind of optimization using multiprocessing, but
> >>>> it passes parameter sets around using pickle and is therefore not very
> >>>> efficient compared to just storing the current parameters for each
> >>>> walker in shared memory. So there is a lot of room for improvement here.
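The per-step barrier and per-walker streams described above can be sketched with threads and seeded numpy generators. The seeded RandomState objects are only a stand-in for DCMT (distinct seeds, not guaranteed stream independence), and the single-walker move is a toy stand-in for a real ensemble move:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def log_prob(theta):
    return -0.5 * np.sum(theta ** 2)

N_WALKERS, N_DIM, N_STEPS = 8, 2, 200

# One independent PRNG stream per walker, so the workers never
# contend for a shared generator.
rngs = [np.random.RandomState(1000 + k) for k in range(N_WALKERS)]
walkers = np.zeros((N_WALKERS, N_DIM))  # current state, in shared memory
lps = np.array([log_prob(w) for w in walkers])

def advance(k):
    # One Metropolis move for walker k, using only its own stream.
    prop = walkers[k] + 0.5 * rngs[k].randn(N_DIM)
    lp = log_prob(prop)
    if np.log(rngs[k].rand()) < lp - lps[k]:
        walkers[k], lps[k] = prop, lp

with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(N_STEPS):
        # map() returning is the barrier: no walker starts step i+1
        # until every walker has finished step i.
        list(pool.map(advance, range(N_WALKERS)))

print(walkers.shape)  # (8, 2)
```

Because the walkers live in a shared numpy array and each worker writes only its own row, nothing is pickled between steps, which is the inefficiency Sturla points out in emcee's multiprocessing path.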
> >>>>
> >>>>
> >>>> Sturla
> >>>>
> >>>>
> >>>>
> >>>> On 07/03/15 15:06, Kyle Kastner wrote:
> >>>>> I think finding one method is indeed the goal. Even if it is not the
> >>>>> best every time, a 90% solution for 10% of the complexity would be
> >>>>> awesome. I think GPs with parameter space warping are *probably* the
> >>>>> best solution but only a good implementation will show for sure.
> >>>>>
> >>>>> Spearmint and hyperopt exist and work for more complex stuff, but
> >>>>> with far more moving parts and complexity. Having a tool that is as
> >>>>> easy to use as the current grid search and random search modules
> >>>>> would be a big benefit.
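For reference, the interface Kyle is pointing at: the existing random search module needs only an estimator, a parameter space, and fit(). The data, estimator, and candidate grids below are arbitrary, and the import path was sklearn.grid_search in releases contemporary with this thread:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV  # sklearn.grid_search in older releases
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Lists are sampled uniformly; scipy.stats distributions also work.
search = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": [0.1, 1, 10], "gamma": [0.001, 0.01, 0.1]},
    n_iter=5,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

A Bayesian optimizer that dropped into this same estimator/param-space/fit() shape would deliver the "90% solution for 10% of the complexity" described above.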
> >>>>>
> >>>>> My .02c
> >>>>>
> >>>>> Kyle
> >>>>>
> >>>>> On Mar 7, 2015 7:48 AM, "Christof Angermueller"
> >>>>> <c.angermuel...@gmail.com> wrote:
> >>>>>
> >>>>>       Hi Andreas (and others),
> >>>>>
> >>>>>       I am a PhD student in Bioinformatics at the University of
> >>>>>       Cambridge (EBI/EMBL), supervised by Oliver Stegle and Zoubin
> >>>>>       Ghahramani. In my PhD, I apply and develop different machine
> >>>>>       learning algorithms for analyzing biological data.
> >>>>>
> >>>>>       There are different approaches for hyperparameter optimization,
> >>>>>       some of which you mentioned on the topics page:
> >>>>>       * Sequential Model-Based Global Optimization (SMBO) ->
> >>>>>       http://www.cs.ubc.ca/labs/beta/Projects/SMAC/
> >>>>>       * Gaussian Processes (GP) -> Spearmint:
> >>>>>       https://github.com/JasperSnoek/spearmint
> >>>>>       * Tree-structured Parzen Estimator Approach (TPE) -> Hyperopt:
> >>>>>       http://hyperopt.github.io/hyperopt/
> >>>>>
> >>>>>       And a more recent approach based on neural networks:
> >>>>>       * Deep Networks for Global Optimization (DNGO) ->
> >>>>>       http://arxiv.org/abs/1502.05700
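As a concrete illustration of the SMBO family, a numpy-only sketch of the loop: fit a surrogate to past evaluations, minimize an acquisition function over candidates, evaluate the suggested point, repeat. The RBF kernel, lower-confidence-bound acquisition, and toy one-dimensional objective are all arbitrary choices here, not taken from any of the packages above:

```python
import numpy as np

# Toy objective to minimize; in practice this would be a
# cross-validated score as a function of one hyperparameter.
def objective(x):
    return (x - 0.3) ** 2

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression equations (zero mean, RBF kernel).
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    Ks = rbf(X, Xs)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # rbf(x, x) == 1
    return mu, np.maximum(var, 1e-12)

grid = np.linspace(0.0, 1.0, 201)
rng = np.random.RandomState(0)
X = rng.rand(3)          # a few random initial evaluations
y = objective(X)

for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    # Lower confidence bound: trade off the surrogate's predicted
    # mean against its uncertainty.
    acq = mu - 2.0 * np.sqrt(var)
    x_next = grid[np.argmin(acq)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print(X[np.argmin(y)])  # best x found; the true minimum is at 0.3
```

Spearmint, SMAC, and DNGO differ mainly in the surrogate (GP, random forest, neural network) and the acquisition function, while the outer loop stays essentially this shape.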
> >>>>>
> >>>>>       The idea is to implement ONE of these approaches, right?
> >>>>>
> >>>>>       Do you prefer a particular approach for theoretical or practical
> >>>>>       reasons?
> >>>>>
> >>>>>       Spearmint also supports distributing jobs on a cluster (SGE). I
> >>>>>       imagine that this requires platform-specific code, which could be
> >>>>>       difficult to maintain. What do you think?
> >>>>>
> >>>>>       Spearmint and hyperopt are already established Python packages.
> >>>>>       Another sklearn implementation might be considered redundant and
> >>>>>       hard to establish. Do you have a particular new feature in mind?
> >>>>>
> >>>>>
> >>>>>       Cheers,
> >>>>>       Christof
> >>>>>
> >>>>>       --
> >>>>>       Christof Angermueller
> >>>>>       cangermuel...@gmail.com
> >>>>>       http://cangermueller.com
> >>>>>
> >
> > --
> > Christof Angermueller
> > cangermuel...@gmail.com
> > http://cangermueller.com
>
------------------------------------------------------------------------------
Dive into the World of Parallel Programming The Go Parallel Website, sponsored
by Intel and developed in partnership with Slashdot Media, is your hub for all
things parallel software development, from weekly thought leadership blogs to
news, videos, case studies, tutorials and more. Take a look and join the 
conversation now. http://goparallel.sourceforge.net/
_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
