On 7 January 2017 at 21:20, Sebastian Raschka <se.rasc...@gmail.com> wrote:

> Hi, Thomas,
> sorry, I overlooked the regression part …
> This would be a bit trickier; I am not sure what a good strategy for
> averaging regression outputs would be. However, if you just want to compute
> the average prediction per sample, you could do something like
> np.mean(np.asarray([r.predict(X) for r in list_of_your_mlps]), axis=0)
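>
> Spelled out with toy data (just a rough sketch; substitute your own
> pre-fitted MLPs for the list below):
>
> import numpy as np
> from sklearn.datasets import make_regression
> from sklearn.neural_network import MLPRegressor
>
> X, y = make_regression(n_samples=200, n_features=10, random_state=0)
>
> # stand-ins for your pre-fitted regressors (e.g. the top 10% by CV R^2)
> mlps = [MLPRegressor(random_state=s, max_iter=2000).fit(X, y) for s in range(5)]
>
> all_preds = np.asarray([m.predict(X) for m in mlps])  # shape: (n_models, n_samples)
> consensus = all_preds.mean(axis=0)                    # one averaged value per sample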
>
> However, it may be better to use stacking: use the outputs of r.predict(X)
> as meta-features to train a second-level model on top of them?
>

You mean to train an SVR to combine the predictions of the top 10% of
MLPRegressors, using the same data that were used to train the
MLPRegressors? Wouldn't that lead to overfitting?
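
For concreteness, something like this rough sketch (toy data standing in for
my actual setup)?

import numpy as np
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=10, random_state=0)

# stand-ins for the top-10% MLPRegressors, already fitted on X, y
top_mlps = [MLPRegressor(random_state=s, max_iter=2000).fit(X, y) for s in range(3)]

# meta-features: one column of predictions per base regressor
meta_X = np.column_stack([m.predict(X) for m in top_mlps])

# second-level model fitted on the base models' predictions of the SAME data
meta_svr = SVR().fit(meta_X, y)

Since the meta-features are the base models' predictions on their own
training data, the SVR would be fitted on overly optimistic inputs, which is
what makes me suspect overfitting.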


>
> Best,
> Sebastian
>
> > On Jan 7, 2017, at 1:49 PM, Thomas Evangelidis <teva...@gmail.com> wrote:
> >
> > Hi Sebastian,
> >
> > Thanks, I will try it in another classification problem I have. However,
> > this time I am using regressors, not classifiers.
> >
> > On Jan 7, 2017 19:28, "Sebastian Raschka" <se.rasc...@gmail.com> wrote:
> > Hi, Thomas,
> >
> > the VotingClassifier can combine different models via majority voting
> > among their predictions. Unfortunately, it refits the classifiers (after
> > cloning them). I think we implemented it this way to keep it compatible
> > with GridSearchCV and so forth. However, I have a version of the estimator
> > that you can initialize with "refit=False" to avoid the refitting, if that
> > helps:
> > http://rasbt.github.io/mlxtend/user_guide/classifier/EnsembleVoteClassifier/#example-5-using-pre-fitted-classifiers
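> >
> > A minimal sketch of that usage, with toy data standing in for your
> > pre-fitted classifiers:
> >
> > from mlxtend.classifier import EnsembleVoteClassifier
> > from sklearn.datasets import load_iris
> > from sklearn.linear_model import LogisticRegression
> > from sklearn.naive_bayes import GaussianNB
> >
> > X, y = load_iris(return_X_y=True)
> > # any already-fitted classifiers would do here
> > clf1 = LogisticRegression().fit(X, y)
> > clf2 = GaussianNB().fit(X, y)
> >
> > # refit=False keeps the already-fitted classifiers untouched
> > eclf = EnsembleVoteClassifier(clfs=[clf1, clf2], voting='soft', refit=False)
> > eclf.fit(X, y)  # no cloning/refitting of clf1 and clf2 here
> > print(eclf.predict(X[:5]))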
> >
> > Best,
> > Sebastian
> >
> >
> >
> > > On Jan 7, 2017, at 11:15 AM, Thomas Evangelidis <teva...@gmail.com> wrote:
> > >
> > > Greetings,
> > >
> > > I have trained many MLPRegressors using different random_state values
> > > and estimated the R^2 using cross-validation. Now I want to combine the
> > > top 10% of them in order to get more accurate predictions. Is there a
> > > meta-estimator that can take a few pre-trained MLPRegressors as input and
> > > give consensus predictions? Can the BaggingRegressor do this job using
> > > MLPRegressors as input?
> > >
> > > Thanks in advance for any hint.
> > > Thomas
> > >
> > >
> > > --
> > > ======================================================================
> > > Thomas Evangelidis
> > > Research Specialist
> > > CEITEC - Central European Institute of Technology
> > > Masaryk University
> > > Kamenice 5/A35/1S081,
> > > 62500 Brno, Czech Republic
> > >
> > > email: tev...@pharm.uoa.gr
> > >               teva...@gmail.com
> > >
> > > website: https://sites.google.com/site/thomasevangelidishomepage/
> > >
> > >
>



-- 

======================================================================

Thomas Evangelidis

Research Specialist
CEITEC - Central European Institute of Technology
Masaryk University
Kamenice 5/A35/1S081,
62500 Brno, Czech Republic

email: tev...@pharm.uoa.gr

          teva...@gmail.com


website: https://sites.google.com/site/thomasevangelidishomepage/
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn
