Hi, if you want the full posterior distribution over the hyperparameter values, there is a good example of how to do that with George + emcee, another GP package for Python:
http://dan.iel.fm/george/current/user/hyper/

On Tue, 8 Nov 2016 at 16:10 Quaglino Alessio <[email protected]> wrote:

> Hello,
>
> I am using scikit-learn 0.18 for doing GP regressions. I really like it
> and all works great, but I am having doubts concerning the confidence
> intervals computed by predict(X, return_std=True):
>
> - Are they true confidence intervals (i.e. of the mean / latent function),
> or are they in fact prediction intervals? I tried computing the prediction
> intervals using sample_y(X) and I get the same answer as that returned by
> predict(X, return_std=True).
>
> - My understanding is therefore that scikit-learn is not fully Bayesian,
> i.e. it does not compute probability distributions for the hyperparameters,
> but rather the values that maximize the likelihood?
>
> - If I want the confidence interval, is my best option to use an external
> MCMC sampler such as PyMC?
>
> Thank you in advance!
>
> Regards,
> -------------------------------------------------
> Dr. Alessio Quaglino
> Postdoctoral Researcher
> Institute of Computational Science
> Università della Svizzera Italiana
>
> _______________________________________________
> scikit-learn mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/scikit-learn
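The George example linked above boils down to MCMC-sampling the GP log-marginal likelihood over the kernel hyperparameters instead of maximizing it. Here is a minimal self-contained sketch of that idea in pure NumPy, with a random-walk Metropolis sampler standing in for emcee; the RBF kernel, fixed noise level, toy data, and step sizes are all illustrative assumptions, not part of either library's API:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale, signal_var):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def log_marginal_likelihood(theta, X, y, noise_var=1e-2):
    # theta = (log length-scale, log signal variance); sampling in log space
    # keeps the proposals in the positive domain.
    length_scale, signal_var = np.exp(theta)
    K = rbf_kernel(X, X, length_scale, signal_var) + noise_var * np.eye(len(X))
    try:
        L = np.linalg.cholesky(K)
    except np.linalg.LinAlgError:
        return -np.inf  # reject numerically non-PD proposals
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2 * np.pi))

def metropolis(log_prob, theta0, n_steps=2000, step=0.1, seed=0):
    # Random-walk Metropolis: accept with probability min(1, exp(lp_new - lp_old)).
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_prob(theta)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_prob(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Toy data: noisy sine observations.
rng = np.random.default_rng(42)
X = np.linspace(0, 5, 30)
y = np.sin(X) + 0.1 * rng.standard_normal(30)

samples = metropolis(lambda t: log_marginal_likelihood(t, X, y), theta0=[0.0, 0.0])
post = np.exp(samples[500:])  # discard burn-in, map back to natural scale
print("posterior mean length-scale:", post[:, 0].mean())
print("posterior mean signal variance:", post[:, 1].mean())
```

With emcee you would pass the same log-probability function to an EnsembleSampler; the point is that the spread of the resulting hyperparameter samples is exactly the uncertainty that scikit-learn's maximum-likelihood fit discards.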
