Dear Alessio,

If it helps, the implementation follows quite strictly what is described in GPML: http://www.gaussianprocess.org/gpml/chapters/
https://github.com/scikit-learn/scikit-learn/blob/412996f09b6756752dfd3736c306d46fca8f1aa1/sklearn/gaussian_process/gpr.py#L23

Hyperparameter optimization is done by gradient-based maximization of the log marginal likelihood (L-BFGS-B by default).

Michael

On Tue, Nov 8, 2016 at 4:10 PM, Quaglino Alessio <[email protected]> wrote:
> Hello,
>
> I am using scikit-learn 0.18 for GP regression. I really like it and
> everything works great, but I have doubts about the confidence intervals
> computed by predict(X, return_std=True):
>
> - Are they true confidence intervals (i.e. of the mean / latent function),
> or are they in fact prediction intervals? I tried computing the prediction
> intervals using sample_y(X) and I get the same answer as that returned by
> predict(X, return_std=True).
>
> - My understanding is therefore that scikit-learn is not fully Bayesian,
> i.e. it does not compute probability distributions for the hyperparameters,
> but rather uses the values that maximize the likelihood?
>
> - If I want the confidence interval, is my best option to use an external
> MCMC sampler such as PyMC?
>
> Thank you in advance!
>
> Regards,
> -------------------------------------------------
> Dr. Alessio Quaglino
> Postdoctoral Researcher
> Institute of Computational Science
> Università della Svizzera Italiana
>
> _______________________________________________
> scikit-learn mailing list
> [email protected]
> https://mail.python.org/mailman/listinfo/scikit-learn
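[Editor's note] The behaviour discussed in the thread can be reproduced in a few lines. This is a minimal sketch (the toy data, kernel choice, and alpha are illustrative assumptions, not from the thread): sample_y draws from the same Gaussian posterior whose standard deviation predict(X, return_std=True) reports, so the empirical std of many draws matches the analytic one, and the fitted hyperparameters come from maximizing the log marginal likelihood rather than from a posterior over hyperparameters.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D regression problem (illustrative, not from the thread).
rng = np.random.RandomState(0)
X = rng.uniform(0, 5, size=(20, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(20)

# Hyperparameters are set by maximizing the log marginal likelihood
# (gradient-based optimizer), not by sampling a hyperparameter posterior.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01,
                               random_state=0).fit(X, y)
print("log marginal likelihood:", gpr.log_marginal_likelihood_value_)

X_test = np.linspace(0, 5, 10).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)

# sample_y draws from the Gaussian posterior N(mean, cov), so the
# empirical std over many draws should agree with the analytic std.
samples = gpr.sample_y(X_test, n_samples=5000, random_state=0)
emp_std = samples.std(axis=1)
print("max |empirical std - analytic std|:", np.max(np.abs(emp_std - std)))
```

With enough draws the discrepancy is small, which is exactly the observation in the original question: sample_y does not add anything beyond the posterior that predict already summarizes.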
