Hi Henning,

Unfortunately, you cannot do it in a simple way yet. There is an undocumented 
feature that would have been a good candidate (the PosteriorDistribution class), 
but it needs additional work to fit your needs.

But nothing is lost! We imagined that people would need to define their own 
distributions in Python, and the attached script does exactly what you want. 
Things are not perfect yet: you cannot call the drawPDF() method without 
generating an error (I am still digging to find out why), but by defining the 
following three methods:
+ computePDF()
+ computeCDF()
+ getRange()
you can define your custom OpenTURNS distribution, use it to build your 
favorite probabilistic model, perform Monte Carlo simulation and so on. The 
details are in the attached file. A good candidate for a Jupyter notebook ;-)
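To give you the flavour before you open the attachment, here is a minimal,
self-contained skeleton of the same mechanism: a flat density on [0, 1] that
only implements these three methods (the class name MyUniform is just a
placeholder of mine, not something from the library):

from openturns import *

# Minimal skeleton (illustrative only): a flat density on [0, 1]
class MyUniform(PythonDistribution):
    def __init__(self):
        # Set the dimension of the distribution
        super(MyUniform, self).__init__(1)

    def getRange(self):
        # Support of the distribution, used by the numerical algorithms
        return Interval(0.0, 1.0)

    def computePDF(self, X):
        return 1.0 if 0.0 <= X[0] <= 1.0 else 0.0

    def computeCDF(self, X):
        return min(max(X[0], 0.0), 1.0)

# Wrap it to get the full OpenTURNS distribution interface
distribution = Distribution(MyUniform())
print(distribution.getSample(5))
print(distribution.computeQuantile(0.95))

Once wrapped in Distribution(), all the generic services (sampling, quantiles,
moments and so on) become available, exactly as in the attached script.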

Cheers

Régis



>________________________________
>From: Henning Brüske <[email protected]>
>To: "[email protected]" <[email protected]>
>Sent: Wednesday, 23 March 2016, 11:17
>Subject: [ot-users] Bayesian updating
>
>Dear OT users,
> 
>Yesterday I got quick and successful help with creating a mixture 
>distribution. Thanks.
> 
>This time I try to implement a rather simple Bayesian updating of continuous 
>distributions with OT.
>Posterior = (Prior.pdf * Likelihood.pdf) / (Integral(Prior.pdf * 
>Likelihood.pdf))
> 
>But my many attempts failed. I hope someone has an answer. Here follows a scipy 
>example of what I want to achieve:
> 
>#==============================================================================
># beginning of code
>#==============================================================================
>import numpy as np
>from scipy.stats import norm
>from matplotlib import pyplot as plt
> 
> 
>x = np.linspace(-20, 20, 10000)
>Prior = norm(6., .8)
>Likelihood = norm(4., 1.2)
> 
>Posterior_pdf = Prior.pdf(x) * Likelihood.pdf(x) / np.sum(Prior.pdf(x) * 
>Likelihood.pdf(x) * (abs(x[0]) + abs(x[-1]))/len(x))
> 
>plt.figure()
>plt.plot(x, Prior.pdf(x), label="Prior")
>plt.plot(x, Likelihood.pdf(x), label="Likelihood")
>plt.plot(x, Posterior_pdf, label="Posterior")
>plt.xlim(0,8)
>plt.legend(loc = 'best')
>plt.show()
>#==============================================================================
># end of code
>#==============================================================================
> 
>Best regards and thanks,
> 
>Henning
> 
>_______________________________________________
>OpenTURNS users mailing list
>[email protected]
>http://openturns.org/mailman/listinfo/users
>
>
>
#==============================================================================
# attached script
#==============================================================================
from openturns import *
import numpy as np
from matplotlib import pyplot as plt

# Create a custom OpenTURNS distribution from Python
class MyPosteriorDistribution(PythonDistribution):
    def __init__(self, prior, likelihood):
        # Here you must set the dimension of your distribution
        super(MyPosteriorDistribution, self).__init__(1)
        self.prior_ = prior
        self.likelihood_ = likelihood
        self.normalization_ = self.computeNormalization()

    # The key point: the normalization
    def computeNormalization(self):
        def kernel(x):
            return [self.prior_.computePDF(x) * self.likelihood_.computePDF(x)]
        
        return GaussKronrod().integrate(PythonFunction(1, 1, kernel), self.getRange())[0]

    # Here we must give the tightest interval possible to speed up the numerical integration
    def getRange(self):
        return self.prior_.getRange().intersect(self.likelihood_.getRange())

    # Mandatory as a general distribution is defined by its CDF in OpenTURNS
    def computeCDF(self, X):
        def kernel(x):
            return [self.computePDF(x)]
        return GaussKronrod().integrate(PythonFunction(1, 1, kernel), Interval(self.getRange().getLowerBound(), X))[0]

    # The only thing we can easily compute from the definition of the distribution
    def computePDF(self, X):
        # print "X=", X
        prior_pdf = self.prior_.computePDF(X)
        likelihood_pdf = self.likelihood_.computePDF(X)
        # print "prior_pdf=", prior_pdf, "likelihood_pdf=", likelihood_pdf
        return prior_pdf * likelihood_pdf / self.normalization_

    # Some pretty-printing for debug purpose
    def __str__(self):
        return "MyPosteriorDistribution(prior=" + str(self.prior_) + ", likelihood=" + str(self.likelihood_) + ", normalization=" + str(self.normalization_)

####################################
# HERE WE USE OUR NEW DISTRIBUTION #
####################################

Prior = Normal(6.0, 0.8)
Likelihood = Normal(4.0, 1.2)

Posterior = Distribution(MyPosteriorDistribution(Prior, Likelihood))
print(Posterior)

# The services we defined
print()
print("Basic services")
print("pdf=", Posterior.computePDF([0.5]))
print("cdf=", Posterior.computeCDF([0.5]))

# Inherited useful services
print()
print("Inherited services")
print("quantile=", Posterior.computeQuantile(0.95))
print("sample=", Posterior.getSample(5))
print("mean=", Posterior.getMean())
print("covariance=", Posterior.getCovariance())
print("skewness=", Posterior.getSkewness())
print("kurtosis=", Posterior.getKurtosis())

# More sophisticated usages: orthonormal polynomials
print()
print("Orthonormal polynomials")
factory = StandardDistributionPolynomialFactory(Posterior)
for i in range(5):
    print("P_" + str(i) + "(X)=", factory.build(i))

# Here your graphical post-processing
# OpenTURNS distributions are drawn using 2D arrays
x = [[pt] for pt in np.linspace(-20, 20, 10000)]
# Here a bug prevents us from calling computePDF() directly on a sample...
pdf_posterior = [Posterior.computePDF(pt) for pt in x]

plt.figure()
plt.plot(x, Prior.computePDF(x), label="Prior")
plt.plot(x, Likelihood.computePDF(x), label="Likelihood")
plt.plot(x, pdf_posterior, label="Posterior")
plt.xlim(0,8)
plt.legend(loc = 'best')
plt.show()
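
# A quick sanity check you can append (not part of the attached script): with a
# Gaussian prior and a Gaussian likelihood, the normalized product is again
# Gaussian, with precision equal to the sum of the precisions:
#   1/var = 1/0.8^2 + 1/1.2^2
#   mu    = var * (6/0.8^2 + 4/1.2^2)
# so the custom distribution can be compared with the closed form.
var_post = 1.0 / (1.0 / 0.8 ** 2 + 1.0 / 1.2 ** 2)
mu_post = var_post * (6.0 / 0.8 ** 2 + 4.0 / 1.2 ** 2)
Reference = Normal(mu_post, var_post ** 0.5)
print("closed-form mean=", mu_post, "posterior mean=", Posterior.getMean()[0])
print("closed-form cdf(5)=", Reference.computeCDF([5.0]), "posterior cdf(5)=", Posterior.computeCDF([5.0]))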