Hi,

You can split your database into a learning part and a validation part (or test 
part), then use the first part to build the metamodel and the second one to 
assess its quality. I have modified your script to illustrate the point. Feel 
free to contact me directly if you need further assistance.

Cheers

Régis

import openturns as ot
import openturns.viewer as otv

# ==============
# Read the design of experiments samples (5 columns for x1,x2,x3,x4,x5)
#  and response variables (1 column)
# ==============
inputSample = ot.Sample.ImportFromCSVFile('InputSamples.csv',',')
y1 = ot.Sample.ImportFromCSVFile('OutputSamples.csv',',')

# ==============
# Split your database in two parts: learning/test
# ==============
ratio = 0.1  # 10% of the database to test the accuracy
size = inputSample.getSize()
# split() keeps the first points in the original sample (learning part)
# and returns the remaining points (test part)
testX = inputSample.split(int((1.0 - ratio) * size))
testY = y1.split(int((1.0 - ratio) * size))

# ==============
# Construct a kriging metamodel on the learning part
#  (after split, inputSample and y1 hold only the learning points)
# ==============
dimension = inputSample.getDimension()
basis = ot.ConstantBasisFactory(dimension).build()
covarianceModel = ot.SquaredExponential([1.]*dimension, [1.0])
algo = ot.KrigingAlgorithm(inputSample, y1, covarianceModel, basis)
algo.run()
result = algo.getResult()
krigingMetamodel = result.getMetaModel()
print(result.getTrendCoefficients())
print(result.getCovarianceModel())

# ==============
# Perform assessment of the quality of the metamodel constructed
# ==============
validation = ot.MetaModelValidation(testX, testY, krigingMetamodel)
print("predictivity factor=", validation.computePredictivityFactor())
graph = validation.drawValidation()
otv.View(graph).show()
graph = validation.getResidualDistribution().drawPDF()
graph.setXTitle("Residuals")
otv.View(graph).show()
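
If you would rather have a cross-validation estimate than a single hold-out split, you can repeat the same fit/validate steps over several folds. Here is a minimal K-fold sketch along those lines (the helper kfold_q2 and the fold handling are only illustrative, and computePredictivityFactor() may return a Point rather than a float depending on the OpenTURNS version):

import openturns as ot

def kfold_q2(X, Y, k=5):
    # For each fold, refit the kriging model on the remaining points and
    # compute the Q2 predictivity factor on the held-out fold.
    size = X.getSize()
    dimension = X.getDimension()
    fold_size = size // k
    q2_values = []
    for i in range(k):
        test_idx = set(range(i * fold_size, (i + 1) * fold_size))
        X_train = ot.Sample([list(X[j]) for j in range(size) if j not in test_idx])
        Y_train = ot.Sample([list(Y[j]) for j in range(size) if j not in test_idx])
        X_test = ot.Sample([list(X[j]) for j in sorted(test_idx)])
        Y_test = ot.Sample([list(Y[j]) for j in sorted(test_idx)])
        basis = ot.ConstantBasisFactory(dimension).build()
        covarianceModel = ot.SquaredExponential([1.0] * dimension, [1.0])
        algo = ot.KrigingAlgorithm(X_train, Y_train, covarianceModel, basis)
        algo.run()
        metamodel = algo.getResult().getMetaModel()
        validation = ot.MetaModelValidation(X_test, Y_test, metamodel)
        q2_values.append(validation.computePredictivityFactor())
    return q2_values

# e.g. kfold_q2(inputSample, y1, 5), called on the full database before any split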

On Thursday, March 26, 2020 at 21:17:21 UTC+1, Aytekin Gel <[email protected]> wrote:

Hello,

I just started exploring OpenTURNS, which seems to be quite a powerful UQ
toolbox with advanced features. Thanks for offering it to users worldwide.
I wanted to seek help with a few tasks, despite my efforts to find similar
examples on my own.

I have quite expensive computational fluid dynamics simulations (multiphase
flow), which require building a surrogate model (metamodel) in order to
perform further uncertainty quantification analyses.

I built the design of experiments by generating an optimal Latin Hypercube
sampling for 5 input parameters (x1, ..., x5) and one scalar response variable
(quantity of interest), y1, using external tools.

I am trying to perform several tasks, as listed below, with OpenTURNS:
(1) construct a kriging metamodel and assess its adequacy, preferably with a
cross-validation error assessment or some other statistical measure;
(2) perform sensitivity analysis using ANCOVA, since my input parameters are
correlated such that x1 = 1 - (x2+x3+x4+x5) and each input has a certain
range of lower and upper bounds.

Reviewing the available examples, I think I was able to generate the kriging
metamodel, as shown at the end of my email. However, I couldn't figure out how
to perform cross-validation or otherwise assess the adequacy of the metamodel.
All of the examples I was able to find appear to use an explicit model defined
algebraically, such as in the "Kriging the cantilever beam" example:
model = ot.SymbolicFunction(["E", "F", "L", "I"], ["F*L^3/(3*E*I)"])

In my case, I don't have an explicit model definition, so could you please
show me an example of how to assess the adequacy of the constructed metamodel?
I found the following documentation, but again it relies on an explicit
definition of the model and the metamodel:
http://openturns.github.io/openturns/master/user_manual/response_surface/_generated/openturns.MetaModelValidation.html#openturns-metamodelvalidation


The example I have generated:
---------------------------------------------------------------------
import openturns as ot

# ==============
# Read the design of experiments samples (5 columns for x1,x2,x3,x4,x5) 
#  and response variables (1 column)
# ==============
inputSample = ot.Sample.ImportFromCSVFile('InputSamples.csv',',')
y1 = ot.Sample.ImportFromCSVFile('OutputSamples.csv',',')

# ============== 
# Construct a kriging metamodel
# ==============
dimension = inputSample.getDimension()
basis = ot.ConstantBasisFactory(dimension).build()
covarianceModel = ot.SquaredExponential([1.]*dimension, [1.0])
algo = ot.KrigingAlgorithm(inputSample, y1, covarianceModel, basis)
algo.run()
result = algo.getResult()
krigingMetamodel = result.getMetaModel()
print(result.getTrendCoefficients())
print(result.getCovarianceModel())

# ==============
# Perform assessment of the quality of the metamodel constructed
# ==============
x1_dist = ot.Uniform(0.2088, 0.6975)
x1_dist.setDescription(["x1"])
x2_dist = ot.Uniform(0.0015, 0.499)
x2_dist.setDescription(["x2"])
x3_dist = ot.Uniform(0.00164, 0.4985)
x3_dist.setDescription(["x3"])
x4_dist = ot.Uniform(2.3921e-5, 0.0998)
x4_dist.setDescription(["x4"])
x5_dist = ot.Uniform(0.000267, 0.0999)
x5_dist.setDescription(["x5"])
myDistribution = ot.ComposedDistribution([x1_dist, x2_dist, x3_dist, x4_dist, x5_dist])
sampleSize_test = 100

<.... assess the adequacy of the metamodel constructed ...>
---------------------------------------------------------------------

Could you please let me know how I can perform a quality check of the metamodel
built in the above example? Then I will try to use the ANCOVA example to perform
sensitivity analysis with the metamodel constructed above.
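
A rough sketch of how that ANCOVA step could be set up, continuing from the script above: ANCOVA takes a FunctionalChaosResult rather than the kriging result directly, so a chaos approximation of krigingMetamodel is built first; the sample size N and the hand-made correlated sample below are only placeholders, not a definitive setup.

import openturns as ot

# Independent marginals with the same bounds as above
marginals = [ot.Uniform(0.2088, 0.6975), ot.Uniform(0.0015, 0.499),
             ot.Uniform(0.00164, 0.4985), ot.Uniform(2.3921e-5, 0.0998),
             ot.Uniform(0.000267, 0.0999)]
independentDistribution = ot.ComposedDistribution(marginals)

# 1) Functional chaos approximation of the kriging metamodel
#    (krigingMetamodel comes from the script above)
N = 500
X = independentDistribution.getSample(N)
Y = krigingMetamodel(X)
chaosAlgo = ot.FunctionalChaosAlgorithm(X, Y)
chaosAlgo.run()
chaosResult = chaosAlgo.getResult()

# 2) Sample of the correlated inputs; the closure x1 = 1 - (x2+x3+x4+x5) is
#    imposed by hand here only as a placeholder for the real dependence structure
correlatedSample = independentDistribution.getSample(N)
for i in range(N):
    correlatedSample[i, 0] = 1.0 - (correlatedSample[i, 1] + correlatedSample[i, 2]
                                    + correlatedSample[i, 3] + correlatedSample[i, 4])

# 3) ANCOVA indices
ancova = ot.ANCOVA(chaosResult, correlatedSample)
print("ANCOVA indices:", ancova.getIndices())
print("Uncorrelated parts:", ancova.getUncorrelatedIndices())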

Thank you very much for your help and time, and my apologies for the long email.

Aytekin Gel, Ph.D.

-- 
Aytekin Gel, Ph.D.

Manager,
ALPEMI Consulting, LLC
_______________________________________________
OpenTURNS users mailing list
[email protected]
http://openturns.org/mailman/listinfo/users