Hi Pamphile,
I don't know what the differences between Saltelli 2002 and Saltelli 2010 are,
as it is not my main field of interest, but OpenTURNS offers alternative
sampling algorithms to compute Sobol' indices (first and second order). You are
not forced to use Monte Carlo sampling: you can use any weighted experiment
with uniform weights, including LHS experiments and low-discrepancy experiments.
This is demonstrated in the following script:
===== BEGIN =====
from openturns import *

mode = "QMC"
size = 10000

# Toy model
model = SymbolicFunction(["x0", "x1", "x2", "x3"], ["x0+x1+x2+x3+x1*x2"])

# Here you can use any multivariate distribution with independent copula
distribution = Normal(4)

if mode == "QMC":
    sampling_doe = LowDiscrepancyExperiment(SobolSequence(4), distribution, size, False)
    sampling_doe.setRandomize(True)  # If set to False, the results are wrong
    sampling_doe.setRestart(False)  # or True
elif mode == "LHS":
    sampling_doe = LHSExperiment(distribution, size)
    sampling_doe.setAlwaysShuffle(True)  # If set to False, the results are wrong
    sampling_doe.setRandomShift(False)  # or True
else:
    sampling_doe = MonteCarloExperiment(distribution, size)

sobol_doe = SobolIndicesExperiment(sampling_doe, True)
inputDesign = sobol_doe.generate()
outputDesign = model(inputDesign)

# Example of use with the different Sobol algorithms
Log.Show(Log.NONE)

algo = JansenSensitivityAlgorithm(inputDesign, outputDesign, size)
print("Jansen")
print("First order=", algo.getFirstOrderIndices())
print("Second order=", algo.getSecondOrderIndices())

algo = MartinezSensitivityAlgorithm(inputDesign, outputDesign, size)
print("Martinez")
print("First order=", algo.getFirstOrderIndices())
print("Second order=", algo.getSecondOrderIndices())

algo = MauntzKucherenkoSensitivityAlgorithm(inputDesign, outputDesign, size)
print("MauntzKucherenko")
print("First order=", algo.getFirstOrderIndices())
print("Second order=", algo.getSecondOrderIndices())

algo = SaltelliSensitivityAlgorithm(inputDesign, outputDesign, size)
print("Saltelli")
print("First order=", algo.getFirstOrderIndices())
print("Second order=", algo.getSecondOrderIndices())
===== END =====
Note that you have to tweak the LowDiscrepancyExperiment and LHSExperiment
classes a little to get correct results. The fact that the permutation has to
be regenerated each time an LHS experiment is sampled is quite natural, but the
fact that the low-discrepancy sequence has to be randomly shifted is less clear
to me... I have not played with the other low-discrepancy sequences (Halton,
Faure, etc.) to see whether this behavior is general or not.
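As a sanity check (this is my own hand derivation, not something computed by
the script): with four independent standard normal inputs, each of the four
linear terms of the toy model contributes a variance of 1 and the x1*x2
interaction also contributes 1, so the total variance is 5 and all first-order
indices should be close to 0.2, as should the second-order index of the pair
(x1, x2). A minimal comparison, assuming the script above has just been run:
===== BEGIN =====
# Hand-derived reference values for the toy model with independent
# standard normal inputs: Var(Y) = 5, each main effect contributes 1/5.
reference_first = [0.2, 0.2, 0.2, 0.2]
estimated_first = algo.getFirstOrderIndices()  # algo from the script above
for i in range(4):
    print("S%d: estimated = %.4f, reference = %.4f"
          % (i, estimated_first[i], reference_first[i]))
===== END =====
With setRandomize(False) (or setAlwaysShuffle(False) for LHS), the estimates
drift away from these values, which is what I meant by "the results are wrong".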
Best regards,
Régis LEBRUN
On Monday, June 18, 2018 at 17:54:52 UTC+2, roy <[email protected]> wrote:
Hi everyone!
From the documentation, Sobol’ indices with Saltelli's formulation are computed
using his 2002 formula. In Saltelli 2010, there is another formulation which is
demonstrated to be better.
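If I read the papers correctly (please correct me if I am wrong), the 2002
estimator of the first-order partial variance is

V_i ≈ (1/N) * sum_j f(A)_j * f(B_A^(i))_j - f_0^2

whereas Saltelli 2010 recommends

V_i ≈ (1/N) * sum_j f(B)_j * (f(A_B^(i))_j - f(A)_j)

which is reported to be more accurate, especially when the indices are small.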
Another point: it is possible to pass an experiment to SobolIndicesExperiment(),
but it is not possible to use a Sobol’ sequence there. It would be great to add
this possibility, as it is known to converge faster than MonteCarloExperiment().
To get A and B with a Sobol’ sequence, we must split the matrix vertically:
import numpy as np
import openturns as ot

# dim is the input dimension and size the sample size
input_design = np.array(ot.SobolSequence(dim * 2).generate(size))
A = input_design[:, :dim]
B = input_design[:, dim:]
Another possibility is to use a scrambled Sobol’ sequence and split
horizontally. But from what I know, this sampler is not available in OT?
From the documentation, it is not really clear that the matrices are stacked as
[A, B, A_B, B_A]. I had to look at the code to really understand that. Maybe
this could be improved?
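For instance, my current reading of the layout (from the code, so please
correct me if this is wrong), with n the base sample size and d the input
dimension:
===== BEGIN =====
import numpy as np

# inputDesign comes from SobolIndicesExperiment(...).generate();
# n is the base sample size and d the input dimension (my notation).
design = np.array(inputDesign)
A = design[0:n]
B = design[n:2 * n]
# d blocks A_B (A with its i-th column taken from B) ...
A_B = [design[(2 + i) * n:(3 + i) * n] for i in range(d)]
# ... followed by d blocks B_A when second order indices are requested
B_A = [design[(2 + d + i) * n:(3 + d + i) * n] for i in range(d)]
===== END =====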
Thanks for the support :)
Pamphile ROY
PhD candidate in Uncertainty Quantification
CERFACS —Toulouse (31)—France
+33 (0) 5 61 19 31 57
+33 (0) 7 86 43 24 22
_______________________________________________
OpenTURNS users mailing list
[email protected]
http://openturns.org/mailman/listinfo/users