Dear All,
On Mon, 2021-11-01 at 14:36 +0700, Andreas Reichel wrote:
> Am I right that most of the Use Cases tests are executed serially
> (only)?
Looks like I was only half right:
// set heap size for the test JVM(s)
minHeapSize = "128m"
maxHeapSize = "1512m"
// Specifying the locale via system properties did not work, so we set them this way
jvmArgs << [
'-Djava.io.tmpdir=build',
'-DPOI.testdata.path=../test-data',
'-Djava.awt.headless=true',
'-Djava.locale.providers=JRE,CLDR',
'-Duser.language=en',
'-Duser.country=US',
'-Djavax.xml.stream.XMLInputFactory=com.sun.xml.internal.stream.XMLInputFactoryImpl',
"-Dversion.id=${project.version}",
'-ea',
'-Djunit.jupiter.execution.parallel.config.strategy=fixed',
'-Djunit.jupiter.execution.parallel.config.fixed.parallelism=2'
    // -Xjit:verbose={compileStart|compileEnd},vlog=build/jit.log${no.jit.sherlock}   ... if ${isIBMVM}
]
Two questions, please:
1) Why do we not allocate maxHeapSize dynamically based on the free
memory of the OS, e.g. use 50% of that memory?
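
What I had in mind is roughly the following (only a sketch; the 50%
factor, the 2 GiB fallback and the use of
com.sun.management.OperatingSystemMXBean are just examples, not a
concrete proposal):

def osBean = java.lang.management.ManagementFactory.getOperatingSystemMXBean()
long freeBytes = (osBean instanceof com.sun.management.OperatingSystemMXBean)
        ? osBean.freePhysicalMemorySize
        : 2L * 1024 * 1024 * 1024   // fall back to 2 GiB if the com.sun bean is unavailable

tasks.withType(Test).configureEach {
    minHeapSize = "128m"
    // give each test JVM roughly half of the currently free physical memory
    maxHeapSize = "${freeBytes.intdiv(2L * 1024 * 1024)}m"
}
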
2) Why do we not use all CPU cores, but just 2? What is the advantage
of the following lines:
'-Djunit.jupiter.execution.parallel.config.strategy=fixed',
'-Djunit.jupiter.execution.parallel.config.fixed.parallelism=2'
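
What I had in mind is something like this (again only a sketch;
JUnit's built-in "dynamic" strategy with
junit.jupiter.execution.parallel.config.dynamic.factor would achieve
much the same from inside the test JVM):

tasks.withType(Test).configureEach {
    // derive the parallelism from the machine instead of hard-coding 2
    def cores = Runtime.runtime.availableProcessors()
    jvmArgs '-Djunit.jupiter.execution.parallel.config.strategy=fixed',
            "-Djunit.jupiter.execution.parallel.config.fixed.parallelism=${cores}"
}
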
Best regards
Andreas