How do autotests work for MPI?
We simply configure the test script to invoke the same tests using mpirun.
This is a bigger issue. We have test suites that test MPI features
without checking MPI processor counts (e.g. the Magics/Metview code).
One workaround is to enable oversubscription so that the test can run
(inefficiently), though suites that use MPI should really detect the
available resources and disable such tests when there are not enough.
We will always have features in our codes that our build/test systems
aren't capable of testing: e.g. PMIx is designed to work scalably to
more than 100,000 cores. We can't test that :-)
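To illustrate the detect-and-fall-back idea, here is a minimal sketch in Python. It assumes Open MPI's `--oversubscribe` flag (MPICH spells this differently), and `choose_mpirun_args` is a hypothetical helper, not anything a test suite ships today:

```python
# Minimal sketch: pick mpirun arguments based on the cores actually
# available, falling back to oversubscription (correct but slow)
# rather than failing outright. Assumes Open MPI's --oversubscribe.
import os

def choose_mpirun_args(wanted_ranks):
    """Return mpirun arguments for a test needing wanted_ranks ranks."""
    available = os.cpu_count() or 1
    if available >= wanted_ranks:
        return ["-n", str(wanted_ranks)]
    # Not enough cores: oversubscribe instead of skipping the test.
    return ["--oversubscribe", "-n", str(wanted_ranks)]

# A test runner would then launch something like:
#   subprocess.run(["mpirun", *choose_mpirun_args(4), "./mpi_test"])
```

A stricter suite could instead skip the test entirely when there are fewer cores than ranks, which is what the paragraph above argues the suites should really do.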
Maybe the testing for many cores does not need to happen at upload time.
And maybe the testing of behaviour in parallel environments need not be
performed on all platforms, but just one. There could then be a
service Debian provides, analogous to reproducible builds etc., that
performs testing in parallel environments. The unknown limits of
available cores are something that users of
better-than-what-Debian-decides-to-afford infrastructure can address
themselves. The uploader, or the package/build daemons, would just invoke
the parallel run on a single node. Personally, I would like to see multiple
tests, say consecutively on 1, 2, 4, 8, 16, 32, 64, 128, 256 nodes,
stopping as soon as there is no more speedup. How many packages would
reach 256 nodes?
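The doubling-until-no-speedup sweep could be sketched like this; `run_benchmark` is a hypothetical callable that launches the test at a given process count (e.g. via mpirun) and returns its wall time:

```python
# Sketch of a scaling sweep: run a benchmark at doubling core counts
# and stop once the speedup stalls. run_benchmark(nprocs) is assumed
# to return the measured wall time in seconds.
def scaling_sweep(run_benchmark, max_procs=256, min_speedup=1.1):
    """Return {nprocs: runtime} for 1, 2, 4, ... up to max_procs,
    stopping as soon as doubling the cores no longer pays off."""
    results = {}
    nprocs, prev_time = 1, None
    while nprocs <= max_procs:
        t = run_benchmark(nprocs)
        results[nprocs] = t
        if prev_time is not None and prev_time / t < min_speedup:
            break  # no meaningful speedup from the last doubling
        prev_time = t
        nprocs *= 2
    return results
```

With a min_speedup threshold of 1.1, a package whose runtime follows Amdahl's law with a 50% serial fraction would stop at 16 processes, which hints at how few packages might actually reach 256 nodes.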
There are quite a few packages in our distro that are multithreaded, i.e.
that don't need MPI. Today, we don't test their performance in parallel
either. But we should. We don't have any systematic way to do so yet,
though. I could also imagine that such testing in parallel
environments would help glue our distro to upstream developers a bit more.
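For multithreaded (non-MPI) packages, one ad hoc way to probe parallel performance is to re-run the same benchmark with different thread counts via `OMP_NUM_THREADS`, the standard OpenMP environment knob; `./bench` below is just a placeholder binary, not a real tool:

```python
# Hypothetical sketch: time a multithreaded (e.g. OpenMP) program at
# several thread counts by setting OMP_NUM_THREADS before each run.
import os
import subprocess
import time

def time_with_threads(cmd, nthreads):
    """Run cmd with OMP_NUM_THREADS=nthreads; return wall time (s)."""
    env = dict(os.environ, OMP_NUM_THREADS=str(nthreads))
    start = time.perf_counter()
    subprocess.run(cmd, env=env, check=True)
    return time.perf_counter() - start

# A sweep would then look like:
#   for n in (1, 2, 4, 8):
#       print(n, time_with_threads(["./bench"], n))
```

The same stop-when-no-speedup rule as for the MPI case could sit on top of this loop.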
Maybe this is something to discuss together with the cloud team, who know
how to spawn an arbitrary number of nodes quickly? And maybe reach out
to phoronix.com and/or their openbenchmarking.org?