On Tue, 20 Jan 2026 13:25:32 +0000 Mark Brown <[email protected]> wrote:
> At present the mm selftests are integrated into the kselftest harness by
> having it run run_vmtests.sh and letting it pick its default set of
> tests to invoke, rather than by telling the kselftest framework about
> each test program individually as is more standard. This has some
> unfortunate interactions with the kselftest harness:
>
> - If any of the tests hangs, the harness will kill the entire mm
>   selftests run rather than just the individual test, meaning no
>   further tests get run.
> - The timeout applied by the harness is applied to the whole run rather
>   than to an individual test, which frequently leads to the suite not
>   being completed in production testing.
>
> Deploy a crude but effective mitigation for these issues by telling the
> kselftest framework to run each of the test categories that
> run_vmtests.sh has separately. Since kselftest really wants to run test
> programs, this is done by providing a trivial wrapper script for each
> category that invokes run_vmtests.sh; this is not a thing of great
> elegance, but it is clear and simple. Since run_vmtests.sh is doing
> runtime support detection, scenario enumeration and setup for many of
> the tests, we can't consistently tell the framework about the
> individual test programs.
>
> This has the side effect of reordering the tests; hopefully the testing
> is not overly sensitive to this.

Thanks, let's see what people think.

What happens with tests which are newly added but which don't integrate
into this new framework?  eg,
https://lkml.kernel.org/r/[email protected]
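
For illustration only, the per-category wrappers described above could be
generated along these lines.  This is a sketch, not the actual patch: the
category names here are examples, and it assumes run_vmtests.sh's existing
-t option for selecting test categories.

```shell
#!/bin/sh
# Hypothetical sketch: emit one trivial wrapper per mm test category,
# each restricting run_vmtests.sh to that category via its -t option,
# so kselftest sees one "test program" per category.
for category in mmap gup_test userfaultfd; do
	cat > "run_vmtests_${category}.sh" <<EOF
#!/bin/sh
exec ./run_vmtests.sh -t "${category}"
EOF
	chmod +x "run_vmtests_${category}.sh"
done
ls run_vmtests_*.sh
```

Each generated wrapper is a two-line script, which keeps the kselftest
integration clear while leaving all the runtime detection and setup logic
inside run_vmtests.sh itself.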

