Those who feel they need some background for the discussion can keep
reading.
This is more like a status mail. Mike and Stuart can stop reading here :-)

The meeting has two main goals:
-decide what to do with the test environment.
-decide where to place the helper tests (test/helpers vs helper/tests).

about the test environment:
********************************
There are two main proposals; they essentially differ in where the tests are
run from:
Proposal 1 assumes test/validation is still the main test entry point (as
today). Proposal 2 makes <platform>/test the main entry point.


1) validation: proposal 1:
----------------------------------
validation would contain 1 directory per module + common.
Each <module> directory would contain all the C files defining all the
module tests, the main files (one per executable), and a Makefile.am.
This Makefile.am would build (but not run):
-lib<module>.a: a lib containing all tests for this module.
-<module>_*: the executable(s) to run the tests (usually one).
In most cases lib<module>.a would be built from a single C file, and the
single executable, called <module>, would be built by compiling
<module>_main.c and linking it with lib<module>.a and the common code.
But some modules may need many C files to define their tests (crypto, init,...
probably many more in the future)... and many executables (and mains).

For init the validation/odp_init directory would contain:
-init_ok.c
-init_abort.c
-init_log.c
-init_abort_main.c
-init_log_main.c
-init_ok_main.c
-a Makefile.am, which would build (but not run) the following files:
-libinit.a (containing all tests from init_ok.c, init_log.c, init_abort.c)
-init_abort, init_ok, and init_log: the 3 executables to run the tests,
each built from its respective main, libinit.a, and the common code.
Each executable (main) would just be a call to "run_main" (or whatever you
want to call it), with a suite as parameter. "run_main" would be part of a
lib (built in validation/common) which would define "run_main" and the 4
weak functions (tests_global_init/term and platform_init/term). This lib
would be given last in the <executable>_LDADD Makefile.am variable, making
sure (hopefully) that the weak symbols get overridden by strong ones when
needed.
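To make this concrete, here is a rough sketch of what such a main could look
like (the run_main() prototype and the suite symbol are just placeholders for
illustration, not a settled API):

/* init_ok_main.c -- hypothetical sketch, names are not settled */
#include <CUnit/TestDB.h>

/* suite description defined in init_ok.c (part of libinit.a) */
extern CU_SuiteInfo init_ok_suites[];

/* provided by the common lib in validation/common */
int run_main(CU_SuiteInfo suites[]);

int main(void)
{
        /* run_main() registers the suites into CUnit, calls the weak
         * (or overridden) init/term hooks, and runs everything */
        return run_main(init_ok_suites);
}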

validation/common would contain the lib providing "run_main", plus the weak
versions of tests_global_init/term (doing odp_init/term by default) and
platform_init/term (doing nothing by default).
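A minimal sketch of these defaults could be (again just an illustration; the
function names follow the mail above and the exact odp_init_global()
arguments are an assumption):

/* defaults in the common lib (validation/common) -- hypothetical sketch */
#include <odp.h>

/* weak defaults: a module or platform can provide strong versions */
int __attribute__((weak)) tests_global_init(void)
{
        /* plain ODP init by default */
        return odp_init_global(NULL, NULL);
}

int __attribute__((weak)) tests_global_term(void)
{
        return odp_term_global();
}

int __attribute__((weak)) platform_init(void)
{
        return 0; /* nothing by default */
}

int __attribute__((weak)) platform_term(void)
{
        return 0; /* nothing by default */
}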

The common validation/Makefile.am would list all the executables to be run
as part of the tests.

What I like in this approach is that:
- everything belonging to one module is located under the module directory.
- Modules can have many test executables (like init), clearly gathered in
the module directory.
- A superlib containing all tests can easily be built by linking all the
lib<module>.a files together.
- The Makefiles for each module are separate: less merge burden if two
people are adding tests in 2 different modules (only the addition of a new
executable or a new API module would affect validation/Makefile.am).
- The structure is there to start with. No-one will be tempted to add a
test directly under validation if the <module> directory is already there.
- Platform tests need to be added only if platform specific stuff is needed.
- Tests are listed and run from validation, so we still get the grand total
we are used to.

The drawbacks:
- Platforms will only have 2 hooks (before and after all the tests of a
module), nothing more. It seems you already agree on that :-)... but
alternatives keep popping up... In the future a function to disable some
tests could be added to give the platform a chance to skip a test.
- The grand total we get when running the tests will not match the number
of modules, but the number of test executables instead (as today).
- This "hook" approach diverges a bit from a homogeneous CUNIT approach.
If we do build this "superlib" containing all tests, its usage will be
hard unless we can integrate some of these hooks into CUNIT (future
improvement?).

2) validation: proposal 2:
----------------------------------
<platform>/test would contain one directory (called <module>) per module.
This directory would contain at least a Makefile.am, and possibly other
C files. The Makefile would build the executables, also located in
<platform>/test/<module>, by picking the platform-agnostic code from
test/validation/<module>, or from a "superlib" (containing all
platform-agnostic tests) in test/validation, and adding platform-dependent
code.
There are two variants of this proposal: either test/validation contains
only the C functions defining the platform-agnostic tests (as a lib) and
the <platform>/test/<module> directory must always contain a main, possibly
just registering the platform-agnostic tests into CUNIT; or test/validation
can register its own functions into CUNIT itself.
A first implementation of this approach is still visible at
<https://git.linaro.org/people/christophe.milard/odp.git>,
branch test_platform_as_main_test_entry2.
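For the first variant, a platform-side main could look roughly like this (a
hedged sketch only; the file location, the suite name and the test function
are assumptions, not part of the proposal):

/* <platform>/test/init/init_main.c -- hypothetical sketch for variant 1 */
#include <CUnit/Basic.h>

/* platform-agnostic test function, provided by the lib in test/validation */
extern void test_init_ok(void);

int main(void)
{
        if (CU_initialize_registry() != CUE_SUCCESS)
                return CU_get_error();

        /* register the platform-agnostic tests into CUnit */
        CU_pSuite suite = CU_add_suite("init", NULL, NULL);
        if (!suite || !CU_add_test(suite, "init_ok", test_init_ok)) {
                CU_cleanup_registry();
                return CU_get_error();
        }

        /* platform-specific setup (environment variables, scripts, ...)
         * could be done here, before or between the test runs */
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();

        int failures = CU_get_number_of_failures();

        CU_cleanup_registry();
        return failures ? 1 : 0;
}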

What I like in this approach is that:
- the platform functionality can be fully used in tests. It becomes
legitimate to run scripts, use environment variables, or whatever the
platform offers (we are no longer restricted to 2 hooks). Also, the
platform is given a chance to set its test settings with a finer
granularity (e.g. change settings between 2 tests of the same suite).
- everything belonging to one module is located under the module directory.
- Modules can have many test executables (like init), clearly gathered in
the module directory. These different tests could be called by another
executable (e.g. a script in the same directory), thus allowing a grand
total matching the number of modules.
- The Makefiles for each module are separate: less merge burden if two
people are adding tests in 2 different modules (only the addition of a new
executable or a new API module would affect <platform>/test/Makefile.am).

The drawbacks:
- CUNIT's place becomes unclear: if tests are started by scripts, the usage
of CUNIT may become awkward or even obsolete.
- The flexibility given to each platform allows for more variation in the
tests. It could become hard to compare test results between platforms if
each of the platforms does things its own way...
- It becomes harder to get comparable test results between platforms, such
as those offered by CUNIT.

/Christophe