On Fri, Aug 9, 2024 at 3:42 AM Nathan Hartman <hartman.nat...@gmail.com> wrote:
> On Thu, Aug 8, 2024 at 8:58 PM Gregory Nutt <spudan...@gmail.com> wrote:
> > On 8/8/2024 6:48 PM, Nathan Hartman wrote:
> > > A dedicated "citest" program is a good idea!
> > I think that ostest could meet all of the needs of a "citest" program.
> > It just needs more control of the verbosity and format of the output.  It
> > already meets 95% of the requirement.
>
> Ok so perhaps it could use a command line arg to instruct it how much
> output to produce. Many unix programs have the convention of --quiet for no
> output, --verbose for full output, and by default only necessary messages.
> But a CI test might need different output altogether, since it needs to be
> compared somehow. So, maybe we need a --ci argument that puts the output in
> a format suitable for automatic CI testing. My thinking is to provide one
> mode for CI and another (more user friendly) mode for manual testing. I
> think that's needed because if the CI tests fail, then we would likely want
> to run it manually and see what isn't working.

True, `ostest --ci` would keep everything in one place, make things
easier to maintain, and provide output that is easy to parse by the
CI/test frameworks we need :-)

It is also a good idea to mark all tests PASS, FAIL, or SKIP, as this
would provide proof of what was done, what was skipped, and what
failed: so-called test accountability artifacts. An optional
`--verbose` switch would provide problem details when needed :-)

I think most tests there would have to pass in order to merge code
upstream or make a release. Some tests might be skipped for various
reasons, so a skip_list (short) seems more reasonable than a test_list
(long). On the other hand, when an initial non-verbose run has some
FAIL results, it may be necessary to re-launch only a given set of
tests in verbose mode to gather details on the failure reasons. It may
also be desirable to group tests into tiers: for instance, the Tier0
group is critical and all of its tests must pass and cannot be
skipped, the Tier1 group may contain tests that can be skipped, Tier2
tests are optional, etc.

Hence the idea of test code names using a unique OID-like mechanism,
so each code would tell exactly what the test is for. Maybe we could
use existing OID base codes, or maybe we need to create our own TID
(Test ID) tree. If there are OID identifiers already defined that
cover our test areas, we would use the OID to mark what is tested and
a Test Number to mark the individual test (i.e. OID=0.1.2.3 T=1),
assuming that existing OID/TID/OID+T assignments won't change in the
future, to keep results and tools coherent. I am not that familiar
with the OID database, though here may be a starting point:
http://www.oid-info.com/

If there are already well-adopted open-source test analysis
frameworks / standards out there (other big Apache projects?), we may
just use them as a reference for test organization and automated
analysis :-)

Have a good day folks! :-)
Tomek

-- 
CeDeROM, SQ7MHZ, http://www.tomek.cedro.info
