Yes, I think by default ostest should only print SUCCESS, FAIL or
SKIPPED. And in case of failure it could print the error message to
allow quick understanding of the issue.
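
For example, the default output could be as terse as this (the test
names are made up, just to illustrate the format):

  dev_null: SUCCESS
  pthread: FAIL (pthread_create returned EAGAIN)
  fpu: SKIPPED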

For hardware testing, something we can use is a 16-port USB hub with
individual port power controlled by uhubctl
(https://github.com/mvp/uhubctl).

I got this "low cost" USB hub:
https://aliexpress.com/item/1005006147378255.html and uhubctl was able
to turn all of its ports ON/OFF individually. According to the uhubctl
project, some versions of this hub will not work, but the model I found
on AliExpress worked fine.

So initially we will need to use the USB hub with a Raspberry Pi board,
but later we could port uhubctl to run on NuttX itself.
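
For example, power-cycling a board attached to port 3 of the hub could
look like this (the hub location 1-1 is just an example; running
uhubctl with no arguments lists the hubs it can control):

  uhubctl -l 1-1 -p 3 -a off    # power the port off
  uhubctl -l 1-1 -p 3 -a on     # power it back on
  uhubctl -l 1-1 -p 3 -a cycle  # or off+on in one command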

BR,

Alan

On Fri, Aug 9, 2024 at 10:00 AM Nathan Hartman <hartman.nat...@gmail.com>
wrote:

> I like Tomek's idea of testing tiers, with Tier0 being "critical tests"
> that MUST pass on all architectures and boards.
>
> I like Alin's idea of an automatically updated test status report form,
> which can be viewed on GitHub.
>
> I recommend listing ALL supported boards in the report form, with "Need
> Test Hardware" (or something like that) shown for boards that are not
> included in any HW test cluster. The idea is to help find volunteers to
> test those boards.
>
> What happens if multiple people have the same board in their test clusters?
> And what happens if tests pass for one person but fail for another, on the
> same board model? We might need to be able to show: Pass, Fail, Need Test
> Hardware, or a ratio like 2/3 (passing on 2 instances out of 3). That might
> indicate bugs that are sensitive to timing.
>
> Cheers,
> Nathan
>
> On Fri, Aug 9, 2024 at 7:23 AM Tomek CEDRO <to...@cedro.info> wrote:
>
> > On Fri, Aug 9, 2024 at 3:42 AM Nathan Hartman <hartman.nat...@gmail.com>
> > wrote:
> > > On Thu, Aug 8, 2024 at 8:58 PM Gregory Nutt <spudan...@gmail.com>
> > > wrote:
> > > > On 8/8/2024 6:48 PM, Nathan Hartman wrote:
> > > > > A dedicated "citest" program is a good idea!
> > > > I think that ostest could meet all of the needs of a "citest"
> > > > program. It just needs more control of the verbosity and format
> > > > of the output. It already meets 95% of the requirements.
> > >
> > > Ok so perhaps it could use a command line arg to instruct it how
> > > much output to produce. Many unix programs have the convention of
> > > --quiet for no output, --verbose for full output, and by default
> > > only necessary messages. But a CI test might need different output
> > > altogether, since it needs to be compared somehow. So, maybe we
> > > need a --ci argument that puts the output in a format suitable for
> > > automatic CI testing. My thinking is to provide one mode for CI
> > > and another (more user friendly) mode for manual testing. I think
> > > that's needed because if the CI tests fail, then we would likely
> > > want to run it manually and see what isn't working.
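> > >
> > > Something like this (hypothetical flags, just to show the
> > > convention):
> > >
> > >   ostest            # default: only necessary messages
> > >   ostest --quiet    # no output, exit code only
> > >   ostest --verbose  # full output
> > >   ostest --ci       # machine-parseable output for CI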
> >
> > True, `ostest --ci` would keep everything in one place and make
> > things easier to maintain + provide output that is easy to parse by
> > the CI/test frameworks that we need :-)
> >
> > Also a good idea to mark all tests PASS, FAIL, SKIP, as this would
> > provide proof of what was done, what was skipped, and what failed:
> > so-called test accountability artifacts. An optional `--verbose`
> > switch would provide problem details when needed :-)
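> >
> > Just to illustrate, the reporting helper could be as simple as this
> > (a rough sketch only; none of these names exist in ostest today):
> >
> >   #include <stdio.h>
> >
> >   enum ci_result { CI_PASS, CI_FAIL, CI_SKIP };
> >
> >   /* Print one trivially parseable line per test:
> >    * "<id> <PASS|FAIL|SKIP>[ <detail>]"
> >    */
> >
> >   static void ci_report(const char *id, enum ci_result res,
> >                         const char *detail)
> >   {
> >     static const char *tag[] = { "PASS", "FAIL", "SKIP" };
> >
> >     printf("%s %s%s%s\n", id, tag[res],
> >            detail != NULL ? " " : "",
> >            detail != NULL ? detail : "");
> >   }
> >
> >   int main(void)
> >   {
> >     ci_report("sem_test", CI_PASS, NULL);
> >     ci_report("fpu_test", CI_SKIP, "no FPU in this config");
> >     ci_report("mqueue_test", CI_FAIL, "timeout after 5 seconds");
> >     return 0;
> >   }
> >
> > Then the CI side only has to grep for FAIL.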
> >
> > I think most tests there would have to pass in order to merge code
> > to upstream or make a release. Some tests might be skipped for
> > various reasons, thus a skip_list (short one) seems more reasonable
> > than a test_list (long one). On the other hand, when initial
> > non-verbose testing has some FAIL results, it may be necessary to
> > launch only a given set of tests verbosely to gather details on the
> > failure reason. Also, it may be desirable to group tests in tiers:
> > for instance, the Tier0 group is critical and all of its tests must
> > pass and cannot be skipped, the Tier1 group may contain tests that
> > can be skipped, Tier2 tests are optional, etc.
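> >
> > For example (a made-up grouping, just to show the idea):
> >
> >   Tier0: sched, sem, signal   (must pass, cannot be skipped)
> >   Tier1: mqueue, pthread      (may be skipped with a stated reason)
> >   Tier2: benchmarks, stress   (optional)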
> >
> > Thus the idea of test code names using a unique OID-like mechanism,
> > so each code would tell exactly what the test is for. Maybe we could
> > use existing OID base codes, or maybe we need to create our own TID
> > (Test ID) tree. If there are OID identifiers already defined that
> > could cover our test areas, we would use the OID to mark what is
> > tested and a Test Number to mark the test number (i.e. OID=0.1.2.3
> > T=1), assuming that existing OID/TID/OID+T values won't change in
> > the future, to keep results and tools coherent. I am not that
> > familiar with the OID database, though here may be a starting point:
> > http://www.oid-info.com/
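> >
> > A made-up TID tree might look like this (purely illustrative, not an
> > existing registry):
> >
> >   1        NuttX tests
> >   1.1      kernel
> >   1.1.2    scheduler
> >   1.1.2.4  scheduler test #4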
> >
> > If there are already well-adopted open-source test analysis
> > frameworks / standards out there (other big Apache projects?), we
> > may just use them as a reference for test organization and automated
> > analysis :-)
> >
> > Have a good day folks! :-)
> > Tomek
> >
> > --
> > CeDeROM, SQ7MHZ, http://www.tomek.cedro.info
> >
>
