Adding hardware raises complexities of design, manufacturing,
distribution, and support.  Not insurmountable, but not simple either.
This is why I suggested that we "grow" into it.
Or don't develop custom hardware.  Use COTS (Commercial Off-The-Shelf) only.

(1) Perfect the current coding standard and build test.
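
As an illustration of how small the build-test half of #1 can be, here
is a sketch in Python; the "make all" command is a placeholder for
whatever the real build invocation is:

    #!/usr/bin/env python3
    # Minimal build gate: run the build and fail loudly on any nonzero
    # exit.  "make all" is a placeholder for the project's real build.
    import subprocess
    import sys

    result = subprocess.run(["make", "all"], capture_output=True, text=True)
    if result.returncode != 0:
        sys.stderr.write(result.stderr)
        sys.exit("build test FAILED")
    print("build test passed")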

(2) When #1 is complete, add static analysis.
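
A sketch of what #2 could look like bolted onto the same gate; cppcheck
and the src/ path are only examples, since tool selection is exactly
the open question further down:

    #!/usr/bin/env python3
    # Static-analysis gate sketch: any reported error fails the check.
    # cppcheck and src/ are placeholders for whatever tool and tree we
    # actually settle on.
    import subprocess
    import sys

    result = subprocess.run(
        ["cppcheck", "--error-exitcode=1", "--enable=warning", "src/"])
    sys.exit(result.returncode)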

(3) When #2 is complete, add a software-only automated test suite under
simulation.
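
For #3, a case under simulation could be an ordinary pytest test
driving a simulator in-process.  A sketch; the sim module and its
Machine API are invented here purely to show the shape:

    # Software-only test sketch.  The "sim" module, Machine class, and
    # move()/position() calls are hypothetical, for illustration only.
    import pytest
    import sim

    def test_rapid_move_reaches_target():
        machine = sim.Machine()
        machine.home()
        machine.move(x=10.0, y=5.0)
        assert machine.position() == pytest.approx((10.0, 5.0))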

What would be the relationship to PR checks?  In the past, Xiao Xiang has proposed this as a step in validating PRs.

My interest would be in setting up a custom standalone test harness that I could use in my office, independent of the GitHub tests.  I don't think those should be mutually exclusive, and I don't think your steps apply to the latter.
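
One way to keep them non-exclusive: a small runner that behaves the
same on my desk as it would inside a PR check.  A sketch, assuming
shared cases live in a cases/ directory as standalone Python scripts
(both the layout and the pass/fail-by-exit-status contract are
assumptions):

    #!/usr/bin/env python3
    # Standalone harness sketch: walk a directory of shared test cases
    # and run each one locally, with no GitHub involvement.  The cases/
    # layout and exit-status contract are assumptions, not a standard.
    import pathlib
    import subprocess
    import sys

    def run_case(case):
        """Run one case script; a zero exit status means it passed."""
        return subprocess.run([sys.executable, str(case)]).returncode == 0

    failures = [c.name for c in sorted(pathlib.Path("cases").glob("*.py"))
                if not run_case(c)]
    sys.exit("failed: " + ", ".join(failures) if failures else 0)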

(4) When #3 is complete, add hardware testing.
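
The same case shape could then be pointed at a real machine.  A rough
sketch using pyserial, where the port, baud rate, and the Grbl-style
"$H" homing exchange are all assumptions for illustration:

    # Hardware-in-the-loop sketch.  Port name, baud rate, and the
    # command/response protocol are assumptions, not our actual setup.
    import serial  # pyserial

    def test_hardware_homes(port="/dev/ttyUSB0"):
        with serial.Serial(port, 115200, timeout=5) as link:
            link.write(b"$H\n")              # homing command (Grbl-style)
            assert link.readline().strip() == b"ok"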

I don't think we need a rigid sequence of steps.  I see nothing wrong with skipping 1 and 2 and going directly to 3.  I see nothing wrong with some people doing 3 while others are doing 4 concurrently.  This sequence is not useful.

What would be useful would be:

1. Selection of a common tool,
2. Determination of the requirements for a test case, and
3. A repository for retaining shareable test cases.

If we have those central coordinating resources then the rest can be a happy anarchy like everything else done here.
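
On points 2 and 3, the requirements for a test case might be no more
than a small metadata header that every shared case carries, so any
harness, mine or GitHub's, can decide whether it can run it.  A sketch;
the field names are only a proposal:

    # Shareable-test-case convention sketch.  Every case in the shared
    # repository carries this dict; the field names are a proposal.
    CASE_META = {
        "name": "rapid_move_reaches_target",
        "requires": ["simulator"],       # or e.g. ["hardware:mill"]
        "timeout_s": 30,
        "author": "greg",
    }

    def runnable(meta, available):
        """A case is runnable if the host provides all it requires."""
        return all(r in available for r in meta["requires"])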

Greg

