On Mon, Oct 14, 2013 at 7:10 AM, Olof Kindgren <[email protected]> wrote:

> Any thoughts?
>

Hi Olof,

As I've mentioned, I have the start of a testsuite here:
https://github.com/pgavin/or1k-test

I've described it on this list before, I believe, but I'll say a few things
about it again since we're on this topic.  I wanted each test to
encapsulate all the information it needs in a single file,
including any assembly/C code and any linker script that's necessary.
 (However, C/asm code could theoretically be kept in separate files and
linked in if desired.)  To that end I used m4 for preprocessing the test
cases.  M4 is also useful because it helps generate the kind of
repetitive code found in testsuites.  (However, it does have some
programmability limitations that I've worked around by using Python to
generate code in some cases.)
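
To make the single-file idea concrete, here's a rough sketch of what such
a test case could look like.  The macro names (TEST_HEADER, CHECK_EQ,
TEST_FOOTER) are placeholders for illustration only, not the macros the
m4/ library actually defines:

    dnl addi.m4 -- hypothetical single-file test case, preprocessed by m4.
    dnl The placeholder macros would expand to the linker script, startup
    dnl code and the sfeq-based checks described further down.
    TEST_HEADER(`addi-basic')

            l.ori   r3, r0, 40          /* set up an input value */
            l.addi  r3, r3, 2           /* instruction under test */
            CHECK_EQ(r3, 42)            /* expands to an inlined sfeq check */

    TEST_FOOTER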

The tree has three directories:  m4/ contains a library of m4 files, etc/
contains miscellaneous support files, and tests/ contains the test cases.  The
process.py script takes care of running m4 on the test cases, and there's a
sample Makefile that will build and run the testsuite.  It should build and
run on or1ksim as-is, provided the toolchain is in your PATH.

We should be able to express feature dependencies between tests,
so that e.g. a test for feature A that requires feature B is only executed
after feature B has been tested.  For example, the ALU tests cannot work
unless the sfeq instruction works correctly, since that is what compares
each result with its expected value.  I also tried to minimize the number
of such dependencies between tests.  I haven't fully realized this dependency
idea, though.  Right now I just keep a list of all the tests, manually sorted
so that the more complex tests with more dependencies come after the simpler
ones.  This ordering helps implementers by letting them develop features
incrementally.

The test cases do not do any I/O.  Other testsuites print out whatever
results are generated and compare them against a file containing the
expected output.  I don't like this because it either requires the l.nop
hack (overloading l.nop's immediate operand as a simulator hint to report
a value or exit), or it requires that the store instructions have already
been tested.  Instead, the test cases use sfeq to compare each result with
its expected value, and execute user-supplied fail code if they do not
match.  If all checks pass, the test case executes the user-supplied pass
code.  (The pass/fail code is inlined directly using m4, so no function
calls are needed.)
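
Hand-expanded, one such check looks roughly like the sketch below; the
register choices, labels and the pass/fail bodies are only illustrative,
since in the real test cases the pass/fail code is whatever the user
supplies:

    /* Illustrative expansion of one check; assumes a classic OR1K
       implementation with branch delay slots. */
            .section .text
            .global _start
    _start:
            l.ori   r3, r0, 40          /* input value */
            l.addi  r3, r3, 2           /* result under test: should be 42 */

            l.ori   r4, r0, 42          /* expected value */
            l.sfeq  r3, r4              /* set flag if result == expected */
            l.bnf   fail                /* flag clear -> mismatch */
            l.nop                       /* branch delay slot */

    pass:                               /* user-supplied pass code inlined here */
            l.j     pass
            l.nop

    fail:                               /* user-supplied fail code inlined here */
            l.j     fail
            l.nop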

We should keep implementation-specific tests out of the unified test suite,
IMO.  Instead, we should let implementers extend the testsuite with test
cases for their own platforms.

I think the testsuite should eventually be configurable to either run each
test as a completely separate program or run all the tests sequentially
in one go.  The latter would make it simple to test a complete working
system; e.g. the testsuite could be embedded in the system firmware to test
the system at boot time.

Obviously, I'd really like it if my testsuite were used as the starting
point for this, but I also understand if it's not what everyone wants.  I
just think there are a few good ideas in there that will be useful for this
effort.

-Pete