On Tue, Mar 6, 2012 at 9:12 AM, R. Diez <[email protected]> wrote:
> Hi all:
>
> I have been looking at the test cases for the or1ksim simulator and for the 
> or1200 OpenRISC core (the test suite is in the ORPSoC v2 project). I was 
> surprised to see that or1ksim has many more test cases than or1200. After 
> all,
> one would think that most of the test cases should be shared, that is, the 
> same test cases for the OpenRISC instruction set should run against the 
> software simulator and against the Verilog simulation.

Hi,

Thanks for your input on this. It's something that has been on the
TODO list for a while:
http://opencores.org/or1k/OR1K:Community_Portal#Wishlist

... but perhaps it wasn't so prominent. I think it's a big deal,
though, and any work toward unifying our architecture-specific test
code base would be well worth it.

>
> I am thinking about automating the test suite across all projects. While some 
> of the tests are platform specific, most of them are probably assembly or C 
> code that could be compiled for a reference SoC system, and then run against 
> these platforms:
>
>     1) Verilog source code, a combination of:
>       a) Processor cores:
>         - Stable branch of or1200
>           - With minimal features
>           - With all features (cache, floating point, etc)
>         - Unstable branch with the next version of or1200
>         - Some other OpenRISC core implementation
>       b) Verilog simulators:
>         - Icarus Verilog
>         - Verilator
>         - Xilinx ISim
>
>     2) or1ksim simulator:
>       - Stable branch of or1ksim:
>         - Release build (what the users normally use)
>         - Debug build (to check whether the test cases trigger any assertions)
>       - Unstable branch with the next version of or1ksim.
>
>     3) A synthesised system on a real FPGA, that is, against the current host 
> system.
>
> I already have the orbuild framework which can take up the task of building 
> all the different versions of or1ksim, Verilator and so on.

This sounds good. Are you proposing to unify the current test code?
I would think that's the best approach, but then you have to figure
out how to make things like or1ksim and ORPSoC use it instead of their
local copies.

>
> I've seen the following definitions in the ORPSoC v2 project for the assembly 
> side:
>
>   /*
>    * l.nop constants
>    *
>    */
>   #define NOP_NOP         0x0000      /* Normal nop instruction */
>   #define NOP_EXIT        0x0001      /* End of simulation */
>   #define NOP_REPORT      0x0002      /* Simple report */
>   #define NOP_PRINTF      0x0003      /* Simprintf instruction */
>   #define NOP_PUTC        0x0004      /* Simulation putc instruction */
>   #define NOP_REPORT_FIRST 0x0400     /* Report with number */
>   #define NOP_REPORT_LAST  0x03ff      /* Report with number */
>
> The definitions for the C side are slightly different:
>
>   /*
>    * l.nop constants
>    *
>    */
>   #define NOP_NOP          0x0000      /* Normal nop instruction */
>   #define NOP_EXIT         0x0001      /* End of simulation */
>   #define NOP_REPORT       0x0002      /* Simple report */
>   /*#define NOP_PRINTF       0x0003       Simprintf instruction (obsolete)*/
>   #define NOP_PUTC         0x0004      /* JPB: Simputc instruction */
>   #define NOP_CNT_RESET    0x0005         /* Reset statistics counters */
>   #define NOP_GET_TICKS    0x0006         /* JPB: Get # ticks running */
>   #define NOP_GET_PS       0x0007      /* JPB: Get picosecs/cycle */
>   #define NOP_REPORT_FIRST 0x0400      /* Report with number */
>   #define NOP_REPORT_LAST  0x03ff      /* Report with number */
>
> These are the ones for the or1ksim test suite:
>
>   #define NOP_NOP          0x0000      /* Normal nop instruction */
>   #define NOP_EXIT         0x0001      /* End of simulation */
>   #define NOP_REPORT       0x0002      /* Simple report */
>   /*#define NOP_PRINTF       0x0003       Simprintf instruction (obsolete)*/
>   #define NOP_PUTC         0x0004      /* JPB: Simputc instruction */
>   #define NOP_CNT_RESET    0x0005         /* Reset statistics counters */
>   #define NOP_GET_TICKS    0x0006         /* JPB: Get # ticks running */
>   #define NOP_GET_PS       0x0007      /* JPB: Get picosecs/cycle */
>   #define NOP_TRACE_ON     0x0008      /* Turn on tracing */
>   #define NOP_TRACE_OFF    0x0009      /* Turn off tracing */
>   #define NOP_RANDOM       0x000a      /* Return 4 random bytes */
>   #define NOP_OR1KSIM      0x000b      /* Return non-zero if this is Or1ksim */
>
>
> I'm thinking about unifying those codes across all platforms, as well as 
> adding some abstraction layer so that, when running on a real FPGA host, a 
> test case would call the C run-time routine exit() instead of executing 
> "l.nop NOP_EXIT".

This is sensible. A master list somewhere would be best, though a
reserved space for some model-specific NOP values would also be good.
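
For the run-time abstraction you mention (exit() on a real board versus
"l.nop NOP_EXIT" under simulation), something as simple as this might be
enough. This is only a sketch: TEST_TARGET_FPGA is a made-up name for
whatever define the framework ends up using to select the real-hardware
build, and the inline asm just follows the style of the existing support
code, so check it against that before relying on it.

  #include <stdlib.h>
  #include "nop_defs.h"   /* or wherever the shared NOP_* constants live */

  static inline void test_exit (int rc)
  {
  #ifdef TEST_TARGET_FPGA
    /* Real hardware: hand the result to the C run-time. */
    exit (rc);
  #else
    /* Simulators: pass the return code in r3, then execute the exit nop. */
    asm volatile ("l.add r3,r0,%0" : : "r" (rc) : "r3");
    asm volatile ("l.nop %0" : : "K" (NOP_EXIT));
    while (1)
      ;  /* not reached once the simulator has stopped */
  #endif
  }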

However, coming back to the master list: I was thinking that in
or1ksim we could have a nop_defs.h in which these are defined, have
that serve as the master list, and document that fact on the wiki.
We've already tried to do this with the spr_defs.h file (or1ksim's
copy is the master one).
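
Roughly, I would imagine the shared header looking something like the
following (the values are just copied from the lists above; the reserved
model-specific block is only an illustration of the idea, and the actual
range would need agreeing on):

  /*
   * l.nop constants -- master copy, shared by or1ksim, ORPSoC and the
   * test code.
   */
  #define NOP_NOP          0x0000      /* Normal nop instruction */
  #define NOP_EXIT         0x0001      /* End of simulation */
  #define NOP_REPORT       0x0002      /* Simple report */
  #define NOP_PUTC         0x0004      /* Simputc instruction */
  #define NOP_CNT_RESET    0x0005      /* Reset statistics counters */
  #define NOP_GET_TICKS    0x0006      /* Get # ticks running */
  #define NOP_GET_PS       0x0007      /* Get picosecs/cycle */
  #define NOP_TRACE_ON     0x0008      /* Turn on tracing */
  #define NOP_TRACE_OFF    0x0009      /* Turn off tracing */
  #define NOP_RANDOM       0x000a      /* Return 4 random bytes */
  #define NOP_OR1KSIM      0x000b      /* Return non-zero if this is Or1ksim */

  /* Illustration only: a block reserved for model-specific values, so a
     particular simulator or RTL model can add its own codes without
     clashing with the shared ones. */
  #define NOP_MODEL_FIRST  0x0800
  #define NOP_MODEL_LAST   0x08ff

  #define NOP_REPORT_FIRST 0x0400      /* Report with number */
  #define NOP_REPORT_LAST  0x03ff      /* Report with number */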

>
> I've also seen that the ORPSoC test suite uses grep on each test result log 
> in order to find out whether the test was successful:
>
>   /sim/bin/Makefile:
>     $(Q)echo "function check-test-log { if [ \`grep -c -i "$(TEST_OK_STRING)" 
> "$(RTL_SIM_RESULTS_DIR)"/"$(TEST)$(TEST_OUT_FILE_SUFFIX)"\` -gt 0 ]; then 
> return 0; else return 1; fi; }" >> $@
>
> Variable TEST_OK_STRING is just defined as the number "8000000d". I would 
> think that this is not a very reliable way to determine if the test succeeded.
>
> I'm also thinking about providing some way for the test case to generate a 
> result data file, something like a "l.nop WRITE_TO_RESULTS_FILE" opcode, so 
> that the test framework can compare the test results across different 
> platforms. For example, we could check that some multiplication results 
> produced under Verilator are the same as the ones produced under or1ksim. Or 
> maybe it's not worth going that far: can you think of many test cases where 
> that would be desirable?

What we get at the moment is basically a binary pass/fail output from
the tests. I would think that's enough to tell whether there are
regressions due to the last check-in.

To check whether particular values are right, you will probably want
to use something like the expect machinery that dejagnu uses. This
lets you specify exactly what the expected output should be, and so
it can check values after a multiply (which can be dumped via the
report l.nop).
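
For example, a small self-checking multiply test could be as simple as
this (sketch only: report() is written out here with inline asm in the
style of the existing support code, and 0x8000000d is the TEST_OK_STRING
pass marker that the current ORPSoC log check greps for; the failure
marker is arbitrary):

  #include "nop_defs.h"   /* shared l.nop constants, as discussed above */

  /* Dump a value into the simulation log via the report nop; the value
     is passed in r3 by convention. */
  static void report (unsigned long value)
  {
    asm volatile ("l.add r3,r0,%0" : : "r" (value) : "r3");
    asm volatile ("l.nop %0" : : "K" (NOP_REPORT));
  }

  int main (void)
  {
    volatile unsigned long a = 1234, b = 5678;
    unsigned long product = a * b;

    report (product);                  /* value visible in the log */

    if (product == 1234UL * 5678UL)
      report (0x8000000d);             /* TEST_OK_STRING: test passed */
    else
      report (0x0badbad0);             /* arbitrary failure marker */

    return 0;
  }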

My opinion on this is that the dejagnu/expect stuff is a hassle to
set up, and that lots of smaller tests spitting out pass/fail is
easier to develop with and for. Additionally, you probably need to
have the whole thing in an autotools setup, which is also a hassle to
set up and use. That said, the dejagnu stuff is a bit more thorough in
terms of what it can test. The other worry is that the simpler
mechanism can give false passes, for example if the comparison code
that checks whether a multiply result is as expected is itself broken.
However, we can order the tests so that the more fundamental features
are verified before later tests rely on them to exercise higher-level
functionality.

But the autotools/expect/dejagnu setup is probably the way to go:
unify the code base into a single autotools-based project with dejagnu
control files that know how to drive or1ksim, the ORPSoC RTL models
and an FPGA. You are then, however, relying on an external project to
test everything, and that detachment can be a pain. On the other hand,
it would help ensure better testing of everything and help minimise
the verification work.

How does your orbuild system handle things at the moment?

Again thanks for your work on this,

    Julius
_______________________________________________
OpenRISC mailing list
[email protected]
http://lists.openrisc.net/listinfo/openrisc
