Hi all:

I have been looking at the test cases for the or1ksim simulator and for the
or1200 OpenRISC core (the latter's test suite lives in the ORPSoC v2 project).
I was surprised to see that or1ksim has many more test cases than or1200.
After all, one would expect most of the test cases to be shared, that is, the
same test cases for the OpenRISC instruction set should run against both the
software simulator and the Verilog simulation.

I am thinking about automating the test suite across all projects. While some
of the tests are platform-specific, most of them are probably assembly or C
code that could be compiled for a reference SoC system and then run against
these platforms:

    1) Verilog source code, a combination of:
      a) Processor cores:
        - Stable branch of or1200
          - With minimal features
          - With all features (cache, floating point, etc.)
        - Unstable branch with the next version of or1200
        - Some other OpenRISC core implementation
      b) Verilog simulators:
        - Icarus Verilog
        - Verilator
        - Xilinx ISim

    2) or1ksim simulator:
      - Stable branch of or1ksim:
        - Release build (what the users normally use)
        - Debug build (to check whether the test cases trigger any assertions)
      - Unstable branch with the next version of or1ksim.

    3) A synthesised system on a real FPGA, that is, running the tests
natively on the target hardware.

I already have the orbuild framework, which can take on the task of building
all the different versions of or1ksim, Verilator and so on.

I've seen the following definitions in the ORPSoC v2 project on the assembly
side:

  /*
   * l.nop constants
   *
   */
  #define NOP_NOP         0x0000      /* Normal nop instruction */
  #define NOP_EXIT        0x0001      /* End of simulation */
  #define NOP_REPORT      0x0002      /* Simple report */
  #define NOP_PRINTF      0x0003      /* Simprintf instruction */
  #define NOP_PUTC        0x0004      /* Simulation putc instruction */
  #define NOP_REPORT_FIRST 0x0400     /* Report with number */
  #define NOP_REPORT_LAST  0x03ff      /* Report with number */

The definitions for the C side are slightly different:

  /*
   * l.nop constants
   *
   */
  #define NOP_NOP          0x0000      /* Normal nop instruction */
  #define NOP_EXIT         0x0001      /* End of simulation */
  #define NOP_REPORT       0x0002      /* Simple report */
  /*#define NOP_PRINTF     0x0003         Simprintf instruction (obsolete)*/
  #define NOP_PUTC         0x0004      /* JPB: Simputc instruction */
  #define NOP_CNT_RESET    0x0005      /* Reset statistics counters */
  #define NOP_GET_TICKS    0x0006      /* JPB: Get # ticks running */
  #define NOP_GET_PS       0x0007      /* JPB: Get picosecs/cycle */
  #define NOP_REPORT_FIRST 0x0400      /* Report with number */
  #define NOP_REPORT_LAST  0x03ff      /* Report with number */

These are the ones for the or1ksim test suite:

  #define NOP_NOP          0x0000      /* Normal nop instruction */
  #define NOP_EXIT         0x0001      /* End of simulation */
  #define NOP_REPORT       0x0002      /* Simple report */
  /*#define NOP_PRINTF     0x0003         Simprintf instruction (obsolete)*/
  #define NOP_PUTC         0x0004      /* JPB: Simputc instruction */
  #define NOP_CNT_RESET    0x0005      /* Reset statistics counters */
  #define NOP_GET_TICKS    0x0006      /* JPB: Get # ticks running */
  #define NOP_GET_PS       0x0007      /* JPB: Get picosecs/cycle */
  #define NOP_TRACE_ON     0x0008      /* Turn on tracing */
  #define NOP_TRACE_OFF    0x0009      /* Turn off tracing */
  #define NOP_RANDOM       0x000a      /* Return 4 random bytes */
  #define NOP_OR1KSIM      0x000b      /* Return non-zero if this is Or1ksim */


I'm thinking about unifying those codes across all platforms, as well as
adding an abstraction layer so that, when running on a real FPGA, a test case
would call the C run-time routine exit() instead of executing "l.nop NOP_EXIT".
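
For illustration, here is a minimal sketch of what such a layer could look
like in C. The header name, the test_exit() routine and the TEST_TARGET_FPGA
macro are made up for this example; the inline assembly follows the usual
or1ksim convention of placing the argument in r3 before the l.nop:

  /* or1k-test-support.h (hypothetical name) */

  #define NOP_EXIT 0x0001  /* End of simulation */

  #ifdef TEST_TARGET_FPGA

  #include <stdlib.h>

  /* On a real FPGA there is no simulator to intercept the l.nop
     codes, so fall back to the C run-time. */
  static inline void test_exit (int code)
  {
    exit (code);
  }

  #else

  /* Under or1ksim or an RTL simulation, place the exit code in r3
     and execute the magic l.nop that ends the simulation. */
  static inline void test_exit (int code)
  {
    asm volatile ("l.add r3,r0,%0" : : "r" (code) : "r3");
    asm volatile ("l.nop %0" : : "K" (NOP_EXIT));
    for (;;);  /* The simulator never lets execution get here. */
  }

  #endif

A test case would then always finish with test_exit(0), regardless of the
platform it runs on.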

I've also seen that the ORPSoC test suite runs grep on each test's result log
in order to find out whether the test was successful:

  /sim/bin/Makefile:
    $(Q)echo "function check-test-log { if [ \`grep -c -i "$(TEST_OK_STRING)"
          "$(RTL_SIM_RESULTS_DIR)"/"$(TEST)$(TEST_OUT_FILE_SUFFIX)"\` -gt 0 ];
          then return 0; else return 1; fi; }" >> $@

The variable TEST_OK_STRING is just defined as the number "8000000d".
Grepping the log for that magic number does not strike me as a very reliable
way to determine whether the test succeeded.
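
For context, that value is presumably what a passing test writes to the log
through the l.nop reporting mechanism. A sketch of the convention on the C
side, assuming a report() helper along the lines of the existing support
libraries (the helper names are illustrative):

  #define NOP_REPORT    0x0002        /* Simple report */
  #define TEST_OK_VALUE 0x8000000dUL  /* Matches TEST_OK_STRING */

  /* Place the value in r3 and let the simulator log it. */
  static inline void report (unsigned long value)
  {
    asm volatile ("l.add r3,r0,%0" : : "r" (value) : "r3");
    asm volatile ("l.nop %0" : : "K" (NOP_REPORT));
  }

  static inline void test_passed (void)
  {
    report (TEST_OK_VALUE);
    test_exit (0);  /* As sketched above. */
  }

A stricter scheme could make the simulator propagate the test's exit code to
the shell, instead of grepping case-insensitively for a bare number that could
also appear in unrelated report output.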

I'm also thinking about providing some way for a test case to generate a
result data file, something like a "l.nop WRITE_TO_RESULTS_FILE" opcode, so
that the test framework can compare the test results across different 
platforms. For example, we could check that some multiplication results 
produced under Verilator are the same as the ones produced under or1ksim. Or 
maybe it's not worth going that far: can you think of many test cases where 
that would be desirable?
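
To make the idea concrete, a test could funnel its results through a helper
like the one below. The NOP_WRITE_TO_RESULTS_FILE code does not exist yet and
its value here is purely hypothetical (0x000c would be the next free code
after NOP_OR1KSIM); the framework would append each reported word to a per-run
results file and diff those files across platforms afterwards:

  #define NOP_WRITE_TO_RESULTS_FILE 0x000c  /* Hypothetical code. */

  /* Append one 32-bit word to the results file kept by the test
     framework. */
  static inline void write_result (unsigned long value)
  {
    asm volatile ("l.add r3,r0,%0" : : "r" (value) : "r3");
    asm volatile ("l.nop %0" : : "K" (NOP_WRITE_TO_RESULTS_FILE));
  }

  /* Example: record a multiplication result, so that a Verilator run
     can be compared word for word against an or1ksim run. */
  void test_mul (void)
  {
    volatile unsigned long a = 123456789;
    volatile unsigned long b = 7;
    write_result (a * b);
  }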

Any thoughts on this?

Thanks,
  R. Diez