In case the list still hasn't gotten it:

---------- Forwarded message ----------
From: Guillaume Rembert <[email protected]>
Date: Fri, Oct 18, 2013 at 9:11 AM
Subject: Fwd: [Openrisc] [OpenRISC] or1k test suite
To: Olof Kindgren <[email protected]>, Peter Gavin <[email protected]>
Hi Olof, Pete,

I tried with two different email addresses, but I am still getting the message below. Could you forward this to the list if needed?

Guillaume

---------- Forwarded message ----------
From: <[email protected]>
Date: Fri, Oct 18, 2013 at 2:57 PM
Subject: Re: [Openrisc] [OpenRISC] or1k test suite
To: [email protected]

You are not allowed to post to this mailing list, and your message has been automatically rejected. If you think that your messages are being rejected in error, contact the mailing list owner at [email protected].

---------- Forwarded message ----------
From: Guillaume Rembert <[email protected]>
To: Olof Kindgren <[email protected]>
Cc: Peter Gavin <[email protected]>, openrisc <[email protected]>, "[email protected]" <[email protected]>
Date: Fri, 18 Oct 2013 14:57:06 +0200
Subject: Re: [Openrisc] [OpenRISC] or1k test suite

Hi Olof, hi Pete,

If I can add my two cents on tests and directory organisation, I would suggest differentiating / categorising the tests by their nature and objectives, in addition to the code type (by creating sub-directories or compilation options, for example). I usually distinguish four types of tests:

--> functional tests: making sure that the system does what is required - for example, the Linux port or the I/O tests might be found here,
--> interoperability tests: making sure that the system works properly with other systems - for example, tests that the system can be ported to Altera / Xilinx and other FPGAs, or the Wishbone compatibility tests, might be found here,
--> stress tests: making sure that the system is stable and won't die under unattended use - for example, random read/write cycles as well as random instruction execution tests might be found here,
--> performance tests: measuring the system's performance and assessing its capabilities - for example, CPU Dhrystone or read/write/rewrite/re-read speed tests might be found here.

This would help establish a proper test sequence for any design and validate any implementation step by step, starting from the base. It might also make it easier to debug and analyse the tests.

What do you think?

Guillaume

On Tue, Oct 15, 2013 at 3:33 PM, Olof Kindgren <[email protected]> wrote:
>
> On Mon, Oct 14, 2013 at 11:16 PM, Peter Gavin <[email protected]> wrote:
>
>> On Mon, Oct 14, 2013 at 7:10 AM, Olof Kindgren <[email protected]> wrote:
>>
>>> Any thoughts?
>>>
>>
>> Hi Olof,
>>
>> As I've mentioned, I have the start of a testsuite here:
>> https://github.com/pgavin/or1k-test
>>
>> I've described it on this list before, I believe, but I'll say a few things about it again since we're on this topic. I wanted each test to encapsulate all the information it needs in a single file, including any assembly/C code and any linker script that's necessary. (However, C/asm code could theoretically be kept in separate files and linked in if desired.) To that end I used m4 for preprocessing the test cases. m4 is also useful because it helps with generating the kind of repetitive code found in testsuites. (However, it does have some programmability limitations that I've skirted by using Python to generate code in some cases.)
>>
>> The tree has three directories: m4/ contains a library of m4 files, etc/ contains miscellaneous junk, and tests/ contains the test cases. The process.py script takes care of running m4 on the test cases, and there's a sample Makefile that will build and run the testsuite.
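>>
>> For a rough idea of what such a single-file test case boils down to, here is a sketch of what one might expand to after preprocessing (the register choices, labels, constants and the looping placeholders below are invented for the illustration, not taken from the repository):
>>
>>         .section .text
>>         .global _start
>> _start:
>>         l.addi  r3, r0, 1       /* r3 <- 1 */
>>         l.addi  r4, r0, 2       /* r4 <- 2 */
>>         l.add   r3, r3, r4      /* r3 <- 1 + 2 */
>>         l.addi  r5, r0, 3       /* r5 <- expected result */
>>         l.sfeq  r3, r5          /* set the flag if result == expected */
>>         l.bnf   fail            /* flag clear: result differs from expected */
>>         l.nop                   /* branch delay slot */
>> pass:
>>         l.j     pass            /* placeholder; user-supplied pass code is inlined here */
>>         l.nop
>> fail:
>>         l.j     fail            /* placeholder; user-supplied fail code is inlined here */
>>         l.nop
>>
>> (This kind of repetitive scaffolding is roughly what the m4 macros are there to generate, with the pass/fail code supplied by the user.)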
>> The testsuite should build and run on or1ksim as-is, provided the toolchain is in your PATH.
>>
>> We should be able to provide some sort of feature dependency between tests, so that e.g. a test for feature A that requires feature B is only executed after feature B has been tested. For example, the ALU tests cannot work correctly unless the sfeq instruction works correctly, because sfeq is what compares each result with its expected value. I have also tried to minimize the number of such dependencies between tests. I haven't fully realized this dependency idea, though. Right now I just keep a list of all the tests, manually sorted so that the more complex tests with more dependencies come after the simpler ones. This sort of organization helps implementers by letting them develop features incrementally.
>>
>> The test cases do not do any I/O. Other testsuites print out whatever results are generated, which are then compared with a file that contains what's expected. I don't like this because it either requires the l.nop hack, or it requires that the store instructions have already been tested. Instead, the testcases use sfeq to compare each result with its expected value, and execute user-supplied fail code if they do not match. If all the checks pass, the testcase executes the user-supplied pass code. (The pass/fail code is directly inlined using m4, so function calls aren't needed.)
>>
>> We should keep implementation-specific tests out of the unified test suite, IMO. Instead we should allow implementers to extend the testsuite with testcases for their platform.
>>
>> I think eventually the testsuite should be configurable to either run each test as a completely separate program, or to run all the tests sequentially in one go. This will make it simple to test a complete working system; the testsuite could, e.g., be embedded in the system firmware to test the system at boot time.
>>
>> Obviously, I'd really like it if my testsuite were used as the starting point for this, but I also understand if it's not what everyone wants. I just think there are a few good ideas in there that will be useful for this effort.
>>
>> -Pete
>>
>
> Hi Peter,
>
> We should definitely use the great work you have done here. I have only taken a quick glance at your test suite, so I'm not sure if it can easily be extended with plain C or asm test cases, but I think we want that as well, to make it easy for people to add tests without having to learn m4.
>
> What I think we should do is use your test suite as one of the alternative test suites (with orpsocv2, or1ksim and gcc being the other test suites atm). The easiest way would be to put the contents of your repo in a subdirectory of openrisc/or1k-tests. If someone wants to add tests, they can either add them to your m4 infrastructure or write plain C/asm code. If we decide to use your infrastructure as the preferred way to write tests, we could migrate the C/asm tests one by one later on.
>
> //Olof
_______________________________________________
OpenRISC mailing list
[email protected]
http://lists.openrisc.net/listinfo/openrisc
