Just checked v19 and it seems that, in this release, failing tests would
output stderr and stdout to .testing-results, but if memory serves this was
flaky. I _think_ the output wasn't recorded if gem5 crashed, which is a
very common cause of test-case failure. In v20 nothing is recorded at all.
If I had to guess, we probably broke things when adding Python 3 support,
as I believe that's the only significant change we made to the TestLib for
the v20 release.
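
For what it's worth, a classic failure mode in a 2-to-3 port is bytes-vs-str
handling of subprocess output. I haven't confirmed this is what broke, but
here's a hypothetical sketch (not gem5's actual code) of how that kind of bug
can silently lose a log:

```python
import os
import subprocess
import tempfile

# Hypothetical sketch: capturing a command's output the way a test
# harness might. Under Python 3, proc.stdout is bytes, so writing it
# to a text-mode file raises TypeError; code ported from Python 2
# that catches exceptions broadly would then record nothing.
proc = subprocess.run(["echo", "hello"], capture_output=True)

log_path = os.path.join(tempfile.mkdtemp(), "simout")
try:
    with open(log_path, "w") as f:   # text mode: a Python 2 habit
        f.write(proc.stdout)         # bytes -> TypeError on Python 3
except TypeError:
    pass                             # output silently lost

# The fix is one line: open in binary mode (or decode the bytes first).
with open(log_path, "wb") as f:
    f.write(proc.stdout)
```

If something like this is lurking in the TestLib, the fix would be trivial,
but I haven't dug in to check.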


--
Dr. Bobby R. Bruce
Room 2235,
Kemper Hall, UC Davis
Davis,
CA, 95616

web: https://www.bobbybruce.net

On Mon, Aug 24, 2020 at 2:04 AM Gabe Black <[email protected]> wrote:

> This at least seems reasonable, although I haven't thought about it long
> enough to commit to too strongly :-). I *used* to be able to find what I
> would need (with enough digging) in .testing-results/blah-blah, but now
> there are only two files in there, results.xml and results.pickle. Did this
> break recently?
>
> Gabe
>
> On Mon, Aug 24, 2020 at 1:43 AM Bobby Bruce <[email protected]> wrote:
>
>> I agree the TestLib framework is broken when it comes to reporting
>> output, which makes failures pretty difficult to figure out. As far as I
>> know, it's been this way for some time. My workflow when I get a
>> failing test is to run `./main.py run --uid <UID>` on the failing test
>> suite which, when run (at least on my machine), will print the location of
>> the `simout` and `simerr` files (which are not deleted).
>>
>> E.g.:
>>
>> ```
>> $ ./main.py run --uid
>> SuiteUID:tests/gem5/hello_se/test_hello_se.py:test-hello-linux-TimingSimpleCPU-RISCV-x86_64-opt
>> Running the new gem5 testing script.
>> For more information see TESTING.md.
>> To see details as the testing scripts are running, use the option -v,
>> -vv, or -vvv
>> Discovered 198 tests and 99 suites in
>> /home/bobbyrbruce/Documents/gem5/tests/gem5/hello_se/test_hello_se.py
>>
>> =======================================================================================================
>> Running Tests from 1 suites
>> Results will be stored in
>> /home/bobbyrbruce/Documents/gem5/tests/.testing-results
>>
>> =======================================================================================================
>> Building the following targets. This may take a while.
>> /home/bobbyrbruce/Documents/gem5/build/RISCV/gem5.opt
>> You may want to run with only a single ISA(--isa=), use --skip-build, or
>> use 'rerun'.
>> warn: CheckedInt already exists in allParams. This may be caused by the
>> Python 2.7 compatibility layer.
>> warn: Enum already exists in allParams. This may be caused by the Python
>> 2.7 compatibility layer.
>> warn: ScopedEnum already exists in allParams. This may be caused by the
>> Python 2.7 compatibility layer.
>> warn: CheckedInt already exists in allParams. This may be caused by the
>> Python 2.7 compatibility layer.
>> warn: Enum already exists in allParams. This may be caused by the Python
>> 2.7 compatibility layer.
>> warn: ScopedEnum already exists in allParams. This may be caused by the
>> Python 2.7 compatibility layer.
>> Redirecting stdout to /tmp/gem5outaiXmNQ/simout
>> Redirecting stderr to /tmp/gem5outaiXmNQ/simerr
>> Test: test-hello-linux-TimingSimpleCPU-RISCV-x86_64-opt Passed
>> Test: test-hello-linux-TimingSimpleCPU-RISCV-x86_64-opt-MatchStdoutNoPerf
>> Passed
>> ============================== Results: 2 Passed in 9.7e+02 seconds
>>  ==================================
>> ```
>>
>> I'd recommend three fixes for the TestLib:
>>
>> 1) `.testing-results` shouldn't be a hidden directory.
>> 2) Stdout and stderr should be written to the
>> `testing-results/results.xml` file.
>> 3) For the `MatchStdoutNoPerf` tests (where stdout is compared against
>> a reference), the diff should be recorded upon failure. The
>> `tests/gem5/verifier.py` script contains most of the functionality for
>> these `MatchStdoutNoPerf` tests.
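>>
>> For (2), the JUnit XML format (which `results.xml` appears to follow)
>> already has a standard place for this: `<system-out>` and `<system-err>`
>> elements inside each `<testcase>`. A failing entry could look something
>> like this (illustrative only, not what gem5 currently emits):
>>
>> ```
>> <testsuite name="hello_se" tests="1" failures="1">
>>   <testcase name="test-hello-linux-TimingSimpleCPU-RISCV-x86_64-opt">
>>     <failure message="stdout did not match reference"/>
>>     <system-out>contents of simout here</system-out>
>>     <system-err>contents of simerr here</system-err>
>>   </testcase>
>> </testsuite>
>> ```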
>>
>> The TestLib code (in `ext/testlib`) could definitely use a cleanup. I
>> think, frankly, it's way over-engineered given that all we really want to
>> do is compile different gem5 ISAs and run them with different
>> configurations. The testing-results functionality is mostly handled by
>> `ext/testlib/results.py`, though the methods there are clearly broken or
>> not being called when they should be. If you're looking for something
>> to do, I'd recommend starting there.
>>
>> Kind regards,
>> Bobby
>> --
>> Dr. Bobby R. Bruce
>> Room 2235,
>> Kemper Hall, UC Davis
>> Davis,
>> CA, 95616
>>
>> web: https://www.bobbybruce.net
>>
>>
>> On Sun, Aug 23, 2020 at 8:10 PM Gabe Black <[email protected]> wrote:
>>
>>> Adding Bobby and Giacomo specifically, since I see their fingerprints on
>>> the testing stuff.
>>>
>>> Gabe
>>>
>>> On Sun, Aug 23, 2020 at 8:03 PM Gabe Black <[email protected]> wrote:
>>>
>>>> At the risk of sounding overdramatic, this is a major problem. It used
>>>> to be difficult to see what happened when a test failed under the test
>>>> automation, but now it's impossible. As far as I can tell, the results
>>>> end up in a temp directory and are then wiped out without being copied
>>>> anywhere. I can't even determine what the test ran in order to run it
>>>> myself and see what happened.
>>>>
>>>> What code is supposed to copy the results out? I'm willing to help
>>>> figure this out, but I don't even know where to start.
>>>>
>>>> Gabe
>>>>
>>>> On Sun, Aug 23, 2020 at 6:00 PM Gabe Black <[email protected]>
>>>> wrote:
>>>>
>>>>> Hey folks, I'm trying to run tests on a change I'm working on, and the
>>>>> test infrastructure is not actually saving any test results where it 
>>>>> claims
>>>>> it will. I also tried selecting what ISAs to use, and after selecting all
>>>>> of them the run script claimed no tests would be run.
>>>>>
>>>>> Is this something other people are seeing that's just generally
>>>>> broken, or is there something out of whack on my machine?
>>>>>
>>>>> Gabe
>>>>>
>>>>
_______________________________________________
gem5-dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]