OK, this is coming along nicely, but there are some policy questions I
could use some input on. What I've done is lump the "run the test"
and "diff the outputs" steps into a single task from scons's
perspective, implemented as a Python function, so now we have a lot
more control over what happens.
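The combined step might look something like the following minimal sketch (plain Python, not the actual gem5/scons code; `run_and_diff`, its arguments, and the return convention are invented here for illustration):

```python
import difflib
import subprocess

def run_and_diff(cmd, out_path, ref_path):
    """Run a test command, capture its stdout to out_path, and diff it
    against the reference output in ref_path.

    Returns (exit_code, diff_lines); an empty diff means the outputs match.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    with open(out_path, "w") as f:
        f.write(result.stdout)
    with open(ref_path) as f:
        ref_lines = f.readlines()
    new_lines = result.stdout.splitlines(keepends=True)
    diff = list(difflib.unified_diff(ref_lines, new_lines,
                                     fromfile=ref_path, tofile=out_path))
    return result.returncode, diff
```

Folding both steps into one function like this gives the build system a single point at which to decide pass/fail and whether the task counts as complete.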
Sounds good to me. What should happen with a SIGSEGV? I'm thinking #2.
Ali
On Mar 7, 2009, at 3:15 PM, Steve Reinhardt wrote:
> OK, this is coming along nicely, but there are some policy questions I
> could use some input on. What I've done is lump the "run the test"
> and "diff the outputs" steps
1. Diff the outputs and set the pass/fail status based on the result.
2. Declare the test's status as failed regardless of the outputs, but
consider the job of running the test as completed successfully. The
test will not be re-run unless some dependency changes (like one that
causes the m5
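Option #2 could be expressed as a small helper along these lines (a hypothetical sketch; `classify_result` and its return convention are invented here, and the negative-return-code check relies on Python's `subprocess` convention of reporting death by signal N as return code -N):

```python
import signal

def classify_result(returncode):
    """Map a test run's exit status to (status, task_complete).

    Hypothetical sketch of option #2: a run killed by a signal such as
    SIGSEGV is FAILED outright, but the scons task still counts as
    complete, so the test is not re-run until a dependency changes.
    """
    if returncode < 0:            # subprocess convention: -N means signal N
        return ("FAILED", True)   # fail the test, but the task is done
    return ("NEEDS_DIFF", True)   # option #1: the output diff decides
```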
On Sat, Mar 7, 2009 at 3:51 PM, nathan binkert n...@binkert.org wrote:
- Is there any reason for SIGTERM et al (or some subset of them) to
cause #4 instead of #3?
Hard to say. I never use -k with tests (only when trying to get
through compiler stuff), but I generally hate when programs don't
OK, well I just unintentionally found that scons doesn't reliably
terminate on ^C even outside of the regressions (like when compiling
or doing the autoconf stuff), so sticking with the current plan is at
least no worse than that.
Really? I don't recall ever really having that problem.
Nate
On Sat, Mar 7, 2009 at 4:16 PM, nathan binkert n...@binkert.org wrote:
> OK, well I just unintentionally found that scons doesn't reliably
> terminate on ^C even outside of the regressions (like when compiling
> or doing the autoconf stuff), so sticking with the current plan is at
> least no worse than
I'm running into this with gem5 now (the problem with canceled or
aborted runs) and I think I've tracked down a possible explanation...
basically running the test and comparing the outputs are set up as two
separate tasks in scons, and the only thing the output comparison task
is dependent on is
I mentioned this earlier, but scons and regressions are misbehaving,
and it's making updating the regressions very annoying. If a run is
canceled halfway, scons now assumes it actually finished and was just
wrong. I have to go and manually delete the old, incomplete run before
it's willing to
This is what scons is doing when it's recompiling unnecessarily:

    makeDefinesPyFile([build/X86_SE/python/m5/defines.py],
        [{'ALPHA_TLASER': False, 'FAST_ALLOC_STATS': False,
          'FAST_ALLOC_DEBUG': False, 'USE_CHECKER': False,
          'SS_COMPATIBLE_FP': False, 'NO_FAST_ALLOC': False,
          'USE_FENV': True,
I've noticed that that command runs frequently when unnecessary, but
I'm surprised that it causes a problem. Are you pushing and popping
patches? Now that we're putting the repository version into the
binary, when you push and pop, this changes. I have noticed that this
runs sometimes even if
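One general way to keep a frequently re-run generator action from triggering downstream rebuilds is to avoid touching the output file when its contents haven't changed. A hypothetical sketch (`write_if_changed` is invented here; depending on the Decider scons is configured with, its own content-signature machinery may already make this unnecessary):

```python
import os

def write_if_changed(path, new_text):
    """Rewrite path only when its contents actually differ.

    Leaving an unchanged file untouched keeps its timestamp stable, so a
    timestamp-based dependency checker won't rebuild targets that depend
    on it just because the generator action ran again.
    """
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == new_text:
                return False   # unchanged; don't touch the file
    with open(path, "w") as f:
        f.write(new_text)
    return True
```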