The current ltp/testcases/realtime tests belong to one of func, perf, or 
stress.  While strict pass/fail criteria make sense for functional tests 
(did the tasks wake up in priority order?), the perf and stress tests 
compare "arbitrary" threshold values against whatever is being measured 
(wakeup latency, etc.) to determine pass/fail.  Ideally the tests 
themselves would not determine the pass/fail criteria, and would instead 
simply report on their measurements since the criteria will vary in 
every use-case based on requirements, workload, hardware, etc.

I'd like to propose an approach where the tests only report their 
measured values (with the exception of the func/* tests which will 
maintain their pass/fail criteria).  Users should be able to populate a 
criteria.conf file that specifies the criteria for each test.  The 
test output could then be parsed, compared against those criteria, and 
a pass/fail determined from there.  I suspect it would be best for the .c 
tests to just report the numbers and the statistics in a common format 
and rely on python parser scripts to read the config file and determine 
pass/fail from there.
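As a rough sketch of what the parsing side could look like (nothing here 
is settled: the criteria.conf format, section names, metric names, and 
"max_<metric>" key convention below are all invented for illustration):

```python
# Hypothetical sketch of the proposed flow: tests emit raw measurements
# in a common format, criteria.conf supplies per-test thresholds, and a
# small Python parser decides pass/fail.  All names are placeholders.
import configparser

# Example criteria.conf: one section per test, upper bounds in usecs.
CRITERIA_CONF = """
[sched_latency]
max_avg_latency = 100
max_max_latency = 500

[periodic_cpu_load]
max_avg_latency = 250
"""

def load_criteria(text):
    """Parse criteria.conf text into {test: {key: float}}."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return {sec: {k: float(v) for k, v in cp.items(sec)}
            for sec in cp.sections()}

def evaluate(test, measurements, criteria):
    """Return (passed, failures) for one test's reported measurements.

    measurements maps metric names (e.g. "avg_latency") to the values
    the test reported; criteria maps "max_<metric>" keys to bounds.
    A test with no criteria configured trivially passes.
    """
    limits = criteria.get(test, {})
    failures = []
    for key, bound in limits.items():
        metric = key[len("max_"):]
        value = measurements.get(metric)
        if value is not None and value > bound:
            failures.append((metric, value, bound))
    return (not failures, failures)

if __name__ == "__main__":
    criteria = load_criteria(CRITERIA_CONF)
    ok, fails = evaluate("sched_latency",
                         {"avg_latency": 80.0, "max_latency": 620.0},
                         criteria)
    print("PASS" if ok else "FAIL", fails)
```

The nice property of this split is that the .c tests stay dumb reporters; 
tightening or loosening a deadline for a given board or kernel config is 
just an edit to criteria.conf, with no recompile.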

I'd like users' thoughts on this approach before we jump in and start 
changing things (as this is a fairly invasive change).

-- 
Darren Hart
IBM Linux Technology Center
Real-Time Linux Team
