Hi Joep,

> [...] And I once read that testers are as forgiving as bridge card game
> players ;)


Ah... Not reassuring :)

> The first issue could be addressed by defining the libraries that are
> tested with a decent coverage with the lib and not rely on the include
> statements.


OK. Coverage can be computed (overkill? Not so sure, keep reading...)

> The second issue can be dealt with in a procedural way: when it does
> not compile, it clearly fails. But when it does, someone always has to
> check proper working anyway (unlike you, I don't expect automated
> testing to become a real option for us in the near future. But you
> proved me wrong before).

> When it is not working properly, the tester
> must estimate if the failure is so significant that the library
> fails. As an alternative we could add categories: pass, minor, major
> or fail.


That would be ideal, but needs more work, as it's open to discussion
(what counts as minor, major, pass?). For me, a test either passes or it
doesn't. The problem is that multiple libs can be used at the same time in a
test, but only one of them is the one actually being tested. There are also
dependencies between tests: if test_serial_hardware does not work, all tests
using it as an underlying lib (not the one actually being tested) will fail.
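That dependency rule could be captured with a small script on the results. Here's a minimal sketch (test names, lib names and the data layout are all made up for illustration): a failing test is only counted against its target lib if none of its underlying libs have a failing test of their own.

```python
# Hypothetical sketch: distinguish a lib's own failure from a failure
# inherited through a dependency. All names here are invented examples.

# Each test targets one lib, but may pull in other libs as dependencies.
tests = {
    "test_serial_hardware": {"target": "serial_hardware", "deps": [], "passed": False},
    "test_lcd":             {"target": "lcd", "deps": ["serial_hardware"], "passed": False},
    "test_delay":           {"target": "delay", "deps": [], "passed": True},
}

def verdict(name):
    """Return 'pass', 'fail', or 'blocked' (an underlying lib's own test failed)."""
    t = tests[name]
    if t["passed"]:
        return "pass"
    # If any test targeting one of our deps failed, the failure is inherited:
    # it says nothing (yet) about the lib this test was actually exercising.
    for other in tests.values():
        if other["target"] in t["deps"] and not other["passed"]:
            return "blocked"
    return "fail"
```

With the example data above, test_lcd comes out "blocked" rather than "fail", because test_serial_hardware itself failed, so the lcd lib isn't blamed for it.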


More thoughts... I can now see two kinds of tests:

 1. tests which are run on real hardware.
 2. tests which are run in a simulator.

The first kind is what we have now. It allows us to say: "this test, and the
libs it uses, work on this PIC. The libs used may not be heavily tested,
though, but hey, it works!"

The second kind is what I'd like to have :) PICShell has unit testing
capabilities. I don't know how reliable it is, but I'm quite sure it could
add value to our tests. I also don't know if coverage is implemented, but
since there's a close relation between asm and jal code execution, I'm
confident it's feasible. These tests could be automated.

And the good news is both can be integrated in the same test file (special
tags in comments, IIRC, for PICShell's unit testing).

This is just a thought, for the long term. Maybe it will never happen, I
don't know. But I find myself thinking about PICShell quite a lot, so I'll
contact the author, Olivier, for more.


For now, back to reality :) What should I put in the results as columns?
Libraries? The tests themselves? Neither? I'd say either:

 * libraries, using what I described before (when a lib is used in at least
one successful test, mark it as "good").
 * or neither. That is, just put a testing meter, and a link to a detailed
page showing all the performed tests.
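The first option ("good" as soon as a lib appears in one successful test) is easy to compute from the raw results. A minimal sketch, with invented lib names and an invented result format:

```python
# Hypothetical sketch of the "libraries" column: a lib is marked "good"
# as soon as it is used in at least one successful test. Lib names and
# the results format below are made up for illustration.

results = [
    # (libs used by the test, did the test pass?)
    (["serial_hardware", "delay"], True),
    (["lcd", "delay"], False),
]

status = {}
for libs, passed in results:
    for lib in libs:
        if passed:
            # One successful test is enough to mark the lib "good".
            status[lib] = "good"
        else:
            # Only record a non-good status if nothing better exists yet.
            status.setdefault(lib, "untested/failing")
```

Note that "delay" ends up "good" here even though one test using it failed; that's exactly the rule as described, and whether that's too optimistic is part of the open question.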



Cheers,
Seb
-- 
Sébastien Lelong
http://www.sirloon.net
http://sirbot.org

You received this message because you are subscribed to the Google Groups
"jallib" group.