Quoting "Roberto Longobardi" <secc...@gmail.com>:
The first is a true test case manager plugin, since I couldn't find
anything suitable before, and my users wanted to have test case
management and bug tracking integrated in the same place:
http://trac-hacks.org/wiki/TestManagerForTracPlugin

OK, here are my first observations after a short test:

 - This testing framework is much closer to my personal needs
   than other, similar plugins I have tried in the past. Good!

 - It would be nice if a test case result were not recorded
   immediately on mouse click, but only after pressing a
   "save result" button. It is too easy to click the coloured
   bullet accidentally.

 - Maybe the "Status change history" should be reversed,
   showing the latest (= most important) result first.

 - This is only a matter of taste, but I'm used to different
   terminology: a "test catalog" for one project would be a
   "test suite" (TS), a sub-catalog (or sub-sub-catalog) would
   be a "test group" (TG), and a single test would be a
   "test case" (TC).

 - When one creates a ticket from a (failed) test, it would
   be nice if the new ticket already defaulted to a useful
   subject ("Failed test: Basic Sleep - Sleeping Monster").
   This can easily be achieved with a link of the form
   .../newticket?summary=Failed%20test:%20Basic%20Sleep...
   (see the small URL-building sketch after this list).

 - It would be cool if one could define the set of possible
   verdicts. You have "Successful", "Untested", and "Failed",
   which is OK for many purposes. But if you test according
   to one of the many standards, you might have different
   needs. E.g. ISO-9646 has five verdicts:

   None = no result yet, untested
   Pass = good, successful
   Inconc = inconclusive, unclear
   Fail = bad, failed
   Error = there was an error in performing the test

   If you test according to POSIX 1003.3, you have these verdicts:

   PASS = good, successful
   FAIL = bad, failed
   UNRESOLVED = inconclusive, unclear
   UNTESTED = no result yet, untested

   Some testing frameworks add XFAIL (= expected fail), UPASS
   (= unexpected pass) and UNSUPPORTED (= the implementation
   under test doesn't support a feature) to the POSIX verdicts.
   (A rough sketch of a configurable verdict set follows this
   list.)

 - In the long run, it would be useful (but not trivial to
   implement) to differentiate between the definition of the
   tests and running a test campaign. An example: I have a
   software product, a plugin for Trac. Before doing a release
   "1.0" I have to perform some interactive tests. Therefore
   I start a new test campaign for svn revision 1234, with
   all test verdicts set to "Untested". Let's say one or two
   tests fail; I conclude that the test campaign was
   unsuccessful. Now I have to fix the bugs and start a new
   test campaign, again with all test verdicts set to
   "Untested". (A rough sketch of such a campaign model is
   given below, after the list.)
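
As a small illustration of the pre-filled ticket subject
suggested above, here is a rough Python sketch of how such a
link could be built. The base URL and the test case title are
made-up examples, not anything the plugin actually exposes:

    from urllib.parse import quote

    # Made-up Trac base URL and example test case title.
    base = "http://example.com/trac/newticket"
    title = "Basic Sleep - Sleeping Monster"
    summary = "Failed test: " + title

    # Percent-encode the summary so spaces, colons etc. are
    # safe inside the query string.
    link = base + "?summary=" + quote(summary)
    print(link)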
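
And a purely hypothetical sketch of what a configurable
verdict set could look like; the plugin has no such option
that I know of, and the names, meanings and colours below are
only illustrative:

    # Hypothetical: verdict sets the plugin could read from its
    # configuration instead of hard-coding Successful/Untested/Failed.
    # Values are (meaning, display colour); the colours are invented.
    ISO_9646_VERDICTS = {
        "None":   ("no result yet, untested", "grey"),
        "Pass":   ("good, successful", "green"),
        "Inconc": ("inconclusive, unclear", "yellow"),
        "Fail":   ("bad, failed", "red"),
        "Error":  ("error in performing the test", "purple"),
    }

    POSIX_1003_3_VERDICTS = {
        "PASS":       ("good, successful", "green"),
        "FAIL":       ("bad, failed", "red"),
        "UNRESOLVED": ("inconclusive, unclear", "yellow"),
        "UNTESTED":   ("no result yet, untested", "grey"),
    }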
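
Finally, a rough sketch of how the "test campaign" idea could
be modelled, again only as an illustration under my own
assumptions and not the plugin's actual data model:

    # Hypothetical model: test definitions live in TestCase,
    # verdicts live in a campaign (one run against one revision).
    class TestCase:
        def __init__(self, title):
            self.title = title  # definition only, no verdict here

    class TestCampaign:
        def __init__(self, name, revision, test_cases):
            self.name = name            # e.g. "Release 1.0"
            self.revision = revision    # e.g. svn revision 1234
            # Every new campaign starts with all verdicts "Untested".
            self.verdicts = {tc.title: "Untested" for tc in test_cases}

        def record(self, title, verdict):
            self.verdicts[title] = verdict

        def is_successful(self):
            # The campaign succeeds only if every test passed.
            return all(v == "Successful" for v in self.verdicts.values())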

Keep up the good work, thanks!
