While an interesting idea, I foresee two challenges to doing this...
The first is that it might turn an otherwise normal result into
something else, with no clear rule for when that happens. It makes a
judgement call that some level of testing is good or bad, which isn't
really a call an installer should be making.
The reason Kwalitee has metrics like this is that they aren't critical
in the scheme of things; it only looks for indicators, which may well
be wrong (hence the name Kwalitee). The Kwalitee of a module does not
prevent it from being installed. What makes 79 skips different from 80
skips? You need some clear distinction between the two states, not
just an arbitrary threshold (be it 50% or 80% or something else).
Now, 100% skips, THAT could potentially be interesting, or maybe
TODOs. But then I don't necessarily see why it would be worthy of a
different result code.
The second issue is that it has the potential to override the package
author. If the author felt that the tests weren't critical and could
be skipped quite happily, who are we to decide that he is wrong and
that the result is a problem, or anything other than 100% OK?
An interesting idea, but maybe one that is better as a Kwalitee metric?
Adam K
Tyler MacDonald wrote:
Adam,
I have one more edgey case I'd like to see on this list:
13. Tests exist, but fail to be executed.
There are tests, but the tests themselves aren't failing;
it's the build process that is failing.
14. Tests run, and some/all tests fail.
The normal FAIL case due to test failures.
15. Tests run, but hang.
Covers "non-skippable interactive question".
Covers infinite attempts to connect to network sockets.
Covers other such cases, if detectable.
Tests run, but >50% (or maybe >80%?) are skipped.
From what I've seen, the most common cause of this is that the
package is untestable with the current build configuration, e.g. you
needed to point it at a webserver or database or something to get
those tests to run. Apache::Test provides some hooks for autotesters
to configure themselves to test packages that use it; IMHO, setting
DBI_DSN etc. should be enough for packages that test against a
database.
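Roughly, a DBI_DSN-driven test file tends to look something like the
sketch below (my own minimal illustration, not lifted from any real
distribution); it skips everything when the variable isn't set, which
is exactly how a run ends up mostly-skipped on an unconfigured smoker:

    use strict;
    use warnings;
    use Test::More;
    use DBI;

    # No DBI_DSN in the environment: skip the whole file, which is
    # what produces the "mostly skipped" reports on unconfigured
    # smokers.
    plan skip_all => 'set DBI_DSN (and DBI_USER/DBI_PASS) to run these tests'
        unless $ENV{DBI_DSN};

    my $dbh = DBI->connect(
        $ENV{DBI_DSN}, $ENV{DBI_USER}, $ENV{DBI_PASS},
        { RaiseError => 0, PrintError => 0 },
    ) or plan skip_all => "cannot connect to $ENV{DBI_DSN}: $DBI::errstr";

    plan tests => 1;
    ok( $dbh->ping, 'connected to the database described by DBI_DSN' );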
I've been thinking a lot about how to properly autotest
database-driven packages lately. So far the best I've got is the
following (roughly sketched in code after the list):
- If a package depends on a particular DBD:: driver, and you have
the matching database server available, specify a DBI_DSN that uses that
driver.
- If a package depends on DBI, but no particular driver, try testing
it with every driver you can use, starting with SQLite since it does not
need a "live" database server.
In the case where a package supports multiple, but not all, database
driver backends, it would probably depend on DBI but "recommend" each
supported DBD:: driver in its META.yml, which would be covered by the
first bullet.
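For instance, a made-up Build.PL along these lines (module name and
version numbers purely illustrative) would produce a META.yml with
exactly that shape of dependency:

    use strict;
    use warnings;
    use Module::Build;

    Module::Build->new(
        module_name => 'Hypothetical::DB::Module',
        license     => 'perl',
        requires    => {
            'DBI' => '1.48',          # hard requirement
        },
        recommends  => {
            # any one of these is enough to actually use the module
            'DBD::SQLite' => '1.09',
            'DBD::mysql'  => '3.0002',
            'DBD::Pg'     => '1.43',
        },
    )->create_build_script;

The generated META.yml then carries both the hard DBI requirement and
the recommended DBD:: drivers, so an autotester's first rule can pick
whichever recommended driver it can serve.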
Cheers,
Tyler