Adam,
        I have one more edge case I'd like to see on this list:

> 
> 13. Tests exist, but fail to be executed.
>     There are tests, but the tests themselves aren't failing.
>     It's the build-process that is failing.
> 
> 14. Tests run, and some/all tests fail.
>     The normal FAIL case due to test failures.
> 
> 15. Tests run, but hang.
>     Covers "non-skippable interactive question".
>     Covers infinite attempts to connect network sockets.
>     Covers other such cases, if detectable.

        Tests run, but >50% (or maybe >80%?) are skipped. 

        From what I've seen, the most common cause of this is that the
package is untestable with the current build configuration, e.g. you need
to specify a webserver or database or something to get the tests to run.
Apache::Test provides some hooks for autotesters to configure themselves to
test packages that use it; IMHO, setting DBI_DSN etc. should be enough for
packages that test against a database.
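        As a sketch, the environment an autotester might set up before
running "make test" could look like the following (the SQLite DSN and
filename here are just illustrative choices; DBI_DSN, DBI_USER and
DBI_PASS are the standard DBI environment variables):

```shell
# Point DBI-using test suites at a throwaway SQLite database;
# no live server or credentials needed. An autotester would export
# these and then run: perl Makefile.PL && make && make test
export DBI_DSN='dbi:SQLite:dbname=autotest.db'
export DBI_USER=''
export DBI_PASS=''
```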

        I've been thinking a lot about how to properly autotest
database-driven packages lately. So far the best I've got is:

        - If a package depends on a particular DBD:: driver, and you have
the matching database server available, specify a DBI_DSN that uses that
driver.

        - If a package depends on DBI, but no particular driver, try testing
it with every driver you can use, starting with SQLite since it does not
need a "live" database server.
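        Those two rules could be sketched roughly like this (pure
illustration: the helper name, DSN templates and driver list are all my
own assumptions, not anything DBI provides):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical DSN templates an autotester might maintain for the
# database servers it has available locally.
my %dsn_template = (
    SQLite => 'dbi:SQLite:dbname=autotest.db',
    mysql  => 'dbi:mysql:database=autotest',
    Pg     => 'dbi:Pg:dbname=autotest',
);

# Pick a DBI_DSN for a package, given the DBD:: drivers it declares as
# dependencies and the drivers installed locally.
sub pick_dsn {
    my ($deps, $available) = @_;    # array refs of driver names
    my %avail = map { $_ => 1 } @$available;

    # First rule: a specific DBD:: dependency with a matching local
    # driver wins.
    for my $drv (@$deps) {
        return $dsn_template{$drv} if $avail{$drv};
    }

    # Second rule: plain DBI dependency -- try SQLite first, since it
    # needs no live server, then any other driver we have.
    for my $drv ( 'SQLite', @$available ) {
        return $dsn_template{$drv} if $avail{$drv};
    }
    return;    # untestable: no usable driver
}

# Package depends only on DBI; SQLite and Pg are installed locally.
print pick_dsn( [], [ 'Pg', 'SQLite' ] ), "\n";
```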

        In the case where a package supports multiple, but not all, database
backends, it would probably depend on DBI but "recommend" each supported
DBD:: driver in its META.yml, which the first bullet would cover.
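        Such a package's META.yml might carry something like this (a
made-up fragment; the version numbers and driver list are just for
illustration):

```yaml
# Hypothetical META.yml fragment: DBI itself is required, while the
# supported DBD:: backends are only recommended.
requires:
  DBI: 1.30
recommends:
  DBD::SQLite: 0
  DBD::mysql: 0
  DBD::Pg: 0
```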

        Cheers,
                Tyler
