On Sun, Oct 31, 2004 at 09:12:49PM -0500, Ed Allen Smith wrote:
> >I don't think it can be usefully automated.
> >
> >Consider this simple sort of problem.
> >
> >    if( eval { require Some::Module } ) {
> >        * one version of the code *
> >    }
> >    else {
> >        * some other version *
> >    }
> >
> >The code still works fine even without the module in question being there.
> 
> Umm... is this likely to be tested for with a core module?  And if
> testing for a core module is present, then the library module doing so would
> appear to be prepared for said core module not being present.  That
> admittedly mainly says "default to assuming not required if uncertain".
Who knows?  You can't tell without a human examining the code.

> Urr... yes, wasn't thinking, sorry.  It should be Test::Smoke using such a
> file, or internal data, to know what test failures to not automatically
> report as problematic, just as it now doesn't treat tests not confirmed by a
> harnessed failure as being cause to send to P5P (if so configured).

All test failures are problematic.  Do not start ignoring failures.
That way madness lies.  It's better in the long run to encode the
proper skips and todos.

-- 
Michael G Schwern     [EMAIL PROTECTED]     http://www.pobox.com/~schwern/
If I got anything to say, I'll say it with lead.
    -- Jon Wayne
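A minimal sketch of what "encode the proper skips and todos" looks like in
practice, using the core Test::More module.  `Some::Module` here stands in
for the optional dependency from the example above; the test name strings
are illustrative, not from any real test suite.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# An optional dependency is declared explicitly with a SKIP block:
# the harness records the test as skipped, not failed, on machines
# where the module is absent.
SKIP: {
    eval { require Some::Module };
    skip "Some::Module not installed", 1 if $@;

    ok( Some::Module->can('new'), 'Some::Module loaded and usable' );
}

# A known-broken or unfinished case is declared with a TODO block:
# its failure is reported by the harness but does not make the run
# unsuccessful, so it never needs to be silently ignored.
TODO: {
    local $TODO = 'feature not implemented yet';
    ok( 0, 'placeholder for the unimplemented feature' );
}
```

Either way the information lives in the test file itself, where
Test::Smoke and every other harness consumer can see it, rather than in a
side list of failures to disregard.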