On 27 March 2017 at 23:39, Alceu Rodrigues de Freitas Junior via cpan-testers-discuss <cpan-testers-discuss@perl.org> wrote:

> Instead of trying to set up the environment, isn't it possible to work
> around it and systematically try to identify the problem instead of
> causing it?
The problem is that CPAN and TAP::Harness have logic that sets the env var to =1 if it's not set; that's why setting the env var to =0 overrides that default. The trouble is that so much is broken that any other default would seriously hamper people's ability to install *anything* without hitting this problem, so the default path for end users very much must have this in place. But as testers, we should be exposing this issue to get it Actually Fixed, to limit the repercussions for things that happen outside the install path (for instance, at runtime).

Hence, a strategy I figured *might* work for testers is as follows:

1. Choose a target.
2. Install that target and its dependencies recursively into a local::lib with PERL_USE_UNSAFE_INC=1, just to get them installed and weed out other kinds of failures.
3. Make sure inc::Module::Install is not in @INC.
4. Then do a second pass over all the dists, in the installation order recorded in step 2, but with PERL_USE_UNSAFE_INC=0 to expose the breakage.

That way you maximise exposure of the problem without limiting your test base due to the shadow of failures hiding other failures. You get a full pass with the @INC-preserving behaviour, and so under differential analysis, the candidates with @INC-dependent failures should be clear (assuming, of course, that the env var and the reported @INC paths are factors the analysis can see and consider).

And the point of constraining each target to its own local::lib is to minimise the side effect of one dist magically making other dists pass (installing Module::Install, for instance, would instantly silence a large swathe of problems that need fixing, hence step 3).

> I'm thinking about a standard unit test that we could incorporate into
> CPAN::Reporter.

I can't think of any sane way to do this without simply running everything twice in different combinations. Even ambitious things like putting magic markers in @INC are likely to get hidden behind other problems.
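A rough shell sketch of those steps, assuming cpanm and local::lib are available. The target name, library path, and dependency order are hypothetical placeholders (in practice the order would come out of the first pass), and DRY_RUN=1 only prints the commands so the sketch can be inspected without touching anything:

```shell
#!/bin/sh
# Two-pass sketch: pass 1 installs with '.' in @INC, pass 2 re-tests
# everything in the same order with '.' removed, to expose the breakage.
DRY_RUN=1
TARGET="Some::Target"              # hypothetical target dist
LIB="$HOME/testlibs/some-target"   # one local::lib per target, for isolation

run() {
  # In dry-run mode, just print the command we would execute.
  if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# Pass 1: install target + deps recursively with unsafe @INC, to get them
# installed and weed out unrelated failures.
run env PERL_USE_UNSAFE_INC=1 cpanm --local-lib "$LIB" "$TARGET"

# Pass 2: re-test each dist in the recorded installation order, but with
# '.' stripped from @INC. (Dep::One / Dep::Two stand in for the real list.)
for dist in Dep::One Dep::Two "$TARGET"; do
  run env PERL_USE_UNSAFE_INC=0 cpanm --local-lib "$LIB" --test-only "$dist"
done
```

Diffing the results of the two passes is then what surfaces the @INC-dependent failures.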
Sometimes tests fail with no obvious reason, other than that a test simply died without doing anything, because "do" and "eval { require }" failures that nobody checks end up as silent errors.

--
Kent

KENTNL - https://metacpan.org/author/KENTNL