On 15 October 2017 at 02:34, James E Keenan <jkee...@pobox.com> wrote:
> Fellow CPANtesters,
>
> Greetings from the 2017 Perl 5 Core Hackathon in Amsterdam!  I am writing to
> you as a follow-up to a discussion which participants had this afternoon
> entitled "What Do We Want and Need from Smoke Testing?"
> (https://github.com/p5h/2017/wiki/What-Do-We-Want-and-Need-from-Smoke-Testing)
>
> One of the issues raised in the discussion was the extent to which we can do
> more to locate and address failures in tests of CPAN libraries due to
> changes in Perl 5 blead.
>
> Such failures are often referred to in the Perl 5 Porters world as "BBC"
> tickets -- where BBC means "Blead Breaks CPAN".  You probably know of the
> tremendous work which Andreas Koenig and Slaven Rezić have done for years in
> this area.  We value those efforts but were wondering if the CPANtesters
> infrastructure could be leveraged to provide new access to BBC data.
>
> Specifically, the following question:
>
> Would it be possible to generate a monthly report from data in the
> CPANtesters database(s), identifying all CPAN libraries:
>
> (i) which had not had a new version uploaded to CPAN in the past month; but
>
> (ii) which had experienced new breakage in their test suites in the past
> month -- breakage which could hypothetically be attributed to a change in
> the Perl 5 source code from one monthly development release to the next?

This phrasing seems to imply an assumption that *new* releases can't
be broken by Blead Perl.

While you can't reliably use "this software is new" as a signal that a
long-standing behaviour was broken, you can still use new releases to
identify things that Blead breaks.

What worries me most is that this approach leads to the presumption
that "a new release has been shipped recently, so it is no longer
likely to be broken" -- and that's just not true.

For instance, when '.' was removed from @INC, dozens of people shipped
"fixes" that were still broken once shipped, and thus it would have
been erroneous to exclude those dists from the BBC testing sample
simply because they shipped a release before somebody started a BBC
run.

The only way to know whether version X of package Y is broken against
perl Z is to build it against perl Z.

The only way to know whether package Y is broken on perl Z is to test
the latest release of package Y on perl Z, and then, when it fails,
ask:

- Which version of package Y starts failing on perl Z?
- Which version of perl causes package Y to fail?
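To make those two questions concrete, here is a minimal sketch (in
Python, with entirely invented version numbers and pass/fail data) of
answering them by scanning a matrix of build results. In a real BBC
run each cell would come from actually building the dist against that
perl, not from a hard-coded table:

```python
# Hypothetical sketch; `results` maps (package_version, perl_version)
# to True if the test suite passed. All data below is invented.

def first_failing_package_version(results, versions, perl):
    """Which version of package Y starts failing on perl Z?
    `versions` must be ordered oldest to newest."""
    for v in versions:
        if not results[(v, perl)]:
            return v
    return None

def first_breaking_perl(results, version, perls):
    """Which perl causes version X of package Y to fail?
    `perls` must be ordered oldest to newest."""
    for p in perls:
        if not results[(version, p)]:
            return p
    return None

# Invented sample: version 1.2 of the dist fails only on blead.
results = {
    ("1.1", "5.24"): True, ("1.1", "5.26"): True, ("1.1", "blead"): True,
    ("1.2", "5.24"): True, ("1.2", "5.26"): True, ("1.2", "blead"): False,
}

print(first_failing_package_version(results, ["1.1", "1.2"], "blead"))  # → 1.2
print(first_breaking_perl(results, "1.2", ["5.24", "5.26", "blead"]))   # → blead
```

The point of the sketch is only that both questions require builds
across the whole matrix, not just the newest release.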

If somebody ships version X+1 of package Y and it fails on perl Z,
while version X does not fail on perl Z, it *might* be the case that Y
broke against perl and the "fault" lies in the change between X and
X+1.

However, if you take the perspective that version X+1 of package Y was
targeted at perl Z-1, and perl Z was not yet released, so it is not
plausible that the author of Y tested against perl Z, that *might* be
grounds for arguing that it was perl that broke package Y, not the
change between X and X+1.
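Under that argument, the attribution boils down to comparing release
dates. A toy sketch (the function name and all dates are invented for
illustration; this is not how any real BBC tooling decides):

```python
from datetime import date

# Toy attribution heuristic for the argument above. If version X+1 of
# package Y shipped before perl Z existed, its author cannot plausibly
# have tested against perl Z, so a new failure there is arguably
# perl's breakage rather than the package's.

def likely_culprit(pkg_shipped: date, perl_shipped: date) -> str:
    """Return which side plausibly 'owns' a new failure of Y on perl Z."""
    if pkg_shipped < perl_shipped:
        return "perl"     # Y targeted perl Z-1; perl Z didn't exist yet
    return "package"      # Y shipped after perl Z, so the author could test

print(likely_culprit(date(2017, 3, 1), date(2017, 5, 30)))  # → perl
print(likely_culprit(date(2017, 7, 1), date(2017, 5, 30)))  # → package
```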

This logic requires you to think of perl versions like branches in
git, where time doesn't flow normally: even though given versions of
perl exist from the perspective of P5P and the BBC testers, those
versions of perl do *not* exist from the perspective of CPAN authors,
as evidenced by how the majority of bugs don't even get filed/fixed
until after 5.<even> ships.

Thus, from at least one perspective, *users* of that module may see no
warning that their package is broken until after a major perl release
ships, which is just not acceptable.

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL
