Hi Simon,

> > 5) Possibly it makes also sense to allow GNULIB_TOOL_IMPL to be set to
> >    'sh+py'. In this case the script will make a full copy of the destination
> >    dir, run the shell implementation and the Python implementation on the
> >    two destination dirs, separately, and compare the results (again, both
> >    in terms of effects on the file system, as well as standard output).
> >    And err out if they are different.
> 
> Generally I'm happy to hear about speedups of gnulib-tool!  The plan
> sounds fine.  I think this step 5) is an important part to get
> maintainers to try the new implementation, and report failures that need
> to be looked into.  If there was a small recipe I can follow to get a
> diff that can be reported back, I would run it for a bunch of projects
> that I contribute to.

Thanks for your feedback, and for your offer to use this mode.
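
To give a concrete idea of such a recipe, here is a sketch. It assumes
the variable and value keep the names from point 5) above, and that your
package is normally refreshed with 'gnulib-tool --update'; the paths are
placeholders:

  # In a scratch copy of your package, with GNULIB_SRCDIR pointing at
  # a gnulib checkout:
  cd /tmp/myproject-copy
  GNULIB_TOOL_IMPL=sh+py $GNULIB_SRCDIR/gnulib-tool --update 2>&1 \
    | tee gnulib-tool.log
  # If the two implementations produce different files or different
  # output, gnulib-tool errs out; gnulib-tool.log is what to report back.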

> While a self-test suite for gnulib-tool would be nice, some real
> regression testing by attempting to build a bunch of real-world projects
> that rely on gnulib-tool may be simpler to realize.  If there is a CI/CD
> that builds ~30 different real-world projects (perhaps at known-good
> commits) and compares the output against an earlier known-good build,
> for each modification to gnulib-tool in gnulib, that would give good
> confidence to any change to gnulib-tool.

I guess we are thinking about slightly different things:

  * (A) I am thinking about
    - for P in { coreutils, gettext, ... }, taking a frozen(!) checkout of P
      and removing irrelevant source files (esp. all *.h, *.c, documentation, etc.),
    - taking a frozen(!) set of gnulib modules from a specific point in time,
    - and merely invoking gnulib-tool and comparing the generated files and
      stdout (see the sketch below).

  * (B) You seem to be thinking about
    - for P in { coreutils, gettext, ... }, taking the current git checkout of P
      (or the latest release of P),
    - taking the current set of gnulib modules,
    - and invoking not only gnulib-tool, but also './configure' and make.
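
To make (A) more concrete, a test for one package P could look roughly
like this. It is only a sketch; the directory names, the module list file
and the expected-output tree are invented for illustration:

  # Frozen inputs, committed alongside the test: a stripped checkout of P,
  # the list of modules to import, and the previously approved output.
  # GNULIB_SRCDIR points at a gnulib checkout pinned to a specific commit.
  rm -rf /tmp/test-A
  cp -a testdata/coreutils-stripped /tmp/test-A
  (cd /tmp/test-A \
   && $GNULIB_SRCDIR/gnulib-tool --import $(cat gnulib-modules.txt) \
        > gnulib-tool.stdout 2>&1)
  # Any difference in the generated files or in stdout is a regression;
  # an intentional change just means committing new expected results.
  diff -r -u testdata/coreutils-expected /tmp/test-A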

I think that
  - With either approach, the confidence in any change to gnulib-tool will be
    the same.
  - With approach (A), when we make a change to gnulib-tool, we need to commit
    new expected test results, which is quite easy. No effort otherwise.
  - With approach (B), we will get failures for other reasons as well: when
    a gnulib module has changed in an incompatible way; when the git repository
    of P has moved; when package P itself is broken. Sounds like a continuous
    effort to hunt down (mostly) false positives.

Bruno
