On Fri, 26 Jul 2019 at 19:05:50 +0200, Santiago Vila wrote:
> The practical implication of this is that we are currently forcing
> users to spend extra money if they want *assurance* that all the
> packages (and not just "most" of them) will build, which is a pity.

I have two counterpoints to that. Please bear in mind that I'm speaking
here as someone who has rebuilt a subset of a Debian derivative on
single-core "cloud" worker machines, and ran into what I think is the
same gcc build-system bug you encountered - so I'm not denying that
a bug exists, or claiming that it is only a theoretical concern.

The first is that we aren't really discussing whether it's a bug for
a package to fail to build on a single-CPU machine - I think everyone
involved agrees that it is. We're discussing whether it is or should be a
*release-critical* bug. Making bugs RC is a big hammer: it's saying that
if the bug cannot be fixed before the next Debian release, then removing
the package is preferable to keeping it with the bug still open. This
is not something we should do lightly: I for one would prefer to have a
version of gcc that can only be built on multi-core machines rather than
no version of gcc at all. The build-system bug that prevented gcc from
building successfully on a single-core system appears to be something
to do with Ada (in the derivative I work on, I was able to avoid it by
disabling Ada, because the derivative doesn't need that language), but I
think Debian as a project would also prefer a gcc with Ada support that
cannot be built on single-core systems over a gcc with no Ada support
that can.
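
For concreteness, here is a minimal sketch of that kind of workaround,
using upstream GCC's standard --enable-languages configure switch; the
source layout, install prefix and language list below are illustrative
assumptions, not what the derivative actually ships:

    # Illustrative sketch only: configure GCC without Ada, so the build
    # never reaches the Ada-specific step that failed on single-core
    # machines.  Paths and language list are assumptions for this example.
    mkdir build && cd build
    ../gcc/configure --prefix=/usr/local \
        --enable-languages=c,c++   # "ada" deliberately left out
    make -j"$(nproc)" && make install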

Analogously, Policy says packages should build reproducibly, and I think
everyone also agrees that it should be possible to cross-compile packages.
People open bugs (ideally with patches) when they find violations of
those desirable properties - but those bugs aren't RC, because we would
rather have a non-reproducible or non-cross-buildable package than
no package at all.

Sometimes a bug is prioritized lower than its reporter would like,
for example because the package's maintainer doesn't consider it to be
very important, because no solution is known and finding one is
expected to be difficult relative to the severity of the bug, or
because a solution is known but carries a level of regression risk or
release-team distraction that isn't appropriate for the current stage
of a freeze. In these situations, escalating the bug to RC in the hope
that it will force the maintainer or release team to re-prioritize it
is generally considered unconstructive.

My second counterpoint is that I don't think we can claim to be treating
"minimize the money spent by users who rebuild the archive" as an
important goal in general, and indeed I think it would be harmful if we
did. You could equally say that lots of other things we do are "forcing"
such users to spend extra money, including:

- building documentation at all
- building documentation from its real source code, rather than accepting
  prebuilt documentation from upstream tarballs
- running autoreconf, rather than accepting prebuilt Autotools goo from
  upstream tarballs
- building optimized code
- running build-time tests
- building various optional features
- having debug symbols
- having Haskell, Java, Ada, etc. toolchains and ecosystems
- having packages whose builds won't fit on a 20G or 50G disk
- having packages whose builds won't fit in 2G or 4G of RAM
- having more than one version of gcc
- having more than one version of LLVM
- updating packages regularly
- encouraging our users to exercise their Free Software rights, rather
  than just taking our binaries as-is
- exercising our Free Software right to modify and recompile, rather
  than just taking upstream binaries as-is

... but we do all of those things anyway, because we consider their
benefits to be greater than their costs.

    smcv
