Processed: How to handle FTBFS bugs in release architectures

2019-07-24 Thread Debian Bug Tracking System
Processing commands for cont...@bugs.debian.org:

> retitle 932795 How to handle FTBFS bugs in release architectures
Bug #932795 [tech-ctte] Ethics of FTBFS bug reporting
Changed Bug title to 'How to handle FTBFS bugs in release architectures' from 
'Ethics of FTBFS bug reporting'.
> thanks
Stopping processing here.

Please contact me if you need assistance.
-- 
932795: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=932795
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems



Bug#932795: How to handle FTBFS bugs in release architectures

2019-07-24 Thread Santiago Vila
retitle 932795 How to handle FTBFS bugs in release architectures
thanks

On Wed, Jul 24, 2019 at 12:05:45PM +0100, Simon McVittie wrote:
> I don't think framing this as a question of ethics is necessarily
> helpful. When people disagree on a technical question, a recurring
> problem is that both "sides" end up increasingly defensive, arguing
> from an entrenched position, and unwilling to be persuaded. Using terms
> that the other "side" is likely to interpret as an accusation of being
> unethical seems likely to exacerbate this.

Ok, I agree.

Thanks a lot.



Bug#932795: Ethics of FTBFS bug reporting

2019-07-24 Thread Simon McVittie
On Tue, 23 Jul 2019 at 13:54:10 +0200, Santiago Vila wrote:
> Ethics of FTBFS bug reporting

I don't think framing this as a question of ethics is necessarily
helpful. When people disagree on a technical question, a recurring
problem is that both "sides" end up increasingly defensive, arguing
from an entrenched position, and unwilling to be persuaded. Using terms
that the other "side" is likely to interpret as an accusation of being
unethical seems likely to exacerbate this.

> I reported this bug:
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=907829
> 
> and it was downgraded on the basis that the official autobuilders
> are multi-core.

Do I understand correctly that you are asking the TC to exercise our
power to overrule developers, in order to overrule the maintainer's
and/or the release team's judgement about the severity of (bugs like)
#907829?

Or are you asking the TC for advice, or are you asking us to use a
different one of the TC's powers?

> * The informal guideline which is being used, "FTBFS are serious if
> and only if they happen on buildd.debian.org", is not written anywhere
> and it's contradictory with Debian Policy, which says "it must be
> possible to build the package when build-essential and the
> build-dependencies are installed".

I had always interpreted the informal guideline as: FTBFS bugs on the official
buildds of release architectures are always serious, because they mean we
can't release the package; FTBFS bugs anywhere else (non-release architectures,
unofficial buildds, pbuilder, etc.) *might* be serious, but might not,
depending on how "reasonable" the build environment is.

There are many aspects of a build environment that might be considered
reasonable and might not, and they are generally evaluated on a
case-by-case basis. A working build environment needs "enough" RAM (a
lot more for gcc or WebKit than for hello); it needs "enough" disk space
(likewise); it needs a writeable /tmp; it needs a correctly-installed
Debian toolchain (I hope you wouldn't argue that it's RC if a package
FTBFS with a patched gcc in /usr/local/bin); it needs to be on a
case-sensitive filesystem (I hope you wouldn't argue that FTBFS on
FAT/NTFS/SMB would be release-critical); it needs to not have weird
LD_PRELOAD hacks subverting its expectations; and so on.

We also have packages that FTBFS (usually as a result of test failures)
when built as uid 0, when built as gid 0, when built with not enough
CPUs, when built with too many CPUs (a lot of race conditions become more
obvious with make -j32), when built in a time zone 13 hours away from UTC,
when built on filesystems that don't provide the FIEMAP ioctl, when built
on filesystems that don't have sub-second timestamp resolution, and many
other failure modes. Clearly these are bugs. However, not all bugs are
equally serious. For example, we've managed to release the glib2.0 package
for years, despite it failing to build when you're uid 0 (a test fails
because it doesn't expect to be able to exercise CAP_DAC_OVERRIDE), because
we consider building as uid 0 to be at least somewhat unreasonable.
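
To make the CPU-count failure modes above concrete, here is a minimal,
purely hypothetical Makefile sketch (the gen.h/use.c/src names are
invented for illustration and are not taken from any package mentioned
in this bug). The first part only breaks under heavy parallelism; the
second only breaks on a single-CPU machine:

# Failure mode 1: a missing prerequisite that only bites under parallel
# make. With "make -j1" the prerequisites of "all" are built left to
# right, so gen.h happens to exist before use.o is compiled; with
# "make -j32" both rules can run at the same time and the build fails
# intermittently.
all: gen.h use.o

gen.h: gen.h.in
	cp gen.h.in gen.h

use.o: use.c   # BUG: use.c includes gen.h, but it is not declared here
	$(CC) -c use.c -o use.o

# Failure mode 2: the opposite case, which only shows up on a
# single-CPU machine because the derived parallelism level is zero.
NPROC := $(shell nproc)
JOBS  := $(shell expr $(NPROC) - 1)   # BUG: evaluates to 0 on one core

build:
	$(MAKE) -j$(JOBS) -C src   # GNU make rejects "-j0" as invalid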

If build-time test failures are always RC, however usual or unusual the
build environment, then one rational response would be for all maintainers
to disable build-time tests (if they are particularly conscientious,
they might open a wishlist bug, "should run tests", at the same time
as closing an RC bug like "FTBFS due to test failure when gid == 0"). I
don't think that is a desirable outcome. We are not building packages for
the sake of building them, but so that they can be used - which means we
should welcome efforts like build-time tests that improve our confidence
that the package is actually usable in practice, and not just buildable,
and try to avoid creating incentives to remove them.

For the specific question of whether a single CPU core is a "reasonable"
build environment, my answer at the moment is "I don't know".

> * Because this is a violation of a Policy "must" directive, I consider
> the downgrade to be a tricky way to modify Debian Policy without
> following the usual Policy decision-making procedure.

The wording of the serious severity is that it is a "severe" violation
of Debian Policy, which is qualified with "(*roughly*, it violates a
"must" or "required" directive)" (my emphasis). This suggests that there
can exist Policy "must" violations that are not RC.

The release team are the authority on what is and isn't RC: the fact that
serious bugs are normally RC is merely convention. However, I suspect
that the release team would not welcome being asked to add -ignore
tags to serious bugs that describe non-severe Policy "must" violations,
and would ask the package's maintainer to downgrade the bug instead.

> To illustrate why I think this guideline can't be universal, let's
> consider the case (as a "thought experiment") where we have a package
> which builds ok with "dpkg-buildpackage -A" and "dpkg-buildpackage -B"
> but FTBFS when built

Bug#932795: Ethics of FTBFS bug reporting

2019-07-24 Thread Adrian Bunk
On Wed, Jul 24, 2019 at 11:34:53AM +0200, Santiago Vila wrote:
>...
> This is a Makefile bug in gcc-8-cross, a package which would qualify
> as "big". The maintainer did not initially believe it was a real bug,
> maybe because he had built the package many times in the past and the
> bug had never happened to him.
> 
> See what the maintainer did afterwards:
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=928424
> 
> Would we say in this case that the package "requires" more than
> one CPU to build?

No one denies that these are bugs.

The actual problem is an ethical one: more specifically, you want your
"Please reproduce on a single-core machine and fix this" bugs to be RC
so that other people are forced to spend their time fixing them if they
want their packages to stay in testing.

> To me it seems like a bug which may happen to anybody, and the fact
> that it did not happen on buildd.debian.org yet is due to pure chance.
>...

This is not pure chance.

There are no single-core buildds for release architectures,
and there never will be.

gcc-8-cross is only built on architectures with strong autobuilders;
it is not built on ports architectures like hppa, where the build
would take weeks.

This is a bug, but the practical benefits of fixing it would be zero.

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Bug#932795: Ethics of FTBFS bug reporting

2019-07-24 Thread Santiago Vila
On Tue, Jul 23, 2019 at 01:30:58PM +0100, Ian Jackson wrote:

> I suggest the following approach:
> 
>  - Introduce the words "supported" and "reasonable".  So
> 
> Packages must build from source in any supported environment;
> they should build from source in any reasonable environment.
> 
>  - Provide a place to answer these questions:
> 
> What is a supported, or a reasonable, environment, is not
> completely defined, but here are some examples:
> 
> - An environment with only one cpu available is supported.
> - An environment with a working but non-default compiler
>   is reasonable but not supported.

I believe this is more complex than required, and will not necessarily
solve the problem.

Suppose we write clearly in Debian Policy that build environments with
one CPU are fully supported, and yet there are people who claim that
because buildds have several cores, it is wrong to report such bugs
as serious. We would be back at square one.

This is why I've described this problem as kafkaesque.

> On the point at issue, do these packages build in a cheap single-vcpu
> vm from some kind of cloud vm service? ISTM that this is a much
> better argument than the one you made, if the premise is true.

Sorry, I'm a little bit lost here. What exactly do you mean by "these
packages"?

All packages in Debian build ok on a single-CPU system, except the few
I have already reported over the last months and years, and maybe a few
I have not reported yet for lack of time. I estimate there are only a
handful of packages with bugs like this one.

Whether single-CPU machines are cheap or not depends mainly on the
amount of RAM, but contrary to what some people insinuate, my goal is
not, and never has been, to build packages on "cheap" machines, nor am I
complaining that packages do not build ok on "cheap" machines.

My complaint is that some packages do not build ok on machines which
are perfectly capable of building them, and yet we seem to be trying
to shift the blame and the responsibility to the end user for not
having a clone of buildd.debian.org.


Russ Allbery wrote:
> I'm rather dubious that it makes sense to *require* multiple cores to
> build a package for exactly the reason that Santiago gave: single-core VMs
> are very common and a not-very-exotic environment in which someone may
> reasonably want to make changes to a package and rebuild it.  But maybe
> I'm missing something that would make that restriction make sense.

Thanks a lot for this. No, I don't think you are missing anything.

> And it's possible that multi-core may be a reasonable requirement
> for that "heavy package" tier.

How would that become a requirement? Big packages (as in "packages
requiring a lot of RAM or a lot of disk") do not really need more than
one CPU to build merely because they are "big". In theory, using more
than one CPU should just make the build go faster, that's all.

Ideally, it should be up to the end user to decide whether they want the
package to build faster or not; I don't see it as something that needs
to be regulated by Policy.
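
For what it's worth, the standard machinery already leaves that choice
to the user: dpkg-buildpackage passes the requested level in
DEB_BUILD_OPTIONS as parallel=N (also settable with -jN), and
debian/rules can honour it with a fragment like the one below. This is
only a sketch, close to the example in Debian Policy's section on
DEB_BUILD_OPTIONS, not the rules file of any particular package:

# Honour the parallelism requested by the user instead of hard-coding
# one; if nothing is requested, the build simply stays serial.
ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
MAKEFLAGS += -j$(NUMJOBS)
endif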

A curious example which could be seen as a counter-example, but in my
opinion is not:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=924325

This is a Makefile bug in gcc-8-cross, a package which would qualify
as "big". The maintainer did not initially believe it was a real bug,
maybe because he had built the package many times in the past and the
bug had never happened to him.

See what the maintainer did afterwards:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=928424

Would we say in this case that the package "requires" more than
one CPU to build?

To me it seems like a bug which may happen to anybody, and the fact
that it did not happen on buildd.debian.org yet is due to pure chance.

It must be noted that many of the bugs I found while building on
single-CPU systems are really like the one in gcc-8-cross, and not
like the one in p4est. A few more examples:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=906623
  heimdal, FTBFS because of Makefile bug

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=923476
  webkit2gtk, FTBFS because of Makefile bug


Thanks.