Bug#932795: Ethics of FTBFS bug reporting

2019-07-28 Thread Ian Jackson
Simon McVittie writes ("Bug#932795: Ethics of FTBFS bug reporting"):
> For the specific question of whether a single CPU core is a "reasonable"
> build environment, my answer at the moment is "I don't know".

There are two issues here:

1. "Is a 1-cpu system `reasonable' or `supported'" (or whatever)

2. We don't have anywhere to write down the answer to this kind of
question.  (That there was nowhere to write down the answer, in
particular nowhere in policy, is I think a large underlying cause of
why Santiago is frustrated: it means that the answer isn't written
down despite being quite relevant.)

I think most people here would be happy to have (1) decided (one way
or the other) by the release team.  Solving (2) seems like a job for
the -policy list, since it's mostly wordsmithing.

The detailed questions you ask downthread seem like good questions to
be asking to help answer (1), but maybe now would be a good time to
bring in -release?

Ian.



Bug#932795: Ethics of FTBFS bug reporting

2019-07-25 Thread Adrian Bunk
On Thu, Jul 25, 2019 at 09:16:53AM +0100, Colin Watson wrote:
> On Tue, Jul 23, 2019 at 06:11:04PM +0300, Adrian Bunk wrote:
> > On Tue, Jul 23, 2019 at 01:30:58PM +0100, Ian Jackson wrote:
> > > Santiago Vila writes ("Bug#932795: Ethics of FTBFS bug reporting"):
> > >...
> > > On the point at issue, do these packages build in a cheap single-vcpu
> > > vm from some kind of cloud vm service ?  ISTM that this is a much
> > > better argument than the one you made, if the premise is true.
> > >...
> > > - An environment with only one cpu available is supported.
> > >...
> > 
> > - An environment with at least 16 GB RAM is supported.
> > 
> > Not sure about the exact number, but since many packages have 
> > workarounds for gcc or ld running into the 4 GB address space
> > limit on i386 it is clear that several packages wouldn't build
> > in an amd64 vm with only 8 GB RAM.
> 
> I may be missing something, but I'm not totally sure how that follows.
> 
> For what limited amount it's worth, the build VMs used on the Launchpad
> build farm to build Ubuntu uniformly have (IIRC) 8GB RAM, 4GB swap, and
> 60GB disk, and this largely seems to be fine.

That's 12 GB RAM+swap, and since this works in practice my guess of
16 GB was too high.

>...
> > - An environment with at least 75 GB free diskspace is supported.
> > 
> > We do have at least one package in the archive that contains some 
> > hacks for staying inside the 75 GB diskspace available on the amd64 
> > buildds, and couldn't be built in a vm with even less diskspace.
> 
> Out of interest, which package is that?

insighttoolkit4

The Ubuntu patch sacrifices debug info for building on lower-end buildds.
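(I have not checked the actual Ubuntu patch; one common shape for this
kind of workaround, sketched hypothetically here, is trimming debug info
via the dpkg-buildflags maintainer hooks in debian/rules:)

    # debian/rules (hypothetical excerpt)
    # -g1 emits much less debug info than the default -g, cutting the
    # disk footprint of object files and the memory needed at link time
    export DEB_CFLAGS_MAINT_APPEND = -g1
    export DEB_CXXFLAGS_MAINT_APPEND = -g1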

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Bug#932795: Ethics of FTBFS bug reporting

2019-07-25 Thread Colin Watson
On Tue, Jul 23, 2019 at 06:11:04PM +0300, Adrian Bunk wrote:
> On Tue, Jul 23, 2019 at 01:30:58PM +0100, Ian Jackson wrote:
> > Santiago Vila writes ("Bug#932795: Ethics of FTBFS bug reporting"):
> >...
> > On the point at issue, do these packages build in a cheap single-vcpu
> > vm from some kind of cloud vm service ?  ISTM that this is a much
> > better argument than the one you made, if the premise is true.
> >...
> > - An environment with only one cpu available is supported.
> >...
> 
> - An environment with at least 16 GB RAM is supported.
> 
> Not sure about the exact number, but since many packages have 
> workarounds for gcc or ld running into the 4 GB address space
> limit on i386 it is clear that several packages wouldn't build
> in an amd64 vm with only 8 GB RAM.

I may be missing something, but I'm not totally sure how that follows.

For what limited amount it's worth, the build VMs used on the Launchpad
build farm to build Ubuntu uniformly have (IIRC) 8GB RAM, 4GB swap, and
60GB disk, and this largely seems to be fine.  (We could in principle
raise any of these limits, but to keep build dispatch logic simple we
strongly prefer all the VMs to be uniform, and so more per-builder
resources may mean running fewer builders.  At the moment this trade-off
point seems to be working well enough.)

> - An environment with at least 75 GB free diskspace is supported.
> 
> We do have at least one package in the archive that contains some 
> hacks for staying inside the 75 GB diskspace available on the amd64 
> buildds, and couldn't be built in a vm with even less diskspace.

Out of interest, which package is that?

-- 
Colin Watson   [cjwat...@debian.org]



Bug#932795: Ethics of FTBFS bug reporting

2019-07-24 Thread Simon McVittie
On Tue, 23 Jul 2019 at 13:54:10 +0200, Santiago Vila wrote:
> Ethics of FTBFS bug reporting

I don't think framing this as a question of ethics is necessarily
helpful. When people disagree on a technical question, a recurring
problem is that both "sides" end up increasingly defensive, arguing
from an entrenched position, and unwilling to be persuaded. Using terms
that the other "side" is likely to interpret as an accusation of being
unethical seems likely to exacerbate this.

> I reported this bug:
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=907829
> 
> and it was downgraded on the basis that the official autobuilders
> are multi-core.

Do I understand correctly that you are asking the TC to exercise our
power to overrule developers, in order to overrule the maintainer's
and/or the release team's judgement about the severity of (bugs like)
#907829?

Or are you asking the TC for advice, or are you asking us to use a
different one of the TC's powers?

> * The informal guideline which is being used, "FTBFS are serious if
> and only if they happen on buildd.debian.org", is not written anywhere
> and it contradicts Debian Policy, which says "it must be
> possible to build the package when build-essential and the
> build-dependencies are installed".

I had always interpreted the informal guideline as: FTBFS on the official
buildds of release architectures are always serious, because they mean we
can't release the package; FTBFS anywhere else (non-release architectures,
unofficial buildds, pbuilder, etc.) *might* be serious, but might not,
depending on how "reasonable" the build environment is.

There are many aspects of a build environment that might be considered
reasonable and might not, and they are generally evaluated on a
case-by-case basis. A working build environment needs "enough" RAM (a
lot more for gcc or WebKit than for hello); it needs "enough" disk space
(likewise); it needs a writable /tmp; it needs a correctly-installed
Debian toolchain (I hope you wouldn't argue that it's RC if a package
FTBFS with a patched gcc in /usr/local/bin); it needs to be on a
case-sensitive filesystem (I hope you wouldn't argue that FTBFS on
FAT/NTFS/SMB would be release-critical); it needs to not have weird
LD_PRELOAD hacks subverting its expectations; and so on.

We also have packages that FTBFS (usually as a result of test failures)
when built as uid 0, when built as gid 0, when built with not enough
CPUs, when built with too many CPUs (a lot of race conditions become more
obvious with make -j32), when built in a time zone 13 hours away from UTC,
when built on filesystems that don't provide the FIEMAP ioctl, when built
on filesystems that don't have sub-second timestamp resolution, and many
other failure modes. Clearly these are bugs. However, not all bugs are
equally serious. For example, we've managed to release the glib2.0 package
for years, despite it failing to build when you're uid 0 (a test fails
because it doesn't expect to be able to exercise CAP_DAC_OVERRIDE), because
we consider building as uid 0 to be at least somewhat unreasonable.
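(Most of these environments can be approximated on an ordinary machine
for testing; a rough sketch, assuming util-linux's taskset and standard
dpkg-dev tools, with purely illustrative numbers:)

    # pin the build to one CPU; nproc then reports 1
    taskset -c 0 dpkg-buildpackage -us -uc

    # cap the address space at ~2 GB to simulate a low-memory box
    (ulimit -v 2097152; dpkg-buildpackage -us -uc)

    # build in a time zone 12-13 hours away from UTC
    TZ=Pacific/Auckland dpkg-buildpackage -us -uc

    # build as uid 0 (the glib2.0 case above)
    sudo dpkg-buildpackage -us -uc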

If build-time test failures are always RC, however usual or unusual the
build environment, then one rational response would be for all maintainers
to disable build-time tests (if they are particularly conscientious,
they might open a wishlist bug, "should run tests", at the same time
as closing an RC bug like "FTBFS due to test failure when gid == 0"). I
don't think that is a desirable outcome. We are not building packages for
the sake of building them, but so that they can be used - which means we
should welcome efforts like build-time tests that improve our confidence
that the package is actually usable in practice, and not just buildable,
and try to avoid creating incentives to remove them.
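(Worth noting: the standard build-options interface already lets an
individual build skip tests without removing them from the packaging;
a minimal example:)

    # skip build-time test suites for this build only
    DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -us -uc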

For the specific question of whether a single CPU core is a "reasonable"
build environment, my answer at the moment is "I don't know".

> * Because this is a violation of a Policy "must" directive, I consider
> the downgrade to be a tricky way to modify Debian Policy without
> following the usual Policy decision-making procedure.

The wording of the serious severity is that it is a "severe" violation
of Debian Policy, which is qualified with "(*roughly*, it violates a
"must" or "required" directive)" (my emphasis). This suggests that there
can exist Policy "must" violations that are not RC.

The release team are the authority on what is and isn't RC: the fact that
serious bugs are normally RC is merely convention. However, I suspect
that the release team would not welcome being asked to add -ignore
tags to serious bugs that describe non-severe Policy "must" violations,
and would ask the package's maintainer to downgrade the bug instead.

> To illustrate why I think this guideline can't be universal, let's
> consider the case (as a "thought experiment") where we have a package
> which builds ok with "dpkg-buildpackage -A" and "dpkg-buildpackage -B"
> but FTBFS when built with plain "dpkg-buildpackage".

Bug#932795: Ethics of FTBFS bug reporting

2019-07-24 Thread Adrian Bunk
On Wed, Jul 24, 2019 at 11:34:53AM +0200, Santiago Vila wrote:
>...
> This is a Makefile bug in gcc-8-cross, a package which would qualify
> as "big". Maintainer did not initially believe it was a real bug,
> maybe because he built the package a lot of times in the past and the
> bug never happened to him.
> 
> See what the maintainer did afterwards:
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=928424
> 
> Would we say in this case that the package "requires" more than
> one CPU to build?

No one denies that these are bugs.

The actual problem is an ethical one, more specifically that you want
your "Please reproduce on a single-core machine and fix this" bugs to
be RC so that other people are forced to spend their time fixing them
if they want their packages to stay in testing.

> To me it seems like a bug which may happen to anybody, and the fact
> that it has not yet happened on buildd.debian.org is pure chance.
>...

This is not pure chance.

There are no single-core buildds for release architectures,
and there never will be.

gcc-8-cross is only built on architectures with strong autobuilders;
it is not built on ports architectures like hppa, where the build
would take weeks.

This is a bug, but the practical benefits of fixing it would be zero.

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Bug#932795: Ethics of FTBFS bug reporting

2019-07-24 Thread Santiago Vila
On Tue, Jul 23, 2019 at 01:30:58PM +0100, Ian Jackson wrote:

> I suggest the following approach:
> 
>  - Introduce the words "supported" and "reasonable".  So
> 
> Packages must build from source in any supported environment;
> they should build from source in any reasonable environment.
> 
>  - Provide a place to answer these questions:
> 
> What is a supported, or a reasonable, environment, is not
> completely defined, but here are some examples:
> 
> - An environment with only one cpu available is supported.
> - An environment with a working but non-default compiler
>   is reasonable but not supported.

I believe this is more complex than required, and will not necessarily
solve the problem.

Suppose we write clearly in Debian Policy that build environments with
one CPU are fully supported, and yet there are people who claim that
because buildds have several cores, it is wrong to report such bugs
as serious. We would be back at square one.

This is why I've described this problem as kafkaesque.

> On the point at issue, do these packages build in a cheap single-vcpu
> vm from some kind of cloud vm service ?  ISTM that this is a much
> better argument than the one you made, if the premise is true.

Sorry, I'm a little bit lost here. What exactly do you mean by "these
packages"?

All packages in Debian build ok on a single-CPU system, except the few
I've already reported in recent months/years and maybe a few I have
not reported yet for lack of time. I estimate there must be only a
handful of packages with bugs like this one.

Whether single-CPU machines are cheap or not depends mainly on the
amount of RAM, but contrary to what some people insinuate, my goal is
not and never has been to build packages on "cheap" machines, nor am I
complaining that packages do not build ok on "cheap" machines.

My complaint is that some packages do not build ok on machines which
are perfectly capable of building them, and yet we seem to be trying
to shift the blame and the responsibility onto the end user for not
having a clone of buildd.debian.org.


Russ Allbery wrote:
> I'm rather dubious that it makes sense to *require* multiple cores to
> build a package for exactly the reason that Santiago gave: single-core VMs
> are very common and a not-very-exotic environment in which someone may
> reasonably want to make changes to a package and rebuild it.  But maybe
> I'm missing something that would make that restriction make sense.

Thanks a lot for this. No, I don't think you are missing anything.

> And it's possible that multi-core may be a reasonable requirement
> for that "heavy package" tier.

How would that become a requirement? Big packages (as in "packages
requiring a lot of RAM or a lot of disk") do not really need more than
one CPU to build just because they are "big". In theory, using more
than one CPU should just make the build go faster, that's all.

Ideally, it should be up to the end user to decide whether they want
the package to build faster or not; I don't see it as something that
needs to be regulated by Policy.
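(For packages that honour the standard interfaces this choice already
exists; a small sketch, assuming a debhelper-style debian/rules that
respects DEB_BUILD_OPTIONS:)

    # serial build, e.g. on a single-CPU VM
    DEB_BUILD_OPTIONS=parallel=1 dpkg-buildpackage -us -uc

    # ask for four parallel jobs instead
    dpkg-buildpackage -j4 -us -uc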

A funny example which could be seen as a counter-example, but in my
opinion is not:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=924325

This is a Makefile bug in gcc-8-cross, a package which would qualify
as "big". The maintainer did not initially believe it was a real bug,
maybe because he had built the package many times in the past and the
bug never happened to him.

See what the maintainer did afterwards:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=928424

Would we say in this case that the package "requires" more than
one CPU to build?

To me it seems like a bug which may happen to anybody, and the fact
that it has not yet happened on buildd.debian.org is pure chance.

It must be noted that many of the bugs I found while building on
single-CPU systems are really like the one in gcc-8-cross, and not
like the one in p4est. A few more examples:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=906623
  heimdal, FTBFS because of Makefile bug

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=923476
  webkit2gtk, FTBFS because of Makefile bug
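(To make "Makefile bug" concrete: a purely hypothetical illustration,
not the actual bug in any of the packages above, of packaging that
builds everywhere except on a single-CPU machine; the recipe line must
start with a tab:)

    # debian/rules (hypothetical excerpt)
    NCPUS := $(shell nproc)
    NJOBS := $(shell expr $(NCPUS) - 1)   # "leave one CPU free"

    build:
    	$(MAKE) -j$(NJOBS)   # on a 1-CPU box this is -j0, and make errors out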


Thanks.



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Gunnar Wolf
Don Armstrong said [Tue, Jul 23, 2019 at 06:06:59PM -0700]:
> I think this discussion is great and good to have; thanks for starting it!

I completely concur.

> As a point of order, the TC isn't responsible for deciding whether bugs
> are RC or not. That responsibility belongs with the Release Managers.
> 
> [I don't think that should stop the TC from facilitating the decision
> and the baseline being enshrined in policy so the RMs can rely on it to
> decide whether it is RC or not.]

As we are at DebConf (from which most of the participants of this
thread so far are sorely missed), we have been having some
higher-bandwidth interactions.

I feel you are expressing the consensus we have found so far
informally. The thing is, the release managers have not been formally
invested with this power - but I think this is the way to pursue it. I
quite liked Ian's proposal of introducing "supported" and "reasonable"
as long-standing concepts that could be enshrined in our basic
documents (i.e. the constitution) and in delegations. That way, the
Release Team delegation's task description could include keeping an
updated set of qualifications for supported build environments, and
thus determining, for a bug report regarding the minimum
characteristics of a system, whether the FTBFS bugs it generates are
RC or not.

[ I would like to write more, but well, three beers are already in
  my system, as is customary by 22:30 at DebConf. ]



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Don Armstrong
I think this discussion is great and good to have; thanks for starting it!

As a point of order, the TC isn't responsible for deciding whether bugs
are RC or not. That responsibility belongs with the Release Managers.

[I don't think that should stop the TC from facilitating the decision
and the baseline being enshrined in policy so the RMs can rely on it to
decide whether it is RC or not.]

-- 
Don Armstrong  https://www.donarmstrong.com

Those who begin coercive elimination of dissent soon find themselves
exterminating dissenters. Compulsory unification of opinion achieves
only the unanimity of the graveyard.
 -- Justice Roberts in 319 U.S. 624 (1943)



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Adrian Bunk
On Tue, Jul 23, 2019 at 08:45:42PM +0200, Ansgar wrote:
> Adrian Bunk writes:
> > - An environment with at least 16 GB RAM is supported.
> >
> > Not sure about the exact number, but since many packages have 
> > workarounds for gcc or ld running into the 4 GB address space
> > limit on i386 it is clear that several packages wouldn't build
> > in an amd64 vm with only 8 GB RAM.
> 
> Aren't there even packages that will not build on i386 with a i386
> kernel (non-PAE) as they require the full 4 GB address space to be
> buildable?

That's true (and PAE doesn't make a difference for that).

> Even more, from the "32 bit archs in Debian" BoF at DebConf15 I remember
> the suggestion that one might have to switch to 64-bit compilers even on
> 32-bit architectures in the future...  So building packages would in
> general require a 64-bit kernel, multi-arch and 4+ GB RAM.

Most packages could still be built natively (and building GNU hello
will never require 4 GB RAM), but building all packages natively on
32-bit architectures is already problematic and might not be feasible
long-term.

> Ansgar

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Ansgar
Russ Allbery writes:
> Ansgar  writes:
>> Even more, from the "32 bit archs in Debian" BoF at DebConf15 I remember
>> the suggestion that one might have to switch to 64-bit compilers even on
>> 32-bit architectures in the future...  So building packages would in
>> general require a 64-bit kernel, multi-arch and 4+ GB RAM.
[...]
> I'm rather dubious that it makes sense to *require* multiple cores to
> build a package for exactly the reason that Santiago gave: single-core VMs
> are very common and a not-very-exotic environment in which someone may
> reasonably want to make changes to a package and rebuild it.  But maybe
> I'm missing something that would make that restriction make sense.

Well, the package that gave rise to this issue is this:

   The p4est software library enables the dynamic management of a
   collection of adaptive octrees, conveniently called a forest of
   octrees. p4est is designed to work in parallel and scale to hundreds
   of thousands of processor cores.

I doubt many people from that application domain work with single-core
systems.

There are other interesting issues as well: I recently had problems
running a numerics library in a VM where the host CPU supports AVX2
but the VM instance did not expose it. The library, however, used the
CPU model to select its preferred implementation (which then used AVX2
instructions)...

Just like issues with single-CPU systems, this is a bug, but not one
with a high priority for me.

Ansgar



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Russ Allbery
Ansgar  writes:
> Adrian Bunk writes:

>> - An environment with at least 16 GB RAM is supported.
>>
>> Not sure about the exact number, but since many packages have 
>> workarounds for gcc or ld running into the 4 GB address space
>> limit on i386 it is clear that several packages wouldn't build
>> in an amd64 vm with only 8 GB RAM.

> Aren't there even packages that will not build on i386 with a i386
> kernel (non-PAE) as they require the full 4 GB address space to be
> buildable?

> Even more, from the "32 bit archs in Debian" BoF at DebConf15 I remember
> the suggestion that one might have to switch to 64-bit compilers even on
> 32-bit architectures in the future...  So building packages would in
> general require a 64-bit kernel, multi-arch and 4+ GB RAM.

Weighing in here as a Policy Editor, I think we do have a rough consensus
in the project about what sorts of resources a package may or may not
require in order to build, in that we've made firm decisions in both
directions (dropping architectures that can no longer build large packages
in a reasonable length of time, for example, but also rejecting packages
that cannot be built reasonably on our buildds).  But they're
largely undocumented "tribal knowledge".

I would be in favor of writing down those guidelines, as has been
discussed on this thread, and publishing them as part of Policy, since I
think it would provide useful guide rails for developers to know how many
resources they can reasonably require for the package build, and what sort
of build environments they need to support (and therefore should at least
consider simulating to ensure that they do support them).

We could then align our archive-wide rebuild testing with the documented
minimum requirements for package builds, and all be consistently testing
the same thing, which would prevent some surprises.

I do think, as this thread has made clear, that we do have some minimum
requirements and don't expect packages to build in smaller environments.
Minimum available memory is a really obvious one; I'm sure many of our
packages won't build in 128MB of RAM, for example.

I'm rather dubious that it makes sense to *require* multiple cores to
build a package for exactly the reason that Santiago gave: single-core VMs
are very common and a not-very-exotic environment in which someone may
reasonably want to make changes to a package and rebuild it.  But maybe
I'm missing something that would make that restriction make sense.

It's possible that we may have to have a couple of levels of requirements:
base minimum requirements below which we don't expect any maintainer to
worry, and a higher tier of requirements for larger packages.  For
instance, I'm not sure that we want to say that we don't support building
*any* Debian package on a host that can't build Firefox (particularly
given our support for embedded devices); coreutils probably should build
on a lighter-weight machine than Firefox requires.  And it's possible that
multi-core may be a reasonable requirement for that "heavy package" tier.
If we do go down that path, though, it would be nice to add a metadata
field so that maintainers can flag their packages as "heavy" and our
users know not to expect them to build on commodity VMs.
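(No such field exists today; purely as a hypothetical sketch, Debian's
user-defined field mechanism would already allow something like the
following in debian/control, with the XS- prefix propagating the field
into the .dsc:)

    Source: some-heavy-package
    XS-Build-Resource-Hints: memory=16GB, disk=75GB, cores=2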

-- 
Russ Allbery (r...@debian.org)   



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Ansgar
Adrian Bunk writes:
> - An environment with at least 16 GB RAM is supported.
>
> Not sure about the exact number, but since many packages have 
> workarounds for gcc or ld running into the 4 GB address space
> limit on i386 it is clear that several packages wouldn't build
> in an amd64 vm with only 8 GB RAM.

Aren't there even packages that will not build on i386 with a i386
kernel (non-PAE) as they require the full 4 GB address space to be
buildable?

Even more, from the "32 bit archs in Debian" BoF at DebConf15 I remember
the suggestion that one might have to switch to 64-bit compilers even on
32-bit architectures in the future...  So building packages would in
general require a 64-bit kernel, multi-arch and 4+ GB RAM.

Ansgar



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Adrian Bunk
On Tue, Jul 23, 2019 at 01:30:58PM +0100, Ian Jackson wrote:
> Santiago Vila writes ("Bug#932795: Ethics of FTBFS bug reporting"):
>...
> On the point at issue, do these packages build in a cheap single-vcpu
> vm from some kind of cloud vm service ?  ISTM that this is a much
> better argument than the one you made, if the premise is true.
>...
> - An environment with only one cpu available is supported.
>...

- An environment with at least 16 GB RAM is supported.

Not sure about the exact number, but since many packages have 
workarounds for gcc or ld running into the 4 GB address space
limit on i386 it is clear that several packages wouldn't build
in an amd64 vm with only 8 GB RAM.

ISTR that some packages (machine learning?) might need more memory.

- An environment with at least 75 GB free diskspace is supported.

We do have at least one package in the archive that contains some 
hacks for staying inside the 75 GB diskspace available on the amd64 
buildds, and couldn't be built in a vm with even less diskspace.

> Ian.

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Adrian Bunk
On Tue, Jul 23, 2019 at 01:54:10PM +0200, Santiago Vila wrote:
>...
> * I'm told that single-cpu systems are an oddity and that most
> physical machines manufactured today are multi-core, but this
> completely fails to account for the fact that single-cpu systems are
> today more affordable than ever thanks to virtualization and cloud
> providers.
> 
> Just because most desktop systems are multi-core does not mean that we
> can blindly assume that the end user will use a desktop computer to
> build packages, or that users who do not build packages using a
> desktop computer deserve less support. We don't discriminate against
> minorities just because they are minorities.
>...

Trying to support using low-end machines for large-scale package
building is a lot of effort for marginal value.

Many package builds are tailored to build on the buildds,
and the packages won't build on lower-spec machines.

In some cases package builds are even pinned to specific buildds
when a package does not build on all buildds for an architecture
(e.g. FPU-heavy software not building on buildds with FPU emulation).

Thousands of packages will not build on machines without a sufficient
amount of RAM, and that's apparently fine with you.

A single-core VM with 8 GB RAM would be a weird setup for CPU-heavy work 
like package building.

If you want to work on having more packages build on single-core CPUs,
that's appreciated, but the use case is so exotic that your attempts
to force other people to work on it through RC bugs only prevent them
from working on more important issues.

There is a clear ethical difference between working on whatever one 
personally considers important, and trying to force other people to
work on what one personally considers important.

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Andrey Rahmatullin
On Tue, Jul 23, 2019 at 01:54:10PM +0200, Santiago Vila wrote:
> * Because this is a violation of a Policy "must" directive, I consider
> the downgrade to be a tricky way to modify Debian Policy without
> following the usual Policy decision-making procedure.
Please also note that https://release.debian.org/bullseye/rc_policy.txt,
not the Policy directly, is what defines RC bug severities.
For example, while the Policy says that a package in main "must not
require or recommend a package outside of main", the RC policy says
""Recommends:" lines do not count".

> Surely, the end user *must* be able to build the package as well, must
> they not?
I also guess it's not the only case where the buildd infra does things
differently; the best-known example is ignoring B-D alternatives.
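(For readers unfamiliar with that example: as far as I understand, the
buildd dependency resolver only ever installs the first alternative in
a list like

    Build-Depends: debhelper-compat (= 12),
                   default-mysql-server | mariadb-server

so a package may be buildable by hand with either alternative while the
buildds only ever exercise one of them.)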


-- 
WBR, wRAR




Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Ian Jackson
Santiago Vila writes ("Bug#932795: Ethics of FTBFS bug reporting"):
> Would it work, for example, if I propose a change to Debian Policy

I think the problem here is that:

 - Some packages do not build in quite sane non-buildd build
   environments, but:
 - Some build environments are too weird or too broken
 - We do not have the effort to write an exhaustive specification
   which will tell the difference in all cases
 - Worse, the issue is not addressed in policy at all so there
   is not even anywhere to put the answer for specific cases
 - We regard some FTBFS issues as non-RC but still bugs,
   and policy does not mention this at all

I suggest the following approach:

 - Introduce the words "supported" and "reasonable".  So

Packages must build from source in any supported environment;
they should build from source in any reasonable environment.

 - Provide a place to answer these questions:

What is a supported, or a reasonable, environment, is not
completely defined, but here are some examples:

- An environment with only one cpu available is supported.
- An environment with a working but non-default compiler
  is reasonable but not supported.

etc.

On the point at issue, do these packages build in a cheap single-vcpu
vm from some kind of cloud vm service ?  ISTM that this is a much
better argument than the one you made, if the premise is true.

Ian.

-- 
Ian JacksonThese opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is
a private address which bypasses my fierce spamfilter.



Bug#932795: Ethics of FTBFS bug reporting

2019-07-23 Thread Santiago Vila
Package: tech-ctte

Dear TC:

I reported this bug:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=907829

and it was downgraded on the basis that the official autobuilders
are multi-core.

I believe this downgrade is not appropriate, for several reasons:

* The informal guideline which is being used, "FTBFS are serious if
and only if they happen on buildd.debian.org", is not written anywhere
and it contradicts Debian Policy, which says "it must be
possible to build the package when build-essential and the
build-dependencies are installed".

* Because this is a violation of a Policy "must" directive, I consider
the downgrade to be a tricky way to modify Debian Policy without
following the usual Policy decision-making procedure.

* I also do not recognize the informal guideline being used as
universally applicable and valid in 100% of cases. In fact, I have yet
to see why people follow such a guideline when there is no rationale
written down anywhere. Packages which FTBFS on buildd.debian.org
certainly deserve a serious bug, but P => Q is not the same as Q => P.

If we have an FTBFS bug that nobody can reproduce, then ok, downgrading
the bug if the package builds ok on the buildds may make sense as a
cautionary measure until we have more info, but a single successful
build on buildd.debian.org does not ensure that the package will build
on every system where the package must build.

To illustrate why I think this guideline can't be universal, let's
consider the case (as a "thought experiment") where we have a package
which builds ok with "dpkg-buildpackage -A" and "dpkg-buildpackage -B"
but FTBFS when built with plain "dpkg-buildpackage".
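(For reference, these are the three build variants in the thought
experiment, which one would expect to behave consistently:)

    dpkg-buildpackage -A   # architecture-independent packages only
    dpkg-buildpackage -B   # architecture-dependent packages only
    dpkg-buildpackage      # full build, both of the above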

Are we truly and honestly saying this package would not deserve a
serious bug in the BTS just because it builds ok in the buildds?

Surely, the end user *must* be able to build the package as well, must
they not?


So, in the bug above, I'm asked to accept as a fact that we have
*already* deprecated building on single-cpu systems, implicitly and
automagically. Let's assume for a moment that such deprecation is real
and suppose I would like to "undeprecate" it. What formal procedure
should I follow for that?

Would it work, for example, if I propose a change to Debian Policy so
that it reads "Packages must build from source" instead of "Packages
must build from source on multi-core systems"? No, that would be
useless, because Debian Policy already says that packages must build
from source.

Would it work, for example, if I propose a change to Release Policy so
that it reads "Packages must build on all architectures on which they
are supported" instead of "Packages must only build ok in the official
buildds"? No, that would not work either, because Release Policy
already says that packages must build on all architectures on which
they are supported.

See how kafkaesque this is?

Currently, this is what is happening:

Whenever someone dares to report a bug like this as serious, following
both Debian Policy and Release Policy (or at least the letter of them),
we lambast them, we mock their build environment, we call them fools,
and we quote informal guidelines which are not written down anywhere.
If we do this consistently, then no doubt building on single-cpu
systems will become de-facto obsolete regardless of what policy says,
because nobody likes to be treated that way.

But surely there must be a better way. It is my opinion, and here is
where I'm asking the TC for support, that the burden of deprecating
building on single-cpu systems, or in general anything else which has
always been a policy "must" directive, should fall on those wishing
to deprecate such things, and they are the ones who should convince
the rest of us, not the other way around.

For example, being proud to call ourselves the Universal Operating System,
we drop release architectures when it's increasingly difficult for us
to support them, *not* because we dislike them, *not* because they are
inefficient, and *not* because amd64 is "better".

We take a lot of care when we are about to deprecate architectures: we
examine the facts, the pros and the cons, the number of bugs affecting
such architectures, the number of people with the special skills such
architectures require, that sort of thing.

I believe this to be a much better model of what we should do if we
really wanted to deprecate building on single-cpu systems, not what
happened in Bug #907829.

-

Addendum: I'm going to summarize some of the reasons I have been given
in favor of deprecating building on single-cpu systems, and why I
consider those reasons mostly bogus.


* I'm told that single-cpu systems are an oddity and that most
physical machines manufactured today are multi-core, but this
completely fails to account for the fact that single-cpu systems are
today more affordable than ever thanks to virtualization and cloud
providers.

Just because most desktop systems are multi-core does not mean that we
can blindly assume that the end user will use a desktop computer to
build packages, or that users who do not build packages using a
desktop computer deserve less support. We don't discriminate against
minorities just because they are minorities.