On Sun, Feb 19, 2017 at 06:27:16PM +0100, Christoph Biedl wrote:
> Ian Jackson wrote...
>
> > If there is to be a failure probability threshold I would set it at
> > 10^-4 or so.  After all, computer time is cheap.
> 
> To determine 10^-4 with some accuracy you'd have to rebuild that
> package 20000 times. This is an amount where computer time isn't cheap
> any longer.

The figure Ian proposes is very ambitious indeed, so much so that it
may not be very realistic.
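
To make that concrete, here is a rough back-of-the-envelope sketch in
Python. It assumes builds are independent and the true per-build
failure probability is exactly 10^-4, both of which are simplifying
assumptions on my part:

p = 1e-4          # hypothetical per-build failure probability
builds = 20000    # the number of rebuilds Christoph mentions

# Probability of seeing *zero* failures in that many builds even though
# the true rate really is 10^-4:
prob_all_clean = (1 - p) ** builds
print(f"P(no failure in {builds} builds | p = {p}) = {prob_all_clean:.3f}")
# ~0.135, i.e. 20000 clean builds still leave a ~13% chance that the
# package really does fail once every 10000 builds on average.

# "Rule of three": to push that residual chance below ~5% you need
# roughly 3/p consecutive clean builds.
print(f"Builds needed for ~95% confidence: ~{3 / p:.0f}")
# ~30000

So confirming a 10^-4 threshold for even a single package takes tens of
thousands of builds, which is why the target seems hard to verify in
practice.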

I fully agree with the underlying idea, however: if we can actually
measure the failure rate, then the package already fails too often to
be acceptable.

Anybody who tries to build all Debian source packages will find that
the number of packages that fail to build is a random variable:
sometimes it will be 3, sometimes it will be 5, and so on.

A nice reliability goal would be for the expected value of this random
variable (the number of failing packages) to be closer to 0 than to 1
when you try to build every single package once.

For that to happen, the roughly 50 packages which FTBFS randomly would
each have to do so less than 1% of the time (I'm assuming here that
all the others "never" fail to build).

I think this is feasible, but only if we stop allowing (i.e. start
making RC) things that we currently seem to allow.


BTW: Could anybody tell me when exactly "FTBFS on a single-CPU machine"
stopped being serious and RC? Did such a thing ever happen?

I'm pretty sure that I was already a DD when such a change could have
happened, but right now I can't remember it.

(See #848063 for why I'm asking this).

Thanks.
