Peter Jones wrote:
> To categorize our analogies, mine is an analogy for Fedora, yours is an
> analogy for your desktop machine. If you feel like running new untested
> packages on your desktop machine, that's fine, we've got rawhide (and
> updates-testing) for that. You can also feel free to participate in
> life-threatening activities that you find challenging and beneficial to
> your own well-being and try to establish records for not dying on the
> highest high-wire or whatnot. Running untested packages may toast your
> desktop machine, but doing so also has inherent benefit to the greater
> group. But putting those packages in Fedora without going through
> updates-testing or rawhide first is effectively doing the high-wire
> without a net /in the circus/, not in your back yard or the Alps or
> wherever on your own.

Your analogy still misses the point; see below.

>> For example, some regressions slip through testing (this will ALWAYS
>> happen, testing is not and CANNOT be perfect)
> 
> Perfect is the enemy of good. Our testing will never be perfect, but
> requiring that it happen is better than allowing it not to. If it isn't,
> the answer is to make the testing better - not to skip it entirely!

But the problem is what to do once testing has ALREADY failed, i.e. a 
regression has slipped through anyway. At that point, the best strategy is 
to fix the problem ASAP, bypassing testing this one time, to get the 
regression out of the way.

>> why should our users have to suffer through them for several days
>> instead of getting them fixed in the next update push (i.e. as soon as
>> possible)?
> 
> This is a logically callow statement. Our users do not *suffer* from
> non-critical updates being delayed for a short time,

An update fixing a regression from the previous update is not really "non-
critical". Users definitely suffer if a package that used to work is now 
broken. The sooner it is fixed, the better.

> nor do they *suffer* from critical updates getting sufficient testing

Most users are not going to manually pull individual updates from 
updates-testing, nor enable updates-testing wholesale. So they WILL suffer 
the effects of the bug for longer if the fix is held in testing rather than 
pushed directly to stable.

> so as not to immediately require *another* critical update.

The point is to do direct stable pushes only where another regression is 
very unlikely (e.g. because the fix for the first regression is a one-line 
or otherwise trivial change). I know the likelihood can never be 0, but if 
you can choose between a .001% chance of the package being broken (due to 
another, unforeseen regression) and a 100% chance of the package being 
broken (because the regression fix is not in stable yet), why choose the 
100%? In the worst case you push yet another update directly to stable to 
fix the second regression, but the chance of that being necessary is 
extremely low.
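
To make that comparison concrete, here is a rough back-of-the-envelope 
sketch (plain Python, purely as an illustration; the .001% figure is just 
the example number from above, not a measured rate):

  # Purely illustrative: compare the chance that a user ends up with a
  # broken package under the two choices, using the example numbers above.
  p_second_regression = 0.00001  # .001%: assumed risk that the trivial
                                 # regression fix itself breaks something
  p_known_regression = 1.0       # 100%: the already-shipped regression is
                                 # guaranteed to affect users until fixed

  # Hold the fix in testing: users keep hitting the known regression.
  risk_if_fix_delayed = p_known_regression

  # Push the fix directly to stable: users only face the tiny chance of a
  # second, unforeseen regression.
  risk_if_fix_direct = p_second_regression

  print(risk_if_fix_delayed, risk_if_fix_direct)  # 1.0 vs. 1e-05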

> At no point in the scenario you paint is there any actual suffering.

Sorry, but a user sitting in front of a broken application definitely 
qualifies as "suffering" under my definition.

> You haven't actually demonstrated any real problems it will introduce;
> just the same (rather thin) strawman over and over.

I have; you keep either ignoring or failing to understand my arguments! 
(There is more than one, but regression fixes are the case I care about 
most; I feel very strongly that direct stable pushes are often the best 
approach for those.)

> Given a lack of actual, real problems demonstrated with the bizarro
> concept of actually requiring that updates go through our QA
> infrastructure, the answer certainly seems to be: yes, absolutely.

Actual, real problem:
* update X gets created
* update X sits in testing for 1 or 2 weeks, either no regressions are found 
or all those that are found get fixed
* due to the positive feedback, update X gets pushed to stable
* within a day of the stable push, a regression is found in update X 
(since, by design, many more people run stable than updates-testing)
* update X' gets created to fix the regression in update X
Now we have 2 options:
a) let X' sit in testing for a week. Users are affected by the regression in 
X for a full week.
b) push X' directly to stable. Users are only affected by the regression for 
the time it takes to process the push containing X', which is the minimum 
possible response time.
How is option b not more desirable than option a?

And this is NOT a fictional example; it has happened many times with real 
updates.
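
Just to put rough numbers on options a and b (an illustrative sketch only; 
the one-week testing period is the figure from the scenario above, while 
the one-day push turnaround is my assumption, not an actual Bodhi 
constant):

  # Illustrative timeline for the X / X' scenario above.
  push_turnaround_days = 1     # assumed delay until the next stable push
  testing_period_days = 7      # option a: X' waits out a week in testing

  # Option a: users run the broken X for the whole testing period plus the
  # final stable push delay.
  exposure_a_days = testing_period_days + push_turnaround_days

  # Option b: users run the broken X only until the next stable push
  # containing X' is processed.
  exposure_b_days = push_turnaround_days

  print(exposure_a_days, exposure_b_days)  # 8 vs. 1 days of breakage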

(I'll also note that James Antill's poorly-named "Choice" proposal would be 
particularly harmful here, because it would especially penalize follow-up 
updates like X', making it basically impossible to fix regressions in a 
timely manner. The testing time for X' must be decided ONLY based on the 
changes between X and X', not on past history, nor on the mere fact that X' 
is a follow-up.)

        Kevin Kofler

