On Tue, Apr 10, 2018 at 10:35:07AM -0400, David Miller wrote:
>From: Sasha Levin <alexander.le...@microsoft.com>
>Date: Tue, 10 Apr 2018 13:49:40 +0000
>> This commit has been processed because it contains a "Fixes:" tag,
>> fixing commit: d9f9b9a4d05f devlink: Add support for resource abstraction.
>> The bot has tested the following trees: v4.16.1.
>This is nice, but it's becomming noise.

Thank you, and sorry. I've blacklisted net/ and drivers/net/.

>I pick and choose specific networking changes to queue up for -stable
>and whether it has a Fixes tag or applies cleanly are not the primary
>considerations I take into account when deciding what goes to stable
>or not.
>What matters primarily are two attributes:
>1) What is the impact of the bug?
>2) How risky is the change?
>3) Did someone explicitly ask for the backport?  Why?

When I started the autoselection project, I found it interesting that
the networking subsystem has a unique workflow for stable commits where
a single person would review every single patch that gets merged, and
would select patches purely based on whether the patch follows the
stable rules or not, without relying on stable tags.

For the first few iterations of the neural network, I thought that the
networking commit set was valuable: if I tried to learn only from
commits that were tagged for stable, the neural network would develop a
bias towards the tag itself, which would prevent it from detecting
bug-fixing patches that do not contain such tags.
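To make the bias concrete, here is one plausible way to avoid it: strip
the stable tag line from commit messages before training, so the
classifier has to learn from the change itself rather than from the
label. This is purely a hypothetical sketch, not the bot's actual
preprocessing; the regex and function name are illustrative.

```python
import re

# Hypothetical preprocessing step: remove "Cc: stable@..." tag lines
# from a commit message so a classifier cannot simply key on the tag.
STABLE_TAG = re.compile(
    r"^\s*cc:\s*<?stable@(?:vger\.)?kernel\.org.*\n?",
    re.IGNORECASE | re.MULTILINE,
)

def strip_stable_tag(message: str) -> str:
    """Return the commit message with any stable tag lines removed."""
    return STABLE_TAG.sub("", message)

msg = """dccp: fix an out-of-bounds access

Cc: <stable@vger.kernel.org> # 4.9+
Signed-off-by: A Developer <dev@example.com>"""

clean = strip_stable_tag(msg)
```

With the tag gone, a tagged and an untagged fix look the same to the
model, so it cannot shortcut its way to the label.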

However, during validation, I found that the quality of the selection on
a random set of commits wasn't what I expected it to be. I reviewed a
random sample of networking commits and found that a high percentage of
commits that are clearly bug fixing were not included in any stable
tree.

This changed my view on how the stable process works for different
subsystems; I realized that there are 3 main things we look for in
stable trees:

 1. They have all the bug-fixing commits that are relevant to that tree.
 2. There are no non-bug-fixing commits in the tree.
 3. The commits are backported correctly to that tree.

And since subsystems have different workflows for stable, I could see
how different approaches balanced these goals. For networking, it was
easy to see that a high quality of review by the maintainer prevented
any non-bug-fixing commits from creeping in, and the error rates for
backports were much smaller than any other subsystem I looked at.

What was missing was the first goal. My hypothesis was that, after all,
we are only human; with the high rate of commits that flow into a
subsystem, and the opt-in nature of the process, we are bound to
miss commits that would otherwise belong in a stable tree.

I could back up my theory with experiences other folks had [1][2],
so one of my biggest hopes was to see how the autoselection could
improve the quality of selection and build on the biggest advantage the
networking subsystem has over other subsystems: a dedicated maintainer
who actually gives a crap and reviews these patches himself.

When the work on the neural network itself quieted down, and the first
results started trickling in, I was delighted that we could now help
select commits that were missed by manual inspection, but are clearly
bug fixing. For example, [3][4][5].

>These automated emails tell me nothing about either attribute.

The attributes you've listed are subjective, and are the final step of
selecting patches for stable trees. Consider the following additional
attributes you undoubtedly look at when going through commits:

 1. Is it fixing a bug?
 2. Is it relevant to any of the stable trees?

Only after deciding on these attributes does it make sense to look at
the ones that require actual thinking (such as the ones you've listed).

>I figure out what Fixes: tags are involved and whether the patch
>applies cleanly when I process my stable queue of networking patches.

This is another thing that doesn't always end up right. When done
manually, patches tend to get lost, forgotten, or end up with a partial
backport.

Doing this is a combination of "dumb" manual labor where for each stable
tree we try to find whether the commit we fix exists in it, and which
other commits we may depend on, and "smart" labor where we try to figure
out whether the commit should actually be in that tree.

The bot tries to take the "dumb" part out of your way, by letting
you know from the start which trees this applied/built on and what
dependencies it might have. It comes for free; why not use it?
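The "dumb" part can be sketched in a few lines of shell. This is a
guess at the logic, not the bot's actual code; the throwaway repository
and the branch names are purely illustrative:

```shell
# Sketch: for each stable branch, check whether the commit named in a
# patch's Fixes: tag is reachable, i.e. whether the bug exists there.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m "introduce feature (later found buggy)"
FIXES_SHA=$(git rev-parse HEAD)   # stands in for the Fixes: commit
git branch linux-4.16.y           # stable tree that contains the bug
git checkout -q --orphan linux-4.4.y
git -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m "older tree that never had the bug"

for branch in linux-4.16.y linux-4.4.y; do
    # --is-ancestor exits 0 when FIXES_SHA is reachable from the branch
    if git merge-base --is-ancestor "$FIXES_SHA" "$branch"; then
        echo "$branch: bug present, candidate for the fix"
    else
        echo "$branch: bug never present, skip"
    fi
done
```

Whether the fix then *belongs* in a candidate tree is the "smart" part
that still needs a human.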

As an example, a fix for CVE-2017-16939 [6] was backported to v4.9
but not to v4.4 or v3.18 because of missing dependencies.

>For various reasons I have to do this all manually anyways.  I always
>do a full analysis of where the bug came from, and also do manual
>code inspection to find the cases where a Fixes: tag indicates a commit
>that was also put into stable branches and what impact that has on
>the backport of the stable fix for a particular stable tree.

Would you be interested in going over these reasons, and whether the bot
can automate any of these? Things like determining whether a Fixes:
commit is already in stable are automated (excluding the impact
analysis).

>So I would like to kindly ask that you stop sending these automated
>emails, they really are not helping my networking stable backport
>process at all, and instead are just making more emails I have to
>delete from my inbox every day.
>Thank you very much for your kind consideration in this matter.

What would be the appropriate way to submit these to the networking
stable branch?

Let's look at a concrete example: I sent a mail about a dccp fix [7] a
few days ago. Could we discuss why it's not in any stable tree?

[1] https://www.spinics.net/lists/stable/msg173995.html
[3] https://patchwork.kernel.org/patch/10188235/
[6] https://www.spinics.net/lists/netdev/msg469458.html
[7] https://lkml.org/lkml/2018/4/8/762
