On 1/12/26 21:30, Dan Carpenter wrote:
>> +If tools permit you to generate a contribution automatically, expect
>> +additional scrutiny in proportion to how much of it was generated.
>> +
>> +As with the output of any tooling, the result may be incorrect or
>> +inappropriate. You are expected to understand and to be able to defend
>> +everything you submit. If you are unable to do so, then do not submit
>> +the resulting changes.
>> +
>> +If you do so anyway, maintainers are entitled to reject your series
>> +without detailed review.
> Argh... Flip. In context, that sounds even more sinister and
> threatening than my over the top proposal. We have to "defend"
> everything? "If you do so anyway" sounds like we're jumping to a
> "per my last email" from the get go. What about:
>
> If tools permit you to generate a contribution automatically, expect
> additional scrutiny in proportion to how much of it was generated.
>
> Every kernel patch needs careful review from multiple people. Please,
> don't start the public review process until after you have carefully
> reviewed the patches yourself. If you don't have the necessary
> expertise to review kernel code, consider asking for help first before
> sending them to the main list.
>
> Ideally, patches would be tested but we understand that that's not
> always possible. Be transparent about how confident we can be that the
> changes don't introduce new problems and how they have been tested.
>
> Bug reports especially are very welcome. Bug reports are more likely
> to be dealt with if they can be tied to the individual commit which
> introduced them. With new kinds of warnings, it is better to send
> a few at a time at the start to see if they are a real issue or how
> they can be improved.
Hey Dan,

I agree with most of what you wrote here in general. My only issue is
that it reads as good, generic advice rather than anything specific to
tooling-generated contributions.

For instance, the proposed text says:

"Ideally, patches would be tested..."

Testing is already covered in Documentation/process/submit-checklist.rst
and in a few other places. What I do think belongs in this document
(and is missing in v5) is a note that maintainers may expect *extra*
testing for tool-generated content. I've added a blurb to that effect
to my working v6 version.

Is there anything else missing that is truly specific to
tooling-generated contributions and not already covered in other
documentation?