On Wed, Sep 16, 2020 at 5:10 AM Eric Rescorla <e...@rtfm.com> wrote:

> I certainly have some sympathy for this, but I'm not sure that it needs to
> be
> addressed via tools. What I would generally expect in cases like this is
> that the reviewer says "why isn't there a test for X" and the author says
> "for reason Y" and either the reviewer does or does not accept that.
> That's certainly been my experience on both sides of this interaction
> in previous instances where there were testing policies but not this
> machinery.
>

Sure, but it just adds latency to the whole process. Hopefully requiring an
extra round trip will be rare.


>> Also, contrary to what ahal said, I don't know that tests being an official
>> requirement relieves the burden of guilt of asking for tests, as everybody
>> already knows that tests are good and that you should always write tests.
>>
>
> I also don't know how this will impact people's internal states; I see
> this as having two major benefits:
>
> 1. It tells people what we expect of them
> 2. It gives us the ability to measure what's actually happening and adjust
> accordingly.
>
> To expand on (2) a bit: if we look back and find that a lot of code
> landed without tests but exceptions weren't filed, then we know we need
> to work on one set of things. On the other hand, if we see a lot of
> exceptions being filed in cases where we don't think there should be any
> (for whatever reason), then we need to work on a different set of things
> (e.g., improving test frameworks in that area).
>

Yeah, I think gathering data about the actual level of testing is a great
output of this process. I'm curious to see what it will turn up.
Presumably it could also be bucketed by Bugzilla components and so forth.
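
For what it's worth, here's a rough sketch of how that bucketing might
look against the public Bugzilla REST API. The "testing-exception"
keyword is purely a placeholder; whatever the policy actually uses to
mark exceptions would go in its place:

    import collections
    import requests

    # Fetch fixed bugs carrying a hypothetical "testing-exception" keyword
    # and bucket them by product/component. The keyword name is a stand-in;
    # the real policy may track exceptions elsewhere (e.g., Phabricator tags).
    resp = requests.get(
        "https://bugzilla.mozilla.org/rest/bug",
        params={
            "keywords": "testing-exception",  # placeholder keyword
            "resolution": "FIXED",
            "include_fields": "id,product,component",
            "limit": 500,
        },
        timeout=30,
    )
    resp.raise_for_status()

    counts = collections.Counter(
        (bug["product"], bug["component"]) for bug in resp.json()["bugs"]
    )
    for (product, component), n in counts.most_common(20):
        print(f"{n:4d}  {product} :: {component}")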


>
>
>> My perspective might be a little distorted, as I think a lot of the
>> patches I write would fall under the exceptions, either because they are
>> refactorings or because I am fixing a crash or security issue based on
>> just a stack trace.
>>
>> Separately, one category of fixes I often deal with is fixing leaks or
>> crashes in areas of the code I am unfamiliar with. I can often figure out
>> the localized condition that causes the problem and correct that, but I
>> don't know enough about, say, service workers or networking to write a
>> test. Do I need to hold off on landing until I can find somebody who is
>> willing and able to write a test for me, or until I'm willing to invest
>> the effort to become knowledgeable enough in an area of code I'm unlikely
>> to ever look at again? Neither of those options feels great to me.
>>
>
> Agreed that it's not an ideal situation. I think it's important to step
> back and ask how we got into that situation, though. I agree that it's
> not that productive for you to write the test, but hopefully *someone* at
> Mozilla understands the code and is able to write a test, and if not, we
> have some other problems, right? So what I would hope here is that you'd
> be able to identify the problem, maybe write a fix, and then hand it off
> to someone who could carry it over the line.
>

I mean, everybody has their own priorities. Part of the reason I'm fixing
leaks in random code is that I understand the leak-fixing tools very
well, but part of it is also that I care more about leaks than the average
person. If I've fixed a leak in an obscure error case in some web API,
maybe the experts in that API are willing to review a patch to fix the
leak, but if they have to weigh figuring out how to write a test for the
leak against meeting some important deadline or their H2 goal, maybe they
aren't going to get to it quickly. I can't even say that's the wrong
decision for them to make.


>
>
>> Another testing difficulty I've hit a few times this year (bug 1659013 and
>> bug 1607703) is crashes that happen when starting Firefox with unusual
>> command-line or environment options. I think we only have one testing
>> framework that can test those kinds of conditions, and it seemed like it
>> was in the process of being deprecated. I had trouble finding somebody
>> who understood it, and I didn't want to spend more than a few hours
>> trying to figure it out for myself, so I landed the fix without a test.
>> To be clear, I could reproduce at least one of those crashes locally, but
>> I wasn't sure how to write a test to create those conditions. Under this
>> new policy, how do we handle fixing issues where a good testing framework
>> doesn't already exist? Writing a test is hopefully not too much of a
>> burden, but obviously having to develop and maintain a new testing
>> framework could be a good deal of work.
>>
>
> As a general matter, I think we need to be fairly cautious about landing
> code that we aren't able to test. In some cases that means taking a step
> back and doing a lot of work on the testing framework before you can
> write some comparatively trivial code, which is obviously annoying at the
> time but is also an investment in the future. In other cases, we'll have
> to make a decision that the value of the change outweighs the risk of
> landing untested code.
>

That's true. In these particular cases, I might not have looked at the
crashes if I'd realized before I started investigating that they were
happening with weird options, but once I had, the fix was straightforward,
so I decided I might as well land a patch.
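
Just to illustrate the shape such a test could take (this isn't any
existing Mozilla harness), here's a minimal sketch that spawns the binary
with the problem options and checks that startup survives. FIREFOX_BIN,
the extra flag, and the environment variable are all placeholders for
whatever the real reproduction needs:

    import os
    import subprocess

    def test_startup_with_unusual_options():
        # Launch with the unusual environment/command line that triggered
        # the crash; --screenshot makes a headless Firefox exit on its own.
        env = dict(os.environ, MOZ_SOME_UNUSUAL_VAR="1")  # placeholder env var
        proc = subprocess.run(
            [
                os.environ["FIREFOX_BIN"],   # path to the binary under test
                "--some-unusual-flag",       # placeholder for the problem option
                "--headless",
                "--screenshot",
                "about:blank",
            ],
            env=env,
            timeout=120,
            capture_output=True,
        )
        # A startup crash typically shows up as a non-zero exit status.
        assert proc.returncode == 0, proc.stderr.decode(errors="replace")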


> This also depends on the nature of the change: if you're fixing a crash in
> a
> section of code, the bar might be lower than if you're adding a new
> feature.
>

Yeah, that's a good distinction.

Andrew


> -Ekr
>
>