TL;DR:
I think this is almost 180 degrees opposed to the right strategy.
What we need right now is coverage of the things that work, so that
when we make changes we know we're not making the situation worse.
*All* other testing work should wait on this. In particular, crashtests
are not useful at this point.


Analysis follows:
Currently, the state of the code can best be described as brittle. The
sunny-day cases work, and work fairly reliably, but as you deviate from
them you start to get crashes. This isn't surprising; it's a fairly normal
state of affairs for software that was developed in a hurry with an
emphasis on features rather than testing, which is what this is.

There are three major classes of problems in the code:

- Functional defects (stuff which should work and does not).
- Crashes (i.e., memory errors, whether security-relevant or not).
- Memory and other resource leaks.

There are a lot of all of these, and a number of them require fairly
significant refactoring to fix. The result of this is that:

1. It's not going to be possible to *run* automated unit tests for some
time (this is partly due to deficiencies in the testing framework, but you
go to the release with the testing framework you have, not the one you
wish you had.)

2. It's quite common to have multiple reports of problems due to the same
underlying defect. Even when that's not true, we're often going to
accidentally fix one defect while working on another.

3. It's quite easy to break one piece of functionality while refactoring
another. We've already had a number of instances of this.

For the reasons above, building crashtests for each problem really isn't
that useful to development, because it's quite likely that those problems
will go away on their own thanks to reason #2. I'm obviously not saying
that one shouldn't report the bugs or *eventually* produce a crashtest,
but it's not useful to us *now*.

Rather, what is useful is to develop unit tests for the stuff that does
work, so that when we are fixing one thing we don't accidentally break
another. I realize that these cannot currently be run automatically on
m-c, but they can be run either (a) on a developer's own system or (b)
on alder under the more resilient test framework Ted is hacking up. It
already takes far too long to validate manually that your changes
haven't broken stuff, and we don't have anything like full coverage in
JS tests of the functionality that works.
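To make this concrete, here is a rough sketch of the kind of mochitest
I mean, assuming the usual SimpleTest harness plus the fake-stream
support Henrik mentions below (the mozGetUserMedia entry point and the
fake: true constraint are my assumptions about the current tree; adjust
to whatever the harness actually exposes):

    <!DOCTYPE html>
    <!-- Sunny-day coverage: getUserMedia with a fake video device
         should succeed and hand back a MediaStream. -->
    <html>
    <head>
      <script src="/tests/SimpleTest/SimpleTest.js"></script>
    </head>
    <body>
    <script type="application/javascript">
    SimpleTest.waitForExplicitFinish();

    // fake: true is assumed to select the fake capture device.
    navigator.mozGetUserMedia({ video: true, fake: true },
      function onSuccess(stream) {
        ok(stream, "got a MediaStream from the fake video device");
        SimpleTest.finish();
      },
      function onError(err) {
        ok(false, "getUserMedia failed: " + err);
        SimpleTest.finish();
      });
    </script>
    </body>
    </html>

Each test like this pins down one sunny-day behavior, so a refactoring
that breaks it fails loudly instead of being discovered by hand.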

If we want people not to break the system, it needs to be reasonably easy
to verify that one has not done so.

The nice thing about this level of test is that it's easy to develop.
Indeed, we already have a number of manually-run tests that each cover
one variant. All we need is someone to stamp out a bunch of tests that
(a) are automatic and (b) cover more of the space, as in the sketch
below. If you find bugs doing that, then great. Otherwise, we have a
regression harness and you can move on to crashtests.
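For stamping out the variants, something along these lines inside the
same test skeleton would do; the variant list here is illustrative, not
exhaustive:

    // Walk the constraint matrix with one test body instead of
    // hand-writing one file per variant. Assumes the SimpleTest
    // plumbing and the (assumed) fake: true constraint from the
    // sketch above.
    var variants = [
      { audio: true, fake: true },
      { video: true, fake: true },
      { audio: true, video: true, fake: true },
    ];

    function runNext() {
      var constraints = variants.shift();
      if (!constraints) {
        SimpleTest.finish();
        return;
      }
      navigator.mozGetUserMedia(constraints,
        function (stream) {
          ok(stream, "got a stream for " + JSON.stringify(constraints));
          runNext();
        },
        function (err) {
          ok(false, JSON.stringify(constraints) + " failed: " + err);
          runNext();
        });
    }

    SimpleTest.waitForExplicitFinish();
    runNext();

Adding a variant is then a one-line change, which is what makes broader
coverage cheap.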

-Ekr




On Mon, Nov 19, 2012 at 4:43 PM, Henrik Skupin <[email protected]> wrote:

> Hi all,
>
> There is a lot of work happening on the WebRTC code, and I was asked
> a while back to get automated tests written. So our team started with
> some plain Mochitests about a month ago but failed to run them due to
> a lot of crash-related bugs. As a result I started to get some
> crashtests into the tree, but we were never able to enable them due
> to leaks in the WebRTC code. Given all those problems and a high
> demand for us on the WebAPI project, we stepped away for a while.
>
> Now that I'm back on the WebRTC project, the situation is probably
> better. So I'm going to analyze the various problems we had before,
> get those solved, and get new tests implemented. That said, we have
> to cover two types of tests: crashtests and mochitests. Ideally we
> want to see those running on tbpl for mozilla-central builds. But
> there are some roadblocks in the way which we should try to get fixed
> over the next couple of days (or weeks); the earlier the better. So I
> propose the following:
>
> 1. Let's get the crashtests enabled on m-c by fixing all the leaks
> and failures: this probably won't take that long, given that only a
> single crashtest is causing leaks at the moment (bug 813339). But I
> can't tell how much work is necessary to get this fixed. Also, there
> is one remaining test failure for Cipc tests (bug 811873). For
> myself, it's most likely only the latter that remains to be
> investigated.
>
> 2. Create more crashtests: seeing all those crashes in the WebRTC
> code, we definitely have to create crashtests at the time the patches
> land. Otherwise it gets harder to reproduce them and to verify their
> stability later, because other check-ins could have caused further
> crashes in the meantime. I have seen this a couple of times in the
> last weeks.
>
> 3. Create a basic set of mochitests: with the more stable code we can
> probably create some basic mochitests now. To that end I would ask
> Jason to give me a good list of the tests he most likely has in mind.
> Those should be added to the WebRTC QA page for easier reference. I
> will then pick items, get them filed as bugs, and work on the
> implementation. The ones I'm not currently working on I will mark as
> mentored, so our wonderful community can pick up tasks too.
>
> Just some more notes:
>
> - When I find leaks for any type of test I will investigate them and
> get them filed. I hope we can get those fixed quickly. Would that be
> possible?
>
> - By my judgement I would create crashtests and mochitests at a 3:1
> ratio, though that depends on the number of remaining crashtests to
> write or to transform from an already existing testcase.
>
> - Keep in mind that the newly created Mochitests can leak! We will
> not be able to check those into mozilla-central until the leaks have
> been fixed. That means we could land them temporarily on the alder
> branch so we at least get coverage across platforms.
>
> - Any mochitest or crashtest we run gets implemented with faked media
> streams; we will never be able to use real devices. That means we
> still need manual testcases.
>
> - It would be a great help for me to get a good list of possible
> mochitests (as mentioned above). That way I can get started on their
> implementation as soon as possible.
>
>
> It could be that I missed something, so please ask if anything is
> unclear or needs discussion. I really would like to see green results
> on tbpl for WebRTC tests in the near future, and a good process is
> crucial here.
>
> Thanks!
>
> --
> Henrik