As Julien points out, Gaia engineers are much more likely to write
integration tests in JavaScript (given that they're JavaScript developers)
than Python.

Is there documentation on how to set up a development environment to make
this possible? James Lal tried to show me at the last work week but it
involves creating a special build and I admit I still haven't got around to
figuring that out.

Ben


On Thu, Jun 13, 2013 at 7:23 AM, Tony Chung <[email protected]> wrote:

>
> On Jun 11, 2013, at 5:23 PM, Gareth Aye <[email protected]> wrote:
>
> > Thanks for the reality check Jason && Tony :). I don't think I knew how
> resource-constrained QA was and I just figured this might help bridge what
> seems like a pretty big divide between eng and QA.
> >
> > While we're on the topic, I would really appreciate if someone from QA
> wouldn't mind giving an intro to FFOS QA at some point. I'm clearly pretty
> clueless about what you guys are up to and I think I'm maybe not the only
> one? Some of us are very interested in adding more automated,
> integration-level tests to Gaia and it'd be nice if we could work together!
>
> Hey Gareth,
>
> the WebQA team actually did a session on Gaia UI automation (in Python) at
> the last Madrid work week, but it certainly didn't cover everyone.   Since
> then, blog posts and brownbags have been posted:
>
> https://github.com/mozilla/gaia-ui-tests
> https://air.mozilla.org/b2gfirefoxos-front-end-automation-in-python/
>
> there's also a platform QA team that has been working for a while on
> porting hundreds of thousands of mochitests over to run against B2G.   Many
> of these are running nicely on TBPL.   (see the B2G boxes in
> https://tbpl.mozilla.org/)
>
> Credit for this work goes to the Gaia UI QA team (Zac Campbell, Stephen
> Donner, and others) and the platform QA team (Martijn Wargers, Geo Mealer,
> David Clarke).
>
> feel free to reach out to these individuals for more on the automation
> strategy.
>
> >
> >
> > On Tue, Jun 11, 2013 at 4:33 PM, Tony Chung <[email protected]> wrote:
> > To add to Jason's points, it's absolutely too much overhead for my team
> to smoketest each checkin.  The only feasible way is having automated tests
> that run per checkin.   It's worth discussing with the A-team / Gaia UI
> team whether that is feasible, and in what time frame.
> >
> > Our current QA team does write automated tests, but the suite currently
> covers the 1.01 smoketests and we haven't gotten around to testing new
> features.  We're still one build behind and can't catch up to the current
> release fast enough for this.   We would request the help of Gaia dev in
> this effort.    (Device-level testing is better, but maybe running on the
> desktop build is good enough.)
> >
> > The other condition you need is parallelizing Gecko changes as Gaia
> changes land.   I know some checkins are independent of Gecko, but some
> aren't and require parallel landings.    To test end-to-end infra per
> checkin, you'd need to sync up landings with Gecko, at least within 24
> hours I suppose.   This means if we sync master, we'll have to sync
> mozilla-central as well, to truly get integration testing going.
> >
> > Tony
> >
> > On Jun 11, 2013, at 3:22 PM, Jason Smith <[email protected]> wrote:
> >
> >> The one problem I have with this proposal is that it sounds like it's
> proposing to do smoke test runs on every single pull request. That's
> likely going to be too much overhead for our smoke test management.
> >
> >
> >>
> >> For patch-specific testing, usually the reviewer handles doing the code
> review of the patch and manually testing it to ensure that the patch does
> what it intends to do and does not cause severe regressions. However,
> there are times when there is value in getting QA involved on a specific
> patch (e.g. a complex patch with high regression risk). In those cases, I
> think there's value in following a process such as the one you've
> specified below to generate a try build with the pull request and ask for
> QA support to do testing around the patch to check for regressions, ensure
> the patch does what it says it should do, etc.
> >>
> >> I think there's a bug on file to add support for Gaia try builds to
> implement this. Note that QA can manually generate a custom Gaia build to
> follow the try build testing process right now, although having the
> automated try build process would definitely make this process easier to
> execute.
> >> Sincerely,
> >> Jason Smith
> >>
> >> Firefox OS QA Engineer
> >> Mozilla Corporation
> >> https://quality.mozilla.com
> >> On 6/11/2013 3:06 PM, Gareth Aye wrote:
> >>> Hi Everyone!
> >>>
> >>> I wanted to propose an idea for how we can make the RIL builds more
> relevant for and accessible to developers. Let me begin by saying that I
> truly appreciate the work that QA does and that I do personally read
> through all of the smoketest build results to see whether there are any
> regressions my work has introduced. I think it's a great resource! This is
> simply one idea for how we could make it an even better one.
> >>>
> >>> I believe (and I think others would agree) that tests are most useful
> when a patch is submitted for review. When I do a code review, I like to
> see the linter and test results alongside the patch to get a full
> perspective. Travis is actually *much* more useful to me than TBPL because
> Travis tells me how patches affect things when it matters most: when I'm
> reading code and deciding whether a patch is ready to land. Imagine how
> many things we wouldn't have broken if reviewers knew (when they accepted
> patches) whether or not they were breaking the smoketests!
> >>>
> >>> So, without further ado, I propose that we do just that! Let's run
> the smoketests on pull requests!
> >>>
> >>> There's a catch though -- we have a very small QA team and there are
> *lots* of pull requests. It's totally reasonable to question whether this
> is possible. It's my opinion that with some clever automation we could make
> the volume of QA work totally manageable. In broad strokes...
> >>>
> >>> 1. GitHub sends a notification to our service-x to tell us that a pull
> request has been opened against mozilla-b2g/gaia.
> >>> 2. service-x parses the pull request information to determine which
> bugzilla issue the pull request is trying to fix.
> >>> 3. service-x parses from the pull request which smoketests would be
> affected by the patch.
> >>> 4. service-x fetches the bug from bugzilla and parses out the
> environment (ie gaia version and gecko version) that the patch is intended
> to be applied to.
> >>> 5. service-x cherry-picks the patch onto the appropriate gaia branch
> and builds b2g.
> >>> 6. service-x sends a notification (maybe an email?) to qa with a link
> to download the build, the smoketests that need to be run, and a link to
> the pull request to comment on with the result of the smoketests.
> >>>
> >>> I think if we built a service-x, it would make the amount of work that
> qa needed to do in order to provide smoketest results to all of the pull
> requests manageable. The manual work would be minimized to flashing a build
> to a device, running only the affected smoketests, and commenting on a
> GitHub issue.
> >>>
> >>> Any thoughts :) ?
> >>>
> >>> --
> >>> Best,
> >>> Gareth
> >>>
> >>>
> >>>
> >>> _______________________________________________
> >>> Qa-b2g mailing list
> >>> [email protected]
> >>> https://mail.mozilla.org/listinfo/qa-b2g
> >>
> >> _______________________________________________
> >> B2g-release-drivers mailing list
> >> [email protected]
> >> https://mail.mozilla.org/listinfo/b2g-release-drivers
> >
> >
> >
> >
> > --
> > Best,
> > Gareth
> >
>
> _______________________________________________
> dev-b2g mailing list
> [email protected]
> https://lists.mozilla.org/listinfo/dev-b2g
>



-- 
Ben Francis
http://tola.me.uk