I think the problem is that our infrastructure and test methods are not
powerful enough to run those tests in automation. For example, one smoketest
I've broken was about the LockScreen not updating the time correctly after
the user pulled the battery before rebooting (Bug 1119097). In cases like
this, the key issue is that our automation can't cover hardware-related
scenarios like battery or RIL, although I've heard some work, such as
allowing simulators to dial each other, is actually ongoing. However, I
believe this is hard work for our Gecko and Gonk teams: the latest news I
have is that they are planning to build a new abstraction on top of HAL to
decouple it from the real devices, and some teams have no plan for that yet.
Maybe we (I mean MoCo/MoFo) should treat this as one of the highest-priority
issues, since being misled by CI results that are inaccurate compared to
real devices, and waiting for a patch to become stable enough, is a constant
torment for us on the Gaia team. And the idea of a new abstraction for
testing and porting purposes was already being discussed at least a year
ago, as far as I know.
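
To make that concrete, here's a rough sketch of the kind of seam such an
abstraction could give us. This is purely illustrative Python; names like
BatteryBackend and FakeBattery are hypothetical, not an existing Gecko/Gonk
API. The point is that production code talks to an interface, and automation
swaps in a scriptable fake that can simulate events like a battery pull
before reboot:

from abc import ABC, abstractmethod


class BatteryBackend(ABC):
    """Hypothetical seam between the OS and the battery hardware."""

    @abstractmethod
    def level(self) -> float:
        """Current charge level in [0.0, 1.0]."""

    @abstractmethod
    def is_present(self) -> bool:
        """Whether a battery is physically attached."""


class DeviceBattery(BatteryBackend):
    """Real implementation: would read from the device HAL (stubbed here)."""

    def level(self) -> float:
        raise NotImplementedError("reads from real hardware")

    def is_present(self) -> bool:
        raise NotImplementedError("reads from real hardware")


class FakeBattery(BatteryBackend):
    """Scriptable fake for automation: tests can simulate hardware events."""

    def __init__(self, level: float = 1.0) -> None:
        self._level = level
        self._present = True

    def level(self) -> float:
        return self._level if self._present else 0.0

    def is_present(self) -> bool:
        return self._present

    def pull_battery(self) -> None:
        """Simulate the user removing the battery (the Bug 1119097 case)."""
        self._present = False


# In a smoketest, automation would inject the fake instead of the real HAL:
battery = FakeBattery()
battery.pull_battery()
assert not battery.is_present()  # code under test now sees the battery pull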

By the way, for regressions I'm used to bisecting to find the exact broken
patch. However, from what I've heard, bisecting Gecko or the whole of B2G is
impractical once you account for the build time. I wonder whether we could,
or should, ease this pain so that finding the actual broken part becomes
easier and more automatic. I believe that would really help.
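
One way around the build-time problem might be to bisect over prebuilt
nightly builds instead of compiling at every step, so each step costs one
flash-and-smoketest run rather than a full Gecko build. Here's a rough
sketch of that idea (again illustrative Python, not a real tool); the
is_good callback stands in for a hypothetical flash-and-test step:

from typing import Callable, Sequence


def bisect_builds(build_ids: Sequence[str],
                  is_good: Callable[[str], bool]) -> str:
    """Binary-search an ordered run of builds for the first bad one.

    Assumes build_ids[0] is known good, build_ids[-1] is known bad, and
    that is_good() flashes the given build and runs the smoketest. Each
    step costs one flash-and-test instead of a full build.
    """
    lo, hi = 0, len(build_ids) - 1  # lo: last known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_good(build_ids[mid]):
            lo = mid
        else:
            hi = mid
    return build_ids[hi]  # first build where the smoketest fails


# Toy usage: pretend every build from 2015-06-07 onward is broken.
if __name__ == "__main__":
    builds = ["2015-06-%02d" % day for day in range(1, 15)]
    first_bad = bisect_builds(builds, is_good=lambda b: b < "2015-06-07")
    print("regression first appears in", first_bad)  # -> 2015-06-07
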
On June 15, 2015 at 3:55 AM, "Kartikaya Gupta" <[email protected]> wrote:

> Is there any effort under way to making the smoketests automated and
> run as part of our regular automation testing? The B2G QA team does a
> great job of identifying regressions and tracking down the regressing
> changeset, but AFAIK this is an entirely manual process that happens
> after the change has landed. Ideally we should catch this on try
> pushes or on landing though, and for that we need to automate the
> smoketests.
>
> There have been a lot of complaints (and rightly so) about all sorts
> of B2G-breaking changesets landing. I myself have landed quite a few
> of them. I think it's unrealistic to expect every Gecko developer to
> run through all of the smoketests manually for every change they want
> to make (even just for the main devices/configurations we support).
> It's also unrealistic to expect them to reliably identify "high risk"
> changes for explicit pre-landing QA testing, because even small
> changes can break things badly on B2G given the variety of
> configurations we have there.
>
> I think the only reasonable long-term solution is to automate the
> smoketests, and I would like to know if there's any planned or
> in-progress effort to do that.
>
> Cheers,
> kats