Outside thought: I'm new to the current state of CI on FxOS, but I
wonder if we could run Raptor on B2GDroid APKs to test Gaia apps? We
could then use real devices on Amazon's Device Farm.
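
For illustration, here's a rough sketch of what kicking off such a
Device Farm run could look like with boto3. All ARNs are placeholders,
and it assumes Raptor could be repackaged as one of Device Farm's
natively supported test types (e.g. an Appium Python bundle):

    # Hypothetical sketch: schedule a B2GDroid APK run on AWS Device Farm.
    import boto3

    # Device Farm only exists in us-west-2.
    df = boto3.client("devicefarm", region_name="us-west-2")

    project_arn = "arn:aws:devicefarm:us-west-2:123456789012:project:PLACEHOLDER"
    pool_arn = "arn:aws:devicefarm:us-west-2:123456789012:devicepool:PLACEHOLDER"

    # Register the APK; Device Farm returns a pre-signed URL to PUT the
    # file to, and the upload must reach SUCCEEDED before scheduling.
    upload = df.create_upload(projectArn=project_arn,
                              name="b2gdroid.apk",
                              type="ANDROID_APP")
    # ... PUT the APK bytes to upload["upload"]["url"] here ...

    run = df.schedule_run(projectArn=project_arn,
                          appArn=upload["upload"]["arn"],
                          devicePoolArn=pool_arn,
                          name="raptor-gaia-coldlaunch",
                          test={"type": "APPIUM_PYTHON"})
    print(run["run"]["arn"])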

On Tue, Oct 6, 2015 at 10:33 AM, Eli Perelman <[email protected]> wrote:

> I've made this clear in several offline conversations, but everyone should
> know that Raptor automation in Taskcluster cannot happen in its current
> state. Rob Wood spent a solid 8 months trying to use emulators for performance
> testing in the Taskcluster/AWS infrastructure. The determination was that
> even with the most expensive AWS infra we could throw at the problem, the
> emulators just cannot be relied on for performance data. The variance was
> way too high, the change in variance too unpredictable, and root-causing it
> was practically impossible. The only way we can run Raptor in Taskcluster
> is on real devices, and it may take an army of devices to make that a
> reality.
>
> Consider the following:
> - A pass/fail determination needs two data points: a number for master
> and a number for a patch (see the sketch after this list)
> - The number of tests to run, e.g. testing System, Homescreen, and all the
> apps
> - The number of runs per test to ensure statistical relevance. Right now
> we do 30 runs per app, which is an industry-accepted minimum, but we *can*
> do less
> - The volume of daily PRs
> - The ability of Taskcluster to access and flash remote devices
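>
> For illustration, the pass/fail comparison itself could be as small as
> a Welch's t-statistic over the two samples. This is only a sketch: it
> assumes 30 launch times (ms) per side, and the 2.0 threshold is made
> up, not Raptor's actual criterion.
>
>     from statistics import mean, variance
>
>     def is_regression(master_runs, patch_runs, threshold=2.0):
>         # Welch's t-statistic; positive means the patch looks slower.
>         m, p = mean(master_runs), mean(patch_runs)
>         se = (variance(master_runs) / len(master_runs)
>               + variance(patch_runs) / len(patch_runs)) ** 0.5
>         return (p - m) / se > threshold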
>
> Being able to handle that kind of device volume is what is needed for
> pre-commit performance gating, and I'm sure I've glossed over other
> important factors. It's totally possible, but it will need a good
> investment from the Taskcluster team and in Bitbar devices.
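>
> To put a number on "an army of devices", a back-of-envelope sketch;
> every input below is an assumption for scale, not a measured figure:
>
>     tests_per_pr = 20       # System, Homescreen, and the apps
>     runs_per_test = 30      # our current statistical minimum
>     builds_per_pr = 2       # master baseline + the patch
>     prs_per_day = 50        # assumed daily PR volume
>     minutes_per_run = 2     # assumed flash + launch + measure time
>
>     device_minutes = (tests_per_pr * runs_per_test * builds_per_pr
>                       * prs_per_day * minutes_per_run)
>     print(device_minutes / (24 * 60))  # ~83 devices running 24/7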
>
> Thanks!
>
> Eli Perelman
>
> On Tue, Oct 6, 2015 at 11:27 AM, Kan-Ru Chen (陳侃如) <[email protected]>
> wrote:
>
>> David Scravaglieri <[email protected]> writes:
>>
>> > On Tue, Oct 6, 2015 at 5:50 PM, Kan-Ru Chen (陳侃如) <[email protected]>
>> > wrote:
>> >
>> >> David Scravaglieri <[email protected]> writes:
>> >>
>> >> > ▾ Automation
>> >> > • Create a test matrix to define which tests are running on which
>> >> > platform (Device, Mulet, Emulator)
>> >>
>> >> All tests? The most interesting data I'd like to see is which tests are
>> >> effectively disabled on all platforms. This kind of fallout could go
>> >> unnoticed and bite us when code breaks (for example some Nuwa tests).
>> >>
>> >
>> > It seems that not all tests make sense to run on every platform;
>> > having a test matrix will help us keep track of which tests are
>> > running and where.
>>
>> I agree that it would be helpful, but there are so many tests that I
>> think it would be hard to put them all into a matrix. No objections if
>> there is a good way to present the data.
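>>
>> One lightweight way to present it, as a sketch (suite names and
>> platforms here are illustrative):
>>
>>     # Map each suite to the platforms it runs on; anything mapped to
>>     # an empty set is effectively disabled everywhere.
>>     matrix = {
>>         "gecko-unit": {"device", "emulator"},
>>         "gij": {"emulator"},
>>         "nuwa": set(),
>>     }
>>
>>     disabled = sorted(s for s, p in matrix.items() if not p)
>>     print(disabled)  # ['nuwa']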
>>
>> >> > • Emulator-x86-KK
>> >> > Activate emulator-x86-kk on Treeherder, run Gecko unit tests and
>> >> > Gij tests
>> >> > • Emulator-x86-L
>> >> > Start porting emulator to L and activate it on Treeherder
>> >>
>> >> Are we going to run emulator-x86 and emulator-arm side by side or just
>> >> -x86?
>> >>
>> >
>> > I don't see value in keeping emulator-arm running once we get
>> > emulator-x86. Am I wrong?
>>
>> Emulator-x86 used to have less upstream support, but I don't know what
>> the current situation is. We may also miss bugs that only appear in
>> code compiled for ARM, though I think that's rare.
>>
>>               Kanru
_______________________________________________
dev-fxos mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-fxos
