Hi
That is what the emulator is already doing. If we start emulating HW
down to individual CPU cycles, it'll only get slower. :(
I think this is wrong in some way. Otherwise I wouldn't see this:
1) running on TBPL (AWS) the internal timings reported show the specific
test going from 30
Just to be explicit, this means that changesets which regress these
tests will be backed out, right?
On 2014-04-08, 5:28 PM, Bill McCloskey wrote:
Hi everyone,
Starting today, we have new mochitests that show up as M-e10s (1 2 3 4 5).
These are mochitests-plain running inside an e10s content process
On 2014-04-08, 6:10 PM, Karl Tomlinson wrote:
I wonder whether the real problem here is that we have too many
bad tests that report false negatives, and these bad tests are
reducing the value of our testsuite in general. Tests also need
to be well documented so that people can understand what a
----- Original Message -----
From: Ehsan Akhgari ehsan.akhg...@gmail.com
To: Bill McCloskey wmcclos...@mozilla.com, dev-platform
dev-platform@lists.mozilla.org
Sent: Wednesday, April 9, 2014 6:51:46 AM
Subject: Re: New e10s tests on tinderbox
Just to be explicit, this means that changesets which regress these
tests will be backed out, right?
On 4/8/14, 6:51 AM, James Graham wrote:
On 08/04/14 14:43, Andrew Halberstadt wrote:
On 07/04/14 11:49 AM, Aryeh Gregor wrote:
On Mon, Apr 7, 2014 at 6:12 PM, Ted Mielczarek t...@mielczarek.org
wrote:
If a bug is causing a test to fail intermittently, then that test loses
value. It still has
On Wednesday 2014-04-09 11:00 -0700, Gregory Szorc wrote:
The simple solution is to have a separate in-tree manifest
annotation for intermittents. Put another way, we can describe
exactly why we are not running a test. This is kinda/sorta the realm
of bug 922581.
The harder solution is to
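The "simple solution" quoted above could look something like the following manifest fragment. This is only an illustrative sketch: the `[test_example.html]` name and the `intermittent` / `intermittent-bug` keys are hypothetical, not part of the actual manifestparser schema, and bug 922581 is referenced only because the message above mentions it.

```ini
# Hypothetical .ini manifest annotation (key names are illustrative):
[test_example.html]
# Instead of an unconditional skip that hides the reason:
#   skip-if = true
# record *why* the test is not run, so tooling can report and retry it:
intermittent = true
intermittent-bug = 922581
```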
On 4/9/14, 11:29 AM, L. David Baron wrote:
On Wednesday 2014-04-09 11:00 -0700, Gregory Szorc wrote:
The simple solution is to have a separate in-tree manifest
annotation for intermittents. Put another way, we can describe
exactly why we are not running a test. This is kinda/sorta the realm
of bug 922581.
Gregory Szorc writes:
2) Run marked intermittent tests multiple times. If it works all
25 times, fail the test run for inconsistent metadata.
We need to consider intermittently failing tests as failed, and we
need to only test things that always pass.
We can't rely on statistics to tell us
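The retry proposal quoted above ("run marked intermittent tests multiple times; if it works all 25 times, fail the run for inconsistent metadata") can be sketched in a few lines of Python. The `run_once` hook, the repeat count, and the verdict strings are illustrative assumptions, not actual Mozilla harness API:

```python
def verify_intermittent(run_once, n=25):
    """Run a test marked 'intermittent' n times and check the annotation.

    run_once: zero-argument callable returning True on pass, False on fail
    (a stand-in for a real harness invocation).

    If every run passes, the 'intermittent' annotation is stale, so the
    run is reported as an inconsistent-metadata failure, per the quoted
    proposal. If no run passes, the test is a permanent failure rather
    than an intermittent one.
    """
    passes = sum(1 for _ in range(n) if run_once())
    if passes == n:
        return "inconsistent-metadata", passes
    if passes == 0:
        return "perma-fail", passes
    return "intermittent-confirmed", passes
```

For example, a test that passes 24 of 25 runs keeps its annotation, while one that passes all 25 fails the run so the stale marking gets cleaned up.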
On 4/9/14, 2:07 PM, Karl Tomlinson wrote:
Gregory Szorc writes:
2) Run marked intermittent tests multiple times. If it works all
25 times, fail the test run for inconsistent metadata.
We need to consider intermittently failing tests as failed, and we
need to only test things that always pass.
On 4/9/14, 11:48 AM, Gregory Szorc wrote:
I feel a lot of people just shrug shoulders and allow the test to be
disabled (I'm guilty of it as much as anyone). From my perspective, it's
difficult to convince the powers that be that fixing intermittent failures
(that have been successfully swept
On 2014-04-09, 6:46 PM, Chris Peterson wrote:
On 4/9/14, 11:48 AM, Gregory Szorc wrote:
I feel a lot of people just shrug shoulders and allow the test to be
disabled (I'm guilty of it as much as anyone). From my perspective, it's
difficult to convince the powers that be that fixing intermittent