On Thu, Sep 3, 2015 at 8:45 PM, Jon Robson wrote:
> This is a follow-up from Dan Duvall's talk today during the metrics
> meeting about voting browser tests.
>
If you did not see it, the relevant segment is at 34:30-44:30:
https://youtu.be/Hy307xn99-c?t=34m26s
In particular, note this explanation:
> In the services team, we found that prominent coverage metrics are a very
> powerful motivator for keeping tests in order. We have set up 'voting'
> coverage reports, which fail the overall tests if coverage falls, and make
> it easy to check which lines aren't covered yet (via coveralls).
Dear Greg, and anyone else who is involved in deployment,
This is a follow-up from Dan Duvall's talk today during the metrics
meeting about voting browser tests.
Background:
This quarter, with the help of Dan Duvall, the reading web team has made
huge strides in our QA infrastructure.
I just want to say that I appreciate this overview.
Pine
On 09/03/2015 02:45 PM, Jon Robson wrote:
> The future!:
> Given this success:
> 1) I would like to see us run @integration tests on core, but I
> understand that, given the number of bugs, this might not be feasible
> yet.
> 2) We should run @integration tests prior to deployments to the
> cluster via the train (see the sketch below).
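For readers who have not worked with tag-based test selection: the
browser tests here are Cucumber, where a job can pick out just the
tagged scenarios with `cucumber --tags @integration`. A rough sketch of
the same idea in Python/pytest terms (all names below are made up for
illustration):

import pytest

# Hypothetical sketch: a test carrying an "integration" tag, analogous
# to Cucumber's @integration. The marker should be registered in
# pytest.ini (markers = integration: cross-component tests) to avoid
# warnings about unknown marks.
@pytest.mark.integration
def test_main_page_is_reachable():
    from urllib.request import urlopen
    # Illustrative end-to-end check; a real suite would target the
    # environment being vetted before the deployment train rolls.
    with urlopen("https://en.wikipedia.org/wiki/Main_Page") as resp:
        assert resp.status == 200

A pre-deployment job then runs only that subset (`pytest -m
integration`, or the `cucumber --tags` equivalent) and lets a non-zero
exit status block the train.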
In the services team, we found that prominent coverage metrics are a very
powerful motivator for keeping tests in order. We have set up 'voting'
coverage reports, which fail the overall tests if coverage falls, and make
it easy to check which lines aren't covered yet (via coveralls).
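To make the 'voting' part concrete, here is a minimal sketch of such a
gate in Python. It assumes a Cobertura-style coverage.xml as produced
by coverage.py's `coverage xml`, plus a checked-in baseline file; both
file names are illustrative, and Coveralls offers the same
falling-coverage check as a hosted service.

#!/usr/bin/env python3
"""Minimal sketch of a 'voting' coverage gate.

Assumes `coverage xml` wrote coverage.xml, and keeps the best coverage
percentage seen so far in .coverage-baseline (an illustrative name).
Exits non-zero when coverage falls, so CI marks the build as failed.
"""
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

REPORT = Path("coverage.xml")
BASELINE = Path(".coverage-baseline")

def current_coverage() -> float:
    # Cobertura-style reports carry an overall line-rate (0.0-1.0)
    # attribute on the root <coverage> element.
    root = ET.parse(REPORT).getroot()
    return float(root.get("line-rate")) * 100

def main() -> int:
    current = current_coverage()
    baseline = float(BASELINE.read_text()) if BASELINE.exists() else 0.0
    if current < baseline:
        print(f"FAIL: coverage fell from {baseline:.2f}% to {current:.2f}%")
        return 1
    # Ratchet the baseline upward so any future drop is caught too.
    BASELINE.write_text(f"{current:.4f}\n")
    print(f"OK: coverage {current:.2f}% (baseline was {baseline:.2f}%)")
    return 0

if __name__ == "__main__":
    sys.exit(main())

The key design point is that the report doesn't just inform, it votes:
a drop fails the build the same way a failing test does.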
Just to hop on the bandwagon here: this seems like the only sane path
going forward. One unmentioned benefit is that this is a step toward
continuous deployment. Having integration tests run on every commit,
and having failures block, is pretty much a requirement if Wikimedia
ever wants to get there.