Re: Visibility of disabled tests

2020-01-11 Thread David Burns
I think a lot of the problem is not necessarily technical, meaning I am not
sure that tooling alone will solve it; it is more of a social problem.

I think making sure that items are triaged and placed into people's
workflows sooner would solve this, but I also appreciate the difficulty when
there are competing priorities within teams.

This was a problem I started working on last quarter, mostly for web
platform tests, and would like to continue it this quarter. I am going to
be reaching out to more people over the quarter to see if we can solve
this. If you would like to be part of the process please let me know and I
will schedule an interview with you.

David

On Thu, 9 Jan 2020 at 00:28, Andrew Sutherland 
wrote:

> On 1/8/20 12:50 PM, Geoffrey Brown wrote:
> > Instead of changing the reviewers, how about:
> >  - we remind the sheriffs to needinfo
> >  - #intermittent-reviewers check that needinfo is in place when
> > reviewing disabling patches.
>
> To try and help address the visibility issue, we could also make
> searchfox consume the test-info-disabled-by-os taskcluster task[1] and:
>
> - put banners at the top of test files that say "Hey!  This is
> (sometimes) disabled on [android/linux/mac/windows]"
>
> - put collapsible sections at the top of the directory listings that
> explicitly call out the disabled tests in that directory. The idea would
> be to avoid people needing to scroll through the potentially long list
> of files to know which are disabled and provide some ambient awareness
> of disabled tests.
>
> If there's a way to get a similarly pre-built[2] mapping that would
> provide information about the orange factor of tests[3] or that it's
> been marked as needswork, that could also potentially be surfaced.
>
> Andrew
>
> 1:
>
> https://searchfox.org/mozilla-central/rev/be7d1f2d52dd9474ca2df145190a817614c924e4/taskcluster/ci/source-test/file-metadata.yml#62
>
> 2: Emphasis on pre-built in the sense that searchfox's processing
> pipeline really doesn't want to be issuing a bunch of dynamic REST
> queries that would add to its processing latency.  It would want a
> taskcluster job that runs as part of the nightly build process so it can
> fetch a JSON blob at full network speed.
>
> 3: I guess test-info-all at
>
> https://searchfox.org/mozilla-central/rev/be7d1f2d52dd9474ca2df145190a817614c924e4/taskcluster/ci/source-test/file-metadata.yml#91
> does provide the "failed runs" count and "total runs" which could
> provide the orange factor?  The "total run time, seconds" scaled by the
> "total runs" would definitely be interesting to surface in searchfox.
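A rough sketch of the kind of consumer described above, assuming the pre-built
test-info artifact decodes to a mapping of test path to the counters quoted in
footnote 3 (the URL and exact field names here are assumptions, not the real
task output):

    import json
    import urllib.request

    # Hypothetical location of the nightly JSON blob produced by the task above.
    ARTIFACT_URL = "https://example.invalid/test-info-all.json"

    with urllib.request.urlopen(ARTIFACT_URL) as resp:
        test_info = json.load(resp)

    # Assuming one record per test file, keyed by path, with the counters
    # named above ("failed runs", "total runs", "total run time, seconds").
    for path, data in sorted(test_info.items()):
        total = data.get("total runs", 0)
        if not total:
            continue
        orange_factor = data.get("failed runs", 0) / total
        avg_runtime = data.get("total run time, seconds", 0) / total
        if orange_factor > 0.05:
            print(f"{path}: {orange_factor:.1%} failure rate, {avg_runtime:.1f}s per run")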


Re: Intent to prototype: Web Share Target

2019-10-04 Thread David Burns
On Fri, 4 Oct 2019 at 03:30,  wrote:

>
>
> Web-platform-tests: requires manual tests.
>

Is this something that could be tested with testdriver.js inside wpt?

David


Re: Intent to ship: CSS 'display:block ruby'

2019-08-14 Thread David Burns
Are there any web platform tests for this or will they be added as part of
this work?

David

On Wed, 14 Aug 2019 at 17:38, Mats Palmgren  wrote:

> Summary:
> Add support for 'display:block ruby' which creates a block box
> with a ruby box inside it.
>
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1557825
>
> Standard: https://drafts.csswg.org/css-display/#the-display-properties
>
> Platform coverage: All
>
> Estimated or target release: v70
>
> Preference: None
>
> DevTools: The new value has been added to the auto-completion menu
> for the 'display' property
>
> Other browsers: Other UAs don't support this yet, AFAIK.
>
>
> /Mats


Re: Intent to ship: Blocking Worker/SharedWorker with non-JS MIME type

2019-07-25 Thread David Burns
On Jul 25, 2019, 12:23 PM +0200, Tom Schuster , wrote:
> On Wed, Jul 24, 2019 at 3:21 AM Boris Zbarsky  wrote:>
> > On 7/22/19 6:22 AM, Tom Schuster wrote:
> > > This was also discussed at https://github.com/whatwg/html/issues/3255.
> > > It seems like Chrome does NOT plan on shipping this at the moment.
> >
> > Does "at the moment" mean they are open to shipping it in the future if
> > we ship it and don't run into web compat issues, or that they are not
> > planning to ship at all? What are Safari's plans here? What is the
> > proposed path to interop?
> >
>
> After asking the Chrome team for clarification
> (https://github.com/whatwg/html/issues/3255), they are interested in
> shipping this, but need more time and information.
> So I propose restricting this change to Beta/Nightly and to wait for
> them or until we see too much fallout.

Are there wpt that we can write to make sure we eventually do have the
interop we want here?

David



Re: Intent to ship: Visual Viewport API on Android

2019-05-10 Thread David Burns
Not yet, as we are stabilising tests for GeckoView, but hopefully soon!

David
On May 10, 2019, 7:22 PM +0100, Botond Ballo , wrote:
> On Thu, May 9, 2019 at 7:50 AM David Burns  wrote:
> > There are a number of wpt that fail only in firefox. Are we planning on 
> > fixing those tests with this work?
>
> We are, at least on Android. (On desktop, some of the tests need
> desktop zooming, which we do not yet have, to pass.) A number of fixes
> have landed [1] yesterday.
>
> Is there a way to get a dashboard view similar to [2] with Android results?
>
> Thanks,
> Botond
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1477610
> [2] 
> https://jgraham.github.io/wptdash/?bugComponent=core%3A%3Alayout=%2Fvisual-viewport=Interop+Comparison


Re: Intent to ship: Visual Viewport API on Android

2019-05-09 Thread David Burns
There are a number of wpt that fail only in Firefox[1]. Are we planning on
fixing those tests with this work?

David

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1546387

On Thu, 9 May 2019 at 01:41, Botond Ballo  wrote:

> Hi everyone!
>
> I would like to ship the Visual Viewport API [1] on Android. The
> initial implementation [2] was done in Firefox 63 behind the pref
> "dom.visualviewport.enabled" (see "Intent to Implement" thread [3]).
> It has since seen bug fixes, polish, and expanded test coverage.
>
> I intend to ship it on Android only for now. The API is primarily
> useful in scenarios involving pinch-zooming, and we don't currently
> support pinch-zooming on desktop. I intend to enable it on desktop
> concurrently with support for pinch-zooming at a later date.
>
> Target release: Firefox 68 or 69, depending on when the patches are
> ready. If it doesn't make 68, I would like to get it into Fennec
> 68.1esr, as it's an important web compat feature.
>
> Tracking bug for shipping:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1512813
>
> Status in other implementation:
>   Blink: Shipping since Chrome 61 [4]
>   Safari: Available in Preview version [5]
>
> Web platform tests: The feature has good WPT coverage, with a mix of
> automatic and manual tests. We are now [6] passing almost all tests on
> Android; of the two remaining failures, one is a test harness
> limitation [7], and the other is pending resolution of a spec issue
> [8]. There are also a couple of tests which are not applicable to
> Android because they involve reflowing zoom which Android does not
> support.
>
> Any thoughts / feedback is appreciated!
>
> Thanks,
> Botond
>
> [1] https://github.com/WICG/visual-viewport/
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1357785
> [3]
> https://groups.google.com/d/topic/mozilla.dev.platform/gchNtWfv_bk/discussion
> [4] https://www.chromestatus.com/feature/5737866978131968
> [5] https://webkit.org/status/#feature-visual-viewport-api
> [6] https://bugzilla.mozilla.org/show_bug.cgi?id=1477610
> [7] https://bugzilla.mozilla.org/show_bug.cgi?id=1547827
> [8] https://bugzilla.mozilla.org/show_bug.cgi?id=1543485


Closure of #ateam irc channel

2019-01-22 Thread David Burns
For the last 2 years the Automation and Tools team (#ateam) has not existed
as a single team, as parts of it were split between Firefox Engineering
Operations and Product Integrity.

As traffic in #ateam has dropped, it is time to close this IRC channel and
ask people to go to more specific channels to get the help they need.


   - #build - Build team: Compilation of Firefox
   - #vcs - hg.mozilla.org
   - #engworkflow - Engineering Workflow: Tooling for engineers (Phabricator,
     Lando)
   - #bmo - bugzilla.mozilla.org
   - #sheriffs - Sheriffing team: Sheriffing of trees
   - #cia - CI and Automation team: Test harnesses and CI scheduling
   - #treeherder - Treeherder team: Treeherder-specific problems or queries
   - #perftest - Performance Testing: Performance test harnesses and their tests
   - #interop - Web Predictability and Interop: Marionette, WebDriver and Web
     Platform Tests


If you are unsure where your question falls, feel free to try any of the
channels listed and you will be directed to the best team to help.


David


Re: Intent to implement: report-to header as part of Reporting API

2019-01-10 Thread David Burns
On Thu, 10 Jan 2019 at 12:27, Andrea Marchesini 
wrote:

> web-platform-tests: just a little support. I wrote several mochitests which
> can be converted to WPTs with a bit of effort.
>

There don't appear to be any WPT, if I am looking in the right place[1].
Since Google are experimenting, it feels like we should have some WPT from
the start; even if it's a pure conversion of the mochitests, it would help
make sure we are interoperable with them.

[1]
https://searchfox.org/mozilla-central/source/testing/web-platform/tests/reporting/

David


Re: web-platform-tests that fail only in Firefox (from wpt.fyi data)

2018-12-15 Thread David Burns
Thanks for this Philip.

I have started raising bugs and blocking
https://bugzilla.mozilla.org/show_bug.cgi?id=1498357.

David

On Fri, 14 Dec 2018 at 08:41, Philip Jägenstedt  wrote:

> On Fri, Oct 19, 2018 at 2:42 PM Philip Jägenstedt 
> wrote:
> >
> > On Wed, Oct 17, 2018 at 11:53 PM Boris Zbarsky  wrote:
> > >
> > > On 10/13/18 3:27 AM, Philip Jägenstedt wrote:
> > > > Fiddling with these rules can reveal lots
> > > > more potential issues, and if you like I could provide reports on
> that too.
> > >
> > > I would be pretty interested in that, yes.  In particular, a report
> > > where there is 1 "not PASS and not FAIL" and 3 "PASS" would be pretty
> > > helpful, I suspect.
> >
> > Rerunning my script, it's apparent that unreliable Edge results [1]
> > lead to the same tests being considered lone failures or not for the
> > other browsers. So, I've used the same set of runs for this report of
> > what you suggested:
> > https://gist.github.com/foolip/e6014c9bcc8ca405219bf18542eb5d69
> >
> > It's not a long list, so I checked them all and they are timeouts.
> > This is sometimes the failure mode for genuine problems, so looking
> > over these might be valuable.
>
> Given the recent news [1] it won't be as relevant to consider the
> status of EdgeHTML for prioritization in other engines. Given that and
> the unreliable results, I've updated my script to consider only
> Chrome, Firefox and Safari. I also made the reports auto-update on a
> daily basis:
>
> https://foolip.github.io/ad-hoc-wpt-results-analysis/chrome-lone-failures.html
>
> https://foolip.github.io/ad-hoc-wpt-results-analysis/firefox-lone-failures.html
>
> https://foolip.github.io/ad-hoc-wpt-results-analysis/safari-lone-failures.html
>
> [1] https://github.com/MicrosoftEdge/MSEdge/blob/master/README.md
>


Re: Intent to implement and ship: Referrer Policy for CSS

2018-10-05 Thread David Burns
Are there web platform tests for this feature?

David

On Fri, 5 Oct 2018 at 13:32, Christoph Kerschbaumer 
wrote:

> We just realized we have never sent an intent to implement and ship for
> extending coverage of Referrer Policy to style sheets. Please accept my
> apology for not sending the intent-email earlier. Anyway, we are planning
> to ship that extension of Referrer Policy coverage to CSS in Firefox 64.
>
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1330487
> Link to standard:
> https://www.w3.org/TR/referrer-policy/#integration-with-css
>
> Platform coverage: everywhere.
>
> Estimated or target release: 64
>
> Is this feature enabled by default in sandboxed iframes? Yes, it is
>
> DevTools bug: No devtools support.
>
> Do other browser engines implement this? Chromium, since 59.
>
> Is this feature restricted to secure contexts? No, it isn’t.
>
> Cheers,
>   Christoph
>
>


Re: Intent to ship: unprefixed max-content and min-content for css sizing properties

2018-10-05 Thread David Burns
Are there any web platform tests for this feature?

David

On Tue, 2 Oct 2018 at 20:49, Boris Chiou  wrote:

> Summary:
> `max-content` and `min-content` are sizing values for width, min-width,
> max-width, height, min-height, max-height, inline-size, min-inline-size,
> max-inline-size, and flex-basis. We support these two keywords with -moz-
> prefix for many years, and Google Chrome has shipped them for 3 years. Our
> implementation on inline-size dimension are stable, and it'd be nicer to
> unprefix the keywords so people don't have to write both versions (i.e.
> prefixed and unprefixed) on their websites. Therefore, I think it's worth
> to unprefix them.
>
> Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1322780
> 
>
> Link to standard: https://drafts.csswg.org/css-sizing/#sizing-values
>
> Platform coverage: all platforms
>
> Estimated or target release: Firefox 64
>
> Do other browser engines implement this?
> Chrome has shipped unprefixed max-content and min-content values from 46
> [1]. Safari still uses -webkit- prefixed.
>
> Preference behind which this will be implemented: n/a
> DevTools bug: n/a
>
> [1] https://caniuse.com/#feat=intrinsic-width
>
> --
> Regards,
> Boris


Re: Intent to Ship: Shadow DOM and Custom Elements

2018-08-14 Thread David Burns
Do we have sufficient tests for this to guarantee webcompat and interop
with other browsers?

David

On 10 August 2018 at 15:49, smaug  wrote:

> I'm planning to keep Shadow DOM and Custom Elements turned on on
> beta/release builds.
> Target release is Firefox 63.
> prefs are dom.webcomponents.customelements.enabled and
> dom.webcomponents.shadowdom.enabled.
>
> Custom elements has been enabled in Nightly since January and Shadow DOM
> since late May.
>
> Bugs:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1471947 (Shadow DOM)
> https://bugzilla.mozilla.org/show_bug.cgi?id=1471948 (Custom Elements)
>
> Even though the APIs are totally distinct, web pages expect either both to
> be there or neither, so they need to ship together.
>
>
>
> -Olli


Re: Next year in web-platform-tests

2017-12-15 Thread David Burns
On 15 December 2017 at 21:29, smaug  wrote:

>
> Not being able to send (DOM) events which mimic user input prevents
> converting
> many event handling tests to wpt.
> Not sure if WebDriver could easily do some of this, or should browsers
> have some testing mode which exposes
> APIs for these kinds of cases.
>

The TestDriver work covers a lot of this; it is currently being worked on
and hopefully will solve this use case. It uses WebDriver under the hood to
do the user mimicking.

David



>
>
> -Olli
>
>
>
> or in the workflow that lead to you writing other
>> test types for cross-browser features.
>>
>> Thanks
>>
>> (Note: I set the reply-to for the email version of this message to be
>> off-list as an experiment to see if that avoids the anchoring effect where
>> early
>> replies set the direction of all subsequent discussion. But I'm very
>> happy to have an on-list conversation about anything that you feel merits a
>> broader audience).
>>
>


Re: Intent to unship: Linux 32bit Geckodriver executable

2017-11-21 Thread David Burns
Answered inline below.

On 21 November 2017 at 19:03, Nicholas Alexander <nalexan...@mozilla.com>
wrote:

>
>
> On Tue, Nov 21, 2017 at 5:25 AM, David Burns <dbu...@mozilla.com> wrote:
>
>> For the next version of geckodriver I am intending that it not ship a
>> Linux
>> 32 bit version of Geckodriver. Currently it accounts for 0.1% of downloads
>> and we regularly get somewhat cryptic intermittents which are hard to
>> diagnose.
>>
>
> I don't see the connection between 32-bit geckodriver and the test changes
> below.  Is it that the test suites we run require 32-bit geckodriver, and
> that's the only consumer?
>

Linux 32 bit Geckodriver is only used on that platform for testing wdspec
tests. It is built as part of the Linux 32 bit build and then moved to
testers.


>
>
>> *What does this mean for most people?* We will be turning off the WDSpec
>> tests, a subset of Web-Platform Tests used for testing the WebDriver
>> specification.
>
>
> Are these WDSpec tests run anywhere?  My long play here is to use a Java
> Web Driver client to drive web content to test interaction with GeckoView,
> so I'm pretty interested in our implementation conforming to the Web Driver
> spec ('cuz any Java Web Driver client will expect it to do so).  Am I
> missing something here?
>

They are currently run on OSX, Windows 32-bit and 64-bit, and Linux 64-bit.
We are not dropping support for WebDriver. Actually, this will allow us to
focus more on where our users are.

As for mobile, geckodriver is designed to speak to Marionette over TCP. As
long as we can speak to the view, probably over adb, geckodriver can then
speak to Marionette. This would make the host mostly irrelevant, and seeing
how Linux 32-bit is barely used, it's not going to affect any work that you
do.
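To make that concrete, a rough sketch of the idea (the port forwarding and
the geckodriver flags here are assumptions about how such a setup could be
wired together, not a tested recipe):

    import subprocess
    import requests

    # Forward Marionette's default TCP port from the device to the host.
    subprocess.check_call(["adb", "forward", "tcp:2828", "tcp:2828"])

    # Point geckodriver at the already-running Marionette instead of launching Firefox.
    geckodriver = subprocess.Popen([
        "geckodriver", "--connect-existing", "--marionette-port", "2828",
        "--port", "4444",
    ])

    # Any WebDriver client can now talk to geckodriver as usual.
    session = requests.post("http://localhost:4444/session",
                            json={"capabilities": {"alwaysMatch": {}}}).json()
    print(session)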


>
> This is all rather vaporish, so if my concerns aren't concrete or
> immediate enough, I'll accept that.
>

Hopefully this gives you a little more confidence :)

David


Intent to unship: Linux 32bit Geckodriver executable

2017-11-21 Thread David Burns
For the next version of geckodriver I am intending not to ship a Linux
32-bit build. Currently it accounts for 0.1% of downloads and we regularly
get somewhat cryptic intermittents which are hard to diagnose.
diagnose.

*What does this mean for most people?* We will be turning off the WDSpec
tests, a subset of Web-Platform Tests used for testing the WebDriver
specification. Testharness.js tests and reftests in Web-Platform Tests will
continue to work, as they use Marionette via another means.
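For context, a minimal sketch of what a wdspec-style test looks like (they
are plain pytest functions; the "session" fixture is provided by the wpt
harness, and this particular test is illustrative rather than an existing
one):

    # A wdspec test is ordinary Python run under pytest; the harness-provided
    # "session" fixture wraps a WebDriver session speaking to geckodriver/Marionette.
    def test_get_current_url(session):
        session.url = "about:blank"           # Navigate To command
        assert session.url == "about:blank"   # Get Current URL command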

Let me know if you have any questions.

David


Re: Intent to require Python 3 to build Firefox 59 and later

2017-11-12 Thread David Burns
I am not saying it should, but if we have a requirement for Python 3 we are
also going to have a requirement for Python 2, so both need to be available
for local development.

David

On 11 November 2017 at 14:10, Andrew Halberstadt <ahalberst...@mozilla.com>
wrote:

> On Fri, Nov 10, 2017 at 9:44 PM David Burns <dbu...@mozilla.com> wrote:
>
>> My only concern about this is how local developer environments are going
>> to be when it comes to testing. While I am sympathetic to moving to python
>> 3 we need to make sure that all the test harnesses have been moved over and
>> this is something that needs a bit of coordination. Luckily a lot of the
>> mozbase stuff is already moving to python 3 support but that still means we
>> need to have web servers and the actual test runners moved over too.
>>
>> David
>>
>
> For libraries like mozbase, I think the plan will be to support both 2 and
> 3 at
> the same time. There are libraries (like 'six') that make this possible.
> I'd bet
> there are even parts of the build system that will still need to support
> both at
> the same time.
>
> With that in mind, I don't think python 3 support for test harnesses needs
> to
> block the build system.
>
> Andrew
>
> On 10 November 2017 at 23:27, Gregory Szorc <g...@mozilla.com> wrote:
>>
>>> For reasons outlined at
>>> https://bugzilla.mozilla.org/show_bug.cgi?id=1388447#c7, we would like to
>>> make Python 3 a
>>> requirement to build Firefox sometime in the Firefox 59 development cycle.
>>> (Firefox 59 will be an ESR release.)
>>>
>>> The requirement will likely be Python 3.5+. Although I would love to
>>> make that 3.6 if possible so we can fully harness modern features and
>>> performance.
>>>
>>> I would love to hear feedback - positive or negative - from downstream
>>> packagers and users of various operating systems and distributions about
>>> this proposal.
>>>
>>> Please send comments to dev-bui...@lists.mozilla.org or leave them on
>>> bug 1388447.
>>>
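As a concrete illustration of the "support 2 and 3 at the same time"
approach mentioned above, a minimal sketch using six (these are just the
usual six helpers, nothing mozbase-specific):

    from __future__ import print_function

    import six
    from six.moves.urllib.parse import urlparse  # one import path that works on 2 and 3

    def is_text(value):
        # str on Python 3; str or unicode on Python 2
        return isinstance(value, six.string_types)

    if six.PY2:
        # Any Python 2-only fallbacks would live behind this check.
        pass

    print(is_text(u"gecko"), urlparse("https://hg.mozilla.org").netloc)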


Re: Intent to require Python 3 to build Firefox 59 and later

2017-11-10 Thread David Burns
My only concern about this is what local developer environments are going to
look like when it comes to testing. While I am sympathetic to moving to
Python 3, we need to make sure that all the test harnesses have been moved
over, and that needs a bit of coordination. Luckily a lot of the mozbase
code is already moving to Python 3 support, but that still means we need the
web servers and the actual test runners moved over too.

David



On 10 November 2017 at 23:27, Gregory Szorc  wrote:

> For reasons outlined at
> https://bugzilla.mozilla.org/show_bug.cgi?id=1388447#c7, we would like to
> make Python 3 a requirement
> to build Firefox sometime in the Firefox 59 development cycle. (Firefox 59
> will be an ESR release.)
>
> The requirement will likely be Python 3.5+. Although I would love to make
> that 3.6 if possible so we can fully harness modern features and
> performance.
>
> I would love to hear feedback - positive or negative - from downstream
> packagers and users of various operating systems and distributions about
> this proposal.
>
> Please send comments to dev-bui...@lists.mozilla.org or leave them on bug
> 1388447.
>


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-09-04 Thread David Burns
I don't think anyone would disagree with the reasons for doing this. I,
like James who brought it up earlier, am concerned that from the emails we
appear to think that implementing the wire protocol would be sufficient to
make sure we have the same semantics.

As mentioned by Karl earlier, it was part of the Browser Testing Tools WG
but was dropped due to lack of interest from vendors; this thread now
suggests that feeling has changed. I am happy, as chair, to see about adding
it back when we recharter, which luckily happens at the end of September!

I am happy to create a separate thread to get this started.

David

On 31 August 2017 at 21:51, Jim Blandy  wrote:

> Sorry for the premature send. The complete message should read:
>
> The primary goals here are not related to automation and testing.
>
> - We want to migrate the Devtools console and the JS debugger to the CDP,
> to replace an unpopular protocol with a more widely-used one.
>
> - Servo wants its devtools server to use an industry-standard protocol, not
> Firefox's custom thing.
>
> - We'd like to have a devtools server we can share between Gecko and Servo.
>
> - We'd like to move devtools server code out of JS and into a language that
> gives us better control over memory use and performance, because the
> current server, implemented in JS, introduces a lot of noise into
> measurements, affecting the quality of the performance data we're able to
> collect.
>
> - We'd like to share front ends (i.e. clients) between Firefox and Servo.
> devtools.html already implements both the Firefox protocol and the CDP.
>
> Using a protocol that's more familiar to web developers doing automation
> and testing is also good, and we're hoping that will have ancillary
> benefits. For example, it turns out that there is lots of interest in our
> new JS debugger UI, which has hundreds of contributors now. I don't know
> why people want a JS debugger UI, but they do. I believe that Firefox's use
> of a bespoke protocol prevents many similar opportunities for
> collaboration.
>
>
> On Thu, Aug 31, 2017 at 1:36 PM, Jim Blandy  wrote:
>
> > Certain bits of the original post are getting more emphasis than I had
> > anticipated. Let me try to clarify why we in Devtools want this change or
> > something like it.
> >
> > The primary goals here are not related to automation and testing. They
> are:
> >
> >- to allow Devtools to migrate the console and the JS debugger to the
> >CDP;
> >- to start a tools server that can be shared between Gecko and Servo;
> >- to replace Gecko's devtools server, implemented in JS, with one
> >implemented in Rust, to reduce memory consumption and introduce less
> noise
> >into performance and memory measurements
> >
> >
> >
> > and to help us share code with Servo. Our user interfaces already work
> > with the CDP.
> >


Re: Implementing a Chrome DevTools Protocol server in Firefox

2017-08-30 Thread David Burns
Do we know if the other vendors would see value in having this spec'ed
properly so that we have true interop here? Reverse engineering  seems like
a "fun" project but what stops people from breaking stuff without realising?

David

On 30 August 2017 at 22:55, Michael Smith  wrote:

> Hi everyone,
>
> Mozilla DevTools is exploring implementing parts of the Chrome DevTools
> Protocol ("CDP") [0] in Firefox. This is an HTTP, WebSockets, and JSON
> based protocol for automating and inspecting running browser pages.
>
> Originally built for the Chrome DevTools, it has seen wider adoption with
> outside developers. In addition to Chrome/Chromium, the CDP is supported by
> WebKit, Safari, Node.js, and soon Edge, and an ecosystem of libraries and
> tools already exists which plug into it, for debugging, extracting
> performance data, providing live-preview functionality like the Brackets
> editor, and so on. We believe it would be beneficial if these could be
> leveraged with Firefox as well.
>
> The initial implementation we have in mind is an alternate target for
> third-party integrations to connect to, in addition to the existing Firefox
> DevTools Server. The Servo project has also expressed interest in adding
> CDP support to improve its own devtools story, and a PR is in flight to
> land a CDP server implementation there [1].
>
> I've been working on this project with guidance from Jim Blandy. We've
> come up with the following approach:
>
> - A complete, typed Rust implementation of the CDP protocol messages and
> (de)serialization lives in the "cdp" crate [2], automatically generated
> from the protocol's JSON specification [3] using a build script (this
> happens transparently as part of the normal Cargo compilation process).
> This comes with Rustdoc API documentation of all messages/types in the
> protocol [4] including textual descriptions bundled with the specification
> JSON. The cdp crate will likely track the Chrome stable release for which
> version of the protocol is supported. A maintainers' script exists which
> can find and fetch the appropriate JSON [5].
>
> - The "tokio-cdp" crate [6] builds on the types and (de)serialization
> implementation in the cdp crate to provide a server implementation built on
> the Tokio asynchronous I/O system. The server side provides traits for
> consuming incoming CDP RPC commands, executing them concurrently and
> sending back responses, and simultaneously pushing events to the client.
> They are generic over the underlying transport, so the same backend
> implementation could provide support for "remote" clients plugging in over
> HTTP/WebSockets/JSON or, for example, a browser-local client communicating
> over IPDL.
>
> - In Servo, a new component plugs into the cdp and tokio-cdp crates and
> acts on behalf of connected CDP clients in response to their commands,
> communicating with the rest of the Servo constellation. This server is
> disabled by default and can be started by passing a "--cdp" flag to the
> Servo binary, binding a TCP listener to the loopback interface at the
> standard CDP port 9222 (a different port can be specified as an option to
> the flag).
>
> - The implementation we envision in Firefox/Gecko would act similarly: a
> new Rust component, disabled by default and switched on via a command line
> flag, which binds to a local port and mediates between Gecko internals and
> clients connected via tokio-cdp.
>
> We chose to build this on Rust and the Tokio event loop, along with the
> hyper HTTP library and rust-websocket which plug into Tokio.
>
> Rust and Cargo provide excellent facilities for compile-time code
> generation which integrate transparently into the normal build process,
> avoiding the need to invoke scripts by hand to keep generated artifacts in
> sync. The Rust ecosystem provides libraries such as quote [7] and serde [8]
> which allow us to auto-generate an efficient, typed, and self-contained
> interface for the entire protocol. This moves the complexity of ingesting,
> validating, and extracting information from client messages out of the
> Servo- and Gecko-specific backend implementations, helps to ensure they
> conform correctly to the protocol specification, and provides a structured
> way of upgrading to new protocol versions.
>
> As for Tokio, the event loop and Futures-based model of concurrency it
> offers maps well to the Chrome DevTools Protocol. RPC commands typically
> execute simultaneously, returning responses in order of completion, while
> the server continuously generates events to which the client has
> subscribed. Under Tokio we can spawn multiple lightweight Tasks, dispatch
> messages to them, and multiplex their responses back over the single client
> connection. The Tokio event loop is nicely self-contained to the one or,
> optionally, more threads it is allocated, so the rest of the application
> doesn't need to be aware of it.
>
> Use of Tokio is becoming a standard in the Rust 
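For a feel of the protocol being described, a minimal sketch of a CDP client
session over HTTP + WebSockets + JSON (this assumes an endpoint is already
listening on the standard port 9222 and uses the requests and
websocket-client packages; Page.navigate is an existing CDP method):

    import json
    import requests
    import websocket  # the websocket-client package

    # The /json endpoint lists debuggable targets and their WebSocket URLs.
    targets = requests.get("http://localhost:9222/json").json()
    ws = websocket.create_connection(targets[0]["webSocketDebuggerUrl"])

    # Commands are JSON objects with an id, a method and params; the response
    # echoes the id back, and events arrive on the same connection.
    ws.send(json.dumps({"id": 1,
                        "method": "Page.navigate",
                        "params": {"url": "https://example.org/"}}))
    print(ws.recv())
    ws.close()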

Fwd: Web Compatibility Testing Survey

2017-05-03 Thread David Burns
Hi All,

If you are working on web-exposed features, could you please take a few
minutes to complete the survey? This will help us prioritize work on Web
Platform Tests, which helps us with our web compatibility story.

David
-- Forwarded message --
From: 
Date: 28 April 2017 at 14:19
Subject: Web Compatibility Testing Survey
To: dev-platform@lists.mozilla.org


I've invited you to fill in the following form:
Web Platform Tests Survey

To fill it in, visit:
https://docs.google.com/forms/d/e/1FAIpQLScsbrVikqdnH32Qs-hV8YWgBi6kb5QRJtYG3KGqHba2eO2Dzw/viewform?c=0&w=1&usp=mail_form_link

Hi All,

The Product Integrity Automation Harnesses team would like your help
prioritizing web compatibility work.

We are trying to understand how people test web-exposed features they are
working on.

Please can you fill in the following survey[1]? It shouldn’t take more than
5 minutes. It will help us ensure that we focus on the projects that will
have the greatest impact on gecko development.

Thanks

David



Re: unowned module: Firefox::New Tab Page, help me find an owner

2017-03-22 Thread David Burns
On 22 March 2017 at 13:49, Ben Kelly  wrote:

> On Wed, Mar 22, 2017 at 9:39 AM,  wrote:
>
>
> Finding someone to own the feature and investigate intermittents is
> important too, but that doesn't mean the tests have zero value.
>

It just strikes me that we are going to disable tests until they are all
gone, and then we have dead code in the tree and still no one to own it.
It's a longer process that could end up at the same end point.

David


Re: The future of commit access policy for core Firefox

2017-03-13 Thread David Burns
As the manager of the sheriffs, I am in favour of this proposal.

The reasons are as follows (and note there are only 3 paid sheriffs to try
to cover the world):

* A number of r+-with-nits patches end up in the sheriffs' queue for
checkin-needed. This puts the onus on the sheriffs, not the reviewer, to
check that the right thing has been done. The sheriffs do not have the
context knowledge of the patch, never mind the knowledge of the system being
changed.

* The throughput of patches into the trees is only going to increase. If
there are failures and the sheriffs need to back out, this can be a long
process depending on the failure, leading to pile-ups of broken patches.

A number of people have complained that using autoland doesn't allow us to
fail forward on patches. While that is true, there is the ability to do
T-shaped try pushes to make sure that you at least compile on all platforms.
This can easily be done from MozReview (note: I am not suggesting we move to
MozReview).

Regarding the burden on reviewers, the comments in this thread just
highlight how broken our current process is in having to flag individual
people for reviews. This leads to a handful of people doing 50%+ of reviews
on the code. We, at least visibly, don't do enough to encourage new
reviewers, which would lighten the load and allow re-reviews when there are
nits. Also, we need to do work to automate the removal of nits to limit the
amount of re-reviews that reviewers need to do.

We should try to mitigate the security problem and fix our nit problem
instead of complaining that we can't handle re-reviews because of nits.

David

On 9 March 2017 at 21:53, Mike Connor  wrote:

> (please direct followups to dev-planning, cross-posting to governance,
> firefox-dev, dev-platform)
>
>
> Nearly 19 years after the creation of the Mozilla Project, commit access
> remains essentially the same as it has always been.  We've evolved the
> vouching process a number of times, CVS has long since been replaced by
> Mercurial & others, and we've taken some positive steps in terms of
> securing the commit process.  And yet we've never touched the core idea of
> granting developers direct commit access to our most important
> repositories.  After a large number of discussions since taking ownership
> over commit policy, I believe it is time for Mozilla to change that
> practice.
>
> Before I get into the meat of the current proposal, I would like to
> outline a set of key goals for any change we make.  These goals have been
> informed by a set of stakeholders from across the project including the
> engineering, security, release and QA teams.  It's inevitable that any
> significant change will disrupt longstanding workflows.  As a result, it is
> critical that we are all aligned on the goals of the change.
>
>
> I've identified the following goals as critical for a responsible commit
> access policy:
>
>
>- Compromising a single individual's credentials must not be
>sufficient to land malicious code into our products.
>- Two-factor auth must be a requirement for all users approving or
>pushing a change.
>- The change that gets pushed must be the same change that was
>approved.
>- Broken commits must be rejected automatically as a part of the
>commit process.
>
>
> In order to achieve these goals, I propose that we commit to making the
> following changes to all Firefox product repositories:
>
>
>- Direct commit access to repositories will be strictly limited to
>sheriffs and a subset of release engineering.
>   - Any direct commits by these individuals will be limited to fixing
>   bustage that automation misses and handling branch merges.
>- All other changes will go through an autoland-based workflow.
>   - Developers commit to a staging repository, with scripting that
>   connects the changeset to a Bugzilla attachment, and integrates with 
> review
>   flags.
>   - Reviewers and any other approvers interact with the changeset as
>   today (including ReviewBoard if preferred), with Bugzilla flags as the
>   canonical source of truth.
>   - Upon approval, the changeset will be pushed into autoland.
>   - If the push is successful, the change is merged to
>   mozilla-central, and the bug updated.
>
> I know this is a major change in practice from how we currently operate,
> and my ask is that we work together to understand the impact and concerns.
> If you find yourself disagreeing with the goals, let's have that discussion
> instead of arguing about the solution.  If you agree with the goals, but
> not the solution, I'd love to hear alternative ideas for how we can achieve
> the outcomes outlined above.
>
> -- Mike
>

Re: Sheriff Highlights and Summary in February 2017

2017-03-10 Thread David Burns
I went back and checked the effect of the servo sync pushes on autoland, and
it is negligible. The numbers below cover 01 February 2017 to 10 March 2017
(as of sending this email); merge commits have been removed.

Autoland:
Total Servo Sync Pushes: 152
Total Pushes: 1823
Total Backouts: 144
Percentage of backouts: 7.8990674712
Percentage of backouts without Servo: 8.61759425494

Mozilla-Inbound:
Total Pushes: 1472
Total Backouts: 166
Percentage of backouts: 11.277173913


I will look into why autoland, with more pushes, is resulting in fewer
backouts. The thing to note is that autoland, by its nature, does not allow
us to fail forward like inbound without having to get a sheriff to land the
code.

I think, and this is my next area to investigate, that the 1-bug-per-push
model of autoland could be helping to keep the percentage of backouts lower.
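For reference, a rough sketch of how percentages like the ones above can be
pulled from the pushlog (json-pushes is the documented pushlog JSON format;
the merge and backout detection below is a simplistic assumption):

    import json
    import urllib.request

    def backout_rate(repo):
        url = ("https://hg.mozilla.org/integration/%s/"
               "json-pushes?version=2&full=1" % repo)
        with urllib.request.urlopen(url) as resp:
            pushes = json.load(resp)["pushes"].values()

        total = backouts = 0
        for push in pushes:
            descs = [cset["desc"] for cset in push["changesets"]]
            if any(d.lower().startswith("merge") for d in descs):
                continue  # drop merge commits, as in the numbers above
            total += 1
            if any(d.startswith("Backed out") or d.startswith("Back out") for d in descs):
                backouts += 1
        return 100.0 * backouts / total if total else 0.0

    for repo in ("autoland", "mozilla-inbound"):
        print(repo, round(backout_rate(repo), 1))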

David

On 7 March 2017 at 21:29, Chris Peterson  wrote:

> On 3/7/2017 3:38 AM, Joel Maher wrote:
>
>> One large difference I see between autoland and mozilla-inbound is that on
>> autoland we have many single commits/push whereas mozilla-inbound it is
>> fewer.  I see the Futurama data showing pushes and the sheriff report
>> showing total commits.
>>
>
> autoland also includes servo commits imported from GitHub that won't break
> Gecko. (They might break the linux64-stylo builds, but they won't be backed
> out for that.)
>


Re: Sheriff Highlights and Summary in February 2017

2017-03-07 Thread David Burns
This is just a rough count of how many pushes had a backout and how many
didn't. I don't have any data on whether these were full or partial backouts.

If there are multiple bugs in a push on inbound, a sheriff may revert the
entire push (or might not, depending on how obvious the error is and the
availability of the person pushing).

As jmaher pointed out, and so did you, pushes are more succinct on autoland,
which could mean it's simpler to know what to revert. Other than doing
some A/B testing to prove the hypothesis of the sheriffs in this thread, a
lot of the data is going to be anecdotal as to why there is a difference.

David

On 7 March 2017 at 12:57, Boris Zbarsky <bzbar...@mit.edu> wrote:

> On 3/7/17 6:23 AM, David Burns wrote:
>
>>- Autoland 6%.(24 backouts out of 381 pushes)
>>- Inbound 12% (30 backouts out of 251 pushes)
>>
>
> Were those full backouts or partial backouts?
>
> That is, how are we counting a multi-bug push to inbound where one of the
> bugs gets backed out?  Note that such a thing probably doesn't happen on
> autoland.
>
> -Boris


Re: Sheriff Highlights and Summary in February 2017

2017-03-07 Thread David Burns
One thing that we have also noticed is that the backout rate on autoland is
lower than on inbound.

In the last 7 days the backout rate is averaging (merges have been removed):

   - Autoland 6% (24 backouts out of 381 pushes)
   - Inbound 12% (30 backouts out of 251 pushes)

I don't have graphs to show this, but when I look at
https://futurama.theautomatedtester.co.uk/ each week I have seen this result
consistently for about a month.

David


On 7 March 2017 at 09:03, Carsten Book  wrote:

> Hi Lawrence,
>
> most (I would say 95%) of the backouts are for code issues - this includes
> bustages and test failures.
>
> Of these code issues I would guess it's about 2/3 for breaking tests and
> 1/3 for build bustages.
>
> The other backout reasons are merge conflicts / backout requests for
> changes causing new regressions outside of our test suites / backout
> requests from developers, etc.
>
> In February the backout rate (excluding the servo merge with 8315
> changesets of the 13242) was 297 backouts in 4927 changesets = ~6%.
> In January the backout rate was 302 backouts in 4121 changesets = ~7%.
>
> We had a much higher backout rate in the past I think, so it has stabilized
> now at this 6-7% backout rate over the last months.
>
> If you think it's useful, I can provide a more detailed analysis for the
> next monthly report, like x% of backouts because of build bustages, y% of
> backouts for test failures, etc.
>
> Cheers,
>  -Tomcat
>
>
> On Mon, Mar 6, 2017 at 11:13 PM, Lawrence Mandel 
> wrote:
>
>> Hi Tomcat,
>>
>> Do you have any more details about the reasons why the 297 changesets
>> needed to be backed out?
>>
>> Thanks,
>>
>> Lawrence
>>
>> On Wed, Mar 1, 2017 at 6:05 AM, Carsten Book  wrote:
>>
>>> Hi,
>>>
>>> We will be more active in 2017 and inform more about what's happening in
>>> Sheriffing, and since it's already March :)
>>>
>>> In February we had about 13242 changesets landed on mozilla-central
>>> monitored by Sheriffs.
>>>
>>> (The high number of changes is because of the merge from servo to
>>> mozilla-central with about 8315 changesets).
>>>
>>> 297 changesets were backed out in February.
>>>
>>> Besides this, Sheriffs took part in doing uplifts and checkin-needed bugs.
>>>
>>> The current Orangefactor is 10.43 (7250 test failures in 695
>>> pushes in the last 7 days).
>>> You can find the list of top Intermittent failure bugs here
>>> https://brasstacks.mozilla.com/orangefactor/
>>>
>>> You can find more statistics here: https://futurama.theautomatedt
>>> ester.co.uk/
>>>
>>> A big thanks to the Team especially our Community Sheriffs for handling
>>> the new Tier 2 Stylo Build on integration trees  and mozilla-central  and
>>> the teamwork with the Developers!
>>>
>>> If you also want to be a Community Sheriff, as part of more blogging
>>> about Sheriffing I published a blog post about it:
>>> https://blog.mozilla.org/tomcat/2017/02/27/community-sheriffs/
>>>
>>> Let us know if you have any questions or feedback about Sheriffing.
>>>
>>> Cheers,
>>>  -Tomcat
>>>
>>
>>
>


Re: Please don't abuse "No bug" in commit messages

2017-02-03 Thread David Burns
Ideally we would be doing this in a way that lets us audit things quickly in
case we think a bad actor has compromised someone's machine.

While we can say what we want and how we want the process to look, we need
to work with what we have until we have a better process. That could be
between now and the end of the year. Ideally, and hopefully it's not too far
away, all pushes will be going through autoland and then we won't have
situations like this.

In the short term, when the sheriffs do merges I will be getting them to
have a look at all "no bug" pushes. Anything that goes outside of
https://developer.mozilla.org/docs/Mozilla/Developer_guide/Committing_Rules_and_Responsibilities
will be followed up on.
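As a sketch of what that check could look like at merge time (the revset and
the "no bug" match below are assumptions, not an agreed process):

    import subprocess

    # List recent public commits as "<node> <author> <first line of message>".
    log = subprocess.check_output([
        "hg", "log", "-r", "reverse(public() and date('-7'))",
        "--template", "{node|short} {author|user} {desc|firstline}\n",
    ], universal_newlines=True)

    for line in log.splitlines():
        parts = line.split(" ", 2)
        if len(parts) < 3:
            continue
        node, author, desc = parts
        if desc.lower().startswith("no bug"):
            print("%s by %s: %s" % (node, author, desc))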

David



On 3 February 2017 at 19:01, Steve Fink  wrote:

> On 02/03/2017 09:29 AM, Gijs Kruitbosch wrote:
>
>> On 03/02/2017 15:11, Ryan VanderMeulen wrote:
>>
>>> A friendly reminder that per the MDN commit rules, the use of "No bug"
>>> in the commit message is to be used sparingly - in general for minor things
>>> like whitespace changes/comment fixes/etc where traceability isn't as
>>> important.
>>> https://developer.mozilla.org/docs/Mozilla/Developer_guide/Committing_Rules_and_Responsibilities
>>>
>>> I've come across uses of "No bug" commits recently where entire upstream
>>> imports of vendored libraries were done. This is bad for multiple reasons:
>>> * It makes sheriffing difficult - if the patch is broken and needs
>>> backing out, where should the comment go? When it gets merged to
>>> mozilla-central, who gets notified?
>>>
>> As Greg said, the committer / pusher, via IRC and/or email.
>>
>>> * It makes tracking a pain - what happens if that patch ends up needing
>>> uplift?
>>>
>> Generally, the person committing it will know whether it needs uplift,
>> and would have filed a bug if it did - and would file one if it does after
>> the fact. We can already not rely on bugs being marked fixed/verified on a
>> trunk branch when searching bugzilla for uplift requests (cf. "disable
>> feature Y on beta" bugs) and so I don't see how this is relevant.
>>
>>> What about when that patch causes conflicts with another patch needing
>>> uplift?
>>>
>> That seems like it hardly ever happens in the example you gave (vendored
>> libraries and other wholesale "update this dir to match external canonical
>> version"), and if/when it does, the people who would be likely to be
>> involved in such changes (effectively changes to vendored deps that aren't
>> copied from the same canonical source!) are also likely to be aware of what
>> merged when.
>>
>>> What if it causes a regression and a blocking bug needs to be filed?
>>>
>> Then you file a bug and needinfo the person who landed the commit (which
>> one would generally do anyway, besides just marking it blocking the
>> regressor).
>>
>>
>> If there's an overwhelming majority of people who think using "no bug"
>> for landing 'sync m-c with repo X' commits is bad, we can make a more
>> explicit change in the rules here. But in terms of reducing development
>> friction, if we think bugs are necessary at this point, I would prefer
>> doing something like what Greg suggested, ie auto-file bugs for commits
>> that get pushed that don't have a bug associated with them.
>>
>>
>> More generally, I concur with Greg that we should pivot to having the
>> repos be our source of truth about what csets are present on which
>> branches. I've seen cases recently where e.g. we land a feature, then there
>> are regressions, and some of them are addressed in followup bugs, and then
>> eventually we back everything out of one of the trains because we can't fix
>> *all* the regressions in time. At that point, the original feature bug's
>> flags are updated ('disabled' on the branch with backouts), but not those
>> of the regression fix deps, and so if *those* have regressions, people
>> filing bugs make incorrect assumptions about what versions are affected.
>> Manually tracking branch fix state in bugzilla alone is a losing battle.
>>
>
> For the immediate term, I must respectfully disagree. Sheriffs are the
> people most involved with and concerned with the changeset management
> procedures, so if the sheriffs (and Ryan "I'm not a sheriff!" VM) are
> claiming that No Bug landings are being overused and causing issues, I'm
> inclined to adjust current practices first and hold off on rethinking our
> development process until the immediate pain is resolved. The fact is that
> our *current* changeset management *is* very bug-centric. I think gps and
> others have made a good argument for why another process may be superior
> (and I personally do not regard our current process as the pinnacle of
> efficiency), and I'm all for coming up with something better. But I really
> don't want some long drawn-out halfway state where the people who think the
> process should be A are doing it that way, the people who think it should
> be B are changing piecemeal to get 

Re: Please don't abuse "No bug" in commit messages

2017-02-03 Thread David Burns
I have raised a bug[1] to block these types of commits in the future. This
is an unnecessary risk that we are taking.

I also think that we need to remove this as acceptable practice from the
MDN page.

David

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1336459

On 3 February 2017 at 15:11, Ryan VanderMeulen 
wrote:

> A friendly reminder that per the MDN commit rules, the use of "No bug" in
> the commit message is to be used sparingly - in general for minor things
> like whitespace changes/comment fixes/etc where traceability isn't as
> important.
> https://developer.mozilla.org/docs/Mozilla/Developer_guide/Committing_Rules_and_Responsibilities
>
> I've come across uses of "No bug" commits recently where entire upstream
> imports of vendored libraries were done. This is bad for multiple reasons:
> * It makes sheriffing difficult - if the patch is broken and needs backing
> out, where should the comment go? When it gets merged to mozilla-central,
> who gets notified?
> * It makes tracking a pain - what happens if that patch ends up needing
> uplift? What about when that patch causes conflicts with another patch
> needing uplift? What if it causes a regression and a blocking bug needs to
> be filed?
>
> Bugs are cheap. Please use them.
>
> Thanks,
> Ryan
>


Build System Project - Latest Update

2016-11-10 Thread David Burns
Below is a highlight of all work the build peers have done in the last few
weeks as part of their work to modernise the build infrastructure.

Since the last report[1] a large number of improvements have landed in
Mozilla Central.

The build peers have landed support for a new construct in Python configure
(bug 1296530) that will, along with upcoming changes, allow simplifications
to the Python configure code. It also paves the way for the upcoming
pseudo-linter. We have also landed patches to move some graphics
configuration to Python configure (bug 1305145).

We also investigated parallelising the emitter and backend steps, but found
that our workload is so lopsided towards dealing with tests that there
wasn't much parallelism to exploit. Investigating this led to patches that
improved things, resulting in a ~⅓ overall improvement (bug 1312520 and bug
1312574).

The build peers have also spent 2 weeks getting OSX universal builds in
Taskcluster to support ESR through 2018 (bug 1183613).

Finally, we have also landed patches to build NSS with gyp in the NSS
repository (bug 1237872). We need to do a little bit of work to make it work
in Mozilla Central, but this is a tiny task (dependencies of bug 1295937).

The NSS work is blocking the SCCache2 work. With the current NSS patches we
are getting green builds, so expect this work to be complete by the end of
the week.

David

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/2PZnk9mD9Ak/WR9KXZH7BwAJ


Build System Project - Latest Update

2016-10-25 Thread David Burns
Below is a highlight of all work the build peers have done in the last few
weeks as part of their work to modernise the build infrastructure.

Since the last report[1] a large number of improvements have landed in
Mozilla Central.

The build peers have managed to get numerous patches landed for the tup
backend. This is our first step towards modernising the build. We have
landed patches for FINAL_TARGET_PP_FILES (bug 1305157), producing BUILDSTATUS
messages (bug 1306405), XPIDL generation (bug 1293448), GENERATED_FILES (bug
1304129), and WebIDL generation (bug 1304125).

The build peers have also helped out on a few optimizations to TaskCluster,
the most notable being moving builders to use SSDs on Amazon (bug 1306167),
which halved the time that PGO builds take.

We have done some more work on artifact builds. There is now the option to
download symbols during an artifact build. This benefits both local
developers and what is happening in Automation (bug 1305502
).

The build peers have started looking into the build-backend slowness (bug
1259789 ). This is
one of the slowest parts of the configure step in builds, especially on
Windows.

Finally, the build peers have landed, or are in the process of landing,
build rewrites for 3rd-party libraries we use. More details are available in
the bugs for NSS (bug 1237872, bug 1295937) and for libffi (bug 1262155).

David

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/BWuB6S7qxUc/9HzVRXg3CAAJ


Build System Project - Latest Update

2016-09-02 Thread David Burns
Below is a highlight of all work the build peers have done since the last
report[1].

The build peers have been working to get faster builds in automation as
well as for local developers. We have landed changes to stop generating
XPIDL sources in artifact builds[2], which is a performance and correctness
improvement.

We have also been working hard at removing the legacy m4+shell autoconf
mess. We are now below 70% of the size it was when we started this project.
Since the last update we have removed over 3000 lines[3].

Build tasks in taskcluster are now using c4.4xlarge instances instead of
c3.2xlarge[4] - builds are faster by around 20%[5].

As part of the work to allow us to use an alternate build backend we have
managed to get Firefox building NSS with GYP on Linux[6]. Other platforms
are to follow before we land this change. We have also been reviewing
changes to build XPIDL sources with TUP[7].

Support for building Rust sources via Cargo has been landed in
mozilla-central[8]. For more information read the Dev.Platform post[9]

Last but not least, our intern project to use msys2 has reached its goal.
The new msys2 environment now builds on mozilla-central. We will be sending
out a separate email about this for those who would like to try it out.

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/i0rRsYgnXDM/O5DxU-PqBQAJ

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1240134

[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1292463

[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1290282

[5] https://mzl.la/2c5bjSR

[6] https://bugzilla.mozilla.org/show_bug.cgi?id=1237872

[7] https://bugzilla.mozilla.org/show_bug.cgi?id=1293448

[8] https://bugzilla.mozilla.org/show_bug.cgi?id=1231764

[9]
https://groups.google.com/d/msg/mozilla.dev.platform/ZgkBLZb6dtc/oWBE_nUxAQAJ
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Build System Project - Latest Update

2016-08-04 Thread David Burns
Below is a highlight of all work the build peers have done since the last
report[1].

The build peers have been working to get faster builds in automation as
well as for local developers. We have updated the way that Taskcluster
decision and linting jobs use version control[2]. This has driven down the
times for those jobs from 3 minutes to ~9 seconds.

We are still working on getting the sccache rewrite out. We are hitting a
few issues on try but this is to be expected. Hopefully this will be out
soon.

We’re also looking into running automation builds in EC2 on instances with
more and faster CPU cores so they complete faster[3].

On the local build side we have moved some of the header checks[4] to the
python configure. These were fairly self contained configure steps and ripe
for porting. They removed ~1300 lines of configure code which is a huge
win! We have also been working through some of the potential problems with
moving to a new build backend. This is going to be a little slow at first
as we try to get specific parts using the alternate build backend.

Some initial work on replacing the NSS build system is showing great
promise[5].

We have also been moving Windows specific configure code around. This is
part of our goal to make that step faster in the build. Nathan, our intern,
has also been making great strides in his work to move MozillaBuild to
msys2 allowing easier hackability of the build. He also did his intern
presentation[6]. I highly recommend you watch it!

Nathan will be leaving us at the end of next week (12 August) and the build
peers want to thank him for all his hard work over his internship and wish
him luck in his future endeavours!

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/4Vyez4stDyg/jlQeQ5TACAAJ

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1247168

[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1290282

[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1269517

[5] https://bugzilla.mozilla.org/show_bug.cgi?id=1237872#c3

[6]  https://air.mozilla.org/down-the-rabbit-hole/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed change to Commit Access Policy Level 3

2016-08-04 Thread David Burns
Looks great to me!

David

On 4 August 2016 at 06:20, Mitchell Baker  wrote:

> Over time we've made a series of exceptions to the level 3 requirements
> for Sheriffs and this proposal addresses that.
>
>
> The current Policy for level 3 is:
>
> Level 3 - Core Product Access
>
> Requirements: two vouchers - module owners or peers of code stored
> at this level
>
>
> The issue is that Sheriffs typically need level 3 access to fulfill their
> roles.  But they aren't chosen based on number and quality of patches, so
> often don't meet the current requirements.  Today we go through an
> exception process.  The thinking is that this process is unneeded for a set
> of people to whom we've delegated Sheriff authority.
>
>
> The proposal is to change the text as follows:
>
> from:"module owners or peers of code stored at this level"
> to:  "module owners or peers of code stored at this level or owners or
> peers of the 'Tree Sheriffs' module."
>
>
>
> Mitchell
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Build System Project - Latest Update

2016-07-22 Thread David Burns
Below is a highlight of all work the build peers have done since the last
report[1].

The build peers have been working to get faster builds in automation as well
as for local developers. We are currently testing the distributed cache
rewrite (sccache) to make sure that we have not regressed anything. Once this
has landed we can start work on how it can benefit engineers' local
environments. We have also been working to get artifact builds used, at least
on Try, in an attempt to bring down try turnaround times.

They have also been working through making 3rd party libraries build
faster. This is part of the work to remove the use of recursive make in our
build system, which is going quite well. So well, in fact, that we are now
starting work on prototyping a new build backend. Hopefully this prototype
will be done by the end of the quarter.

And finally our intern, Nathan, has done some great work with updating
MozillaBuild to msys2 and now has artifact builds working there.

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/VH3KD4ZyNL0/befnqd7WPQAJ


David
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread David Burns
Yes!

As part of the build project work that I regularly email this list about[1],
we have it on our roadmap to make the same distributed cache that we use in
automation available to engineers who are working on C++ code. We have
completed our rewrite and will be putting the initial work through try over
the next fortnight to make sure we haven't regressed anything. After that we
will be working towards making it available to engineers before the end of
Q3 (at least on one platform).

David


[1]
https://groups.google.com/forum/#!topicsearchin/mozilla.dev.platform/Build$20System$20Project$20

On 4 July 2016 at 21:39, Gijs Kruitbosch  wrote:

> What about people not lucky enough to (regularly) work in an office,
> including but not limited to our large number of volunteers? Do we intend
> to set up something public for people to use?
>
> ~ Gijs
>
>
> On 04/07/2016 20:09, Michael Layzell wrote:
>
>> If you saw the platform lightning talk by Jeff and Ehsan in London, you
>> will know that in the Toronto office, we have set up a distributed
>> compiler
>> called `icecc`, which allows us to perform a clobber build of
>> mozilla-central in around 3:45. After some work, we have managed to get it
>> so that macOS computers can also dispatch cross-compiled jobs to the
>> network, have streamlined the macOS install process, and have refined the
>> documentation some more.
>>
>> If you are in the Toronto office, and running a macOS or Linux machine,
>> getting started using icecream is as easy as following the instructions on
>> the wiki:
>>
>> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
>>
>> If you are in another office, then I suggest that your office starts an
>> icecream cluster! Simply choose one linux desktop in the office, run the
>> scheduler on it, and put its IP in the Wiki, then everyone can connect to
>> the network and get fast builds!
>>
>> If you have questions, myself, BenWa, and jeff are probably the ones to
>> talk to.
>>
>>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Basic Auth Prevalence (was Re: Intent to ship: Treat cookies set over non-secure HTTP as session cookies)

2016-06-13 Thread David Burns
Is there a way that we can gather whether people are using this for testing
web sites? That might account for those numbers.

For example, there is basic support, and I mean really basic support, in
Selenium for handling Basic auth, and we suggest that people set up a proxy
in the middle to handle that handshake. I suspect that in these cases people
won't have all the necessary security set up if it is behind some kind of
firewall. Just a thought.
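
One common low-tech workaround in test suites, separate from the proxy
approach above, is simply embedding the credentials in the URL; a minimal
sketch (the host, user and password here are made up):

    # Sketch only: drive a Basic-auth-protected test site from Selenium by
    # putting the credentials in the URL. Host and credentials are hypothetical.
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("http://testuser:testpass@staging.example.com/dashboard")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()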

David

On 11 June 2016 at 03:27, Jason Duell  wrote:

> This data also smells weird to me.  8% of pages using basic auth seems very
> very high, and only 0.7% of basic auth being done unencypted seems low.
>
> Perhaps we should chat in London (ideally with Honza Bambas) and make sure
> we're getting the telemetry right here.
>
> Jason
>
> On Fri, Jun 10, 2016 at 2:15 PM, Adam Roach  wrote:
>
> > On 4/18/16 09:59, Richard Barnes wrote:
> >
> >> Could we just disable HTTP auth for connections not protected with TLS?
> >> At
> >> least Basic auth is manifestly insecure over an insecure transport.  I
> >> don't have any usage statistics, but I suspect it's pretty low compared
> to
> >> form-based auth.
> >>
> >
> > As a follow up from this: we added telemetry to answer the exact question
> > about how prevalent Basic auth over non-TLS connections was. Now that 49
> is
> > off Nightly, I pulled the stats for our new little counter.
> >
> > It would appear telemetry was enabled for approximately 109M page
> > loads[1], of which approximately 8.7M[2] used HTTP auth -- or
> approximately
> > 8% of all pages. (This is much higher than I expected -- approximately 1
> > out of 12 page loads uses HTTP auth? It seems far less dead than we
> > anticipated).
> >
> > 749k of those were unencrypted basic auth[2]; this constitutes
> > approximately 0.7% of all recorded traffic.
> >
> > I'll look at the 49 Aurora stats when it has enough data -- it'll be
> > interesting to see how much if it is nontrivially different.
> >
> > /a
> >
> >
> > [1]
> >
> https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0_date=2016-06-06=__none__!__none__!__none___channel_version=nightly%252F49=HTTP_PAGELOAD_IS_SSL_channel_version=null=Firefox=1_keys=submissions_date=2016-05-04=0=1_submission_date=0
> >
> > [2]
> >
> https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0_date=2016-06-06=__none__!__none__!__none___channel_version=nightly%252F49=HTTP_AUTH_TYPE_STATS_channel_version=null=Firefox=1_keys=submissions_date=2016-05-04=0=1_submission_date=0
> >
> >
> > --
> > Adam Roach
> > Principal Platform Engineer
> > Office of the CTO
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
>
>
>
> --
>
> Jason
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Build System Project - Latest Update

2016-06-02 Thread David Burns
Below is a highlight of all work the build peers have done since the last
report[1].

We have reduced the time it takes to run reftests, as well as the amount of
I/O that happens during the tests, by disabling some features in Firefox
that are not used during the tests. This has saved over 50GB of I/O in
automation. Reporting has been put in place in automation to help us spot
issues like this in the future! We have hopefully spotted a few more already
and will report on them in the next update.

The build peers have also been working hard on getting a distributed cache
ready for everyone to use. This means we will try to fetch from the globally
distributed cache before building, hopefully saving time by only building
things that have changed and using the cache for everything else.

We are continuing our work to remove configure/m4 code and have removed
around 2000 lines[2], as well as working our way through the long tail of
Makefiles. We are down to just over 100 files and a few thousand lines of
code[3]. Once we have completed enough of this work we can start looking
to move over to a more performant build backend.

Last, but not least, I want to introduce our intern for the Summer, Nathan
Hakkakzadeh [Nat on IRC]. He will be helping with various Windows build
tasks while working with us. Make sure to say Hi to him!

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/aQVrp8GElno/QbGac4drAQAJ

[2] https://plot.ly/~glandium/14/lines-vs-time/

[3] http://people.mozilla.org/~tmielczarek/makefiles/makefiles_count.html
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Build System Project - Update from the last 2 weeks

2016-05-09 Thread David Burns
Below is a highlight of all work the build peers have done in the last 2
weeks as part of their work to modernise the build infrastructure.

Since the last report[1] a large number of improvements have landed in
Mozilla Central.

We have been experimenting with making a global cache available to all
engineers. This has seen build times drop by, in some cases, up to 8
minutes. The time improvements are highly dependent on your local machine
and internet connection but show great promise!

We have also shaved off 22mb of all-tests.json, which is a great
improvement for both local builds and builds in automation.

Finally, Old-configure m4+shell has shrunk by 20% since the beginning of
the year. As mentioned in the previous status email, this will allow us to
replace the build backend with a more performant tool.

David

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/aQVrp8GElno/QbGac4drAQAJ
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Build System Project - Update from the last 2 weeks

2016-04-20 Thread David Burns
Below is a highlight of all work the build peers have done in the last 2
weeks as part of their work to modernise the build infrastructure.

Since the last report[1] a large number of improvements have landed in
Mozilla Central.

We have landed some more build improvements that have brought down the
build times for Windows PGO to be more in line with Linux64 opt builds[2]
in automation. This is helping us edge closer to shaving 11 hours off the
time it takes to release Firefox to Windows users[3]. These same changes
should also improve
local build times in most configurations, most noticeably on Windows.

We have also started looking at how we can use a global compiler cache on
local builds and not just in automation. This will allow artifact-like
builds for those who are doing C++ development.

Tests are now installed incrementally so the build system only installs
what you need when you need it. This dramatically speeds up the test part of
the development cycle locally. On top of this, there are investigations to
see if all-tests.json can be reduced in size. Initial investigations show
that we can do some simple optimizations and get some sizeable wins.

We are continuing to remove configure, m4 code, and Makefiles from
mozilla-central. As mentioned in the previous status email, this will allow
us to replace the build backend with a more performant tool.

David

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/7kCM7aJ80rs/Mgq7jrmNBAAJ

[2]
https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000=%5Bmozilla-inbound,04b9f1fd5577b40a555696555084e68a4ed2c28f,1%5D=%5Bmozilla-inbound,c0018285639940579da345da71bb7131d372c41e,1%5D=%5Bmozilla-inbound,65e0ddb3dc085864cbee77ab034dead6323a1ce6,1%5D

[3] https://rail.merail.ca/posts/release-build-promotion-overview.html
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Build System Project - Update from the last 2 weeks

2016-04-05 Thread David Burns
Below is a highlight of all work the build peers have done in the last 2
weeks as part of their work to modernise the build infrastructure.

Since the last report[1] a large number of improvements have landed in
Mozilla Central.

The build system now lazily installs test files. Before, the build copied
tens of thousands of test and support files. This could take dozens of
seconds on Windows or machines with slow I/O. Now, the build system defers
installing test files until they are needed there (e.g. when running tests
or creating test packages). Furthermore, only the test files relevant to
the action performed are installed. Mach commands running tests should be
significantly faster, as they no longer examine the state of tens of
thousands of files on every invocation.

After upgrading build machines to use VS2015, we have seen build times[2]
for PGO on Windows decrease by around 100 minutes. This brings PGO times on
Windows in line with PGO (strictly speaking, LTO) times on Linux.

This work, coupled with the build promotion work[3], has reduced the time
it takes automation to release Firefox on Windows by over 10 hours.

We are continuing to remove configure, m4 code, and Makefiles from
mozilla-central. As mentioned in the previous status email, this will allow
us to replace the build backend with a more performant tool.

David

[1]
https://groups.google.com/d/msg/mozilla.dev.platform/IKRdGCjdN_Y/sK2QbXqmCAAJ

[2]
https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000=%5Bmozilla-inbound,04b9f1fd5577b40a555696555084e68a4ed2c28f,1%5D=%5Bmozilla-inbound,c0018285639940579da345da71bb7131d372c41e,1%5D=%5Bmozilla-inbound,65e0ddb3dc085864cbee77ab034dead6323a1ce6,1%5D

[3] https://rail.merail.ca/posts/release-build-promotion-overview.html
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Build System Project - Update from the last 2 weeks

2016-03-21 Thread David Burns
Below is a highlight of all work the build peers have done in the last 2
weeks as part of their work to modernise the build infrastructure.

Since the last report[1] we have seen thousands of lines of configure and
m4 code removed from mozilla-central, along with over 30 Makefiles. The
removal of the Makefiles gives us what we need to use more performant build
backends in the future.

In case you missed it, you can now make Artifact Builds[3] if you are using
git[4]. If you find any issues, please raise bugs against Core :: Build
Config.

Finally, we are seeing huge improvements in build times by switching to
Visual Studio 2015. There are, however, a few regressions in moving to the
latest version. gps has asked for assistance[5] in finalising what we need
to make the move. If you can, help get this over the line!

Regards,

David

[1] https://groups.google.com/forum/#!topic/mozilla.dev.platform/PD7Lmot1H3I

[2]
https://groups.google.com/forum/#!msg/mozilla.dev.builds/0Wo_8Vhgu9Y/XFCCSmKABQAJ

[3] https://developer.mozilla.org/en-US/docs/Artifact_builds

[4] https://groups.google.com/forum/#!topic/mozilla.dev.builds/XtyrGfY48wo

[5]
https://groups.google.com/d/msg/mozilla.dev.platform/2dja0nkKq_w/5iJPJXCaBwAJ
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Report from Build Team Sprint

2016-03-08 Thread David Burns
February 22-26 saw the build team gather in San Francisco to discuss all
the work required to improve developer ergonomics around building Firefox.

After discussing all the major items we want to work on in the
first half of this year (of which there are many), we started working on
replacing the Autoconf configure script. We are planning to replace the old
shell and m4 code with sandboxed python, which will allow us to have
improved performance (especially on Windows), parallel configure checks,
better caching, and improved maintainability.

We have also added more support for artifact builds
<https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Artifact_builds>.
If you use Git as part of your Firefox development, you will now be able to
get the latest artifacts for your builds. This will benefit anyone not
currently working on C++ code in Mozilla-Central. Support for C++
developers will be worked on later in the year.

We were also able to add more data points to the telemetry work. This is
still in development and being beta tested by Engineering Productivity
before we ask for more people to submit their build data.

There was also work to see if we can get speed ups by using different
versions of compilers. We are noticing huge gains by upgrading to newer
versions but some of the upgrades require some work before we can deploy
them. When we are looking to do the upgrade we will email the list to warn
you.

David Burns
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Taking screenshots of single elements (XUL/XULRunner)

2016-01-19 Thread David Burns
You can try getting access to
https://dxr.mozilla.org/mozilla-central/source/testing/marionette/capture.js
and then that will give you everything you want or you can just "borrow"
the code from there.

David

On 19 January 2016 at 11:22, Ted Mielczarek  wrote:

> On Tue, Jan 19, 2016, at 01:39 AM, m.bauermeis...@sto.com wrote:
> > As part of my work on a prototyping suite I'd like to take screenshots
> > (preferably retaining the alpha channel) of single UI elements. I'd like
> > to do so on an onclick event.
> >
> > Is there a straightforward way to accomplish this? Possibly with XPCOM or
> > js-ctypes?
>
> You can use the drawWindow method of CanvasRenderingContext2D:
>
> https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/drawWindow
>
> You just need to create a canvas element, call getContext('2d') on it,
> and then calculate the offset of the element you want to screenshot.
>
> -Ted
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Thanks for all the great teamwork with the Sheriffs in 2015!

2015-12-30 Thread David Burns
Well done Sheriffs! Really proud of all the work you did this year!

David

On 30 December 2015 at 14:19, Carsten Book  wrote:

> Hi,
>
> Sheriffing is not just about Checkins, Uplifts and Backouts - its also a
> lot of teamwork with different Groups and our Community like Developers, IT
> Teams and Release Engineering and a lot more to keep the trees up and
> running. And without this great teamwork our job would be nearly impossible!
>
> So far in 2015 we had around:
>
> 56471 changesets with 336218 changes to 70807 files in mozilla-central
> and 4391 Bugs filed for intermittent failures (and a lot of them fixed).
>
> So thanks a lot for the great teamwork with YOU in 2015 - especially also
> a great thanks to our Community Sheriffs like philor, nigelb and Aryx who
> done great work!
>
> I hope we can continue this great teamwork in 2016 and also the monthly
> sheriff report with interesting news from the sheriffs and how you can
> contribute will continue then :)
>
> Have a great start into 2016!
>
> Tomcat
> on behalf of the Sheriffs-Team
>
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Dan Stillman's concerns about Extension Signing

2015-11-26 Thread David Burns
Another data point that we seem to have overlooked is that users want to be
able to side load their extensions for many different reasons. We see this
with apps on phones and with extensions currently. I appreciate that users
have grown to be warning-blind but, as others have pointed out, this feels
like a sure way to have users move from us to Chrome if their extension
lives there too. Once they are lost it will be non-trivial to get them back.

My main gripe is that we will be breaking tools like WebDriver[1] (better
known as Selenium) and not once have we approached that community. Luckily
we have Marionette being developed as a replacement, which was being
developed before we started the add-on signing work. When I mentioned this I
was told that since it instruments the browser it can never get signed, and
that we need to get a move on or get everyone to change to the "whitelabel"
version to use WebDriver. Having spoken to peers at other large tech
companies, they said no: they will remain on older versions, and if it breaks
they will stop supporting it until they have a like-for-like replacement.
They will also stop caring about WebCompat until then. We will drive away
other users because Firefox won't work as well on their favourite website.

There are also companies that have developed internal tools in addons that
they don't want in AMO. We are essentially telling them that we don't care
about how much effort they have put in or how "sooper sekrit" their addon
is. It's in AMO or else...

I honestly thought we would do the "signing keys to developers" approach
and revoke when they are being naughty.

David

[1] http://github.com/seleniumhq/selenium

On 26 November 2015 at 13:50, Thomas Zimmermann 
wrote:

> Hi
>
> Am 26.11.2015 um 13:56 schrieb Till Schneidereit:
>
> > I read the blog post, too, and if that were the final, uncontested word
> on
> > the matter, I think I would agree. As it is, this assessment strikes me
> as
> > awfully harsh: many people have put a lot of thought and effort into
> this,
> > so calling for it to simply be canned should require a substantial amount
> > of background knowledge.
>
> Ok, I take back the 'complete nonsense' part. There can be ways of
> improving security that involve signing, but the proposed one isn't. I
> think the blog post makes this obvious.
>
>
> >
> > I should also give a bit more information about the feedback I received:
> in
> > both cases, versions of the extensions exist for at least Chrome and
> > Safari. In at least one case, the extension uses a large framework that
> > needs to be reviewed in full for the extension to be approved. Apparently
> > this'd only need to happen once per framework, but it hasn't, yet. That
> > means that the review is bound to take much longer than if just the
> > extension's code was affected. While I think this makes sense, two things
> > strike me as very likely that make it a substantial problem: many authors
> > of extensions affected in similar ways will come out of the woodwork very
> > shortly before 43 is released or even after that, in reaction to users'
> > complaints. And many of these extensions will use large frameworks not
> > encountered before, or simply be too complex to review within a day or
> two.
>
> Thanks for this perspective. He didn't seem to use any frameworks, but
> the review process failed for an apparently trivial case. Regarding
> frameworks in general: there are many and there are usually different
> versions in use. Sometimes people make additional modifications. So this
> helps only partially.
>
> And of course reviews are not a panacea at all. Our own Bugzilla is
> proof of that. ;) Pretending that a reviewed extension (or any other
> piece of code) is more trust-worthy is not credible IMHO. Code becomes
> trust-worthy by working successfully in "the real world."
>
> >
> > I *do* think that we shouldn't ship enforced signing without having a
> solid
> > way of dealing with this problem. Or without having deliberately decided
> > that we're willing to live with these extensions' authors recommending
> (or
> > forcing, as the case may be) their users to switch browsers.
>
> I think, a good approach would be to hand-out signing keys to extension
> developers and require them to sign anything they upload to AMO. That
> would establish a trusted path from developers to users; so users would
> know they downloaded the official release of an extension. A malicious
> extensions can then be disabled/blacklisted by simply revoking the keys
> and affected users would notice. For anything non-AMO, the user is on
> their own.
>
> Best regards
> Thomas
>
> >
> >
> > till
> >
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org

Re: Capturing additional metadata in moz.build files

2015-01-22 Thread David Burns
The last bullet for me is the killer feature. I recently hit an issue where
I made a fairly big change to an API, updated all the consumers that I was
aware of, and even ran a try push for the happy set. Unfortunately this
burnt the tree.

I see this situation as a bigger waste of resources (sheriffs' time,
infrastructure time) than people not compiling their code before pushing to
a tree.

Obviously there is an issue that annotating the tree will only give you the
happy set, but that is much better than what we have now and would
hopefully remove the need for people to work out what they need as try
syntax; it would be done for them.

David

On 9 December 2014 at 18:46, Gregory Szorc g...@mozilla.com wrote:

 In Portland, there were a number of discussions around ideas and features
 that could be easier implemented if only we had better metadata and
 annotations for source files. For example:

 * Suggested reviewers for a patch
 * Determine the Bugzilla component for a failing test
 * Determine the Bugzilla component for a changed file so a bug can be
 filed automatically
 * Building a subscription service for watching code and reviews
 * Defining what static analysis should run on a given source file
 * Mapping changed files to impacted automation jobs (useful for minimizing
 automation that runs)

 There is pretty much universal consensus that as much metadata as possible
 should live in the tree, next to the things being annotated. This is in
 contrast to how current systems like Bugzilla's suggested reviewers feature
 operate, which is to establish a separate service/data store, essentially
 fragmenting the source of truth and introducing one-off change processes.

 I discussed options with Mike Hommey and we believe that moz.build files
 are the appropriate default location for this metadata. We considered
 alternatives such as moz.build-like Python sandboxes under a different
 filename and standalone JSON or YAML files. We like moz.build because it is
 a fully customizable Python environment that already exists and therefore
 doesn't require much effort to stand up and doesn't fragment source of
 truth.

 This should not be a surprise: capturing non-build metadata in moz.build
 files was always an eventual goal. There is already precedence for this in
 defining the Sphinx documentation [1]. We just haven't had a good reason or
 time to add more things. Until now.

 In the weeks and months ahead, expect to start seeing work to integrate
 extra metadata into moz.build files. This may require refactoring some
 moz.build files. We'll need to support a world where moz.build files can be
 evaluated before configure is executed (so any tool with a copy of the
 source and the Python package for reading moz.build files can extract
 metadata in milliseconds).
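
 (Purely as an illustration of what such annotations could look like in a
 moz.build sandbox; the Files, BUG_COMPONENT and SUGGESTED_REVIEWERS names
 below are invented for this sketch and are not an existing API.)

    # Hypothetical moz.build metadata; the function and variable names here
    # are made up for illustration, not the actual sandbox API.
    with Files('tests/browser_*.js'):
        # Bugzilla product/component that failures in these files belong to.
        BUG_COMPONENT = ('Firefox', 'Session Restore')
        # People to suggest as reviewers for patches touching these files.
        SUGGESTED_REVIEWERS = ['a-module-peer', 'another-peer']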

 This work should enable all kinds of awesome tooling and developer
 productivity wins.

 If anyone has any other crazy ideas for what metadata to capture in
 moz.build files to help improve processes, I'm definitely interested in
 hearing them!

 [1] https://ci.mozilla.org/job/mozilla-central-docs/Tree_Documentation/index.html#adding-documentation
 ___
 dev-platform mailing list
 dev-platform@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-platform

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Screen Capture

2014-10-23 Thread David Burns

On 23/10/2014 22:10, Jet Villegas wrote:

Roc wrote up a proposal last year for a web-facing screen capture API:
https://wiki.mozilla.org/User:Roc/ScreenCaptureAPI

Even if not web-facing, we could use the implementation code to cover chrome 
use cases like this one:
https://bugzilla.mozilla.org/show_bug.cgi?id=933389

At a recent GFx/Layout work week, we discussed using the Compositor to extract 
the screen-grab in an efficient, cross-platform manner. It seems we've got 
enough infrastructure in place to implement this quickly.

I'd like options on the API to allow for obscuring text layers so that we could 
use the images for UI tiles, and other privacy-sensitive use cases.

Kicking off this thread to get a discussion on:

1. Web-facing or not?


Not. It feels like there would be a lot of security issues.


2. Security/Privacy concerns


Add-ons would need to be flagged for using this API. E.g. what if one took 
screenshots when visiting your bank?



3. Implementation concerns


Just for clarification, this would only return the viewport? If so, 
would capturing the full document (at the time of the call, so we don't have 
to scroll and get into a painful parallax world) be doable?



4. Feature requests (eg. blurred or lorem-ipsum text)




David
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Marionette Mailing List

2014-09-08 Thread David Burns

Hi Everyone!

Marionette now has its own mailing list[1], giving us a single place to 
send messages that is open for anyone to post to.


We can use it to discuss changes that are happening in the WebDriver 
spec, breaking changes we want to make, and releases. Please join the 
list and feel free to pass this email on to anyone else that might be 
interested in joining.


David


[1] https://lists.mozilla.org/listinfo/tools-marionette
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experiment with running debug tests less often on mozilla-inbound the week of August 25

2014-08-19 Thread David Burns
I know this is tangential, but the small changes are the least tested 
changes in my experience. The try push requirement for checkin-needed 
has had a wonderful impact on the number of times the tree is closed[1]. 
The tree is less likely to be closed these days.


David

[1] http://futurama.theautomatedtester.co.uk/

On 19/08/2014 22:04, Ralph Giles wrote:

On 2014-08-19 1:55 PM, Benoit Girard wrote:

Perhaps we should instead promote checkin-needed (or a similar simple)
to coalesce simple changes together.

I would prefer to use 'checkin-needed' for more things, but am blocked
by the try-needed requirement. We need some way to bless small changes
for inbound without a try push. Look up the author's commit access maybe?

  -r
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Tree Closure Stats - July 2014

2014-08-06 Thread David Burns
One of the items that has limited the closures is the new requirement 
from the sheriffs that any checkin-needed patch has a try push. There have 
also been changes from RelEng making it possible to run arbitrary jobs 
against a build, which wasn't possible before and has helped narrow down 
issues that need backing out.


The sheriffs have, as always, been trying to limit the closures where 
they can, which has also had an effect.


David

On 06/08/2014 07:03, Ehsan Akhgari wrote:
Hmm thanks David, this is interesting data!  Looking at the chart, the 
amount of tree closure in the recent past seemed to have peaked in 
April and have been declining since.  Do you have any idea what we've 
been doing right?  I am always uncomfortable with good results that I 
don't understand!


Thanks a lot for sending these out!

On 2014-08-04, 5:42 PM, David Burns wrote:

Hi Everyone, (cross posted to dev-platform)

Below are the stats for tree closures for the main trees that the
sheriffs manage. Please feel free to let me know if you have any
questions.

Mozilla-Inbound

2014-07
  checkin-compilation: 1 day, 1:19:41
  checkin-test: 1 day, 2:44:14
  infra: 1 day, 12:59:37
  no reason: 0:00:10
  total: 3 days, 17:03:42

Mozilla-Central

2014-07
  checkin-test: 2:26:46
  infra: 1 day, 10:34:31
  no reason: 0:14:02
  total: 1 day, 13:15:19

Fx-Team

2014-07
  checkin-compilation: 1:06:28
  checkin-test: 13:45:54
  infra: 1 day, 14:58:00
  other: 0:46:07
  total: 2 days, 6:36:29

B2G-Inbound

2014-07
  checkin-compilation: 1:14:08
  checkin-test: 2:54:52
  infra: 1 day, 5:29:23
  no reason: 0:15:34
  total: 1 day, 9:53:57

If you would like to see how this compares to previous months please see
http://futurama.theautomatedtester.co.uk/

David

___
dev-tree-management mailing list
dev-tree-managem...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tree-management




___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Tree Closure Stats - July 2014

2014-08-04 Thread David Burns

Hi Everyone, (cross posted to dev-platform)

Below are the stats for tree closures for the main trees that the 
sheriffs manage. Please feel free to let me know if you have any questions.


Mozilla-Inbound

2014-07
 checkin-compilation: 1 day, 1:19:41
 checkin-test: 1 day, 2:44:14
 infra: 1 day, 12:59:37
 no reason: 0:00:10
 total: 3 days, 17:03:42

Mozilla-Central

2014-07
 checkin-test: 2:26:46
 infra: 1 day, 10:34:31
 no reason: 0:14:02
 total: 1 day, 13:15:19

Fx-Team

2014-07
 checkin-compilation: 1:06:28
 checkin-test: 13:45:54
 infra: 1 day, 14:58:00
 other: 0:46:07
 total: 2 days, 6:36:29

B2G-Inbound

2014-07
 checkin-compilation: 1:14:08
 checkin-test: 2:54:52
 infra: 1 day, 5:29:23
 no reason: 0:15:34
 total: 1 day, 9:53:57

If you would like to see how this compares to previous months please see 
http://futurama.theautomatedtester.co.uk/


David

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Tree Closure Stats - June 2014

2014-07-01 Thread David Burns


Hi Everyone, (cross posted to dev-platform)

Below are the stats for tree closures for the main trees that the 
sheriffs manage. Please feel free to let me know if you have any questions.


A lot of the backlog numbers were down to AWS machines not picking up jobs 
fast enough. RelEng have been working on this.


Mozilla Inbound

2014-06
 backlog: 9:46:52
 checkin-compilation: 1 day, 2:47:16
 checkin-test: 2 days, 23:38:59
 infra: 16:57:27
 no reason: 0:00:33
 other: 1:41:36
 total: 5 days, 6:52:43

Mozilla-Central

2014-06
 backlog: 4:55:13
 checkin-compilation: 3:40:03
 checkin-test: 6:24:24
 infra: 9:37:15
 no reason: 0:00:54
 other: 1:42:27
 total: 1 day, 2:20:16

Fx-Team

2014-06
 backlog: 9:30:43
 checkin-compilation: 0:12:54
 checkin-test: 9:52:53
 infra: 12:14:43
 no reason: 0:00:22
 other: 1:41:36
 total: 1 day, 9:33:11

B2G-Inbound

2014-06
 backlog: 5:06:17
 checkin-compilation: 4:37:29
 checkin-test: 6:54:03
 infra: 8:43:13
 no reason: 3:19:53
 other: 1:01:39
 total: 1 day, 5:42:34

If you would like to see how this compares to previous months please see 
http://futurama.theautomatedtester.co.uk/


David
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Tree Closure Stats - May 2014

2014-06-02 Thread David Burns

Hi Everyone, (cross posted to dev-platform)

Below are the stats for tree closures for the main trees that the 
sheriffs manage. Please feel free to let me know if you have any questions.


Mozilla Inbound
Closures due to test failures on Inbound are slightly higher than normal.

2014-05
 checkin-compilation: 23:53:02
 checkin-test: 3 days, 0:45:33
 infra: 11:52:28
 planned: 2:34:02
 total: 4 days, 15:05:05

Mozilla Central

2014-05
 checkin-compilation: 0:02:34
 checkin-test: 12:57:52
 infra: 12:58:48
 total: 1 day, 1:59:14

Fx-Team

2014-05
 checkin-compilation: 0:08:23
 checkin-test: 20:24:28
 infra: 13:42:36
 no reason: 0:00:03
 total: 1 day, 10:15:30

B2G Inbound
The higher than normal infra closure was down to a 3rd party repository 
having its master branch tracked, which was unfortunately broken.


2014-05
 backlog: 6:08:36
 checkin-compilation: 1 day, 14:58:51
 checkin-test: 15:15:13
 infra: 2 days, 1:17:26
 total: 4 days, 13:40:06

If you would like to see how this compares to previous months please see 
http://futurama.theautomatedtester.co.uk/


David
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is XPath still a thing?

2014-04-14 Thread David Burns

On 14/04/2014 22:28, Eric Shepherd wrote:
I think I know the answer to this, but want to confirm: is XPath a 
going concern? We want to be sure of its current status before 
migrating its documentation to where it ought to be assuming that it 
is in fact something people still use.


XPath is still a going concern from where I stand. Web testing people, 
who use Selenium WebDriver, use XPath extensively since they often struggle 
to get testable documents. Having decent documentation for them 
would be awesome :)


David

P.S. Having a native implementation of XPath makes Selenium WebDriver 
a lot faster than in, say, Internet Explorer, which doesn't have a native 
implementation and relies on a JavaScript polyfill.
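
To make that concrete, a small sketch of the kind of locator I mean (the 
page and XPath expression are made up):

    # Sketch: when a page exposes no stable ids or names, tests fall back to
    # structural XPath locators like this. URL and expression are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://intranet.example.com/orders")
        approve = driver.find_element(
            By.XPATH, "//tr[td[text()='Order 1234']]//button[.='Approve']")
        approve.click()
    finally:
        driver.quit()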


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is XPath still a thing?

2014-04-14 Thread David Burns

Not from my side!

David

On 14/04/2014 22:41, Eric Shepherd wrote:

On 2014-04-14 21:38:24 +, David Burns said:

XPath is still a going concern from where I stand. Web Testing 
people, who use Selenium WebDriver, use XPath extensively since they 
struggle to get to have testable documents. Having decent 
documentation for them would be awesome :)


On 2014-04-14 21:38:20 +, Anne van Kesteren said:


I don't think we should actively recommend it. We're maintaining the
existing code, but are not upgrading our level of support or anything.
And if we could get rid of the existing code, we would.


Well, there's the expected yes/no answer I was looking for. My 
inclination is to go ahead and migrate the doc and keep it in the main 
body of our documentation content, but maybe with an added notice that 
support is limited yadda yadda yadda...


Any disagreements?



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Spring cleaning: Reducing Number Footprint of HG Repos

2014-03-27 Thread David Burns


What are the mission-critical repos, since you just put everything in the 
same list?


If we start moving project branches to outsourced VCS hosting we remove any 
sheriff support for those project branches since, as has been pointed out 
many times, we don't have access to the server-side commit hooks and can't 
close the tree. This may (I would like to say *will*, but I don't have the 
data to prove it) impact engineering productivity. We have this situation 
with Gaia, which has its canonical repo on GitHub. Sheriffs can land 
checkin-needed but can't close the tree. The way the B2G people do it is to 
remove everyone from the repo and then re-add them (or that's how they used 
to do it), which then spams you with "you are now getting notifications for 
repository X" messages, which is annoying.


The other thing we need to worry about is the constant DDoS attacks on 
GitHub[1]. We have seen that when there is a massive one it will take 
down their site for hours, impacting engineering productivity again since 
people can't pull or push. I couldn't find similar reports for Bitbucket, 
but it can happen to any third party we may use.


David

[1] https://github.com/blog/1796-denial-of-service-attacks

On 26/03/2014 23:53, Taras Glek wrote:

*User Repos*
TLDR: I would like to make user repos read-only by April 30th. We 
should archive them by May 31st.


Time spent operating user repositories could be spent reducing our 
end-to-end continuous integration cycles. These do not seem like 
mission-critical repos; it seems like developers would be better off 
hosting these on Bitbucket or GitHub. Using a 3rd-party host has 
obvious benefits for collaboration and self-service that our existing 
system will never meet.


We are happy to help move specific hg repos to bitbucket.

Once you have migrated your repository, please comment in 
https://bugzilla.mozilla.org/show_bug.cgi?id=988628 so we can free some 
disk space.


*Non-User Repos*
There are too many non-user repos. I'm not convinced we should host 
ash, oak, and other project branches internally. I think we should focus 
on mission-critical repos only. There should be less than a dozen of 
those. I would like to stop hosting non-mission-critical repositories 
by the end of Q2.


This is a soft target. I don't have a concrete plan here. I'd like to 
start experimenting with moving project branches elsewhere and see 
where that takes us.


*What if my hg repo needs X/Y that 3rd-party services do not provide?*
If you have a good reason to use a feature not supported by 
github/bitbucket, we should continue hosting your repo at Mozilla.


*Why Not Move Everything to Github/Bitbucket/etc?*
Mozilla  prefers to keep repositories public by-default. This does not 
fit  Github's business model which is built around private repos. 
Github's free  service does not provide any availability guarantee. 
There is also a problem of github not supporting hg.


I'm not completely sure why we can't move everything to Bitbucket. 
Some of it is to do with anecdotal evidence of robustness problems. 
Some of it is the lack of hooks (sans post-receive POSTs). Additionally, as 
with GitHub, there is no availability guarantee.


Hosting arbitrary Moz-related hg repositories does not make strategic 
sense. We should do the absolute minimum (e.g. http://bke.ro/?p=380) 
required to keep Firefox shipping smoothly and focus our efforts on 
making Firefox better.



Taras


ps. Footprint stats:

*Largest User Repos Out Of ~130GB*
1.1G dmt.alexandre_gmail.com
1.1G jblandy_mozilla.com
1.1G jparsons_mozilla.com
1.2G bugzilla_standard8.plus.com
1.2G mbrubeck_mozilla.com
1.2G mrbkap_mozilla.com
1.3G dcamp_campd.org
1.3G jst_mozilla.com
1.4G blassey_mozilla.com
1.4G gszorc_mozilla.com
1.4G iacobcatalin_gmail.com
1.5G cpearce_mozilla.com
1.5G hurley_mozilla.com
1.6G bsmedberg_mozilla.com
1.6G dglastonbury_mozilla.com
1.6G dtc-moz_scieneer.com
1.6G jlund_mozilla.com
1.6G sarentz_mozilla.com
1.6G sbruno_mozilla.com
1.7G mshal_mozilla.com
1.9G mhammond_skippinet.com.au
2.1G lwagner_mozilla.com
2.4G armenzg_mozilla.com
2.4G dougt_mozilla.com
2.5G bschouten_mozilla.com
2.7G hwine_mozilla.com
2.8G eakhgari_mozilla.com
2.8G mozilla_kewis.ch
2.9G rcampbell_mozilla.com
3.1G bhearsum_mozilla.com
3.1G rjesup_wgate.com
3.2G agal_mozilla.com
3.3G axel_mozilla.com
3.3G prepr-ffxbld
4.2G jford_mozilla.com
4.3G mgervasini_mozilla.com
4.6G lsblakk_mozilla.com
5.0G bsmith_mozilla.com
5.5G nthomas_mozilla.com
5.8G coop_mozilla.com
6.5G jhopkins_mozilla.com
7.7G raliiev_mozilla.com
9.2G catlee_mozilla.com
13G stage-ffxbld

*Space Usage by Non-user repos ~100GB*
24K integration/gaia-1_4
28K addon-sdk
28K projects/collusion
32K integration/gaia-1_1_0

Re: js-inbound as a separate tree

2013-12-19 Thread David Burns
Personally I find the branches we have annoying; they are papering over 
the real problem, which is that our feedback cycles once something has 
landed are far too long. For that reason alone I am against the idea.


I think if we can solve the build/test scheduling and be smart about 
how we do our testing, we can greatly reduce the time the tree is closed.


More comments inline.

David

On 19/12/2013 18:48, Jason Orendorff wrote:

On dev-tech-js-engine-internals, there's been some discussion about
reviving a separate tree for JS engine development.

The tradeoffs are like any other team-specific tree.

Pro:
- protect the rest of the project from closures and breakage due to JS
patches
mozilla-inbound has been closed for, on average, ~4 days a month (data at 
the end of the email). This includes the 8 days in November when we weren't 
monitoring leaks properly. These ~4 days haven't been split into 
infrastructure vs test/build failures causing the closure, and they do 
include known downtime from RelEng when they do work.



- protect the JS team from closures and breakage on mozilla-inbound

see my comment above.

- avoid perverse incentives (rushing to land while the tree is open)


When auto-land is ready we will be able to throttle landings for people 
adding checkin-needed to bugs, since the tree is fragile on reopening. 
Currently the sheriffs watch for that and land things accordingly; they 
do the throttling themselves.




Con:
- more work for sheriffs (mostly merges)


If mostly merges, are you suggesting there will be little traffic on the 
branch, or that the JS team will watch the tree for failures? If the former, 
is there value in having another branch when there is low traffic?



- breakage caused by merges is a huge pain to track down
Yup! Not to mention merge conflicts that can happen between branches. 
Today there was a complaint in #jsapi when someone was trying to fix an 
issue but the test framework was out of sync and no merge was 
imminent. This was between b2g-inbound and mozilla-inbound. Adding 
another inbound feels like it's going to make it even harder.

- makes it harder to land stuff that touches both JS and other modules


I already have this pain working on something that B2G uses too. The 
B2G team has been working with RelEng to try to mitigate it but it's still 
painful.




We did this before once (the badly named tracemonkey tree), and it
was, I dunno, OK. The sheriffs have leveled up a *lot* since then.

There is one JS-specific downside: because everything else in Gecko
depends on the JS engine, JS patches might be extra likely to conflict
with stuff landing on mozilla-inbound, causing problems that only
surface after merging (the worst kind). I don't remember this being a
big deal when the JS engine had its own repo before, though.

We could use one of these to start:
https://wiki.mozilla.org/ReleaseEngineering/DisposableProjectBranches

Thoughts?

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


(treestatus)☁ mozilla-inbound python treestatus-stats.py --tree 
mozilla-inbound

Added on :2012-05-14T09:59:46
Tree has been closed for a total of 64 days, 23:18:12 since it was 
created on 2012-05-14T09:59:46

2012-08 : 1 day, 1:26:57
2012-09 : 1 day, 3:31:16
2012-10 : 2 days, 21:33:14
2012-11 : 20:45:45
2012-12 : 2 days, 1:19:51
2013-01 : 2 days, 8:17:55
2013-02 : 4 days, 0:24:59
2013-03 : 6 days, 3:13:09
2013-04 : 4 days, 17:51:39
2013-05 : 5 days, 13:33:49
2013-06 : 2 days, 15:42:37
2013-07 : 6 days, 13:46:11
2013-08 : 4 days, 5:42:17
2013-09 : 4 days, 20:59:41
2013-10 : 4 days, 21:22:40
2013-11 : 8 days, 4:58:30
2013-12 : 2 days, 16:47:42


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: js-inbound as a separate tree

2013-12-19 Thread David Burns

On 19/12/2013 23:56, Jason Orendorff wrote:

On 12/19/13 4:55 PM, David Burns wrote:

On 19/12/2013 18:48, Jason Orendorff wrote:

Con:
- more work for sheriffs (mostly merges)

If mostly merges, are you suggesting there will be little traffic on
the branch or the JS team will watch the tree for failures?

Neither, I'm just saying the overall rate of broken patches wouldn't
increase much, which I think shouldn't be controversial.

That is, sheriffing is not watching trees, it's fighting bustage. Each
busted patch and each intermittent orange creates a ton of work. It
stands to reason that diverting some patches to a separate tree won't
increase the volume of patches, except to the degree it actually
improves developer efficiency (and let's have that problem, please).


For context, I manage the sheriffs so I want to be sure what I am signing 
them up for. If the overall rate of broken patches wouldn't increase 
much, why can't we keep things on inbound and, when the tree is closed, 
just use the checkin-needed keyword and let the sheriffs continue to 
manage the bustage and start landing patches again?





2013-07 : 6 days, 13:46:11
2013-08 : 4 days, 5:42:17
2013-09 : 4 days, 20:59:41
2013-10 : 4 days, 21:22:40
2013-11 : 8 days, 4:58:30
2013-12 : 2 days, 16:47:42

I know the point of including these numbers was, hey look it's not that
bad, but this is really shocking.


I know it's bad and this is why I am tracking this information! I am 
watching how many backouts are affecting closures[1] and what the 
backout-to-push ratio[2] is. Currently these figures scare me, and the 
default stance that I get from platform engineers is "it's probably 
cheaper to push and get backed out than push to try". This comes back to 
my point about papering over the cracks by spreading things around.



We're looking at an average of
something like 125 hours per month that developers can't check stuff in.
Even if the breakage is evenly distributed across time zones
(optimistic) we're looking at zero 9s of availability.


I know that RelEng are looking into how to do scheduling better; I am 
not sure where they are with this or if it has started, but it's a good 
first step. The fact that a push can take hours to build/test is the thing 
that we need to be pushing against. I think if we solve that problem 
there will be a significant drop in bad pushes. A bad push is 3 times 
more expensive than a good push just in compute hours (we have 1 backout 
in every 15 pushes on average), never mind the cost of someone doing a 
pull after a bad push and then trying to work out why things don't build.




We've all gotten used to it, but it's kind of nuts.


Couldn't agree more!



-j



David

[1] 
https://secure.theautomatedtester.co.uk/owncloud/public.php?service=filest=f54a3e2edabb70771d64e473b30780ac
[2] 
https://secure.theautomatedtester.co.uk/owncloud/public.php?service=filest=ca3312fa7e0914e8352e96d44a48569f

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Pushes to Backouts on Mozilla Inbound

2013-11-05 Thread David Burns

On 05/11/2013 18:11, Steve Fink wrote:

These stats are *awesome*! I've been wanting them for a long time, but
never got around to generating them myself. Can we track these on an
ongoing basis?


Sure! Since we need to be working on engineering productivity as a 
whole, I think this could be a good metric to see if other efforts are 
paying off.




On 11/05/2013 07:09 AM, Ed Morley wrote:

On 05 November 2013 14:44:27, David Burns wrote:

We appear to be doing 1 backout for every 15 pushes on a rough
average[4].

I've been thinking about this some more - and I believe the ratio is
probably actually even worse than the numbers suggest, since:

Yeah, 1 backout for every 15 pushes sounds quite a bit better than I'd
expect.


* Depending on how the backouts are performed, the backout of several
changesets/bugs are sometimes folded into one commit.

Can this be factored into the stats? As in, parse the backout commit
messages, gather the bug numbers (or infer them from the changeset if
not given), then map them to back to the pushes for that bug? It still
won't be 100% right, but it'll be closer.

qbackout does a little bit of this when it tries to find the right
commit message to reuse when you run with --apply. But it doesn't have
access to (nor need) the pushlog, which would be required for this.


I am happy to make tweaks. The data I get is quite raw, so I am happy to 
dive in deeper to get better data.



* The 'total commits' figure includes merges  other automated/non-dev
commits.

Can this be fixed?


Sure, this should be trivial to fix.


The benefits of this approach are:
* Available local compute time scales linearly with the number of devs
hired, unlike our Tryserver automation.

That doesn't seem like a fundamental property to me. At least
theoretically, much of the tryserver automation scales with the Amazon
cloud (aka it scales with the load on some corporate credit card that
I'm glad I don't have to see the statements for).

Again theoretically, we could be buying a local build/test box for every
dev hire  active volunteer, and setting up automation that bridges the
gap between a dev's main box and the try server. (More on this below.)


There is an efficiency that we are missing here, but that is a 
different discussion for when there is more data.



* Local dep builds are much quicker than Try clobber builds.

Let's split that up into builds vs tests.

For the stuff I work on, building is normally not a problem. But it can
be during heavy times, because doing builds means losing push races.
With wide-ranging stuff (where the probability of failures due to
rebases is high), this means you either have to push without a final
build or get repeatedly bumped to a later day. This should get better
with the current build system improvements, so perhaps this isn't much
of a problem anymore, but I'm running into it a fair amount right now.

For tests, it depends on the test suite. But many of them just really
suck to run locally. mach magic to identify a minimal subset of tests to
run would help a lot with this, but that's going to be a substantial
amount of work. For the most part, I think the try server is the way to
go for tests. As for resource usage, my personal opinion is that if you
restrict the tests to a single platform (a T push, which you can
generate by selecting something under Restrict tests to platform(s) on
http://trychooser.pub.build.mozilla.org/ ), then you're fine. I'd rather
people run tests on one try platform than whittle down the specific
tests to be run. (Well, for the first push. If you're working through a
particular issue on try, it makes sense to just test that one test suite.)

In short: use the try server. Build on everything. Test on one platform.
Run all the tests. If any fail, iterate on just the failed test suites
(unless you think your changes may break others.)

I don't have the data to prove it, but my guess is that this would
result in the lowest overall load. (Backouts are expensive! Especially
in hard-to-measure people time.)


I'm hopeful that with the build peer's ongoing overhaul of our build
system, dep build times for an average patch are going to be short
enough that there really is no excuse not to build locally. Add to
that ongoing work on improving mach commands to ease running just a
subset of the tests (for bonus points making use of the applied MQs to
guess which ones), and it really shouldn't be too onerous of a request.

Other ideas:

Would it be possible to restrict the statistics to only the active times
of day? It sucks when the tree is closed on a weekend or in the middle
of my night, but it's way way less of a problem when only a few devs are
impacted. The problem I see is tree closures when lots of people need to
land. Tree closures at other times are a different problem, and can be
addressed separately if needed. (You could even say backouts don't
matter if there's no queue in front of any test machines, which isn't
true when you consider

Re: Bringing Marionette and Mochitest closer

2013-01-30 Thread David Burns
I am all for moving towards Marionette being the basis of all 
frameworks. It will allow us to share tests with other vendors 
because Marionette is based on the W3C Browser Automation Spec[1].


One of the things that this brings up is how we can share tests with 
other vendors. Testharness.js has become the W3C in-browser standard, so 
if we are going for interop between vendors then testharness.js is what 
we will need to look at, or we can look at pure Marionette calls into 
the browser if we need to get out of the sandbox.


One thing to note, which might not be obvious from the Marionette 
documentation, is that it is easy to switch between Browser Chrome and 
Browser Content and execute commands (including pure JS).
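
For example, from the Python client it is just a context switch. A rough 
sketch (the module path and method names here are from the current 
client and may differ in the version you have):

from marionette_driver.marionette import Marionette

client = Marionette(host="localhost", port=2828)
client.start_session()

# Run JS against the page (content scope)...
client.navigate("https://example.org/")
page_title = client.execute_script("return document.title;")

# ...then switch to chrome scope and run JS against the browser itself.
client.set_context("chrome")
chrome_url = client.execute_script("return window.location.href;")

# Back to content for the next step.
client.set_context("content")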


Opera use their equivalent of Marionette, called OperaDriver[3], to 
load the browser, set the browser capabilities, and visit a server that 
loads the in-browser test and runs testharness.js, or to call into 
Chrome, since it accesses the browser via its Scope Protocol. If I 
remember correctly, Chromium is moving to that model too, but through 
their remote debugger, like us.


I have also put some inline comments below for more specific things.

If you want to discuss this further via vidyo/skype/irc let me know. I 
am keen to fix any potential shortcomings in Marionette.


David Burns
AutomatedTester

[1] https://dvcs.w3.org/hg/webdriver/raw-file/tip/webdriver-spec.html
[2] https://github.com/mozilla/mozbase/tree/master/mozprofile
[3] https://github.com/operasoftware/operadriver
[4] https://github.com/mozilla/mozbase/tree/master/mozhttpd
[5] https://wiki.mozilla.org/Auto-tools/Projects/Mozbase

On 30/01/2013 07:13, Jonas Sicking wrote:

We'd also need to improve the web server which is started by
marionette so that we can satisfy some of the use cases that we're
currently using .sjs files to solve in mochitest.


I am sure that we can update MozHTTPd[4] to have a similar feature. It 
might not be an exact match and might not be trivial to do, but it 
could be worth the effort at a later stage. MozHTTPd is part of 
MozBase[5], the ATeam project to standardise the tools we use in testing.
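
For the .sjs-style cases the key thing is a handler that can compute a 
response and keep state across requests. As a sketch of what that looks 
like (deliberately written against Python's standard library rather than 
MozHTTPd's real handler API, so don't read the registration below as how 
MozHTTPd spells it):

from http.server import BaseHTTPRequestHandler, HTTPServer

# State shared across requests, playing the role of an .sjs file's
# setState/getState.
hits = {"count": 0}

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        hits["count"] += 1
        body = ("request %d for %s" % (hits["count"], self.path)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8888), DynamicHandler).serve_forever()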




Instead we suggest that we use a simpler python based solution.


Marionette's python client will be great at this. We are already using 
this extensively.




This means that we can't immediately migrate the existing body of
mochitests to the new http server. But we only have a total of 184
.sjs files, so it seems like something that should be doable
eventually.

So for now this means that we'll have to set up a new test harness.
But I'm personally hoping that we can eventually let this harness
cover also our /mochitests which would be an all-around win.


So a quick win for this sounds like Marionette + (MozProfile[2] + 
Special Powers) to get what we want, but there might be more to it.



Would very much like to hear feedback on these plans.

Something else we talked about very briefly was to improve the test
environment which we use for running mochitests on B2G such that we
enable the same "adjust test, press reload" flow that is commonly used
on desktop when writing mochitests.


Since there is a Python package we can easily use the Python REPL to do 
things, but I don't see how Marionette would stop this. Marionette just 
drives the browser. If you just want to load a page then Marionette's 
navigate() method will load it.
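
So the adjust-test-and-reload loop from a Python REPL could look roughly 
like this (same caveat as the earlier sketch: exact client method names 
may differ between versions, and the test URL is made up):

from marionette_driver.marionette import Marionette

client = Marionette(host="localhost", port=2828)
client.start_session()

# Load the test page being worked on and read its output...
client.navigate("http://localhost:8888/tests/my_test.html")
print(client.execute_script("return document.body.textContent;"))

# ...edit the test on disk, then reload and read the output again.
client.refresh()
print(client.execute_script("return document.body.textContent;"))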




This will hopefully both help with
getting more tests authored for B2G-specific features (tests which can
hopefully be migrated to cross-platform as the features become
cross-platform), and help with getting the current body of mochitests
running on B2G.

/ Jonas
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform




Re: Bringing Marionette and Mochitest closer

2013-01-30 Thread David Burns
Marionette is one of the core frameworks for testing FirefoxOS but 
works pretty much everywhere since it is part of Gecko.


You can see it on TBPL with 
https://tbpl.mozilla.org/?tree=Mozilla-Inbound&jobname=marionette for 
example.


David

On 30/01/2013 18:33, Benjamin Smedberg wrote:

On 1/30/2013 2:13 AM, Jonas Sicking wrote:

A few of us got together last week and had a quick brainstorming
session about how to leverage the combined power from Mochitest and
Marionette better. The following issues were raised:
What is Marionette? From the discussion, it sounds like a W3C testing 
framework, but it's not clear whether it's something we're currently 
using, or something we're discussing using. I don't remember having 
seen it on TBPL, for example.


--BDS

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

