Re: What is future of editor mode-lines after the switch to clang-format?

2018-12-10 Thread Gregory Szorc
On Wed, Dec 5, 2018 at 5:51 AM Ehsan Akhgari 
wrote:

> On Fri, Nov 30, 2018 at 8:23 PM Ehsan Akhgari 
> wrote:
>
>> On Fri, Nov 30, 2018 at 4:08 PM Gregory Szorc  wrote:
>>
>>> On Fri, Nov 30, 2018 at 10:00 AM  wrote:
>>>
>>> > Now that all of mozilla-central has been migrated to use clang-format
>>> > automated code formatting, the question of what should happen with
>>> editor
>>> > modelines at the top of files should be considered.
>>> >
>>> > https://bugzilla.mozilla.org/show_bug.cgi?id=clang-format
>>> >
>>> > Here are some options and some arguments I've heard. Please reply with
>>> > further ideas or rationale. I've not classified points as pro/con and
>>> leave
>>> > that up to the reader's interpretation.
>>> >
>>> > Option 1: Remove mode lines
>>> >   - Devs are expected to run clang-format anyways (hopefully automated
>>> > with a hook of sorts)
>>> >   - Devs are free to set their modeline configuration elsewhere
>>> >   - If they aren't providing value, they deserve to be removed.
>>> >   - Many of these were already inconsistent/wrong, so this might be an
>>> > opportunity to phase out
>>> >   - Not all devs use vim/emacs, so we should think about workflows
>>> > that don't need stuff in every single source file.
>>> >   - The editorconfig project (https://editorconfig.org/) aims to solve
>>> > this for a variety of editors without marking each file
>>> >
>>> > Option 2: Fix mode lines
>>> >   - A correct text-width mode-line gives a closer first approximation
>>> for
>>> > devs
>>> >   - Certain files (e.g. moz.build, Obj-C headers) use non-standard
>>> > file types.
>>> >
>>> > A hybrid of these is also very possible, such as removing certain
>>> > attributes or only using them when the file type is non-standard.
>>> >
>>> > I had originally intended this discussion for js/ components, but it
>>> turns
>>> > out to be a question across the whole tree (even if the solution
>>> chosen is
>>> > per-module).
>>> >
>>>
>>> https://editorconfig.org/ has been gaining popularity and I think we
>>> should
>>> adopt it. Unlike mode lines, you can control behavior for multiple files
>>> with a single source file, making it much more convenient to use.
>>>
>>> Unfortunately, it doesn't look like you can set the file type with
>>> .editorconfig files. I think we should lobby for that to be added. Then
>>> in
>>> a year's time we can ditch mode lines for the remaining non-standard
>>> filenames in the repo.
>>>
>>
>> Great idea!  A future without modelines sounds really nice.
>>
>
> By the way, it seems like the emacs editorconfig plugin already has
> experimental support for a filetype -> mode mapping configuration (see
> file_type_ext and file_type_emacs here
> https://github.com/editorconfig/editorconfig-emacs#supported-properties).
> Is this sufficient for our needs as far as replacing the file type
> information in the Emacs modelines goes?
>

That looks promising!

But this file_type_ext only solves the problem for users of the emacs
operating system. Other editors (like vim) support emacs mode lines as
well. I'm assuming we'll want to wait for more general adoption lest we
disrupt non-emacs users.
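
For the curious, my rough reading of that README is that the mapping would
look something like this in a .editorconfig. Treat it as a sketch: both
properties are marked experimental and the exact semantics may differ.

    # Hypothetical sketch using the experimental emacs-plugin-only
    # properties: treat moz.build files as if they had a .py extension.
    [moz.build]
    file_type_ext = py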

FWIW https://github.com/editorconfig/editorconfig/issues/190 seems to be
the editorconfig issue tracking this feature request. Perusing the
discussion, it looks like the request needs a champion to push it
through. Also, there
doesn't seem to be a good "registry" of content types. Someone linked to
https://github.com/github/linguist/blob/master/lib/linguist/languages.yml
as a suggestion. But delegating mappings to a project that GitHub maintains
seems less than ideal. Perhaps someone could chime in and guide them to
something more standard, such as using the IANA media type registry? (I
would, but I'm not a standards expert and don't want to recommend something
when it isn't appropriate.)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Upcoming changes to our C++ Coding Style

2018-12-07 Thread Gregory Szorc
On Fri, Dec 7, 2018 at 1:57 PM Botond Ballo  wrote:

> On Fri, Dec 7, 2018 at 11:36 AM Sylvestre Ledru 
> wrote:
> > In the meantime, we will be running a bot weekly to reformat the
> > mistakes and add the changeset into the ignore lists.
> > But in the long run this won’t be sustainable, so once we gain
> > confidence that a good number of developers have successfully integrated
> > clang-format into their local workflow, we will look into enabling a
> > Mercurial hook on hg.mozilla.org to reject misformatted code upon push
> > time.  That will be the ultimate solution to help ensure that our code
> > will be properly formatted at all times.
>
> Have you considered something like running clang-format automatically
> during landing (i.e. as part of what Lando does to the patch)? That
> seems like it would achieve the best of both worlds, by not placing
> any requirements on what developers do locally, while also ensuring
> the tree is always properly formatted without cleanup commits.
>

I chatted with Sylvestre earlier today. While I don't want to speak for
him, I believe we both generally agree that the formatting should happen
"automagically" as part of the patch review and landing lifecycle, even if
the client doesn't have their machine configured for formatting on save.
This would mean that patches are either:

a) auto-formatted on clients as part of being submitted to Phabricator
b) updated automatically by bots after submission to Phabricator
c) auto-formatted by Lando as part of landing

Lando rewrites/rebases commits as part of landing, so commit hashes already
change. So if auto-formatting magically occurs and "just works" as part of
the review/landing process, there should be little to no developer
inconvenience compared to what happens today. i.e. developers don't need to
do anything special to have their patches land with proper formatting.
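
For anyone who wants to format locally in the meantime, something along
these lines should work (a sketch; check `mach help clang-format` for the
authoritative interface):

    $ # format your outstanding changes
    $ ./mach clang-format
    $ # or format specific files
    $ ./mach clang-format -p path/to/file.cpp
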
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What is future of editor mode-lines after the switch to clang-format?

2018-11-30 Thread Gregory Szorc
On Fri, Nov 30, 2018 at 10:00 AM  wrote:

> Now that all of mozilla-central has been migrated to use clang-format
> automated code formatting, the question of what should happen with editor
> modelines at the top of files should be considered.
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=clang-format
>
> Here are some options and some arguments I've heard. Please reply with
> further ideas or rationale. I've not classified points as pro/con and leave
> that up to the reader's interpretation.
>
> Option 1: Remove mode lines
>   - Devs are expected to run clang-format anyways (hopefully automated
> with a hook of sorts)
>   - Devs are free to set their modeline configuration elsewhere
>   - If they aren't providing value, they deserve to be removed.
>   - Many of these were already inconsistent/wrong, so this might be an
> opportunity to phase out
>   - Not all devs use vim/emacs, so we should think about workflows that
> don't need stuff in every single source file.
>   - The editorconfig project (https://editorconfig.org/) aims to solve
> this for a variety of editors without marking each file
>
> Option 2: Fix mode lines
>   - A correct text-width mode-line gives a closer first approximation for
> devs
>   - Certain files (e.g. moz.build, Obj-C headers) use non-standard file
> types.
>
> A hybrid of these is also very possible, such as removing certain
> attributes or only using them when the file type is non-standard.
>
> I had originally intended this discussion for js/ components, but it turns
> out to be a question across the whole tree (even if the solution chosen is
> per-module).
>

https://editorconfig.org/ has been gaining popularity and I think we should
adopt it. Unlike mode lines, you can control behavior for multiple files
with a single source file, making it much more convenient to use.

Unfortunately, it doesn't look like you can set the file type with
.editorconfig files. I think we should lobby for that to be added. Then in
a year's time we can ditch mode lines for the remaining non-standard
filenames in the repo.
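
To illustrate the convenience, a single .editorconfig at the repo root can
express what currently requires a mode line in every file. A minimal
sketch, with illustrative values rather than a proposal for
mozilla-central's actual style:

    # Sketch of a repo-root .editorconfig; values are examples only.
    root = true

    [*]
    charset = utf-8
    insert_final_newline = true

    [*.{cpp,h}]
    indent_style = space
    indent_size = 2
    max_line_length = 80
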
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Notice of hg.mozilla.org x509 certificate change on 2018-10-31

2018-10-30 Thread Gregory Szorc
hg.mozilla.org's x509 server certificate (AKA an "SSL certificate") will
be rotated around 2018-10-31T17:00 UTC (10:00 PDT). That's less than 24
hours from now. Bug 1495464 tracks.

You may have the certificate's fingerprint pinned in your hgrc files.
Automated jobs may pin the fingerprint as well. *If you have the
fingerprint pinned, you will need to take action; otherwise Mercurial will
refuse to connect to hg.mozilla.org once the certificate is swapped.*

The easiest way to ensure your pinned fingerprint is up-to-date is to run
`mach vcs-setup` from a Mercurial checkout (it can be from an old
revision). If you are running Mercurial 3.9+ (which you should be, in
order to have security fixes), both the old and new fingerprints will be
pinned and the
transition will "just work." Otherwise you'll need to run `mach vcs-setup`
or take further action after the new certificate is installed. If a new
fingerprint is installed, run `mach vcs-setup` again after the transition
to remove the old fingerprint.
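
If you manage the pin by hand instead, the relevant Mercurial 3.9+ config
looks something like this (the fingerprints below are placeholders; the
real values are in the bug linked below):

    [hostsecurity]
    # Pin both old and new fingerprints during the transition; remove the
    # old one afterwards. Placeholder values shown.
    hg.mozilla.org:fingerprints = sha256:aa:bb:..., sha256:cc:dd:...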

Fingerprints and details of the new certificate (including hgrc config
snippets you can copy) are located at
https://bugzilla.mozilla.org/show_bug.cgi?id=1495464#c6. From a
certificate perspective, this transition is pretty boring: just a standard
certificate renewal from the same CA.

The IRC channel for this operational change will be #vcs. Fallout in
Firefox CI should be discussed in #ci. Please track any bugs related to
this change against bug 1495464.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using clang-cl to ship Windows builds

2018-07-10 Thread Gregory Szorc
On Tue, Jul 10, 2018 at 1:29 PM, David Major  wrote:

> Bug 1443590 is switching our official Windows builds to use clang-cl
> as the compiler.
>
> Please keep an eye out for regressions and file a blocking bug for
> anything that might be fallout from this change. I'm especially
> interested in hearing about the quality of the debugging experience.
>
> It's possible that the patch may bounce and we'll go back and forth to
> MSVC for a while. You can check your build's compiler at
> `about:buildconfig`. Treeherder is running an additional set of MSVC
> jobs on mozilla-central to make sure we can fall back to a green MSVC
> if needed.
>
> Watch for more toolchain changes to come. The next steps after this
> will be to switch to lld-link and enable ThinLTO. That will open the
> door to a cross-language LTO that could inline calls between Rust and
> C++. In the longer term we can look into cross-compiling from Linux.
>
> But for now, shipping our most-used platform with an open-source
> compiler is a huge milestone in and of itself. Big thanks to everyone
> who has contributed to this effort on the Mozilla side, and also big
> thanks to the developers of LLVM and Chromium who helped make clang on
> Windows a realistic possibility.
>

A lot of people have wanted to see this day for years (for various reasons
- an open source toolchain, potential for cross compiling, unified
toolchains across platforms, etc). This is a *major* milestone. And while
the transition will likely have a few bumps, the payoff should be well
worth it.

Thank you to everyone who helped us get here. From the initial work to
support Clang on Windows a few years ago. To the upstream work that was put
into LLVM and Clang (especially by Google/Chromium). To those that worked
through all the issues to transition the compiler today. This was a
significant effort by a lot of people. We should all be ecstatic we're
finally crossing this bridge. I know I am! Congratulations!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Getting files from builders to test machines

2018-07-02 Thread Gregory Szorc
On Sun, Jul 1, 2018 at 6:59 PM, Geoff Lankow  wrote:

> On comm-central we have a number of extensions that get built alongside
> everything else, but are not shipped. (That is, building produces a .xpi
> file in objdir/dist/xpi-stage.) I'd like to be able to have them uploaded
> as build artefacts so they are available to download by testers (both human
> and machine). I also need to tell the test machines about them so tests can
> be run.
>
> Can anyone advise me how I might go about doing this?
>

For Firefox builds, if you put a file in /builds/worker/artifacts/ (Linux)
or public/build (Windows) (relative to the task directory), it will
automatically get uploaded as an artifact. From the context of the Firefox
build system, I believe UPLOAD_PATH will be set and it will point to this
directory. Although this logic has changed a bit and my knowledge may be
slightly out of date.

For obtaining artifacts from a previous task, there are a handful of
solutions.

For tasks using `run-task`, USE_ARTIFACT_URLS and USE_ARTIFACT_PATH can be
set to cause artifacts to be fetched automatically. In Taskgraph YAML, you
define "use-artifacts" and just give the filename of the artifact you want
to fetch. e.g.
https://dxr.mozilla.org/mozilla-central/source/taskcluster/ci/source-test/python.yml#67.
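
Going from that description, the YAML looks roughly like the following.
This is a sketch from memory; the linked python.yml is the authoritative
example and the actual schema may differ:

    # Hypothetical task fragment; key names are illustrative.
    run:
        using: run-task
        use-artifacts:
            build:
                - target.tar.gz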

We also have "fetch" tasks and the `fetch-content` script. But these aren't
ubiquitous yet.

And I'm not sure how to best accomplish artifact fetching for non-run-task
tasks (e.g. a lot of Windows and macOS tasks). This area of the CI code
could use some love. ahal, myself, and others have been beating around the
bushes attempting to solve it. Consider pinging ahal or me privately if you
have follow-up questions.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: mozilla-inbound backout policy subject to change (become similar to autoland)

2018-06-26 Thread Gregory Szorc
On Tue, Jun 26, 2018 at 3:45 PM, Dave Townsend 
wrote:

> On Tue, Jun 26, 2018 at 3:06 PM Sebastian Hengst  wrote:
>
> >  Original Message 
> > Subject: Re: mozilla-inbound backout policy subject to change (become
> > similar to autoland)
> > From: Boris Zbarsky 
> > Date: 2018-06-24 21:28
> > > On 6/19/18 9:04 AM, Sebastian Hengst wrote:
> > >> TL;DR: We would like to change the mozilla-inbound backout policy to
> > >> be like autoland’s.
> > >
> > > This seems like a pretty reasonable change in general.
> > >
> > > Is there a well-documented try syntax string which corresponds to
> "these
> > > are the things that need to be green to avoid being backed out"?
> > > Presumably that string is not "-p all -u all" because it should exclude
> > > tier2 and lower things, but it's not entirely clear to me what the
> > > autoland backout criteria are.  I would assume we want developers to do
> > > a try run with that syntax before pushing to inbound as needed.
> > >
> > > -Boris
> >
> > The recommended Try practices can be found at
> > https://wiki.mozilla.org/Sheriffing/How_To/Recommended_Try_Practices
>
>
> This doesn't seem to answer the question. I frequently do patches where I
> can't reliably guess the extent of test breakage and want to run a full set
> to be on the safe side. What try configuration gives us tier 1 results
> only?
>

I agree that the default behavior of `-p all -u all -t all` is running too
much and confusing people. I filed bug 1471404 to change the behavior.

Will that be sufficient to fix the issue?

FWIW there were also some discussions at the all hands about in-tree
profiles for `mach try` to make it easier for people to perform common test
scenarios. Would this be a better solution?
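
For reference, the kind of narrower run I have in mind looks something
like this with today's try syntax (platform/suite names are illustrative):

    $ # instead of the kitchen sink:
    $ ./mach try syntax -b do -p all -u all -t all
    $ # request specific platforms and suites:
    $ ./mach try syntax -b do -p linux64 -u mochitests -t none
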
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Localized Repacks now visible on (most) pushes

2018-05-30 Thread Gregory Szorc
On Wed, May 30, 2018 at 10:08 AM, Justin Wood  wrote:

> Hello Everyone,
>
> tl;dr You should now see "L10n" jobs on treeherder with many pushes, these
> are tier 1 and if they break they would also be breaking Nightly so your
> patch would need to be backed out.
>
> As many of you know, especially the old guard [1] here, Localized Repacks
> have frequently been known to fail in weird and interesting ways on Nightly
> and Beta builds.
>
> Throughout the movement to taskcluster we have been reducing the
> differences in automation to make what we ship to release users happen with
> the same process as what we ship to nightly users. We have recently
> achieved that parity now that we have finished our migration to taskcluster
> [2]
>
> One straggler was our implementation of L10n builds on try [3][4], which
> had begun to frequently fail when users add/remove any localized file (.dtd
> or .ftl). And similarly we have always lacked the ability to easily vet a
> change to central/inbound/autoland as "will this break l10n".
>
> With the work I've now done we have aligned this "try" l10n job with what
> we perform in the Nightly and Release Promotion process, as well as allowed
> ourselves the ability to run these on every push.
>
> Implementation details:
> * For now these still run only when a subset of files change [5] but this
> list can be expanded easily, or we can rip it out and instead *always* run
> these jobs.
> * These jobs are performed using live L10n repositories, but just a small
> set of our total localization, specifically: en-CA, he, it, ja, ja-JP-mac
> [6]
> * As part of doing this work, we needed to specify the STUB Installer
> differently, if we need it on any new channels/builds we need to specify it
> in the build taskcluster kind, like [7]. We have a check in configure to
> error if its not set correctly [8]
>
> If you have any questions, feel free to reach out to me/releng.
> ~Justin Wood (Callek)
>

Thank you, Justin and everyone else who worked on this! l10n packaging has
historically suffered from a lack of visibility in CI and lack of
understanding outside its small circle of maintainers. Moving the l10n
automation to Taskcluster and giving it visibility in Treeherder as part of
regular jobs that normal people see will go a very long way to increasing
understanding of l10n packaging. It also paves the road for overhauling the
technical underpinnings of l10n packaging.

For those not aware, l10n packaging has historically been a major burden
for build system maintainers because of the convoluted ways it interacts
with the build system. Let's just say that we have actively avoided
touching code related to l10n out of fear that it will break a convoluted
system.

Now that the l10n CI can more easily be toggled and tested, it is
substantially easier to iterate on and we now have the confidence that our
changes won't break things. This is a game changer and will directly
enable us to perform some long-overdue refactoring of l10n code. Thank you!
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Update on rustc/clang goodness

2018-05-29 Thread Gregory Szorc
On Tue, May 29, 2018 at 1:48 PM, Jeff Gilbert  wrote:

> It would be sad to see us standardize on a clang monoculture.
>

I am sympathetic to that concern. (I have similar monoculture fears that
the open source world is over-pivoting towards non-OSS platforms like
GitHub and Slack as well.)

We will still support GCC on at least Linux because many downstream
distributions rely on it. I'm not sure what tier it will be supported at.
But I suspect it will be tier 1 for the foreseeable future. Although we may
not be able to fully test every revision with GCC due to capacity/cost
reasons. We can certainly get build coverage in CI and periodic test
coverage. So maybe that means tier 2 by definition. But nobody on the build
side is seriously talking about dropping GCC support completely.

MSVC is a separate beast. It is a great compiler. But it is also closed
source and Windows only. Visual Studio is a great IDE and getting better
with every release. So that would be a compelling reason to keep MSVC
support. But Visual Studio supports Clang natively these days. And you can
always configure solutions to "shell out" to an alternate build backend
(that's how the autogenerated Visual Studio files work today). So I'm less
confident that we will retain MSVC support longer than required to safely
complete the transition to Clang.

Does that assuage your concerns?


>
> On Tue, May 1, 2018 at 7:56 PM, Anthony Jones  wrote:
> > You may already know that the Low-Level Tools team support important
> tools
> > and code infrastructure. Lately we’ve also been improving our rustc/clang
> > (LLVM) story and I’d like bring everyone up to date.
> >
> >
> > There are a lot of important and interesting things going on:
> >
> >
> > Michael Woerister and Nathan Froyd recently (mid-March) enabled Rust
> > incremental compilation for Firefox developers
> >
> > Michael is experimenting with cross language inlining (rustc/clang) using
> > LTO
> >
> > Preliminary results show compiling LLVM with clang and using LTO on rustc
> > improves stylo compilation time by around 15%
> >
> > LTO requires more work to support Firefox
> >
> > Nick Nethercote has started speeding up rustc and has already had a
> number
> > of wins
> >
> > David Major has got Windows clang builds working with green tests
> >
> > LTO is still buggy (but works well enough to run benchmarks) and PGO
> won’t
> > run on Windows
> >
> > Win32 - clang with LTO w/o PGO is marginally faster than MSVC with
> LTO/PGO
> >
> > Win64 - clang with LTO w/o PGO is ~5-10% slower than MSVC with LTO/PGO
> >
> > Windows users can build locally with clang
> >
> > Mike Hommey is tracking improvements and regressions in the Rust compiler
> > (performance regression, crash reporting and more crash reporting)
> >
> > Tom Tromey is continuing to work on first class Rust support in GDB and
> LLDB
> >
> >
> > Ultimately, I’d like to see us standardise on clang on all platforms
> because
> > it makes Rust/C++ integration better as well as making things simpler for
> > developers and build system maintainers. We’ll get more done if we make
> our
> > own lives simpler.
> >
> >
> > I have some specific requests:
> >
> >
> > Let me know if you have specific Firefox cases where Rust is slowing you
> > down
> >
> > Cross language inlining is coming - avoid duplication between Rust and
> C++
> >
> > Start doing your developer builds with clang
> >
> > Anthony
> >
> >
> >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing tinderbox-builds from archive.mozilla.org

2018-05-11 Thread Gregory Szorc
On Fri, May 11, 2018 at 7:47 PM, Boris Zbarsky <bzbar...@mit.edu> wrote:

> On 5/11/18 7:06 PM, Gregory Szorc wrote:
>
>> Artifact retention and expiration boils down to a
>> trade-off between the cost of storage and the convenience of accessing
>> something immediately (as opposed to waiting several dozen minutes to
>> populate the cache).
>>
>
> Just to be clear, when doing a bisect, one _can_ just deal with local
> builds.  But the point is that then it takes tens of minutes per build as
> you point out.  So a bisect task that might otherwise take 10-15 minutes
> total (1 minute per downloaded build) ends up taking hours...


And this is where bisect-in-the-cloud would come in. You could trigger
dozens of builds on Taskcluster and they would all come back in the time it
takes a single build to run. Or you codify the thing you are testing for,
submit it to bisect-in-the-cloud, and it does everything for you and
reports on the results. It still might take dozens of minutes to a few
hours. But at least you wouldn't be burdened with the overhead of doing
everything manually.
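
For comparison, the manual flow today is something like this with
mozregression (the dates are illustrative):

    $ # bisect using archived nightlies between known good/bad dates
    $ mozregression --good 2018-04-01 --bad 2018-05-01
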
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Update on rustc/clang goodness

2018-05-11 Thread Gregory Szorc
On Thu, May 10, 2018 at 5:35 PM, Anthony Jones  wrote:

> You may already know that the Low-Level Tools team support important tools
> and code infrastructure. Lately we’ve also been improving our rustc/clang
> (LLVM) story and I’d like bring everyone up to date.
>
> There are a lot of important and interesting things going on:
>
> * Michael Woerister and Nathan Froyd recently (mid-March) enabled Rust
> incremental compilation for Firefox developers
> * Michael is experimenting with cross language inlining[1]
> (rustc/clang) using LTO
> * Preliminary results show compiling LLVM with clang and using LTO
> on rustc improves stylo compilation time by around 15% [on-my-machine
> measurements]
> * LTO requires more work to support Firefox
> * Nick Nethercote has started speeding up rustc[2] and has already had
> a number of wins
> * David Major has got Windows clang builds working with green tests
> * LTO is still buggy (but works well enough to run benchmarks) and
> PGO won’t run on Windows
> * Win32 - clang with LTO w/o PGO is marginally faster than MSVC
> with LTO/PGO
> * Win64 - clang with LTO w/o PGO is ~5-10% slower than MSVC with
> LTO/PGO
> * Windows users can build locally with clang[3]
> * Mike Hommey is tracking improvements and regressions in the Rust
> compiler (performance regression[4], crash reporting[5] and more crash
> reporting[6])
> * Tom Tromey is continuing to work on first class Rust support in GDB
> and LLDB
>
> Note: this is a summary of only the rustc/clang stuff that is happening.
>
> Ultimately, I’d like to see us standardise on clang on all platforms
> because it makes Rust/C++ integration better as well as making things
> simpler for developers and build system maintainers. We’ll get more done if
> we make our own lives simpler.
>
> I have some specific requests for you:
>
> Let me know if you have specific Firefox related cases where Rust is
> slowing you down (thanks Jeff [7])
> Cross language inlining is coming - avoid duplication between Rust and
> C++ in the name of performance
> Do developer builds with clang
>

This is all fantastic work!

Speaking as a build system maintainer, I would also like to see us
standardize on Clang on all platforms for various reasons, especially
simplicity. Speaking as an open source advocate, I can't wait to be using
an open source toolchain on Windows. Speaking as someone who cares about
developer ergonomics, I can't wait until we reduce the number of toolchains
we support as tier 1 platforms so we all don't have to spend so much effort
making things work on N+1 platforms. Reducing the number of toolchains we
need to worry about is very much analogous to the security benefits of
reducing attack surface area.

I do have some concerns about dropping support for non-Clang toolchains and
what that will mean for various people who rely on them. But I think
"switching toolchains" is a separate discussion from "dropping toolchains"
and we can largely defer those discussions. As far as switching to Clang
for all builds we ship to users is concerned, I'm all aboard from the build
system side.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing tinderbox-builds from archive.mozilla.org

2018-05-11 Thread Gregory Szorc
On Wed, May 9, 2018 at 11:01 AM, Ted Mielczarek  wrote:

> On Wed, May 9, 2018, at 1:11 PM, L. David Baron wrote:
> > > mozregression won't be able to bisect into inbound branches then, but I
> > > believe we've always been expiring build artifacts created from
> integration
> > > branches after a few months in any case.
> > >
> > > My impression was that people use mozregression primarily for tracking
> down
> > > relatively recent regressions. Please correct me if I'm wrong.
> >
> > It's useful for tracking down regressions no matter how old the
> > regression is; I pretty regularly see mozregression finding useful
> > data on bugs that regressed multiple years ago.
>
> To be clear here--we still have an archive of nightly builds dating back
> to 2004, so you should be able to bisect to a single day using that. We
> haven't ever had a great policy for retaining individual CI builds like
> these tinderbox-builds. They're definitely useful, and storage is not that
> expensive, but given the number of build configurations we produce nowadays
> and the volume of changes being pushed we can't archive everything forever.


It's worth noting that once builds are deterministic, a build system is
effectively a highly advanced caching mechanism. It follows that cache
eviction is therefore a tolerable problem: if the entry isn't in the cache,
you just build again! Artifact retention and expiration boils down to a
trade-off between the cost of storage and the convenience of accessing
something immediately (as opposed to waiting several dozen minutes to
populate the cache).

The good news is that Linux Firefox builds have been effectively
deterministic (modulo PGO and some minor build details like the build time)
for several months now (thanks, glandium!). And moving to Clang on all
platforms will make it easier to achieve deterministic builds on other
platforms. The bad news is we still have many areas of CI that are not
hermetic and attempts to retrigger Firefox build tasks in the future have a
very high possibility of failing for numerous reasons (e.g. some dependent
task of the build hits a 3rd party server that is no longer available or
has deleted a file). In other words, our CI results may not be reproducible
in the future. So if we delete an artifact, even though the build is
deterministic, we may not have all the inputs to reconstruct that result.

Making CI hermetic and reproducible far in the future is a hard problem.
There are esoteric failure scenarios like "what if we need to fetch content
from a server in 2030 but TLS 1.2 has been disabled due to a critical
vulnerability and code in the hermetic build task doesn't support TLS 1.3."
In order to realistically achieve reproducible builds in the future, we
need to store *all* inputs somewhere reliable where they will always be
available. Version control is one possibility. A content-indexed service
like tooltool is another. (At Google, they check in the source code for
Clang, glibc, binutils, Linux, etc into version control so all they need is
a version revision and a bootstrap compiler (which I also suspect they
check into the monorepo) to rebuild the world from source.)

What I'm trying to say is we're making strides towards making builds
deterministic and reproducible far in the future. So hopefully in a few
years we won't need to be concerned about deleting old data because our
answer will be "we can easily reproduce it at any time."
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is super-review still a thing?

2018-05-10 Thread Gregory Szorc


On May 10, 2018, at 06:47, Randell Jesup  wrote:

>>> On Thu, Apr 26, 2018, at 12:41 AM, Mark Côté wrote:
>>> How we might use blocking reviewers in our workflow is still open, but 
>>> it could be used for training up reviewers, in which case the trainee 
>>> would be a regular reviewer and the module peer/owner would be a 
>>> blocking reviewer.
>> 
>> It's not uncommon for me to submit patches for review from multiple people
>> where I require all reviewers to sign off on the patch, usually because I
>> ask them to look at different parts of the patch.  I don't think I have
>> ever sent a patch to get review from more than one person with the
>> intention of landing it if only one person OKs it.  (I'd probably needinfo
>> people to get that kind of feedback.)  So ignoring potential new workflows
>> like the training one you mention, I think I would exclusively use blocking
>> reviewers.  It's good to know that Phabricator will help me avoid
>> accidentally landing patches when not all of my reviewers have signed off.
> 
> Agreed... but it sounds like they're not quite sure how this will be
> used.  I'm concerned that developers may be confused and just as for
> review by N people, not realizing it can be landed when one of them
> r+'s.  (Especially if the developer doesn't frequently use multiple
> reviewers.)  At minimum, if this is how it works, there will need to be
> clear communication about when to use this - or (better!) to switch the
> default to blocking, and have the unusual/more-work-to-ask-for case be
> any-of.
> 
> Once in a blue moon I'll ask for a fast review from any of N people,
> then cancel the r?'s once i have 1 r+.  but this is really rare, and
> mostly when there's a time deadline to land something ASAP (sec issue,
> or to get in ahead of a merge date).  99% of the time when I ask for N
> people, I need r+ from all of them.
> 
> On the other side, I do know that the build peers (IIRC) use a shared
> reviewing setup, where most bugs are thrown in a pool for any of them to
> review.  But that's not the normal workflow for any other group I know
> of in Mozilla, at least in Firefox.  (I don't know them all, though.)

When you take a step back, I think you have to conclude that our current model 
of requesting reviews from *individuals* is not practical for the common case. 
The way our code governance model works is that groups of people (module 
members) are empowered to sign off on changes. But because existing review 
tooling (practically speaking) only allows assigning reviews to individuals (as 
opposed to a group consisting of module members), that’s what we do. I.e. tools 
are dictating sub-optimal workflows that don’t match our governance model.

In the common case, I think group assignment is preferred. Maybe it assigns to 
an available person at submission time. But the submitter should not need to be 
burdened with picking an individual. (This picking requirement is especially 
hostile to new contributors who don’t know who to pick for review.)

Phabricator has the concept of a normal “reviewer” and a “blocking reviewer.” 
“Blocking” means they must sign off before the review is marked as accepted.

I think the world we’ll eventually get to is one where most reviews are 
assigned to groups (ideally automatically by mapping files changed to 
appropriate reviewers) as normal non-blocking reviewers in Phabricator. For 
special cases, we can assign individuals for review and/or set “blocking 
reviewers” so someone or a member of a group *must* sign off. The review 
submission tooling should make all of this turnkey - allowing people to easily 
opt in to the special case if warranted. But most review requirement policies 
should be codified so tools can apply/enforce them without human intervention.

It’s worth noting that in a future world where the review tool isn’t 
constraining us to assigning reviews to individuals and where group assignment 
is the default, assignments to individuals may represent shortcomings in the 
distribution of expertise. I.e. if only one person is capable of performing a 
review, then we have a bus factor of 1 and there is a business continuity 
concern. Assigning reviews to groups - perhaps specialized groups like “dom 
experts” instead of the normal “dom peers” group - helps reinforce that 
reviewing - and the knowledge required to perform it - is a shared 
responsibility and isn’t limited to single individuals. This is another reason 
why I think it is a good idea to lean heavily towards group-based review over 
individual. Even if the group consists of a single person, a group reinforces 
that the responsibility is distributed.

I think that once review submission doesn’t require submitters to choose 
reviewers and reviews magically end up on the plates of appropriate people, 
we’ll all look back at our current workflow and view it as primitive.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Default Rust optimization level decreased from 2 to 1

2018-04-25 Thread Gregory Szorc


> On Apr 25, 2018, at 08:35, Emilio Cobos Álvarez <emi...@crisal.io> wrote:
> 
> There's a fair amount of people bitten by this constantly: they see long
> style profiling markers when what's really happening is that they're
> profiling a local opt build, and thus the Rust code in style has barely
> any optimization and is slow.
> 
> I know that shouldn't be a thing, and that people should --enable-release for 
> profiling and all that. But given it happens, could we consider reverting 
> this change?

I’m going to assert that the set of people who care about fast builds is 
greater than the set of people who care about profiling the style system on 
local builds. Do you disagree? Or is the slowness of the style system 
undermining the ability for the local build to be usable/useful in general?

Regardless, I think the most productive conversation we can have is about how 
we can make the Firefox development UI guide people towards an optimal build 
configuration. The reality is that there are several “factions” of developers 
that each want different things from a build. There is no 
one-build-config-fits-all.

The build peers have long thought about adding the concept of “build profiles” 
to the build system. Think of them as in-tree mozconfigs for common, supported 
scenarios.

But without a forcing function making people choose an initial “profile,” 
people will continue to get bit by whatever default we choose.

We once printed a fatal message on first invocation of the build system. This 
was intended to be such a forcing function. But there was popular revolt and 
this feature was removed. Considering that I managed to upset a lot of people 
with this feature, I’m reluctant to go down that path again because it would 
seemingly undermine general support for the build system team and jeopardize 
our ability to make meaningful changes.
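
In the meantime, for the profiling use case specifically, a mozconfig
along these lines (a sketch using the knobs from my original post, quoted
below) should restore optimized Rust:

    # Match the configuration of builds we ship to users.
    ac_add_options --enable-release
    # Or, to only restore Rust optimization:
    mk_add_options "export RUSTC_OPT_LEVEL=2"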

> 
>> On 10/25/17 7:34 PM, Gregory Szorc wrote:
>> Compiling Rust code with optimizations is significantly slower than
>> compiling without optimizations. As was measured in bug 1411081, the
>> difference between rustc's -Copt-level 1 and 2 on an i7-6700K (4+4 cores)
>> for a recent revision of mozilla-central was 325s/802s wall/CPU versus
>> 625s/1282s. This made Rust compilation during Firefox builds stand out as a
>> long pole and significantly slowed down builds.
>> Because we couldn't justify the benefits of level 2 for the build time
>> overhead it added, we've changed the build system default so Rust is
>> compiled with -Copt-level=1 (instead of 2).
>> Adding --enable-release to your mozconfig (the configuration for builds we
>> ship to users) enables -Copt-level=2. (i.e. we didn't change optimization
>> settings for builds we ship to users.)
>> Adding --disable-optimize sets to -Copt-level=0. (This behavior is
>> unchanged.)
>> If you want explicit control over -Copt-level, you can `export
>> RUSTC_OPT_LEVEL=` in your mozconfig and that value will always be
>> used. --enable-release implies a number of other changes. So if you just
>> want to restore the old build system behavior, set this variable in your
>> mozconfig.
>> Also, due to ongoing work around Rust integration in the build system, it
>> is dangerous to rely on manual `cargo` invocations to compile Rust because
>> bypassing the build system (not using `mach build`) may not use the same
>> set of RUSTFLAGS that direct `cargo` invocations do. Things were mostly in
>> sync before. But this change and anticipated future changes will cause more
>> drift. If you want the correct behavior, use `mach`.
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Is super-review still a thing?

2018-04-20 Thread Gregory Szorc
On Fri, Apr 20, 2018 at 2:51 PM, L. David Baron  wrote:

> On Friday 2018-04-20 14:23 -0700, Kris Maglione wrote:
> > For a lot of these patches, my opinion is only really critical for
> certain
> > architectural aspects, or implementation aspects at a few critical
> points.
> > There are other reviewers who are perfectly qualified to do a more
> detailed
> > review of the specifics of the patch, and have more spare cycles to
> devote
> > to it. Essentially, what's needed from me in these cases is a
> super-review,
> > which I can do fairly easily, but instead I become a bottleneck for the
> code
> > review as well.
> >
> > So, for the areas where I have this responsibility, I'd like to
> institute a
> > policy that certain types of changes need a final super-review from me,
> but
> > should get a detailed code review from another qualified reviewer when
> that
> > makes sense.
>
> I think it's reasonable to use the super-review flag for this sort
> of high-level or design review, at least until we come up with a
> better name for it (and make a new flag, and retire the old one).  I
> don't think the super-review policy (as written) is meaningful
> today.


FWIW I'm pretty sure Phabricator won't support the super-review flag. And
since we're aiming to transition all reviews to Phabricator...
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent To Require Manifests For Vendored Code In mozilla-central

2018-04-09 Thread Gregory Szorc
On Mon, Apr 9, 2018 at 9:52 PM, Mike Hommey  wrote:

> On Tue, Apr 10, 2018 at 02:46:40PM +1000, Martin Thomson wrote:
> > This seems like a good idea.
> >
> > Please consider adding hg.mozilla.org to your list of things you will
> > clone from.  That will allow us to remove some ugly hacks from the
> > tree for vendoring NSS and NSPR.  (libffi uses the same script, but it
> > seems to be on GitHub now, so that seems like an easy win assuming
> > even that libffi still uses that script.)  We could use the NSS GitHub
> > mirror, but NSPR doesn't have one yet and those can't be the only
> > projects that we have using mercurial.  Of course, if that is the
> > case, then don't worry, we'll just have to talk more seriously about
> > moving NSS to GitHub.
> >
> > You don't permit the use of a tag for vendoring, is that intentional?
> > Tags for releases are so commonplace that you should consider it.  You
> > shouldn't need a branch AND a tag either.  More so because branches
> > move, which makes them not reliable here.
> >
> > Having a version in addition to a tag is redundant and therefore
> > likely to result in mismatches.  I'd say that the tag is enough.
>
> I'd say a revision should be mandatory. Branches and tags can change.
> Revisions can't.


I'll reaffirm this. The full hash of a revision - at least in Mercurial and
Git - means more than a branch or a tag. Both the branch name and a tag are
symbolic and can move. Even if you GPG sign a tag, someone could steal your
signing key and re-sign a new tag for a different revision and a VCS
`verify` command would happily accept it because the signature is valid! A
revision hash, however, is constant for all of time and a revision hash
cannot change out from under you to refer to different content.

Or at least that's the theory. Hash collisions /could/ occur. However, the
actual probability of a hash collision is currently far less than the
probability that someone moves a pointer on the symbolic branch or tag name
because a SHA-1 preimage attack is currently far more difficult than
running VCS operations [that Mercurial or Git let you run out of the box]
which let you update a symbolic name.
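
To make that concrete, a manifest entry could pin a revision like this.
This is a purely hypothetical sketch; the field names are illustrative and
not the proposed schema:

    # Hypothetical vendoring manifest entry.
    origin:
      name: libexample
      url: https://github.com/example/libexample
      release: v1.2.3  # human-readable tag, informational only
      revision: 4a5b6c7d8e9f0123456789abcdef0123456789ab  # authoritative
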
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to require Python 3 to build Firefox 59 and later

2018-04-05 Thread Gregory Szorc
On Fri, Nov 10, 2017 at 3:27 PM, Gregory Szorc <g...@mozilla.com> wrote:

> For reasons outlined at
> https://bugzilla.mozilla.org/show_bug.cgi?id=1388447#c7, we would like to
> make Python 3 a requirement to build Firefox sometime in the Firefox 59
> development cycle. (Firefox 59
> will be an ESR release.)
>
> The requirement will likely be Python 3.5+. Although I would love to make
> that 3.6 if possible so we can fully harness modern features and
> performance.
>
> I would love to hear feedback - positive or negative - from downstream
> packagers and users of various operating systems and distributions about
> this proposal.
>
> Please send comments to dev-bui...@lists.mozilla.org or leave them on bug
> 1388447.
>

We decided to wait until after the ESR release to establish the Python 3.5+
requirement. Then ESR got bumped to 60.

Nightly is now 61 and free of ESR. And I just submitted patches for review
that establish the Python 3.5+ requirement.

So this is a courtesy heads up that builds will soon require Python 3.5+.
`configure` today has a "checking for Python 3" line that will tell you if
things will "just work" once this lands.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Unaligned NEON memory access on ARMv7 phones

2018-03-28 Thread Gregory Szorc
On Tue, Mar 27, 2018 at 3:44 AM, Henri Sivonen  wrote:

> I'm having trouble finding reliable information about the performance
> of unaligned NEON memory access on ARMv7 phones.
>
> What I can find is:
>
>  * ARMv7 seems to allow unaligned access to be a trap-to-kernel kind
> of performance disaster, but it's hard to find information about
> whether the phone SoCs we care about are actually disastrous like
> that.
>
>  * On aarch64, unaligned access is the same instruction as aligned
> access and gets dynamically penalized, but only minimally, if the
> access crosses a cache line boundary. *Presumably* ARMv7 code running
> on an ARMv8 core gets the same benefit.
>
> Do we know what performance characteristics we can assume for
> unaligned NEON loads/stores on Android phones that have ARMv7 cores
> and recent enough Android that Fennec runs in the first place?


Is
http://fastcompression.blogspot.fr/2015/08/accessing-unaligned-memory.html
and/or the comments for MEM_FORCE_MEMORY_ACCESS at
https://github.com/facebook/zstd/blob/dev/lib/common/mem.h useful?

I could also introduce you to the zstandard developers if you think it
would be useful (compression often spends a large portion of its execution
time accessing and moving memory and I'm pretty sure they know arcane
memory access details like this). Reply privately if you want that
introduction.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Core::Build Config has moved

2018-03-13 Thread Gregory Szorc
Bug 1406536 has the context.

I should note that we now have shiny new components for aspects of the
build workflow including bootstrapping, linting, static analysis, generated
documentation, and CI task configuration. I encourage people to look at
those components at
https://bugzilla.mozilla.org/enter_bug.cgi?product=Firefox%20Build%20System
- and possibly even follow components that are relevant to your interests.
The new components are still low volume, unlike the main "General"
component.

On Tue, Mar 13, 2018 at 11:20 AM, Kartikaya Gupta 
wrote:

> I was looking to file a bug in Core::Build Config and discovered it
> doesn't exist any more. :mccr8 told me there is now a "Firefox Build
> System" product that encompasses what used to be Core::Build Config.
>
> I'm not a build peer so I don't know anything more than this; if
> anybody wants more context hopefully the build peers can answer any
> questions. This is just a heads-up in case anybody else goes looking
> for the component.
>
> Cheers,
> kats
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Who can review licenses these days?

2018-03-09 Thread Gregory Szorc
On Fri, Mar 9, 2018 at 7:28 AM, David Teller  wrote:

> I'll need a license review for a vendored Rust package. Who can perform
> these reviews these days?
>

We have an allow list of licenses in our Cargo config. So if the license is
already allowed, you can reference the crate in a Cargo.toml and `mach
vendor rust` will "just work." Otherwise, we need to review the license
before any action is taken.

**Only attorneys or someone empowered by them should review licenses and
give sign-off on a license.**

You should file a Legal bug at
https://bugzilla.mozilla.org/enter_bug.cgi?product=Legal. I think the
component you want is "Product - feature." Feel free to CC me and/or Ted,
as we've dealt with these kinds of requests in the past and can help bridge
the gap between engineering and legal. We also know about how to follow up
with the response (e.g. if we ship code using a new license, we need to add
something to about:license).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Commit messages in Phabricator

2018-02-12 Thread Gregory Szorc
On Mon, Feb 12, 2018 at 6:09 AM, Boris Zbarsky  wrote:

> On 2/11/18 3:57 PM, Emilio Cobos Álvarez wrote:
>
>> Arc wants to use something like:
>>
>
> So from my point of view, having the bug# easily linked from various
> places where the short summary is all that's shown (pushlogs especially) is
> pretty useful.  It saves loading a bunch of extra things when trying to go
> from regression-range pushlogs to the relevant bugs


It's generally pretty easy to modify tooling to find linked bugs. We have a
shared Python module for parsing commit messages into useful metadata and
that's used by various tools for extracting bugs, reviewers, so they can be
rendered in various places (
https://hg.mozilla.org/hgcustom/version-control-tools/file/3743b6c62d05/pylib/mozautomation/mozautomation/commitparser.py).
And updating templating on hg.mozilla.org to render useful fields more
prominently is generally pretty turnkey. Please file bugs in the
hg.mozilla.org component if you want the display of things tweaked!
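
To give a flavor of what that parsing involves, here is a deliberately
simplified Python sketch (not the real commitparser module; see the link
above for that):

    import re

    # Simplified approximations of the patterns such a parser uses.
    BUG_RE = re.compile(r'\bbug\s+(\d+)', re.IGNORECASE)
    REVIEWERS_RE = re.compile(r'\br[?=]([\w.-]+(?:,[\w.-]+)*)')

    def parse_summary(summary):
        """Extract bug numbers and reviewer names from a commit summary."""
        bugs = [int(b) for b in BUG_RE.findall(summary)]
        reviewers = []
        for group in REVIEWERS_RE.findall(summary):
            reviewers.extend(group.split(','))
        return bugs, reviewers

    print(parse_summary('Bug 1366401 - Support Phabricator; r=ted,gps'))
    # -> ([1366401], ['ted', 'gps'])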

More to the point of the original question, there are several reasons why
we don't want to use Arcanist (`arc`) for Firefox development (and probably
more broadly at Mozilla). The initial comment in bug 1366401 records a lot
of them. The plan of record is to author a minimal client for submitting
reviews via Phabricator's HTTP API and to plumb that up to `mach` as
needed. This work isn't part of the Phabricator MVP, which is why the
Phabricator team hasn't worked on it.

If you use Mercurial, there is an alternative to Arcanist available today!
You can combine Mercurial's "phabricator" extension with a minimal wrapper
that Tom Prince wrote so it recognizes Mozilla's bug numbers and reviewer
annotations. Instructions are at
https://bugzilla.mozilla.org/show_bug.cgi?id=1366401#c4. This will likely
serve as the base for the eventual solution. It is probably less effort to
configure this than to install Arcanist. It is unsupported, but I think it
is better than using Arcanist.

Git users will likely have to wait until after the Phabricator MVP is
launched before there is staffing to work on a client. We kind of lucked
out that Mercurial had something that was almost drop-in ready for us to
consume and that is why there is a Mercurial alternative (i.e. this isn't
about prioritizing Mercurial over Git).

While I'm not working on either client implementation and am not part of
the Phabricator team, if someone wants to formalize the Mercurial or Git
clients in version-control-tools before the Phabricator team has time to
work on them, I'd be happy to review code or provide technical assistance.
Track things against bug 1366401 if you do any work.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Gregory Szorc
On Tue, Jan 16, 2018 at 1:42 PM, Ted Mielczarek  wrote:

> On Tue, Jan 16, 2018, at 10:51 AM, Jean-Yves Avenard wrote:
> > Sorry for resuming an old thread.
> >
> > But I would be interested in knowing how long that same Lenovo P710
> > takes to compile *today*…
> >
> > In the past 6 months, compilation times have certainly increased
> > massively.
> >
> > Anyhow, I’ve received yesterday the iMac Pro I ordered early December.
> > It’s a 10 cores Xeon-W (W-2150B) with 64GB RAM.
> >
> > Here are the timings I measured, in comparison with the Mac Pro 2013 I
> > have (which until today was the fastest machine I had ever used):
> >
> > macOS 10.13.2:
> > Mac Pro late 2013 : 13m25s
> > iMac Pro : 7m20s
> >
> > Windows 10 Fall Creators Update:
> > Mac Pro late 2013 : 24m32s (was 16 minutes less than a year ago!)
> > iMac Pro : 14m07s (16m10s with Windows Defender going)
> >
> > Interestingly, I can almost no longer get any benefits when using
> > icecream: with 36 cores it saves 11s, with 52 cores it saves 50s only…
> >
> > It’s a very sweet machine indeed.
>
> I just did a couple of clobber builds against the tip of central
> (9be7249e74fd) on my P710 running Windows 10 Fall Creators Update
> and they took about 22 minutes each. Definitely slower than it
> used to be :-/
>
>
On an EC2 c5.17xlarge (36+36 CPUs) running Ubuntu 17.10 and using Clang
5.0, 9be7249e74fd does a clobber (but already configured) `mach build` in
7:34. Rust
is very obviously the long pole in this build, with C++ compilation (not
linking) completing in ~2 minutes.

If I enable sccache for just Rust by setting `mk_add_options "export
RUSTC_WRAPPER=sccache"` in my mozconfig, a clobber build with a populated
cache for Rust completes in 3:18. And Rust is still the long pole -
although only by a few seconds. It's worth noting that CPU time for this
build remains in the same ballpark. But overall CPU utilization increases
from ~28% to ~64%. There's still work to do improving the efficiency of the
overall build system. But these are mostly in parts only touched by clobber
builds. If you do `mach build binaries` after touching compiled code, our
CPU utilization is terrific.

From a build system perspective, C/C++ scales up to dozens of cores just
fine (it's been this way for a few years). Rust is becoming a longer and
longer long tail (assuming you have enough CPU cores that the vast amount
of C/C++ completes before Rust does).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Gregory Szorc
Yes, most of the build time regressions in 2017 came from Rust. Leaning
more heavily on C++ features that require more processing or haven't been
optimized as much as C++ features that have been around for years is likely
also contributing.

Enabling sccache allows Rust compilations to be cached, which makes things
much faster on subsequent builds (since many Rust crates don't change that
often - but a few "large" crates like style do need to rebuild
semi-frequently).
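
Enabling it is a one-line change in your mozconfig (the same knob I
mentioned elsewhere in this thread):

    # Cache Rust compilation with sccache.
    mk_add_options "export RUSTC_WRAPPER=sccache"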

We'll be transitioning workstations to the i9's because they are faster,
cheaper, and have more cores than the Xeons. But if you insist on having
ECC memory, you can still get the dual socket Xeons.

Last I heard Sophana was having trouble finding an OEM supplier for the
i9's (they are still relatively new). But if you want to put in a order for
the i9 before it is listed in the hardware catalog, contact Sophana (CCd)
and you can get the hook up.

While I'm here, we also have a contractor slated to add distributed
compilation to sccache [to replace icecream]. The contractor should start
in ~days. You can send questions, feature requests, etc through Ted for
now. We also had a meeting with IT and security last Friday about more
officially supporting distributed compilation in offices. We want people to
walk into any Mozilla office in the world and have distributed compilation
"just work." Hopefully we can deliver that in 2018.

On Tue, Jan 16, 2018 at 11:35 AM, Ralph Giles  wrote:

> On Tue, Jan 16, 2018 at 11:19 AM, Jean-Yves Avenard wrote:
>
> > 12 minutes sounds rather long, it’s about what the macpro is currently
> > doing. I typically get compilation times similar to mac...
> >
>
> Yes, I'd like to see 7 minute build times again too! The E5-2643 has a
> higher clock speed than the Xeon W in the iMac Pro (3.4 vs 3.0 GHz) but a
> much lower peak frequency (3.7 vs 4.5 GHz) so maybe the iMac catches up
> during the single-process bottlenecks. Or it could be memory bandwidth.
>
>  -r


Re: Please do not use GetNativePath and GetNativeTarget in XP code and Windows-specific code

2017-12-04 Thread Gregory Szorc
On Mon, Dec 4, 2017 at 7:22 PM, Robert O'Callahan <rob...@ocallahan.org>
wrote:

> On Tue, Dec 5, 2017 at 2:00 PM, Gregory Szorc <g...@mozilla.com> wrote:
>
>> Many programming languages paper over these subtle differences leading to
>> badness. For example, the preferred path APIs for Python and Rust assume
>> paths are Unicode (they have their own logic to perform encoding/decoding
>> behind the scenes and depending on settings, run-time failures or data
>> loss
>> can occur).
>
>
> I don't think that's true for Rust. Rust's `Path` and `PathBuf` are the
> preferred data types and wrap an underlying `OsStr`/`OsString`, which
> claims to be able to represent any path for the target platform. On Linux
> an `OsString` is just an array of bytes with no specified encoding, and on
> Windows it appears to be an array of WTF-8 bytes, so that claim seems valid.
>

Yes. I was confusing Rust's handling of paths with environment variables
and command line arguments. std::env::vars() and std::env::args() panic if
an element isn't "Unicode." (The docs for those types don't say how Rust
chooses which encoding it treats the underlying bytes as, and now I'm
legitimately curious.) That's why there are vars_os() and args_os() to get
the raw data. I got mixed up and thought there was a Path and OsPath
distinction as well.


>
> Of course PathBuf isn't exactly what you want for an application like
> Mercurial, since there I guess you want a type that represents any path
> from any platform, including platforms other than the target platform...
>

Indeed. Mercurial treats all paths as bytes (for better or worse). There
are known problems with path handling on Windows. For the curious, read
https://www.mercurial-scm.org/wiki/WindowsUTF8Plan and
https://www.mercurial-scm.org/wiki/EncodingStrategy. And, uh, I should have
linked to EncodingStrategy before because that contains a lot of useful
info. Although it is a bit light on links to back up the research. That
page was mostly authored by mpm, so I generally trust the accuracy of the
content.




Re: Please do not use GetNativePath and GetNativeTarget in XP code and Windows-specific code

2017-12-04 Thread Gregory Szorc
On Wed, Nov 29, 2017 at 5:09 PM, Karl Tomlinson  wrote:

> I've always found this confusing, and so I'll write down the
> understanding I've reached, in the hope that either it will help
> others, or others can help me by correcting if these are
> misunderstandings.
>
> On Unix systems:
>
>   `nativePath`
>
>  contains the bytes corresponding to the native filename used
>  by native system calls.
>
>   `path`
>
>  is a UTF-16 encoding of an attempt to provide a human
>  readable version of the native filename.  This involves
>  interpreting native bytes according to the character encoding
>  specified by the current locale of the application as
>  indicated by nl_langinfo(CODESET).
>
>  For different locales, the same file can have a different
>  `path`.
>
>  The native bytes may not be valid UTF-8, and so if the
>  character encoding is UTF-8, then there may not be a valid
>  `path` that can be encoded to produce the same `nativePath`.
>
>   It is best to use `nativePath` for working with filenames,
>   including conversion to URI, but use `path` when displaying
>   names in the UI.
>
> On WINNT systems:
>
>   `path`
>
>  contains wide characters corresponding to the native filename
>  used by native wide character system APIs.  For at least most
>  configurations, I assume wide characters are UTF-16, in which
>  case this is also human readable.
>
>   `nativePath`
>
>  is an attempt to represent the native filename in the native
>  multibyte character encoding specified by the current locale
>  of the application.
>
>  For different locales, I assume the same file can have a
>  different `nativePath`.
>
>  I assume there is not necessarily a valid multibyte character
>  encoding, and so there may not be a valid `nativePath` that
>  can be decoded to produce the same `path`.
>
>   It is best to use `path` for working with filenames.
>   Conversion to URI involves assuming `path` is UTF-16 and
>   converting to UTF-8.
>
> The parameters mean very different things on different systems,
> and so it is not generally possible to write XP code with either
> of these, but Gecko attempts to do so anyway.
>
> The numbers of applications not using UTF-8 and filenames not
> valid UTF-8 are much smaller on Unix systems than the numbers of
> applications not using UTF-8 and non-ASCII filenames on WINNT
> systems, and so choosing to work with `path` provides more
> compatibility than working with `nativePath`.
>

My understanding from contributing to Mercurial (version control tools are
essentially virtual filesystems that need to store paths and realize them
on multiple operating systems and filesystems) is as follows.

On Linux/POSIX, a path is any sequence of bytes not containing a NULL byte.
As such, a path can be modeled by char*. Any tool that needs to preserve
paths from and to I/O functions should be using NULL terminated bytes
internally. For display, etc, you can attempt to decode those bytes using
an encoding. But there's no guarantee the byte sequences from the
filesystem/OS will be e.g. proper UTF-8. And if you normalize all paths to
e.g. Unicode internally, this can lead to round tripping errors.
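
To make the round-tripping concern concrete, here is a minimal Python 3
sketch (illustrative only; it assumes a POSIX filesystem that accepts
arbitrary non-NULL bytes and a writable current directory):

    import os

    # A filename whose bytes are not valid UTF-8 (0x80 is a stray
    # continuation byte).
    raw = b"caf\x80.txt"
    open(raw, "wb").close()

    # A bytes argument round-trips the name losslessly.
    assert raw in os.listdir(b".")

    # The str view decodes with surrogateescape, so it still encodes back
    # to the original bytes - but it is not valid UTF-8 text.
    name = os.fsdecode(raw)
    assert os.fsencode(name) == raw

    os.unlink(raw)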

On Windows, there are multiple APIs for I/O. Under the hood, NTFS is
purportedly using UTF-16 to store filenames. Although I can't recall if it
is actual UTF-16 or just byte pairs. This means you should be using the
*W() functions for all I/O, e.g. CreateFileW(). (Note: if you use
CreateFileW(), you can also use the "\\?\" prefix on the filename to avoid
MAX_PATH (260 character) limitations.) If you want to preserve filenames on
Windows, you should be using these functions. If you use the *A() functions
(e.g. CreateFileA()) or use the POSIX libc compatibility layer (e.g.
open()), it is very difficult to preserve exact byte sequences. Further
clouding matters is that values from environment variables, command line
arguments, etc may be in unexpected/different encodings. I can't recall
specifics here. But there are definitely cases where the bytes being
exposed may not match exactly what is stored in NTFS.
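
As an illustration of the prefix trick, a small Python sketch (the helper
is hypothetical and only meaningful on Windows; elsewhere it passes paths
through unchanged):

    import os

    def extended_length(path):
        # The \\?\ prefix tells CreateFileW to skip path normalization and
        # the MAX_PATH (260 character) limit. It must wrap an absolute path.
        if os.name == "nt" and not path.startswith("\\\\?\\"):
            return "\\\\?\\" + os.path.abspath(path)
        return path

    print(extended_length(r"C:\some\deeply\nested\path"))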

In addition to that, there are various normalizations that the operating
system or filesystem may apply to filenames. For example, Windows has
various reserved filenames that the Windows API will disallow (but NTFS can
store if you use the NTFS APIs directly) and MacOS or its filesystem will
silently munge certain Unicode code points (this is a fun one because of
security implications).

In all cases, if a filename originates from something other than the
filesystem, it is probably a good idea to normalize it to Unicode
internally and then spit out UTF-8 (or whatever the most-native API
expects).

Many programming languages paper over these subtle differences, leading to
badness. For example, the preferred path APIs for Python and Rust assume
paths are Unicode (they have their own logic to perform encoding/decoding
behind the scenes and depending on settings, run-time failures or data loss
can occur).

Re: Phabricator/Lando update, November 2017

2017-12-04 Thread Gregory Szorc
On Wed, Nov 29, 2017 at 11:40 AM, Mark Côté  wrote:

> On Wednesday, 29 November 2017 12:43:58 UTC-5, Steve Fink  wrote:
> > On 11/29/2017 08:35 AM, Mark Côté wrote:
> > > I posted an update on Phabricator and Lando to my blog a couple weeks
> ago, but I figured I should share it here too:
> https://mrcote.info/blog/2017/11/17/phabricator-and-lando-november-update/
> > >
> > > There are two important points:
> > >
> > > 1. Our Phabricator instance has been up and running for a few months
> now.  Our team has been using it regularly, as has the NSS team and some
> Firefox devs.  We were hesitant to advertise this too widely in order not
> to create any confusion around the Quantum release, but now that that has
> settled down I am told it should be fine for anyone to start using it.  The
> instance is at https://phabricator.services.mozilla.com/ and there are
> docs at https://moz-conduit.readthedocs.io/en/latest/phabricator-user.html.
> We will have some hands-on training in Austin as well.
> >
> > Where should we file bugs, and what sorts of bugs would be useful right
> now?
> >
> > I see in bugzilla Conduit :: Phabricator (Upstream) and Conduit ::
> > Phabricator Extensions. I don't know what Conduit is. I checked Conduit
> > :: General, but it just says "General Conduit bugs." What terminology do
> > we need to know in order to be able to file bugs and find the right
> > documentation?
> >
> > So for example, "recent commits" on the mozilla-central repo appears to
> > be backwards. It only shows stuff from 2007, and the related pages
> > (History, Graph) are the same. The Graph page, in particular, seems to
> > only allow advancing a page at a time, so there's no way you'd ever get
> > to the tip. But this is totally noncritical functionality right now, so
> > perhaps it would just add friction to file a bunch of obvious bugs?
>
> "Conduit" is the name of the whole system consisting of all our services
> related to code review and landing (and a few other things).  The "Getting
> in Touch" section of our docs has a breakdown of components:
> http://moz-conduit.readthedocs.io/en/latest/phabricator-user.html#getting-in-touch
>
> Huh yeah I didn't notice that about the history & graphs.  That is weird.
> To be honest, Diffusion is not the best source-code viewer for a repo the
> size of mozilla-central.  I would still default to using hg.mozilla.org,
> despite it not having as fancy an interface.  But feel free to file those
> bugs as I am curious why the behaviour is the way it is.  Those would
> belong in Conduit :: Phabricator (Upstream), as we do not intend to heavily
> customize our instance and hence would like to work with upstream on these
> kinds of issues.
>

My understanding is that another user of Phabricator with a Very Large
Repository does not use the Phabricator repository viewer because of
scaling and usability issues. My info may be a bit out of date though. But
I'm inclined to say we should steer people away from Phabricator's
repository UI (for better or worse).

Also, Mercurial has steadily been getting a slew of updates to the HTML
interface. There's a ton coming in the not-yet-released 4.5 release. But
we're still running 4.2 on hg.mozilla.org. You can run `hg serve` to run a
local server, which will get you the modern goodies (and it will be fast
since it avoids network round trips).


Re: Browser Architecture Newsletter 5

2017-11-30 Thread Gregory Szorc


> On Nov 30, 2017, at 04:40, Anne van Kesteren  wrote:
> 
>> On Wed, Nov 29, 2017 at 9:47 PM, Dave Townsend  wrote:
>> If you’re working on improving storage or syncing of data in any of our
>> products or projects, on any platform, or if you’re currently struggling to
>> work with what currently exists, then we’d like to hear from you and align
>> our efforts.
> 
> We have quite a bit of ongoing activity linked from
> https://bugzilla.mozilla.org/show_bug.cgi?id=1147820 focused on APIs
> exposed to web content (HTTP cache, cookies, localStorage, Indexed DB,
> service worker registrations, etc.) and the associated UI exposed to
> users. There should maybe be a discussion at some point if the backend
> can be reconciled somehow as I think it would be beneficial to users
> if at some point some or all of this data could be synchronized in the
> same manner bookmarks are today. (No reason to "install" a web
> application twice or not having access to locally stored documents if
> the user agent can just synchronize the necessary data for you.)

This all sounds reasonable to me.

In addition, I’d also suggest that syncing considerations be taken into account 
as part of web standard design. Having worked on multiple “sync” products 
(including Firefox Sync), I’ve seen what happens when syncing is an 
afterthought. Just like security, you can’t just bolt syncing onto an existing 
product/feature: it needs to be considered as part of the initial design in 
order for the final implementation to be high quality.


Re: Intent to remove client.mk

2017-11-13 Thread Gregory Szorc
On Fri, Oct 27, 2017 at 4:16 PM, Gregory Szorc <g...@mozilla.com> wrote:

> client.mk has existed since 1998 (https://hg.mozilla.org/
> experimental/mozilla-central-cvs/rev/0a5e1fa423bd). Before `mach`,
> client.mk was the recommended way to build Firefox and perform other
> build tasks. client.mk has rarely changed in recent years because the
> build system maintainers have been working around it and because not much
> has changed around how the very early parts of "invoke the build system"
> work.
>
> The build system maintainers are making a significant push towards
> supporting building Firefox without GNU make in this and future quarters.
>
> Since client.mk is a make file and we want to not require make to build
> Firefox, client.mk is incompatible with our vision for the future of the
> Firefox build system. Furthermore, client.mk represents an ancient piece
> of build system infrastructure and its convoluted content creates
> consternation and inhibits forward progress. For these reasons, I'm
> announcing the intent to remove client.mk.
>
> If you have tools, infrastructure, workflows, etc that use client.mk - or
> any direct use of `make` for that matter - please transition them to `mach`
> commands and/or check with a build peer regarding your use case. You can
> file bugs regarding client.mk dependencies against bug 1412398.
>
> Thank you for your understanding.
>

Heads up: the patches to wean off client.mk have started landing. As I'm
touching the code, it is obvious how brittle things are and that there is a
web of under-documented dependencies everywhere I look.

If you see the build system behaving in strange ways - e.g. running
configure twice, complaining about clobbers when it shouldn't, running
moz.build sandbox evaluation when it shouldn't, etc, please don't hesitate
to make noise in #build or file a bug blocking 1412398. Even if we break a
workflow you relied on and there isn't a good equivalent, we'd like to
know. Disrupting people unnecessarily is definitely not a goal of this
refactor.


Re: Intent to require Python 3 to build Firefox 59 and later

2017-11-10 Thread Gregory Szorc
On Fri, Nov 10, 2017 at 7:43 PM, Mike Hommey <m...@glandium.org> wrote:

> On Fri, Nov 10, 2017 at 05:42:01PM -0800, Gregory Szorc wrote:
> > On Fri, Nov 10, 2017 at 4:22 PM, Xidorn Quan <m...@upsuper.org> wrote:
> >
> > > I'm happy hearing this. I would be interested in whether we are going to
> > > drop Python 2 at some point, or are we stuck with that forever?
> > >
> >
> > Let's put it this way: we still have a few uses of Perl in the build
> > system. There have been no meaningful changes to that Perl code in years.
> >
> > We'll likely be using Python 2 for years ahead because there isn't a
> > compelling reason to port code that isn't being actively updated.
> >
> >
> > >
> > > Also I'm curious what modern features are the team looking forward to?
> > >
> >
> > asyncio is huge for performance, especially for I/O bound workloads (both
> > local and network). We have a few of these in critical paths in the build
> > system.
> >
> > Python 3.6 is faster than 2.7 pretty much across the board in everything
> > except for process startup overhead.
> >
> > Python 3 has sensible behavior with regards to not coercing Unicode and
> > bytes types, which means people not using English in their operating
> > system won't experience as many build failures due to us failing to
> > handle Unicode in the build system.
>
> In fairness, those problems wouldn't magically be fixed by using
> python 3. We'd still have to handle the difference between bytes and
> unicode somehow, and the reason there have been these problems is that
> we're failing to do that, and python 3 alone wouldn't solve that.
>

Python 3 refuses to perform automatic type coercion between bytes and
unicode. Unlike Python 2, if you mix types without an explicit encode() or
decode(), Python 3 complains loudly for *all* values, not just values that
can't be converted losslessly using the current default encoding (likely
ascii). In my experience, this catches a lot of "this function isn't
handling types consistently" bugs before they occur. Yes, you can still
have bugs due to e.g. improper encoding/decoding. But at least you can more
easily find and audit where encoding/decoding occurs because automatic type
coercion doesn't occur.
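
To make that concrete, a tiny Python 3 sketch (under Python 2 the first
expression would silently coerce via ASCII and only fail for non-ASCII
values):

    data = b"/tmp/"
    text = "path"

    try:
        data + text
    except TypeError as e:
        # Python 3 refuses for *all* values, not just non-ASCII ones.
        print(e)

    # The fix is an explicit, auditable encode or decode.
    assert data + text.encode("utf-8") == b"/tmp/path"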


Re: Intent to require Python 3 to build Firefox 59 and later

2017-11-10 Thread Gregory Szorc
On Fri, Nov 10, 2017 at 4:22 PM, Xidorn Quan <m...@upsuper.org> wrote:

> I'm happy hearing this. I would be interested in whether we are going to
> drop Python 2 at some point, or are we stuck with that forever?
>

Let's put it this way: we still have a few uses of Perl in the build
system. There have been no meaningful changes to that Perl code in years.

We'll likely be using Python 2 for years ahead because there isn't a
compelling reason to port code that isn't being actively updated.


>
> Also I'm curious what modern features are the team looking forward to?
>

asyncio is huge for performance, especially for I/O bound workloads (both
local and network). We have a few of these in critical paths in the build
system.
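
To illustrate the kind of win, a minimal sketch (asyncio.sleep stands in
for a real network or disk wait; Python 3.5+ syntax):

    import asyncio

    async def fetch(i):
        # Stand-in for an I/O bound operation.
        await asyncio.sleep(0.1)
        return i

    async def main():
        # The ten waits overlap instead of running serially.
        return await asyncio.gather(*(fetch(i) for i in range(10)))

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(main()))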

Python 3.6 is faster than 2.7 pretty much across the board in everything
except for process startup overhead.

Python 3 has sensible behavior with regards to not coercing Unicode and
bytes types, which means people not using English in their operating system
won't experience as many build failures due to us failing to handle Unicode
in the build system.

Type annotations make it easier to hack on large code bases and make IDEs
more useful.

If you do perf hacking in Python, Python 3 also has a slew of new
facilities for memory and CPU profiling.

There's also a ton of random package additions and improvements in the
standard library. And of course tons of small bug fixes.


>
> On Sat, Nov 11, 2017, at 10:27 AM, Gregory Szorc wrote:
> > For reasons outlined at
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1388447#c7, we would like
> to
> > make Python 3 a requirement to build Firefox sometime in the Firefox 59
> > development cycle. (Firefox 59 will be an ESR release.)
> >
> > The requirement will likely be Python 3.5+. Although I would love to make
> > that 3.6 if possible so we can fully harness modern features and
> > performance.
> >
> > I would love to hear feedback - positive or negative - from downstream
> > packagers and users of various operating systems and distributions about
> > this proposal.
> >
> > Please send comments to dev-bui...@lists.mozilla.org or leave them on
> > bug 1388447.


Intent to require Python 3 to build Firefox 59 and later

2017-11-10 Thread Gregory Szorc
For reasons outlined at
https://bugzilla.mozilla.org/show_bug.cgi?id=1388447#c7, we would like to
make Python 3 a requirement to build Firefox sometime in the Firefox 59
development cycle. (Firefox 59 will be an ESR release.)

The requirement will likely be Python 3.5+. Although I would love to make
that 3.6 if possible so we can fully harness modern features and
performance.

I would love to hear feedback - positive or negative - from downstream
packagers and users of various operating systems and distributions about
this proposal.

Please send comments to dev-bui...@lists.mozilla.org or leave them on bug
1388447.


Re: Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-08 Thread Gregory Szorc
This thread is good feedback. I think changing the default to a 1TB SSD is
a reasonable request.

Please send any future comments regarding hardware to Sophana (
s...@mozilla.com) to increase the chances that feedback is acted on.

On Wed, Nov 8, 2017 at 9:09 AM, Julian Seward  wrote:

> On 08/11/17 17:28, Boris Zbarsky wrote:
>
> > The last desktop I was shipped came with a 512 GB drive.  [..]
> >
> > In practice, I routinely run out of disk space and have to delete
> > objdirs and rebuild them the next day, because I have to build
> > something else in a different srcdir...
>
> I totally agree.  I had a machine with a 512GB SSD and wound up in the
> same endless juggle/compress/delete-and-rebuild game.  I got a new machine
> with a 512GB SSD *and* a 1T HDD, and that helps a lot, although the perf
> hit from the HDD especially when linking libxul is terrible.
>
> J


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Gregory Szorc


> On Nov 6, 2017, at 05:19, Gabriele Svelto  wrote:
> 
>> On 04/11/2017 01:10, Jeff Gilbert wrote:
>> Clock speed and core count matter much more than ECC. I wouldn't chase
>> ECC support for general dev machines.
> 
> The Xeon-W SKUs I posted in the previous thread all had identical or
> higher clock speeds than equivalent Core i9 SKUs and ECC support with
> the sole exception of the i9-7980XE which has slightly higher (100MHz)
> peak turbo clock than the Xeon W-2195.
> 
> There is IMHO no performance-related reason to skimp on ECC support
> especially for machines that will sport a significant amount of memory.
> 
> Importance of ECC memory is IMHO underestimated, mostly because it's not
> common and thus users do not realize how frequently they may be hitting
> memory errors. My main workstation is now 5 years
> old and has accumulated 24 memory errors; that may not seem like much,
> but if they hit at a bad time, or in a bad place, they can ruin your day
> or permanently corrupt your data.
> 
> As another example of ECC importance my laptop (obviously) doesn't have
> ECC support and two years ago had a single bit that went bad in the
> second DIMM. The issue manifested itself as internal compiler errors
> while building Fennec. The first time I just pulled again from central
> thinking it was a fluke, the second I updated the build dependencies
> which I hadn't done in a while thinking that an old GCC might have been
> the cause. It was not until the third day with a failure that I realized
> what was happening. A 2-hour long memory test showed me the second DIMM
> was bad so I removed it, ordered a new one and went on to check my
> machine. I had to purge my compilation cache because garbage had
> accumulated in there, run an hg verify on my repo as well as verifying
> all the installed packages for errors. Since I didn't have access to my
> main workstation at the time I had wasted 3 days chasing the issue and
> my workflow was slowed down by a cold compilation cache and a gimped
> machine (until I could replace the DIMM).
> 
> This is not common, but it's not rare either and we now have hundreds of
> developers within Mozilla so people are going to run into issues that
> can be easily prevented by having ECC memory.
> 
> That being said ECC memory also makes machines less susceptible to
> Rowhammer-like attacks and makes them detectable while they are happening.
> 
> For a more in-depth reading on the matter I suggest reading "Memory
> Errors in Modern Systems - The Good, The Bad, and The Ugly" [1] in which
> the authors analyze memory errors on live systems over two years and
> argue that SEC-DED ECC (the type of protection you usually get on
> workstations) is often insufficient and even chipkill ECC (now common on
> most servers) is not enough to catch all errors happening during real
> world use.
> 
> Gabriele
> 
> [1] https://www.cs.virginia.edu/~gurumurthi/papers/asplos15.pdf
> 

The Xeon-W’s are basically the i9’s (both Skylake-X) with support for ECC,
vPro, and AMT. The Xeon-W’s lack Turbo 3.0 (preferred core). However, Turbo 2.0
apparently reaches the same MHz, so I don’t think it matters much. There are
some other differences with regards to PCIe lanes, chipset, etc.

Another big difference is price. The Xeon’s cost a lot more.

For building Firefox, the i9’s and Xeon-W are probably very similar (and that is
something we should test). It likely comes down to whether you want to pay a
premium for ECC and other Xeon-W features. I’m not in a position to answer that.


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-02 Thread Gregory Szorc
On Thu, Nov 2, 2017 at 3:43 PM, Nico Grunbaum  wrote:

> For rr I have an i7 desktop with a base clock of 4.0 GHz, and for building
> I use icecc to distribute the load (or rather I will be again when bug
> 1412240[0] is closed).  The i9 series has lower base clocks (2.8 GHz, and
> 2.6 GHz for the top SKUs)[1], but high boost clocks of 4.2 GHz.  If I were
> to switch over to an i9 for everything, would I see a notable difference in
> performance in rr?
>

Which i7? You should get better CPU efficiency with newer
microarchitectures. The i9's we're talking about are based on Skylake-X,
which derives from Skylake (the i7-6XXX models in the consumer line). It
isn't enough to compare MHz: you also need to consider microarchitecture,
memory, and workload.

https://arstechnica.com/gadgets/2017/09/intel-core-i9-7960x-review/2/ has
some single-threaded benchmarks. The i7-7700K (Kaby Lake) seems to "win"
for single-threaded performance. But the i9's aren't far behind. Not far
enough behind to cancel out the benefits of the extra cores IMO.

This is because the i9's are pretty aggressive about using turbo. More
aggressive than the Xeons. As long as cooling can keep up, the top-end GHz
is great and you aren't sacrificing that much perf to have more cores on
die. You can counter by arguing that the consumer-grade i7's can yield more
speedups via overclocking. But for enterprise uses, having this all built
into the chip so it "just works" without voiding warranty is a nice trait :)

FWIW, the choice to go with Xeons always bothered me because we had to make
an explicit clock vs core trade-off. Building Firefox requires both many
cores for compiling and fast cores for linking. Since the i9's turbo so
well, we get the best of both worlds. And at a much lower price. Aside from
the loss of ECC, it is a pretty easy decision to switch.


> -Nico
>
> [0] https://bugzilla.mozilla.org/show_bug.cgi?id=1412240 Build failure in
> libavutil (missing atomic definitions), when building with clang and icecc
>
> [1] https://ark.intel.com/products/series/123588/Intel-Core-X-
> series-Processors
>
> On 10/27/17 7:50 PM, Robert O'Callahan wrote:
>
>> BTW can someone forward this entire thread to their friends at AMD so AMD
>> will fix their CPUs to run rr? They're tantalizingly close :-/.
>>
>> Rob
>>
>


Re: Mapping source files to Bugzilla components

2017-10-31 Thread Gregory Szorc
On Tue, Oct 24, 2017 at 11:59 AM, Gregory Szorc <g...@mozilla.com> wrote:

> Freshly landed in bug 1411343, CI now produces JSON files containing info
> about how source files map to Bugzilla components.
>
> You'll notice a new "Bugzilla" task in Treeherder. It will write some
> artifacts to the "source/source-bugzilla-info" TC Index route. e.g.
> https://tools.taskcluster.net/index/artifacts/gecko.v2.autoland.pushlog-id.53952.source/source-bugzilla-info.
>
> Available artifacts include:
>
> * A JSON and gzipped JSON file mapping source files to Bugzilla
> components. e.g.
> https://index.taskcluster.net/v1/task/gecko.v2.autoland.pushlog-id.53952.source.source-bugzilla-info/artifacts/public/components.json
> * A JSON and gzipped JSON file listing source files without Bugzilla
> components. e.g.
> https://public-artifacts.taskcluster.net/c2fvXmqEQLy12DsWpXyKyg/0/public/missing.json
>
> The task basically runs various `mach file-info` sub-commands with
> --format=json (a new feature). This is all powered by moz.build Files()
> metadata, which jmaher and others have been curating for the past several
> months.
>
> The JSON files can be a bit large. So I'm not sure if it makes sense to
> actively query them for lookups. But it would certainly be possible to
> import the JSON files into other services, local SQLite databases, to
> facilitate querying.
>
> Given how difficult it is to run the moz.build evaluation feature on
> hg.mozilla.org
> (https://mozilla-version-control-tools.readthedocs.io/en/latest/hgmo/mozbuildinfo.html),
> I'm really tempted to deprecate that
> service or have it powered by these new JSON files. If you are using this
> service or need to obtain metadata from source files, let me know so I can
> figure out a path forward.
>

Quick update...

The CI task now fails if a file under source control does not have a bug
component defined in moz.build metadata. This could potentially bite people
adding new files to the repo or moving files around. But this is a good
thing: we want every file in the repo to be associated with a bug component
so we know where to file bugs. Adding missing metadata is simple: just
search for BUG_COMPONENT in moz.build files for examples.
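
For reference, the metadata looks roughly like this in a moz.build file
(moz.build files are Python; the component shown is just an example):

    with Files("**"):
        BUG_COMPONENT = ("Core", "Build Config")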

Also, we now produce a smaller "components-normalized.json" file in
addition to the original "components.json" file. e.g.
https://public-artifacts.taskcluster.net/JDLSmj2XQwmfgRpHUUKFTg/0/public/components-normalized.json.
This /might/ be fast/small enough to load on demand.
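
As a sketch of the "import into SQLite" idea mentioned above (hypothetical:
it assumes you have downloaded components.json and that it maps component
names to lists of file paths - check the artifact's actual schema first;
the path in the final lookup is just an example):

    import json
    import sqlite3

    with open("components.json") as fh:
        components = json.load(fh)

    db = sqlite3.connect("components.db")
    db.execute("CREATE TABLE IF NOT EXISTS files "
               "(path TEXT PRIMARY KEY, component TEXT)")
    for component, paths in components.items():
        db.executemany("INSERT OR REPLACE INTO files VALUES (?, ?)",
                       ((path, component) for path in paths))
    db.commit()

    # Point lookups are now cheap.
    print(db.execute("SELECT component FROM files WHERE path = ?",
                     ("dom/base/moz.build",)).fetchone())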


Re: Default Rust optimization level decreased from 2 to 1

2017-10-31 Thread Gregory Szorc
On Tue, Oct 31, 2017 at 12:02 PM, Jeff Muizelaar <jmuizel...@mozilla.com>
wrote:

> As another piece of evidence in support of opt-level=1 being the wrong
> default, Glenn also got bitten profiling with the wrong options.
> https://github.com/servo/webrender/issues/1817#issuecomment-340553613


It is "wrong" for the set of "people performing profiling." This set is
different from "people compiling Gecko." Which is different from "people
who actually need to compile Gecko." What I'm trying is the new default is
"not wrong" for a large set of people who aren't "people performing
profiling."

My opinion is that this class of failure is a UX failure revolving around
the question of "am I building the thing that should be used for X?" The
build workflow does a horrible job of asking this critical question. The
closest we have is `mach bootstrap` asking if you are building {Firefox,
Fennec} {with, without} artifact builds. And we don't do a good job of
transitioning between `mach bootstrap` or forcing people to run it at all.
So we essentially have nothing forcing people to answer a question that has
critical implications for their ability to adequately develop Firefox.

I have ideas for solving this problem. But to do it in a way that results
in fewer people falling through "my build configuration is sub-optimal and
I didn't know it" cracks, we need to eliminate the concept of a "default
build configuration" and require people to specify a build configuration
(by creating a mozconfig file, etc). This is because the reality is that
there is no single "default build configuration" that is ideal for
everyone. There are more details and some discussion in bug 1410262. I'd
rather we morph that bug as needed than discuss on this mailing list. I'm
happy to discuss on this mailing list eventually: I just want it to be
after we have general agreement on a concrete proposal.


>
>
> On Thu, Oct 26, 2017 at 2:51 PM, Jeff Muizelaar <jmuizel...@mozilla.com>
> wrote:
> > FWIW, WebRender becomes unusable at opt-level=1. It also looks like style
> > performance takes quite a hit as well which means that our default
> > developer builds become unusable for performance work. I worry that
> > people will forget this and end up rediscovering only when they look
> > at profiles (as mstange just did). What's the use case for a
> > --enable-optimize, opt-level=1 build?
> >
> > -Jeff
> >
> > On Wed, Oct 25, 2017 at 1:34 PM, Gregory Szorc <g...@mozilla.com> wrote:
> >> Compiling Rust code with optimizations is significantly slower than
> >> compiling without optimizations. As was measured in bug 1411081, the
> >> difference between rustc's -Copt-level 1 and 2 on an i7-6700K (4+4
> cores)
> >> for a recent revision of mozilla-central was 325s/802s wall/CPU versus
> >> 625s/1282s. This made Rust compilation during Firefox builds stand out
> as a
> >> long pole and significantly slowed down builds.
> >>
> >> Because we couldn't justify the benefits of level 2 for the build time
> >> overhead it added, we've changed the build system default so Rust is
> >> compiled with -Copt-level=1 (instead of 2).
> >>
> >> Adding --enable-release to your mozconfig (the configuration for builds
> we
> >> ship to users) enables -Copt-level=2. (i.e. we didn't change
> optimization
> >> settings for builds we ship to users.)
> >>
> >> Adding --disable-optimize sets to -Copt-level=0. (This behavior is
> >> unchanged.)
> >>
> >> If you want explicit control over -Copt-level, you can `export
> >> RUSTC_OPT_LEVEL=` in your mozconfig and that value will always be
> >> used. --enable-release implies a number of other changes. So if you just
> >> want to restore the old build system behavior, set this variable in your
> >> mozconfig.
> >>
> >> Also, due to ongoing work around Rust integration in the build system,
> it
> >> is dangerous to rely on manual `cargo` invocations to compile Rust
> because
> >> bypassing the build system (not using `mach build`) may not use the same
> >> set of RUSTFLAGS that direct `cargo` invocations do. Things were mostly
> in
> >> sync before. But this change and anticipated future changes will cause
> more
> >> drift. If you want the correct behavior, use `mach`.


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-27 Thread Gregory Szorc
Yeah. Only the Xeons and ThreadRipper (as our potential high core count
machines) support ECC. rr, ECC, or reasonable costs: pick at most two :/

On Fri, Oct 27, 2017 at 4:08 PM, Sophana "Soap" Aik <s...@mozilla.com>
wrote:

> Thanks Gabriele, that poses a problem then for the system build we have in
> mind here as the i9's do not support ECC memory. That may have to be a
> separate system with a Xeon.
>
> On Fri, Oct 27, 2017 at 3:58 PM, Gabriele Svelto <gsve...@mozilla.com>
> wrote:
>
>> On 27/10/2017 01:02, Gregory Szorc wrote:
>> > Sophana (CCd) is working on a new system build right now. It will be
>> based
>> > on the i9's instead of dual socket Xeons and should be faster and
>> cheaper.
>>
> >> ... and lacking ECC memory. Please, whatever CPU is chosen, make sure it
>> has ECC support and the machine comes loaded with ECC memory. Developer
>> boxes usually ship with plenty of memory, and they can stay on for days
>> without a reboot churning at builds and tests. Memory errors happen and
>> they can ruin days of work if they hit you at the wrong time.
>>
>>  Gabriele
>>
>>
>>
>
>
> --
> moz://a
> Sophana "Soap" Aik
> IT Vendor Management Analyst
> IRC/Slack: soap
>


Intent to remove client.mk

2017-10-27 Thread Gregory Szorc
client.mk has existed since 1998 (
https://hg.mozilla.org/experimental/mozilla-central-cvs/rev/0a5e1fa423bd).
Before `mach`, client.mk was the recommended way to build Firefox and
perform other build tasks. client.mk has rarely changed in recent years
because the build system maintainers have been working around it and
because not much has changed around how the very early parts of "invoke the
build system" work.

The build system maintainers are making a significant push towards
supporting building Firefox without GNU make in this and future quarters.

Since client.mk is a make file and we want to not require make to build
Firefox, client.mk is incompatible with our vision for the future of the
Firefox build system. Furthermore, client.mk represents an ancient piece of
build system infrastructure and its convoluted content creates
consternation and inhibits forward progress. For these reasons, I'm
announcing the intent to remove client.mk.

If you have tools, infrastructure, workflows, etc that use client.mk - or
any direct use of `make` for that matter - please transition them to `mach`
commands and/or check with a build peer regarding your use case. You can
file bugs regarding client.mk dependencies against bug 1412398.

Thank you for your understanding.


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Gregory Szorc
On Thu, Oct 26, 2017 at 4:31 PM, Mike Hommey <m...@glandium.org> wrote:

> On Thu, Oct 26, 2017 at 04:02:20PM -0700, Gregory Szorc wrote:
> > Also, the machines come with Windows by default. That's by design: that's
> > where the bulk of Firefox users are. We will develop better products if
> the
> > machines we use every day resemble what actual users use. I would
> encourage
> > developers to keep Windows on the new machines when they are issued.
>
> Except actual users are not using i9s or dual xeons. Yes, we have
> slower reference hardware, but that also makes the argument of using the
> same thing as actual users less relevant: you can't develop on machines
> that actually look like what users have. So, as long as you have the
> slower reference hardware to test, it doesn't seem to me it should
> matter what OS you're running on your development machine.


Host OS matters for finding UI bugs and issues with add-ons (since lots of
add-on developers are also on Linux or MacOS).

I concede that performance testing on i9s and Xeons is not at all
indicative of the typical user :)


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Gregory Szorc
On Thu, Oct 26, 2017 at 6:34 AM, Henri Sivonen  wrote:

> On Thu, Oct 26, 2017 at 9:15 AM, Henri Sivonen 
> wrote:
> > There's a huge downside, though:
> > If the screen stops consuming the DisplayPort data stream, the
> > graphical session gets killed! So if you do normal things like turn
> > the screen off or switch input on a multi-input screen, your graphical
> > session is no longer there when you come back and you get a login
> > screen instead! (I haven't yet formed an opinion on whether this
> > behavior can be lived with or not.)
>
> And the downsides don't even end there. rr didn't work. Plus other
> stuff not worth mentioning here.
>
> I guess going back to 16.04.1 is a better deal than 17.10.
>
> > P.S. It would be good for productivity if Mozilla issued slightly less
> > cutting-edge Nvidia GPUs to developers to increase the probability
> > that support in nouveau has had time to bake.
>
> This Mozilla-issued Quadro M2000 has been a very significant harm to
> my productivity. Considering how good rr is, I think it makes sense to
> continue to run Linux to develop Firefox. However, I think it doesn't
> make sense to issue fancy cutting-edge Nvidia GPUs to developers who
> aren't specifically working on Nvidia-specific bugs and, instead, it
> would make sense to issue GPUs that are boring as possible in terms of
> Linux driver support (i.e. Just Works with distro-bundled Free
> Software drivers). Going forward, perhaps Mozilla could issue AMD GPUs
> with computers that don't have Intel GPUs?
>
> As for the computer at hand, I want to put an end to this Nvidia
> obstacle to getting stuff done. It's been suggested to me that Radeon
> RX 560 would be well supported by distro-provided drivers, but the
> "*2" footnote at https://help.ubuntu.com/community/AMDGPU-Driver
> doesn't look too good. Based on that table it seems one should get
> Radeon RX 460. Is this the correct conclusion? Does Radeon RX 460 Just
> Work with Ubuntu 16.04? Is Radeon RX 460 going to be
> WebRender-compatible?
>

Sophana (CCd) is working on a new system build right now. It will be based
on the i9's instead of dual socket Xeons and should be faster and cheaper.
We can all thank AMD for introducing competition in the CPU market to
enable this to happen :)

I also share your desire to not issue fancy video cards in these machines
by default. If there are suggestions for a default video card, now is the
time to make noise :)

Also, the machines come with Windows by default. That's by design: that's
where the bulk of Firefox users are. We will develop better products if the
machines we use every day resemble what actual users use. I would encourage
developers to keep Windows on the new machines when they are issued.

I concede that developing Firefox on Linux is better than on Windows for a
myriad of reasons. However, that doesn't mean you have to forego Linux. I
use Hyper-V under Windows 10 to run Linux. I do most of my development
(editors, builds, etc) in that local Linux VM. I use an X server for
connecting to graphical Linux applications. The overhead of Hyper-V as
compared to native Linux is negligible. Unless I need fast graphics in
Linux (which is rare), I pretty much get the advantages of Windows *and*
Linux simultaneously. Unless you have requirements that prohibit using a
VM, I encourage using this setup.


Re: Default Rust optimization level decreased from 2 to 1

2017-10-26 Thread Gregory Szorc
On Thu, Oct 26, 2017 at 11:51 AM, Jeff Muizelaar <jmuizel...@mozilla.com>
wrote:

> FWIW, WebRender becomes unusable at opt-level=1. It also looks like style
> performance takes quite a hit as well which means that our default
> developer builds become unusable for performance work. I worry that
> people will forget this and end up rediscovering only when they look
> at profiles (as mstange just did). What's the use case for a
> --enable-optimize, opt-level=1 build?
>

Yes, this was a painful trade-off. I acknowledge we still may not have the
proper set of defaults or tweakable options in play.

Currently, --enable-optimize is the default and it is tied to both C/C++
and Rust (Rust inherited the option).

Would it help if we had a separate --enable-optimize-rust (or similar)
option to control Rust optimizations so we have separate knobs? If we did
that, --disable-optimize-rust could be opt-level 0 or 1 and
--enable-optimize-rust could be opt-level=2. The local defaults would
probably be --enable-optimize/--disable-optimize-rust (mirroring today).

I'm not sure if it is possible, but per-crate optimization levels might
help. Although, the data shows that the style crate is one of the slowest
to compile. And, this crate's optimization is also apparently very
important for accurate performance testing. That's a really unfortunate
conflict to have and it would be terrific if we could make the style crate
compile faster so this conflict goes away. I've filed bug 1412077 to track
improvements here.


>
> On Wed, Oct 25, 2017 at 1:34 PM, Gregory Szorc <g...@mozilla.com> wrote:
> > Compiling Rust code with optimizations is significantly slower than
> > compiling without optimizations. As was measured in bug 1411081, the
> > difference between rustc's -Copt-level 1 and 2 on an i7-6700K (4+4 cores)
> > for a recent revision of mozilla-central was 325s/802s wall/CPU versus
> > 625s/1282s. This made Rust compilation during Firefox builds stand out
> as a
> > long pole and significantly slowed down builds.
> >
> > Because we couldn't justify the benefits of level 2 for the build time
> > overhead it added, we've changed the build system default so Rust is
> > compiled with -Copt-level=1 (instead of 2).
> >
> > Adding --enable-release to your mozconfig (the configuration for builds
> we
> > ship to users) enables -Copt-level=2. (i.e. we didn't change optimization
> > settings for builds we ship to users.)
> >
> > Adding --disable-optimize sets to -Copt-level=0. (This behavior is
> > unchanged.)
> >
> > If you want explicit control over -Copt-level, you can `export
> > RUSTC_OPT_LEVEL=` in your mozconfig and that value will always be
> > used. --enable-release implies a number of other changes. So if you just
> > want to restore the old build system behavior, set this variable in your
> > mozconfig.
> >
> > Also, due to ongoing work around Rust integration in the build system, it
> > is dangerous to rely on manual `cargo` invocations to compile Rust
> because
> > bypassing the build system (not using `mach build`) may not use the same
> > set of RUSTFLAGS that direct `cargo` invocations do. Things were mostly
> in
> > sync before. But this change and anticipated future changes will cause
> more
> > drift. If you want the correct behavior, use `mach`.


Re: Default Rust optimization level decreased from 2 to 1

2017-10-26 Thread Gregory Szorc
On Wed, Oct 25, 2017 at 10:34 AM, Gregory Szorc <g...@mozilla.com> wrote:

> Compiling Rust code with optimizations is significantly slower than
> compiling without optimizations. As was measured in bug 1411081, the
> difference between rustc's -Copt-level 1 and 2 on an i7-6700K (4+4 cores)
> for a recent revision of mozilla-central was 325s/802s wall/CPU versus
> 625s/1282s. This made Rust compilation during Firefox builds stand out as a
> long pole and significantly slowed down builds.
>
> Because we couldn't justify the benefits of level 2 for the build time
> overhead it added, we've changed the build system default so Rust is
> compiled with -Copt-level=1 (instead of 2).
>
> Adding --enable-release to your mozconfig (the configuration for builds we
> ship to users) enables -Copt-level=2. (i.e. we didn't change optimization
> settings for builds we ship to users.)
>
> Adding --disable-optimize sets to -Copt-level=0. (This behavior is
> unchanged.)
>
> If you want explicit control over -Copt-level, you can `export
> RUSTC_OPT_LEVEL=` in your mozconfig and that value will always be
> used. --enable-release implies a number of other changes. So if you just
> want to restore the old build system behavior, set this variable in your
> mozconfig.
>
> Also, due to ongoing work around Rust integration in the build system, it
> is dangerous to rely on manual `cargo` invocations to compile Rust because
> bypassing the build system (not using `mach build`) may not use the same
> set of RUSTFLAGS that direct `cargo` invocations do. Things were mostly in
> sync before. But this change and anticipated future changes will cause more
> drift. If you want the correct behavior, use `mach`.
>

Heads up: the code change uncovered a subtle bug in sccache's argument
parser. If you see "error: codegen option `opt-level` requires a string (C
opt-level=)", you've hit it.

Until the fix merges, the workaround is to disable sccache or graft
https://hg.mozilla.org/integration/autoland/rev/bc103ec9d0b3.

Sorry for any disruption. Caching and argument parsing are hard :/


Re: Default Rust optimization level decreased from 2 to 1

2017-10-25 Thread Gregory Szorc
On Wed, Oct 25, 2017 at 11:50 AM, Emilio Cobos Álvarez <emi...@crisal.io>
wrote:

>
>
> On 10/25/2017 07:59 PM, Boris Zbarsky wrote:
> > On 10/25/17 1:34 PM, Gregory Szorc wrote:
> >> Also, due to ongoing work around Rust integration in the build system,
> it
> >> is dangerous to rely on manual `cargo` invocations to compile Rust
> >> because
> >> bypassing the build system (not using `mach build`) may not use the same
> >> set of RUSTFLAGS that direct `cargo` invocations do. Things were
> >> mostly in
> >> sync before. But this change and anticipated future changes will cause
> >> more
> >> drift. If you want the correct behavior, use `mach`.
> >
> > Is something like "mach cargo-geckolib check" from within the servo/
> > directory still expected to work sanely?
>
> That's from the servo/ tree, right?
>
> From mozilla-central I've been using ./mach cargo check gkrust, and I
> certainly expect it to keep working.
>

There is a `mach cargo` defined in both
python/mozbuild/mozbuild/mach_commands.py and
servo/python/servo/mach_commands.py. The former is accessed via `mach` and
the latter via `servo/mach`.

mozilla-central's `mach cargo` goes through the Firefox build system to
invoke cargo and appears to do the right thing.


Default Rust optimization level decreased from 2 to 1

2017-10-25 Thread Gregory Szorc
Compiling Rust code with optimizations is significantly slower than
compiling without optimizations. As was measured in bug 1411081, the
difference between rustc's -Copt-level 1 and 2 on an i7-6700K (4+4 cores)
for a recent revision of mozilla-central was 325s/802s wall/CPU versus
625s/1282s. This made Rust compilation during Firefox builds stand out as a
long pole and significantly slowed down builds.

Because we couldn't justify the benefits of level 2 for the build time
overhead it added, we've changed the build system default so Rust is
compiled with -Copt-level=1 (instead of 2).

Adding --enable-release to your mozconfig (the configuration for builds we
ship to users) enables -Copt-level=2. (i.e. we didn't change optimization
settings for builds we ship to users.)

Adding --disable-optimize sets -Copt-level=0. (This behavior is
unchanged.)

If you want explicit control over -Copt-level, you can `export
RUSTC_OPT_LEVEL=<level>` in your mozconfig and that value will always be
used. --enable-release implies a number of other changes. So if you just
want to restore the old build system behavior, set this variable in your
mozconfig.
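
As a concrete sketch, a mozconfig that restores the old default Rust
optimization level (the value 2 is illustrative; any level rustc accepts
should work):

  # mozconfig
  ac_add_options --enable-application=browser
  # Pin Rust optimization back to 2 without pulling in --enable-release.
  export RUSTC_OPT_LEVEL=2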

Also, due to ongoing work around Rust integration in the build system, it
is dangerous to rely on manual `cargo` invocations to compile Rust because
bypassing the build system (not using `mach build`) may not use the same
set of RUSTFLAGS that direct `cargo` invocations do. Things were mostly in
sync before. But this change and anticipated future changes will cause more
drift. If you want the correct behavior, use `mach`.


Re: Reviews for in-tree documentation (was: Builds docs on MDN)

2017-10-19 Thread Gregory Szorc
On Thu, Oct 19, 2017 at 6:48 PM, Daniel Veditz  wrote:

> On Thu, Oct 19, 2017 at 9:30 AM, smaug  wrote:
>
> > (Hoping the r=documentation flag won't be misused ;))
>
>
> ​I hope there will be some kind of hook making sure files touched in that
> manner are all actually documentation files and not other parts of the
> repo.
>

Of course. I don't intend to create a backdoor for making changes to
Firefox :)

The authnz part of allowing contributions that aren't
`hg push ssh://hg.mozilla.org/` is hard to do right. We're very cognizant
about looping in
members of jbryner's team when we design and implement changes. That will
definitely happen here at some point.


Re: Reviews for in-tree documentation (was: Builds docs on MDN)

2017-10-19 Thread Gregory Szorc
On Thu, Oct 19, 2017 at 5:37 PM, Andreas Tolfsen  wrote:

> Some time ago there was a discussion on dev-builds@ regarding
> the state of our in-tree source code documentation.  The main
> focus was that MDN, moving forward, will mainly revolve around web
> platform documentation and would actively start de-emphasising Gecko
> contribution docs.
>
> Now, that discussion paints the backdrop for this new thread, but it
> is well worth reading on its own and had a lot of good ideas in it
> that never materialised:
>
> https://groups.google.com/d/msg/mozilla.dev.builds/cp4bJ1QJXTE/Xqy_nHV5DAAJ
>
> The reality four months on is that more documentation than ever
> lives in the tree, and there is a sentiment that imposing the
> same rigorous peer review process we have for source code on
> documentation changes is overkill.
>
> bz made a modest proposal that documentation changes should not
> require bugs or reviews, and that they could be annotated with a
> special review flag to pass pre-receive hooks.  I’m including his
> original email below.
>
> If we still feel this is a good idea I would like to know what steps
> to take next to make that policy.
>

I am planning on implementing this. I'll probably track it off bug 1395763
somewhere. The timetable with Phabricator may hold up aspects of it a bit.


>
> -- >8 --
> From: Boris Zbarsky 
> Date: June 16, 2017 15:40
> Subject: Re: Builds docs on MDN
> To: dev-bui...@lists.mozilla.org
>
> On 6/16/17 9:33 AM, Ehsan Akhgari wrote:
>
>> I certainly feel like the barrier for filing bugs, creating a
>> patch, figuring out how to use readthedocs infrastructure, getting
>> reviews, etc. isn't really worth it
>>
>
> I believe we should not require filing bugs, reviews, or any of
> that for in-tree docs.  Just edit the doc, commit, push.  Add
> "r=documentation" if needed to placate hooks.  Just because it's
> in-tree doesn't mean it needs to use the whole heavyweight process.
> And if we can make these things auto-DONTBUILD, that's even better,
> of course.
>
> I agree it's still slower than a wiki. :(
>


Re: We need better canaries for JS code

2017-10-18 Thread Gregory Szorc
On Wed, Oct 18, 2017 at 10:28 AM, David Teller  wrote:

> Hi everyone,
>
>   Yesterday, Nightly was broken on Linux and MacOS because of a typo in
> JS code [1]. If I understand correctly, this triggered the usual
> "undefined is not a function", which was
>
> 1/ uncaught during testing, as these things often are;
> 2/ basically impossible to diagnose in the wild, because there was no
> error message of any kind.
>
> I remember that we had bugs of this kind lurking for years in our
> codebase, in code that was triggered daily and that everybody believed
> to be tested.
>
> I'd like to think that there is a better way to handle these bugs,
> without waiting for them to explode in our user's face. Opening this
> thread to see if we can find a way to somehow "solve" these bugs, either
> by making them impossible, or by making them much easier to solve.
>
> I have one proposal. Could we change the behavior of the JS VM as follows?
>
> - The changes affect only Nightly.
> - The changes affect only mozilla chrome code (including system add-ons
> but not user add-ons or test code).
> - Any programmer error (e.g. SyntaxError) causes a crash that
> displays (and attaches to the CrashReporter) both the JS stack and
> the native stack.
> - Any SyntaxError is a programmer error.
> - Any TypeError is a programmer error.
>
> I expect that this will find a number of lurking errors, so we may want
> to migrate code progressively, using a directive, say "use strict
> moz-platform" and static analysis to help encourage using this directive.
>
> What do you think?
>

I agree that errors like this should have better visibility in order to
help catch bugs.

I'm not sure changing behavior of the JS VM is the proper layer to
accomplish this. I think reporting messages from the JS console is a better
place to start. We could change the test harnesses to fail tests if certain
errors (like SyntaxError or TypeError) are raised. If there is a "hook" in
the JS VM to catch said errors at error time, we could have the test
harnesses run Firefox in a mode that makes said errors more fatal (even
potentially crashing as you suggest) and/or include additional metadata,
such as stacks.

Another idea would be to require all non-log output in the JS console to be
accounted for. Kinda like reftest's expected assertion count. Assuming most
JS errors/warnings are unwanted, this would allow us to fail all tests
reporting JS errors/warnings while allowing wanted/known failures to not
fail the test. A problem though is background services "randomly" injecting
their output during test execution depending on non-deterministic timing.
It could be difficult to roll this out in practice. But it feels like we
should be able to filter out messages or stacks accordingly.

But why stop there? Shouldn't Firefox installs in the wild report JS errors
and warnings in Firefox code back to Mozilla (like we do crashes)? I know
this has been discussed. I'm not sure what we're doing/planning about it
though.


Re: Intent to Enable: Automated Static Analysis feedback in MozReview

2017-10-17 Thread Gregory Szorc
On Tue, Oct 17, 2017 at 9:58 PM,  wrote:

> This is awesome! As an engineer who has to work with C++ until we advance
> Rust bindings, I always feel terrible when my reviewers spend their
> precious time identifying simple C++ errors in my code.
>
>
> Seeing the advancements in static analysis for C++, rustfmt and eslint for
> JS, I'm wondering if there's a place to collect a less strict "best
> practices" analysis - more similar to rust's clippy than fmt.
>
> In Intl/L10n land, we have a bunch of recommendations that are very hard
> to enforce since they spread between JS, C++ and soon Rust regarding
> language selection, manipulation, testing of intl output etc.
> I'm wondering if there's a place to get those kind of "automated feedback"
> patterns.
> A few examples of what I have on my mind:
>
>  - We'd like people to not use "general.useragent.locale" to manipulate
> app locale anymore, but rather use Services.locale.getRequestedLocales/
> setRequestedLocales.
>  - We'd like to make sure people don't write unit tests that test
> particular output out of Intl APIs (that makes our tests locked to only
> work in one language and break every time we update our datasets - that's a
> big no-no in the intl world)
>  - We'd like to discourage people from altering app locales, and rather
> test against updated required locales.
>  - Soon we'll want to recommend using DOMLocalization.jsm/Localization.jsm
> API over StringBundle/DTD.
>
> Those things can be encoded as regexps, but they span across programming
> languages (XUL, XHTML, HTML, XBL, DTD, JS, C++).
>
> Is there any mozilla-clippy project being planned? :)
>

We have mozlint (
https://firefox-source-docs.mozilla.org/tools/lint/index.html), which is a
mini framework of sorts for integrating various linters into a common
frontend (`mach lint`). Many of these lints run in CI. It is relatively
easy to add new linters (as the docs detail).
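
As a purely illustrative sketch, a regex-based check of the sort mozlint
supports might be declared roughly like this (the file name, keys, and
pattern are all assumptions on my part; see the mozlint docs for the real
format):

  # tools/lint/useragent-locale.lint (hypothetical)
  LINTER = {
      'name': 'useragent-locale',
      'description': 'Use Services.locale instead of general.useragent.locale',
      'include': ['browser', 'toolkit'],
      'extensions': ['js', 'jsm', 'cpp'],
      'type': 'regex',
      'payload': r'general\.useragent\.locale',
  }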

Feel free to supplement existing linters with custom checks or write your
own new linter if appropriate. ahal is the mozlint guru if you have
questions.


Re: Coding style: Argument alignment

2017-10-01 Thread Gregory Szorc
On Wed, Sep 27, 2017 at 11:58 AM, Gregory Szorc <g...@mozilla.com> wrote:

> On Tue, Sep 26, 2017 at 6:45 PM, Nicholas Hurley <hur...@mozilla.com>
> wrote:
>
>> So I'm going to chime in on a formatting discussion for probably the
>> second
>> time in my life as a mozillian. (It's apparently that important to me.)
>>
>> On Tue, Sep 26, 2017 at 9:25 AM, Mats Palmgren <m...@mozilla.com> wrote:
>>
>> > 2. touching more lines than necessary when adding/removing params,
>> >making it harder to follow blame links
>> >
>> > Judging from my personal use of blame links, it's very rare that
>> > I want to find when a specific param was added to a signature.
>> > This is also something that tooling could solve (skip blame on
>> > lines that differ only by whitespace).
>> >
>>
>> This is probably half the time I annotate something (I refuse to use the b
>> word here). Use cases differ, and for a lot of what I find myself doing,
>> this would make it demonstrably worse.
>>
>
> Mercurial and Git both support a -w argument to ignore whitespace with
> annotate/blame.
>
> In addition, modern versions of Mercurial have `hg annotate --skip
> <revset>` which allows you to specify a revset used to select revisions to
> skip over when annotating. The Chromium developer tools have
> git-hyper-blame
> (https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/git-hyper-blame.html),
> which is similar.
>
> What both tools seem to lack is the ability to toggle whitespace and
> revision skipping via the web interface. This can certainly be implemented.
> Also, revision skipping is not an exact science and the results may not
> match expectations, so it shouldn't be seen as a cure-all.
>

Quick update: the next release of Mercurial (presumably 4.4) will have HTML
elements to control whitespace handling of annotate in the web interface.

Shoring up Mercurial's revision skipping in annotate was also discussed at
the Mercurial developer sprint this weekend. Someone is literally sitting
an arm's length away hacking on it as I type this.


>
>
>>
>>
>> > I think improved readability trumps all of the minor issues above.
>> >
>>
>> Readability is (to an extent) in the eye of the beholder. For example, I
>> have a difficult time following long gaps in things... so vertically
>> aligned parameters, where one of them has an extra-long type name (which
>> also seems to happen a lot in things I'm working with) become exceedingly
>> difficult for me to match type and name quickly. (Yes, there are editor
>> tricks to help with this, and yes I use them, but they still only get me
>> so
>> far when I'm quickly scanning code).
>>
>
>


Re: Coding style: Argument alignment

2017-09-27 Thread Gregory Szorc
On Tue, Sep 26, 2017 at 6:45 PM, Nicholas Hurley  wrote:

> So I'm going to chime in on a formatting discussion for probably the second
> time in my life as a mozillian. (It's apparently that important to me.)
>
> On Tue, Sep 26, 2017 at 9:25 AM, Mats Palmgren  wrote:
>
> > 2. touching more lines than necessary when adding/removing params,
> >making it harder to follow blame links
> >
> > Judging from my personal use of blame links, it's very rare that
> > I want to find when a specific param was added to a signature.
> > This is also something that tooling could solve (skip blame on
> > lines that differ only by whitespace).
> >
>
> This is probably half the time I annotate something (I refuse to use the b
> word here). Use cases differ, and for a lot of what I find myself doing,
> this would make it demonstrably worse.
>

Mercurial and Git both support a -w argument to ignore whitespace with
annotate/blame.

In addition, modern versions of Mercurial have `hg annotate --skip
<revset>` which allows you to specify a revset used to select revisions to
skip over when annotating. The Chromium developer tools have
git-hyper-blame (
https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/git-hyper-blame.html),
which is similar.

What both tools seem to lack is the ability to toggle whitespace and
revision skipping via the web interface. This can certainly be implemented.
Also, revision skipping is not an exact science and the results may not
match expectations, so it shouldn't be seen as a cure-all.
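
As a concrete sketch (the file path is illustrative):

  # Ignore whitespace-only changes when annotating:
  $ hg annotate -w layout/base/nsLayoutUtils.cpp
  $ git blame -w layout/base/nsLayoutUtils.cpp

  # Mercurial only: additionally skip revisions matching a revset, e.g.
  # commits whose description mentions a reformat:
  $ hg annotate --skip 'desc("reformat")' layout/base/nsLayoutUtils.cpp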


>
>
> > I think improved readability trumps all of the minor issues above.
> >
>
> Readability is (to an extent) in the eye of the beholder. For example, I
> have a difficult time following long gaps in things... so vertically
> aligned parameters, where one of them has an extra-long type name (which
> also seems to happen a lot in things I'm working with) become exceedingly
> difficult for me to match type and name quickly. (Yes, there are editor
> tricks to help with this, and yes I use them, but they still only get me so
> far when I'm quickly scanning code).
>


Re: Canonical cinnabar repository

2017-09-18 Thread Gregory Szorc
A few posts here and in the other thread have mentioned abstracting away
complexity to servers as if it is a magical solution that makes all
problems go away.

Yes, deploying a server-side solution to do some "syncing" is doable and
solves a subset of notable problems.

But I advise extreme caution against asserting simplicity for anything.

One of the large shifts that we're starting to see take place for Firefox
tooling are the breakdown of the barriers between a) the build system and
version control b) the build system and automation. Transitively, we're
seeing all 3 of these things (which are easy to mentally model as
independent concepts) start to merge. This is both terrifying and exciting.

I really owe a blog post or gdoc or something on this topic, but
essentially we're at a few tipping points in terms of scale where we need
tighter cohesion between these systems to continue to scale in a way that
meets developers' needs. Artifact builds are an example of such a system.
Automation builds these artifacts and indexes against the version control
revision (currently Mercurial). The build system ties it all together. In
order to use an artifact build locally, you need to find an artifact build
from automation. This means knowing the indexed version control revision.
With git-cinnabar, the commit map is locally known (read: fast and
well-defined). With vanilla Git, that commit map is... not local. (Yes, we
could work our way out of this commit mapping problem using various
mechanisms (notably caching and/or content-based hashing for indexing). But
there is no obvious solution at this time.) Artifact builds are only the
tip of the iceberg for this class of problem.

In 2-4 years I see us in a position where version control, the build
system, and automation function as one logical system working in harmony.
We're still in the early stages of this shift, which is probably why you
haven't noticed it. Therefore, decisions and policies around how to support
version control today thus incur a magnitude more effort to support down
the road (and even then may result in poorer outcomes for the user base not
on the "primary" set of tools). I'm particularly sensitive to the butterfly
effect on this (likely inevitable) transition and I don't want to hamstring
future productivity efforts by committing us to a short-sighted version
control support policy that won't easily scale.

That being said, I think a canonical git-cinnabar repository might be
doable. I view that as a glorified cache to offload the expensive
first-time conversion. But using it with vanilla Git is a bit more
concerning from a future compatibility perspective, so we should set
expectations for appropriate use accordingly.

On Mon, Sep 18, 2017 at 8:59 AM, Nicholas Hurley  wrote:

> I've had quite a few times (every time I get a new machine) that I've had
> issues with git-cinnabar and multiple machines. This has resulted in me
> just scp'ing my entire repo every time I get a new machine (which comes
> with its own set of issues... messed up paths in the .git/config, for
> example). I grant that some of this is due to my own stupidity, but aren't
> tools supposed to make it harder for one's own stupidity to get in the way?
>
> I haven't even tried to share anything with anyone else via git, I just
> send them a patch. The chance of mismatched sha's is Too Damn High! This
> is, as the saying goes, suboptimal.
>
> All this to say, I think having an official cinnabar repo (preferably with
> cinnabar metadata I can fetch, too) would make things significantly easier
> for a lot of use-cases, and I can't think of a single use-case that it
> would make things worse for over the long-term (ie, once we have autoland
> everywhere and no manual pushes to m-c or try). The initial fetch might
> take a while if we include metadata, but that's a one-time (per machine)
> cost.
>
> Even better would be the "all the magic happens server-side" solution. Then
> I could Just Use Git, and never have to worry about any of this junk ever
> again.
>
> On Mon, Sep 18, 2017 at 8:25 AM, Andrew McCreight 
> wrote:
>
> > On Mon, Sep 18, 2017 at 7:05 AM, Kartikaya Gupta 
> > wrote:
> >
> > > I've tried using cinnabar a couple of times now and the last time I
> > > tried, this was the dealbreaker for me. My worfklow often involves
> > > moving a branch from one machine to another and the extra hassle that
> > > results from mismatched SHAs makes it much more complicated than it
> > > needs to be. gecko-dev doesn't have this problem as it has a canonical
> > > upstream that works much more like a regular git user expects.
> > >
> >
> > For what it is worth, I regularly pull from one machine to another with
> > git-cinnabar, and it works just fine without any problems from mismatched
> > SHAs. For me, the switch from a clone of gecko-dev to git-cinnabar has
> been
> > totally transparent.
> > 

Re: Intent to require `mach try` for submitting to Try

2017-09-18 Thread Gregory Szorc
On Fri, Sep 15, 2017 at 10:21 PM, ISHIKAWA,chiaki 
wrote:

> Does "mozilla/mach try" will work for try-comm-central?
>
> (Well, I know there has been a general talk of moving thunderbird to
> somewhere else, but it is not going to happen overnight.)
>

I'm not sure. There's no good reason why it should or shouldn't. Also, we
may not impose this Try push restriction on try-comm-central. It depends
how tightly linked they are. Realistically, we'll likely go with whatever is
the least-effort way forward for Firefox because we're not supposed to be
spending time hacking on Thunderbird's infra and tooling.


Re: Intent to require `mach try` for submitting to Try

2017-09-15 Thread Gregory Szorc
On Fri, Sep 15, 2017 at 7:44 PM, Eric Rescorla <e...@rtfm.com> wrote:

> What happens if you are using git?
>

git-cinnabar is already supported.

Vanilla git is TBD. I see bug 1400470 was just filed for that.


>
> On Fri, Sep 15, 2017 at 3:30 PM, Gregory Szorc <g...@mozilla.com> wrote:
>
>> The Try Service ("Try") is a mechanism that allows developers to schedule
>> tasks in automation. The main API for that service is "Try Syntax" (e.g.
>> "try: -b o -p linux -u xpcshell"). And the transport channel for making
>> that API call is a Mercurial changeset's commit message plus a version
>> control "push" to the "try" repository on hg.mozilla.org.
>>
>> As the recent "Reminder on Try usage and infrastructure resources" thread
>> details, scaling Firefox automation - and Try in particular - is
>> challenging. In addition, the number of workflows and variations that
>> automation needs to support is ever increasing and continuously evolving.
>>
>> It shouldn't come as a surprise when I say that we've outgrown many of the
>> implementation details of the Try Service. Try Syntax itself is over 7
>> years old and has survived a complete rewrite of automation scheduling,
>> for
>> example. Aspects of Try are adversely impacting the ability for developers
>> to use Try efficiently and therefore undermining our collective ability to
>> get important work done quickly.
>>
>> In order to put ourselves in a position to more easily change
>> implementation details of the Try Service so we may deliver a better
>> experience for all, we'd like to require the use of `mach try` for Try
>> submissions. This will ensure there is a single, well-defined, and
>> easily-controlled mechanism for submitting to Try. This will allow greater
>> flexibility and adaptability. It will provide better opportunities for
>> user
>> messaging. It will ensure that any new features are seen by everyone
>> sooner. It will eventually lead to faster results on Try for everyone.
>>
>> Bug 1400330 ("require-mach-try") is on file to track requiring `mach try`
>> to submit to Try.
>>
>> The mechanism for submitting to Try has remained relatively stable for
>> years. `mach try` is relatively new - and I suspect unused by a sizeable
>> population. This is a potentially highly disruptive transition. That's why
>> we're not making it immediately and why we're sending this email today.
>>
>> You can mitigate the disruption by using `mach try` before the forced
>> transition is made and reporting bugs as necessary. Have them block
>> "require-mach-try" if you need them addressed before the transition or
>> "mach-try" otherwise. We don't really have a good component for `mach try`
>> bugs, so put them in TaskCluster :: Task Configuration for now and chain
>> them to a tracking bug for visibility.
>>
>> FAQ
>>
>> Q: When will the transition be made?
>> A: When we feel `mach try` is usable for important workflows (as judged by
>> blockers on "require-mach-try"). Also, probably not before Firefox 57
>> ships
>> because we don't want to interfere with that release.
>>
>> Q: What about old changesets?
>> A: You will still be able to submit to Try using the current/legacy
>> mechanism for old changesets. There will be a "flag day" of sorts on
>> mozilla-central after which all Try submissions will require `mach try` or
>> nothing will get scheduled.
>>
>> Q: Will Try Syntax continue to work?
>> A: For the foreseeable future, yes. There is a long-term goal to replace
>> Try Syntax with something more automatic and less effort - at least for
>> most use cases. But there are no definite plans or a timetable to remove
>> Try Syntax.
>>
>> Q: Are there any other major changes planned?
>> A: Several. People are hacking on path-based selection, `mach try fuzzy`
>> improvements, moz.build-based annotations influencing what gets scheduled,
>> not using a traditional Mercurial repository for backing Try, and more.
>> Some of these may not be available to legacy Try submission workflows,
>> giving you additional reasons to adopt `mach try` sooner.
>>
>> Q: Should I be worried about this transition negatively impacting my
>> workflow?
>> A: As long as you file bugs blocking "require-mach-try" to make it known
>> why `mach try` doesn't work for you, your voice will be heard and
>> hopefully
>> acted on. So as long as you file bugs, you shouldn't need to worry.
>>
>
>


Re: Intent to require `mach try` for submitting to Try

2017-09-15 Thread Gregory Szorc
On Fri, Sep 15, 2017 at 6:58 PM, Boris Zbarsky <bzbar...@mit.edu> wrote:

> On 9/15/17 6:30 PM, Gregory Szorc wrote:
>
>> we'd like to require the use of `mach try` for Try
>> submissions.
>>
>
> So just to be clear, instead of being able to "hg push -f try" will one
> now need to do "./mach try syntax ." and put in the actual syntax
> string on every single try push?
>
> The reason I ask is that a very common use case for me is having a single
> try syntax that I want to apply to multiple changesets (e.g. bisecting
> things on try, or doing A/B comparisons on some tests with/without some
> patches, etc).  Having a trysyntax changeset or ten in my mq works pretty
> well for this.  I can give them names that indicate what sort of try push
> they do, so if I want to do a "linux64, tests, no talos" changeset I don't
> have to go around digging for the try syntax for it: I already have
> something that has a name indicating that, and qpushing it is pretty easy
> with the name-substring matching qpush does.
>
> There are obviously ways mach try could support this, e.g. by being able
> to easily define aliases for particular ways of pushing to try or whatnot.
> But it doesn't have an obvious way of doing that right now.
>
> One could also use shell aliases for this, but in practice the lack of
> substring matching on shell aliases would get annoying, because there are a
> bunch of kinda-similar variants involved in my experience.


`mach try fuzzy` supports presets via `mach try fuzzy --preset PRESET`. See
`mach help try fuzzy`.
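
As a sketch of the workflow (the preset name is illustrative; consult `mach
help try fuzzy` for the authoritative flags):

  # Save the current interactive selection under a name:
  $ ./mach try fuzzy --save linux64-tests-no-talos
  # Reuse it later on any changeset:
  $ ./mach try fuzzy --preset linux64-tests-no-talos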

I encourage you to file bugs on making the usability fit your needs. I
would even consider an "auto bisect" mechanism (where it submits N
changesets using the same configuration) a reasonable feature request.
Since Try scheduling is all in-tree now, we could do that in a single push
- possibly even with a single changeset. That will almost certainly be
faster than the workflow you are currently practicing.


Re: Intent to require `mach try` for submitting to Try

2017-09-15 Thread Gregory Szorc
On Fri, Sep 15, 2017 at 4:51 PM, Nicholas Nethercote <n.netherc...@gmail.com
> wrote:

> Are there docs on how to use `./mach try`? `./mach help try` doesn't give
> much detail.
>

`mach try` has sub-commands. Run `mach help try syntax` and `mach help try
fuzzy` for more useful docs. The former is actually pretty reasonable
documentation of Try Syntax. There is some light magic where `mach try
<syntax>` is essentially treated as `mach try syntax <syntax>`. This could
be documented better in `mach help try`.


>
> Also, will https://mozilla-releng.net/trychooser/ still work? I'm
> generally more of a command line person than a GUI person, but this is one
> case where I find the GUI approach far more usable.
>

I reckon it will. Although with the velocity that Try Syntax changes, it is
arguably better to have this all in-tree. Also, it is kind of silly that
you have to open a web site and copy something from that into your
terminal. IMO this should all be turnkey.  e.g. `mach try chooser` would
start a local HTTP server, point your web browser at it, and an HTML form
submission would stop the local HTTP server and submit to Try. That feels a
bit more pleasant (for those who can run a browser in their development
environment anyway).

But I digress: that site should continue to run.


>
> On Sat, Sep 16, 2017 at 8:30 AM, Gregory Szorc <g...@mozilla.com> wrote:
>
>> The Try Service ("Try") is a mechanism that allows developers to schedule
>> tasks in automation. The main API for that service is "Try Syntax" (e.g.
>> "try: -b o -p linux -u xpcshell"). And the transport channel for making
>> that API call is a Mercurial changeset's commit message plus a version
>> control "push" to the "try" repository on hg.mozilla.org.
>>
>> As the recent "Reminder on Try usage and infrastructure resources" thread
>> details, scaling Firefox automation - and Try in particular - is
>> challenging. In addition, the number of workflows and variations that
>> automation needs to support is ever increasing and continuously evolving.
>>
>> It shouldn't come as a surprise when I say that we've outgrown many of
>> the implementation details of the Try Service. Try Syntax itself is over 7
>> years old and has survived a complete rewrite of automation scheduling, for
>> example. Aspects of Try are adversely impacting the ability for developers
>> to use Try efficiently and therefore undermining our collective ability to
>> get important work done quickly.
>>
>> In order to put ourselves in a position to more easily change
>> implementation details of the Try Service so we may deliver a better
>> experience for all, we'd like to require the use of `mach try` for Try
>> submissions. This will ensure there is a single, well-defined, and
>> easily-controlled mechanism for submitting to Try. This will allow greater
>> flexibility and adaptability. It will provide better opportunities for user
>> messaging. It will ensure that any new features are seen by everyone
>> sooner. It will eventually lead to faster results on Try for everyone.
>>
>> Bug 1400330 ("require-mach-try") is on file to track requiring `mach try`
>> to submit to Try.
>>
>> The mechanism for submitting to Try has remained relatively stable for
>> years. `mach try` is relatively new - and I suspect unused by a sizeable
>> population. This is a potentially highly disruptive transition. That's why
>> we're not making it immediately and why we're sending this email today.
>>
>> You can mitigate the disruption by using `mach try` before the forced
>> transition is made and reporting bugs as necessary. Have them block
>> "require-mach-try" if you need them addressed before the transition or
>> "mach-try" otherwise. We don't really have a good component for `mach try`
>> bugs, so put them in TaskCluster :: Task Configuration for now and chain
>> them to a tracking bug for visibility.
>>
>> FAQ
>>
>> Q: When will the transition be made?
>> A: When we feel `mach try` is usable for important workflows (as judged
>> by blockers on "require-mach-try"). Also, probably not before Firefox 57
>> ships because we don't want to interfere with that release.
>>
>> Q: What about old changesets?
>> A: You will still be able to submit to Try using the current/legacy
>> mechanism for old changesets. There will be a "flag day" of sorts on
>> mozilla-central after which all Try submissions will require `mach try` or
>> nothing will get scheduled.
>>
>> Q: Will Try Syntax continue to work?
>> A: For the foreseeable future,

Re: Intent to require `mach try` for submitting to Try

2017-09-15 Thread Gregory Szorc
On Fri, Sep 15, 2017 at 4:48 PM, Nick Fitzgerald <nfitzger...@mozilla.com>
wrote:

> Does `mach try` still require `git cinnabar` when using a `git` checkout
> of m-c, or does it work with `moz-git-tools` now too?
>

It currently requires git-cinnabar. TBH, I forgot about this. Supporting
non-cinnabar will be a bit annoying. Not impossible, but annoying.

A non-cinnabar user should file a blocking bug detailing their workflow so
we can figure out a path forward. It is quite possible the outcome is "only
support git-cinnabar." So whoever files the bug should reply here with the
link so non-cinnabar Git users can pile on and let your numbers be known.
But please do it in the bug and not on this list.


> On Fri, Sep 15, 2017 at 3:30 PM, Gregory Szorc <g...@mozilla.com> wrote:
>
>> The Try Service ("Try") is a mechanism that allows developers to schedule
>> tasks in automation. The main API for that service is "Try Syntax" (e.g.
>> "try: -b o -p linux -u xpcshell"). And the transport channel for making
>> that API call is a Mercurial changeset's commit message plus a version
>> control "push" to the "try" repository on hg.mozilla.org.
>>
>> As the recent "Reminder on Try usage and infrastructure resources" thread
>> details, scaling Firefox automation - and Try in particular - is
>> challenging. In addition, the number of workflows and variations that
>> automation needs to support is ever increasing and continuously evolving.
>>
>> It shouldn't come as a surprise when I say that we've outgrown many of
>> the implementation details of the Try Service. Try Syntax itself is over 7
>> years old and has survived a complete rewrite of automation scheduling, for
>> example. Aspects of Try are adversely impacting the ability for developers
>> to use Try efficiently and therefore undermining our collective ability to
>> get important work done quickly.
>>
>> In order to put ourselves in a position to more easily change
>> implementation details of the Try Service so we may deliver a better
>> experience for all, we'd like to require the use of `mach try` for Try
>> submissions. This will ensure there is a single, well-defined, and
>> easily-controlled mechanism for submitting to Try. This will allow greater
>> flexibility and adaptability. It will provide better opportunities for user
>> messaging. It will ensure that any new features are seen by everyone
>> sooner. It will eventually lead to faster results on Try for everyone.
>>
>> Bug 1400330 ("require-mach-try") is on file to track requiring `mach try`
>> to submit to Try.
>>
>> The mechanism for submitting to Try has remained relatively stable for
>> years. `mach try` is relatively new - and I suspect unused by a sizeable
>> population. This is a potentially highly disruptive transition. That's why
>> we're not making it immediately and why we're sending this email today.
>>
>> You can mitigate the disruption by using `mach try` before the forced
>> transition is made and reporting bugs as necessary. Have them block
>> "require-mach-try" if you need them addressed before the transition or
>> "mach-try" otherwise. We don't really have a good component for `mach try`
>> bugs, so put them in TaskCluster :: Task Configuration for now and chain
>> them to a tracking bug for visibility.
>>
>> FAQ
>>
>> Q: When will the transition be made?
>> A: When we feel `mach try` is usable for important workflows (as judged
>> by blockers on "require-mach-try"). Also, probably not before Firefox 57
>> ships because we don't want to interfere with that release.
>>
>> Q: What about old changesets?
>> A: You will still be able to submit to Try using the current/legacy
>> mechanism for old changesets. There will be a "flag day" of sorts on
>> mozilla-central after which all Try submissions will require `mach try` or
>> nothing will get scheduled.
>>
>> Q: Will Try Syntax continue to work?
>> A: For the foreseeable future, yes. There is a long-term goal to replace
>> Try Syntax with something more automatic and less effort - at least for
>> most use cases. But there are no definite plans or a timetable to remove
>> Try Syntax.
>>
>> Q: Are there any other major changes planned?
>> A: Several. People are hacking on path-based selection, `mach try fuzzy`
>> improvements, moz.build-based annotations influencing what gets scheduled,
>> not using a traditional Mercurial repository for backing Try, and more.
>> Some of these may not be available to legacy Try submission workflows,
>> giving you additional reasons to adopt `mach try` sooner.

Re: Intent to require `mach try` for submitting to Try

2017-09-15 Thread Gregory Szorc
On Fri, Sep 15, 2017 at 3:39 PM, Botond Ballo  wrote:

> Will submitting to try from MozReview (or Phabricator once that
> replaces MozReview) still work? I think there is important value in
> being able to submit to try without a local checkout.
>

Yes. And I agree there is value in scheduling automation without a checkout.

This is tracked in bug 1400333.


Intent to require `mach try` for submitting to Try

2017-09-15 Thread Gregory Szorc
The Try Service ("Try") is a mechanism that allows developers to schedule
tasks in automation. The main API for that service is "Try Syntax" (e.g.
"try: -b o -p linux -u xpcshell"). And the transport channel for making
that API call is a Mercurial changeset's commit message plus a version
control "push" to the "try" repository on hg.mozilla.org.

As the recent "Reminder on Try usage and infrastructure resources" thread
details, scaling Firefox automation - and Try in particular - is
challenging. In addition, the number of workflows and variations that
automation needs to support is ever increasing and continuously evolving.

It shouldn't come as a surprise when I say that we've outgrown many of the
implementation details of the Try Service. Try Syntax itself is over 7
years old and has survived a complete rewrite of automation scheduling, for
example. Aspects of Try are adversely impacting the ability for developers
to use Try efficiently and therefore undermining our collective ability to
get important work done quickly.

In order to put ourselves in a position to more easily change
implementation details of the Try Service so we may deliver a better
experience for all, we'd like to require the use of `mach try` for Try
submissions. This will ensure there is a single, well-defined, and
easily-controlled mechanism for submitting to Try. This will allow greater
flexibility and adaptability. It will provide better opportunities for user
messaging. It will ensure that any new features are seen by everyone
sooner. It will eventually lead to faster results on Try for everyone.

Bug 1400330 ("require-mach-try") is on file to track requiring `mach try`
to submit to Try.

The mechanism for submitting to Try has remained relatively stable for
years. `mach try` is relatively new - and I suspect unused by a sizeable
population. This is a potentially highly disruptive transition. That's why
we're not making it immediately and why we're sending this email today.

You can mitigate the disruption by using `mach try` before the forced
transition is made and reporting bugs as necessary. Have them block
"require-mach-try" if you need them addressed before the transition or
"mach-try" otherwise. We don't really have a good component for `mach try`
bugs, so put them in TaskCluster :: Task Configuration for now and chain
them to a tracking bug for visibility.

FAQ

Q: When will the transition be made?
A: When we feel `mach try` is usable for important workflows (as judged by
blockers on "require-mach-try"). Also, probably not before Firefox 57 ships
because we don't want to interfere with that release.

Q: What about old changesets?
A: You will still be able to submit to Try using the current/legacy
mechanism for old changesets. There will be a "flag day" of sorts on
mozilla-central after which all Try submissions will require `mach try` or
nothing will get scheduled.

Q: Will Try Syntax continue to work?
A: For the foreseeable future, yes. There is a long-term goal to replace
Try Syntax with something more automatic and less effort - at least for
most use cases. But there are no definite plans or a timetable to remove
Try Syntax.

Q: Are there any other major changes planned?
A: Several. People are hacking on path-based selection, `mach try fuzzy`
improvements, moz.build-based annotations influencing what gets scheduled,
not using a traditional Mercurial repository for backing Try, and more.
Some of these may not be available to legacy Try submission workflows,
giving you additional reasons to adopt `mach try` sooner.

Q: Should I be worried about this transition negatively impacting my
workflow?
A: As long as you file bugs blocking "require-mach-try" to make it known
why `mach try` doesn't work for you, your voice will be heard and hopefully
acted on. So as long as you file bugs, you shouldn't need to worry.


Re: Intermittent oranges and when to disable the related test case - a simplified policy

2017-09-08 Thread Gregory Szorc
On Wed, Sep 6, 2017 at 2:10 PM,  wrote:

> Over the last 9 months a few of us have really watched intermittent test
> failures almost daily and done a lot to pester people as well as fix many.
> While there are over 420 bugs that have been fixed since the beginning of
> the year, there are half that many (211+) which have been disabled in some
> form (including turning off the jobs).
>
> We don't like to disable and have been pretty relaxed in recommending
> disabling a test.  Overall we have tried to adhere to a policy of:
> * >=30 failures/week- ask for owner to look at failure and fix it, if this
> persists for a few weeks with no real traction we would go ahead [and
> recommend] disabling it.
> * >= 75 failures/week- ask for people to fix this in a shorter time frame
> and recommend disabling the test in a week or so
> * >= 150 failures/week- often just disable the test
>
> This is confusing and hard to manage.  Since then we have started
> adjusting triage queries and some teams are doing their own triage and we
> are ignoring those bugs (while they are getting prioritized properly).
>
> What we are looking to start doing this month is adopting a simpler policy:
> * any bug that has >=200 instances in the last 30 days will be disabled
> ** this will be a manual process, so it will happen a couple times/week
>
> We expect the outcome of this to be a similar amount of disabling, just an
> easier method for doing so.  It is very possible we might recommend
> disabling a test before it hits the threshold- keep in mind a disabled test
> is easy to re-enable (so feel free to disable for that one platform until
> you have time to look at fixing it)
>
> To be clear we (and some component owners) will continue triaging bugs and
> trying to get fixes in place as often as possible and prefer a fix, not a
> disabled test!
>
> Please raise any concerns, otherwise we will move forward with this in the
> coming weeks.
>

A few replies on this thread revolve around how to determine if a test is
disabled.

Our canonical source of which tests run where is the in-tree test
manifests. e.g. a browser.ini file. These are consumed by moz.build files
and the build system produces a master list of all tests. The scheduling
logic (Taskgraph) turns these into tasks in CI. There is also the
out-of-tree SETA database that can influence test scheduling.

Our mechanism for disabling tests tends to be something like adding
`skip-if os == "win" # Bug 1393121` to a test manifest. This is useful as a
means to accomplish filtering at the test scheduling/execution layer. And
humans can read the metadata ("# Bug XXX") and other comments and commit
history to get a feel for history. But the metadata isn't rich enough to
build useful queries or dashboards. We can find all tests being skipped.
But the manifests themselves are lacking rich metadata around things like
why tests were disabled. This limits our ability to answer various
questions.
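
To illustrate, a manifest entry today looks like the first two lines below;
the trailing comments sketch a purely hypothetical richer schema that
dashboards could query:

  [browser_foo.js]
  skip-if = os == "win"  # Bug 1393121
  # Hypothetical structured metadata:
  # disabled-bug = 1393121
  # disabled-reason = intermittent-failure
  # disabled-since = 2017-09-08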

I know we've discussed this topic in the past but I can't recall the outcomes.
Is it worthwhile to define and use a richer test manifest "schema" that
will facilitate querying and building dashboards so we have better
visibility into disabled tests?


Re: Coding style: Argument alignment

2017-08-29 Thread Gregory Szorc
On Tue, Aug 29, 2017 at 7:07 PM, L. David Baron  wrote:

> On Tuesday 2017-08-29 18:32 -0700, Eric Rahm wrote:
> > Do we explicitly state a preferred alignment of arguments in multi-line
> > function declarations (primarily in the context of C++) [1]? This
> question
> > has come up in regards to using clang-format for cleaning up code [2] and
> > it would be helpful to be able to reference a concrete example in the
> style
> > guide.
> >
> > Given this example:
> >
> > int
> > Foo(const nsACString& aStr,
> >     mozilla::UniquePtr&& aBuffer,
> >     nsISupports* aOptionalThing = nullptr);
> >
> > Should the reformatted style:
> >
> > a) *stay the same*
> > b) align the argument names like so:
> >
> > int
> > Foo(const nsACString&    aStr,
> >     mozilla::UniquePtr&& aBuffer,
> >     nsISupports*         aOptionalThing = nullptr);
> >
> > c) some other option
>
> I don't know if we've had any official statements, but I tend to
> reformat code *away* from option (b) towards your original example
> if I'm modifying it and need to add a parameter that would throw
> things out of alignment.
>
> I think I do this because (b) has the disadvantage that more code
> changes require touching additional lines, which is both changes
> blame and is extra work (although it's not extra work if we're using
> clang-format tree-wide).  Option (b) is also more likely to lead to
> overly long lines that require wrapping.


I agree with dbaron.

Vertical alignment just creates more work and code churn. Keep it simple
and don't require extra code to be changed just because you rename or
add/remove arguments. We should be encouraging code to be refactored when
justified. Extra, avoidable code churn via vertical alignment therefore
discourages cleanups and is counter-productive to long-term maintainability.


Re: How to configure mozilla Firefox build options to build a customized release/developer edition build?

2017-08-28 Thread Gregory Szorc
On Mon, Aug 28, 2017 at 12:28 AM, Erxin Shang 
wrote:

> Hi Mozilla Experts,
>
> I'm trying to build a release version of Firefox. But after the build, the
> result still seems a little bit different compared to the official build.
> For example, the official build doesn't contain the folders chrome/,
> components/, gmp-fake/, etc. in the root of the Firefox folder.
>
> Current build options:
>
> mk_add_options AUTOCLOBBER=1
>
> ac_add_options --enable-application=browser
> ac_add_options --enable-artifact-builds
> ac_add_options --target=x86_64-pc-mingw32
> ac_add_options --host=x86_64-pc-mingw32
> ac_add_options --disable-tests
> ac_add_options --enable-optimize
> ac_add_options --enable-release
> ac_add_options --disable-crashreporter
>
> My question is how to configure the options to build a version just like
> the official Firefox, but as a nightly build? Where can I find the build
> options, such as release/developer edition, which are used by Mozilla?
>
> Thanks in advance!
>

If you open `about:buildconfig` in Firefox, you'll see the configure
options used to build Firefox. That's a good start for obtaining a similar
build. Enabling MOZ_AUTOMATION=1 enables a bunch of supplemental build
steps you normally wouldn't get in a local build. MOZ_AUTOMATION=1 is
tailored to Mozilla's build environment. So attempting to set that locally
will likely result in multiple failures. The custom build steps you likely
care about can be run with `mach build installer` + `mach build package`.


Fwd: Security releases for Git, Mercurial, and Subversion

2017-08-10 Thread Gregory Szorc
-- Forwarded message --
From: Gregory Szorc <g...@mozilla.com>
Date: Thu, Aug 10, 2017 at 12:10 PM
Subject: Security releases for Git, Mercurial, and Subversion
To: Firefox Dev <firefox-...@mozilla.org>, dev-version-control <
dev-version-cont...@lists.mozilla.org>


Git, Mercurial, and Subversion just had a coordinated release to mitigate a
security vulnerability regarding the parsing of ssh:// URLs. Essentially,
well-crafted ssh:// URLs (e.g. in a subrepo, submodule, or svn:externals
references) could lead to local code execution. If you run a command like
`git clone --recurse-submodules` or `hg pull --update` and nefarious data
is received, you could be p0wned.

This is tracked in at least CVE-2017-1000116 and CVE-2017-1000117.

In addition, Mercurial issued a security fix for symlink handling that
could result in arbitrary filesystem write (attempts) for well-crafted
symlinks. This is CVE-2017-1000115.

You should upgrade your version control clients ASAP to eliminate exposure
to these bugs. Until you do, be extra cognizant where you pull from -
especially any operation related to subrepos/submodules.
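
A quick way to check what you are running (my understanding is that the
first fixed releases are Mercurial 4.3.1 and Git 2.14.1):

  $ hg version -q
  $ git --version
  $ svn --version --quiet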

As of today, hg.mozilla.org is now configured to not allow subrepos and
symlinks on non-user repos. The main Firefox repos have been audited and no
"bad" data is present. So, the canonical Firefox repos cannot be used as a
delivery vehicle for these exploits. I anticipate popular hosting services
like GitHub and Bitbucket will take similar actions and make similar
announcements.

Critical version control infrastructure like hg.mozilla.org and Autoland
has been patched for several days courtesy of responsible early disclosure
of the vulnerabilities and fixes from the Mercurial Project.

Announcements:

hg: https://www.mercurial-scm.org/pipermail/mercurial/2017-August/050522.html
git: http://marc.info/?l=git&m=150238802328673&w=2
svn: http://mail-archives.apache.org/mod_mbox/subversion-announce/201708.mbox/%3C2fefe468-7d41-11e7-aea1-9312c6089150%40apache.org%3E


Re: Figuring out and controlling what tasks run before first paint

2017-08-08 Thread Gregory Szorc
On Tue, Aug 8, 2017 at 5:42 PM, Kris Maglione  wrote:

> One of my biggest frustrations in profiling startup performance has been
> the fact that exactly which code runs during before or after first paint
> changes based on arbitrary timing factors. If I make a 5ms improvement to
> one section of code, a 100ms chunk of code winds up running after first
> paint rather than before. If I make a 5ms improvement to another section of
> code, a 150ms chunk of code winds up running *before* first paint rather
> than after. This also shows up in the ts_paint timings on talos, where we
> have a fairly consistent cluster of high times, a fairly consistent cluster
> of low times, and very little in-between.
>
> Presumably, if we're OK with these chunks *ever* running after first
> paint, then they should always run after first paint. And vice versa.
>
> I've made various attempts to get a handle on this, but never with much
> success. The last time, I got as far as fixing the broken TaskTracer build
> before I finally gave up trying to find a useful way to analyze the data.
> What I'd really like is a handle on what tasks are run, when, who scheduled
> them (and when), and what code they run.
>
> After that, I'd ideally like to find a way to run async tasks during
> startup so that I'm guaranteed which parts run before first paint and which
> run after.
>
> Has anyone else made any progress on this front? Are there any other tools
> that I'm overlooking? Is there a sensible path forward?
>

This reminded me of an old thread from 2013:
https://groups.google.com/d/msg/mozilla.dev.platform/oKRBRqQbalk/D_YYWex83X4J

I'm pretty sure that thread eventually led to
toolkit/components/asyncshutdown (which Yoric wrote). That's a really nifty
mechanism for managing component shutdown. IIRC it helped eliminate a
number of race conditions, edge cases, and bad practices (like event loop
spinning).
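
For reference, the shape of that API is roughly the following (the component
name and method are hypothetical):

  Components.utils.import("resource://gre/modules/AsyncShutdown.jsm");
  // Block profile-before-change shutdown until this component settles.
  AsyncShutdown.profileBeforeChange.addBlocker(
    "MyComponent: flush pending writes",   // human-readable reason
    () => myComponent.flushPendingWrites() // promise-returning condition
  );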

AFAIK we never replicated that feature to startup or came up with a more
modern/generic mechanism to manage components and their complex
dependencies throughout their lifetime. (It is a hard problem after all.)
Re-reading that old thread and your issues here, it might be worth
re-visiting those grand ideas from 2013.


Re: More Rust code

2017-08-08 Thread Gregory Szorc
On Mon, Aug 7, 2017 at 11:01 PM, Henri Sivonen  wrote:

> On Tue, Aug 8, 2017 at 1:12 AM, Mike Hommey  wrote:
> > Here's a bunch of data why "let's switch compilers" is not necessarily
> > easy (I happen to have gathered that recently):
>
> Thank you.
>
> > Newer versions of clang-cl might generate faster code, but they crash
> > during the build: https://bugs.llvm.org/show_bug.cgi?id=33997
>
> I'm guessing using a very new clang was what allowed Chrome to switch
> from MSVC to clang? (Chrome accepted a binary size increase on 32-bit
> Windows as a result of switching to clang.)
>

I also found references to performance hits in their issue tracker.
Specifically, it looked like process startup and new tab times regressed.
But I could find no numbers quantifying it.

We should view their decision as having to do more with using a unified and
open source toolchain [that they can patch, improve, and deploy in days
instead of months] instead of purely performance. It's a huge win for
Chrome's developers (target 1 toolchain instead of N) and it will allow
them to move faster since they're not beholden to any lagging one-off
toolchain. They still have to worry about packaging for Chromium, of
course. Hopefully this will result in faster distro packaging for newer
LLVM/Clang releases (not unlike how Rust is forcing the world to come to
terms with a toolchain that is released every 6 weeks).


Re: OS/2 still supported ?

2017-07-24 Thread Gregory Szorc
On Mon, Jul 24, 2017 at 6:13 PM, Enrico Weigelt, metux IT consult <
enrico.weig...@gr13.net> wrote:

> On 25.07.2017 00:32, Mike Hoye wrote:
>
>> On 2017-07-24 8:27 PM, Enrico Weigelt, metux IT consult wrote:
>>
>>> Hi folks,
>>>
>>>
>>> I see lots of #ifdef XP_OS2 ... is OS/2 still supported at all ?
>>>
>> Our list of supported platforms is here:
>>
>> https://developer.mozilla.org/en/docs/Supported_build_configurations
>>
>
> I don't see OS2 here, nor win16. So, can we remove the related parts ?
>

Yes, please submit patches to remove dead code.

If you want to go down a rabbit hole to help with that,
https://bugzilla.mozilla.org/show_bug.cgi?id=nukeb2g is full of open bugs.


Re: build complexity vs kbuild

2017-07-22 Thread Gregory Szorc
On Sat, Jul 22, 2017 at 1:11 PM, Enrico Weigelt, metux IT consult <
enrico.weig...@gr13.net> wrote:

> Hi folks,
>
>
> the whole build seems to be very complex.
>
> Not just the huge configure.in, but also the complexity in moz.build
> files, many ifdef's, etc, etc. Many orthogonal things are twisted
> together (eg. widget toolkits influence lots of other things)
>
> I'm thinking of having a deeper look at Kbuild. It's really nice for
> handling complex dependencies. In the kernel we also use it to
> automatically switch hidden settings based on higher-level ones
> (eg. the combination of a and b can activate c, certain things
> automatically happen based on platform/cpu type, etc, etc.).
>
> Could it be an option for moz ?
>

Yes, the build system is complex. It has evolved organically for over 20
years, accumulating tons of cruft and complexity along the way. Much of the
existing complexity can be simplified. In fact, the build system today with
moz.build files is much simpler than it was 5 years ago.

Kbuild is not a viable long-term solution because it is tied to Make, which
has fundamental shortcomings that make it not suitable for large, complex,
and cross-platform projects like Firefox. The future of the build system
will involve tools that incorporate modern technologies and wisdom and
don't have their underpinnings based on 1970s technology (like Make does).
Bazel is one such modern tool.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Announcing MozillaBuild 3.0 Release

2017-07-21 Thread Gregory Szorc
Thanks for all your work on this, Ryan!

So everyone knows, this is hopefully the last major overhaul of
MozillaBuild.

The plan from build system land is to attempt to go "all in" on Windows
Subsystem for Linux (WSL). That's the feature in Windows 10 (and even in
Server editions now) that implements Linux syscalls inside the Windows
kernel and allows you to run Linux binaries "natively" on Windows. The
performance is pretty good and it is a better Linux-on-Windows approach
than msys ever will be. Assuming we can pull it off, the goal is to ditch
MozillaBuild and support building Firefox from WSL. You'll then be able to
develop on Windows from the comfort of a Linux command line with access to
the full power of your favorite Linux distro. We will likely maintain some
support for building in non-WSL environments for automation, etc. But for
the average developer, we want to focus on WSL because we perceive that to
be the best opportunity for a more pleasant development experience on
Windows.

On Fri, Jul 21, 2017 at 3:07 PM, Ryan VanderMeulen <
rvandermeu...@mozilla.com> wrote:

> It appears that the formatting of that email was pretty well destroyed
> when sent out. Here's a direct link to the release notes in Google Doc form:
> https://docs.google.com/document/d/1NDz7ROxTYNB5YnP7VJ5CmJomQDun-
> 7o4HDtSRLkuPaw/edit
>
> -Ryan
>
> On Fri, Jul 21, 2017 at 6:02 PM, Ryan VanderMeulen <
> rvandermeu...@mozilla.com> wrote:
>
>> I am pleased to announce the final release of MozillaBuild 3.0! Sorry in
>> advance for the length of this message, but there's a lot of changes in
>> this release worth calling out.
>>
>> https://ftp.mozilla.org/pub/mozilla/libraries/win32/MozillaBuildSetup-Latest.exe
>>
>> Important changes since version 2.2.0:
>>
>>  - MozillaBuild now requires Windows 7+ 64-bit to install.
>>  - Removed the start-shell-msvc*.bat files. See below for more
>>    information on this change.
>>  - Updated Python to version 2.7.13 and switched to the 64-bit version.
>>  - Added Python 3.6.2.
>>  - Added nodejs 8.1.4 & npm 5.3.0.
>>    - ESLint is now a first-class citizen on Windows! Things should Just
>>      Work when using the |./mach eslint| command.
>>  - Other updates to various included components.
>>    - Mercurial updated to version 4.2.2.
>>    - NSIS updated to version 3.01 (and older versions removed).
>>    - Some MSYS components were updated.
>>  - Behind the scenes, MozillaBuild packaging was completely overhauled
>>    so that anyone can now generate an installer package simply by
>>    running the packageit.py script from the source checkout. This should
>>    in turn make it much easier for new contributors to test their work.
>>
>>
>> Full changelog:
>>
>> https://hg.mozilla.org/mozilla-build/pushloghtml?fromchange=MOZILLABUILD_2_2_0_RELEASE&tochange=MOZILLABUILD_3_0_0_RELEASE
>>
>> It is strongly advised that you not install this over a previous
>> installation!
>>
>> Upgrade instructions:
>>
>> 1. Pull down and update to a modern revision of any trunk repo
>>    (mozilla-central, inbound, or autoland).
>> 2. Run |./mach mercurial-setup --update-only| to ensure the
>>    version-control-tools repository is current. Extension bustage after
>>    upgrade is likely if you don’t do this.
>> 3. Run |./mach clobber| to remove the object directory of any trees you
>>    have. Build errors are pretty much guaranteed to occur otherwise.
>> 4. Assuming you’ve previously installed Rust via |./mach bootstrap|,
>>    backup msys/etc/profile.d/profile-rustup.sh from your current
>>    MozillaBuild installation.
>> 5. Remove your current installation. If you can't remove the existing
>>    installation, you probably have a terminal open or ssh-agent running.
>>    Terminate it and try again.
>> 6. Install MozillaBuild 3.0.
>> 7. Copy profile-rustup.sh to your new msys/etc/profile.d directory. You
>>    can also just re-run |./mach bootstrap| if you don’t want to deal
>>    with copying files around.
>> 8. If you previously enabled minTTY, you will need to do so again by
>>    setting USE_MINTTY=1 at the top of start-shell.bat.
>>
>>
>> Please file any problems you come across in mozilla.org::MozillaBuild.
>>
>> Where did start-shell-msvc*.bat go?
>>
>> Since the release of MozillaBuild 2.2.0, much has changed with how we
>> build Firefox and in how Microsoft distributes their compiler and SDK
>> toolchains. In order to make things more flexible for both our build
>> automation as well as people building locally, MSVC and Platform SDK
>> detection was moved into the build 

Re: wpt CSS tests now running on Linux

2017-07-20 Thread Gregory Szorc
On Thu, Jul 20, 2017 at 9:33 AM, James Graham 
wrote:

> Bug 1341078 and dependencies just landed on inbound, which means we now
> have the W3C/web-platform-tests CSS tests in-tree and running in
> automation. This adds about 12,000 reftests for CSS features to the
> web-platform-tests suite. They are currently enabled in CI, but only on
> linux*, due to limited capacity on OSX, and issues with the harness on
> Windows. The tests will be enabled on other platforms once these problems
> are resolved.
>
> Servo was already running many of these tests in their automation, so this
> landing plugs a gap in the stylo testing vs upstream.
>
> Note that, as usual with wpt landings, these tests have been vetted for
> stability, but not for the actual results.
>
> Changes to the css tests in this directory will be upstreamed to
> web-platform-tests in the same way as for any other web-platform-test. Note
> that the reftest harness versions of the Mozilla submitted tests are still
> running, so if you want to edit or add to those it is recommended to use
> the copy in layout/reftests/w3c-css/submitted/ since that will be
> correctly synchronised and currently runs on more platforms in CI.
>
> The number of tests and nature of reftests means that this change added a
> large number of files to the repository (around 37,000). Apologies for any
> inconvenience caused by such a large changeset. I'm told that narrow clones
> are just around the corner and may make this kind of thing more tolerable
> in the future.
>

"Around the corner" is a bit optimistic. Mercurial 4.3 has experimental
support for sparse checkouts via an extension. (The feature has been part
of a Facebook extension for years.) I plan to leverage this in automation
to speed up VCS interactions. But end-user support will likely be
experimental until the feature stabilizes upstream, hopefully in 4.4 in 3
months.

In the meantime, be sure to install Watchman
(https://facebook.github.io/watchman/) and the fsmonitor Mercurial
extension to make operations faster. `mach mercurial-setup` should help
with this.
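If you'd rather configure it by hand, fsmonitor ships with modern
Mercurial and is enabled from your ~/.hgrc (a minimal sketch, assuming the
watchman binary is installed and on your PATH):

  [extensions]
  # bundled with Mercurial 3.8+; talks to the running watchman daemon
  fsmonitor =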

Git doesn't yet have filesystem watcher integration, so you are pretty much
at the mercy of your filesystem and I/O subsystem. However, an engineer at
Twitter is currently trying to land Watchman support into the upstream Git
project! The first version of the patches was emailed in March. The 14th
iteration was emailed last week. But there still appears to be active
discussion on some high-level details of the patch series. So I'm not
holding my breath waiting for it to land.

For reference, the best timings I could obtain are as follows. These are
under ideal conditions with as much as possible already in filesystem
caches. If you actually need to perform I/O due to a cache miss (which is
pretty common because common build system activity touches tons of files
and tends to cause cache eviction), the differences with Watchman are even
more pronounced. So it isn't uncommon for Watchman to save a second or two
for random operations.


Linux
$ hg status
no watchman: ~1.00s
w/ watchman: ~0.11s

$ git status
~0.28s

MacOS
$ hg status
no watchman: ~1.78s
w/ watchman: ~0.23s

Windows
$ hg status
no watchman: ~2.31s
w/ watchman: ~0.35s


Linux
$ hg status
no watchman: ~1.16s
w/ watchman: ~0.12s

$ git status
~0.33s

MacOS
$ hg status
no watchman: ~2.04s
w/ watchman: ~0.26s

Windows
$ hg status
no watchman: ~2.56s
w/ watchman: ~0.40s
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Care in the use of MOZ_CRASH_UNSAFE_* macros: data review needed

2017-07-17 Thread Gregory Szorc
On Mon, Jul 17, 2017 at 10:01 AM, Kris Maglione 
wrote:

> On Mon, Jul 17, 2017 at 09:52:55AM -0700, Bobby Holley wrote:
>
>> On Mon, Jul 17, 2017 at 9:42 AM, Benjamin Smedberg
>> wrote:
>>
>>> I don't know really anything about how rust panics get reflected into
>>> crash-data. Who would be the right person to talk to about that?
>>>
>>
>> Rust panics are equivalent to MOZ_CRASHES, and we treat them as such (or
>> at least try to, see bug 1379857).
>>
>> Rust makes it easier to put non-constant things in the crash strings,
>> which can be quite useful (see [1] as an example). They're not used
>> often, and it seems unlikely that the existing use-cases would pose
>> privacy issues, but I don't have a good proposal for enforcing that.
>>
>
> It would be nice if we could add a commit hook that required a data-r tag
> for any changes that add MOZ_CRASH_UNSAFE_PRINTF or a Rust panic with a
> non-static string. I suspect it's something that most people will tend to
> overlook. For the cases that are clearly not data privacy issues, we could
> accept data-r=trivial, or something like that.
>

Phabricator allows you to create rules when certain events occur. You can
react to changes to certain files or when certain content within files
changes.

Phabricator has the concept of a "blocking reviewer." If set, you must have
a review from a "blocking reviewer" before the patch can be accepted
(r+'d). A "blocking reviewer" can be an individual or a group. If a group,
a member of that group must accept the patch.

Putting the two together, Phabricator can be configured to automatically
flag diffs containing changes to MOZ_CRASH_UNSAFE_PRINTF so they set a
"blocking reviewer" from a "data review" group, thus ensuring data review
is conducted before the patch lands. (This also means we could formalize
module review in Phabricator if we wanted. And I predict there will be some
interesting debates about how formal we want the module review policy to be
in a post-Phabricator world and how "blocking reviewer" should be used. But
that's for a different thread.)

Phabricator's features are superior to custom server-side hooks for various
reasons and I look forward to the day when we can define "commit policy" as
part of Phabricator rules instead of custom hooks. Of course, we need all
commits landing via Phabricator/Autoland before we can talk about
abandoning server-side hooks completely. That's in the works for
post-Phabricator. I'm unsure of timetable.
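To make Kris's idea concrete, the check itself is simple to express.
Here is a hypothetical sketch (in Python; not any hook we actually run,
and the data-r-in-commit-message convention is an assumption) of what a
server-side hook or Herald-triggered lint could do with an incoming diff:

  #!/usr/bin/env python
  # Hypothetical sketch: scan a unified diff (e.g. `hg export` output) and
  # reject it if added lines introduce MOZ_CRASH_UNSAFE_PRINTF without a
  # data-r annotation somewhere in the text. Illustrative only.
  import re
  import sys

  def needs_data_review(diff_text):
      # Lines added by the patch start with '+' but not '+++' (file header).
      added = [line for line in diff_text.splitlines()
               if line.startswith('+') and not line.startswith('+++')]
      introduces = any('MOZ_CRASH_UNSAFE_PRINTF' in line for line in added)
      # Assumed convention: "data-r=trivial" or "data-r=<reviewer>" in the
      # commit message satisfies data review.
      acknowledged = re.search(r'\bdata-r=\S+', diff_text) is not None
      return introduces and not acknowledged

  if __name__ == '__main__':
      if needs_data_review(sys.stdin.read()):
          sys.exit('rejected: MOZ_CRASH_UNSAFE_PRINTF added without a data-r tag')

Usage would be something like `hg export tip | python check-data-review.py`.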
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Phabricator Update, July 2017

2017-07-17 Thread Gregory Szorc


> On Jul 15, 2017, at 23:36, Gabriele Svelto  wrote:
> 
>> On 14/07/2017 05:39, Boris Zbarsky wrote:
>> I should note that with GitHub what this means is that you get
>> discussion on the PR that should have gone in the issue, with the result
>> that people following the issue don't see half the relevant discussion.
>> In particular, it's common to go off from "reviewing this line of code"
>> to "raising questions about what the desired behavior is", which is
>> squarely back in issue-land, not code-review land.
>> 
>> Unfortunately, I don't know how to solve that problem without
>> designating a "central point where all discussion happens" and ensuring
>> that anything outside it is mirrored there...
> 
> Yeah, we frequently had that problem with Gaia issues as part of Firefox
> OS. Reviews were done on GitHub's PR so the bugzilla entries were often
> empty (even for bugs with huge, complex patch-sets). To gather any kind
> of information one had to go and check the comments on the PR itself
> which not only was less-than-optimal but made life a lot harder for
> those not directly involved with development.

If the bug is only serving as an anchor to track code review, then the question 
we should be asking is "do we even need a bug."

I've explored this a bit in
https://gregoryszorc.com/blog/2015/01/16/bugzilla-and-the-future-of-firefox-development/.
That post was written 2.5 years ago. If you replace "MozReview" with
"Phabricator," many of the points I made are still relevant. We're just
taking a drastically different path. Also, it was written very early in
MozReview's development and reflected more of a long term workflow we
wanted to enable. It became apparent that many things would not be easily
achievable or were not reasonable at that time. That's why we
dropped/paused many of the more drastic ideas. Making the decision to
decouple review from issue tracking with Phabricator brings many of them
back to the table.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Phabricator Update, July 2017

2017-07-13 Thread Gregory Szorc
On Thu, Jul 13, 2017 at 6:04 PM, Mark Côté  wrote:

> On 2017-07-13 3:54 PM, Randell Jesup wrote:
>
>>> On Wed, Jul 12, 2017 at 11:27 AM, Byron Jones  wrote:
>>>
>>>>> But indeed having also the patches in bugzilla would be good.
>>>>
>>>> no, it would be bad for patches to be duplicated into bugzilla.  we're
>>>> moving from bugzilla/mozreview to phabricator for code review,
>>>> duplicating phabricator reviews back into the old systems seems like a
>>>> step backwards (and i'm not even sure what the value is of doing so).

>>>
>>> I find this a strange argument.  We don't know how successful
>>> Phabricator is going to be.  The last attempt to switch review tools
>>> seems to be getting killed.  I think it's somewhat reasonable to be
>>> skeptical of further review tool changes.
>>>
>>
>> I quite agree...  And I hate the idea of having the data spread across
>> systems (which I also disliked in MozReview, which I tried and it ended
>> up being really bad for some aspects of my ability to land patches).
>>
>
> Mirroring comments from Phabricator to BMO is even more confusing, as we
> end up with forked discussions, where some people reply only on BMO. This
> happens regularly with MozReview, as a quick survey of recently fixed bugs
> shows.  Keeping code review in one place, loosely coupled to issue
> tracking, improves readability of bugs by keeping discussion of issues
> separate from specific solutions.  It is also what newer systems do today
> (e.g. GitHub and the full Phabricator suite), which I mention not as a
> reason in and of itself but to note precedent.  The plan is to move to a
> separate, more modern, and more powerful code-review tool. Forked
> discussions means that the bugs will always have to be consulted for code
> reviews to progress, which undermines the utility of the code-review tool
> itself.  Loose coupling also frees us to try experiments like patches
> without bugs, which has been discussed many times but has been technically
> blocked from implementation.
>

> That said, lightweight linkages between issues and code review are very
> useful.  We're doing some of that, and we're open to iterating more there.
>
> We've been asked to be bold and innovate all across Mozilla, to try new
> things and see if they work.  For a long time, Mozillians have discussed
> modernized workflows, with new tools that also open more avenues to
> automation, but such things have been rarely attempted, and even then in a
> disorganized fashion.  We're finally at a time when we have the ability and
> support to forge ahead, and that's what we're doing.
>


This.

As someone who worked on MozReview, I can say unequivocally that one of the
hardest parts - and I dare say the thing that held MozReview back the most
- was the tight coupling with Bugzilla and the confusion and complexity
that arose from it.

When we deployed MozReview, we intentionally tried to not rock the boat too
hard: we wanted it to be an extension of the Bugzilla-centric workflow that
everyone was familiar with. Was there some boat rocking, yes: that's the
nature of change. But we went out of our way to accommodate familiar
workflows and tried to find the right balance between old and new. This
resulted in obvious awkwardness, like trying to map ReviewBoard's "Ship It"
field to Bugzilla's complicated review flag mechanism. Does a review that
doesn't grant "Ship It" clear the r? flag or leave it? What happens when
someone grants "Ship It" but they don't have permissions to leave review
flags in Bugzilla? How do you convey r-? What about feedback flags? What
happens when someone changes a review flag in Bugzilla? Then you have other
awkwardness, like when you mirror the review comment to Bugzilla and it
loses rich text formatting. Or someone replies to a MozReview comment on
Bugzilla and you can't see that reply on MozReview. Do you disable replying
to comments mirrored from MozReview? That's annoying. Do you disable the
mirroring then? That's annoying too! Bi-directional sync? No way! (If
you've ever implemented bi-directional sync you'll know why.) You just
can't win.

The usability issues stemming from trying to overlay ReviewBoard's and
Bugzilla's models of how the world worked were obvious and frustrating. It
took months to years to iron things out. And we arguably never got it right.

Then there were technical issues. When you did things like post a series of
commits to review on MozReview, this was done as an atomic operation via
a database transaction. It either worked or it didn't. But we had to mirror
those reviews to Bugzilla. Bugzilla didn't have an API to atomically create
multiple attachments/reviews. So not only was the publishing to Bugzilla
slow because of multiple HTTP requests, but it was also non-atomic. When
Bugzilla randomly failed (all HTTP requests randomly fail: it is a property
of networks), the review series in Bugzilla was sometimes left in an

Re: inlining JS code at build time

2017-07-05 Thread Gregory Szorc
On Wed, Jul 5, 2017 at 6:36 PM, Kris Maglione  wrote:

> It's possible to use the preprocessor to join multiple JS files, but we've
> been moving away from that as much as possible recently.
>

It is worth mentioning that we don't like to use the Python preprocessor
because preprocessor syntax makes the file invalid JavaScript, thus
undermining the utility of tools like editors and linters. Before the
movement towards standard syntax, our JavaScript developers couldn't have
nice things like eslint.


>
> With the script precompiler enabled, though, there isn't very much
> per-script overhead for scripts loaded via the subscript loader at startup.
> And if the new code isn't likely to be needed during startup, you can use
> the subscript loader to lazily load it the first time it's needed, rather
> than at startup. So I'd recommend you go one of those two routes.
>
> On Wed, Jul 05, 2017 at 06:30:02PM -0700, zbranie...@mozilla.com wrote:
>
>> I'm working on a new feature which will add a new JS class.
>>
>> I can grow the main file (mozIntl.js) or add it in a separate file
>> (mozIntlLocale.js).
>>
>> For readability, I think it would be nicer if I could add it as a
>> separate file (I like to keep my files under 500 lines), but I don't want
>> us to pay the runtime price of loading the file via JSM or XPCOM interfaces.
>>
>> Is there anything like a RollupJS in our build system? Or is there any
>> plan to add it?
>>
>
> --
> Kris Maglione
> Senior Firefox Add-ons Engineer
> Mozilla Corporation
>
> Of all the preposterous assumptions of humanity over humanity, nothing
> exceeds most of the criticisms made on the habits of the poor by the
> well-housed, well-warmed, and well-fed.
> --Herman Melville
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Stylo bits coming to a Nightly near you

2017-06-21 Thread Gregory Szorc
We're starting to enable the *building* (not enabling) of Stylo in various
platforms in automation. This means that Nightly will start shipping Stylo
bits soon - possibly the next Nightly if things stick.

The roll-out is being done as platforms are ready. Linux 64-bit and Windows
64-bit are the primary targets, with other platforms to follow.
https://bugzilla.mozilla.org/show_bug.cgi?id=stylo-nightly-build tracks.

There are still some toolchain ergonomics issues that we want to address
before building Stylo is enabled by default in local builds. These are
actively being worked on and local builds should re-converge with
automation soon.

If you want to tweak your local build, here are the relevant mozconfig
snippets:

  # build Stylo but don't enable it (what automation is now doing)
  ac_add_options --enable-stylo=build

  # build and enable Stylo
  ac_add_options --enable-stylo

But unless you interact with Stylo, my guess is you'll want to stick with
the defaults because enabling the building of a feature that isn't enabled
by default may only increase your build times while providing little to no
benefits.

As part of this, building Stylo in Firefox is now being promoted from Tier
2 to 1. This involves some wonky version control machinery that knows how
to "synchronize" backouts on autoland to the canonical Stylo repo on
GitHub, all while ensuring file content and diffs line up between the 2
repos. This machinery may be a bit fragile initially. So there's a chance
we'll have to back out building Stylo if there are problems. Hopefully
we'll fall forward and fix bugs as they arise without too much disruption
though.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Profiling nightlies on Mac - what tools are used?

2017-06-20 Thread Gregory Szorc
On Tue, Jun 20, 2017 at 10:28 AM, Ehsan Akhgari 
wrote:

> On 06/20/2017 12:28 PM, Nathan Froyd wrote:
>
>> On Tue, Jun 20, 2017 at 12:19 PM, Ehsan Akhgari 
>> wrote:
>>
>>> On 06/20/2017 08:34 AM, Nathan Froyd wrote:
>>>
>>>> There is some kind of interaction with the underlying machine (see
>>>> comment 104 in said bug, where the binaries perform identically on a
>>>> local machine, but differently on infrastructure), but we haven't
>>>> tracked that down yet.
>>>>
>>> From comment 104 it seems that it is possible to reproduce the slowdown
>>> from the unstripped cross builds locally.  Has anyone profiled one of
>>> these builds comparing them to an unstripped non-cross build to see
>>> where the additional time is being spent?  I couldn't tell from the bug
>>> if this investigation has happened.
>>>
>> My understanding is that the slowdown cannot be reproduced on local
>> developer machines, but can be reproduced on loaner machines from
>> infra.
>>
> Huh.  That's interesting and even more puzzling...
>
>> I don't think anybody has tried profiling on infra to see
>> where time differences are.
>>
> That seems like the obvious next step to investigate to me.  We should
> *really* only talk about stripping builds as the last resort IMO, since we
> have way too many developers using OSX every day...
>

I would argue it is in our best interest to have as little divergence
between Firefox release channels as possible. Today, Nightly is distributed
with symbols and Beta/Release/ESR all ship without symbols. There are other
variations between the build configurations as well. Every variation
between channels increases the risk of introducing bugs or other undesired
behavior and that said behavior won't be detected until weeks after it has
landed on central (we can run CI for these variations, sure, but that's no
substitute for a user base). It should not be a controversial statement to
say that variation between build configurations / channels is not ideal.

In the case of debug symbols, we've historically made the trade-off that
symbols are useful to developers using Nightly, they don't appear to have a
significant downside (other than package/download size), so why not ship
them. We've accepted the risk of this variation in the Nightly channel in
return for not inconveniencing developers. What's happening now is that we
have a new problem caused by debug symbols for cross builds and we are
re-assessing whether the trade-off to ship debug symbols on Nightly is
still justified. If it is, then there may be considerable work to preserve
the status quo. I understand stripping Nightly would be inconvenient and I
personally don't want to do it for this reason. But the flip side is we
converge the configurations for Nightly and Beta, which I argue is a good
thing.

FWIW, I'd like to point out that Chrome Canary (Nightly equivalent) doesn't
ship debug symbols for MacOS (assuming my methodology of running `nm
--debug-syms` is correct). While this is quite possibly due to Chrome
having closed source components, their developers have obviously found a
way to work without debug symbols on shipped builds. I'd like to think that
if Chrome has figured out how to make it work, we can too.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Profiling nightlies on Mac - what tools are used?

2017-06-19 Thread Gregory Szorc
On Mon, Jun 19, 2017 at 7:06 PM, Jeff Muizelaar 
wrote:

> Yes. I use Instruments on Nightly builds extensively. It would really
> be a loss to lose this functionality. I think it's important to weigh
> the performance improvements that we get from easy profiling against
> any advantage we get from stripping the symbols.
>

Instruments supports configuring where the symbols live. So that's a
solvable problem (although it /may/ require a one-time manual config to set
up - not sure if it can be automated).

I'm not sure if Activity Monitor knows how to obtain symbols from a custom
source. So there's a chance an outcome is "sorry, you have to use
Instruments if you want symbols."

The decision to strip Nightly builds does not come lightly. Read bug 1338651
comment 111 and later for the ugly backstory.


>
> On Mon, Jun 19, 2017 at 6:07 PM, Bobby Holley 
> wrote:
> > Instruments is the big one that I'm aware of.
> >
> > On Mon, Jun 19, 2017 at 3:03 PM, Chris Cooper  wrote:
> >
> >> Hey all,
> >>
> >> The build peers are looking to change the way that nightlies are created
> >> on Mac as we switch to cross-compilation. Specifically, we're looking at
> >> stripping the nightlies to avoid an as-of-yet undiagnosed performance
> >> discrepancy vs native builds[1], but also to make the nightly
> configuration
> >> match what we ship on beta/release (stripped).
> >>
> >> Of course, stripping removes the symbols, and while we believe we have a
> >> solution for re-acquiring symbols that works for the Gecko Profiler, we
> >> realize
> >> that people out there may be using other profiling tools.
> >>
> >> If you profile on Mac, now is your chance to speak up. What other
> >> profiling tools do you use that we should be aware of?
> >>
> >> cheers,
> >> --
> >> coop
> >>
> >> 1. https://bugzilla.mozilla.org/show_bug.cgi?id=1338651
> >> ___
> >> dev-platform mailing list
> >> dev-platform@lists.mozilla.org
> >> https://lists.mozilla.org/listinfo/dev-platform
> >>
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Fwd: [tools-tc] TaskCluster Windows Loaners

2017-06-05 Thread Gregory Szorc
(Forwarding for Rob since he's not on these lists and can't post.)

Many of you have said that you want an easier way to reproduce what
automation is doing. Rob's work should make things a bit easier for Windows.

-- Forwarded message --
From: Rob Thijssen 
Date: Tue, May 30, 2017 at 11:04 PM
Subject: [tools-tc] TaskCluster Windows Loaners
To: release , tools-taskcluster <
tools-taskclus...@lists.mozilla.org>


tl;dr: you can now provision your own TaskCluster Windows instances.
Instructions at:
https://wiki.mozilla.org/ReleaseEngineering/How_To/Self_Provision_a_TaskCluster_Windows_Instance

Borrowing Windows instances just got easier for TaskCluster Windows worker
types. If you need to debug something like a failing build or test and you
want to use the same environment configuration we use for TaskCluster
Windows builds and tests, you now can.

What's better, its set up to be self serve. You don't need to ask anyone or
even raise a bug, just go ahead and help yourself. Instructions in the wiki
link above.

The available operating systems are:

   - Microsoft Windows Server 2012 R2
   - Microsoft Windows 10 Enterprise 64 bit, build 1703 (Creators Update)
   - Microsoft Windows 7 Enterprise 32 bit

Please feel free to comment on how we could improve the process further.

Kind regards,
Rob
___
tools-taskcluster mailing list
tools-taskclus...@lists.mozilla.org
https://lists.mozilla.org/listinfo/tools-taskcluster
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving visibility of compiler warnings

2017-05-25 Thread Gregory Szorc
On Thu, May 25, 2017 at 4:11 PM, Eric Rahm <er...@mozilla.com> wrote:

> I think we disable it for local builds because we don't control what
> versions of tools folks use. So clang vFOO might spit out errors we don't
> see in clang vBAR and it would be a huge pain if those failed locally even
> though they'd be fine in automation.
>

Exactly.

When we offer a build mode that uses the exact toolchain that automation is
using and can guarantee that the local environment reasonably approximates
CI, we can enable --enable-warnings-as-errors by default in that
configuration. We'll likely need a flag on `mach build` to disable that
warnings as errors per invocation, as sometimes developers just want to get
code to compile, even if there are warnings. But the default should be to
match CI so there are fewer surprises and errors are caught quicker. Of
course, the more we write Rust the more warnings as errors becomes the
explicit behavior because Rust just flat out refuses to compile problems
that C++ compilers treat as warnings.


>
> On Thu, May 25, 2017 at 4:07 PM, Andrew McCreight <amccrei...@mozilla.com>
> wrote:
>
> > On Thu, May 25, 2017 at 4:03 PM, <gsquel...@mozilla.com> wrote:
> >
> > > On Friday, May 26, 2017 at 6:03:02 AM UTC+12, Chris Peterson wrote:
> > > > On 2017-05-25 5:31 AM, Ehsan Akhgari wrote:
> > > > > On 05/19/2017 02:44 PM, Gregory Szorc wrote:
> > > > >> `mach build` attempts to parse compiler warnings to a persisted
> > > > >> "database." You can view a list of compiler warnings post build
> > > > >> by running `mach warnings-list`. The intent behind this feature
> > > > >> was to make it easier to find and fix compiler warnings. After
> > > > >> all, something out of sight is out of mind.
> > > >
> > > > Given that we treat warnings as errors on CI and most directories
> > > > that opt out of warnings-as-errors are third-party code
> > > > (ALLOW_COMPILER_WARNINGS in moz.build [1]), I don't think we need to
> > > > make the warning list any more visible. A warning regression in
> > > > nearly all first-party code is already treated as an error.
> > > >
> > > > [1]
> > > > https://searchfox.org/mozilla-central/search?q=ALLOW_COMPILER_WARNINGS&case=true&path=moz.build
> > >
> > > Tangentially: "we treat warnings as errors on CI" -- It'd be great if
> > > local builds could have that behavior by default, so we (or just I?)
> > > don't get caught by surprise when warnings-as-errors appear after
> > > landing.
> > >
> > > Or at least: Can I opt-in locally? (I tried 'ac_add_options
> > > --enable-warnings-ar-errors' in mozconfig, but that was not
> > > recognized.)
> > >
> >
> > This has worked for me before:
> > ac_add_options --enable-warnings-as-errors
> >
> >
> > ___
> > > dev-platform mailing list
> > > dev-platform@lists.mozilla.org
> > > https://lists.mozilla.org/listinfo/dev-platform
> > >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


View the history of line ranges on hg.mozilla.org

2017-05-24 Thread Gregory Szorc
Mercurial 4.2 (just deployed to https://hg.mozilla.org/) contains a new
feature for visualizing the history of a subset of a file. Those of you who
perform a lot of code archeology may find it more useful than the classical
annotate/blame UI (which is based on whole files and single revisions and
can therefore become unwieldy for large files and files with long history).

To use the feature, load a view of a whole file (URL paths with
/file/<revision>/<path>). e.g.
https://hg.mozilla.org/mozilla-central/file/37d777d87200/dom/base/nsContentList.cpp.
Then as you hover over lines, click once to set a start line and click
again to set an end line. A hover box should appear presenting links to
follow line history for changes that are "older" and/or "newer" than the
revision you are currently on. That will load a URL like
https://hg.mozilla.org/mozilla-central/log/37d777d87200/dom/base/nsContentList.cpp?patch=&linerange=507:536
(in this case showing the history of the arbitrarily chosen function
nsContentList::NamedItem).

Having used the feature for a few months, I find it to be one of those
things that you didn't realize you wanted. But after you start using it,
you can't live without it because it allows you to efficiently answer a lot
of archeology questions.

Some quick notes:

* If you load an old file revision, you can use this feature to find newer
changes to a range in that file.
* The feature actually follows diff blocks, not lines. (Because tracing
lines/content is a non-exact science, especially from the perspective of
version control where files are just bytes, not rich, structured
documents.) So, if "extra" changes sneak into a block, it could widen the
data being reported. That's why the last/earliest revision is often the
full file content.
* Run `hg serve` to start a local HTTP server and get a similar interface
without going over the Internet.
* The UI could use some polish. If you want to hack on it yourself,
instructions are at [1].
* There is no command-line equivalent yet for the "log of line range" view.
The feature is being considered for Mercurial 4.3.
* This feature was brought to you by a MOSS grant.
* Prefix the "action" component of Mercurial URL with "json-" to get JSON
output. e.g.
https://hg.mozilla.org/mozilla-central/json-log/37d777d87200/dom/base/nsContentList.cpp?patch==507:536
.
* I think it would be rad if DXR and/or Searchfox hyperlinked to the
Mercurial view for history of a function, class, etc :)
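For example, fetching the line-range history above as JSON from a shell
(same revision/path as the earlier URL; any pair should work):

  $ curl -s 'https://hg.mozilla.org/mozilla-central/json-log/37d777d87200/dom/base/nsContentList.cpp?patch=&linerange=507:536' | python -m json.tool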

Your suggestions on how to improve the feature will be used to prioritize
future work upstream. Some changes can even be rolled out to hg.mozilla.org
within minutes. So please get in touch (reply, ping me on IRC, file a
hg.mozilla.org bug, etc) with ideas for improvements.

[1]
https://mozilla-version-control-tools.readthedocs.io/en/latest/hgmo/contributing.html#hacking-the-theming
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Improving visibility of compiler warnings

2017-05-19 Thread Gregory Szorc
On Fri, May 19, 2017 at 1:27 PM, Michael Layzell <mich...@thelayzells.com>
wrote:

> With regard to ALLOW_COMPILER_WARNINGS, I'm strongly of the opinion that we
> should be providing an option to hide all warnings from modules marked as
> ALLOW_COMPILER_WARNINGS now, as they are only in "external" libraries which
> most of us are not working on, and they really clog up my compiler output
> and make me have to scroll up many many pages in order to see the build
> errors which are actually my fault. (see this bug:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1301688). Unfortunately, I
> don't know enough about the build system to write a patch to do this.
>

I like to teach others how to fish:

https://dxr.mozilla.org/mozilla-central/search?q=path%3Amoz.build+ALLOW_COMPILER_WARNINGS&redirect=false

Code to call into external build systems is typically in config/external/.
See e.g. config/external/icu/defs.mozbuild for where compiler flags for ICU
are defined.
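And for reference, opting a directory out of warnings-as-errors is a
one-line moz.build change (a hypothetical sketch, not a real in-tree
file):

  # In the vendored library's moz.build; relaxes -Werror for this
  # directory only.
  ALLOW_COMPILER_WARNINGS = True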



>
> On Fri, May 19, 2017 at 2:59 PM, David Major <dma...@mozilla.com> wrote:
>
> > I'm not sure if this is exactly the same as the ALLOW_COMPILER_WARNINGS
> > that we use for warnings-on-errors, but FWIW as of a couple of months
> > ago I cleaned out the last warning-allowance in our "own" code.
> > ALLOW_COMPILER_WARNINGS usage is now only in external libraries (I'm
> > counting NSS and NSPR as "external" for this purpose since I can't land
> > code to m-c directly).
> >
> > On Fri, May 19, 2017, at 02:55 PM, Kris Maglione wrote:
> > > I've actually been meaning to post about this, with some questions.
> > >
> > > I definitely like that we now print warnings at the end of builds,
> > > since it makes them harder to ignore. But the current amount of
> > > warnings spew at the end of every build is problematic, especially
> > > when a build fails and I need to scroll up several pages to find out
> > > why.
> > >
> > > Can we make some effort to get clean warnings output at the end of
> > > standard builds? A huge chunk of the warnings come from NSS and NSPR,
> > > and should be easily fixable. Most of the rest seem to come from
> > > vendored libraries, and should probably just be whitelisted if we
> > > can't get fixes upstreamed.
> > >
> > > On Fri, May 19, 2017 at 11:44:08AM -0700, Gregory Szorc wrote:
> > > > `mach build` attempts to parse compiler warnings to a persisted
> > > > "database." You can view a list of compiler warnings post build by
> > > > running `mach warnings-list`. The intent behind this feature was to
> > > > make it easier to find and fix compiler warnings. After all,
> > > > something out of sight is out of mind.
> > > >
> > > > There have been a few recent changes to increase the visibility of
> > > > compiler warnings with the expectation being that raising visibility
> > > > will increase the chances of someone addressing them. After all, a
> > > > compiler warning is either a) valid and should be fixed or b)
> > > > invalid and should be ignored. Either way, a compiler warning
> > > > shouldn't exist.
> > > >
> > > > First, `mach build` now prints a list of all parsed compiler
> > > > warnings at the end of the build. More details are in the commit
> > > > message: https://hg.mozilla.org/mozilla-central/rev/4abe611696ab
> > > >
> > > > Second, Perfherder now records the compiler warning count as a
> > > > metric. This means we now have alerts when compiler warnings are
> > > > added or removed, like [1]. And, we can easily see graphs of
> > > > compiler warnings over time, like [2]. The compiler warnings are
> > > > also surfaced in the "performance" tab of build jobs on Treeherder,
> > > > like [3].
> > > >
> > > > The Perfherder alerts and tracking are informational only: nobody
> > > > will be backing you out merely because you added a compiler warning.
> > > > While the possibility of doing that now exists, I view that decision
> > > > as up to the C++ module. I'm not going to advocate for it. So if you
> > > > feel passionately, take it up with them :)
> > > >
> > > > Astute link clickers will note that the count metrics in CI are
> > > > often noisy - commonly fluctuating by a value of 1-2. I suspect
> > > > there are race conditions with interlea

Improving visibility of compiler warnings

2017-05-19 Thread Gregory Szorc
`mach build` attempts to parse compiler warnings to a persisted "database."
You can view a list of compiler warnings post build by running `mach
warnings-list`. The intent behind this feature was to make it easier to
find and fix compiler warnings. After all, something out of sight is out of
mind.

There have been a few recent changes to increase the visibility of compiler
warnings with the expectation being that raising visibility will increase
the chances of someone addressing them. After all, a compiler warning is
either a) valid and should be fixed or b) invalid and should be ignored.
Either way, a compiler warning shouldn't exist.

First, `mach build` now prints a list of all parsed compiler warnings at
the end of the build. More details are in the commit message:
https://hg.mozilla.org/mozilla-central/rev/4abe611696ab

Second, Perfherder now records the compiler warning count as a metric. This
means we now have alerts when compiler warnings are added or removed, like
[1]. And, we can easily see graphs of compiler warnings over time, like
[2]. The compiler warnings are also surfaced in the "performance" tab of
build jobs on Treeherder, like [3].

The Perfherder alerts and tracking are informational only: nobody will be
backing you out merely because you added a compiler warning. While the
possibility of doing that now exists, I view that decision as up to the C++
module. I'm not going to advocate for it. So if you feel passionately, take
it up with them :)

Astute link clickers will note that the count metrics in CI are often noisy
- commonly fluctuating by a value of 1-2. I suspect there are race
conditions with interleaved process output and/or bugs in the compiler
warnings parser/aggregator. Since the values are currently advisory only,
I'm very much taking a "perfect is the enemy of good" attitude and have no
plans to improve the robustness.

[1] https://treeherder.mozilla.org/perf.html#/alerts?id=6700
[2] https://mzl.la/2qFN0me and https://mzl.la/2rAkWR5
[3]
https://treeherder.mozilla.org/#/jobs?repo=autoland&selectedJob=100509284
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Have you run 'mach bootstrap' lately?

2017-05-15 Thread Gregory Szorc
I thought there was a bug on file, but maybe not. I've long thought the
following changes should be made:

* `mach mercurial-setup` should be rolled into `mach bootstrap`
* `mach doctor` should be rolled into `mach bootstrap`
* `mach bootstrap` should remember answers from last time and not prompt on
subsequent runs unless a flag is specified
* `mach bootstrap` should be renamed/aliased to `mach refresh` (or similar)
to reflect that it is no longer limited to initial system setup

And of course there are missing features, such as Git configuration and
offering to create a Git clone.

bootstrap, doctor, and mercurial-setup are all under-loved features. They
tend to not get prioritized very highly. Typically, new features or bug
fixes are implemented when someone feels like scratching an itch. I wish I
could justify spending more time on these things because turnkey optimal
development environments are important. But it's a lot of ongoing work and
there always seems to be something else competing for my time :(

On Mon, May 15, 2017 at 4:59 PM, Ralph Giles  wrote:

> The stand-alone bootstrap.py script actually has a --no-interactive
> option (which answers 'yes' to everything) but the mach wrapper
> doesn't support this.
>
> `mach mercurial-setup` takes an --update-only option. Maybe
> implementing something like that for `mach bootstrap` would help. Or
> calling it something more descriptive like `mach update-deps`. Like
> mercurial-setup, it could still use the bootstrap python module to
> install things. While the two use cases are different, it makes sense
> to share code between an initial development environment setup script
> and one that updates that environment.
>
>  -r
>
> On Mon, May 15, 2017 at 3:39 PM, Ethan Glasser-Camp
>  wrote:
> > Actually, I think my real question is "What is the intended way for
> > developers to keep their development environment up-to-date?" I don't
> think
> > that way should require a developer to answer questions, because the
> > answers presumably haven't changed since the last time they answered
> them.
> > If the intended way is `mach bootstrap`, then I think `mach bootstrap`
> > should have an option to skip the questions[*]. If `mach bootstrap` is
> only
> > intended to run once when setting up a new development environment, then
> > maybe there should be a `mach tune-up` command or something like that.
> >
> > I'm happy to file bugs for whichever is the case, but I'm not sure which
> > one it is.
> >
> > Ethan
> >
> > [*] When using `./mach bootstrap --settings ./mozconfig`, I get: `The
> > bootstrap command does not accept the arguments: --settings ./mozconfig`.
> > When using `./mach --settings ./mozconfig bootstrap`, I get the
> > questions.
> >
> >
> > On Fri, May 12, 2017 at 1:04 PM, Geoffrey Brown 
> wrote:
> >
> >> I'm not sure. I always just answer the prompts and am happy with that.
> >>
> >> There is a --settings option, which sounds like it might be helpful,
> but I
> >> don't have any experience with that.
> >>
> >>  - Geoff
> >>
> >> On Fri, May 12, 2017 at 9:00 AM, Ethan Glasser-Camp <
> >> eglasserc...@mozilla.com> wrote:
> >>
> >>> Is there a way to run it without having to reanswer the configuration
> >>> questions?
> >>>
> >>> Ethan
> >>>
> >>>
> >>
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Have you run 'mach bootstrap' lately?

2017-05-12 Thread Gregory Szorc
On Fri, May 12, 2017 at 2:37 PM, Aaron Klotz  wrote:

> On 5/12/2017 8:29 AM, Ryan VanderMeulen wrote:
>
>> And while we're on that topic, I'll also remind people once again that
>> for Windows MozillaBuild users, you can keep your copy of Mercurial up to
>> date via pip! Simply run |pip install -U mercurial| and you'll have the
>> latest version available - no need to wait for an updated MozillaBuild
>> release.
>>
>>
> Isn't that something that mach bootstrap should be doing?
>

Yes. A patch is in Ryan's review queue to do this.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-18 Thread Gregory Szorc
On Mon, Apr 17, 2017 at 10:36 PM, Steve Fink  wrote:

> On 04/17/2017 08:11 PM, Nicholas Nethercote wrote:
>
>> On Tue, Apr 18, 2017 at 11:34 AM, Ben Kelly  wrote:
>>
>>> I don't object to people writing longer commit messages, but that
>>> information needs to be in the bug.  Our tools today (splinter and
>>> mozreview) don't do that automatically AFAIK.  I think there is an hg
>>> extension you can get to do it with splinter.
>>
>> Yes, 'hg bzexport' defaults to copying the commit message into a bug
>> comment.
>>
>
> Heh. Yes, bzexport will copy the commit message into a comment. 1 point in
> its favor.
>

MozReview does exactly the same thing. But Bugzilla appears to hide
comments with the "mozreview-request" tag by default in the web UI. The
full commit message / attachment description comes across in the HTML
email, however. This seems like behavior that could be revisited if a bug
were filed with a compelling reason for the change.


>
> If you use it to create bugs (via the --new flag), it will use the first
> line of the commit message as the default bug title. Which goes against
> what has been repeatedly requested in this thread, in that the bug title is
> what is wrong, the commit message is what is being done to fix it. 1 point
> against. (Though arguably this is ok for feature request bugs.)
>

FWIW GitHub pull requests have the same problem. There is an inherent UX
design conflict between lowering the barrier to task completion [with
metadata that's "good enough"] versus asking the user to do too much in the
name of being ideal. In both cases, the tools are forcing the existence of
a "named grouping" (via a bug or pull request title/summary) onto
end-users. You can't argue too hard with that: summaries are convenient for
dashboards and tables. My personal opinion is the tools should optimize for
"throwing code over the wall" to get one step closer to landing and
minimize abandonment. So default heuristics for picking the summary feel
like a lesser evil than requiring the user to explicitly enter something.
And this is exactly what `hg bzexport --new` and GitHub's web UI for new
pull requests do by defaulting to the commit message for titles.


>
> (bzexport was also written in the mq era and has some deficiencies when
> using it with a more modern workflow. It's fixable, but it needs some work.)
>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-17 Thread Gregory Szorc
On Mon, Apr 17, 2017 at 5:12 PM, <gsquel...@mozilla.com> wrote:

> On Tuesday, April 18, 2017 at 11:58:11 AM UTC+12, smaug wrote:
> > On 04/18/2017 02:36 AM, Gregory Szorc wrote:
> > > On Mon, Apr 17, 2017 at 4:10 PM, smaug <sm...@welho.com> wrote:
> > >
> > >> On 04/17/2017 06:16 PM, Boris Zbarsky wrote:
> > >>
> > >>> A quick reminder to patch authors and reviewers.
> > >>>
> > >>> Changesets should have commit messages.  The commit message should
> > >>> describe not just the "what" of the change but also the "why".  This
> > >>> is especially true in cases when the "what" is obvious from the diff
> > >>> anyway; for larger changes it makes sense to have a summary of the
> > >>> "what" in the commit message.
> > >>>
> > >>> As a specific example, if your diff is a one-line change that
> > >>> changes a method call argument from "true" to "false", having a
> > >>> commit message that says "change argument to mymethod from true to
> > >>> false" is not very helpful at all.  A good commit message in this
> > >>> situation will at least mention the meaning for the argument.  If
> > >>> that does not make it clear why the change is being made, the commit
> > >>> message should explain the "why".
> > >>>
> > >>> Thank you,
> > >>> Boris
> > >>>
> > >>> P.S.  Yes, this was prompted by a specific changeset I saw.  This
> > >>> changeset had been marked r+, which means neither the patch author
> > >>> nor the reviewer really thought about this problem.
> > >>>
> > >>
> > >>
> > >> And reminder, commit messages should *not* be stories about how you
> > >> ended up with this particular change. They should just tell "what"
> > >> and "why". It seems like using mozreview occasionally leads to
> > >> writing stories (which is totally fine as a bugzilla comment).
> > >>
> > >
> > > I disagree somewhat. As a reviewer, I often appreciate the extra
> > > context if it helps me - the reviewer - or a future me - an
> > > archeologist or patch author - understand the situation better.
> >
> > That is why we have links to the bug. The bug should always be the unit
> > of truth telling why some change was done. Bugs tend to have so much
> > more context about the change than any individual commit message can or
> > should have.
>
> But then some bugs may accumulate hundreds of comments and it becomes
> unreasonable to expect future maintainers to read through them all, when
> the commit descriptions could just present a selectively-useful history of
> the "how we got to this solution".
>

Exactly. As a reviewer, when I see an empty commit message and load the bug
to get more context and see a wall of text, my reaction is to utter an
expletive. But if I see a descriptive commit message and I trust what's
written there, I can avoid loading the bug and just get on with reviewing.
In other words, that commit message just saved me a bunch of time and got
you feedback sooner all without upsetting me. And generally speaking, we
want to optimize for reducing time spent on review because reviewers are a
more limited (and therefore more valuable) resource than patch authors.

Of course, you can't prevent all bug scouring. But I argue it shouldn't be
necessary in the common case. Furthermore, I would expect a good commit
message to direct the reviewer to the bug if it contains useful context.
Along that vein, perhaps we need a way to "flag" a bug/comment so the
reviewing interface can tell any future reviewer "see bug / comment for
important context." That way we don't have to put so much trust in commit
messages (which can have omissions, often unintentionally, possibly
maliciously).


>
> >
> > > If that context prevents the reviewer or a future patch author from
> > > asking "why didn't we do X [and spending a few hours waiting for a
> > > reply or trying to find an answer]" then that context was justified.
> > > I do agree there are frivolous stories that do little more than
> > > entertain (e.g. the first paragraphs of
> > > https://hg.mozilla.org/mozilla-central/rev/6b064f9f6e10) and we
> > > should not be encouraging that style.
> > >
> > >
> > >> Overlong unrelated commit messages just make it harder to read blame.
> > >>
> > >
> > > This is the tail wagging the dog. Please file a bug against the tool
> > > for letting best practices interfere with an important workflow.
> > >
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-17 Thread Gregory Szorc
On Mon, Apr 17, 2017 at 4:10 PM, smaug  wrote:

> On 04/17/2017 06:16 PM, Boris Zbarsky wrote:
>
>> A quick reminder to patch authors and reviewers.
>>
>> Changesets should have commit messages.  The commit message should
>> describe not just the "what" of the change but also the "why".  This is
>> especially
>> true in cases when the "what" is obvious from the diff anyway; for larger
>> changes it makes sense to have a summary of the "what" in the commit
>> message.
>>
>> As a specific example, if your diff is a one-line change that changes a
>> method call argument from "true" to "false", having a commit message that
>> says
>> "change argument to mymethod from true to false" is not very helpful at
>> all.  A good commit message in this situation will at least mention the
>> meaning for the argument.  If that does not make it clear why the change
>> is being made, the commit message should explain the "why".
>>
>> Thank you,
>> Boris
>>
>> P.S.  Yes, this was prompted by a specific changeset I saw.  This
>> changeset had been marked r+, which means neither the patch author nor the
>> reviewer
>> really thought about this problem.
>>
>
>
> And reminder, commit messages should *not* be stories about how you ended
> up with this particular change. They should just tell "what" and "why".
> It seems like using mozreview occasionally leads to writing stories (which is
> totally fine as a bugzilla comment).
>

I disagree somewhat. As a reviewer, I often appreciate the extra context if
it helps me - the reviewer - or a future me - an archeologist or patch
author - understand the situation better. If that context prevents the
reviewer or a future patch author from asking "why didn't we do X [and
spending a few hours waiting for a reply or trying to find an answer]" then
that context was justified. I do agree there are frivolous stories that do
little more than entertain (e.g. the first paragraphs of
https://hg.mozilla.org/mozilla-central/rev/6b064f9f6e10) and we should not
be encouraging that style.


> Overlong unrelated commit messages just make it harder to read blame.
>

This is the tail wagging the dog. Please file a bug against the tool for
letting best practices interfere with an important workflow.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-17 Thread Gregory Szorc
On Mon, Apr 17, 2017 at 8:16 AM, Boris Zbarsky  wrote:

> A quick reminder to patch authors and reviewers.
>
> Changesets should have commit messages.  The commit message should
> describe not just the "what" of the change but also the "why".  This is
> especially true in cases when the "what" is obvious from the diff anyway;
> for larger changes it makes sense to have a summary of the "what" in the
> commit message.
>
> As a specific example, if your diff is a one-line change that changes a
> method call argument from "true" to "false", having a commit message that
> says "change argument to mymethod from true to false" is not very helpful
> at all.  A good commit message in this situation will at least mention the
> meaning for the argument.  If that does not make it clear why the change is
> being made, the commit message should explain the "why".
>
> Thank you,
> Boris
>
> P.S.  Yes, this was prompted by a specific changeset I saw.  This
> changeset had been marked r+, which means neither the patch author not the
> reviewer really thought about this problem.
>

A litmus test I apply when writing commit messages is "can the reviewer [or
any subsequent reader of the changeset] discern enough context from the
commit message alone to conduct review or understand the reason for the
change?" If yes, it is a good commit message. If no, it may* not be a good
commit message. I think it was Richard Newman who first told me this
analogy: commit messages are like fragments of a larger picture - if you
combine all the commit messages together, you construct a story for the
history of the code as if you were reading a good book. A bunch of
fragments like "Bug 123456 - Fix foo" don't construct a meaningful story
and are therefore not very useful.

An explicit goal of commit messages I write is that the reviewer/reader
doesn't need to read anything other than the changeset (like Bugzilla
comments) to conduct review or understand the changeset's context. That's
not to say they won't read those things and/or verify that the content of
the commit message is true. But in cases where the reviewer trusts the
author, the words they've written, and agrees with the premise of the
commit message, it can save a ton of the reviewer's time. Also, when I as a
reviewer see a detailed commit message, I can assess the author's
comprehension of the subject, and this helps me apply varying scrutiny to
different aspects of the change, as appropriate. I can also interpret the
commit message as a thesis statement or hypothesis and then gauge whether
the code changes actually deliver it. If the commit message and code aren't
in agreement, it's a good sign extra scrutiny is needed. From my
experience, the extra context in commit messages has prevented tons of bugs
from making it past review.

* Sometimes there's just too much context and you need to reference things
external to the changeset.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Changes to Firefox repositories on hg.mozilla.org

2017-04-07 Thread Gregory Szorc
I've seen a few follow-up questions on IRC and in bug reports and would
like to clarify a few points, inline and below.

On Thu, Apr 6, 2017 at 4:13 PM, Gregory Szorc <g...@mozilla.com> wrote:

> This message applies to anyone who clones a Firefox repository from
> hg.mozilla.org. *Failure to read this message may result in degraded
> Mercurial performance.* If you use git-cinnabar or use the mozilla-unified
> repository exclusively, you are likely shielded from degraded performance
> and can stop reading.
>
> Some changes were made to the hosting of high-volume Firefox repositories
> on hg.mozilla.org in the past week. The full list of repositories
> impacted so far is available at https://hg.cdn.mozilla.net/. Client-side
> operations against repositories cloned from hg.mozilla.org may be slower
> until a one-time operation is performed. Once complete, operations against
> hg.mozilla.org should be *faster* than they were before.
>
> tldr
> ----
>
> To ensure your Firefox repository clone runs at peak performance:
>
> 1) Upgrade to the latest Mercurial release (currently 4.1.2) -
> instructions at [1].
>
> 2a) Run `hg debugupgraderepo --optimize redeltaparent --optimize
> redeltamultibase --run`. Learn more at [2]. This command will likely take a
> few hours to run. But your existing repo will function just as it did
> before, just faster. The command keeps a backup of your existing data and
> doesn't modify your repo until the last few milliseconds of execution.
>

If you just run `hg debugupgraderepo`, it tells you which changes it would
perform. If the list is empty and your .hg/store/00manifest.d file is
<300MB, the repo is likely optimally stored already.
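
If you want to script that check, here is a rough sketch (it pokes at
on-disk internals I'm assuming stay stable, not an official Mercurial API):

    import os

    # Rough sketch, not an official Mercurial API: inspect the repo's
    # requirements file and the size of the manifest revlog data file.
    with open(".hg/requires") as f:
        requires = f.read().split()

    manifest_mb = os.path.getsize(".hg/store/00manifest.d") / (1024.0 * 1024.0)

    print("generaldelta: %s" % ("generaldelta" in requires))
    print("00manifest.d: %.0f MB (<300 suggests optimal storage)" % manifest_mb)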


>
> 2b) Create a fresh clone from hg.mozilla.org using Mercurial 4.1 then
> push/pull data from your existing clone to it. See [3] for specific
> instructions.
>

You only need to do either 2a or 2b.

Doing a clone will be faster than an in-place upgrade assuming it doesn't
take you >1hr to download ~1 GB from the Internet. If you do the clone
method, do read [3] and note the command to adjust the phases/publishing
setting when pulling changesets from your old repo into the new one. If you
make a mistake [3] also says how to correct it.


>
> Technical Changes
> -----------------
>
> The main Firefox repositories on hg.mozilla.org were using a legacy
> storage format with known deficiencies. The repositories have been upgraded
> to use a modern storage format (generaldelta). This storage format results
> in smaller repositories (300-500 MB on-disk reduction is common) and faster
> operations against optimally encoded repositories.
>
> The main Firefox repositories on hg.mozilla.org have not only had their
> storage formats upgraded, they have also been optimized so data is stored
> within as efficiently as possible. This is effectively the Mercurial
> equivalent of `git repack -a -f`.
>
> While your local clone may be using generaldelta (run `hg
> debugupgraderepo` to see if it is already present), the deltas may not be
> optimal and this can result in repo bloat and slow operations. A re-clone
> or one-time data upgrade with certain optimizations (using the `hg
> debugupgraderepo` described in the tldr section) will align your local
> clone with hg.mozilla.org and result in optimal performance.
>
> Performance Implications
> ------------------------
>
> If 2 Mercurial peers (say your local clone and hg.mozilla.org) are using
> different repository storage formats, one side has to do work to normalize
> exchanged data so it can be applied. For 95% of repositories, this data
> normalization is fast and nobody likely notices. But because the Firefox
> repository is so large, it can add noticeable to extreme overhead to
> operations.
>
> With the Firefox repositories on hg.mozilla.org upgraded to generaldelta,
> the following can occur:
>
> * A Mercurial 3.7+ client with a generaldelta repo will essentially stream
> data from the server to local disk with minimal processing. (Read: this is
> ideal and fastest.)
>
> * A Mercurial 3.5+ client with a non-generaldelta repo will have to
> heavily process incoming data so it is compatible with local storage. This
> will slow down operations like `hg pull` in proportion to the amount of
> incoming data. The slowdown will be less pronounced on Mercurial 4.1 due to
> performance improvements in Mercurial.
>
> * A Mercurial 3.4 or older client isn't capable of receiving the server's
> native storage format over the wire protocol so the *server* has to
> normalize things to a format the client can understand. This slows down `hg
> clone` and `hg pull` operations drastically (45+ minute clone times
> regardless of network speed) and can create load problems on the server.
> (This is why we waited a while before performing this server-side change
> and is why we'll have to lock out legacy clients if this load becomes a
> problem.)

Changes to Firefox repositories on hg.mozilla.org

2017-04-06 Thread Gregory Szorc
This message applies to anyone who clones a Firefox repository from
hg.mozilla.org. *Failure to read this message may result in degraded
Mercurial performance.* If you use git-cinnabar or use the mozilla-unified
repository exclusively, you are likely shielded from degraded performance
and can stop reading.

Some changes were made to the hosting of high-volume Firefox repositories
on hg.mozilla.org in the past week. The full list of repositories impacted
so far is available at https://hg.cdn.mozilla.net/. Client-side operations
against repositories cloned from hg.mozilla.org may be slower until a
one-time operation is performed. Once complete, operations against
hg.mozilla.org should be *faster* than they were before.

tldr
----

To ensure your Firefox repository clone runs at peak performance:

1) Upgrade to the latest Mercurial release (currently 4.1.2) - instructions
at [1].

2a) Run `hg debugupgraderepo --optimize redeltaparent --optimize
redeltamultibase --run`. Learn more at [2]. This command will likely take a
few hours to run. But your existing repo will function just as it did
before, just faster. The command keeps a backup of your existing data and
doesn't modify your repo until the last few milliseconds of execution.

2b) Create a fresh clone from hg.mozilla.org using Mercurial 4.1 then
push/pull data from your existing clone to it. See [3] for specific
instructions.

Technical Changes
-----------------

The main Firefox repositories on hg.mozilla.org were using a legacy storage
format with known deficiencies. The repositories have been upgraded to use
a modern storage format (generaldelta). This storage format results in
smaller repositories (300-500 MB on-disk reduction is common) and faster
operations against optimally encoded repositories.

The main Firefox repositories on hg.mozilla.org have not only had their
storage formats upgraded, they have also been optimized so data is stored
within as efficiently as possible. This is effectively the Mercurial
equivalent of `git repack -a -f`.

While your local clone may be using generaldelta (run `hg debugupgraderepo`
to see if it is already present), the deltas may not be optimal and this
can result in repo bloat and slow operations. A re-clone or one-time data
upgrade with certain optimizations (using the `hg debugupgraderepo`
described in the tldr section) will align your local clone with
hg.mozilla.org and result in optimal performance.

Performance Implications
------------------------

If 2 Mercurial peers (say your local clone and hg.mozilla.org) are using
different repository storage formats, one side has to do work to normalize
exchanged data so it can be applied. For 95% of repositories, this data
normalization is fast and nobody likely notices. But because the Firefox
repository is so large, it can add noticeable to extreme overhead to
operations.

With the Firefox repositories on hg.mozilla.org upgraded to generaldelta,
the following can occur:

* A Mercurial 3.7+ client with a generaldelta repo will essentially stream
data from the server to local disk with minimal processing. (Read: this is
ideal and fastest.)

* A Mercurial 3.5+ client with a non-generaldelta repo will have to heavily
process incoming data so it is compatible with local storage. This will
slow down operations like `hg pull` in proportion to the amount of incoming
data. The slowdown will be less pronounced on Mercurial 4.1 due to
performance improvements in Mercurial.

* A Mercurial 3.4 or older client isn't capable of receiving the server's
native storage format over the wire protocol so the *server* has to
normalize things to a format the client can understand. This slows down `hg
clone` and `hg pull` operations drastically (45+ minute clone times
regardless of network speed) and can create load problems on the server.
(This is why we waited a while before performing this server-side change
and is why we'll have to lock out legacy clients if this load becomes a
problem.)

In addition, a Mercurial 4.1+ client will use zstandard instead of zlib for
compression, resulting in faster compression/decompression and fewer bytes
transferred. This translates to faster operations against hg.mozilla.org.
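
To illustrate the size difference, here is a rough sketch using the
third-party python-zstandard package (an assumption for this example;
Mercurial ships its own bindings):

    import zlib
    import zstandard  # third-party: pip install zstandard

    # Compare compressed sizes of some repository data. Real wire-protocol
    # numbers depend on the data, but zstd typically wins on both size and
    # speed at comparable levels.
    data = open(".hg/store/00manifest.d", "rb").read()

    print("zlib level 6: %d bytes" % len(zlib.compress(data, 6)))
    print("zstd level 3: %d bytes"
          % len(zstandard.ZstdCompressor(level=3).compress(data)))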

Why We Made This Change
-----------------------

Many of you may be asking: "since this change slows clients down, why make
it?" Good question. The viable choices were between doing nothing and not
allowing peak performance or making a 1-time change that enabled faster
operations while temporarily inconveniencing (to various extents) existing
clones and their users (namely Firefox developers). In other words, it was
about letting the past and legacy software continue to hinder us or to
break free and move to the future.

With Firefox and Mozilla, we have to play the long game. And, we need to
enable developers to move fast. This includes things like Firefox's
automation. This change will enable VCS interactions in automation to
complete faster which will reduce end-to-end times 

Re: windows build anti-virus exclusion list?

2017-03-29 Thread Gregory Szorc
On Sun, Mar 26, 2017 at 11:56 PM, Jean-Yves Avenard 
wrote:

> Hi.
>
> I have received the new Dell XPS 15 9560 and got very puzzled as to why
> compiling central was so slow on this machine.
> This is comparing against a Gigabyte Aero 14 with a gen 6 intel CPU
> (2.6Ghz i7-6600HQ) vs Dell's 7th gen (2.8Ghz i7-7700HQ)
> On the Aero 14, compiling central takes 24 minutes. On the XPS 15, first
> go 38 minutes.
> The XPS 15 came with McAfee anti-virus and Windows Defender disabled. An
> exclusion list made almost no difference. Disabling McAfee entirely: the
> time dropped to 28 minutes. Uninstalling McAfee completely and enabling
> Windows Defender with an exclusion list as mentioned in the first post:
> 26.5 minutes. Disabling Windows Defender entirely (not just an exclusion
> list): 25 minutes. Interestingly, on the Aero, disabling Windows Defender
> or having just an exclusion list made almost no difference in compilation
> time. I can't explain the reason. Maybe because big brother is watching all
> the time!
>

The way that A/V scanning typically works on Windows is that the A/V
software inserts a "file system filter driver" into the kernel. This driver
sees pretty much all I/O operations and has the opportunity to change their
behavior, delay their fulfillment (by calling out into a blocking API in
userland), etc.

The file system filter driver for A/V is always installed. Exclusion list
processing needs to be handled by that driver. So just having the driver
installed means there is some overhead. If the exclusion list processing
code is slow, this will slow down I/O, even if a path is in the exclusion
list. From your numbers, I suspect McAfee may have a less optimal exclusion
list processor than Microsoft. That wouldn't surprise me.

You can see which file system filters are loaded by running `fltmc.exe
filters`. Note: you'll have to do this from a command prompt with
administrator access. You'll likely have to plug the names into a search
engine to figure out what they do.

I'm not sure about McAfee, but Windows Defender scans files during the
CloseHandle() API. But only if the file was written or appended to. This
scanning makes CloseHandle() go from ~1us to ~1ms. When you are writing
thousands of files, this scanning can add up! FWIW, Mercurial mitigates
this by having a thread pool that just sits around closing file
descriptors. This change made fresh `hg update` operations on Windows >60s
faster IIRC. I recommend using Process Monitor to record system call times.
That's how I discovered that Windows Defender was mucking about with
CloseHandle(). If you find system calls that other A/V (like McAfee) is
mucking with, we could potentially mitigate their impact in the build
system similar to what Mercurial has done.
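
To illustrate the pattern, here is a minimal sketch (not Mercurial's actual
implementation):

    from concurrent.futures import ThreadPoolExecutor

    # Minimal sketch of a background "closer" pool: hand freshly written
    # file objects to worker threads so A/V-inflated CloseHandle() latency
    # (~1ms per file) stays off the hot path.
    closer = ThreadPoolExecutor(max_workers=4)

    def write_file(path, data):
        f = open(path, "wb")
        f.write(data)
        closer.submit(f.close)  # defer the expensive close

    for i in range(1000):
        write_file("out%d.tmp" % i, b"example content")

    closer.shutdown(wait=True)  # make sure every handle is closed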


> After following the instructions listed there:
> http://www.ultrabookreview.com/14875-fix-throttling-xps-15/
> compilation time dropped to 23.8 minutes. The main factor was adding
> thermal pads to the MOSFETs.
> Undervolting the CPU by 125mV added 50s of compilation time, but dropped
> the processor temperature by 10C (max 78C vs 68C) and my guess is it will
> also add quite a lot of battery life.
>

I'll be blunt: you should not be performing full builds on a laptop because
of issues like thermal throttling. Laptops are OK for limited uses (like
traveling). But if you are paid to hack on Gecko, Mozilla should buy you a
desktop. Firefox Desktop developers (who can use artifact builds) can get
away with a laptop for building.


> So if you're in a hurry, you may want to try disabling Windows Defender
> completely.
> FWIW: on those same machines running Linux Ubuntu 16.10;Aero 14: 14
> minutesXPS 15: 13 minutes.That's out of the box, no tweaks of any kinds.
> JY
> ---
> If it ain't broken, please fix it
>
> On Fri, Mar 17, 2017 at 4:26 AM +0100, "Ben Kelly" 
> wrote:
>
> Hi all,
>
> I'm trying to configure my new windows build machine and noticed that
> builds were still somewhat slow.  I did:
>
> 1) Put it in high performance power profile
> 2) Made sure my mozilla-central dir was not being indexed for search
> 3) Excluded my mozilla-central directory from windows defender
>
> Watching Task Manager during a build, though, I still saw MsMpEng.exe
> (antivirus) running during the build.
>
> I ended up adding some very broad exclusions to get this down close to
> zero.  I am now excluding:
>
>   - mozilla-central checkout
>   - mozilla-build install dir
>   - visual studio install dir
>   - /users/bkelly/appdata/local/temp
>   - /users/bkelly (because temp dir was not enough)
>
> I'd like to narrow this down a bit.  Does anyone have a better list of
> things to exclude from virus scanning for our build process?
>
> Thanks.
>
> Ben
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>

Better download security through browsers

2017-03-24 Thread Gregory Szorc
I recently reinstalled Windows 10 on one of my machines. This involved
visiting various web sites and downloading lots of software.

It is pretty common for software publishers to publish hashes or
cryptographic signatures of software so the downloaded software can be
verified. (Often times the download is performed through a CDN, mirroring
network, etc and you may not have full trust in the server operator.)

Unless you know how to practice safe security, you probably don't bother
verifying downloaded files match the signatures authors have provided.
Furthermore, many sites redundantly write documentation for how to verify
the integrity of downloads. This feels sub-optimal.

This got me thinking: why doesn't the user agent get involved to help
provide better download security? What my (not a web standard spec author)
brain came up with is standardized metadata in the HTML for the download
link (probably an <a>) that defines file integrity information. When the
user agent downloads that file, it automatically verifies file integrity
and fails the download or pops up a big warning box, etc. if things don't
check out. In other words, this mechanism would extend the trust anchor in
the source web site (likely via a trusted x509 cert) to file downloads.
This would provide additional security over (optional) x509 cert validation
of the download server alone. Having the integrity metadata baked into the
origin site is important: you can't trust the HTTP response from the
download server because it may be from an untrusted server.
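
For illustration, here is roughly the manual ritual a user agent would
automate (file name and digest are hypothetical):

    import hashlib

    # Hypothetical values: in the proposal, the digest would come from
    # metadata on the origin site, not from the download server.
    path = "installer.exe"
    expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    if h.hexdigest() != expected:
        raise SystemExit("integrity check failed; discard the download")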

Having such a feature would also improve the web experience. How many times
have you downloaded a corrupted file? Advanced user agents (like browsers)
could keep telemetry on how often downloads fail integrity checks. This
could be
used to identify buggy proxies, malicious ISPs rewriting content, etc.

I was curious if this enhancement to the web platform has ever been
considered and/or if it is something Mozilla would consider pushing.

gps
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Gregory Szorc
On Thu, Mar 23, 2017 at 9:10 PM, Jeff Muizelaar <jmuizel...@mozilla.com>
wrote:

> I have a Ryzen 7 1800X and it does Windows clobber builds in ~20min
> (3 min of that is configure, which seems higher than what I've seen on
> other machines).


Make sure your power settings are aggressive. Configure and its single-core
usage is where Xeons and their conservative clocking really slowed down
compared to consumer CPUs (bug 1323106). Also, configure time can vary
significantly depending on page cache hits. So please run multiple times.

On Windows, I measure `mach configure` separately from `mach build` because
configure on Windows is just so slow and skews results. For Gecko
developers, I feel we want to optimize for compile time, so I tend to give
less weight to configure performance.


> This compares pretty favorably to the Lenovo p710
> machines that people are getting which do 18min clobber builds and
> cost more than twice the price.
>

I assume this is Windows and VS2015?

FWIW, I've been very interested in getting my hands on a Ryzen. I wouldn't
at all be surprised if the Ryzens offer better value than dual socket
Xeons. The big question is whether they are unquestionably better. For some
people (like remote employees who don't have access to an icecream
cluster), you can probably justify the extreme cost of a dual socket Xeon
over a Ryzen, even if the difference is only like 20%. Of course, the
counterargument is you can probably buy 2 Ryzen machines in place of a dual
socket Xeon. The introduction of Ryzen has changed the landscape and the
calculus that determines what hardware engineers should have. Before I
disappeared for ~1 month, I was working with IT and management to define an
optimal hardware load out for Firefox engineers. I need to resume that work
and fully evaluate Ryzen...



>
> -Jeff
>
> On Thu, Mar 23, 2017 at 7:51 PM, Jeff Gilbert <jgilb...@mozilla.com>
> wrote:
> > They're basically out of stock now, but if you can find them, old
> > refurbished 2x Intel Xeon E5-2670 (2.6GHz Eight Core) machines were
> > bottoming out under $1000/ea. It happily does GCC builds in 8m, and I
> > have clang builds down to 5.5. As the v2s leave warranty, similar
> > machines may hit the market again.
> >
> > I'm interested to find out how the new Ryzen chips do. It should fit
> > their niche well. I have one at home now, so I'll test when I get a
> > chance.
> >
> > On Wed, Jul 6, 2016 at 12:06 PM, Trevor Saunders
> > <tbsaunde+mozi...@tbsaunde.org> wrote:
> >> On Tue, Jul 05, 2016 at 04:42:09PM -0700, Gregory Szorc wrote:
> >>> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles <gi...@mozilla.com> wrote:
> >>>
> >>> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc <g...@mozilla.com>
> wrote:
> >>> >
> >>> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
> >>> >
> >>> > 24s here. So faster link times and significantly faster clobber
> times. I'm
> >>> > sold!
> >>> >
> >>> > Any motherboard recommendations? If we want developers to use
> machines
> >>> > like this, maintaining a current config in ServiceNow would probably
> >>> > help.
> >>>
> >>>
> >>> Until the ServiceNow catalog is updated...
> >>>
> >>> The Lenovo ThinkStation P710 is a good starting point (
> >>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
> >>> From the default config:
> >>>
> >>> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
> >>> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
> >>> * Under "Non-RAID Hard Drives" select whatever works for you. I
> recommend a
> >>> 512 GB SSD as the primary HD. Throw in more drives if you need them.
> >>>
> >>> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
> >>> (plus/minus a few hundred depending on configuration specifics).
> >>>
> >>> FWIW, I priced out similar specs for a HP Z640 and the markup on the
> CPUs
> >>> is absurd (costs >$2000 more when fully configured). Lenovo's
> >>> markup/pricing seems reasonable by comparison. Although I'm sure
> someone
> >>> somewhere will sell the same thing for cheaper.
> >>>
> >>> If you don't need the dual socket Xeons, go for an i7-6700K at the
> least. I
> >>> got the
> >>> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
> >>> a few months ago and like it. A

Re: Please do NOT hand-edit web platform test MANIFEST.json files

2017-03-22 Thread Gregory Szorc
On Tue, Mar 21, 2017 at 7:44 PM, Boris Zbarsky  wrote:

> On 3/21/17 6:41 PM, Jeff Gilbert wrote:
>
>> JSON allows comments if all the JSON processors we use handle comments. :)
>>
>
> JSON.parse in JS does not.
>
> The Python "json" module does not as far as I can tell.
>
> What JSON processors are you thinking of?
>
> -Boris
>
> P.S.  The Python "json" module is most relevant here, since it's the thing
> actually being used to deal with MANIFEST.json.


Fun fact: lots of JSON documents also evaluate as Python data structures.
So if you feed a JSON document to eval() (or prepend "foo = " and use
exec()), you can magically evaluate it into a Python variable. Of course,
eval() is a security concern. But people blindly execute code in
mozilla-central (like the build system) all the time. So perhaps this is
tolerable.
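
A sketch of the trick (with the caveat that JSON's true/false/null literals
are not valid Python, so not every document qualifies):

    import json

    text = '{"files": {"a.html": 1, "b.html": 2}, "version": 5}'

    # This document avoids true/false/null, so it is also a Python literal.
    assert eval(text) == json.loads(text)
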
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The future of commit access policy for core Firefox

2017-03-21 Thread Gregory Szorc
On Thu, Mar 9, 2017 at 1:53 PM, Mike Connor  wrote:

> (please direct followups to dev-planning, cross-posting to governance,
> firefox-dev, dev-platform)
>
>
> Nearly 19 years after the creation of the Mozilla Project, commit access
> remains essentially the same as it has always been.  We've evolved the
> vouching process a number of times, CVS has long since been replaced by
> Mercurial & others, and we've taken some positive steps in terms of
> securing the commit process.  And yet we've never touched the core idea of
> granting developers direct commit access to our most important
> repositories.  After a large number of discussions since taking ownership
> over commit policy, I believe it is time for Mozilla to change that
> practice.
>
> Before I get into the meat of the current proposal, I would like to
> outline a set of key goals for any change we make.  These goals have been
> informed by a set of stakeholders from across the project including the
> engineering, security, release and QA teams.  It's inevitable that any
> significant change will disrupt longstanding workflows.  As a result, it is
> critical that we are all aligned on the goals of the change.
>
>
> I've identified the following goals as critical for a responsible commit
> access policy:
>
>- Compromising a single individual's credentials must not be
>sufficient to land malicious code into our products.
>- Two-factor auth must be a requirement for all users approving or
>pushing a change.
>- The change that gets pushed must be the same change that was
>approved.
>- Broken commits must be rejected automatically as a part of the
>commit process.
>
>
> In order to achieve these goals, I propose that we commit to making the
> following changes to all Firefox product repositories:
>
>- Direct commit access to repositories will be strictly limited to
>sheriffs and a subset of release engineering.
>   - Any direct commits by these individuals will be limited to fixing
>   bustage that automation misses and handling branch merges.
>- All other changes will go through an autoland-based workflow.
>   - Developers commit to a staging repository, with scripting that
>   connects the changeset to a Bugzilla attachment, and integrates with 
> review
>   flags.
>   - Reviewers and any other approvers interact with the changeset as
>   today (including ReviewBoard if preferred), with Bugzilla flags as the
>   canonical source of truth.
>   - Upon approval, the changeset will be pushed into autoland.
>   - If the push is successful, the change is merged to
>   mozilla-central, and the bug updated.
>
> I know this is a major change in practice from how we currently operate,
> and my ask is that we work together to understand the impact and concerns.
> If you find yourself disagreeing with the goals, let's have that discussion
> instead of arguing about the solution.  If you agree with the goals, but
> not the solution, I'd love to hear alternative ideas for how we can achieve
> the outcomes outlined above.
>

(Responding to several topics from the top post because I don't want to
spam with multiple replies.)

I think a lot of people are conflating "push access" with "ability to land
something." The distinction is that the former grants you privileges to `hg
push` directly to repos on hg.mozilla.org, and the latter allows delegation
of this ability. It is easy to conflate them because today "push access"
(level 3) gives you permission to trigger the landing (via the autoland
service via MozReview). But this is only because it was convenient to
implement this way.

I want to explicitly state that we're moving towards N=~10 people having
raw push access to the Firefox repos, with the majority of landings
occurring via autoland (via Conduit via MozReview/Bugzilla). This reduces
our attack surface area (less exposure to compromised SSH keys),
establishes better auditing (history of change will be on
Bugzilla/MozReview and not on a random local machine), eliminates potential
for bugs or variance with incorrect rebasing done on local machines, and
allows us to do extra things as part of the landing/rebase, such as
automatically reformatting code to match unified code styles, doing binding
generation, etc.

They say all problems in computer science can be solved by another level of
indirection. Deferred landing (autoland) is such an indirection and it
solves a lot of problems. It will be happening. Most of us will lose access
to push directly to the Firefox repos. We won't be losing the ability to
initiate a push, however. For the record, I'm not in favor of making this
change until the tooling is such that it won't be a significant
workflow/productivity regression. This is a reason why it hasn't been made
yet.

A handful have commented on whether a rebase invalidates a previous r+.
This is a difficult topic. Strictly speaking, a rebase invalidates
everything because a 

Re: Pulsebot has been given a major game misconduct penalty

2017-02-03 Thread Gregory Szorc
On Fri, Feb 3, 2017 at 12:52 PM, Emma Humphries  wrote:

>
> Pulsebot was suspended after we found it was spamming 5-digit bmo bugs
> about GitHub issues.
>
> There's a ticket to clean up bmo https://bugzilla.mozilla.org/
> show_bug.cgi?id=1336563 and I'll work with that team to get it cleaned up.
>
> Please reach out to bmo-ad...@mozilla.org so we can work with everyone on
> merges of this sort in the future, because toaster shopping is not fun.
>

Sorry for the spam and pulsebot service disruption, everyone.

As #developers backscroll from ~1958 UTC shows, the pulsebot activity was
not anticipated.

The landing of Servo's history in the Firefox repository was a one-off
event. We'll take measures to ensure this doesn't happen during future
similar events.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please don't abuse "No bug" in commit messages

2017-02-03 Thread Gregory Szorc
On Fri, Feb 3, 2017 at 7:11 AM, Ryan VanderMeulen  wrote:

> A friendly reminder that per the MDN commit rules, the use of "No bug" in
> the commit message is to be used sparingly - in general for minor things
> like whitespace changes/comment fixes/etc where traceability isn't as
> important.
> https://developer.mozilla.org/docs/Mozilla/Developer_guide/Committing_Rules_and_Responsibilities
>
> I've come across uses of "No bug" commits recently where entire upstream
> imports of vendored libraries were done. This is bad for multiple reasons:
> * It makes sheriffing difficult - if the patch is broken and needs backing
> out, where should the comment go? When it gets merged to mozilla-central,
> who gets notified?
>

The commit author and/or whoever pushed the commit. We have email addresses
in the commit message and the pushlog. We should use them. There is a
https://github.com/glandium/pulsebot feature request waiting to be filed.


> * It makes tracking a pain - what happens if that patch ends up needing
> uplift? What about when that patch causes conflicts with another patch
> needing uplift? What if it causes a regression and a blocking bug needs to
> be filed?
>

Then you file a new bug ("Changeset XX ..."). As I argue at
http://gregoryszorc.com/blog/2015/01/16/bugzilla-and-the-future-of-firefox-development/
we've conflated bugs with changesets/landings and I don't think this is
optimal. At the end of the
day, the changeset/commit is the thing that matters because a bug doesn't
directly change the behavior of Firefox. Instead, bugs are convenient
anchor points for {discussion, tracking, review, etc} that just so happen
to exist most of the time because 10+ years ago we tightly coupled our code
review mechanism into Bugzilla, which requires the existence of a bug.


>
> Bugs are cheap. Please use them.
>

They are. But there is still a cost to them. While I agree that things like
imports of upstream code are an abuse of "no bug," I feel strongly that we
shouldn't institute avoidable process overhead, especially for things like
trivial whitespace or comment changes. If we insist on having a bug for
every changeset, then I insist we teach the review/landing process to
create bugs automatically when a bugless changeset is seen so people aren't
burdened with the overhead of doing that. We have most of the technical
pieces in place to support this, including bug 1328351 annotating every
file in the tree with a bug component.
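
The detection half is trivial; a hypothetical sketch (not an actual hook in
use today):

    import re

    BUG_RE = re.compile(r"\bbug\s+\d+", re.IGNORECASE)

    # Hypothetical sketch: a landing service could file a bug automatically
    # whenever a changeset message doesn't reference one.
    def references_bug(message):
        return bool(BUG_RE.search(message))

    assert references_bug("Bug 1328351 - Annotate files with bug components")
    assert not references_bug("No bug - Fix typo in comment; r=me")
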
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Redirecting http://hg.mozilla.org/ to https://

2017-02-01 Thread Gregory Szorc
http://hg.mozilla.org now HTTP 301s to https://hg.mozilla.org/. Please
report any problems against bug 450645 and/or make noise in #vcs on
irc.mozilla.org.
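
A quick way to verify from a client (a Python sketch; http.client doesn't
follow redirects, which is what we want here):

    import http.client

    conn = http.client.HTTPConnection("hg.mozilla.org")
    conn.request("GET", "/")
    resp = conn.getresponse()

    print(resp.status)                 # expect: 301
    print(resp.getheader("Location"))  # expect: https://hg.mozilla.org/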

On Thu, Jan 26, 2017 at 2:17 PM, Gregory Szorc <g...@mozilla.com> wrote:

> It may be surprising, but hg.mozilla.org is still accepting plain text
> connections via http://hg.mozilla.org/ and isn't redirecting them to
> https://hg.mozilla.org/.
>
> On February 1 likely around 0800 PST, all requests to
> http://hg.mozilla.org/ will issue an HTTP 301 Moved Permanently redirect
> to https://hg.mozilla.org/.
>
> If anything breaks as a result of this change, the general opinion is it
> deserves to break because it isn't using secure communications and is
> possibly a security vulnerability. Therefore, unless this change causes
> widespread carnage, it is unlikely to be rolled back.
>
> Please note that a lot of 3rd parties query random content on
> hg.mozilla.org. For example, Curl's widespread mk-ca-bundle.pl script for
> bootstrapping the trusted CA bundle queried http://hg.mozilla.org/ until
> recently [1]. So it is likely this change may break random things outside
> of Mozilla. Again, anything not using https://hg.mozilla.org/ should
> probably be treated as a security vulnerability and fixed ASAP.
>
> For legacy clients only supporting TLS 1.0 (this includes Python 2.6 and
> /usr/bin/python on all versions of OS X - see [2]), hg.mozilla.org still
> supports [marginally secure compared to TLS 1.1+] TLS 1.0 connections and
> will continue to do so for the foreseeable future.
>
> This change is tracked in bug 450645. Please subscribe to stay in the loop
> regarding future changes, such as removing support for TLS 1.0 and not
> accepting plain text http://hg.mozilla.org/ connections at all.
>
> Please send comments to bug 450645 or reply to
> dev-version-control@lists.mozilla.org.
>
> [1] https://github.com/curl/curl/commit/1ad2bdcf110266c33eea70b895cb8c150eeac790
> [2] https://github.com/Homebrew/homebrew-core/issues/3541
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Redirecting http://hg.mozilla.org/ to https://

2017-01-26 Thread Gregory Szorc
It may be surprising, but hg.mozilla.org is still accepting plain text
connections via http://hg.mozilla.org/ and isn't redirecting them to
https://hg.mozilla.org/.

On February 1, likely around 0800 PST, all requests to
http://hg.mozilla.org/ will issue an HTTP 301 Moved Permanently redirect to
https://hg.mozilla.org/.

If anything breaks as a result of this change, the general opinion is it
deserves to break because it isn't using secure communications and is
possibly a security vulnerability. Therefore, unless this change causes
widespread carnage, it is unlikely to be rolled back.

Please note that a lot of 3rd parties query random content on hg.mozilla.org.
For example, Curl's widespread mk-ca-bundle.pl script for bootstrapping the
trusted CA bundle queried http://hg.mozilla.org/ until recently [1]. So it
is likely this change may break random things outside of Mozilla. Again,
anything not using https://hg.mozilla.org/ should probably be treated as a
security vulnerability and fixed ASAP.

For legacy clients only supporting TLS 1.0 (this includes Python 2.6 and
/usr/bin/python on all versions of OS X - see [2]), hg.mozilla.org still
supports [marginally secure compared to TLS 1.1+] TLS 1.0 connections and
will continue to do so for the foreseeable future.
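
If you're unsure which protocol version your client negotiates, here is a
quick Python 3 sketch:

    import socket
    import ssl

    # Report the negotiated TLS protocol version for this client.
    ctx = ssl.create_default_context()
    with socket.create_connection(("hg.mozilla.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="hg.mozilla.org") as tls:
            print(tls.version())  # e.g. "TLSv1.2"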

This change is tracked in bug 450645. Please subscribe to stay in the loop
regarding future changes, such as removing support for TLS 1.0 and not
accepting plain text http://hg.mozilla.org/ connections at all.

Please send comments to bug 450645 or reply to
dev-version-cont...@lists.mozilla.org.

[1]
https://github.com/curl/curl/commit/1ad2bdcf110266c33eea70b895cb8c150eeac790
[2] https://github.com/Homebrew/homebrew-core/issues/3541
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to vendor Servo in mozilla-central

2017-01-03 Thread Gregory Szorc
Over in bug 1322769, we're planning to vendor the contents of the Servo Git
repository into mozilla-central as part of Quantum efforts.

The initial vendoring of the Servo repository is the first milestone of a
larger project that aims to "unify" workflows and automation across the two
projects, allowing both projects to have insights into the other's
automation and allowing development work spanning both projects to be
simpler.

In this first milestone, we will vendor the history of the Servo Git
repository into mozilla-central.

We only get one chance to vendor the history of Servo before it is forever
set in stone in the history of mozilla-central. So I'd prefer to get it
right so we don't have regrets about the approach 10 years from now.
Therefore, if you care about getting it right, your brain and eyeballs on
the proposal would be useful.

The current plan is to rewrite Servo's history to:

* Remove merge commits (only taking the p1 ancestry of the repo; see the
sketch after this list)
* Add something in commit message summary line to indicate commit comes
from Servo
* Add source repo and commit annotations to the commit message
* Reformat some GitHub-flavored markdown to something more sane for plain
text display
* Remove everything related to submodules
* Drop WPT files (there's ~135,000 of them!)
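
To make the first item concrete: "p1 ancestry" means linearizing the DAG by
always following a commit's first parent, dropping whatever the merges
brought in. A toy sketch:

    # Toy sketch of "p1 ancestry": linearize a commit DAG by following
    # only each commit's first parent, dropping merged-in branches.
    def p1_line(parents, head):
        line, node = [], head
        while node is not None:
            line.append(node)
            ps = parents[node]
            node = ps[0] if ps else None
        return line

    # C merges X into B; the p1 walk keeps C, B, A and drops X.
    parents = {"A": (), "B": ("A",), "X": ("A",), "C": ("B", "X")}
    assert p1_line(parents, "C") == ["C", "B", "A"]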

We also tentatively plan to introduce the Servo history as a new root in
the mozilla-central repository so that bisect operations on Firefox/Gecko
don't land in the middle of thousands of Servo-only commits. (I concede
this is an ugly hack to work around limitations of the "bisect" command of
your preferred VCS.)

Unless we prune more from history rewriting, this will add ~8,000 commits
to mozilla-central touching ~5,700 files. The on-wire clone size of the
repository will increase by ~36 MB. On-disk repo size will increase by ~50
MB (not including inode overhead).

Also, I anticipate the techniques and tools used to import Servo's history
and keep it "synchronized" have uses outside of Servo. Our current
mechanisms for keeping upstream/third party projects in sync with
mozilla-central are not well standardized. I'd really like the work to
support Servo to be used for other projects as well. This potentially
includes a future where certain directories in mozilla-central are
effectively developed on GitHub or are using other "non-standard"
workflows. If you think you may benefit from this future work, I'd
appreciate if you could look at the Servo vendoring effort and make sure
things like the annotations tracking the original source of the commit are
sufficient for uses outside of Servo.

Again, it's bug 1322769 if you want to comment or follow along.

gps
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Windows CPU power settings

2016-12-12 Thread Gregory Szorc
I've been assessing new hardware options for Mozilla to issue to
developers. As part of evaluating some dual socket Xeon processors, I found
some unexpected behavior: these processors routinely underclock in Windows
10 unless several cores or are used. Contrast with the behavior of my i7
Skylake processor, which seems to ramp up to full clock pretty rapidly,
even when only a single core is used.

The Windows 10 power settings appear to set the minimum CPU frequency at 5%
or 10% of maximum. When I cranked this up to 100%, artifact build time
dropped from ~170s to ~77s and full build configure dropped from ~165s to
~97s!

If you are a Windows user with Xeons in your desktop, you may want to visit
Control Panel -> Hardware and Sound -> Power Options -> Edit Plan Settings
-> Change advanced power settings -> Processor power management -> Minimum
processor state and crank that up and see what happens. Note: running your
CPU at 100% all the time may impact your power bill!
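
For the command-line inclined, the same tweak can be scripted (a sketch;
the alias names are my assumption, so verify with `powercfg /aliases`
first):

    import subprocess

    # Sketch: raise "Minimum processor state" to 100% for both AC and DC
    # power, then re-apply the active scheme. The aliases (SCHEME_CURRENT,
    # SUB_PROCESSOR, PROCTHROTTLEMIN) are an assumption; verify with
    # `powercfg /aliases` on your machine.
    for flag in ("/setacvalueindex", "/setdcvalueindex"):
        subprocess.check_call(["powercfg", flag, "SCHEME_CURRENT",
                               "SUB_PROCESSOR", "PROCTHROTTLEMIN", "100"])
    subprocess.check_call(["powercfg", "/setactive", "SCHEME_CURRENT"])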

Bug 1323106 has been opened to track improving the ability of the build
system (namely `mach doctor`) to address this. If you would like to
report success/failure of changing power settings or if you know a thing or
two about CPU power management and can provide technical assistance, please
weigh in there.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

