Re: Some changes to how errors are thrown from Web IDL methods

2020-02-10 Thread Nathan Froyd
On Thu, Feb 6, 2020 at 9:12 AM Boris Zbarsky  wrote:

> 3) While ErrorResult::Throw taking just an nsresult still exists, it is
> deprecated and new code should not be adding new calls to it if that can
> be avoided.
>

We are attempting to add a static analysis that blocks new uses of
`NS_NewNamedThread` in bug 1613440 [0].  We'd definitely like to consider
making it general enough that it can serve as a sort of
quasi-[[deprecated]] [1] for other uses in the code base (e.g. the
sandboxing folks would like to incrementally disallow functions from being
called from certain locations as they work to lock down the sandbox), and
this sort of replacement seems like another good application.

If you have other things you'd like this static analysis to be used for,
please file dependencies on bug 1613440.

Thanks,
-Nathan

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1613440
[1] We can't actually use [[deprecated]] / __attribute__((deprecated))
because of their use in third-party code; having the compiler error on uses
of such functions would break the build. [2]
[2]
https://searchfox.org/mozilla-central/rev/3811b11b5773c1dccfe8228bfc7143b10a9a2a99/build/moz.configure/warnings.configure#140-142
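For illustration, a minimal sketch (hypothetical names, not a real Gecko
API) of why a blanket [[deprecated]] plus warnings-as-errors is not
workable: every caller of the attributed function, including vendored
third-party code we do not patch, turns into a hard build failure.

```
// Hypothetical function names; illustration only.
[[deprecated("prefer the supported replacement")]]
void LegacyEntryPoint();

void SomeCaller() {
  LegacyEntryPoint();  // warning: 'LegacyEntryPoint' is deprecated
                       // -> a build-breaking error once warnings are fatal
}
```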


mozilla-central is now on C++17

2019-12-05 Thread Nathan Froyd
Bug 1560664 [0] has stuck on mozilla-central, so all of mozilla-central is
now compiled as C++17.

Most C++17 language features should be usable; whether library features are
fully implemented across all of our supported compilers/standard libraries
is yet to be determined [5].  I will be updating our C++ usage page [4]
shortly.

The feature people will be most excited about is probably "if constexpr"
[1][2], though I have heard expressions of happiness about structured
bindings [3].
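
If you haven't played with either yet, a throwaway sketch (hypothetical
function names) of what they look like:

```
#include <cstdio>
#include <map>
#include <string>
#include <type_traits>

// `if constexpr` discards the untaken branch at compile time, so each
// instantiation only contains code that is valid for T.
template <typename T>
void Describe(const T& aValue) {
  if constexpr (std::is_integral_v<T>) {
    printf("integral: %lld\n", static_cast<long long>(aValue));
  } else {
    printf("not an integer\n");
  }
}

// Structured bindings unpack each key/value pair of the map directly.
void Dump(const std::map<std::string, int>& aCounts) {
  for (const auto& [name, count] : aCounts) {
    printf("%s -> %d\n", name.c_str(), count);
  }
}
```

(std containers appear here purely to keep the snippet self-contained; see
[5] about preferring the Firefox equivalents in real code.)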

As part of this effort, we have upgraded our minimum version requirements
for clang to 5 and for GCC to 7.1.  clang 5 is known to have issues with
some C++17 language features (e.g. inline variables), and clang 6 is known
to miscompile Firefox; bumping to require at least clang 7 in the
not-too-distant future is a definite possibility.

I don't anticipate that we would upgrade to C++20 for at least another year
or so.

Thanks to everybody who contributed patches and reviews for this effort,
especially Marco Castelluccio for resolving some involved issues with
coverage tests.

Happy hacking,
-Nathan

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1560664
[1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0128r1.html
[2] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0292r1.html
[3] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0144r2.pdf
[4]
https://developer.mozilla.org/en-US/docs/Mozilla/Using_CXX_in_Mozilla_code
[5] As usual, please continue to prefer Firefox equivalents for standard
library entities unless there is good reason to do otherwise (e.g.
interoperability with third-party code), as the Firefox equivalents
typically perform better, have improved safety features, and/or integrate
better with other machinery (e.g. leak checking).


Re: Proposal: Replace NS_ASSERTION with MOZ_ASSERT and then remove it.

2019-10-30 Thread Nathan Froyd
On Wed, Oct 30, 2019 at 11:36 AM Tom Ritter  wrote:
>
> I will claim that the most common behavior of developers is to leave
> XPCOM_DEBUG_BREAK alone and not set it to any particular value. I bet most
> people haven't even heard of this or know what it does.
>
> With that env var unset, in Debug mode, NS_ASSERTION will print to stderr
> and otherwise do nothing. In non-Debug mode, it will just do nothing.
>
> Is that the best behavior for this? Should perhaps (most of) these claimed
> assertions really be MOZ_ASSERT? Hence this proposal.

You may be interested in
https://bugzilla.mozilla.org/show_bug.cgi?id=1457813#c5, the links
therein, and the following bug comments for why we have resisted a
wholesale transition.

-Nathan


Re: Passing UniquePtr by value is more expensive than by rref

2019-10-14 Thread Nathan Froyd
On Mon, Oct 14, 2019 at 3:58 AM Henri Sivonen  wrote:
> On Mon, Oct 14, 2019 at 9:05 AM Gerald Squelart  wrote:
> >
> > I'm in the middle of watching Chandler Carruth's CppCon talk "There Are No 
> > Zero-Cost Abstractions" and there's this interesting insight:
> > https://youtu.be/rHIkrotSwcc?t=1041
> >
> > The spoiler is already in the title (sorry!), which is that passing 
> > std::unique_ptr by value is more expensive than passing it by rvalue 
> > reference, even with no exceptions!
> >
> > I wrote the same example using our own mozilla::UniquePtr, and got the same 
> > result: https://godbolt.org/z/-FVMcV (by-value on the left, by-rref on the 
> > right.)
> > So I certainly need to recalibrate my gutfeelometer.
>
> The discussion in the talk about what is needed to fix this strongly
> suggested (without uttering "Rust") that Rust might be getting this
> right. With panic=abort, Rust gets this right (
> https://rust.godbolt.org/z/SZQaAS ) which really makes one appreciate
> both Rust-style move semantics and the explicitly not-committal ABI.

With a little voodoo placement of [[clang::trivial_abi]]
(https://quuxplusone.github.io/blog/2018/05/02/trivial-abi-101/,
https://reviews.llvm.org/D41039) on Pair specializations and UniquePtr
itself, one can make the by-value function look more like what you
might expect, but at the cost (!) of making the rvalue-ref function
look more like the original by-value function,
https://godbolt.org/z/A1wjl8.  I think that's a reasonable tradeoff to
make if we wanted to start using [[clang::trivial_abi]].
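
To make the shape of the comparison concrete, here is a rough sketch of the
two signatures under discussion (made-up types; this is not the godbolt
example itself, and the mozilla/UniquePtr.h include path is assumed):

```
#include <utility>
#include "mozilla/UniquePtr.h"  // assumed include path for UniquePtr/MakeUnique

struct Foo { int mValue = 0; };

// By value: without [[clang::trivial_abi]] the UniquePtr is passed via a
// hidden pointer to a caller-owned temporary and the caller runs the
// destructor, so this is more expensive than it looks.
void TakeByValue(mozilla::UniquePtr<Foo> aFoo) {}

// By rvalue reference: just a pointer in a register, but the caller keeps
// responsibility for the object unless the callee actually moves out of it.
void TakeByRRef(mozilla::UniquePtr<Foo>&& aFoo) {}

void Caller() {
  TakeByValue(mozilla::MakeUnique<Foo>());  // ownership transferred at the call
  auto foo = mozilla::MakeUnique<Foo>();
  TakeByRRef(std::move(foo));               // foo may still own the object afterwards
}
```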

-Nathan


Nika Layzell and Kris Maglione are now XPCOM peers

2019-09-04 Thread Nathan Froyd
It is my pleasure to announce that Nika and Kris are XPCOM peers.

Nika has been doing great work in and around XPIDL: modernizing XPIDL
(Array, yay!), reorganizing the way we access XPIDL metadata at
runtime, and bringing the excitement of XPIDL to Rust.

Kris noticed that Nika was going to become an XPCOM peer and, not
wanting to be left out, volunteered (yes, really).  Kris has worked on
modernizing the component manager and various thread-related
improvements.

Please welcome them to their new roles by sending particularly
difficult reviews their way. :)

Thanks,
-Nathan


[C++] Intent to eliminate: `using namespace std;` at global scope

2019-08-29 Thread Nathan Froyd
Hi all,

In working on upgrading our C++ support to C++17 [1], we've run into
some issues [2] surrounding the newly-introduced `std::byte` [3],
various Microsoft headers that pull in definitions of `byte`, and
conflicts between the two when one has done `using namespace std;`,
particularly at global scope.  Any use of `using namespace $NAME` is
not permitted by the style guide [4].
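
For anyone who hasn't hit it, a compressed stand-in for the conflict (the
real definitions come from a Microsoft SDK header and from <cstddef>; the
snippet fakes both so it is self-contained):

```
typedef unsigned char byte;           // stand-in for the unscoped Windows typedef

namespace std {
enum class byte : unsigned char {};   // stand-in for C++17's std::byte
}

using namespace std;                  // the pattern this post asks people to stop adding

// byte b{};  // error: reference to 'byte' is ambiguous (::byte vs. std::byte)
```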

A quick perusal of our code shows that we have, uh, "many" violations
of the prohibition of `using namespace $NAME` [5].  I do not intend to
boil the ocean and completely rewrite our codebase to eliminate all
such violations.  However, since the use of `using namespace std;` is
relatively less common (~100 files) and is blocking useful work,
eliminating that pattern seems like a reasonable thing to do.

Thus far, it appears that the problematic `using namespace std;`
instances all appear at global scope.  We have a handful of
function-scoped ones that do not appear to be causing problems; if
those are easy to remove in passing, we'll go ahead and remove
function-scoped ones as well.  The intent is to not apply this change
to third-party code unless absolutely necessary; we have various ways
of dealing with the aforementioned issues--if they even come up--in
third-party code.

The work is being tracked in [2].  Please do not add new instances of
`using namespace std;` at global scope, or approve new instances in
patches that you review; when this work is complete, we will ideally
have a lint that checks for this sort of thing automatically.  If you
would like to help with this project, please file blocking bugs
against [2].

Thanks,
-Nathan

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1560664
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1577319
[3] http://eel.is/c++draft/cstddef.syn#lib:byte
[4] https://google.github.io/styleguide/cppguide.html#Namespaces
[5] 
https://searchfox.org/mozilla-central/search?q=using+namespace+.%2B%3B&case=false&regexp=true&path=


Re: Upcoming C++ standards meeting in Cologne

2019-07-30 Thread Nathan Froyd
On Sat, Jul 27, 2019 at 1:42 PM Botond Ballo  wrote:
> If you're interested in some more details about what happened at last
> week's meeting, my blog post about it is now available (also on
> Planet):
>
> https://botondballo.wordpress.com/2019/07/26/trip-report-c-standards-meeting-in-cologne-july-2019/

Thanks for writing this up.  I always enjoy reading these reports.

One grotty low-level question about the new exception proposal.  Your
post states:

"it was observed that since we need to revise the calling convention
as part of this proposal anyways, perhaps we could take the
opportunity to make other improvements to it as well, such as allowing
small objects to be passed in registers, the lack of which is a pretty
unfortunate performance problem today (certainly one we’ve run into at
Mozilla multiple times). That seems intriguing."

How is revising the calling convention a C++ standards committee
issue?  Doesn't that properly belong to the underlying platform (CPU
and/or OS)?

Thanks,
-Nathan


Re: cross-language LTO enabled on nightly for all platforms

2019-07-22 Thread Nathan Froyd
On Mon, Jul 22, 2019 at 11:45 AM Bobby Holley  wrote:
> Can you confirm which types of builds enable this? Does --enable-release turn 
> it on?

If you really want to build this locally, you can add `export
MOZ_LTO=cross` in your mozconfig.  `--enable-release` does not
automatically enable LTO (cross-language or otherwise).

Thanks,
-Nathan


cross-language LTO enabled on nightly for all platforms

2019-07-22 Thread Nathan Froyd
Hi all,

We now have link-time optimization (LTO) between Rust and C++ code
enabled on Nightly for all platforms (bug 1486042 [1]).  There have
been some concerns about potential slowdowns when crossing the C++ <=>
Rust boundary due to non-inlineable function calls, and Stylo needed
to implement some gnarly code copying between C++ and Rust to obtain
good performance.  With cross-language LTO enabled, such concerns and
hacks should become a thing of the past.

It is worth explicitly noting that enabling this feature does not seem
to have made much difference on our performance tests: if you are
doing performance work, you should *not* need to enable this feature.
(Which is a good thing, as it massively increases the amount of time
needed to link libxul...)  The primary benefit at the moment is not
having to implement code on both the C++ and Rust side as Stylo did
and to eliminate concerns that crossing the language boundary induces
a performance issue.

Please note that if you attempt to build your own cross-language
LTO-enabled binary on OS X, your binary will be broken in interesting
ways due to bugs in Xcode.  Work to error out much earlier in that
configuration is happening in bug 1563204 [2].

There were a number of people involved in this effort: Michael
Woerister did the Rust-side work [3].  David Major did the initial
landing for Win64 only [4].  Michael fixed some issues in bindgen that
prevented Win32 from working correctly [5].  And Mike Hommey tied up
loose ends and pushed things over the line while I was away for the
last two weeks.  Thanks to all involved!

-Nathan

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1486042
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1563204
[3] https://github.com/rust-lang/rust/issues/49879
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1512723
[5] https://github.com/rust-lang/rust-bindgen/pull/1558


Re: Coding style  : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-05 Thread Nathan Froyd
On Fri, Jul 5, 2019 at 2:48 AM Jeff Gilbert  wrote:
> It is, however, super poignant to me that uint32_t-indexing-on-x64 is
> pessimal, as that's precisely what our ns* containers (nsTArray) use
> for size, /unlike/ their std::vector counterparts, which will be using
> the more-optimal size_t.

nsTArray uses size_t for indexing since bug 1004098.

-Nathan


Re: Coding style  : `int` vs `intX_t` vs `unsigned/uintX_t`

2019-07-04 Thread Nathan Froyd
The LLVM development list has been having a similar discussion,
started by a proposal to essentially follow the Google style guide:

http://lists.llvm.org/pipermail/llvm-dev/2019-June/132890.html

The initial email has links you can follow for more information.  I
recommend starting here:

https://www.youtube.com/watch?v=yG1OZ69H_-o&feature=youtu.be&t=2249

Both for the "why is unsigned arithmetic problematic at scale"
(spoiler: you can't check for bad things happening automatically) and
an example of "what sort of optimizations are you giving up".

Chandler (the speaker above) has a response that is worth reading
(noting that objections like yours and otherwise are addressed by the
links in the original email):

http://lists.llvm.org/pipermail/llvm-dev/2019-June/133023.html

-Nathan

On Thu, Jul 4, 2019 at 2:03 PM Jeff Gilbert  wrote:
>
> I really, really like unsigned types, to the point of validating and
> casting into unsigned versions for almost all webgl code. It's a huge
> help to have a compile-time constraint that values can't be negative.
> (Also webgl has implicit integer truncation warnings-as-errors, so we
> don't really worry about mixed-signedness)
>
> If we insist on avoiding standard uint types, I'll be writing uint31_t 
> wrappers.
>
> If we're going to recommend against uint types, I would like to see
> specific compelling examples of problems with them, not just prose
> about "many people say" or "maybe missed optimizations".
>
> On Thu, Jul 4, 2019 at 8:11 AM Botond Ballo  wrote:
> >
> > On Thu, Jul 4, 2019 at 7:11 AM Henri Sivonen  wrote:
> > > > Do you happen to know why?  Is this due to worries about underflow or
> > > > odd behavior on subtraction or something?
> > >
> > > I don't _know_, but most like they want to benefit from optimizations
> > > based on overflow being UB.
> >
> > My understanding is yes, that's one of the motivations.
> >
> > Another, as hinted at in Gerald's quote, is that tools like UBSan can
> > diagnose and catch signed overflow because it's undefined behaviour.
> > They can't really do that for unsigned overflow because, since that's
> > defined to wrap, for all the tool knows the code author intended for
> > the overflow and wrapping to occur.
> >
> > Cheers,
> > Botond


crash reporting, inline functions, and you

2019-04-05 Thread Nathan Froyd
TL;DR: We're making some changes to how inlined functions are handled
in our crash reports on non-Windows platforms in bug 524410.  This
change should mostly result in more understandable crash stacks for
code that uses lots of inlining, and shouldn't make things any worse.
Some crash signatures may change as a result.  If you have concerns,
or you happen to have crash stacks that you're curious about whether
they'd change under this new policy, please comment in bug 524410 or
email me.

For the grotty details, read on.

Modern C++/Rust code relies on inlining for efficiency, and modern
compilers have gotten very good at accommodating such code: it's not
unusual for code to feature double-digit levels of inlining (A inlined
into B inlined into C...inlined into J).  A simple Rust function that
looks like:

  slice.into_iter().map(|...| { ... })

and you think of as spanning addresses BEGIN to END, might actually
feature small ranges of instructions from a dozen different functions,
and the compiler will (mostly) faithfully tell you a precise location
for each range.  (Instruction A comes from some iterator code,
instruction B comes from a bit of your closure, instruction C comes
from some inlined function three levels deep inside of your
closure...) Unfortunately, this faithfulness means that in the event
of a crash, the crashing instruction might get attributed to some code
in libstd with no indication of how that relates to the original code.

Bug 524410, supporting inline functions in the symbol dumper, has been
open for a decade now.  The idea is that compilers (on Unix platforms,
not entirely sure this is true on Windows) will not only give you
precise information about what function particular instruction ranges
come from, they will also give you information about the chain of
inlining that resulted in those particular instructions.  The symbol
dumper ought to be able to emit enough information to reconstruct
"frames" from inlined functions.  That is, if you have:

```
addr0 ---+
...  | A
addr1 ---+
addr2 ---+   ---+ --+
...  | B|   |
addr3 ---+  | operator+ |
addr4 ---+  |   |
...  | C|   |
addr5 ---+   ---+   | DoTheThing
addr6 ---+   ---+   |
...  | D|   |
addr7 ---+  | operator[]|
addr8 ---+  |   |
...  | E|   |
addr9 ---+   ---+ --+
...
```

(apologies if the ASCII art doesn't come through), and you're crashing
at `addr6`, the status quo is that you know you crashed in `D`, and
the next frame is whoever called `DoTheThing`.  The ideal state of
affairs is that you're told that you crashed in `D`, called from
`operator[]`, called from `DoTheThing`, and so forth.

We're not there yet.  Changing the symbol dumper to emit this
information requires changing the symbol file format, which requires
some coordinated updates to several pieces of infrastructure (and
probably others that I don't know about).  It also requires discussion
with upstream Breakpad (and therefore breaking many more consumers
than just Mozilla-internal ones) and/or forking Breakpad completely
and/or rewriting all our tools (which is a special case of forking).
We want to get to that end state, but it's a fair bit of work.

The patches on the bug implement a truncated version of the above that
doesn't require dumping the entire inlining hierarchy.  The idea is
that, as much as possible, for addresses in some function, you want to
attribute those addresses to source code lines for said function.  So
instead of recording the most precise lines possible (A, B, C, D, E,
etc. in the above), you want to simply attribute the entire range of
B-E to `DoTheThing`.  This transformation loses information, but it
tends to produce stack traces that make more sense to humans.  You can
see some examples of the changes in:

https://bugzilla.mozilla.org/show_bug.cgi?id=524410#c22
https://bugzilla.mozilla.org/show_bug.cgi?id=524410#c29

which result in more sensible crash stacks, even if they don't
immediately point out what's going wrong.

This work has not landed yet, but should land sometime next week.  If
you have concerns, or you happen to have crash stacks that you're
curious about whether they'd change under this new policy, please
comment in bug 524410 or email me.

Thanks,
-Nathan


Re: Duplicate dependency policy for Rust in mozilla-central?

2019-03-15 Thread Nathan Froyd
On Fri, Mar 15, 2019 at 9:32 AM Xidorn Quan  wrote:
> Should we have some kind of policy to address duplicate dependencies in Gecko 
> as well? Maybe I'm missing something but I don't think I'm aware of any 
> previous discussion about this.

I remember IRC discussions about this, but there were some concerns
that banning duplicates would result in slowing people down.  I
understand the concern, but I'm not sure how much of a problem it
would be in practice.

I personally am in favor of banning duplicates.  We would need a
Servo-like list of crates that can be duplicated because there are
several crates that are difficult to move forward.

-Nathan


Re: PSA: Min clang / libclang requirement was updated not long ago...

2019-02-27 Thread Nathan Froyd
On Wed, Feb 27, 2019 at 9:05 AM Axel Hecht  wrote:
>
> Am 27.02.19 um 14:39 schrieb Nathan Froyd:
> > On Wed, Feb 27, 2019 at 6:22 AM Kartikaya Gupta  wrote:
> >> On Wed, Feb 27, 2019 at 3:40 AM Axel Hecht  wrote:
> >>>
> >>> Can we please not force bootstrap?
> >>
> >> +1. In general bootstrap isn't "rock solid" enough to force people
> >> into running it.
> >
> > If people have problems with bootstrap (it doesn't do enough, it
> > assumes too much about your system, etc. etc.), please file bugs on
> > what's wrong.  We need to start depending more on bootstrap for
> > everything, to the point of "you can't depend on X unless it gets
> > installed via bootstrap", and we can't get to that world if we don't
> > know what rough edges people find in bootstrap.
>
> Do you have a suggestion on how to do that in practice? Rolling back
> from a broken development environment is easily a couple of hours of
> work, in the case of homebrew breaking all my virtualenvs, for example.

It's not clear to me what bootstrap does that breaks things.  Do you
want the ability to skip installing everything via homebrew?

-Nathan


Re: PSA: Min clang / libclang requirement was updated not long ago...

2019-02-27 Thread Nathan Froyd
On Wed, Feb 27, 2019 at 6:22 AM Kartikaya Gupta  wrote:
> On Wed, Feb 27, 2019 at 3:40 AM Axel Hecht  wrote:
> >
> > Can we please not force bootstrap?
>
> +1. In general bootstrap isn't "rock solid" enough to force people
> into running it.

If people have problems with bootstrap (it doesn't do enough, it
assumes too much about your system, etc. etc.), please file bugs on
what's wrong.  We need to start depending more on bootstrap for
everything, to the point of "you can't depend on X unless it gets
installed via bootstrap", and we can't get to that world if we don't
know what rough edges people find in bootstrap.

Thanks,
-Nathan


Re: Moving reviews to Phabricator

2019-02-08 Thread Nathan Froyd
On Fri, Feb 8, 2019 at 9:08 AM Andreas Tolfsen  wrote:
> Whilst I don’t have first hand experience, Phabricator has been
> known to struggle with large patches, such as the result of upgrading
> cargo dependencies under third_party/rust.  Was a bug ever filed
> on this?

Bug 1492214 was filed for large patches in general (WebRTC updates);
bug 1498171 was filed specifically for `mach vendor rust`.

-Nathan


Re: XPCOM Tidying Proposal

2019-01-11 Thread Nathan Froyd
On Thu, Jan 10, 2019 at 6:15 PM Kyle Machulis  wrote:
> In an effort to bring Marie Kondo memes to dev-platform, I'd like to
> propose an XPCOM tidying project.

+1.

> - Removal of [noscript] methods in interfaces in favor of direct calls via
> Cast() where possible.
> - Direct getters through Cast() where possible, infallible (also where
> possible) otherwise.

For avoidance of doubt, since I don't think we have a global Cast()
function, this is meant to refer to idioms like:

https://searchfox.org/mozilla-central/rev/b4ebbe90ae4d0468fe6232bb4ce90699738c8125/caps/BasePrincipal.h#136-142

and we'd prefer the explicit downcast from an interface pointer
(assuming the interface is [builtinclass]) and a C++-side getter,
rather than declaring the getter in the interface definition?
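
Concretely, something like this self-contained sketch (made-up names; the
real-world example is the BasePrincipal link above):

```
// Stand-ins so the sketch compiles on its own; in Gecko the interface would
// come from an .idl marked [builtinclass].
class nsIFoo {
 public:
  virtual ~nsIFoo() = default;
};

class Foo final : public nsIFoo {
 public:
  // The Cast() idiom: because nsIFoo is builtinclass, every nsIFoo* is known
  // to be a Foo*, so callers downcast once and use C++-only getters instead
  // of [noscript] entries in the interface.
  static Foo* Cast(nsIFoo* aFoo) { return static_cast<Foo*>(aFoo); }

  int GetValueInternal() const { return mValue; }

 private:
  int mValue = 0;
};

int ReadValue(nsIFoo* aFoo) { return Foo::Cast(aFoo)->GetValueInternal(); }
```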

> This would probably end up continuing making XPCOM interfaces look more
> like a guide to shared parts between C++/JS than the template for all
> access that is has been for years.

+1.

-Nathan


Re: Proposal to adjust testing to run on PGO builds only and not test on OPT builds

2019-01-04 Thread Nathan Froyd
On Fri, Jan 4, 2019 at 11:57 AM Nicholas Alexander
 wrote:
> One reason we might not want to stop producing opt builds: we produce
> artifact builds against opt (and debug, with --enable-debug in the local
> mozconfig).  It'll be very odd to have --enable-artifact-build and
> _require_ --enable-pgo or whatever it is in the local mozconfig.

This seems reasonable.  (I'm in agreement with the people upthread
that think we should have opt testing, but regardless of that
particular outcome, not requiring people to put goo in their
mozconfigs seems like a noble goal.)

> I expect that these opt build platforms will be relatively inexpensive to
> preserve, because step one (IIUC) of pgo is to build the same source files
> as the opt builds.  So with luck we get sccache hits between the jobs.
> Perhaps somebody with more knowledge of pgo and sccache can confirm or
> refute that assertion?

PGO uses different compilation flags than a normal opt build in both
the profiling and the profile use phases (for instrumentation, etc.),
so I'd assume that opt builds and PGO builds would not share compiled
objects.

-Nathan


arm64 windows nightlies now available

2018-12-19 Thread Nathan Froyd
I'm excited to announce that we have bona fide arm64 windows nightlies
available for download!

https://archive.mozilla.org/pub/firefox/nightly/latest-mozilla-central/

featuring full updater and installer support; see the
firefox-66.0a1.en-US.win64-aarch64* files in that directory.  Thanks
to Tom Prince for the updater work and Rob Strong and Matt Howell for
the installer work.

Please note that these builds are even nightlier than our normal
nightlies on other platforms: they have *not* gone through our usual
automated testing process, bugs are almost certain to crop up, etc.
etc.  That being said, I have been using builds off automation
(manually updating them) for several weeks now and have had a pleasant
experience.

There are still a few areas that we know need work: the Gecko profiler
is not functional, but should be by the end of the week.  The
crashreporter does not work.  Our top-tier JS JIT (IonMonkey) is not
turned on.  WebRTC is not turned on.  EME (Netflix, etc.) does not
work yet.  And so forth...Did I mention that these are nightlies?

If you use these builds and you find issues, please file bugs blocking:

https://bugzilla.mozilla.org/show_bug.cgi?id=arm64-windows-bugs

so we can start to triage and prioritize what needs to be fixed.

Happy dogfooding!
-Nathan


Re: Disabling IPC protocol with build flags?

2018-12-13 Thread Nathan Froyd
We have PREPROCESSED_IPDL_SOURCES in moz.build, which should at least
let you preprocess IPDL files before they get compiled.  There are no
uses in the tree, but there are tests, so ideally you should not run
into *too* many issues.

-Nathan
On Thu, Dec 13, 2018 at 11:45 AM  wrote:
>
>TL;DR: Is there a way to make a "manages" declaration conditional, for 
> protocols that depend on types that might not be defined for certain 
> build-flags?
>
>
>I ask because I am working on a protocol that fulfills webrtc's networking 
> needs (PMediaTransport), but webrtc can be disabled as a whole with the 
> --disable-webrtc build flag. PMediaTransport uses many types that aren't 
> defined when webrtc is disabled. I have tried the following approaches so far:
>
> 1. Export a mostly-empty dummy version of PMediaTransport instead of the real 
> one when webrtc is disabled. This gets me an "error: |manager| declaration in 
> protocol `PMediaTransport' does not match any |manages| declaration in 
> protocol `PSocketProcessBridge'", even after a clobber. If I remove the 
> "real" PMediaTransport from the tree, this approach works. It seems that ipdl 
> files are processed in-tree (at least partially) instead of in exports.
>
> 2. Make sure the types PMediaTransport depends on are always defined. This 
> ends up pulling in a ton more webrtc-only code that these types depend on, 
> which is not ideal.
>
> The next thing for me to try is to typedef all of these undefined types to 
> int or similar when webrtc is disabled, but I was wondering if there was a 
> way to make a "manages" conditional on build-flags?
>
> Best regards,
> Byron Campen


Re: Dropping support for compiling with MSVC in the near future

2018-12-06 Thread Nathan Froyd
On Thu, Dec 6, 2018 at 6:10 PM Gijs Kruitbosch  wrote:
> Can someone elaborate on what this means for debugging on Windows, and
> for our onboarding story on Windows?

At least in terms of stepping through, examining variables, etc.,
clang-cl is on par with MSVC.  If there are specific, stop-the-presses
cases that MSVC handles better than clang-cl...that's part of this
thread's reason for existence: for people to speak up about issues.

I can't speak to your debugging experience today, though; perhaps
somebody with more experience on Windows can chime in.  And the docs
should be modernized, as you note.  e10s and the relative inability of
debuggers to handle multi-process debugging well means the debugging
experience has gotten worse everywhere, and it would be worth thinking
about ways that we could address e10s issues as well.

> We're already making
> people install MSVS to get the relevant Windows SDKs (manually, not
> supported via ./mach bootstrap, and hopefully ticking the right boxes in
> the installer or they have to do it again until they do win at
> checkbox-golfing), and now we're telling them that although we just made
> them download multiple gigs of stuff and install a pile of MS C++
> compiler infrastructure on their machine, we can't actually use that and
> they need to download *another* C++ compiler to actually build/debug
> Firefox?

clang-cl is installed as part of `mach boostrap`, and configure will
automatically find clang-cl in the location bootstrap places it,
without any fuss on the user's part.

-Nathan


Re: What is future of editor mode-lines after the switch to clang-format?

2018-11-30 Thread Nathan Froyd
On Fri, Nov 30, 2018 at 1:51 PM Ehsan Akhgari  wrote:
> I think these are all great points.  It seems like for Emacs, it is not
> actually necessary to sprinkle modelines across all of the files in your
> repository (per https://bugzilla.mozilla.org/show_bug.cgi?id=1023839#c7).
> For Vim, Benjamin Bouvier just landed a patch in
> https://bugzilla.mozilla.org/show_bug.cgi?id=1511383 to update the existing
> modelines to have proper line width and tab width.
>
> It seems like for Emacs, we should probably do something similar also
> relatively soon merely to address the newly introduced inconsistencies due
> to the reformat.  But I'd like to hear from Emacs users what they think,
> and if they have a preference on updating existing modelines vs using a
> .dir-locals.el file instead...

Using .dir-locals.el sounds great, at least for things like
indent-tabs-mode and c-basic-offset.  Emacs 23 is older than what
comes with Ubuntu 14.04 (LTS), so I think we're in the clear using it
as far as Emacs versions go.

Google's style guide comes with a builtin style for emacs's cc-mode:

https://raw.githubusercontent.com/google/styleguide/gh-pages/google-c-style.el

which we could just import into .dir-locals.el.

Unfortunately, it doesn't look like .dir-locals.el provides any way to
set file modes, e.g. setting python-mode for moz.build files, as some
modelines need to do, so we'd need to keep at least the mode bits
around for those files.  That doesn't seem too bad.

-Nathan


Re: Signals in Firefox

2018-11-21 Thread Nathan Froyd
On Wed, Nov 21, 2018 at 4:45 AM David Teller  wrote:
> What is our policy on using Unix signals on Firefox? I am currently
> reviewing a patch by external contributors that involves inotify's
> signal API, and I assume it's a bad idea, but I'd like to ask around
> first before sending them back to the drawing board.

I don't think we have a policy, per se; certainly we already have uses
of signals in the JS engine's wasm implementation and the Gecko
profiler.  But in those cases, signals are basically the only way to
do what we want.  If there were alternative ways to accomplish those
tasks besides signals, I think we would have avoided signals.

inotify looks like it has a file descriptor-based interface which
seems perfectly usable.  Not being familiar with inotify beyond
reading http://man7.org/linux/man-pages/man7/inotify.7.html, is there
a reason to prefer the signal interface versus the file descriptor
interface?  We use the standard gio/gtk event loop, so hooking up the
returned file descriptor from inotify_init should not be onerous.
widget/gtk/nsAppShell.cpp even contains some code to crib from:

https://searchfox.org/mozilla-central/source/widget/gtk/nsAppShell.cpp#275-281
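
For reference, a minimal sketch (not Gecko code, error handling omitted) of
the file descriptor interface hooked into a glib main loop:

```
#include <sys/inotify.h>
#include <unistd.h>

#include <glib.h>

// Called from the main loop whenever the inotify fd is readable; no signals.
static gboolean OnInotifyReadable(GIOChannel* aChannel, GIOCondition aCond,
                                  gpointer aData) {
  char buf[4096];
  ssize_t len = read(g_io_channel_unix_get_fd(aChannel), buf, sizeof(buf));
  // ... walk the struct inotify_event records in buf[0..len) ...
  (void)len;
  (void)aCond;
  (void)aData;
  return TRUE;  // keep the watch installed
}

void WatchDirectory(const char* aPath) {
  int fd = inotify_init1(IN_NONBLOCK | IN_CLOEXEC);
  inotify_add_watch(fd, aPath, IN_CREATE | IN_DELETE | IN_MODIFY);
  // Hook the descriptor into the glib main loop that Gecko already runs.
  GIOChannel* channel = g_io_channel_unix_new(fd);
  g_io_add_watch(channel, G_IO_IN, OnInotifyReadable, nullptr);
}
```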

-Nathan


Re: Rust version required to build Firefox ESR versions

2018-08-08 Thread Nathan Froyd
On Wed, Aug 8, 2018 at 9:50 AM, Dirkjan Ochtman  wrote:
> A related question: is there some place where I can follow along with plans
> relating to Rust upgrades for mozilla-central? As a Linux distribution
> packager, that might be useful information. (I remember seeing there was a
> Rust upgrade planned from 1.24.0 straight to 1.28.0, but couldn't find a
> reference when I recently went looking for it in Bugzilla.)

There's https://wiki.mozilla.org/Rust_Update_Policy_for_Firefox which
appears to be reasonably current.

Thanks,
-Nathan


Re: Rust version required to build Firefox ESR versions

2018-08-08 Thread Nathan Froyd
On Wed, Aug 8, 2018 at 7:07 AM,   wrote:
> What is the plan for future Firefox ESR versions? Will ESR versions require
> new Rust versions for building as they are released, or will they stay on
> the version that was required for the first ESR revision?

The plan is to keep the ESR versions on the Rust version that was
required to build them when the ESR branch was created.  We've done
this for our C/C++ compiler versions and that's worked out just fine.

Given Rust's compatibility guarantees, you may be able to build ESR
with later Rust releases, but your best bet is using the specific Rust
version that we're using.

Thanks,
-Nathan


JS builds now depend on Rust

2018-08-02 Thread Nathan Froyd
JS-only builds now require that a suitable rustc and cargo be found at
configure time (bug 1444141, currently on inbound).  The version
requirements are identical to Gecko's version requirements (Rust
1.27.0 at the time of this writing) and will be bumped in tandem with
Gecko's version requirement bumps.

Note that *no* Rust code is being built in the JS engine/shell at this
time; we are adding the requirement first to make development of
Rust-requiring projects easier (e.g. bug 1469027 for cranelift support
for wasm).

Thanks,
-Nathan


Re: C++ standards proposal for a embedding library

2018-07-18 Thread Nathan Froyd
On Wed, Jul 18, 2018 at 4:54 PM, Botond Ballo  wrote:
> On Wed, Jul 18, 2018 at 4:13 PM, Boris Zbarsky  wrote:
>> Am I correct in my reading that this would require the C++ standard library
>> to include an implementation of the web platform?
>
> Either to include one, or to be able to find and use one provided by
> the OS/platform.

Only if the implementation wanted to be fully compliant, correct?
People were still happy to use libstdc++ (say) as an implementation of
the C++ standard even when it didn't contain every standard header.  Witness
Boost's
extensive workarounds for C++ library implementations not quite
supporting something.

I can imagine that embedded implementations wouldn't include the proposed
web-embedding header, their documentation would be clear about this, and their
users would be OK with this situation.

-Nathan


Re: Coding style: Making the `e` prefix for enum variants not mandatory?

2018-06-29 Thread Nathan Froyd
On Thu, Jun 28, 2018 at 7:35 PM, Emilio Cobos Álvarez  wrote:
> Oh, I didn't realize that those peers were the only ones to be able to
> update the style guide, sorry. I guess it makes sense.
>
> I can revert the change if needed and try to get sign-off from some of
> those.
>
> Apologies again, I just followed the procedure that was followed in the
> previous thread to add the rule. Let me know if you want that change
> reverted and I'll happily do so.

My sense, after grepping through the code for "enum [A-Z].* \{" (that
is, ignoring `enum class`, since those are effectively prefixed by the
language) and eyeballing the results is that ALL_CAPS and e-prefixed
enums are more common than CamelCase.  Commands:

# rough approximations only!
# enums defined on a single line
git grep -E 'enum [A-Z].* \{[^}]+\}' -- '*.[ch]*' |grep -v -e
'gfx/skia' -e 'media/webrtc' |less
# multi-line enums; -A 1 so we can see the style of the first enum
git grep -A 1 -E 'enum [A-Z].* \{' -- '*.[ch]*' |grep -v $(for p in
$(cat tools/rewriting/ThirdPartyPaths.txt); do echo -e \\x2de $p;
done) |less

This exercise was not an exhaustive analysis by any means.  (If
somebody was going to be exhaustive about this, I think it'd be
interesting to try and consider whether the enums are at global scope
or at class scope, since using class scope enums outside the class
naturally requires qualifying them with the class name.)

Based on this, I think it's reasonable to say that e-prefixing or
ALL_CAPS for (non-`enum class`) enums is the preferred style.  For
`enum class`, we (of course) do everything, but I think CamelCase is
*slightly* more common.  Given the language-required qualification for
`enum class` and a more Rust-alike syntax, I would feel comfortable
with saying CamelCase `enum class` is the way to go.
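
To make that concrete, a sketch with made-up names:

```
// Plain enums leak their enumerators into the enclosing scope, so the
// e-prefix (or ALL_CAPS) keeps them recognizable at use sites:
enum WidgetState {
  eWidgetState_Hidden,
  eWidgetState_Visible,
};

// enum class enumerators are already qualified by the language, so CamelCase
// without a prefix reads fine:
enum class FrameKind {
  Block,
  Inline,
};

void Use() {
  WidgetState s = eWidgetState_Visible;  // unqualified; the prefix carries context
  FrameKind k = FrameKind::Inline;       // qualification is mandatory
  (void)s;
  (void)k;
}
```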

Objections?  (Almost certainly? ;)

-Nathan


Re: Rust crate approval

2018-06-28 Thread Nathan Froyd
Thanks for raising these points.

On Tue, Jun 26, 2018 at 10:02 PM, Adam Gashlin  wrote:
> * Already vendored crates
> Can I assume any crates we have already in mozilla-central are ok to use?
> Last year there was a thread that mentioned making a list of "sanctioned"
> crates, did that ever come about?

I don't recall the discussion on sanctioned crates, do you have a
pointer to that thread?

Regardless, anything that's already vendored should be OK.

> * Updates
> I need winapi 0.3.5 for BITS support, currently third_party/rust/winapi is
> 0.3.4. There should be no problem updating it, but should I have this
> reviewed by the folks who originally vendored it into mozilla-central?

While we can accommodate multiple versions of crates in-tree, we would
prefer that only one version of a given crate is vendored into the
tree at any one time, but sometimes this is an impractical goal to
achieve.  So if upgrading whatever uses winapi 0.3.4 to use 0.3.5
instead is reasonable, please do that first.  If it turns out to be
impractical, go ahead and vendor the duplicate crate.

For review concerns, see below.

> * New crates
> I'd like to use the windows-service crate, which seems well written and has
> few dependencies, but the first 0.1.0 release was just a few weeks ago. I'd
> like to have that reviewed at least as carefully as my own code,
> particularly given how much unsafety there is, but where do I draw the
> line? For instance, it depends on "widestring", which is small and has been
> around for a while but isn't widely used, should I have that reviewed
> internally as well? Is popularity a reasonable measure?

Our normal review process is all that we have used so far; I think
thus far we have assumed that Rust's safety guarantees enable us to
forego a more stringent review process that has sometimes been used
for (some) C/C++ code.  (e.g. I think modules/brotli underwent some
amount of scrutiny, whereas mfbt/double-conversion was a more
rubber-stamp sort of import.)  This is probably not a tenable
long-term position, especially given how easy it is to pull in Rust
code vs. a  C/C++ library.

We have generally trusted people to use good judgement in what they
use and how much review is required.  Accordingly, I think you should
request review from the people who would normally review your code,
and if you have concerns about specific crates that are being
vendored, you should call those crates out as needing especial review.
If you or your reviewers think such reviews fall outside of your
comfort zone/area of expertise/Rust capabilities, please flag myself
or Ehsan, and we will work on finding people to help.

Thanks,
-Nathan


Re: Coding style: brace initialization syntax

2018-04-13 Thread Nathan Froyd
On Fri, Apr 13, 2018 at 9:37 AM, Emilio Cobos Álvarez  wrote:
> Those changes I assume were generated with clang-format / clang-format-diff
> using the "Mozilla" coding style, so I'd rather ask people to agree in
> whether we prefer that style or other in order to change that if needed.
>
> Would people agree to use:
>
>  , mIsRootDefined { false }
>
> Instead of:
>
>  , mIsRootDefined{ false }
>
> What's people's opinion on that? Would people be fine with a more general
> "spaces around braces" rule? I can't think of a case right now where I
> personally wouldn't prefer it.

If we are going to have brace-initialization intermixed with
list-initialization (i.e. parentheses) in our codebase, I think we
should prefer no space prior to the brace, for consistency.  If we are
going to switch wholesale (which would be a big job!)...I'd probably
say "no space", just on "that's the way we've always done it" grounds,
but can be convinced otherwise.

I agree with bz on disallowing braces in constructor init lists.

> Also, we should probably state that consistency is preferred (I assume we
> generally agree on that), so in this case braces probably weren't even
> needed, or everything should've switched to them.

Indeed.

> Finally, while I'm here, regarding default member initialization, what's
> preferred?
>
>   uint32_t* mFoo = nullptr;
>
> Or:
>
>   uint32_t* mFoo { nullptr };

I lean towards the former here.  I think the former is more common in
the code I've seen, but apparently the latter is "preferred C++" or
something?
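
Putting the pieces together, a made-up class showing the combination argued
for above (no braces in the init list, `=` for default member init):

```
#include <cstdint>

class Example {
 public:
  explicit Example(bool aDefined)
      : mIsRootDefined(aDefined) {}  // no braces in the constructor init list

 private:
  bool mIsRootDefined;
  uint32_t* mFoo = nullptr;  // default member init via `=`, not `{ nullptr }`
};
```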

-Nathan


Re: redundant(?) code churn and code style issues in bug 525063

2018-04-13 Thread Nathan Froyd
FWIW, all these complaints (and more) have been raised in the bug.
I'm not entirely sure what we're going to do yet, but rest assured
that people are definitely aware of the issues.

Thanks,
-Nathan

On Fri, Apr 13, 2018 at 8:31 AM, Kartikaya Gupta  wrote:
> On Fri, Apr 13, 2018 at 6:18 AM, Jonathan Kew  wrote:
>> It's presumably auto-generated by a static-analysis tool or something like
>> that, but ISTM it has been overly aggressive, adding a lot more code churn
>> than necessary (as well as committing some pretty extreme style violations
>> such as over-long lines).
>
> +1. I too was sad to see the style violations/general ugliness of the
> changes in the patch.


Re: incremental compilation for opt Rust builds

2018-03-13 Thread Nathan Froyd
On Tue, Mar 13, 2018 at 3:10 AM, Henri Sivonen <hsivo...@hsivonen.fi> wrote:
> On Tue, Mar 13, 2018 at 2:56 AM, Nathan Froyd <nfr...@mozilla.com> wrote:
>> (Our release builds use -O2 for Rust code.)
>
> What does cargo bench use by default?
> (https://internals.rust-lang.org/t/default-opt-level-for-release-builds/4581
> suggests -O3.)

As mentioned by Alexis, -O3 is indeed what gets used by default.

> That is, is cargo bench for a crate that's vendored into m-c
> reflective of that crate's performance when included in a Firefox
> release?

I do not know.  I know that folks working on WebRender were finding
that -O3 produces *much* better code for certain things; they filed
bugs for rustc that I can't find right now.  I don't know if those
bugs have been addressed.

When Stylo was getting off the ground a year ago, mbrubeck did some
-O2 vs. -O3 size analysis, which led to some -O2 vs. -O3 performance
analysis in https://bugzilla.mozilla.org/show_bug.cgi?id=1328954.  The
conclusion there is that -O3 wasn't worth it and came with a
reasonably large codesize increase.

I guess the answer is "probably, but you should measure to make sure"?

-Nathan


incremental compilation for opt Rust builds

2018-03-12 Thread Nathan Froyd
Hi all,

In bug 1437627, I turned on incremental compilation for Rust for local
developer opt builds as the default behavior.  Debug builds should be
using incremental compilation already, and automation builds continue
to *not* use incremental compilation, due to environmental
considerations that would make incremental builds unprofitable.

If you use local builds for performance analysis purposes, you should
set --enable-release or set RUSTC_OPT_LEVEL to 2 or 3 in your
mozconfig.  This condition was reached after looking through the
discussion at 
https://lists.mozilla.org/pipermail/dev-platform/2017-October/020324.html
when the default Rust optimization level was changed to -O1 from -O2.
Discussion in that thread suggested that -O2 or -O3 for Rust
compilation was more suitable for performance analysis than the
default of -O1.  (Our release builds use -O2 for Rust code.)

If you have issues or suggestions on how to improve, please file
blocking bugs for bug 1437627 and CC me.

Thanks,
-Nathan


Re: Revised proposal to refactor the observer service

2018-01-29 Thread Nathan Froyd
On Wed, Jan 17, 2018 at 10:47 AM, Gabriele Svelto  wrote:
> 1) Introduce a new observer service that would live alongside the
> current one (nsIObserverService2?). This will use a machine-generated
> list of topics that will be held within the interface itself instead of
> a separate file as I originally proposed. This will become possible
> thanks to bug 1428775 [2]. The only downside of this is that the C++
> code will not use an enum but just integer constants. The upside is that
> this will need only one code generator and only one output file (the
> IDL) versus two generators and three files in my original proposal.
>
> 2) Migrate all C++-only and mixed C++/JS users to use the new service.
> Since the original service would still be there this can be done
> incrementally. Leave JS-only users alone.
>
> 3) Consider writing a JS-only pub/sub service that would be a better fit
> than the current observer service. If we can come up with something
> that's better than the observer service for JS then it can be used to
> retire the old service for good.

I'm not super-excited about having a split between C++/JS users and
JS/JS users.  Besides the duplication of effort, I'm guessing that it
will be painful on the JS side.  Which observer service do I use?
What if I suddenly find myself having C++ clients, do I have to
rewrite all previously JS-only-side callers?  And so on.

Here's a very half-baked idea: what if the canonical location for
observer topics lived in JS?  JS clients would make calls like:

Services.obs.notifyObservers(null, ObserverTopic.QuitApplication);

as described in the previous email, so they wouldn't have to care
about the particular type of ObserverTopic.QuitApplication and we'd
avoid the spelling errors that come with strings.  We would have to
wrap the interface described below, so Services.obs would be some sort
of JS object that called into nsIObserverService2; nsIObserverService2
would not be used directly from client code.  Under the hood, I think
we'd continue to use strings (unfortunately).

For non-artifact builds, we could generate some nice enumeration of
topics for C++ to use, as before.  We'd also generate some sort of
mapping from string topic names to enum values, whose use will be
described below.

The XPIDL for this imaginary service would look something like:

[builtinclass, scriptable]
interface nsIObserverService2
{
  [binaryname(NotifyObserversFromScript)]
  void notifyObservers(in string topic, /* other parameters? */);

%{C++
  void NotifyObservers(enum ObserverTopic, /* other parameters? */);
%}

  [binaryname(AddObserverFromScript)]
  void addObserver(in string topic, /* other parameters? */);

%{C++
  void AddObserver(enum ObserverTopic, /* other parameters? */);
%}

  /* and similarly for everything else required */
}

The AddObserverFromScript implementation would look something like:

  if (topic maps to an enum value) {
AddObserver(enum value, ...);
  } else {
/* add observer to separate hashtable mapping string topics to
   observer lists, the way nsObserverService works today */
  }

NotifyObserversFromScript would work the same way: we'd see if we knew
about this string topic, and then that determines which hashtable we'd
consult.  Having the string fallback means that newly-added topics for
artifact builds work.  This still duplicates effort, but at least
everybody is using the same interface.  And things added solely for
JS's usage are silently incorporated into the enum version at the next
full build.

This has the downside of not using integers quite everywhere, so
JS->C++ calls haven't been reduced much in cost.  The extra layer of
object proxying on the JS side might also make things more expensive.
I don't know what to do about that, but some cleverness on the JS side
could make it so we used integers for "known" things and strings for
"artifact" things, perhaps by asking the C++ layer at startup or
something.

Does that proposal make sense?  WDYT?

-Nathan


Re: Refactoring proposal for the observer service

2018-01-04 Thread Nathan Froyd
On Thu, Jan 4, 2018 at 4:44 PM, Gabriele Svelto  wrote:
> On 04/01/18 22:39, Ben Kelly wrote:
>>  Or make your "generator"
>> create the idl which then creates the js/c++?
>
> I tried as that could have worked! Unfortunately it doesn't seem to be
> possible ATM. mach bailed out with a weird error when I tried to put an
> IDL file among the generated ones. I didn't really dig into it but I
> suspect that since we already generate code from IDL files they're not
> expected to be generated in turn.

This is very doable, it just requires some build system hackery: we
accept preprocessed/generated WebIDL files, and generated IDL files
would require basically the same approach.  I can help with the build
system hackery if you want to continue pursuing this approach.

-Nathan


Re: Refactoring proposal for the observer service

2018-01-04 Thread Nathan Froyd
On Wed, Jan 3, 2018 at 5:30 PM, Ben Kelly  wrote:
> On Wed, Jan 3, 2018 at 5:09 PM, Gabriele Svelto  wrote:
>> So after validating my approach in that bug (which is almost ready) I've
>> thought that it might be time to give the observer service the same
>> treatment. First of all we'd have a list of topics (I've picked YAML for
>> the list but it could be anything that fits the bill):
>
> Could we use our existing idl/webidl/ipdl for this?  It would be nice not to
> have to maintain another code generator in the tree if possible.

I don't understand this objection on two levels:

1) Why does maintaining another code generator in tree hurt anything?
We have many one-off code generators for specific purposes:

https://searchfox.org/mozilla-central/search?q=GENERATED_FILES&case=false&regexp=false&path=moz.build

Some of these use non-Python generators (e.g. the a11y files and gfx
shaders), but there are probably enough to count on multiple hands.

I would expect modifications of the code generator to be infrequent,
and the code generator itself is liable to be a screenful or so of
straightforward code.

2) How would one shoehorn this into *DL?  The options that come to mind are:

- Separate methods for every observer topic, which sounds terrible
from a code duplication perspective.  Though maybe this would be nice
for JS clients, so we could say things like:

  Services.obs.notifyXPCOMShutdown(...)

which would save on some xpconnect traffic and representing a
large-ish enum in JS?
- WebIDL enums (I think), which would carry a large space penalty and
make everything that wants to use the observer service depend on code
from dom/, which seems undesirable.
- IDL enums, which aren't reflected into JS, so we'd need some custom
work there.
- IPDL doesn't even have the concept of definable enums, and wouldn't
reflect things into JS, so we'd need even more work there.  (Some of
this may be desirable; we talked about extending IPDL bits into JS in
Austin, and kmaglione felt this was reasonable, so...)

The first and third don't sound *too* bad, but I don't think that
writing a one-off code generator would be strictly worse...

-Nathan


Re: Hiding 'new' statements - Good or Evil?

2017-11-25 Thread Nathan Froyd
On Fri, Nov 24, 2017 at 11:35 AM, Eric Rescorla  wrote:
> On Thu, Nov 23, 2017 at 4:00 PM, smaug  wrote:
>> I guess I'd prefer UniquePtr::New() over MakeUnique to be more clear about
>> the functionality.
>
> This seems like a reasonable argument in isolation, but I think it's more
> important to mirror the standard C++ mechanisms and C++-14 already defines
> std::make_unique.
>
> As a post-script, given that we now can use C++-14, can we globally replace
> the MFBT clones of C++-14 mechanisms with the standard ones?

In general, no.

std::unique_ptr is not a drop-in replacement for mozilla::UniquePtr,
for instance, and people seem OK with that--plus maintaining our own
smart pointer class opens up optimization opportunities the standard
library can only implement with difficulty[0].  In a similar fashion,
having our own mozilla::Move means we can implement checks on the use
of Move that the standard library version can't.  std::vector is not
an adequate replacement for mozilla::Vector.  And so forth.
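
(For what it's worth, the MakeUnique spelling already mirrors
std::make_unique's calling convention, so a later mechanical rename is
cheap; a minimal sketch:)

  #include "mozilla/Move.h"
  #include "mozilla/UniquePtr.h"

  struct Point {
    Point(int aX, int aY) : x(aX), y(aY) {}
    int x, y;
  };

  void Example() {
    // No bare `new` at the call site; same shape as std::make_unique<Point>(1, 2).
    mozilla::UniquePtr<Point> p = mozilla::MakeUnique<Point>(1, 2);
    mozilla::UniquePtr<Point> q = mozilla::Move(p);  // explicit ownership transfer
  }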

There are specific instances where using the standard library doesn't
seem problematic: we have a bug open on replacing TypeTraits.h with
type_traits, but that has run into issues with how we wrap STL
headers, and nobody has devoted the time to figuring out how to deal
with the issues.  But each instance would have to be considered on the
merits.

-Nathan

[0] See http://lists.llvm.org/pipermail/cfe-dev/2017-November/055955.html
and the ensuing discussion.  There was discussion of standardizing
this attribute, but it's not clear to me that std::unique_ptr could
immediately take advantage of this without ABI breakage.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


mozilla-central now compiles with C++14

2017-11-15 Thread Nathan Froyd
C++14 constructs are now usable in mozilla-central and related trees.
According to:

https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code

this opens up the following features for use:

* binary literals (0b001)
* return type deduction
* generic lambdas
* initialized lambda captures
* digit separators in numeric constants
* [[deprecated]] attribute

My personal feeling is that all of these features minus return type
deduction seem pretty reasonable to use immediately, but I would
welcome comments to the contrary.
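
For anyone who wants a quick taste, here is a contrived snippet
exercising most of the list above:

  #include <stdint.h>

  void Cxx14Sampler() {
    uint32_t mask = 0b0000'1111;             // binary literal + digit separator
    uint64_t big = 1'000'000'000;            // digit separator
    auto add = [](auto a, auto b) {          // generic lambda
      return a + b;
    };
    int base = 10;
    auto addBase = [offset = base](int v) {  // initialized lambda capture
      return offset + v;
    };
    (void)mask;
    (void)big;
    (void)add(1, 2);
    (void)addBase(32);
  }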

Please note that our minimum GCC version remains at 4.9: I have seen
reports that GCC 4.9 might not always be as adept at compiling C++14
constructs as one might like, so you may want to be a little cautious
and use try to make sure GCC 4.9 does the right thing.

Starting the race to lobby for C++17 support in three...two...one... =D

Happy hacking,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fennec now builds with clang instead of gcc

2017-10-29 Thread Nathan Froyd
On Sun, Oct 29, 2017 at 7:37 PM, Kris Maglione <kmagli...@mozilla.com> wrote:
> On Sun, Oct 29, 2017 at 07:15:50PM -0400, Nathan Froyd wrote:
>>
>> For non-Android platforms, the good news here is that compiling Fennec
>> with clang was the last major blocker for turning on C++14 support.
>
> Do we have a timeline for when we'll be able to start using those features,
> or a summary of which features we'll be able to start using? There are a few
> that I've been waiting on for a long time...

Which features are you particularly eager to use?  I'm on the fence as
to whether C++14 support should be turned on in 58 or wait until 59.

The canonical feature vs. compiler matrix lives at:

https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code

Once C++14 support gets turned on, we'll be able to use everything
supported by GCC 4.9 in cross-platform code.  Upgrading our Linux
requirements to GCC 6 or better (and MSVC to 2017) would be required
before getting to use shinier features.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Fennec now builds with clang instead of gcc

2017-10-29 Thread Nathan Froyd
Hi all,

Bug 1163171 has been merged to mozilla-central, moving our Android
builds over to using clang instead of GCC.  Google has indicated that
the next major NDK release will render GCC unsupported (no bugfixes
will be provided), and that it will be removed entirely in the near
future.  Switching to clang now makes future NDK upgrades easier,
provides for better integration with the Android development tools,
and brings improvements in performance/code size/standards support.

For non-Android platforms, the good news here is that compiling Fennec
with clang was the last major blocker for turning on C++14 support.
Using clang on Android also opens up the possibility of running our
static analyses on Android.

If you run into issues, please file bugs blocking bug 1163171.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Nathan Froyd
On Thu, Oct 26, 2017 at 9:34 AM, Henri Sivonen  wrote:
> As for the computer at hand, I want to put an end to this Nvidia
> obstacle to getting stuff done. It's been suggested to me that Radeon
> RX 560 would be well supported by distro-provided drivers, but the
> "*2" footnote at https://help.ubuntu.com/community/AMDGPU-Driver
> doesn't look too good. Based on that table it seems one should get
> Radeon RX 460. Is this the correct conclusion? Does Radeon RX 460 Just
> Work with Ubuntu 16.04? Is Radeon RX 460 going to be
> WebRender-compatible?

Can't speak to the WebRender compatibility issue, but I have a Radeon
R270 and a Radeon RX 470 in my Linux machine, and Ubuntu 16.04 seems
to be pretty happy with both of them.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Experimenting with a shared review queue for Core::Build Config

2017-10-11 Thread Nathan Froyd
Does this user have a bugzilla :alias so that folks submitting patches
via MozReview or similar can just write r=build-peer or something,
rather than having to manually select the appropriate shared queue
after submitting their patch for review?

-Nathan

On Wed, Oct 11, 2017 at 1:41 PM, Chris Cooper  wrote:
> Many of the build peers have long review queues. I'm not convinced
> that all of the review requests going to any particular build peer
> need to be exclusive. We're going to try an experiment to see if we
> can make this better for patch authors and reviewers alike. To this
> end, we've set up a shared review queue for patches submitted to the
> Core::Build Config module.
>
> How to participate:
>
> When you submit your next Build Config patch, simply select the new,
> shared "user" core-build-config-revi...@mozilla.bugs as the reviewer
> instead of a specific build peer. The build peers are watching this
> new user, and will triage and review your patch accordingly.
>
> This new arrangement should hopefully shorten patch queues for
> specific reviewers and improve turnaround times for everybody. It also
> has the added benefit of automatically working around absences,
> vacations, and departures of build peers.
>
> Note: this system still allows for targeting reviews to specific build
> peers. Indeed, the build peers may do exactly that in triage if the
> patch in question touches a particular sub-domain. However, I would
> encourage patch authors to use the shared bucket unless you really
> understand the sub-domain yourself or are collaborating directly with
> a particular build peer.
>
> As indicated in the subject, this is an experiment. I will monitor
> patch queues and turnaround time over the next few months, and then
> decide in January whether we should continue or try something else.
>
> Thanks for your patience, and for trying something new.
>
> cheers,
> --
> coop
> ___
> dev-builds mailing list
> dev-bui...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-builds
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++ function that the optimizer won't eliminate

2017-10-06 Thread Nathan Froyd
On Fri, Oct 6, 2017 at 5:00 AM, Henri Sivonen  wrote:
> Do we already have a C++ analog of Rust's test::black_box() function?

We do not.

> Specifically, this isn't the answer for GCC:
> void* black_box(void* foo) {
>   asm ("":"=r" (foo): "r" (foo):"memory");
>   return foo;
> }

Can you provide a slightly larger example testcase (links to
godbolt.org would be excellent) that actually uses this, so we can see
what the compiler is doing?

I think it's customary to make these sorts of asms `volatile asm` to
tell the compiler to not touch it.  I don't know how to write one for
MSVC, but I think a small variant of the above should work for GCC.
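
Untested, but roughly what I have in mind (the "+r" constraint saves
declaring a separate output):

  // Pretend to the optimizer that `p`, and anything reachable through it,
  // is read and written here, without emitting any actual instructions.
  inline void* black_box(void* p) {
    asm volatile("" : "+r"(p) : : "memory");
    return p;
  }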

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Re: Firefox and clang-cl

2017-09-07 Thread Nathan Froyd
On Thu, Sep 7, 2017 at 10:04 AM, Ben Kelly  wrote:
> On Mon, Aug 14, 2017 at 10:44 AM, Tristan Bourvon 
> wrote:
>
>> Here's the RFC of the overflow builtins:
>> http://clang-developers.42468.n3.nabble.com/RFC-Introduce-
>> overflow-builtins-td3838320.html
>> Along with the tracking issue: https://bugs.llvm.org/show_bug.cgi?id=12290
>> And the patch:
>> https://github.com/llvm-mirror/clang/commit/98d1ec1e99625176626b0bcd44cef7
>> df6e89b289
>>
>> There's also another patch that was added on top of this one which adds
>> more overflow builtins:
>> https://github.com/llvm-mirror/clang/commit/c41c63fbf84cc904580e733d1123d3
>> b03bb5584c
>>
>> It seems clear that this optimization could bring big performance
>> improvements on hot functions. It could also reduce binary size
>> substantially (we're talking about 14->5 instructions in their case).
>>
>
> Do we have a bug filed to investigate these overflow builtins?  Should we
> file one?

There is bug 1356936 for mozilla::CheckedInt; I don't know how many
saturating-style arithmetic implementations we have in the tree, or
whether similar bugs exist for those.
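
For reference, a checked addition with the builtin is about as small as
it gets (GCC 5+/recent clang only, so CheckedInt would presumably keep
its portable fallback for other compilers):

  #include <stdint.h>

  // Returns true and writes the sum on success; returns false if the
  // mathematically correct result doesn't fit in an int32_t.
  inline bool CheckedAdd(int32_t aLhs, int32_t aRhs, int32_t* aResult) {
    return !__builtin_add_overflow(aLhs, aRhs, aResult);
  }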

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


how to make your local --enable-optimize builds compile Rust faster

2017-08-09 Thread Nathan Froyd
TL; DR: apply 
https://github.com/froydnj/gecko-dev/commit/12a80904837c60e2fc3c68295b79c42eb9be6650.patch
to get faster --enable-optimize local builds if you are working on
Stylo or touching Rust in any way.  Please try to not commit it along
with your regular patches. =D

You may have noticed that Rust compile times for --enable-optimize
builds have gotten worse lately.  This is partly due to the large
amount of Rust code we now have (with more on the way, surely), but
also because our Rust code builds with Rust's link-time optimization
(LTO) for such builds.  Building our Rust code this way makes our
binaries smaller, but imposes significant compile-time costs.

Bug 1386371 is open to track disabling LTO in Rust code on
non-automation builds, but in the absence of a solution there, I have
written a patch:

https://github.com/froydnj/gecko-dev/commit/12a80904837c60e2fc3c68295b79c42eb9be6650.patch

which you can apply in your local repository.  Having local patches is
obviously not optimal, as there's a risk that they will be committed
accidentally, but it's probably the best solution we have right now.

I know you have suggestions and/or questions, so let's transition to a
Q&A format:

Q: This patch is great, my compile is as fast as a photon!  Why don't
we just commit the patch?

A: Compiling without LTO imposes significant binary size costs.  We
don't have a great way to leave LTO disabled for local builds, but
enable it for automation builds.

Q: We can pass in flags to rustc via `RUSTFLAGS`: we can set
RUSTFLAGS="-C lto" for automation builds!  Why not do that?

A: Because rustc complains about compiling all of our intermediate
rlibs with `-C lto`.

Q: Ugh.  Could we fix rustc to not complain?

A: rustc's behavior here, while reasonable, is certainly fixable.
This or the Cargo modifications, below, are feasible options for
fixing things.

Q: Why modify Cargo?  We could run our Cargo.toml files through a
preprocessor before passing them to `cargo`, setting `lto`
appropriately for the style of build we're doing.  Wouldn't that work?

A: The output of the preprocessed Cargo.toml would then live in the
objdir, which wouldn't play well with Cargo.lock files.  Upgrading
Rust packages would require a complicated dance as well, which in turn
would affect things like the servo syncing service on autoland.

Q: What if we put the generated Cargo.toml in the srcdir instead?

A: This idea is sort of feasible, but then the build process is
modifying the srcdir, which is far from ideal: we have actively fixed
instances of this happening in the past.  Upgrading packages would
also be a pain, for similar reasons as the previous question.

Q: Huh.  OK, so what are we doing?

A: The current idea, courtesy of glandium, is to add command-line
flags to Cargo to permit setting or overriding of arbitrary Cargo.toml
settings, and then add the appropriate flags to our Cargo invocations.
An initial implementation of this idea lives in
https://github.com/rust-lang/cargo/issues/4380, though there were
concerns expressed that this functionality might be a little
over-the-top for what we want to do, and making rustc stop complaining
(see above) might be a better fix.

Whichever fix we did--rustc or Cargo or maybe even both!--we'd need to
build in automation with newer versions of the appropriate tool, and
we'd need to ensure that local builds *didn't* use the options.  Both
of these solutions are reasonably simple, and it is entirely possible
that we could have the fix uplifted to beta Rust and therefore have
the fix available for the 1.20 release, which we're planning on using
to build 57.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Actually-Infallible Fallible Allocations

2017-08-02 Thread Nathan Froyd
On Tue, Aug 1, 2017 at 12:31 PM, Alexis Beingessner
 wrote:
> I was recently searching through our codebase to look at all the ways we
> use fallible allocations, and was startled when I came across several lines
> that looked like this:
>
> dom::SocketElement& socket = *sockets.AppendElement(fallible);
>
> For those who aren't familiar with how our allocating APIs work:
>
> * by default we hard abort on allocation failure
> * but if you add `fallible` (or use the Fallible template), you will get
> back nullptr on allocation failure
>
> So in isolation this code is saying "I want to handle allocation failure"
> and then immediately not doing that and just dereferencing the result. This
> turns allocation failure into Undefined Behaviour, rather than a process
> abort.
>
> Thankfully, all the cases where I found this were preceded by something
> like the following:
>
> uint32_t length = socketData->mData.Length();
> if (!sockets.SetCapacity(length, fallible)) {
>   JS_ReportOutOfMemory(cx);
>   return NS_ERROR_OUT_OF_MEMORY;
> }
> for (uint32_t i = 0; i < socketData->mData.Length(); i++) {
>   dom::SocketElement& socket = *sockets.AppendElement(fallible);
>
> So really, the fallible AppendElement *is* infallible, but we're just
> saying "fallible" anyway.

Stating this upfront: I think we should use infallible AppendElement
here, or at the very least use a comment.  But it's worth looking at
the larger context of one of these examples:

https://dxr.mozilla.org/mozilla-central/source/netwerk/base/Dashboard.cpp#426-435

(AFAICT, the only such examples of this pattern come from the above
file, and for the reasons outlined below.)

In particular, the API of Sequence<> is constrained because it
inherits from FallibleTArray, which *only* exposes fallible
operations.  One can argue that FallibleTArray shouldn't do this, but
for Sequence, which is used for DOM bindings code, I believe the
intent is to nudge people writing DOM-exposed code to consider how to
recover from allocation failures, and to not blindly assume that
everything succeeds.
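
Given that constraint, the defensive version of the quoted Dashboard.cpp
loop is only a few lines longer, something like:

  for (uint32_t i = 0; i < socketData->mData.Length(); i++) {
    dom::SocketElement* element = sockets.AppendElement(fallible);
    if (!element) {
      JS_ReportOutOfMemory(cx);
      return NS_ERROR_OUT_OF_MEMORY;
    }
    // ...initialize *element as the existing code does...
  }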

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: refcounting [WAS: More Rust code]

2017-08-02 Thread Nathan Froyd
On Wed, Aug 2, 2017 at 7:37 AM, Enrico Weigelt, metux IT consult
 wrote:
> On 31.07.2017 13:53, smaug wrote:
>> Reference counting is needed always if both JS and C++ can have a
>> pointer to the object.
>
> Anybody already thought about garbage collection ?

Reference counting is a garbage collection technique.  See
https://en.wikipedia.org/wiki/Reference_counting where the
introductory paragraphs and the first section specifically refer to it
as a garbage collection technique.  Or consult _The Garbage Collection
Handbook_ by Jones, Hosking, and Moss, which has an entire chapter
devoted to reference counting.

Note also that Gecko's reference counting tends to be cheaper than the
reference counting assumed in the literature, since many of Gecko's
reference-counted objects can use non-thread-safe reference counting,
as said objects are only ever accessed on a single thread.  (Compare
http://robert.ocallahan.org/2012/06/computer-science-in-beijing.html)

Changing the garbage collection technique used by our C++ code to
something other than reference counting would be a large project of
dubious worth.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


new locking primitive: RecursiveMutex

2017-07-28 Thread Nathan Froyd
Earlier this week, bug 1347963 landed, introducing a new
mozilla::RecursiveMutex type.  A RecursiveMutex instance may be
acquired on the same thread while said thread is already holding the
lock; such behavior with mozilla::Mutex would result in deadlocks.

While we already have a recursively-acquirable lock, ReentrantMonitor,
ReentrantMonitor does too much for many scenarios: it provides
condition variable-like semantics as well as recursive locking.  This
extra functionality makes ReentrantMonitor relatively slow;
RecursiveMutex provides only the locking functionality and should be
at least 2x faster than ReentrantMonitor, in addition to being
smaller.  Bug 1347963 converted several uses of ReentrantMonitor to
RecursiveMutex; the conversions were all straightforward.

If your code already uses ReentrantMonitor solely for its recursive
locking capabilities, please see whether converting to RecursiveMutex
would be feasible.
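
Usage mirrors Mutex/MutexAutoLock; a minimal sketch of what converted
code looks like:

  #include "mozilla/RecursiveMutex.h"

  class Cache {
  public:
    Cache() : mLock("Cache::mLock") {}

    void Update() {
      mozilla::RecursiveMutexAutoLock lock(mLock);
      Flush();  // OK: re-acquiring on the same thread does not deadlock.
    }

    void Flush() {
      mozilla::RecursiveMutexAutoLock lock(mLock);
      // ...
    }

  private:
    mozilla::RecursiveMutex mLock;
  };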

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Announcing MozillaBuild 3.0 Release

2017-07-24 Thread Nathan Froyd
On Mon, Jul 24, 2017 at 6:21 PM, Enrico Weigelt, metux IT consult
<enrico.weig...@gr13.net> wrote:
> On 24.07.2017 20:40, Nathan Froyd wrote:
>> Sure, it's daily business for us, too.  Mike cited examples in his
>> response (e.g. we cannot compile natively on 32-bit systems, Android
>> included, so Firefox for such platforms is cross compiled from a
>> 64-bit platform).
>
> OTOH, we should keep in mind that most distros dont do cross compiling.
> Some distros (eg. gentoo or lfs) are also building on the target.
>
> I don't like the idea of kicking away these platforms.

We do take into account the needs of Linux distributions when making
changes.  So far as I am aware, our compilation requirements for Linux
platforms have not caused huge amounts of headaches.

>>> Haven't tried on Windows yet. Can we crosscompile it from Linux ?
>>
>> No.  There are a few people interested, but there are lots of issues.
>
> I'd guess it could be helpful for developers not running Windows,
> at least for doing some build checks.

Developers not running Windows tend to use our try server for
compiling on Windows.  There are some good reasons for cross-compiling
to Windows, but none of them have become important enough to seriously
consider making the switch.

>>> This raises the question: why does it take up so much memory ?
>>
>> Because Firefox is a large program, and linking large programs takes
>> up a large amount of memory, more than is addressable on 32-bit
>> systems.
>
> Well, why is the main program so big that linking takes up so much
> memory ? Perhaps a lack of proper modularization ?

Well, libxul (the main shared library in Firefox) is rather large, but
we're not going to split it into smaller libraries.  We *did* have
multiple shared libraries in the past, and there was a significant
startup and performance hit for doing that.  So we have one large
shared library to link now.

> One thing we could do about that might be limitig the exported symbols
> of shared libraries (only export the really necessary ones).

We already do that.  Eyeballing the `readelf -sW` output from my
Firefox nightly on Linux, libxul exports ~1% of all the symbols it
defines.

Other people have mentioned options for pushing patches; my preferred
tool for doing this is git-bz-moz:
https://github.com/mozilla/git-bz-moz

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Announcing MozillaBuild 3.0 Release

2017-07-24 Thread Nathan Froyd
On Mon, Jul 24, 2017 at 4:25 PM, Enrico Weigelt, metux IT consult
 wrote:
> On 24.07.2017 16:00, Mike Hoye wrote:
>> Unfortunately we have to build _for_ a number of our supported operating
>> systems without being able to build _on_ those operating systems.
>
> Is that a big problem ?
>
> At least within Linux world, it's daily business for me (well, I'm
> doing a lot of embedded projects).

Sure, it's daily business for us, too.  Mike cited examples in his
response (e.g. we cannot compile natively on 32-bit systems, Android
included, so Firefox for such platforms is cross compiled from a
64-bit platform).

> Haven't tried on Windows yet. Can we crosscompile it from Linux ?

No.  There are a few people interested, but there are lots of issues.

>> That's been true for some time now; while we still support 32-bit systems,
>> for example, you can't build Firefox on 32-bit systems at all due to
>> memory constraints,
>
> This raises the question: why does it take up so much memory ?

Because Firefox is a large program, and linking large programs takes
up a large amount of memory, more than is addressable on 32-bit
systems.

>> This means some people on older hardware or OSes aren't able build
>> Firefox, that's true,
>
> Not sure, whether an 4core i7 w/ 8GB RAM already counts as "old", but
> it's really slow on my box. I've got the impression there's stil a lot
> of room for optimizations. For example, I wonder why lots of .cpp-files
> are #include'd into one.

Because doing this actually makes builds faster and the final binary
smaller.  See https://blog.mozilla.org/nfroyd/2013/10/05/faster-c-builds/
for details.

If you had more RAM, your builds would likely be much faster.  Four
C++ compiles could easily take 1-1.5GB of memory apiece, and you would
probably like to run other programs while you compile.

There are probably many places the build can be optimized; if you have
a suggestion, please file a bug at
https://bugzilla.mozilla.org/enter_bug.cgi?product=Core=Build%20Config

>> but it doesn't mean they have no way to contribute to Firefox,
>
> A CI for contributors would be very nice here. For example, I don't
> have any Windows systems for decades.

You can request level 1 commit access to our try infrastructure:
https://www.mozilla.org/en-US/about/governance/policies/commit/access-policy/
 That will enable you to build Firefox with your patches on all the
major platforms we support.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: `mach cargo check` now available

2017-07-06 Thread Nathan Froyd
On Thu, Jul 6, 2017 at 2:28 AM, Simon Sapin  wrote:
> Would it make sense to allow arbitrary Cargo sub-commands? In Servo I end up
> using `mach cargo update` for manipulating Cargo.lock, `mach cargo rustc`
> for passing debugging options to the compiler, etc.

Maybe!  I'm less sure of how some of those commands translate into the
Gecko world, but file bugs and we can work things out from there.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


`mach cargo check` now available

2017-07-05 Thread Nathan Froyd
Cargo recently added a subcommand, `cargo check`, to perform type
checking of Rust crates without the additional step of code
generation.  As code generation tends to dominate Rust compilation
times, `cargo check` speeds up the edit-borrow checker-bewilderment
cycle.

This command is now available for toplevel crates (gkrust,
gkrust-gtest, geckodriver, and mozjs_sys) via the `mach cargo check`
command.  You can run:

$ mach cargo check

which checks gkrust (i.e. everything that goes into libxul).  You can
check other crates:

$ mach cargo check gkrust-gtest

or even multiple crates:

$ mach cargo check gkrust gkrust-gtest

If you have ideas on how this command could be improved, please file
bugs in Core :: Build Config.

Happy hacking!
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Profiling nightlies on Mac - what tools are used?

2017-06-20 Thread Nathan Froyd
On Tue, Jun 20, 2017 at 12:19 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com> wrote:
> On 06/20/2017 08:34 AM, Nathan Froyd wrote:
>> There is some kind of interaction with the underlying machine (see
>> comment 104 in said bug, where the binaries perform identically on a
>> local machine, but differently on infrastructure), but we haven't
>> tracked that down yet.
>
> From comment 104 it seems that it is possible to reproduce the slowdown from
> the unstripped cross builds locally.  Has anyone profiled one of these
> builds comparing them to an unstripped non-cross build to see where the
> additional time is being spent?  I couldn't tell from the bug if this
> investigation has happened.

My understanding is that the slowdown cannot be reproduced on local
developer machines, but can be reproduced on loaner machines from
infra.  I don't think anybody has tried profiling on infra to see
where time differences are.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Profiling nightlies on Mac - what tools are used?

2017-06-20 Thread Nathan Froyd
On Tue, Jun 20, 2017 at 3:59 AM, Julian Seward  wrote:
> On 20/06/17 05:58, Boris Zbarsky wrote:
>> On 6/19/17 11:22 PM, Gregory Szorc wrote:
>>> The decision to strip Nightly builds does not come lightly. Read 1338651
>>> comment 111 and later for the ugly backstory.
>>
>> It's still really confusing to me that not stripping symbols has a significant
>> performance impact.  That's not the case in any other build configuration I'm
>> aware of, and is somewhat surprising from first principles for everything
>> except startup performance.
>>
>> It really would be good to figure out what's actually going on there...
>
> I agree.  Stripping the symbols as a solution makes no sense to me, given
> that they are not expected to be loaded into the process image.
>
> From my scan of 1338651 it appears that we've demonstrated that the same
> preprocessed source is compiled in both cases.  But IIUC (and correct me if
> I'm wrong), we haven't shown that either the same code is generated, nor
> that there is not some different interaction with the underlying machine
> for the two builds.

We have demonstrated that the command lines for linking are basically
identical; there are of course differences in paths.  The native Mac
build was passing a static libc++ archive for linking on the command
line, but we showed that didn't matter by passing the same archive in
the cross-compiled case, which produced no change.

We have looked at the underlying machine code.  It is functionally
identical; jump tables are tagged as data-in-code in one, and there
are some small offset differences in jump instructions (which are due
to slightly different offsets in the binaries themselves), but nothing
else.

We have looked at the binaries themselves (e.g. sections and so
forth).  They are functionally identical; there are some small
differences between them which I think amount to path differences
being baked into the binary.

The native builds are codesigned while the cross ones are not.  This
too makes no difference.

There is some kind of interaction with the underlying machine (see
comment 104 in said bug, where the binaries perform identically on a
local machine, but differently on infrastructure), but we haven't
tracked that down yet.

Your theories are most welcome at this point. :)

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Shipping Headless Firefox on Linux

2017-06-15 Thread Nathan Froyd
On Thu, Jun 15, 2017 at 2:02 PM, Brendan Dahl  wrote:
> Headless will run less of the platform specific widget code and I don't
> recommend using it for platform specific testing. It is targeted more at
> web developers and testing regular content pages. There definitely will be
> cases where regular pages will need to exercise code that would vary per
> platform (such as fullscreen), but hopefully we can provide good enough
> emulation in headless and work to have a consistent enough behavior across
> platforms that it won't matter.

Would it be feasible to use headless mode for mochitests (or reftests,
etc. etc.)?  I know there are probably some mochitests which care
about the cases you mention above (e.g. fullscreen), but if that could
be taken care of in the headless code itself or we could annotate the
tests somehow, it would be a huge boon for running mochitests locally,
or even in parallel.  (We already have some support for running
reftests/crashtests in parallel.)

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New character encoding conversion API

2017-06-15 Thread Nathan Froyd
On Thu, Jun 15, 2017 at 6:32 AM, Henri Sivonen  wrote:
> encoding_rs landed delivering correctness, safety, performance and
> code size benefits as well as new functionality.

Thanks for working on this.

>  * We don't have third-party crates in m-c that (unconditionally)
> require rust-encoding. However, if you need to import such a crate and
> it's infeasible to make it use encoding_rs directly, please do not
> vendor rust-encoding into the tree. Vendoring rust-encoding into the
> tree would bring in another set of lookup tables, which encoding_rs is
> specifically trying to avoid.

Can you file a bug so `mach vendor rust` complains about vendoring
rust-encoding?

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Changing .idl files

2017-06-14 Thread Nathan Froyd
On Wed, Jun 14, 2017 at 2:14 PM, Andrew Swan  wrote:
> Sorry, this was misleading, I meant this as a narrow comment about the
> (still hypothetical!) scenario where something is prototyped as an
> experiment but we're in the process of landing it in m-c along with all the
> other built-in apis.  Of course we can't/don't expect reviewers to be aware
> of every small experiment that is out there.  And again, we've communicated
> to extension developers that they cannot rely on stable internal
> interface.  And finally, I agree with sfink and nfroyd that the only way to
> really be able to depend on experiments at a larger scale is to get them
> into automation.  I personally pledge not to complain about any changes
> that break out-of-tree code until then... :)

Thank you for clarifying this!

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Changing .idl files

2017-06-14 Thread Nathan Froyd
On Wed, Jun 14, 2017 at 12:54 PM, Steve Fink  wrote:
> On 06/14/2017 09:23 AM, Andrew Swan wrote:
>> I would hope that if we have promising or widely used webextension
>> experiments, that the relevant peers would be aware of them when reviewing
>> changes that might affect them but of course changing IDL bindings is only
>> one of a number of ways that a change to central could break an existing
>> experiment.  This is one of the drawbacks of having out-of-tree code, I
>> think its up to us (the webextensions maintainers) to either deal with this
>> or get experiments worked into automation if this becomes a real problem in
>> practice.
>
> Whoa. Experiments aren't tested in automation?

Whoa.  We're going to still have to think about interface compat with
external clients in a post-57 world?  This is the first I've heard of
this.

> Can they be, please? At least snapshotted versions.

+1  Almost anything automation-related would be better than "hope
peers think hard about this".

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


--disable-optimize --enable-debug builds added to infra

2017-06-02 Thread Nathan Froyd
Hi all,

Bug 1341404 has landed on mozilla-inbound, bringing --disable-optimize
--enable-debug builds to our infrastructure on our Tier 1 desktop
platforms.  Folks have complained several times this year that various
changes silently broke this style of build because said style was not
tested.  Ideally such breakage will become a thing of the past.

This is as good a place as any to remind/inform people that adding new
build-only builds with different configuration options and such is
fairly straightforward.  For instance, we're adding builds in bug
1321847 that ensure our configure-enforced minimum Rust requirement
reflects the reality of our Rust code in-tree.  So if you have a build
that you'd like to see added, file a Core :: Build Config bug about it
and we can start the discussion on doing that.

Happy hacking,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: new configure option: --enable-rust-debug

2017-05-12 Thread Nathan Froyd
On Thu, May 11, 2017 at 5:15 PM, Jeff Muizelaar <jmuizel...@mozilla.com> wrote:
> On Fri, Apr 14, 2017 at 10:46 AM, Nathan Froyd <nfr...@mozilla.com> wrote:
>> With these options, you get a browser that runs quickly (i.e. no DEBUG
>> assertions in C++ code), but still lets you debug the Rust code you
>> might be working on, ideally with faster compile times than you might
>> get otherwise.  --enable-debug implies --enable-rust-debug, of course.
>
> From my reading of config/rules.mk and experience it looks like
> --enable-rust-debug does not disable optimizations in Rust code. With
> opt-level=1 Rust still doesn't have a great debugging experience (the
> compiler mostly seems to think things are optimized out).

I think your reading is correct.  Please file a bug in Core :: Build Config.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Running mochitest on packaged builds with the sandbox

2017-05-09 Thread Nathan Froyd
On Mon, May 8, 2017 at 1:26 PM, Alex Gaynor  wrote:
> Top-line question: Do you rely on being able to run mochitests from a
> packaged build (`--appname`)?

I don't think it's a *fundamental* part of development workflows, but
I know folks have found value in being able to run tests--including
but not limited to mochitest--against packaged builds (release
versions, beta versions, whatever).  It would be nice to not break
that, or at least provide obvious escape hatches where possible.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using references vs. pointers in C++ code

2017-05-09 Thread Nathan Froyd
On Tue, May 9, 2017 at 10:39 AM, Boris Zbarsky <bzbar...@mit.edu> wrote:
> On 5/9/17 9:17 AM, Nathan Froyd wrote:
>> The argument I have always heard, Gecko-wise and elsewhere [1], is to
>> prefer pointers for modification
>
> This is for primitive-typed out or inout params, right?

I don't remember hearing any distinction one way or the other.  I
don't think we have a rule written down for Gecko, but a lot of the
things we have historically dealt with are heap-allocated anyway or
have had to extensively deal with XPIDL, so passing around pointers
has seemed natural.  But many recent things (e.g. WebIDL, some of the
editor refactorings, etc.) have started to prefer references even to
heap-allocated things, so we now have to think a little harder.

> In other words, we should prefer "int*" to "int&" for places where we expect
> the callee to modify the int, just like we should prefer "MyClass**" to
> "MyClass*&".  I guess the same for POD structs if we expect people to be
> writing to them wholesale via assignment operators? Not sure.

I think a broader definition of "POD struct" would be required here:
RefPtr and similar are technically not POD, but I *think* you'd
want to require RefPtr* arguments when you expect the smart pointer
to be assigned into?  Not sure.

> But for object-typed things like dom::Element or nsIFrame, it seems better
> to me to pass references instead of pointers (i.e "Element&" vs "Element*")
> for what are fundamentally in params, even though the callee may call
> mutators on the passed-in object.  That is, the internal state of the
> element may change, but its identity will not.

I get this argument: you want the non-nullability of references, but
you don't want the verboseness (or whatever) of NonNull or similar,
so you're using T& as a better T*.  I think I would feel a little
better about this rule if we permitted it only for types that deleted
assignment operators.  Not sure if that's really practical to enforce.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using references vs. pointers in C++ code

2017-05-09 Thread Nathan Froyd
On Tue, May 9, 2017 at 5:58 AM, Emilio Cobos Álvarez  wrote:
> Personally, I don't think that the fact that they're not used as much as
> they could/should is a good argument to prevent their usage, but I don't
> know what's the general opinion on that.

The argument I have always heard, Gecko-wise and elsewhere [1], is to
prefer pointers for modification, because it's clearly signaled at the
callsite that something might be happening to the value.  That would
rule out `T&`, but permit `const T&`.
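
Concretely (illustrative signatures only):

  class Element;

  // In-param that is only read: pass a const reference.
  void PrintName(const Element& aElement);

  // Out-param that the callee writes: pass a pointer, so the call site
  // reads CountChildren(element, &childCount) and the mutation is obvious.
  void CountChildren(const Element& aElement, int* aOutChildCount);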

-Nathan

[1] https://google.github.io/styleguide/cppguide.html#Reference_Arguments
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Representing a pointer to static in XPConnected JS?

2017-05-04 Thread Nathan Froyd
On Thu, May 4, 2017 at 12:32 PM, Henri Sivonen <hsivo...@hsivonen.fi> wrote:
> On Thu, May 4, 2017 at 4:27 PM, Nathan Froyd <nfr...@mozilla.com> wrote:
>> On Thu, May 4, 2017 at 3:08 AM, Henri Sivonen <hsivo...@hsivonen.fi> wrote:
>>> Is it feasible (with reasonably low effort) to introduce a new XPIDL
>>> type that is a pointer to a non-refcounted immutable static object in
>>> C++ and still gets bridged to JS?
>>
>> You can certainly have static objects with what amount to dummy
>> AddRef/Release methods passed through XPIDL (we do this in a couple of
>> places throughout Gecko), but I don't think you can get away with
>> having a non-refcounted object passed through XPIDL.
>
> Do the AddRef/Release need to be virtual?

Yes.  (I'm not sure how XPConnect would discover the refcounting
methods if they were non-virtual.)

Please note that the static objects with dummy AddRef/Release methods
also implement XPConnect interfaces, i.e. QueryInterface, nsresult
virtual methods, etc.
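
The existing pattern looks roughly like this (sketch from memory;
nsISomeInterface and the class name are made up):

  class StaticThing final : public nsISomeInterface {
  public:
    // Dummy refcounting: the object has static storage duration, so AddRef
    // and Release just return fake counts and never delete anything.
    NS_IMETHOD_(MozExternalRefCountType) AddRef(void) override { return 2; }
    NS_IMETHOD_(MozExternalRefCountType) Release(void) override { return 1; }
    NS_IMETHOD QueryInterface(REFNSIID aIID, void** aResult) override;

    // ...nsISomeInterface methods, reflected into JS as usual...
  };

  static StaticThing sTheThing;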

I think you could possibly make your things a WebIDL interface, which
don't require refcounting, and magically make the WebIDL interfaces
work with XPIDL, but I do not know the details there.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Representing a pointer to static in XPConnected JS?

2017-05-04 Thread Nathan Froyd
On Thu, May 4, 2017 at 3:08 AM, Henri Sivonen  wrote:
> Is it feasible (with reasonably low effort) to introduce a new XPIDL
> type that is a pointer to a non-refcounted immutable static object in
> C++ and still gets bridged to JS?

You can certainly have static objects with what amount to dummy
AddRef/Release methods passed through XPIDL (we do this in a couple of
places throughout Gecko), but I don't think you can get away with
having a non-refcounted object passed through XPIDL.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


new configure option: --enable-rust-debug

2017-04-14 Thread Nathan Froyd
Bug 1353810, recently merged to central, adds a new configure option
--enable-rust-debug.  This option enables compiling the Rust code
in-tree with debug-friendly settings (no optimization, multiple
codegen units for faster compiles, etc. etc.) even if you are
compiling with --disable-debug.  The intended use is in a mozconfig
thusly:

ac_add_options --enable-optimize
ac_add_options --disable-debug
ac_add_options --enable-rust-debug

With these options, you get a browser that runs quickly (i.e. no DEBUG
assertions in C++ code), but still lets you debug the Rust code you
might be working on, ideally with faster compile times than you might
get otherwise.  --enable-debug implies --enable-debug-rust, of course.

This configure option is not represented in any of our automation
configs and did require a few strategically placed #ifdef
MOZ_RUST_DEBUG, so it's entirely possible that changes to code on the
C++/Rust boundary will break compiling with this option.  Please be
careful.

If you have ideas on how this config option could be improved, or
ideas on how to improve our developer ergonomics around Rust
compilation generally, please file a bug in Core :: Build Config.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: windows build anti-virus exclusion list?

2017-03-17 Thread Nathan Froyd
On Fri, Mar 17, 2017 at 6:31 PM, Mike Hommey  wrote:
> On Fri, Mar 17, 2017 at 03:53:14PM -0400, Boris Zbarsky wrote:
>> On 3/17/17 3:40 PM, Ted Mielczarek wrote:
>> > We do try to build js/src pretty early in the build
>>
>> We do?  It's always the last thing I see building before we link libxul.
>> Seeing the js/src stuff appearing is how I know my build is about done...
>
> We don't try very hard, but it's also not listed to be last in the
> makefile that drives the build dependencies. In fact, it's in the middle
> of the dependencies for libxul... so I doubt even trying to move it
> there is going to affect the outcome much... At this point, someone
> needs to look at how Make actually orders the things it builds.

It is at least before all the libxul-specific code (i.e. code not in
mozglue/mfbt/external libs/etc.), but apparently that does not help
very much.

> It also doesn't help that Make (or ninja, etc. for that matter) is not
> aware of how long each target is going to take to build.

When this has come up in the context of ninja, the developer's
response has been that you should order your dependencies such that
things that take longer to build should appear earlier in the
dependency list.  I'd guess this is probably the same heuristic make
uses, although our recursive build structure probably doesn't play
very well with that.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Tracking bug for removals after XPCOM extensions are no more?

2017-03-13 Thread Nathan Froyd
We do not.  Bug 1299187 is related to such work, but that bug only
covers unexporting symbols that 3rd party software would access.  bz
has filed a few bugs for removing nsIDOM* methods that only existed
due to 3rd party compat concerns, but I don't think there's been
systematic evaluation of what's just dead weight now.
-Nathan

On Fri, Mar 10, 2017 at 6:40 AM, Henri Sivonen  wrote:
> Do we have a tracking bug for all the stuff that we can and should
> remove once we no longer support XPCOM extensions?
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Should cheddar-generated headers be checked in?

2017-02-23 Thread Nathan Froyd
On Thu, Feb 23, 2017 at 1:25 AM, Henri Sivonen  wrote:
>> Alternately you could just generate it at build time, and we could pass
>> the path to $(DIST)/include in a special environment variable so you
>> could put the header in the right place.
>
> So just https://doc.rust-lang.org/std/env/fn.var.html in build.rs? Any
> naming conventions for the special variable? (I'm inferring from the
> way you said it that DIST itself isn't being passed to the build.rs
> process. Right?)

We already pass MOZ_DIST as $(DIST)/include, fwiw:

http://dxr.mozilla.org/mozilla-central/source/config/rules.mk#941

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: GCC 4.9 now required to build on Linux/Android

2016-12-23 Thread Nathan Froyd
On Fri, Dec 23, 2016 at 6:39 PM,  <gsquel...@mozilla.com> wrote:
> On Saturday, December 24, 2016 at 3:08:21 AM UTC+11, Nathan Froyd wrote:
>> paves the way for being able to compile in C++14
> So, can we start using the good stuff right now, or should we wait for a 
> proper "go" signal?

We'll need to wait for bug 1325632 to land.  While MSVC will happily
compile C++14 code with our current configuration right now, we still
pass -std=c++11 to GCC and Clang.  And there may be other things to do
as well (e.g. I think we need to start defining mozalloc-specific
sized allocation and deallocation functions, and maybe some other
things I have not thought of).

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


tier-2 Windows clang-cl static analysis builds running on inbound

2016-12-23 Thread Nathan Froyd
As per the subject.  This job is strictly for smoketest purposes;
there are no tests being run on the result of the build.

As these are tier-2 builds, build failures will not be cause for
backouts.  However, as clang complains about a wider range of problems
than our current MSVC builds do, and as the clang static analysis also
catches real problems, please take any failures seriously.  We'd like
to promote these to tier 1 in the near future, if that's possible.

This is the culmination of work by many people: Ehsan Akhgari has done
a ton of work with this, Ting-Yu Chou fixed all the (numerous!) issues
that the static analysis turned up on Windows, David Major helped
track down several tricky crashes and outright wrong behavior, and
numerous other people have fixed clang-cl issues in the past.  The
clang-cl developers have been responsive, fixing several clang-cl bugs
that Firefox exposed. Credit is also due to all the folks who have
reviewed patches.  Pete Moore and Dustin Mitchell helped out with
Taskcluster details; Taskcluster is a much more pleasant system to
deal with than buildbot.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: GCC 4.9 now required to build on Linux/Android

2016-12-23 Thread Nathan Froyd
On Fri, Dec 23, 2016 at 11:37 AM, Mike Hoye <mh...@mozilla.com> wrote:
> On 2016-12-23 11:08 AM, Nathan Froyd wrote:
>> Bug 1322792 has landed on inbound, which changes configure to require
>> GCC 4.9 to build; our automation switched over to GCC 4.9 for our
>> Linux builds earlier this week.  (Android builds have been using GCC
>> 4.9 for some time.)
>
> I happened to be poking at the MDN docs when this came in, so I'll update
> them to reflect this.

Thank you!

> I haven't tested our minimum hardware recommendations on Linux - 2GB ram,
> 30GB free space - recently, but I'll test them in the new year.

For 64-bit Linux, I think you need 4GB at the absolute minimum (build
may start swapping), and 8GB would be better.  My rule of thumb is
that you need a minimum of 2GB/thread that you're compiling with on
64-bit Linux (e.g. 16GB on a 4 core/8 thread machine); some of our
autogenerated files take lots of RAM to compile, and you want to have
a little bit left over for applications.

> Anyone know offhand if it's still possible to build on a 32-bit Linux box?
> We haven't been able to build on 32-bit Windows for a while now.

I suspect it's possible, based on halving the RAM from a 64-bit build,
but I haven't tried it.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


GCC 4.9 now required to build on Linux/Android

2016-12-23 Thread Nathan Froyd
Bug 1322792 has landed on inbound, which changes configure to require
GCC 4.9 to build; our automation switched over to GCC 4.9 for our
Linux builds earlier this week.  (Android builds have been using GCC
4.9 for some time.)

This change paves the way for being able to compile in C++14 mode for
all of our Tier-1 platforms, which in turn unlocks using some C++14
features in our codebase:

* binary literals
* digit separators
* generic lambdas
* initialized lambda captures
* return type deduction (not quite sure if we want to use this feature widely)

We did not upgrade to GCC 5 for ABI compatibility reasons (GCC 5
changed the libstdc++ ABI); we did not upgrade to GCC 6 for the same
reason and because Gecko still has a few issues with GCC 6 (bug
1316555 tracks).

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New [must_use] property in XPIDL

2016-08-22 Thread Nathan Froyd
On Mon, Aug 22, 2016 at 7:39 PM, R Kent James  wrote:
> On 8/21/2016 9:14 PM, Nicholas Nethercote wrote:
>> I strongly encourage people to do likewise on
>> any IDL files with which they are familiar. Adding appropriate checks isn't
>> always easy
>
> Exactly, and I hope that you and others restrain your exuberance a
> little bit for this reason. A warning would be one thing, but a hard
> failure that forces developers to drop what they are doing and think
> hard about an appropriate check is just having you set YOUR priorities
> for people rather than letting people do what might be much more
> important work.

It's worth noting that "an appropriate check" may be as simple as:

  mozilla::Unused << MustUseMethod(...);

which effectively retains the status quo of not checking, while
quieting the compiler warning.
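
And when failure actually is recoverable, the real check is barely longer:

  nsresult rv = MustUseMethod(...);
  if (NS_FAILED(rv)) {
    return rv;  // or handle the failure some other way
  }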

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: snake_case C++ in m-c (was: Re: C++ Core Guidelines)

2016-08-15 Thread Nathan Froyd
On Mon, Aug 15, 2016 at 9:56 AM, Henri Sivonen  wrote:
> Relatedly, on the topic of MFBT Range and GSL, under the subject "C++
> Core Guidelines" Jim Blandy  wrote:
>> One of the main roles of MFBT is to provide polyfills for features
>> standardized in C++ that we can't use yet for toolchain reasons (remember
>> MOZ_OVERRIDE?); MFBT features get removed as we replace them with the
>> corresponding std thing.
>
> I'd have expected a polyfill that expects to be swapped out to use the
> naming of whatever it's polyfill for, except maybe for the namespace.
> Since MFBT has mozilla::UniquePtr instead of mozilla::unique_ptr, I
> had understood mozilla::UniquePtr as a long-term Gecko-specific
> implementation of the unique pointer concept as opposed to something
> that's expected to be replaced with std::unique_ptr as soon as
> feasible.
>
> Are we getting value out of going against the naming convention of the
> C++ standard library in order to enforce a Mozilla-specific naming
> style?

Keeping the Gecko naming scheme avoids unwanted name conflicts versus
the standard library, and makes it a bit clearer in code where
prefixes are not present that something Gecko-ish is being used.  The
latter is helpful for things that are named similarly to the standard,
but differ dramatically (mozilla::Vector, mozilla::IsPod).  Removing
the polyfills to use something more standardized requires only
sed/perl-style renaming (mostly).  Manual effort to adjust includes
would be necessary whether we chose Gecko style or standard library
style.

> I suggest that we start allowing snake_case C++ in m-c so that C++
> wrappers for the C interfaces to Rust code can be named with mere
> copy-paste of the Rust method names and so that we don't need to
> change naming style of GSL stuff regardless of whether what's in the
> tree is a Mozilla polyfill for GSL, a third-party polyfill (for legacy
> compilers) of GSL or GSL itself.

I don't follow the argument here for Rust names.  I think it's
reasonable that if one needs to call FFI functions in Rust directly,
then we should use whatever names the Rust library chose for its FFI
interface.  That policy is no different than what we have today for
third-party libraries.  But if one is going to write wrappers around
those FFI functions (resp. third-party libraries), then it seems
equally reasonable that those wrappers should follow Gecko
conventions, and not the conventions of whatever code they are
wrapping.  Again, this is no different than what we have today for
third-party libraries.
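
To make that concrete with entirely made-up names: given a snake_case
FFI surface exported from some Rust crate, the Gecko-facing wrapper
would still use Gecko spelling, and the snake_case names would only
appear at the FFI edge:

  #include <stddef.h>
  #include <stdint.h>

  // Hypothetical C API exported by a Rust crate.
  extern "C" {
    void* thing_new();
    void thing_free(void* aThing);
    size_t thing_do_stuff(void* aThing, const uint8_t* aBuf, size_t aLen);
  }

  // Gecko-style wrapper around it.
  class Thing final {
  public:
    Thing() : mRaw(thing_new()) {}
    ~Thing() { thing_free(mRaw); }

    size_t DoStuff(const uint8_t* aBuf, size_t aLen) {
      return thing_do_stuff(mRaw, aBuf, aLen);
    }

  private:
    void* mRaw;
  };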

For GSL polyfills, I think that we should continue to follow the MFBT
conventions set thus far and use Gecko style for naming.  But that is
partly skepticism about how much in GSL will actually get used and/or
how quickly GSL would get standardized and provided by our compiler
vendors.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rust 1.10 (to be) required to build Firefox with --enable-rust

2016-08-10 Thread Nathan Froyd
It does not...though judging from:

https://hg.mozilla.org/mozilla-build/file

it looks like we don't include Rust in MozillaBuild currently
regardless, as Rust is an optional build dependency at this point.  (I
could be mistaken about Rust's inclusion, but clicking through files
revealed no obvious mentions of Rust.)

I filed bug 1294083[1] to track/discuss.

Thanks
-Nathan

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1294083

On Wed, Aug 10, 2016 at 10:56 AM, Dave Townsend <dtowns...@mozilla.com> wrote:
> Does MozillaBuild include the appropriate version of rust?
>
> On Wed, Aug 10, 2016 at 6:18 AM, Nathan Froyd <nfr...@mozilla.com> wrote:
>>
>> TL; DR: As the subject says, although the patch is not yet on
>> mozilla-central.  You may want to pre-emptively update your Rust
>> before the build system requires you to.
>>
>> We've not been particularly aggressive with requiring new Rust
>> versions, but with the release of 1.10, we wanted to start compiling
>> Firefox with '-C panic=abort'.  Bug 1268727 tracks that work; patches
>> for said bug have landed on mozilla-inbound and seem likely to merge
>> to mozilla-central later today/this week.  The configure machinery
>> will check the Rust version that you have and refuse to proceed
>> further if you don't have a new enough version.
>>
>> Thanks,
>> -Nathan
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Rust 1.10 (to be) required to build Firefox with --enable-rust

2016-08-10 Thread Nathan Froyd
TL; DR: As the subject says, although the patch is not yet on
mozilla-central.  You may want to pre-emptively update your Rust
before the build system requires you to.

We've not been particularly aggressive with requiring new Rust
versions, but with the release of 1.10, we wanted to start compiling
Firefox with '-C panic=abort'.  Bug 1268727 tracks that work; patches
for said bug have landed on mozilla-inbound and seem likely to merge
to mozilla-central later today/this week.  The configure machinery
will check the Rust version that you have and refuse to proceed
further if you don't have a new enough version.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Rust code in mozilla-central now builds via cargo

2016-08-08 Thread Nathan Froyd
On Mon, Aug 8, 2016 at 6:41 AM, Andreas Tolfsen  wrote:
> This is great, but as of pulling central this morning I can’t build
> because configure complains about missing cargo.  I’ve filed
> https://bugzilla.mozilla.org/show_bug.cgi?id=1293219 about this.

Thanks for the report, I'll take a look.  I can't seem to reproduce,
but I will investigate.

If other people are running into this, posting your mozconfig,
configure output, and $OBJDIR/config.log in the bug would be helpful.

> It doesn’t make a difference if I remove `ac_add_options
> --enable-rust`, but I guess this might now be deprecated?

For avoidance of doubt: it was not the intent of this patchset to make
--enable-rust mandatory.  We're not to that point (yet).

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: [dev-servo] 25% Improvement in page load time!

2016-06-27 Thread Nathan Froyd
On Mon, Jun 27, 2016 at 2:01 PM, Jet Villegas  wrote:
> Shing Lyu from our Taipei Layout team reports a 25% page load improvement
> in Servo from moving to a hashtable lookup from an iterator search of the
> public suffix list ( https://publicsuffix.org/ )
>
> Should Gecko do the same thing and replace our binary search method?
> https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/nsSiteSecurityService.cpp#917

Gecko's public suffix code lives over in netwerk/dns/:

https://hg.mozilla.org/mozilla-central/file/tip/netwerk/dns/nsEffectiveTLDService.cpp#l51

Bug 1247835 [1] changed its hashtable usage to a binary search earlier
this year and we have not noticed any negative fallout.  Performance
measurements in the bug suggest the binary search might actually be
slightly faster, and the current structure enables us to easily share
the lookup structures between processes, as well as being smaller than
the previous hashtable scheme.  (If we switched to downloading public
suffix lists--bug 1083971 [2]--the sharing would presumably go away,
but we'd still get the size wins.)
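
For the curious, the lookup is conceptually just a binary search over a
sorted, read-only table.  A minimal sketch (illustrative only, not the
actual nsEffectiveTLDService code):

  // Illustrative sketch only -- not the actual nsEffectiveTLDService code.
  #include <algorithm>
  #include <cstring>

  struct SuffixEntry { const char* suffix; };

  // Sorted at build time; a read-only array is easy to share between
  // processes and carries no per-entry hashtable overhead.
  static const SuffixEntry kSuffixes[] = {
    { "co.uk" }, { "com" }, { "org" },
  };

  static bool HasSuffix(const char* aCandidate) {
    auto cmp = [](const SuffixEntry& aEntry, const char* aKey) {
      return strcmp(aEntry.suffix, aKey) < 0;
    };
    const SuffixEntry* end =
      kSuffixes + sizeof(kSuffixes) / sizeof(kSuffixes[0]);
    const SuffixEntry* it = std::lower_bound(kSuffixes, end, aCandidate, cmp);
    return it != end && strcmp(it->suffix, aCandidate) == 0;
  }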

-Nathan

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1247835
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1083971
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++11 standard library support enabled on all Tier-1 platforms

2016-05-27 Thread Nathan Froyd
On Fri, May 27, 2016 at 6:34 AM, Kurt Roeckx <k...@roeckx.be> wrote:
> On 2016-05-27 03:50, Nathan Froyd wrote:
>> Given the standard library's pervasive use of exceptions, and our
>> aversion to the same, if you are using a standard library header
>> that's not listed here:
>
> Are there plans to start using C++ exception?  The wiki seems to suggest
> there are plans, but it was last modified in 2008.

There was a discussion about this topic recently that I thought was on
dev-platform, but some searching doesn't turn it up.  The consensus I
remember was that it was semi-desirable, but auditing all of the
existing code for exception safety was a long, tedious task.  We'd
have to see what the size and runtime penalties looked like as well.

I don't think anybody's planning on tackling this anytime soon.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: C++11 standard library support enabled on all Tier-1 platforms

2016-05-27 Thread Nathan Froyd
On Thu, May 26, 2016 at 10:08 PM, Mike Hommey <m...@glandium.org> wrote:
> On Thu, May 26, 2016 at 09:50:56PM -0400, Nathan Froyd wrote:
>> This change also means that any non-Tier-1 platforms (FxOS, for
>> instance) that don't provide a C++11 standard library will probably
>> break in very short order as various code is removed from the tree.
>
> When do we actively remove support for stlport (which b2g still uses,
> and which is still an option of --with-android-cxx-stl)?

"Soon"?  I'll probably do this next week.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


C++11 standard library support enabled on all Tier-1 platforms

2016-05-26 Thread Nathan Froyd
[CC mobile-firefox-dev and dev-fxos for notes below.]

Bug 1246743 (Mac libc++ support) and bug 1273934 (Android libc++
support for local development builds) have landed on mozilla-central.
This change means that all of our Tier-1 platforms now have a
more-or-less conformant C++11 standard library.  We can therefore
begin removing a decent amount of code that we required to support
pre-C++11 standard libraries and use standard facilities instead.  You
are still strongly encouraged to use Gecko-specific data structures
(nsTArray, ns{C,}String, etc.) in preference to the standard library
ones, unless you need to interface with a third-party library.
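
As a minimal illustration of that guidance (names here are made up for
the example): keep Gecko containers internally and only convert at the
third-party boundary.

  // Illustrative only; ThirdPartyApi is a hypothetical external function.
  #include <cstdint>
  #include <vector>
  #include "nsTArray.h"

  void ThirdPartyApi(const std::vector<uint8_t>& aBytes);

  void Example(const nsTArray<uint8_t>& aBytes) {
    // Internal Gecko code keeps using nsTArray; copy into a std::vector
    // only right at the third-party boundary.
    std::vector<uint8_t> copy(aBytes.Elements(),
                              aBytes.Elements() + aBytes.Length());
    ThirdPartyApi(copy);
  }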

Given the standard library's pervasive use of exceptions, and our
aversion to the same, if you are using a standard library header
that's not listed here:

http://dxr.mozilla.org/mozilla-central/source/config/stl-headers

you need to ask for review to get that header added to the list, per
policy in that file.

This change also means that any non-Tier-1 platforms (FxOS, for
instance) that don't provide a C++11 standard library will probably
break in very short order as various code is removed from the tree.

Developers who work on the C++ side of Firefox for Android are
strongly encouraged to upgrade to an r11 NDK; the r10 NDK should work,
but test results in the presence of crashes might be slightly wonky.

Enjoy!
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Tier-1 for Linux 64 Debug builds in TaskCluster on March 14 (and more!)

2016-05-20 Thread Nathan Froyd
On Fri, May 20, 2016 at 11:18 AM, Armen Zambrano G.  wrote:
> On 2016-05-19 08:29 PM, Mike Hommey wrote:
>> It's also not possible to *trigger* new TC jobs on treeherder ; like,
>> pushing with no try syntax and filling what you want with "Add new
>> jobs". Or using "Add new job" after realizing you forgot a job in your
>> try syntax.
>
> martianwars is working on it:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1254325
>
> I believe we should have it by the end of June.

Why are we moving to a new system when it's lacking user-facing
functionality that the old system had (T pushes on try comes to mind
also, but I believe that's been fixed...perhaps there are others,
too)?  I can believe taskcluster-based jobs bring a whole host of
benefits (having things in-tree seems fantastic, for one), but it's
hard to understand the enthusiasm for pushing forward when we have
useful features disappearing.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: What is "Process Type = content" in "mozilla crash reports"?

2016-05-19 Thread Nathan Froyd
On Thu, May 19, 2016 at 1:58 PM, Tobias B. Besemer
 wrote:
> Question is:
> If Mozilla will really "Backout MSVC 2015 from aurora" because 2 people are 
> not able to configure their PCs right in BIOS ???
> https://bugzilla.mozilla.org/show_bug.cgi?id=1270664

Perhaps those people deliberately disabled SSE in their BIOS for
testing purposes.  Which is valuable, because very few Firefox
developers are testing on non-SSE capable CPUs.

In any event, you misunderstand the cause here.  We're not backing out
MSVC changes just because of these two users.  We're backing out MSVC
changes because other infrastructure (the update server, the
installer, etc.) isn't yet prepared for the SSE-required world we
appear to be moving towards.  Making those changes deliberately in 49,
rather than being surprised by it in 48, ensures a better experience
for everyone (e.g. Firefox doesn't mysteriously start crashing when
upgrades happen).

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


libstdc++ debug mode enabled in debug builds

2016-05-11 Thread Nathan Froyd
libstdc++ has a "debug" mode:

https://gcc.gnu.org/onlinedocs/libstdc++/manual/debug_mode.html

which adds checks for iterator safety and algorithm preconditions.
Bug 1270832, recently landed on inbound, turns this mode on for debug
builds via our wrapped C++ headers.  Please file a bug if you find
that debug mode is not getting turned on when you think it should.
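
To give a flavor of what the checks catch (illustrative example, not
from our tree): using an iterator invalidated by a reallocation now
aborts with a diagnostic instead of silently reading freed memory.

  #include <vector>

  int UseInvalidatedIterator() {
    std::vector<int> v{1, 2, 3};
    auto it = v.begin();
    v.push_back(4);   // may reallocate and invalidate `it`
    return *it;       // caught at runtime by a _GLIBCXX_DEBUG build
  }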

Safer hacking,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: libc++ enabled for Android / C++11 standard library update

2016-05-05 Thread Nathan Froyd
On Thu, May 5, 2016 at 5:36 PM,   wrote:
> Out of interest, what is the situation on Linux? Which C++11 standard library 
> will you be using? Will you be shipping your own copy as a shared library, or 
> will you be using the system one?  If I understand correctly, I assume you 
> cannot link against the libstdc++ that ships with GCC 4.8.5 as the libstdc++ 
> C++11 ABI did not stabilise until GCC 5.X (meaning your binaries will not 
> work properly unless the distro where you are running ships exactly the same 
> libstdc++)?

We use libstdc++ on Linux, with special hacks so that our binaries
will actually run against older shared libstdc++ than the headers we
compile with.  (It's possible that libstdc++ prior to GCC 5 isn't
completely C++11 compliant, but it's probably close enough for our
current purposes.)  See e.g.

http://dxr.mozilla.org/mozilla-central/source/build/unix/stdc++compat/stdc++compat.cpp#27
http://dxr.mozilla.org/mozilla-central/source/config/gcc-stl-wrapper.template.h#55

for an idea of what we do.  We haven't tried crossing the GCC 5 ABI
breakage yet.
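
The gist, as a very rough sketch (not the real stdc++compat code, and
the symbol name below is made up): compile against the new headers, but
provide our own definitions for the handful of symbols that only newer
libstdc++.so versions export, so the dynamic loader never has to find
them in an old system library.

  // Sketch only; the symbol is hypothetical.
  #include <stdexcept>

  namespace std {
  void __throw_some_new_error(const char* aWhat) {
    throw runtime_error(aWhat);
  }
  }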

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: libc++ enabled for Android / C++11 standard library update

2016-05-04 Thread Nathan Froyd
On Wed, May 4, 2016 at 1:12 PM, Henri Sivonen  wrote:
> Cool! Thank you!
>
> What impact, if anything, does this have on
> https://bugzilla.mozilla.org/show_bug.cgi?id=1208262 (adopting
> Microsoft's Guidelines Support Library or an approximation thereof)?

It gets us closer to a world where the standard library capabilities
assumed by the Guidelines Support Library (GSL) exist.  The bug
comments suggest that C++14 compiler support is required; judging from
https://developer.mozilla.org/en-US/docs/Using_CXX_in_Mozilla_code ,
the blocker on the C++14 compiler front is GCC 4.9, and we only
require GCC 4.8.  (I'm assuming that the C++14 features that MSVC 2015
doesn't implement are also features that don't get used in the GSL.)
And then after that, we need to ensure our standard libraries have all
the C++14 bits required (OS X and Android may not).  And then we can
have a conversation about whether to import things wholesale.

In the meantime, writing our own polyfills seems reasonable.  I
haven't looked in detail at GSL, but assuming the comparison table at
https://github.com/martinmoene/gsl-lite#features is a good overview:

* not_null, maybe_null; njn is working on these in bug 1266651.
* stack_array, dyn_array: our array classes can serve here, with subclassing (?)
* finally: mfbt has ScopeExit.h instead (see the sketch after this list).
* narrow_cast: mfbt has Casting.h instead.
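
To make the ScopeExit point concrete, a minimal sketch of using it as a
`finally` stand-in:

  #include <stdio.h>
  #include "mozilla/ScopeExit.h"

  void Example() {
    FILE* f = fopen("/tmp/example", "r");
    if (!f) {
      return;
    }
    // Runs fclose(f) on every exit path from this scope, like GSL's finally.
    auto guard = mozilla::MakeScopeExit([&] { fclose(f); });

    // ... use f; early returns are now safe ...
  }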

We are missing owner, zstring et al, and span; span seems
like the most useful out of those.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: libc++ enabled for Android / C++11 standard library update

2016-05-03 Thread Nathan Froyd
On Tue, May 3, 2016 at 10:57 AM, Nathan Froyd <nfr...@mozilla.com> wrote:
> As the subject suggests.  It is also strongly suggested that you now
> use NDK r11b or above for your local Android development; this is what
> automation uses and what |mach bootstrap| installs.

It's worth pointing out two things I neglected in my original email:

1. This change is only on inbound right now; ideally it'll be on
central by tomorrow morning.  So |mach bootstrap| won't actually
install r11b until these changes are merged.
2. The local build still defaults to using our in-tree stlport
(non-C++11), which shouldn't cause too many problems, but it's
something to be aware of.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


libc++ enabled for Android / C++11 standard library update

2016-05-03 Thread Nathan Froyd
As the subject suggests.  It is also strongly suggested that you now
use NDK r11b or above for your local Android development; this is what
automation uses and what |mach bootstrap| installs.

This change leaves Mac as our only tier-1 platform without a C++11
standard library.

Given the recent announcement that Mac 10.6-10.8 support will be
dropped, the path to moving Mac to a C++11 standard library is much
clearer.  Bug 1246743 will be repurposed for moving Mac to use
-stdlib=libc++, and the changeover should happen in short order.  Once
that's done, there are a large number of polyfills and non-C++11
workarounds that need to be removed, and I'm happy to review those
sorts of patches.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: MOZ_WARN_UNUSED_RESULT has been renamed as MOZ_MUST_USE

2016-04-29 Thread Nathan Froyd
On Fri, Apr 29, 2016 at 7:54 AM, Gerald Squelart  wrote:
> Now, for maximum defensiveness, shouldn't we go even further?
>
> How about: Make 'MOZ_MUST_USE' implicit for all functions/methods (except 
> void of course, probably methods returning T&, and maybe more as they come 
> up).
> When a result is not needed somewhere, use the 'Unused << foo()' idiom.
> And if a function's return is really not important, then mark it with 
> MOZ_MAY_IGNORE_RESULT (or similar).

This is a noble goal, but there is an enormous amount of code that
would need to be modified to make this feasible.  Perhaps if you
exempted nsresult from MOZ_MUST_USE types.
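
For reference, the current opt-in pattern looks roughly like this
(illustrative sketch; exact header spellings may differ slightly):

  #include "mozilla/Attributes.h"
  #include "mozilla/Unused.h"
  #include "nsError.h"

  MOZ_MUST_USE nsresult DoSomethingFallible() { return NS_OK; }

  void Caller() {
    // With warnings-as-errors, silently dropping the result fails to build:
    //   DoSomethingFallible();
    // Explicitly acknowledging that we don't care is still allowed:
    mozilla::Unused << DoSomethingFallible();
  }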

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


multiple Rust crates are now supported

2016-04-21 Thread Nathan Froyd
Bug 1163224 has landed on inbound, which means that Gecko builds with
--enable-rust now support multiple Rust crates.  This change is
intended to make the lives of people developing Rust components
easier, and it comes with several caveats:

1) There is zero support for interdependencies between crates, so you
have to structure your crate as one big crate that includes any
dependencies, rather than several separate crates, as is the norm.
This is clearly suboptimal, and it will be fixed.  I think it's an
open question exactly how we're going to integrate multiple crates and
external projects anyway, so feel free to experiment!

2) We do not have Rust support on all of our Tier 1 platforms (Android
is still being worked on), so actually depending on Rust code
everywhere is still not possible.

3) Due to bug 1178897, Rust code uses a completely different memory
allocator than the rest of Gecko.  We therefore don't have any
visibility into Rust's memory allocations through things like
about:memory, using Rust code worsens fragmentation issues, and there
are also edge cases with allocating in C++ and freeing in Rust (or
vice versa).  This is obviously something we're going to fix, ideally
soon.

We --enable-rust on all of our Tier 1 desktop platforms, but given 2)
and 3) above, it seems best to limit the amount of Rust code we
actually ship.  So if you want to land Rust components in-tree right
now, I'd recommend gating your component behind an --enable-shiny
configure option.  Ideally 2) and 3) will be fixed in short order, 1)
will be ironed out, and then the real fun can begin!

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Please use "web-platform-tests --manifest-update" for updating wpt tests

2016-04-20 Thread Nathan Froyd
On Wed, Apr 20, 2016 at 8:59 AM, James Graham  wrote:
> On 20/04/16 13:53, Josh Matthews wrote:
>> Servo has a script [1] that runs on the build machine that executes
>> --manifest-update and checks whether the contents of MANFEST.json is
>> different before and after. We could do the same for Gecko and make it
>> turn the job orange on treeherder.
>
> I plan to add this, along with the lint from upstream, once it is easy to
> add specific lint jobs to treeherder; aiui a general framework for adding
> this kind of job is currently in progress.

We can already do this, no?  We have an ESLint job in tree:

https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/tasks/branches/base_jobs.yml#276

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-04-04 Thread Nathan Froyd
Re-ping on this thread.  It would be really useful to have a decision
one way or the other for figuring out exactly how a C++11 STL on OS X
is going to work.

-Nathan

On Thu, Mar 24, 2016 at 12:51 PM, Ralph Giles  wrote:
> Discussion seems to have wound down. Is there a decision on this?
>
>  -r
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing the Chromium event loop

2016-03-31 Thread Nathan Froyd
On Thu, Mar 31, 2016 at 8:08 AM, Gabriele Svelto  wrote:
> On this topic, did anyone experiment with trying to lighten the
> synchronization burden when handling nsEventQueues? Both nsThread and
> nsThreadPool acquire a mutex each time we need to get the next runnable;
> we could pull out all pending runnables every time we acquire the lock
> (up to a predefined maximum) to amortize the synchronization cost. In my
> measurements mutex-handling was also quite expensive on low-end ARM
> cores, not so much on x86 as long as the mutex was not contended.

There was some optimization work done in bug 1195767 and bug 1202497
to reduce the amount of locking we do for both nsThreadPool and
nsThread, respectively, and to use signaling on internal condition
variables rather than broadcast.  I know the changes were significant
in the case of nsThreadPool on some platforms; in the nsThread case we
are obviously doing less work, but I didn't try to measure the
savings.  I don't think we've experimented with trying to pull out
multiple events per lock acquisition (which sounds pretty tricky to do
such that you're actually saving work).
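
For what it's worth, the naive version of that batching idea looks
something like the following (illustrative sketch, not nsThread code);
the hard part is doing this without regressing responsiveness or
priority handling.

  // Drain up to aMax pending runnables under a single lock acquisition,
  // then run them all outside the lock.
  #include <deque>
  #include <memory>
  #include <mutex>
  #include <vector>

  struct Runnable { virtual ~Runnable() = default; virtual void Run() = 0; };

  class BatchingQueue {
  public:
    void Put(std::unique_ptr<Runnable> aEvent) {
      std::lock_guard<std::mutex> guard(mLock);
      mEvents.push_back(std::move(aEvent));
    }

    // Amortize the lock cost: one acquisition for up to aMax events.
    void RunBatch(size_t aMax) {
      std::vector<std::unique_ptr<Runnable>> batch;
      {
        std::lock_guard<std::mutex> guard(mLock);
        while (!mEvents.empty() && batch.size() < aMax) {
          batch.push_back(std::move(mEvents.front()));
          mEvents.pop_front();
        }
      }
      for (auto& event : batch) {
        event->Run();
      }
    }

  private:
    std::mutex mLock;
    std::deque<std::unique_ptr<Runnable>> mEvents;
  };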

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Removing the Chromium event loop

2016-03-30 Thread Nathan Froyd
On Wed, Mar 30, 2016 at 2:34 PM, Benjamin Smedberg
 wrote:
> I've been unhappy with the fact that our event loop uses refcounted objects
> by default. *Most* runnables are pure-C++ and really don't need to be
> refcounted/scriptable.

I've been thinking about this too.  gfx has a separate thread pool
that was created partly because of the desire to be Gecko-free and
partly because of the overhead that nsIRunnable has.  It would be nice
to eliminate one of those objections.  Making this change would also
bring down bloat from vtables and essentially-useless methods.

> I'm asking you to consider unifying these two things by making our event
> loop work more like chromium and just using c++ objects without a refcount
> by default? Then to post a scriptable event to an event loop you'd have to
> have it own a separate scriptable object.

I'd like to make this happen if Kyle doesn't.
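
As a purely illustrative sketch of the shape this could take (PostTask
here is hypothetical): ownership moves into the queue via UniquePtr
instead of AddRef/Release, and the only vtable entry a plain task needs
is Run().

  #include "mozilla/UniquePtr.h"

  class Task {
  public:
    virtual ~Task() = default;
    virtual void Run() = 0;
  };

  // Hypothetical posting API, for the sake of the example.
  void PostTask(mozilla::UniquePtr<Task> aTask);

  class DoWork final : public Task {
  public:
    void Run() override { /* ... */ }
  };

  void Example() {
    PostTask(mozilla::MakeUnique<DoWork>());
  }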

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to deprecate: MacOS 10.6-10.8 support

2016-03-10 Thread Nathan Froyd
On Thu, Mar 10, 2016 at 5:25 PM, Mike Hommey  wrote:

> On Thu, Mar 10, 2016 at 01:03:43PM -0500, Benjamin Smedberg wrote:
> > This will affect approximately 1.2% of our current release population.
> Here
> > are the specific breakdowns by OS version:
> >
> > 10.6
> >   0.66%
> > 10.7
> >   0.38%
> > 10.8
> >   0.18%
>
> It's unfair to mention those populations by percentage of the global
> Firefox population. What are those percentages relative to the number of
> OSX users? ISTR 10.6 represented something like 25% of the OSX users,
> which is a totally different story (but maybe I'm mixing things with
> Windows XP).
>

I heard much the same thing from the media team when I suggested getting
rid of 10.6 support to make our C++ standard library situation easier.
CC'ing Anthony.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


nsAutoArrayPtr has been removed, please use UniquePtr<T[]> instead

2016-02-25 Thread Nathan Froyd
As the subject says, via bug 1229985.

Please also be advised that nsAutoPtr will suffer a similar fate in the
not-too-distant future, so writing new code with UniquePtr will make your
code better and the removal not take any longer than it needs to.
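
The mechanical translation is straightforward; roughly (sketch):

  #include <cstddef>
  #include <cstdint>
  #include "mozilla/UniquePtr.h"

  void Example(size_t aLength) {
    // Before: nsAutoArrayPtr<uint8_t> buf(new uint8_t[aLength]);
    mozilla::UniquePtr<uint8_t[]> buf = mozilla::MakeUnique<uint8_t[]>(aLength);
    buf[0] = 42;  // operator[] works on the array specialization
    // The array is released with delete[] when buf goes out of scope.
  }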

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Gecko/Firefox stats and diagrams wanted

2016-02-09 Thread Nathan Froyd
On Tue, Feb 9, 2016 at 12:31 PM, Nicholas Alexander <nalexan...@mozilla.com>
wrote:

> I also wanted to try to find some diagrams to show how Firefox and Gecko
>> work/their architecture, from a high level perspective (not too insane a
>> level of detail, but reasonable).
>>
>
> Nathan Froyd worked up a very high-level slide deck for his onboarding
> sessions; they're amazing.  I'm not sure how public those slides are, so
> I've CCed him and he may choose to link to those.  I would really love to
> see these worked up into a document rather than a presentation.
>

The presentation is public:

https://docs.google.com/presentation/d/1ZHUkNzZK2TrF5_4MWd_lqEq7Ph5B6CDbNsizIkBxbnQ/edit?usp=sharing

I've tried to include links into wikis and whatnot where possible.  We have:

https://wiki.mozilla.org/Gecko:Overview

which includes jumping-off points for exploration of major subsystems, as
well.

If folks have suggestions of diagrams, links, etc. that should go in, I'd
love to hear about them.

Thanks,
-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Use of C++11 std::unique_ptr for the WOFF2 module

2016-02-01 Thread Nathan Froyd
On Mon, Feb 1, 2016 at 4:29 AM, Frédéric Wang  wrote:

> I tried updating the source code of WOFF2 to the latest upstream
> version. Unfortunately, try server builds fail on OSX and mobile devices
> because the C++11 class std::unique_ptr does not seem to be available.
> IIUC some bugzilla entries and older threads on this mailing list, at
> the moment only some of the C++11 features are usable in the mozilla
> build system. Does any of the build engineer know whether
> std::unique_ptr can be made easily available? Or should we just patch
> the WOFF2 library to use of std::vector (as was done in earlier version)


We're working on moving all of our platforms to use a C++11-ish standard
library.  For std::unique_ptr, at least, the best tack is to write a small
polyfill based on mfbt/UniquePtr.h.  (It's not clear to me how your
suggestion with std::vector applies here.)  If our UniquePtr isn't a
drop-in replacement for std::unique_ptr, that's worthy of a bug report.
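
By "small polyfill" I mean something along these lines (sketch only;
the alias name and patching approach are just for illustration): patch
the vendored sources to use a local alias that maps onto mfbt's
UniquePtr where std::unique_ptr isn't available.

  #include "mozilla/UniquePtr.h"

  namespace woff2 {
  // Hypothetical alias the vendored code would be patched to use instead
  // of spelling std::unique_ptr directly.
  template <typename T>
  using unique_ptr = mozilla::UniquePtr<T>;
  }  // namespace woff2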

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Moving FirefoxOS into Tier 3 support

2016-01-25 Thread Nathan Froyd
On Mon, Jan 25, 2016 at 12:30 PM, Ehsan Akhgari 
wrote:

> For example, for a long time b2g partners held back our minimum supported
> gcc.  Now that there are no such partner requirements, perhaps we can
> consider bumping up the minimum to gcc 4.8?  (bug 1175546)
>
> I'm sure others have similar examples to fill in.
>

One current example is b2g's reliance on stlport and changing the build to
support a modern C++ library like libc++.

-Nathan
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

