Rust documentation update

2020-07-08 Thread Nicholas Nethercote
Hi,

I have updated the in-tree documentation on writing Rust code:

   - Adding Rust code
   
   - Writing Rust code
   
   - Testing & debugging Rust code
   

These links can also be found on the Oxidation wiki.

These docs are likely to be improved and expanded further in the future,
but for now they represent a collation of all the Rust-in-Firefox
documentation that I'm aware of. If you want to write Rust code in Firefox,
they are a good place to start.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Testing Rust code in tree

2020-06-25 Thread Nicholas Nethercote
I have written some draft docs about testing and debugging Rust code, which
can be seen in bug 1647987. I have done my best to summarize the important
info mentioned in this thread, plus other stuff.
I'd be happy to hear feedback.

Relatedly, in bug 1646280 I recently improved the docs about building,
linking and vendoring Rust crates. The updated docs are here.

Finally, I recently ran a questionnaire asking Mozilla engineers about pain
points when using Rust. If you are interested in seeing the results or the
in-progress planning of follow-up work, please contact me off-list.

Nick

On Tue, 12 May 2020 at 08:37, Dave Townsend  wrote:

> Do we have any standard way to test in-tree Rust code?
>
> Context: We're building a standalone binary in Rust that in the future will
> be distributed with Firefox and of course we want to test it. It lives
> in-tree and while we could use something like xpcshell to drive the
> produced executable and verify its effects it would be much nicer to be
> able to use Rust tests themselves. But I don't see a standard way to do
> that right now.
>
> Is there something, should we build something?


Re: Testing Rust code in tree

2020-05-11 Thread Nicholas Nethercote
The Oxidation wiki has this entry in its FAQ:


*Q:* How are in-tree Rust crates tested?

*A:* In general we don't run tests for third-party crates; the assumption
is that these crates are sufficiently well-tested elsewhere. (Which means
that large test fixtures should be removed when vendoring third-party
crates, because they just bloat mozilla-central.) Mozilla crates can be
tested with cargo test by adding them to RUST_TESTS in
toolkit/library/rust/moz.build. Alternatively, you can write a GTest that
uses FFI to call into Rust code.


I don't know how accurate or complete that is, but hopefully it's a start.
Feel free to update it if necessary.
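As a concrete sketch of the RUST_TESTS approach mentioned in the FAQ entry
above (the crate name `my_test_crate` is a hypothetical placeholder; check
the real moz.build file for the current syntax):

```python
# Hypothetical excerpt from toolkit/library/rust/moz.build.
# Crates listed in RUST_TESTS have their Rust #[test] functions
# run via `cargo test` as part of the build's test jobs.
RUST_TESTS = [
    "my_test_crate",  # hypothetical Mozilla crate with unit tests
]
```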

Nick

On Tue, 12 May 2020 at 08:37, Dave Townsend  wrote:

> Do we have any standard way to test in-tree Rust code?
>
> Context: We're building a standalone binary in Rust that in the future will
> be distributed with Firefox and of course we want to test it. It lives
> in-tree and while we could use something like xpcshell to drive the
> produced executable and verify its effects it would be much nicer to be
> able to use Rust tests themselves. But I don't see a standard way to do
> that right now.
>
> Is there something, should we build something?


Re: New and improved stack-fixing

2020-05-05 Thread Nicholas Nethercote
On Wed, 6 May 2020 at 07:22, Markus Stange  wrote:

>
>
>> - On opt builds you won't get symbolication on any platform.
>>
>
> Are there plans to make this work? Does this apply to "optimized debug
> builds"?
>

This was in relation to tests, and it is due to this simple check.

As for why that check is there... do opt builds produce any stack traces in
tests? Normal assertions aren't enabled on opt builds, but
diagnostic/release assertions are. I can't remember off the top of my head
if they produce stack traces in opt builds, and likewise with crashes.

If stack traces are produced for any of the above cases, and debug info is
present, then fix_stacks.py should be able to fix the stack traces
correctly.

Nick


Re: New and improved stack-fixing

2020-04-28 Thread Nicholas Nethercote
On Wed, 29 Apr 2020 at 02:32, ISHIKAWA,chiaki  wrote:

>
> Ouch, I am using -gsplit-dwarf to make gdb startup time faster...
>
> -gsplit-dwarf is set manually on my bash script before invoking ./mach
> configure and build.
>

That would be the problem!

> Do you think digging into the GitHub repo of fix-stacks gets me anywhere
> near the solution?
>
> Or do I have to do major surgery in the innards of the libraries used by
> symbolic and/or goblin?
>

https://github.com/mozilla/fix-stacks/issues/22 is the open issue for
separate debug info. There is some information in that issue about what is
necessary. I think copying over the old logic from fix_linux_stack.py would
be a moderate-sized task.

Nick


Re: New and improved stack-fixing

2020-04-28 Thread Nicholas Nethercote
On Tue, 28 Apr 2020 at 17:50, ISHIKAWA,chiaki  wrote:

>
>   0:28.98 GECKO(727384) #01: ???
> (/NEW-SSD/moz-obj-dir/objdir-tb3/dist/bin/libxul.so + 0x5ab5921)
>   0:28.99 GECKO(727384) #02: ???
> (/NEW-SSD/moz-obj-dir/objdir-tb3/dist/bin/libxul.so + 0x5ab7f34)
>   0:28.99 GECKO(727384) #03: ???
> (/NEW-SSD/moz-obj-dir/objdir-tb3/dist/bin/libxul.so + 0x590de76)
>   0:28.99 GECKO(727384) #04: ???
> (/NEW-SSD/moz-obj-dir/objdir-tb3/dist/bin/libxul.so + 0x5a94c2f)
>   0:28.99 GECKO(727384) #05: ???
> (/NEW-SSD/moz-obj-dir/objdir-tb3/dist/bin/libxul.so + 0x5ab1589)
>
> [...]
> The file is mozilla/tools/rb/fix_stacks.py:
>
> # Matches lines produced by MozFormatCodeAddress(), e.g.
> # `#01: ???[tests/example +0x43a0]`.
> line_re = re.compile("#\d+: .+\[.+ \+0x[0-9A-Fa-f]+\]")
>
> You see the re (Regular Expression) does not match the line in mochitest
> log produced by NS_ASSERTION().
>
> What gives?
>

The example at the top of https://github.com/mozilla/fix-stacks/ explains
this.
```
For example, a line like this in the input:

#01: ???[tests/example +0x43a0]

is changed to something like this in the output:

#01: main (/home/njn/moz/fix-stacks/tests/example.c:24)

```

So the output has definitely gone through `fix-stacks`, which is what I
would expect for mochitest output, because all mochitest output is fed
through `fix-stacks`.

I don't know why it was unable to find the function name, filename and line
number. If something was wrong with the files I would expect `fix-stacks`
to emit an error message. Are the files built with debug info?
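As a sanity check of the explanation above, here is a small self-contained
demonstration (the file paths are hypothetical) that the regex quoted from
fix_stacks.py matches only the raw bracketed form produced by
MozFormatCodeAddress(), not the parenthesized form that `fix-stacks` emits:

```python
import re

# Regex quoted above from tools/rb/fix_stacks.py: it matches the raw
# `#NN: func[lib +0xADDR]` form produced by MozFormatCodeAddress().
line_re = re.compile(r"#\d+: .+\[.+ \+0x[0-9A-Fa-f]+\]")

raw = "#01: ???[tests/example +0x43a0]"           # unfixed input form
fixed = "#01: ??? (/path/libxul.so + 0x5ab5921)"  # form seen in the log

assert line_re.search(raw)        # raw lines are candidates for fixing
assert not line_re.search(fixed)  # parenthesized lines were already fixed
```

So a line that uses parentheses rather than brackets has already been
through the fixer; the regex not matching it is expected behaviour.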

Nick


Re: New and improved stack-fixing

2020-04-14 Thread Nicholas Nethercote
On Fri, 6 Mar 2020 at 09:52, Nicholas Nethercote 
wrote:

>
> I have written a new stack-fixing tool, called `fix-stacks`, which
> symbolizes stack traces produced by Firefox.
>
> [...]
>
> There is more work to be done. Soon, I plan to:
>
> * use `fix-stacks` on test outputs (in `utils.py` and `automation.py.in`);
>
> * re-enable stack fixing on Mac test runs on local builds, which is
> currently disabled because it is so slow;
>
> * add breakpad symbol support to `fix-stacks`;
>
> * remove the old scripts.
>

I have done these four things now. There has been a long tail of issues to
work through relating to test output on automation, and there may still be
problems with obscure build or test jobs or intermittent failures. If you
see any problems where `fix-stacks` is mentioned, please let me know.

For those of you who are interested in a detailed description of this work,
please take a look at
https://blog.mozilla.org/nnethercote/2020/04/15/better-stack-fixing-for-firefox/.

Nick


Re: New and improved stack-fixing

2020-03-12 Thread Nicholas Nethercote
No need to file a bug, I will fix this (and any other documentation I can
find) soon. Thanks for the link to this particular case.

Nick

On Fri, 13 Mar 2020 at 01:08, Chris Hutten-Czapski 
wrote:

> This is wonderful news!
>
> The most recent time I interacted with this was tracking down a refcount
> leak. I was following the instructions at [1] which as of yet mention
> `fix_linux_stack.py`.
>
> Would you like me to file a bug for that to be fixed? I'd edit it
> directly, but I'm not confident I'd get the usage right.
>
> :chutten
>
> [1]:
> https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Refcount_tracing_and_balancing#Post-processing_step_2_filtering_the_log
>
> On Fri, Mar 6, 2020 at 1:14 PM Gabriele Svelto 
> wrote:
>
>> On 06/03/20 16:46, Andrew Sutherland wrote:
>> > Thank you both very much for the clarifications and your excellent work
>> > here!
>> >
>> > In terms of the Sentry crates, I presume that means the crates in
>> > https://github.com/getsentry/symbolic repo?  Are there still reasons to
>> > use/pay attention to Ted's https://github.com/luser/rust-minidump
>> repo?
>> > For example, I use slightly modified versions of
>> > `get-minidump-instructions` and `minidump_dump` from the latter, but I
>> > want to make sure I'm hitching my metaphorical tooling wagon to the
>> > right metaphorical tooling horse.
>>
>> Yes, we will soon be improving over that to replace the
>> minidump-specific functionality of Breakpad. Sentry still relies on
>> Breakpad for that stuff and I intend to replace that functionality with
>> an all-Rust implementation based on Ted's stuff. Bug 1588530 [1] is the
>> meta that tracks all this work.
>>
>>  Gabriele
>>
>> [1] [meta] Crash reporting Rust rewrite
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1588530
>>
>


Re: New and improved stack-fixing

2020-03-05 Thread Nicholas Nethercote
On Fri, 6 Mar 2020 at 11:27, Andrew Sutherland 
wrote:

> Does this eliminate the need documented at
> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Mochitest#stacks
> to acquire a `minidump_stackwalk` binary and then expose it via
> MINIDUMP_STACKWALK environment variable to get symbolicated backtraces when
> local test runs crash?  Or is that part of the future work to "use
> `fix-stacks` on test outputs"?
>
The short answer is "that is part of the future work".

The long answer is: the following is my understanding, which could be wrong
in places, because this stuff is surprisingly complicated. Also, the
following is all about vanilla local builds that don't involve breakpad
symbols; automation builds are a bit different.

Currently:
- On Linux debug builds, you will get a symbolicated stack trace for the
following test suites: reftests, jsreftests, crashtests, gtests,
mochitests, cppunittests, xpcshelltests, and maybe web platform tests but
I'm not sure about them.
- On Mac debug builds you won't get symbolication; we have code to do
symbolication in the same circumstances as on Linux, but it's currently
disabled due to being so slow (bug 1543447).
- On Windows debug builds you won't get symbolication because we don't even
have code to do that.
- On opt builds you won't get symbolication on any platform.

Here is the function that these test suites use to enable symbolication,
which I will soon be modifying:
https://searchfox.org/mozilla-central/source/testing/mozbase/mozrunner/mozrunner/utils.py#227-285.

Once bugs 1619837 and 1602717 are completed, you will get symbolicated
stack traces on Linux/Mac/Windows debug builds, and you shouldn't need to
use `minidump_stackwalk`.

Nick


New and improved stack-fixing

2020-03-05 Thread Nicholas Nethercote
Hi all,

I have written a new stack-fixing tool, called `fix-stacks`, which
symbolizes stack traces produced by Firefox.

`fix-stacks` is intended to replace our existing stack-fixing scripts,
`fix_{linux,macosx}_stack.py` and `fix_stack_using_bpsyms.py`, which are
used (a) on many test outputs, both locally and on automation, and (b) by
DMD, Firefox's built-in heap profiler.

`fix-stacks` is now installed by `mach bootstrap` on Windows, Mac, and
Linux. It is usable from Python code via the `fix_stacks.py` wrapper
script. Its code is at https://github.com/mozilla/fix-stacks/.

In bug 1604095 I replaced the use of `fix_{linux,macosx}_stack.py` with
`fix_stacks.py` for DMD, with the following benefits.

* On Linux, stack-fixing of DMD output files (which occurs when you run
`dmd.py`) is roughly 100x faster. On my Linux box I saw reductions from
20-something minutes to ~13 seconds.

* On Mac, stack-fixing of DMD output files is roughly 10x faster. On
my Mac laptop I saw reductions from ~7 minutes to ~45 seconds.

* On Windows, stack-fixing of DMD output files now occurs. (It previously
did not because there is no `fix_windows_stack.py` script.) This means that
DMD is now realistically usable on Windows without jumping through hoops to
use breakpad symbols.

There is more work to be done. Soon, I plan to:

* use `fix-stacks` on test outputs (in `utils.py` and `automation.py.in`);

* re-enable stack fixing on Mac test runs on local builds, which is
currently disabled because it is so slow;

* add breakpad symbol support to `fix-stacks`;

* remove the old scripts.

The tree of relevant bugs can be seen at
https://bugzilla.mozilla.org/showdependencytree.cgi?id=1596292&hide_resolved=1.

The stack traces produced by `fix-stacks` are sometimes different to those
produced by the old stack-fixers. In my experience these differences are
minor and you won't notice them if you aren't looking for them. But let me
know if you have any problems.

Nick


Accuracy of the list of Rust components in and around Firefox?

2020-02-18 Thread Nicholas Nethercote
Hi,

We have a list of Rust components at
https://wiki.mozilla.org/Oxidation#Rust_Components.

This list includes both components within Firefox and components "around"
Firefox, i.e. not shipped in Firefox but used for Firefox development or
services.

I'd like to know if this list is accurate and up-to-date. If you know about
any Rust components I would appreciate it if you could look at the list and
see if that component is listed accurately. If not, please feel free to
correct the list yourself, or let me know privately and I can update it.

Thank you.

Nick


Re: PSA: Improvements to infrastructure underpinning `firefox-source-docs`

2019-08-27 Thread Nicholas Nethercote
I recently wrote my first firefox-source-docs and I was surprised and
impressed by one thing in particular: when you run `./mach docs` or `./mach
docs <path>` it opens a Firefox instance that shows the generated HTML, and
it *also* starts up a server that automatically regenerates and reloads
that HTML every time a .rst file is saved. It makes for a very responsive
and pleasant editing experience.

Nick

On Wed, 28 Aug 2019 at 01:25, Andrew Halberstadt  wrote:

> Documentation is hard. It's difficult to write, difficult to find and
> always out
> of date. That's why we implemented our in-tree documentation system that
> underpins firefox-source-docs.mozilla.org. The documentation lives next
> to the
> code that it documents, so it can be updated within the same commit that
> makes
> the underlying changes. Unfortunately, this system has never been a
> pleasure to
> work with. Docs are slow to build, RST is foreign to most and the moz.build
> syntax to integrate them is confusing. Not to mention some serious bugs
> around
> how files are uploaded and stored in S3.
>
> I'm happy to announce many great new features for firefox-source-docs,
> courtesy
> of our GSoC student Shivam (aka :championshuttler). You can see a blog
> post he
> wrote on his experience here. Below is a list of all the work and awesome new
> features he added to our documentation infrastructure. Please note that bug
> 1576912 
> tracks saving this information into the docs themselves.
>
>
> *Server Synchronization*
>
> Previously, the doc system had a surprisingly major flaw. Docs were only
> ever
> uploaded to S3, never deleted. This meant that even if a source file no
> longer
> exists (e.g it was removed or renamed), the related page would continue to
> live
> on the server indefinitely. External links to the page would continue to
> function,
> the users of the link being none the wiser. Yikes!
>
> We now compare all the files that exist on the server against the list of
> source
> files in `mozilla-central`. Any files on the server that no longer exist in
> `mozilla-central` are removed.
>
>
> *Redirects*
>
> We now have the ability to define redirects in-tree! This will allow us to
> refactor and move docs around to our heart's content without needing to
> worry
> about stale external URLs. To set up a redirect simply add a line to this
> file:
>
> https://searchfox.org/mozilla-central/source/tools/docs/redirects.yml
>
> Any request starting with the prefix on the left, will be rewritten to the
> prefix on the right by the server. So for example a request to
> `/testing/marionette/marionette/index.html` will be re-written to
> `/testing/marionette/index.html`. Amazon's API only supports prefix
> redirects,
> so anything more complex isn't supported.
>
>
> *Nested Doc Trees*
>
> This has been implemented for a while and wasn't part of the GSoC project,
> but
> due to a lack of redirects it didn't make sense to advertise too broadly.
> Up
> until recently, we've tended to add all doc folders to the root index. In
> other
> words each subsection is visible at the top level, creating a rather long
> and incoherent list of pages to the left.
>
> This feature essentially means we can now group related docs together under
> common "landing pages". This will allow us to refactor the docs into a
> structure
> that makes more sense. For example we could have a landing page for docs
> describing Gecko's internals, and another one for docs describing developer
> workflows in `mozilla-central`.
>
> Mark Banner has already started to organize some of the docs under some of
> these
> landing pages. See this commit for a rough example.
>
> To clarify a few things:
>
> 1. The path specified in `SPHINX_TREES` *does not* need to correspond to
> a path
> in `mozilla-central`. For example, I could register my docs using
> `SPHINX_TREES["/foo"] = "docs"`, which would make that doc tree accessible
> at
> `firefox-source-docs.mozilla.org/foo`.
>
> 2. Any subtrees that are nested under another index will automatically be
> hidden
> from the main index.  This means you should make sure to link to any
> subtrees
> from somewhere in the landing page. So given my earlier doc tree at
> `/foo`, if I
> now created a subtree and registered it using `SPHINX_TREES["/foo/bar"] =
> "docs"`, those docs would *not* show up in the main index.
>
> 3. The relation between subtrees and their parents does not necessarily
> have any
> bearing with their relation on the file system. For example, a doc tree
> that
> lives under `/devtools` can be nested under an index that lives under
> `/browser`.
>
> I realize this explanation is about as clear as mud, so feel free 
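The prefix-redirect behaviour described in the quoted post can be sketched
as follows (a toy model, not the actual server or Amazon API; the rule shown
is the example from the post):

```python
def apply_redirect(path, rules):
    """Rewrite `path` using the first matching prefix rule, mimicking the
    prefix-only redirects described above (a sketch, not real server code)."""
    for old_prefix, new_prefix in rules:
        if path.startswith(old_prefix):
            return new_prefix + path[len(old_prefix):]
    return path  # no rule matched; serve the path unchanged

# The example rule from the post: one prefix rewritten to another.
rules = [("/testing/marionette/marionette/", "/testing/marionette/")]

print(apply_redirect("/testing/marionette/marionette/index.html", rules))
# -> /testing/marionette/index.html
```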

Re: Static prefs are now defined in a YAML file

2019-08-07 Thread Nicholas Nethercote
On Thu, 18 Jul 2019 at 14:20, Nicholas Nethercote 
wrote:

> Greetings,
>
> I just landed the patches in bug 1564724. As a result, static prefs are no
> longer defined in modules/libpref/init/StaticPrefList.h. They are now
> defined in modules/libpref/init/StaticPrefList.yaml, from which a
> StaticPrefList.h file is generated. The format is explained in a comment at
> the top of StaticPrefList.yaml.
> [...]
> The change will also allow some larger benefits in the immediate future.
> - Rust access to static prefs can become easier (bug 1563555
> <https://bugzilla.mozilla.org/show_bug.cgi?id=1563555>).
>

This has now landed. If you want to access a mirrored static pref from Rust
code, do the following.
- Add a `rust: true` field to the definition in StaticPrefList.yaml.
- Use the `pref!` macro from the `static_prefs` crate, e.g. `let
foo_bar_baz = static_prefs::pref!("foo.bar-baz");`

You can see real-world examples in Stylo.

In other libpref news, docs are now available here
<https://firefox-source-docs.mozilla.org/modules/libpref/libpref/index.html>.
Note particularly the "Guidelines" section, which discusses how we have too
many prefs (several thousand!) and has some ideas about which kinds of
prefs might be removable (hint, hint).

Finally, we have a long-running project to convert all VarCache prefs to
static prefs. See here
<https://wiki.mozilla.org/Platform/PrefsOverhaul#.5BON_TRACK.5D_static-prefs>
for a description of the benefits. Once all VarCache prefs have been
converted, we'll be able to do some nice clean-ups within libpref. There
are still roughly 200 prefs that need converting, and most conversions are
straightforward. Any help would be welcome. Instructions are here
<https://bugzilla.mozilla.org/show_bug.cgi?id=1448219#c3>. We don't want
any duplication of effort, so please use the tracking bug
<https://bugzilla.mozilla.org/show_bug.cgi?id=1448219> to see what others
are doing and communicate what you are doing. I am happy to review any
patches. Thanks.

Nick


Re: PHC, a probabilistic heap checker, will soon be enabled on Linux64 Nightly

2019-08-05 Thread Nicholas Nethercote
Hi again,

The enabling of PHC on Linux64 Nightly builds went smoothly, so I just
enabled it for Win64 Nightly builds (bug 1569864). This will increase the
number of users by roughly 10x.

If you have minidump access on crash-stats you can search for PHC crash
reports by searching for the presence of the `phc kind` field. The `phc
alloc stack` and `phc free stack` fields currently are not
auto-symbolicated; Christian Holler has a script that can do the
symbolication.

Nick


On Fri, 19 Jul 2019 at 13:32, Nicholas Nethercote 
wrote:

> Greetings,
>
> PHC is a probabilistic heap checker I have been working on. It has landed
> and I am planning to enable it on Monday morning AEST (which is Sunday
> afternoon/evening in many parts of the world). For now it will only be
> enabled on Linux64 Nightly builds (and also local developer builds) but I
> hope to also enable it on Win64 Nightly builds soon.
>
> Here is a short description of PHC that comes from the comment at the top
> of memory/replace/phc/PHC.cpp:
>
> // PHC is a probabilistic heap checker. A tiny fraction of randomly
> // chosen heap allocations are subject to some expensive checking via the
> // use of OS page access protection. A failed check triggers a crash,
> // whereupon useful information about the failure is put into the crash
> // report. The cost and coverage for each user is minimal, but spread over
> // the entire user base the coverage becomes significant.
>
> I have included the rest of that comment, which describes the
> implementation in more detail, at the bottom of this email. Also see bug
> 1523276 <https://bugzilla.mozilla.org/show_bug.cgi?id=1523276> for
> additional details.
>
> This is a tool that will hopefully detect use-after-free errors in the
> wild. It is a somewhat complex wrapper of our heap allocator, and
> everything depends on the heap allocator. This means that any bugs in PHC
> have the potential to cause a wide variety of problems. Please keep an eye
> out for any such problems. It's hard to say exactly what those problems
> might be... during testing I saw and fixed some deadlocks and some crashes
> within arena_dalloc(), but other effects may be possible. Also, it's
> completely non-deterministic (by design) which further complicates the
> detection of problems.
>
> Please let me know if you have any questions or concerns.
>
> Nick
>
>
> // The idea comes from Chromium, where it is called GWP-ASAN. (Firefox
> // uses PHC as the name because GWP-ASAN is long, awkward, and doesn't
> // have any particular meaning.)
> //
> // In the current implementation up to 64 allocations per process can
> // become PHC allocations. These allocations must be page-sized or
> // smaller. Each PHC allocation gets its own page, and when the allocation
> // is freed its page is marked inaccessible until the page is reused for
> // another allocation. This means that a use-after-free defect (which
> // includes double-frees) will be caught if the use occurs before the page
> // is reused for another allocation. The crash report will contain stack
> // traces for the allocation site, the free site, and the use-after-free
> // site, which is often enough to diagnose the defect.
> //
> // The design space for the randomization strategy is large. The current
> // implementation has a large random delay before it starts operating, and
> // a small random delay between each PHC allocation attempt. Each freed
> // PHC allocation is quarantined for a medium random delay before being
> // reused, in order to increase the chance of catching UAFs.
> //
> // The basic cost of PHC's operation is as follows.
> //
> // - The memory cost is 64 * 4 KiB = 256 KiB per process (assuming 4 KiB
> //   pages) plus some metadata (including stack traces) for each page.
> //
> // - Every allocation requires a size check and a decrement-and-check of
> //   an atomic counter. When the counter reaches zero a PHC allocation can
> //   occur, which involves marking a page as accessible and getting a
> //   stack trace for the allocation site. Otherwise, mozjemalloc performs
> //   the allocation.
> //
> // - Every deallocation requires a range check on the pointer to see if it
> //   involves a PHC allocation. (The choice to only do PHC allocations
> //   that are a page or smaller enables this range check, because the 64
> //   pages are contiguous. Allowing larger allocations would make this
> //   more complicated, and we definitely don't want something as slow as a
> //   hash table lookup on every deallocation.) PHC deallocations involve
> //   marking a page as inaccessible and getting a stack trace for the
> //   deallocation site.
> //
> // In the fut

Re: Static prefs are now defined in a YAML file

2019-07-25 Thread Nicholas Nethercote
On Thu, 18 Jul 2019 at 14:20, Nicholas Nethercote 
wrote:

>
> I just landed the patches in bug 1564724. As a result, static prefs are no
> longer defined in modules/libpref/init/StaticPrefList.h. They are now
> defined in modules/libpref/init/StaticPrefList.yaml, from which a
> StaticPrefList.h file is generated. The format is explained in a comment at
> the top of StaticPrefList.yaml.
> [...]
>
The change will also allow some larger benefits in the immediate future.
> - StaticPrefList.h (which is generated from StaticPrefList.yaml) will be
> able to be split into pieces, resulting in much less recompilation each
> time a single static pref is changed (bug 1563139
> <https://bugzilla.mozilla.org/show_bug.cgi?id=1563139>).
>

This change just landed. StaticPrefs.h no longer exists, and the generated
header files all have the form StaticPrefs_foo.h, where 'foo' is the pref
group, which comes from the part of the pref name before the first dot.

For example, if you want to access the "apz.allow_zooming" pref via
StaticPrefs::apz_allow_zooming() you will need to include StaticPrefs_apz.h.
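The naming convention can be summarized in a few lines (a sketch of the
rule as described above and in the earlier Rust note, not the actual
generator code):

```python
import re

def static_pref_names(pref):
    """Derive the generated header name and C++ accessor for a static
    pref: the group is the text before the first dot, and dots/dashes in
    the full name become underscores in the accessor."""
    group = pref.split(".", 1)[0]           # pref group, e.g. "apz"
    accessor = re.sub(r"[.\-]", "_", pref)  # dots/dashes -> underscores
    return f"StaticPrefs_{group}.h", f"StaticPrefs::{accessor}()"

header, accessor = static_pref_names("apz.allow_zooming")
assert header == "StaticPrefs_apz.h"
assert accessor == "StaticPrefs::apz_allow_zooming()"
```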

Nick


Re: Intent to ship: accumulating most JS scripts' data as UTF-8, then directly parsing as UTF-8 (without inflating to UTF-16)

2019-07-20 Thread Nicholas Nethercote
This is excellent news. Do you have any measurements showing perf effects?

Semi-relatedly, Swift 5 will change the preferred encoding of strings from
UTF-16 to UTF-8. Some readers might find the accompanying blog post
interesting: https://swift.org/blog/utf8-string/.

Nick

On Sat, 20 Jul 2019 at 10:05, Jeff Walden  wrote:

> # Intent to ship: UTF-8 parsing of external 

PHC, a probabilistic heap checker, will soon be enabled on Linux64 Nightly

2019-07-18 Thread Nicholas Nethercote
Greetings,

PHC is a probabilistic heap checker I have been working on. It has landed
and I am planning to enable it on Monday morning AEST (which is Sunday
afternoon/evening in many parts of the world). For now it will only be
enabled on Linux64 Nightly builds (and also local developer builds) but I
hope to also enable it on Win64 Nightly builds soon.

Here is a short description of PHC that comes from the comment at the top
of memory/replace/phc/PHC.cpp:

// PHC is a probabilistic heap checker. A tiny fraction of randomly chosen
// heap allocations are subject to some expensive checking via the use of
// OS page access protection. A failed check triggers a crash, whereupon
// useful information about the failure is put into the crash report. The
// cost and coverage for each user is minimal, but spread over the entire
// user base the coverage becomes significant.

I have included the rest of that comment, which describes the
implementation in more detail, at the bottom of this email. Also see bug
1523276 for
additional details.

This is a tool that will hopefully detect use-after-free errors in the
wild. It is a somewhat complex wrapper of our heap allocator, and
everything depends on the heap allocator. This means that any bugs in PHC
have the potential to cause a wide variety of problems. Please keep an eye
out for any such problems. It's hard to say exactly what those problems
might be... during testing I saw and fixed some deadlocks and some crashes
within arena_dalloc(), but other effects may be possible. Also, it's
completely non-deterministic (by design) which further complicates the
detection of problems.

Please let me know if you have any questions or concerns.

Nick


// The idea comes from Chromium, where it is called GWP-ASAN. (Firefox uses
// PHC as the name because GWP-ASAN is long, awkward, and doesn't have any
// particular meaning.)
//
// In the current implementation up to 64 allocations per process can
// become PHC allocations. These allocations must be page-sized or smaller.
// Each PHC allocation gets its own page, and when the allocation is freed
// its page is marked inaccessible until the page is reused for another
// allocation. This means that a use-after-free defect (which includes
// double-frees) will be caught if the use occurs before the page is reused
// for another allocation. The crash report will contain stack traces for
// the allocation site, the free site, and the use-after-free site, which
// is often enough to diagnose the defect.
//
// The design space for the randomization strategy is large. The current
// implementation has a large random delay before it starts operating, and
// a small random delay between each PHC allocation attempt. Each freed PHC
// allocation is quarantined for a medium random delay before being reused,
// in order to increase the chance of catching UAFs.
//
// The basic cost of PHC's operation is as follows.
//
// - The memory cost is 64 * 4 KiB = 256 KiB per process (assuming 4 KiB
//   pages) plus some metadata (including stack traces) for each page.
//
// - Every allocation requires a size check and a decrement-and-check of an
//   atomic counter. When the counter reaches zero a PHC allocation can
//   occur, which involves marking a page as accessible and getting a stack
//   trace for the allocation site. Otherwise, mozjemalloc performs the
//   allocation.
//
// - Every deallocation requires a range check on the pointer to see if it
//   involves a PHC allocation. (The choice to only do PHC allocations that
//   are a page or smaller enables this range check, because the 64 pages
//   are contiguous. Allowing larger allocations would make this more
//   complicated, and we definitely don't want something as slow as a hash
//   table lookup on every deallocation.) PHC deallocations involve marking
//   a page as inaccessible and getting a stack trace for the deallocation
//   site.
//
// In the future, we may add guard pages between the used pages in order
// to detect buffer overflows/underflows. This would change the memory cost
// to (64 * 2 + 1) * 4 KiB = 516 KiB per process and complicate the
// machinery somewhat.


Static prefs are now defined in a YAML file

2019-07-17 Thread Nicholas Nethercote
Greetings,

I just landed the patches in bug 1564724. As a result, static prefs are no
longer defined in modules/libpref/init/StaticPrefList.h. They are now
defined in modules/libpref/init/StaticPrefList.yaml, from which a
StaticPrefList.h file is generated. The format is explained in a comment at
the top of StaticPrefList.yaml.
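For illustration, an entry in the new file looks something like this (a
sketch only; treat the field names as approximate and see the in-file
documentation for the authoritative format):

```yaml
# Illustrative static pref entry (field names approximate).
- name: browser.example.enabled
  type: bool
  value: true
```

The getter name is derived from the pref name during generation, which is
why naming inconsistencies for getters are no longer possible.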

I did my best to ensure this change did not introduce any changes in
behaviour. However, any change that affects 900+ prefs has non-zero risk,
so please be on the lookout for any regressions. The riskiest part of the
change is that the old header file was preprocessed with the C++
preprocessor, while the new one uses our own preprocessor.py, and it's
possible there could be differences caused by that because the set of
constants defined is slightly different.

The change to YAML has some minor immediate benefits.
- Static pref definitions are more concise (no redundancy) and easier to
read.
- Naming inconsistencies for getters are no longer possible.

The change will also allow some larger benefits in the immediate future.
- Rust access to static prefs can become easier (bug 1563555).
- StaticPrefList.h (which is generated from StaticPrefList.yaml) will be
able to be split into pieces, resulting in much less recompilation each
time a single static pref is changed (bug 1563139).
- It will be possible to add checks for duplication with prefs defined in
modules/libpref/init/all.js (bug 1566315).

Please let me know if you have any questions.

Nick


Fwd: Enable cargo pipelining to speed up Firefox builds

2019-05-28 Thread Nicholas Nethercote
Hi,

The Rust compiler and cargo have a very new feature called "pipelining".
This feature increases the amount of parallelism available when building a
multi-crate Rust project by overlapping the compilation of dependent crates.

For more details on how it works, and lots of measurements, see this thread:
https://internals.rust-lang.org/t/evaluating-pipelined-rustc-compilation/10199

*Potential speedups*

The speedups seen depend on various factors:
- the structure of your project's crate graph;
- the machine being used (machines with more cores are likely to see bigger
speedups);
- your build configuration (release/opt builds will speed up more than
debug builds).

On my 14-core Linux box some notable speedups of release/opt builds include:
- webrender: 1.27x faster
- cranelift-codegen: 1.20x faster
- Firefox: 1.07x faster (and Firefox is of course mostly C++ code)

On other projects we have seen improvements as high as 1.84x, while some
projects see no speedup at all. At worst, it should make things only
negligibly slower, because it's not doing significant additional work, just
changing the order in which certain things happen.

*How to use pipelining for Firefox*

To try it in Firefox, do the following:
- Do `rustup update nightly` to get a sufficiently recent compiler/cargo
combination.
- Do `rustup default nightly` to set the nightly compiler/cargo as your
default.
- Set the environment variable `CARGO_BUILD_PIPELINING=true`.
- Build Firefox.

If you want to evaluate its effectiveness before using it in earnest, you
can use erahm's script, which does a bunch of timings:
https://gist.github.com/EricRahm/d0bcedc0af5fcbd33dbedff497893cee

The script builds Firefox 13 times, so it takes a while to run. It does a
default opt build (`./mach -l build.log build`); you might want to change
it to use your preferred mozconfig file.

*How to use pipelining for other Rust projects*

To try it in other Rust projects, do the following:
- Do `rustup update nightly` to get a sufficiently recent compiler/cargo
combination.
- Do `CARGO_BUILD_PIPELINING=true cargo +nightly build` or something
similar.

*Caveats*

Please note that pipelining is new and still experimental. It seems to work
well but may have bugs. We hope to make it a stable feature soon. Alex
Crichton did the implementation within Cargo and I did the implementation
within rustc; please let us know about any problems.

Nick


Re: crash reporting, inline functions, and you

2019-04-07 Thread Nicholas Nethercote
I've done a lot of profiling of Rust code using tools that are based around
stack traces. I can't emphasize enough how useful it is to have inlined
stack frames.

Here is a highly typical example from a profile I had lying around:

#1: 0x655A213: alloc (alloc.rs:75)
#2: 0x655A213: alloc (alloc.rs:151)
#3: 0x655A213: reserve_internal<(syntax::tokenstream::TokenTree,
syntax::tokenstream::IsJoint),alloc::alloc::Global> (raw_vec.rs:668)
#4: 0x655A213: reserve<(syntax::tokenstream::TokenTree,
syntax::tokenstream::IsJoint),alloc::alloc::Global> (raw_vec.rs:491)
#5: 0x655A213: reserve<(syntax::tokenstream::TokenTree,
syntax::tokenstream::IsJoint)> (vec.rs:457)
#6: 0x655A213: push<(syntax::tokenstream::TokenTree,
syntax::tokenstream::IsJoint)> (vec.rs:1023)
#7: 0x655A213: parse_token_trees_until_close_delim (tokentrees.rs:27)
#8: 0x655A213: syntax::parse::lexer::tokentreesparse_token_tree (tokentrees.rs:81)
#9: 0x655A17F: parse_token_trees_until_close_delim (tokentrees.rs:26)
#10: 0x655A17F: syntax::parse::lexer::tokentreesparse_token_tree (tokentrees.rs:81)
#11: 0x655A17F: parse_token_trees_until_close_delim (tokentrees.rs:26)
#12: 0x655A17F: syntax::parse::lexer::tokentreesparse_token_tree (tokentrees.rs:81)

That stack trace shows inlined frames. Notice that there are only three
distinct addresses. Without the inlined frames, there would only be three
frames, something like this:

#1: 0x655A213: alloc (alloc.rs:75)
#2: 0x655A213: reserve_internal<(syntax::tokenstream::TokenTree,
syntax::tokenstream::IsJoint),alloc::alloc::Global> (raw_vec.rs:668)
#3: 0x655A17F: parse_token_trees_until_close_delim (tokentrees.rs:26)

Here is a more extreme (but still not *so* unusual) example:

#1: 0x5794562: alloc (alloc.rs:75)
#2: 0x5794562: exchange_malloc (alloc.rs:185)
#3: 0x5794562: clone (boxed.rs:332)
#4: 0x5794562: clone (mod.rs:1759)
#5: 0x5794562: ::clone (
mod.rs:1733)
#6: 0x579E30A:
{{closure}},rustc::mir::Statement,(),closure>
(mod.rs:272)
#7: 0x579E30A: fold (mod.rs:3150)
#8: 0x579E30A:
fold,rustc::mir::Statement,(),closure>
(mod.rs:272)
#9: 0x579E30A:
for_each>,closure>
(iterator.rs:604)
#10: 0x579E30A:
spec_extend>>
(vec.rs:1852)
#11: 0x579E30A:
spec_extend>
(vec.rs:1902)
#12: 0x579E30A: extend_from_slice (vec.rs:1351)
#13: 0x579E30A: to_vec (slice.rs:160)
#14: 0x579E30A: to_vec (slice.rs:380)
#15: 0x579E30A: clone (vec.rs:1642)
#16: 0x579E30A: ::clone (
mod.rs:1022)
#17: 0x564271A:
{{closure}},rustc::mir::BasicBlockData,(),closure>
(mod.rs:272)
#18: 0x564271A: fold (mod.rs:3150)
#19: 0x564271A:
fold,rustc::mir::BasicBlockData,(),closure>
(mod.rs:272)
#20: 0x564271A:
for_each>,closure>
(iterator.rs:604)
#21: 0x564271A:
spec_extend>>
(vec.rs:1852)
#22: 0x564271A:
spec_extend>
(vec.rs:1902)
#23: 0x564271A: extend_from_slice (vec.rs:1351)
#24: 0x564271A: to_vec (slice.rs:160)
#25: 0x564271A: to_vec (slice.rs:380)
#26: 0x564271A: clone (vec.rs:1642)
#27: 0x564271A: clone (
indexed_vec.rs:499)
#28: 0x564271A: ::clone (mod.rs:90)

There are three "real" stack frames and 25 inlined stack frames. Those
inline stack frames are the difference between "I have a vague idea of what
is going on" and "I know exactly what is going on".

Any changes that take us in that direction are very useful :)

Nick

On Sat, 6 Apr 2019 at 03:34, Nathan Froyd  wrote:

> TL;DR: We're making some changes to how inlined functions are handled
> in our crash reports on non-Windows platforms in bug 524410.  This
> change should mostly result in more understandable crash stacks for
> code that uses lots of inlining, and shouldn't make things any worse.
> Some crash signatures may change as a result.  If you have concerns,
> or you happen to have crash stacks that you're curious about whether
> they'd change under this new policy, please comment in bug 524410 or
> email me.
>
> For the grotty details, read on.
>
> Modern C++/Rust code relies on inlining for efficiency, and modern
> compilers have gotten very good at accommodating such code: it's not
> unusual for code to feature double-digit levels of inlining (A inlined
> into B inlined into C...inlined into J).  A simple Rust function that
> looks like:
>
>   slice.into_iter().map(|...| { ... })
>
> and you think of as spanning addresses BEGIN to END, might actually
> feature small ranges of instructions from a dozen different functions,
> and the compiler will (mostly) faithfully tell you a precise location
> for each range.  (Instruction A comes from some iterator code,
> instruction B comes from a bit of your closure, instruction C comes
> from some inlined function three levels deep inside of your
> closure...) Unfortunately, this faithfulness means that in the event
> of a crash, the crashing instruction might get attributed to some code
> in libstd with no indication of how that relates to the original code.
>
> Bug 524410, supporting inline functions in the symbol dumper, has been
> open for a decade now.  The idea is that compilers (on Unix platforms,
> not entirely sure this is true on 

The DOMString type has been removed from XPIDL

2018-10-01 Thread Nicholas Nethercote
Greetings,

I just landed the patches for bug 1489047, which remove the DOMString type
from XPIDL.

Why? Here are details about how string types in XPIDL and WebIDL are
converted from JS to C++:

- XPIDL AString: `undefined` becomes Void, `null` becomes Void.

- XPIDL DOMString: `undefined` becomes "undefined", `null` becomes Void.

- WebIDL DOMString: `undefined` becomes "undefined", `null` becomes "null"
unless `[TreatNullAs=EmptyString]` is used. Also, `DOMString?` can be used
to get the XPCOM AString behavior.

In other words, XPIDL DOMString was almost-but-not-quite-identical to both
XPIDL AString and WebIDL DOMString. All the actual web-facing DOM stuff is
now defined via WebIDL, so there didn't seem much point in keeping XPIDL
DOMString.

Nick


NS_strdup() and nsMemory::Clone() have been renamed

2018-09-04 Thread Nicholas Nethercote
Greetings,

In bug 1486690 I renamed both variants of NS_strdup() as NS_xstrdup(). I
also renamed nsMemory::Clone() as moz_xmemdup() and moved it into mozalloc.

Rationale:

- These functions are all infallible, and the 'x' prefix makes this clear,
as per our allocator naming convention (e.g. moz_xmalloc).

- The nsMemory class was a weird place for the Clone() function; allocator
stuff doesn't belong in XPCOM any more. And the name moz_xmemdup() is
similar to the existing moz_xstrdup().

Nick


Re: mozilla::Hash{Map,Set}

2018-08-16 Thread Nicholas Nethercote
On Fri, Aug 17, 2018 at 3:31 AM, Eric Rahm  wrote:

>
> nsTHashTable is just a templated wrapper around PLDHashTable. For
> reference we also have nsDataHashtable, nsClassHashtable,
> nsRefPtrHashtable, nsInterfaceHashtable [1] all of which are rather
> convenient wrappers around PLDHashTable. These have also been implemented
> in such a way that the templates have lower code size overhead (thanks
> froydnj!). I somewhat prefer their interfaces to mfbt::HashMap (default
> infallible, plenty of predefined hashers, nice lookup for add semantics,
> etc), but to each their own.
>

About that last sentence:

- Default infallible: true, although I removed the init() function from
mozilla::Hash{Map,Set} (by making allocation lazy, waiting until first
insertion) so at least you don't have to worry about that any more.

- Plenty of predefined hashes: mozilla::Hash{Map,Set} does the right thing
by default if your key is an integer, pointer, UniquePtr, float, or double.
There's also the predefined CStringHasher for C string keys. Nothing at the
moment for RefPtr or nsCOMPtr, though.

- Nice lookup for add semantics: mozilla::Hash{Map,Set} has this too. You
use put() for an unconditional insertion, or lookupForAdd() + add() for a
conditional insertion.

If you ever looked at the old js::Hash{Map,Set} and found it confusing,
it's worth taking another look at mozilla::Hash{Map,Set}, because I
improved the high-level and per-method documentation a lot. I also
completely reordered the classes' code so that operations are grouped
logically and are much easier to find.

> Additionally we've been thinking about implementing a better hash algorithm
> such as round-robin hashing [2], I'm not sure if that would work w/
> mfbt::HashMap, but it would be nice to implement it in one place rather
> than two.
>

The main takeaway I got from doing the microbenchmarks in bug 1477622 was
that Robin Hood hashing is probably only a minor improvement. It's probably
worth doing, but:

(a) for C++ hash tables, full templating and inlining makes the biggest
difference;
(b) for Rust hash tables (which already use Robin Hood), choosing a fast
hash function makes the biggest difference.

Point (a) was the reason I decided to move js::Hash{Map,Set} into MFBT.

Nick


Re: mozilla::Hash{Map,Set}

2018-08-16 Thread Nicholas Nethercote
On Fri, Aug 17, 2018 at 3:10 AM, Kris Maglione 
wrote:

>
> It would probably worth converging the APIs, if only because the
> nsTHashtable APIs are arcane and antiquated. But it's easier said than
> done. There are a lot of consumers of all of the hashtable APIs, and the
> code is... not very easy to work with.
>

Also, the APIs are *very* different.

Nick


Re: mozilla::Hash{Map,Set}

2018-08-15 Thread Nicholas Nethercote
On Thu, Aug 16, 2018 at 9:28 AM, Ehsan Akhgari 
wrote:

>
> Do you have any good suggestions of how to find such candidates?  One
> thing that came to my mind was that the BHR data may be a useful source of
> insight for this... [BHR dashboard link, filtered to
> PLDHashTable::SearchTable] (for content processes, for example) suggests a
> few hashtables based on a very quick skimming which aren't surprising to me
> (things like nsHTMLDocument::mIdentifierMap, some XPCOM component manager
> hashtables, memory reporter hashtables, some font hashtables, the
> preferences hashtable, sEventListenerManagersHash, mJSHolderMap, etc.)
>

I would have just said "anything that shows up in profiles"; using the BHR
data seems like a good idea.

Nick


Re: mozilla::Hash{Map,Set}

2018-08-15 Thread Nicholas Nethercote
Off the top of my head, the advantages of mozilla::Hash{Map,Set} over
std::unordered_{map,set} are the following.

- They are much faster on Mac and Linux and not much different on Windows:

https://treeherder.mozilla.org/perf.html#/graphs?timerange=604800=
mozilla-central,1732084,1,6=mozilla-central,
1730153,1,6=mozilla-central,1732083,1,6=
mozilla-central,1730147,1,6=mozilla-central,
1732092,1,6=mozilla-central,1730159,1,6

- The code is the same on all platforms, so there's no performance
variation.

- They integrate well with Mozilla code, e.g. by using AllocPolicy.

- They support memory reporting, via the shallowSizeOf{In,Ex}cluding()
methods.

- They don't use exceptions. (I'm not sure if std::unordered_{map,set} use
exceptions.)

- Generated code size is similar on Linux64; I haven't tried other
platforms. (PLDHash is substantially smaller, by comparison.)

Nick


On Thu, Aug 16, 2018 at 6:18 AM, Jeff Gilbert  wrote:

> Awesome!
>
> What are the pros/cons to mozilla::Hash{Map,Set} vs
> std::unordered_{map,set}? I'm assuming perf is the main draw? Do we
> have a data on this too?
>
> On Mon, Aug 13, 2018 at 10:44 PM, Nicholas Nethercote
>  wrote:
> > Hi,
> >
> > I recently moved Spidermonkey's js::Hash{Map,Set} classes from
> > js/public/HashTable.h into mfbt/HashTable.h and renamed them as
> > mozilla::Hash{Map,Set}. They can now be used throughout Gecko.
> >
> > (js/public/HashTable.h still exists in order to provide renamings of
> > mozilla::Hash{Map,Set} as js::Hash{Map,Set}.)
> >
> > Why might you use mozilla::Hash{Map,Set} instead of PLDHashTable (or
> > nsTHashtable and other descendants)?
> >
> > - Better API: these types provide proper HashMap and HashSet instances,
> and
> > (in my opinion) are easier to use.
> >
> > - Speed: the microbenchmark in xpcom/rust/gtest/bench-collections/Bench.cpp
> > shows that mozilla::HashSet is 2x or more faster than PLDHashTable. E.g.
> > compare "MozHash" against "PLDHash" in this graph:
> >
> > https://treeherder.mozilla.org/perf.html#/graphs?timerange=
> 604800=mozilla-central,1730159,1,6=mozilla-
> central,1730162,1,6=mozilla-central,1730164,1,6&
> series=mozilla-central,1732092,1,6=mozilla-
> central,1730163,1,6=mozilla-central,1730160,1,6
> >
> > Bug 1477627 converted a hot hash table from PLDHashTable to
> > mozilla::HashSet and appears to have sped up cycle collection in some
> cases
> > by 7%. If you know of another PLDHashTable that is hot, it might be worth
> > converting it to mozilla::HashTable.
> >
> > Both mozilla::Hash{Map,Set} and PLDHashTable use the same double-hashing
> > algorithm; the speed difference is due to mozilla::HashSet's extensive
> use
> > of templating and inlining. The downside of this is that mozilla::HashSet
> > will increase binary size more than PLDHashTable.
> >
> > There are overview comments at the top of mfbt/HashTable.h, and the
> classes
> > themselves have more detailed comments about every method.
> >
> > Nick
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
>


mozilla::Hash{Map,Set}

2018-08-15 Thread Nicholas Nethercote
Hi,

I recently moved Spidermonkey's js::Hash{Map,Set} classes from
js/public/HashTable.h into mfbt/HashTable.h and renamed them as
mozilla::Hash{Map,Set}. They can now be used throughout Gecko.

(js/public/HashTable.h still exists in order to provide renamings of
mozilla::Hash{Map,Set} as js::Hash{Map,Set}.)

Why might you use mozilla::Hash{Map,Set} instead of PLDHashTable (or
nsTHashtable and other descendants)?

- Better API: these types provide proper HashMap and HashSet instances, and
(in my opinion) are easier to use.

- Speed: the microbenchmark in xpcom/rust/gtest/bench-collections/Bench.cpp
shows that mozilla::HashSet is 2x or more faster than PLDHashTable. E.g.
compare "MozHash" against "PLDHash" in this graph:

https://treeherder.mozilla.org/perf.html#/graphs?timerange=604800=mozilla-central,1730159,1,6=mozilla-central,1730162,1,6=mozilla-central,1730164,1,6=mozilla-central,1732092,1,6=mozilla-central,1730163,1,6=mozilla-central,1730160,1,6

Bug 1477627 converted a hot hash table from PLDHashTable to
mozilla::HashSet and appears to have sped up cycle collection in some cases
by 7%. If you know of another PLDHashTable that is hot, it might be worth
converting it to mozilla::HashTable.

Both mozilla::Hash{Map,Set} and PLDHashTable use the same double-hashing
algorithm; the speed difference is due to mozilla::HashSet's extensive use
of templating and inlining. The downside of this is that mozilla::HashSet
will increase binary size more than PLDHashTable.
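For reference, the double-hashing idea can be sketched like this (a generic
illustration with invented names, not the actual mfbt or PLDHash code): one
hash picks the starting slot, and a second hash, forced to be odd, gives
the probe stride, so the probe sequence visits every slot of a
power-of-two-sized table.

```c
#include <stdint.h>

/* Slot visited on the i-th probe for a key with hashes h1 (start) and
 * h2 (stride). An odd stride is coprime with a power-of-two capacity,
 * so the probe sequence eventually covers the whole table. */
static uint32_t probe_slot(uint32_t h1, uint32_t h2, uint32_t i,
                           uint32_t capacity /* power of two */) {
    h2 |= 1; /* force the stride to be odd */
    return (h1 + i * h2) & (capacity - 1);
}
```

Probing stays inside one contiguous array, which is cache-friendly; the
speed gap between the two implementations of the same scheme comes from
inlining versus PLDHashTable's function-pointer indirection.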

There are overview comments at the top of mfbt/HashTable.h, and the classes
themselves have more detailed comments about every method.

Nick


Re: PSA: Major preference service architecture changes inbound

2018-07-19 Thread Nicholas Nethercote
On Fri, Jul 20, 2018 at 7:37 AM, Daniel Veditz  wrote:

> ​Prefs might be a terrible way to implement that functionality, but it's
> been used that way as long as we've had prefs in Mozilla so there seems to
> be a need for it. Early offenders: printer setup, mail accounts, external
> protocol handlers (and I believe content-type handlers, but those moved out
> long ago). Possibly inspired by the way the Windows registry was used.
>

The Windows registry is a very good comparison, alas.

Nick


Re: PSA: Major preference service architecture changes inbound

2018-07-17 Thread Nicholas Nethercote
On Wed, Jul 18, 2018 at 1:57 AM, Kris Maglione 
wrote:

>
>> While we're thinking about the prefs service, is there any possibility we
>> could enable off-main-thread access to preferences?
>>
>
> I think the chances of that are pretty close to 0, but I'll defer to Nick.
>

I agree, for the reasons that Kris mentioned.



> I need another thread to be able to query an extensible set of pref names
> that are not fully known at compile time.
>

This is a good example of how prefs is a far more general mechanism than I
would like, leading to all manner of use and abuse. "All I want is a
key-value store, with fast multi-threaded access, where the keys aren't
known ahead of time."

Nick


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Nicholas Nethercote
On Fri, Jul 13, 2018 at 1:56 AM, Andrew McCreight 
wrote:

> >
> > Just curious, is there a bug on file to measure excess capacity on
> > nsTArrays and hash tables?
>
> njn looked at that kind of issue at some point (he changed how arrays grow,
> for instance, to reduce overhead), but it has probably been around 5 years,
> so there may be room for improvement for things added in the meanwhile.
>

For a trip down memory lane, check out
https://blog.mozilla.org/nnethercote/2011/08/05/clownshoes-available-in-sizes-2101-and-up/.
The size classes described in that post are still in use today.

More usefully: if anyone wants to investigate slop -- which is only one
kind of wasted space, but an important one -- it's now really easy with DMD:
- Invoke DMD in "Live" mode (i.e. generic heap profiling mode, rather than
dark matter detection mode).
- Use the `--sort-by slop` flag with dmd.py.

Full instructions are at
https://developer.mozilla.org/en-US/docs/Mozilla/Performance/DMD.

Nick


Two tips for faster Firefox builds

2018-07-12 Thread Nicholas Nethercote
Hi,

Here are two things that might help you get faster builds.

TL;DR:

1. On Linux, make sure you have lld installed, because it's a *much* faster
linker than gold, and it's now used by default if installed.

2. Upgrade your sccache to version 0.2.7 to get faster rebuilds of changed
Rust files.


DETAILS:

1. lld was made the default linker on Linux (when installed) in
https://bugzilla.mozilla.org/show_bug.cgi?id=1473436. Thank you, glandium!

If you don't already have it installed, run `mach bootstrap` to get a copy
in ~/.mozbuild/clang/bin/, which you can then add to your PATH.
Alternatively `mach artifact toolchain --from-build linux64-clang` will
download lld (and other clang tools) under the current directory.

To confirm that lld is being used, look for this output from configure:

  checking for linker... lld

On other platforms, lld is not yet the default but you *might* be able to
use it.

On Windows, again use `mach bootstrap` and add this to your mozconfig:

  export LINKER=lld-link

This used to cause problems with debugging (bug 1458109) but that has since
been fixed.

On Mac, if you have lld installed, add this to your mozconfig:

  export LDFLAGS=-fuse-ld=lld

but it might cause build errors, such as "No rule to make target
`libmozavutil_dylib.list', needed by `libmozavutil.dylib`".

https://bugzilla.mozilla.org/show_bug.cgi?id=1384434 is the tracking bug
for making lld the default on all platforms. Any bugs filed about problems
should block that bug.

Also, if you are building a Rust-only project, something like this might
work (on Linux; I'm not sure about other builds):

  RUSTFLAGS="-Clinker=clang -Clink-arg=-fuse-ld=lld" cargo build

See https://github.com/rust-lang/rust/issues/50584#issuecomment-398988026
for an example improvement.

https://github.com/rust-lang/rust/issues/39915 is the issue for making lld
the default linker for Rust.

On non-Linux platforms, I recommend testing with the resulting builds
before you place full confidence in them.


2. When the Rust compiler is invoked with incremental compilation (which
happens in any Firefox build that doesn't have --enable-release) sccache
0.2.7 will skip the file, which is good because incremental compilation has
much the same effect, and sccache used to cause significant slowdowns in
the cases where a cache miss occurred. Thank you to ted for this
improvement!

For example, on my fast Linux box, if I `touch`
servo/components/style/stylist.rs and rebuild (resulting in an sccache
cache hit), I get these times:
- old sccache disabled: 28s
- old sccache enabled: 25s
- new sccache enabled: 28s

I.e. sccache's new behaviour causes a tiny slowdown.

But if I insert a comment at the top of that file (resulting in an sccache
cache miss), I get these times:
- old sccache disabled: 37s
- old sccache enabled: 1m53s(!)
- new sccache enabled: 37s

I.e. sccache's new behaviour is a big win. And this "make a small change
and recompile" case is extremely common.

See https://github.com/mozilla/sccache/issues/257 for more details.

Nick


Re: XPIDL trajectory

2018-06-26 Thread Nicholas Nethercote
Thanks for the tip. I've updated the last part of the script accordingly:

# Count number of bytes in the xptdata.o file.
cd o64
echo -n "xptdata.o bytes: "
wc --bytes xpcom/reflect/xptinfo/xptdata.o
cd ..

The current measurements are now:

Tue, Jun 26, 2018: m-i 423583:4a20ed6e2fee
.idl files: 905
.idl lines: 97278 total
xptdata.o bytes: 1139160

Nick

On Tue, Jun 26, 2018 at 8:50 PM, Nika Layzell  wrote:

> Measuring the size of the xpt files is no longer as relevant. Since mccr8
> and I rewrote xptinfo, they're only used during the build step, and are not
> bundled. They're combined into a single .cpp file (xptdata.cpp) which
> generates shared static pages (mostly) without relocations :-).
>
> Perhaps measuring the size of xptdata.o could be more useful?
>
> - Nika
>
> On Tue, Jun 26, 2018, 12:31 AM Nicholas Nethercote 
> wrote:
>
>> Hi,
>>
>> After Firefox 57 removed support for legacy extensions, I decided to
>> (roughly) track how much XPIDL code we have. Here are some measurements:
>>
>> Fri, Aug 4, 2017: m-i 372493:72873c109b1b
>> .idl files: 1167
>> .idl lines: 110240 total
>> .xpt bytes: 417621 total
>>
>> Thu, Aug 17, 2017: m-i 375206:7a794cd2aee1
>> .idl files: 1150
>> .idl lines: 108854 total
>> .xpt bytes: 412984 total
>>
>> Tue, Sep 26, 2017: m-i 382849:42a6a1c9c4cf
>> .idl files: 1122
>> .idl lines: 107963 total
>> .xpt bytes: 411283 total
>>
>> Tue, Nov 14, 2017: m-i 391533:479f3105ad3b
>> .idl files: 1087
>> .idl lines: 106809 total
>> .xpt bytes: 409510 total
>>
>> Thu, Feb 01, 2018: m-i 401956:8ebdf597ade8
>> .idl files: 1027
>> .idl lines: 103800 total
>> .xpt bytes: 398695 total
>>
>> Tue, Jun 26, 2018: m-i 423583:4a20ed6e2fee
>> .idl files: 905
>> .idl lines: 97278 total
>> .xpt bytes: 3717958 total
>>
>> The trend is clearly down, except for the large increase in .xpt size for
>> the most recent measurement -- note the extra digit! It appears that .xpt
>> files used to be binary, and now they are JSON. This might be related to
>> mccr8's recent XPT overhaul (bug 1438688)?
>>
>> The script I use for this is below, for those who are interested.
>>
>> Nick
>>
>>
>> #! /bin/sh
>> #
>> # Measures various XPIDL things in ./ and o64/. (It assumes o64/ is the
>> # objdir.) I put the results in ~/moz/txt/xpidl.txt.
>>
>> if [ ! -d o64 ] ; then
>> echo "o64/ doesn't exist"
>> return 1
>> fi
>>
>> # Count number of .idl files, excluding ones in objdirs, testing ones, and
>> # mortar ones.
>> echo -n ".idl files: "
>> find . -name '*.idl' | \
>> grep -v "64[a-z]*\/dist" | grep -v "testing\/" | grep -v "mortar\/" | \
>> wc --lines
>>
>> # Count number of lines in those .idl files.
>> echo -n ".idl lines: "
>> find . -name '*.idl' | \
>> grep -v "64[a-z]*\/dist" | grep -v "testing\/" | grep -v "mortar\/" | \
>> xargs wc --lines | tail -1
>>
>> # Count number of bytes in .xpt files.
>> cd o64
>> echo -n ".xpt bytes: "
>> find . -name '*.xpt' | xargs wc --bytes | tail -1
>> cd ..
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>


XPIDL trajectory

2018-06-25 Thread Nicholas Nethercote
Hi,

After Firefox 57 removed support for legacy extensions, I decided to
(roughly) track how much XPIDL code we have. Here are some measurements:

Fri, Aug 4, 2017: m-i 372493:72873c109b1b
.idl files: 1167
.idl lines: 110240 total
.xpt bytes: 417621 total

Thu, Aug 17, 2017: m-i 375206:7a794cd2aee1
.idl files: 1150
.idl lines: 108854 total
.xpt bytes: 412984 total

Tue, Sep 26, 2017: m-i 382849:42a6a1c9c4cf
.idl files: 1122
.idl lines: 107963 total
.xpt bytes: 411283 total

Tue, Nov 14, 2017: m-i 391533:479f3105ad3b
.idl files: 1087
.idl lines: 106809 total
.xpt bytes: 409510 total

Thu, Feb 01, 2018: m-i 401956:8ebdf597ade8
.idl files: 1027
.idl lines: 103800 total
.xpt bytes: 398695 total

Tue, Jun 26, 2018: m-i 423583:4a20ed6e2fee
.idl files: 905
.idl lines: 97278 total
.xpt bytes: 3717958 total

The trend is clearly down, except for the large increase in .xpt size for
the most recent measurement -- note the extra digit! It appears that .xpt
files used to be binary, and now they are JSON. This might be related to
mccr8's recent XPT overhaul (bug 1438688)?

The script I use for this is below, for those who are interested.

Nick


#! /bin/sh
#
# Measures various XPIDL things in ./ and o64/. (It assumes o64/ is the
# objdir.) I put the results in ~/moz/txt/xpidl.txt.

if [ ! -d o64 ] ; then
    echo "o64/ doesn't exist"
    exit 1
fi

# Count number of .idl files, excluding ones in objdirs, testing ones, and
# mortar ones.
echo -n ".idl files: "
find . -name '*.idl' | \
grep -v "64[a-z]*\/dist" | grep -v "testing\/" | grep -v "mortar\/" | \
wc --lines

# Count number of lines in those .idl files.
echo -n ".idl lines: "
find . -name '*.idl' | \
grep -v "64[a-z]*\/dist" | grep -v "testing\/" | grep -v "mortar\/" | \
xargs wc --lines | tail -1

# Count number of bytes in .xpt files.
cd o64
echo -n ".xpt bytes: "
find . -name '*.xpt' | xargs wc --bytes | tail -1
cd ..
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Windows Address Sanitizer enabled on trunk

2018-06-01 Thread Nicholas Nethercote
This is excellent news! Great work by all those involved.

Nick

On Sat, Jun 2, 2018 at 5:16 AM, David Major  wrote:

> Bug 1360120 on inbound enables Windows ASan builds and tests on trunk
> branches.
>
> Initially these are tier-2 while we confirm that this doesn't
> introduce test flakiness. If nothing catches fire, I intend to bump
> them to tier-1 in the near future.
>
> You can run these jobs on try under the platform name "win64-asan" or
> build locally with the mozconfig here:
> https://developer.mozilla.org/en-US/docs/Mozilla/Testing/Firefox_and_Address_Sanitizer#Creating_local_builds_on_Windows
>
> If you're building ASan locally on Windows 10 version 1803, you'll
> want to run mach bootstrap to pick up a fresh clang with an
> 1803-specific fix.
>
> This platform has taken several years to stand up. Thank you to
> everyone who helped out, especially Ehsan for getting this started and
> Ting-Yu for working through a bunch of hurdles on automation.
>
> Happy sanitizing!
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Update on rustc/clang goodness

2018-05-29 Thread Nicholas Nethercote
On Wed, May 30, 2018 at 7:58 AM, Gregory Szorc  wrote:

>
> MSVC is a separate beast. It is a great compiler.
>

FWIW: numerous times I have been stymied by MSVC's bugs or lack of
features. Bug 1449787 is a recent example.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Building Firefox 61+ with GCC will soon require version GCC 6.1+

2018-04-05 Thread Nicholas Nethercote
Thank you for working on this, jgilbert.

I tried to take advantage of C++14's relaxed constexpr for bug 1451278, but
I'm getting one test job failure on automation, visible here:

https://treeherder.mozilla.org/#/jobs?repo=try&revision=bcd8e01989d987268cfb6beb7f86e948eae3730d&selectedJob=172004924

It's the "spidermonkey-sm-rust-bindings-linux64/debug" job, which is
presumably still using an old version of GCC. It's a tier 2 job. Does
anyone know if it's important, and if so, how it should be updated?

Thanks.

Nick

On Tue, Apr 3, 2018 at 10:30 AM, Jeff Gilbert  wrote:

> Bug 1444274 will bump our minimum GCC version to 6.1. GCC-7 will
> continue to work.
>
> If you build with GCC instead of Clang on Linux, I've been told that
> the system gcc package for Ubuntu 16.04 LTS is gcc-5, so very soon
> you'll need to install a gcc-6 package to continue to build.
>
> With a bump to GCC 6, along with the bump to vs2017, we should be able
> to start relying on c++14 features. This will help out with tracking
> upstream projects which are starting to rely in c++14, as well as
> letting us phase in new tools into our c++ toolbox. (relaxed constexpr
> is a big one)
> ___
> dev-builds mailing list
> dev-bui...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-builds
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


dom/ipc/ContentPrefs.{h,cpp} are gone

2018-03-19 Thread Nicholas Nethercote
Greetings,

For some time we have had a whitelist of prefs that get sent to content
processes very early, in dom/ipc/ContentPrefs.cpp, not via the standard IPC
channels. There is also some checking machinery that makes sure that prefs
not on that list aren't accessed too early. It is a significant piece of
e10s technical debt that often trips people up (e.g. bug 1432979, bug
1439406).

I just landed the patch in bug 1436911, which eliminates the entire
mechanism by sending all necessary prefs early.
dom/ipc/ContentPrefs.{h,cpp} have been removed.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


nsGkAtoms.h has moved

2018-03-14 Thread Nicholas Nethercote
Hi,

I just landed bug 1445142, which moves dom/base/nsGkAtoms.h to
xpcom/ds/nsGkAtoms.h.

The reason for this move:

- These atoms aren't really DOM-specific.

- The move to xpcom/ means they can be registered immediately after the
atom table is created, instead of waiting until
nsLayoutStatics::Initialize() runs. This in turn let me remove a special
case involving duplication of the empty atom. This in turn will let me
further optimize the representation of static atoms.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Web-Feed subscription improvements

2018-03-14 Thread Nicholas Nethercote
On Wed, Mar 14, 2018 at 2:26 AM,  wrote:

>
> PS: I'm sorry if this post violates any posting guidelines, I'm not
> accustomed to newsgroups.
>

It doesn't violate guidelines. In fact, your post was the very model of a
good post: polite, friendly, and informed. Posts from users who are having
trouble with Firefox functionality are not always like that. Thank you!

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Prefs overhaul

2018-03-12 Thread Nicholas Nethercote
The old code read the entire file into memory before starting parsing. In
one place it did this with URLPreloader::ReadZip() (for default pref
files), and in another place it did this with
URLPreloader::ReadFile() (for user pref files). It was simplest to leave
that code unchanged.

Nick


On Mon, Mar 12, 2018 at 9:16 PM, David Teller <dtel...@mozilla.com> wrote:

> Out of curiosity, why is the read handled by C++ code?
>
> On 12/03/2018 10:38, Nicholas Nethercote wrote:
> > I don't know. But libpref's file-reading is done by C++ code, which
> passes
> > a string to the Rust code for parsing.
> >
> > Nick
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Prefs overhaul

2018-03-12 Thread Nicholas Nethercote
On Sun, Mar 11, 2018 at 4:24 AM, ISHIKAWA,chiaki 
wrote:

>
> I have noticed that you mentioned rewrite in Rust.
>
> Does the I/O primitive Rust uses handle the following "short read" case
> nicely?
>

I don't know. But libpref's file-reading is done by C++ code, which passes
a string to the Rust code for parsing.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: PSA: Chrome-only WebIDL interfaces no longer require DOM peer review

2018-03-09 Thread Nicholas Nethercote
On Sat, Mar 10, 2018 at 11:11 AM, Myk Melez  wrote:

> If we removed XPConnect bindings entirely and converted XPIDL interfaces
> used only by C++ into concrete native classes, then what else would
> continue to need XPCOM?
>

What's your definition of XPCOM? Look in xpcom/, there is a ton of stuff in
there that is unrelated to XPIDL...

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Prefs overhaul

2018-03-06 Thread Nicholas Nethercote
Hi,

I've been doing a lot of work to improve libpref recently, and there is
more in the pipeline. I've written a brief roadmap that might be of
interest to people: https://wiki.mozilla.org/Platform/PrefsOverhaul

I'd be happy to hear feedback, via this list or other means. Thanks.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New prefs parser has landed

2018-02-20 Thread Nicholas Nethercote
On Sat, Feb 3, 2018 at 9:01 AM, Boris Zbarsky <bzbar...@mit.edu> wrote:

> On 2/2/18 4:57 PM, Nicholas Nethercote wrote:
>
>> It shouldn't be too hard, because the prefs grammar is very simple. I
>> would
>> just implement "panic mode" recovery, which scans for a synchronizing
>> token
>> like ';' or 'pref' and then continues from there.
>>
>
> OK.  I think that would address my concerns, for sure.
>

I just landed error recovery:
https://bugzilla.mozilla.org/show_bug.cgi?id=107264 (filed in 2001!)

And we had a real-world case where the new strictness (integer overflow
detection) was causing issues:
https://bugzilla.mozilla.org/show_bug.cgi?id=1438878

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: How to pass data from to a content process when it starts up?

2018-02-15 Thread Nicholas Nethercote
Thank you! That was exactly the info I needed, and I have it working
locally.

However, I didn't do exactly as you suggested -- passing
LaunchOptions to SandboxBroker::LaunchApp dragged me into #include hell:

- base::LaunchOptions must be visible in
  security/sandbox/win/src/sandboxbroker/sandboxBroker.{h,cpp}, which
requires
  including ipc/chromium/src/base/process_util.h.

- We end up including headers from two significantly different versions of
the
  Chromium code: ipc/chromium/base/process_util.h and
  security/sandbox/chromium/sandbox/win/src/sandbox.h and we hit trouble
  with repeated declarations, e.g.:
  - DISALLOW_COPY_AND_ASSIGN, from ipc/chromium/src/base/basictypes.h and
security/sandbox/chromium/base/macros.h
  - DISALLOW_IMPLICIT_CONSTRUCTORS, likewise
  - LinkerInitialized
  - etc.

To avoid this, I chose to pass in a std::vector instead of a
LaunchOptions to SandboxBroker::LaunchApp.

Nick

On Wed, Feb 14, 2018 at 8:23 PM, <bo...@mozilla.com> wrote:

> Hi Nick,
>
> SandboxBroker::AddHandleToShare was added to add the handles to the
> sandbox policy, before it was realised that we'd need to do this for the
> non-sandboxed process launch as well, hence LaunchOptions::handles_to_
> inherit.
>
> I think we should change [1] to pass the LaunchOptions and then use them
> within SandboxBroker::LaunchApp to add the handles to the policy and get
> rid of SandboxBroker::AddHandleToShare.
>
> Cheers,
> Bob
>
> [1] https://searchfox.org/mozilla-central/rev/d03ad8843e3bf2e856126bc53b0475c595e5183b/ipc/glue/GeckoChildProcessHost.cpp#1046
>
> On Wednesday, 14 February 2018 07:02:43 UTC, Nicholas Nethercote  wrote:
> > Hi,
> >
> > When a content process is started, a bunch of pref values are sent via
> > some -intPrefs/-boolPrefs/-stringPrefs arguments on the command line.
> This
> > is
> > ugly and limiting and causes multiple problems, so I'd like to find a
> > different
> > way to send this data.
> >
> > The use case is pretty simple, because it's one way data transfer. The
> > important thing is that it must happen very early, i.e. before the normal
> > IPC
> > mechanism gets going.
> >
> > I figured shared memory would be a reasonable way to do this, something
> like
> > the following.
> >
> > - The parent sets up a shared memory segment, and writes some data to it.
> >
> > - The parent spawns the child, and passes identifying information about
> the
> >   shared memory segment to the child (e.g. via the command line).
> >
> > - The child gets the shared memory segment identifier, uses it to open the
> >   segment, and reads the data.
> >
> > - The child disposes of the shared memory segment.
> >
> > At first I tried using NSPR's shared memory functions, but they're not
> used
> > anywhere else in Firefox, and they have bugs, and shmget() is blocked by
> the
> > Linux sandbox.
> >
> > So then I tried using base::SharedMemory instead, from
> > ipc/chromium/src/base/shared_memory.h, basically like this:
> >
> > Parent:
> > SharedMemory shm;
> > shm.Create(name, size);
> > shm.Open();
> > shm.Map();
> > char* p = shm.memory();
> > ... write data to p ...
> > ... launch child process, passing `name` via cmdline ...
> > shm.Unmap();// done automatically when shm is destroyed
> > shm.Close();// done automatically when shm is destroyed
> >
> > Child:
> > ... get `name` from the command line...
> > SharedMemory shm;
> > shm.Open(name);
> > shm.Map();
> > char* p = shm.memory();
> > ... read data from p ...
> > shm.Delete();   // this is a no-op on Windows
> > shm.Unmap();// done automatically when shm is destroyed
> > shm.Close();// done automatically when shm is destroyed
> >
> > This works fine on Unix. If the shared memory file is closed by the
> parent
> > before it's opened by the child, that's ok, because it persists until it
> is
> > explicitly deleted.
> >
> > But it doesn't work on Windows. On Windows Delete() does nothing.
> Instead, a
> > file mapping object is auto-deleted when its refcount falls to zero. So
> if
> > the
> > parent calls Close() before the child calls Open() -- which happens in
> > practice -- then the file mapping object is auto-deleted and the child
> > Open()
> > fails.
> >
> > If I change the parent to heap-allocate `shm` so it's not auto-destroyed
> at
> > the
> > end of the function, things work out, but we'll end up with leaks: the
> > SharedMemory 

How to pass data from to a content process when it starts up?

2018-02-13 Thread Nicholas Nethercote
Hi,

When a content process is started, a bunch of pref values are sent via
some -intPrefs/-boolPrefs/-stringPrefs arguments on the command line. This
is
ugly and limiting and causes multiple problems, so I'd like to find a
different
way to send this data.

The use case is pretty simple, because it's one way data transfer. The
important thing is that it must happen very early, i.e. before the normal
IPC
mechanism gets going.

I figured shared memory would be a reasonable way to do this, something like
the following.

- The parent sets up a shared memory segment, and writes some data to it.

- The parent spawns the child, and passes identifying information about the
  shared memory segment to the child (e.g. via the command line).

- The child gets the shared memory segment identifier, uses it to open the
  segment, and reads the data.

- The child disposes of the shared memory segment.

At first I tried using NSPR's shared memory functions, but they're not used
anywhere else in Firefox, and they have bugs, and shmget() is blocked by the
Linux sandbox.

So then I tried using base::SharedMemory instead, from
ipc/chromium/src/base/shared_memory.h, basically like this:

Parent:
SharedMemory shm;
shm.Create(name, size);
shm.Open();
shm.Map();
char* p = shm.memory();
... write data to p ...
... launch child process, passing `name` via cmdline ...
shm.Unmap();    // done automatically when shm is destroyed
shm.Close();    // done automatically when shm is destroyed

Child:
... get `name` from the command line...
SharedMemory shm;
shm.Open(name);
shm.Map();
char* p = shm.memory();
... read data from p ...
shm.Delete();   // this is a no-op on Windows
shm.Unmap();    // done automatically when shm is destroyed
shm.Close();    // done automatically when shm is destroyed

This works fine on Unix. If the shared memory file is closed by the parent
before it's opened by the child, that's ok, because it persists until it is
explicitly deleted.

But it doesn't work on Windows. On Windows Delete() does nothing. Instead, a
file mapping object is auto-deleted when its refcount falls to zero. So if
the
parent calls Close() before the child calls Open() -- which happens in
practice -- then the file mapping object is auto-deleted and the child
Open()
fails.

If I change the parent to heap-allocate `shm` so it's not auto-destroyed at
the
end of the function, things work out, but we'll end up with leaks: the
SharedMemory object, opened file view, the handle, and the file mapping will
all leak.

I then tried using SharedMemory::ShareToProcess(), but that requires that
the
child process already exist and the parent has its PID, which is a pain.

I then found this blog post from Raymond Chen:
https://blogs.msdn.microsoft.com/oldnewthing/20031211-00/?p=41543

It describes exactly what I want, but it requires using the bInheritHandle
parameter, which base::SharedMemory doesn't do. I could change it to do so,
but then it looks like I'd need to deal with
LaunchOptions::handles_to_inherit as well... and at this point I figure it's
worth asking for help!

Does anybody have suggestions about the best way to do this? Thanks.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New prefs parser has landed

2018-02-02 Thread Nicholas Nethercote
On Sat, Feb 3, 2018 at 12:42 AM, Boris Zbarsky  wrote:

>
> Perhaps I should implement error recovery, so that ill-formed prefs won't
>> cause subsequent prefs to be lost.
>>
>
> You mean pick up parsing again after hitting an error?   That sounds
> complicated...
>

It shouldn't be too hard, because the prefs grammar is very simple. I would
just implement "panic mode" recovery, which scans for a synchronizing token
like ';' or 'pref' and then continues from there. It's not foolproof but
works well in many cases.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New prefs parser has landed

2018-02-01 Thread Nicholas Nethercote
On Fri, Feb 2, 2018 at 2:27 PM, Nicholas Nethercote <n.netherc...@gmail.com>
wrote:

> It might be possible that you could use about:config or the prefs API to
> set a string pref that contains an invalid escape sequence that the parser
> will subsequently reject. I will check that.
>

Thinking (and testing) some more: I don't think this is possible, because
we escape strings appropriately when writing them.

So I think the only way to get data loss is if a user hand-edits prefs.js
and introduces a syntax error, whereupon all subsequent prefs in the file
will be lost. (Which has always been true; the only thing that's changed is
what constitutes a syntax error.) Error recovery would eliminate that data
loss possibility.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New prefs parser has landed

2018-02-01 Thread Nicholas Nethercote
On Fri, Feb 2, 2018 at 12:50 PM, Boris Zbarsky <bzbar...@mit.edu> wrote:

> On 2/1/18 6:40 PM, Nicholas Nethercote wrote:
>
>> - prefs.js, which is written by Firefox. Firefox should always generate
>> data that is accepted by the new parser
>>
>
> OK.  I assume we've double-checked that?  And that this was the case all
> along, for prefs extensions or about:config might have set in the past?
> (Especially the no-embedded-null thing.)
>

I don't know what you mean by "prefs extensions". It might be possible that
you could use about:config or the prefs API to set a string pref that
contains an invalid escape sequence that the parser will subsequently
reject. I will check that.


> We could make a backup copy of prefs.js if an error is discovered in it,
> so that when we write it out again the old thing can still be recovered if
> needed.


Perhaps I should implement error recovery, so that ill-formed prefs won't
cause subsequent prefs to be lost.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New prefs parser has landed

2018-02-01 Thread Nicholas Nethercote
On Fri, Feb 2, 2018 at 11:32 AM, Xidorn Quan  wrote:

>
> One approach would probably be showing a warning UI to tell people that we
> fail to parse user.js and ask them to fix it.
>
> Not sure how complicated that approach would be.
>

This suggestion came up in one of the old bugs on this topic. One argument
against it was that libpref shouldn't depend on or be able to call UI code.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New prefs parser has landed

2018-02-01 Thread Nicholas Nethercote
On Fri, Feb 2, 2018 at 10:32 AM, Justin Dolske  wrote:

>
> Can I presume that things which now trigger parsing issues are escaped or
> errored by the relevant prefs APIs? e.g. if a caller tries to set a pref
> name or value with an embedded NULL?
>

Only the parser has changed. The other APIs are the same.

If you try to use an embedded NUL via a C++ API function, it'll just be
treated like the string ends at that NUL. I'm not sure what will happen if
you use an embedded NUL via a JS API function, though it should be the same
behaviour as before.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New prefs parser has landed

2018-02-01 Thread Nicholas Nethercote
On Fri, Feb 2, 2018 at 9:47 AM, Boris Zbarsky  wrote:

>
> What happens if a user's prefs file has things that were OK but now fail?
> Is it effectively dataloss in that we will not parse anything after that
> and then write out the modified pref file with all the later things missing?
>

There are two kinds of "user's prefs".

- prefs.js, which is written by Firefox. Firefox should always generate
data that is accepted by the new parser, so I don't think there is a
problem there. (If users do hand-edit, there might be problems. But that
file has a big "Do not edit this file" warning at the top, so I'm not too
worried about that case.)

- user.js, which is hand-written by users. If there is a syntax error in
this file the parser will issue an error message[*], abort, and any prefs
after the syntax error will be ignored, but Firefox will otherwise run
normally.

So I don't think data loss is a concern, but user.js prefs that were
previously accepted might now be ignored.

[*] One tricky question is what to do with syntax errors. The current
behaviour is here:
https://searchfox.org/mozilla-central/source/modules/libpref/Preferences.cpp#1000-1011.
It writes the error to the browser console, but if that fails, prints it to
stderr instead. And then it also does an NS_WARNING. This means that errors
are easy to overlook, especially in user.js. Do you have suggestions on a
better approach? Do you think the parser should do error recovery? Bug
107624 has some context for this.

- The old parser allowed integer literals to overflow, silently wrapping
>> them.
>>
>>The new parser treats integer overflow as a parse error.
>>
>
> This seems like a footgun for users who set numeric prefs by hand...
>

Silently wrapping was also a footgun that came up in practice in bug
1424030 and bug 1434813. IMO it's better to give a clear error than to
silently do the wrong thing...

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


New prefs parser has landed

2018-02-01 Thread Nicholas Nethercote
Hi,

I just landed https://bugzilla.mozilla.org/show_bug.cgi?id=1423840, which
replaces the old prefs parser with a new one.

The new parser is faster, safer, gives better and more accurate error
messages, and is *much* easier to maintain.

It is also slightly stricter; see the list at the bottom of this email for
details (copied from the commit message). This increased strictness
required fixes in a few places (in Firefox, Talos, and Thunderbird) where
existing prefs files were doing bogus things that the old parser accepted,
such as junk characters between prefs, and integer literal values that
silently overflowed because they didn't fit in 32 bits.

Please keep an eye out for any other problems that might arise from this
change.

Nick


The new parser is slightly stricter than the old parser in a few ways.

- Disconcertingly, the old parser allowed arbitrary junk between prefs
  (including at the start and end of the prefs file) so long as that junk
  didn't include any of the following chars: '/', '#', 'u', 's', 'p'. I.e.
  lines like these:

!foo@bar("prefname", true);
ticky_pref("prefname", true);   // missing 's' at start
User_pref("prefname", true);// should be 'u' at start

  would all be treated the same as this:

pref("prefname", true);

  The new parser disallows such junk because it isn't necessary and seems
like
  an unintentional botch by the old parser.

- The old parser allowed character 0x1a (SUB) between tokens and treated it
  like '\n'.

  The new parser does not allow this character. SUB was used to indicate
  end-of-file (*not* end-of-line) in some old operating systems such as
MS-DOS,
  but this doesn't seem necessary today.

- The old parser tolerated (with a warning) invalid escape sequences within
  string literals -- such as "\q" (not a valid escape) and "\x1" and "\u12"
  (both of which have insufficient hex digits) -- accepting them literally.

  The new parser does not tolerate invalid escape sequences because it
doesn't
  seem necessary and would complicate things.

- The old parser tolerated character 0x00 (NUL) within string literals;
this is
  dangerous because C++ code that manipulates string values with embedded
NULs
  will almost certainly consider those chars as end-of-string markers.

  The new parser treats NUL chars as end-of-file, to avoid this danger and
  because it facilitates a significant optimization (described within the
  code).

- The old parser allowed integer literals to overflow, silently wrapping
them.

  The new parser treats integer overflow as a parse error. This seems
better,
  and it caught existing overflows of places.database.lastMaintenance, in
  testing/profiles/prefs_general.js (bug 1424030) and
  testing/talos/talos/config.py (bug 1434813).
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Revised proposal to refactor the observer service

2018-01-28 Thread Nicholas Nethercote
I have similar issues with some changes to prefs that I am planning. I too
am probably going to end up with a split implementation, with "good prefs"
(faster, better) for C++ usage and "crappy prefs" (the current
implementation) for JS usage.

https://bugzilla.mozilla.org/show_bug.cgi?id=1425909 is hitting the same
issue with telemetry probes.

In short, I know that front-end devs love artifact builds dearly, but they
are a significant constraint for these components that have long lists of
things being tracked. It's a bit frustrating.

Nick

On Thu, Jan 18, 2018 at 2:47 AM, Gabriele Svelto 
wrote:

>  Hello all,
> I've tried to take into account all the suggestions from the discussion
> of my previous proposal [1] and I've come up with a new plan which
> should cover all bases. Don't hesitate to point out things that might
> still be problematic, since this is going to be a large refactoring it's
> better to get it right from the start.
>
> The biggest sticking point of my previous proposal was that it would
> prevent any modifications to the observer list from artifact builds.
> Addressing that particular issue effectively requires to keep the
> existing service in place so instead of replacing it altogether I'd do
> the following:
>
> 1) Introduce a new observer service that would live alongside the
> current one (nsIObserverService2?). This will use a machine-generated
> list of topics that will be held within the interface itself instead of
> a separate file as I originally proposed. This will become possible
> thanks to bug 1428775 [2]. The only downside of this is that the C++
> code will not use an enum but just integer constants. The upside is that
> this will need only one code generator and only one output file (the
> IDL) versus two generators and three files in my original proposal.
>
> 2) Migrate all C++-only and mixed C++/JS users to use the new service.
> Since the original service would still be there this can be done
> incrementally. Leave JS-only users alone.
>
> 3) Consider writing a JS-only pub/sub service that would be a better fit
> than the current observer service. If we can come up with something
> that's better than the observer service for JS then it can be used to
> retire the old service for good.
>
> So, how does this sound?
>
>  Gabriele
>
> [1]
> https://lists.mozilla.org/pipermail/dev-platform/2018-January/020935.html
> [2] Make it possible to generate IDL files
> https://bugzilla.mozilla.org/buglist.cgi?quicksearch=1428775
>
>
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


All about Prefs

2017-12-17 Thread Nicholas Nethercote
Hi,

I recently became the module owner of Prefs, and there are several peers.
In the past it was an ill-maintained module, but that is no longer the case.

I have been doing a bunch of work to improve Prefs. As part of this I wrote
a document that describes how they currently work, and some of their
problems:
https://docs.google.com/document/d/1V5Wc9bXwfgMG2JOsswvDPVwZl_xiaSBxwJXf3fiEaU8/

I am working on a redesign that will fix a lot of the problems. There was a
meeting about this in Austin. I hope to implement this in Q1 2018.

By chance, I just discovered that a new DOMPreferences class has been
created in bug 1419771.
As far as I can tell, it was created and reviewed by DOM peers without any
input from Prefs peers. Although the changes are all within dom/, it was
still an unpleasant surprise, because this is an additional thing I'm going
to have to deal with in my redesign, on top of the existing gfxPrefs and
MediaPrefs classes.

If you have changes related to Prefs, please let me (or another Prefs peer)
know about it. Thanks.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Proposal to remove some preferences override support

2017-11-01 Thread Nicholas Nethercote
Greetings,

In https://bugzilla.mozilla.org/show_bug.cgi?id=1413413 I am planning to
remove a couple of things relating to preferences.

1) Remove the defaults/preferences directory support for extensions. This
is a feature that was used by legacy extensions but is not used by
WebExtensions.

2) Remove the "preferences" override directory in the user profile.
This removes
support for profile preferences override files other than user.js.

The bug has a patch with r+. The specific things it removes include:
- The "load-extension-default" notification.
- The NS_EXT_PREFS_DEFAULTS_DIR_LIST/"ExtPrefDL" directory list, including
the entry from the toolkit directory service.

Does anybody foresee any problems with this change?

Thanks.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsAutoConfig

2017-10-31 Thread Nicholas Nethercote
Thank you for the info! I will leave that code alone :)

Nick

On Wed, Nov 1, 2017 at 2:22 AM, <mka...@mozilla.com> wrote:

> On Tuesday, October 31, 2017 at 12:10:21 AM UTC-5, Nicholas Nethercote
> wrote:
> > Hi,
> >
> > I was just looking at the extensions/pref/autoconfig/ directory and
> trying
> > to understand what it does.
> >
> > As far as I can tell, the code is there to allow custom deployments with
> > particular prefs set, as described at
> > https://developer.mozilla.org/en-US/Firefox/Enterprise_deployment#Configuration
> >
> > It's initialized by modules/libpref/Preferences.cpp if a
> > "general.config.filename" pref is found:
> > http://searchfox.org/mozilla-central/rev/1ebd2eff44617df3b82eea7d2f3ca1b60cc591a0/modules/libpref/Preferences.cpp#3907-3920
> >
> > There is also some MozHarness code relating to it here:
> > http://searchfox.org/mozilla-central/source/testing/mozharness/mozharness/mozilla/firefox/autoconfig.py
> >
> > Just to check: is this still supported functionality?
> >
> > Thanks.
> >
> > Nick
>
> Yes, this is very much supported functionality. It is our primary means of
> enterprise customization right now.
>
> The reason there are very few tests is because when it was first
> developed, it wasn't possible to write tests that involved putting down
> files before startup. There are some tests now:
>
> http://searchfox.org/mozilla-central/source/extensions/pref/autoconfig/test/unit
>
> Mike Kaply
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


nsAutoConfig

2017-10-30 Thread Nicholas Nethercote
Hi,

I was just looking at the extensions/pref/autoconfig/ directory and trying
to understand what it does.

As far as I can tell, the code is there to allow custom deployments with
particular prefs set, as described at
https://developer.mozilla.org/en-US/Firefox/Enterprise_deployment#Configuration

It's initialized by modules/libpref/Preferences.cpp if a
"general.config.filename" pref is found:
http://searchfox.org/mozilla-central/rev/1ebd2eff44617df3b82eea7d2f3ca1b60cc591a0/modules/libpref/Preferences.cpp#3907-3920

There is also some MozHarness code relating to it here:
http://searchfox.org/mozilla-central/source/testing/mozharness/mozharness/mozilla/firefox/autoconfig.py

Just to check: is this still supported functionality?

Thanks.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fennec now builds with clang instead of gcc

2017-10-29 Thread Nicholas Nethercote
Excellent news! Thank you for completing this. Updating toolchains isn't
glamorous work but it is important and helps everybody.

Nick

On Mon, Oct 30, 2017 at 10:15 AM, Nathan Froyd  wrote:

> Hi all,
>
> Bug 1163171 has been merged to mozilla-central, moving our Android
> builds over to using clang instead of GCC.  Google has indicated that
> the next major NDK release will render GCC unsupported (no bugfixes
> will be provided), and that it will be removed entirely in the near
> future.  Switching to clang now makes future NDK upgrades easier,
> provides for better integration with the Android development tools,
> and brings improvements in performance/code size/standards support.
>
> For non-Android platforms, the good news here is that compiling Fennec
> with clang was the last major blocker for turning on C++14 support.
> Using clang on Android also opens up the possibility of running our
> static analyses on Android.
>
> If you run into issues, please file bugs blocking bug 1163171.
>
> Thanks,
> -Nathan
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to remove client.mk

2017-10-27 Thread Nicholas Nethercote
This is excellent news.

Relatedly, I want to particularly say that I think moz.build files are
great. The syntax and semantics are very clear. They're easy to modify.
They handle both simple cases and complex cases well. They pretty much
never get in the way, which is exactly what a developer wants from a build
system. The contrast with Makefiles is... significant.

Nick

On Sat, Oct 28, 2017 at 10:16 AM, Gregory Szorc  wrote:

> client.mk has existed since 1998
> (https://hg.mozilla.org/experimental/mozilla-central-cvs/rev/0a5e1fa423bd).
> Before `mach`,
> client.mk was the recommended way to build Firefox and perform other
> build tasks. client.mk has rarely changed in recent years because the
> build system maintainers have been working around it and because not much
> has changed around how the very early parts of "invoke the build system"
> work.
>
> The build system maintainers are making a significant push towards
> supporting building Firefox without GNU make in this and future quarters.
>
> Since client.mk is a make file and we want to not require make to build
> Firefox, client.mk is incompatible with our vision for the future of the
> Firefox build system. Furthermore, client.mk represents an ancient piece
> of build system infrastructure and its convoluted content creates
> consternation and inhibits forward progress. For these reasons, I'm
> announcing the intent to remove client.mk.
>
> If you have tools, infrastructure, workflows, etc that use client.mk - or
> any direct use of `make` for that matter - please transition them to `mach`
> commands and/or check with a build peer regarding your use case. You can
> file bugs regarding client.mk dependencies against bug 1412398.
>
> Thank you for your understanding.
>
> ___
> dev-builds mailing list
> dev-bui...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-builds
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: nsIAtom has been deCOMtaminated and is now called nsAtom

2017-10-08 Thread Nicholas Nethercote
On Mon, Oct 9, 2017 at 1:32 PM, Kyle Huey  wrote:

>
> You couldn't get to mozilla::Atom?
>

See https://bugzilla.mozilla.org/show_bug.cgi?id=1400460#c2.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


nsIAtom has been deCOMtaminated and is now called nsAtom

2017-10-08 Thread Nicholas Nethercote
Greetings,

I have been deCOMtaminating nsIAtom over the past two months:
https://bugzilla.mozilla.org/show_bug.cgi?id=1392883.

A big step that landed over a week ago was the devirtualization of nsIAtom,
which means it is no longer a subclass of nsISupports:
https://bugzilla.mozilla.org/show_bug.cgi?id=1400459.

And I just landed (on autoland) the final step of renaming nsIAtom as
nsAtom. This is tracked at
https://bugzilla.mozilla.org/show_bug.cgi?id=1400460.

Apologies for any conflicts or problems caused in outstanding patches. For
patches less than 1.5 weeks old (i.e. post-devirtualization) it's very
likely that simply replacing all nsIAtom occurrences with nsAtom will
suffice. For patches older than that, here is a summary of changes that
might be required.

- nsIAtom --> nsAtom

- nsCOMPtr --> RefPtr

- nsCOMArray --> nsTArray
  - Count() --> Length()
  - ObjectAt() --> ElementAt()
  - AppendObject() --> AppendElement()
  - RemoveObjectAt() --> RemoveElementAt()

- ns*Hashtable -->
  ns*Hashtable

- nsInterfaceHashtable --> nsRefPtrHashtable

  # If the array contains atoms.
- nsCOMPtr --> nsTArray
  - nsArrayBase::Create() --> nsTArray()
  - GetLength() --> Length()
  - do_QueryElementAt() --> operator[]
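
The main enabler for these changes is that a devirtualized nsAtom needs only
plain AddRef()/Release() refcounting, with no nsISupports/QueryInterface
machinery, so RefPtr can manage the concrete type directly. As a rough
standalone sketch of that pattern (toy stand-ins for illustration only, not
the real mozilla::RefPtr or nsAtom):

```cpp
#include <cassert>
#include <cstddef>

// Toy stand-in for a devirtualized atom: intrusive refcount, no nsISupports.
class ToyAtom {
 public:
  explicit ToyAtom(int aValue) : mValue(aValue) {}
  void AddRef() { ++mRefCnt; }
  void Release() {
    if (--mRefCnt == 0) {
      delete this;
    }
  }
  int Value() const { return mValue; }
  std::size_t RefCount() const { return mRefCnt; }

 private:
  std::size_t mRefCnt = 0;
  int mValue;
};

// Toy RefPtr: calls AddRef/Release directly on the concrete type, which is
// all that is needed once QueryInterface is out of the picture.
template <typename T>
class ToyRefPtr {
 public:
  ToyRefPtr() = default;
  explicit ToyRefPtr(T* aPtr) : mPtr(aPtr) {
    if (mPtr) {
      mPtr->AddRef();
    }
  }
  ToyRefPtr(const ToyRefPtr& aOther) : ToyRefPtr(aOther.mPtr) {}
  ~ToyRefPtr() {
    if (mPtr) {
      mPtr->Release();
    }
  }
  ToyRefPtr& operator=(const ToyRefPtr&) = delete;  // kept minimal
  T* operator->() const { return mPtr; }
  T* get() const { return mPtr; }

 private:
  T* mPtr = nullptr;
};
```

So where a patch previously held an atom in an nsCOMPtr, after the rename it
holds the concrete nsAtom in a RefPtr; the copy/destroy semantics are the same
kind of thing as in this sketch.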

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Enable: Automated Static Analysis feedback in MozReview

2017-10-04 Thread Nicholas Nethercote
This sounds interesting!

But it's not analyzing patches that are not using MozReview. Will those
patches be analyzed after landing?

Nick

On Wed, Oct 4, 2017 at 6:17 PM, Jan Keromnes  wrote:

> TL;DR -- We wrote a static analysis bot for MozReview ("clangbot") and it's
> about to complain about any patches that would introduce new C/C++ code
> defects to Firefox.
>
> Please report any bugs with the bot here: https://bit.ly/2y9N9Vx
>
> In an effort to improve the quality of Firefox, we want to catch
> programming errors *before* they even make it into Nightly. To do this, we
> created a TaskCluster bot that runs clang static analysis on every patch
> submitted to MozReview. It then quickly reports any code defects directly
> on MozReview, thus preventing bad patches from landing until all their
> defects are fixed. Currently, its feedback is posted in about 10 minutes
> after a patch series is published on MozReview.
>
> Here is an example of an automated clangbot review:
> https://reviewboard.mozilla.org/r/171868/#review190602
>
> Our bot relies on three types of clang checkers:
>
> - Mozilla specific checkers. They detect incorrect Gecko programming
> patterns which could lead to bugs or security issues.
>
> - Clang-tidy checkers. They aim to
> suggest better programming practices and to improve memory efficiency and
> performance.
>
> - Clang-analyzer checkers. These checks are
> more advanced, for example some of them can detect dead code or memory
> leaks, but as a typical side effect they have false positives. Because of
> that, we have disabled them for now, but will enable some of them in the
> near future.
>
> The checkers that are currently enabled rarely generate false positives,
> and you can find the complete list of enabled checkers in the tree
> (tools/clang-tidy/config.yaml). You can also run them on your own code with:
>
> > ./mach static-analysis check path/to/file.cpp
>
> This is only the first step. Next, we would like to catch more classes of
> programming errors.
>
> - If you know incorrect Gecko programming patterns which could be detected
> by static analysis, please send an email to release-m...@mozilla.com or
> report a bug in the Rewriting and Analysis component.
>
> - In parallel, if you see any additional clang-tidy checkers which could be
> valuable for our code base if enabled, please let us know so that we can
> evaluate them.
>
> - Finally, we are looking into posting reviews to Phabricator in the near
> future as well.
>
> Feedback, questions or suggestions welcome.
>
> Thanks!
>
> Andi, Bastien, Jan and Sylvestre
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Null[C]String() has been renamed Void[C]String()

2017-09-22 Thread Nicholas Nethercote
Hi,

I just landed patches from bug 1401813 that rename Null[C]String() as
Void[C]String(). The commit message has the rationale:

> XPCOM's string API doesn't have the notion of a "null string". But it
> does have
> the notion of a "void string" (or "voided string"), and that's what these
> functions are returning. So the names should reflect that.

This probably won't affect most people, because void strings are a niche
feature.
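
For context, a "void" string in XPCOM is an ordinary (truncated) string
carrying an extra flag, so that "no value at all" can be distinguished from
"empty value". A rough standalone sketch of the idea (a toy class, not the
real XPCOM string code):

```cpp
#include <cassert>
#include <string>

// Toy sketch of the XPCOM "void string" notion: an empty string plus a flag
// that distinguishes "no value" from "empty value".
class ToyString {
 public:
  void Assign(const std::string& aStr) {
    mData = aStr;
    mIsVoid = false;  // assigning a value clears the void flag
  }
  void SetIsVoid(bool aVoid) {
    if (aVoid) {
      mData.clear();  // a voided string holds no characters
    }
    mIsVoid = aVoid;
  }
  bool IsVoid() const { return mIsVoid; }
  bool IsEmpty() const { return mData.empty(); }

 private:
  std::string mData;
  bool mIsVoid = false;
};

// What a VoidString()-style helper returns: a string in the voided state.
ToyString MakeVoidString() {
  ToyString s;
  s.SetIsVoid(true);
  return s;
}
```

Every void string is empty, but not every empty string is void, which is why
"null" was a misleading name for these helpers.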

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to require `mach try` for submitting to Try

2017-09-15 Thread Nicholas Nethercote
I should have been clearer: I'm not so worried about the syntax, so long as
https://mozilla-releng.net/trychooser/ still exists. What's unclear to me
is the revision management/selection side of things.

Nick

On Sat, Sep 16, 2017 at 9:57 AM, Kris Maglione <kmagli...@mozilla.com>
wrote:

> `mach try` is basically just a wrapper around try syntax, with the added
> ability to specify paths to directories you want to add tests for. It
> suffers the same lack of documentation that try syntax in general does.
>
> Trychooser now generates `mach try` command lines in addition to commit
> message syntax, so given that try syntax is supposed to continue to work
> with `mach try`, I don't see why trychooser wouldn't.
>
>
> On Sat, Sep 16, 2017 at 09:51:41AM +1000, Nicholas Nethercote wrote:
>
>> Are there docs on how to use `./mach try`? `./mach help try` doesn't give
>> much detail.
>>
>> Also, will https://mozilla-releng.net/trychooser/ still work? I'm
>> generally
>> more of a command line person than a GUI person, but this is one case
>> where
>> I find the GUI approach far more usable.
>>
>> Nick
>>
>> On Sat, Sep 16, 2017 at 8:30 AM, Gregory Szorc <g...@mozilla.com> wrote:
>>
>> The Try Service ("Try") is a mechanism that allows developers to schedule
>>> tasks in automation. The main API for that service is "Try Syntax" (e.g.
>>> "try: -b o -p linux -u xpcshell"). And the transport channel for making
>>> that API call is a Mercurial changeset's commit message plus a version
>>> control "push" to the "try" repository on hg.mozilla.org.
>>>
>>> As the recent "Reminder on Try usage and infrastructure resources" thread
>>> details, scaling Firefox automation - and Try in particular - is
>>> challenging. In addition, the number of workflows and variations that
>>> automation needs to support is ever increasing and continuously evolving.
>>>
>>> It shouldn't come as a surprise when I say that we've outgrown many of
>>> the
>>> implementation details of the Try Service. Try Syntax itself is over 7
>>> years old and has survived a complete rewrite of automation scheduling,
>>> for
>>> example. Aspects of Try are adversely impacting the ability for
>>> developers
>>> to use Try efficiently and therefore undermining our collective ability
>>> to
>>> get important work done quickly.
>>>
>>> In order to put ourselves in a position to more easily change
>>> implementation details of the Try Service so we may deliver a better
>>> experience for all, we'd like to require the use of `mach try` for Try
>>> submissions. This will ensure there is a single, well-defined, and
>>> easily-controlled mechanism for submitting to Try. This will allow
>>> greater
>>> flexibility and adaptability. It will provide better opportunities for
>>> user
>>> messaging. It will ensure that any new features are seen by everyone
>>> sooner. It will eventually lead to faster results on Try for everyone.
>>>
>>> Bug 1400330 ("require-mach-try") is on file to track requiring `mach try`
>>> to submit to Try.
>>>
>>> The mechanism for submitting to Try has remained relatively stable for
>>> years. `mach try` is relatively new - and I suspect unused by a sizeable
>>> population. This is a potentially highly disruptive transition. That's
>>> why
>>> we're not making it immediately and why we're sending this email today.
>>>
>>> You can mitigate the disruption by using `mach try` before the forced
>>> transition is made and reporting bugs as necessary. Have them block
>>> "require-mach-try" if you need them addressed before the transition or
>>> "mach-try" otherwise. We don't really have a good component for `mach
>>> try`
>>> bugs, so put them in TaskCluster :: Task Configuration for now and chain
>>> them to a tracking bug for visibility.
>>>
>>> FAQ
>>>
>>> Q: When will the transition be made?
>>> A: When we feel `mach try` is usable for important workflows (as judged
>>> by
>>> blockers on "require-mach-try"). Also, probably not before Firefox 57
>>> ships
>>> because we don't want to interfere with that release.
>>>
>>> Q: What about old changesets?
>>> A: You will still be able to submit to Try using the current/legacy
>>> mechanism for old changesets. There will be a "flag day

Re: Intent to require `mach try` for submitting to Try

2017-09-15 Thread Nicholas Nethercote
Are there docs on how to use `./mach try`? `./mach help try` doesn't give
much detail.

Also, will https://mozilla-releng.net/trychooser/ still work? I'm generally
more of a command line person than a GUI person, but this is one case where
I find the GUI approach far more usable.

Nick

On Sat, Sep 16, 2017 at 8:30 AM, Gregory Szorc  wrote:

> The Try Service ("Try") is a mechanism that allows developers to schedule
> tasks in automation. The main API for that service is "Try Syntax" (e.g.
> "try: -b o -p linux -u xpcshell"). And the transport channel for making
> that API call is a Mercurial changeset's commit message plus a version
> control "push" to the "try" repository on hg.mozilla.org.
>
> As the recent "Reminder on Try usage and infrastructure resources" thread
> details, scaling Firefox automation - and Try in particular - is
> challenging. In addition, the number of workflows and variations that
> automation needs to support is ever increasing and continuously evolving.
>
> It shouldn't come as a surprise when I say that we've outgrown many of the
> implementation details of the Try Service. Try Syntax itself is over 7
> years old and has survived a complete rewrite of automation scheduling, for
> example. Aspects of Try are adversely impacting the ability for developers
> to use Try efficiently and therefore undermining our collective ability to
> get important work done quickly.
>
> In order to put ourselves in a position to more easily change
> implementation details of the Try Service so we may deliver a better
> experience for all, we'd like to require the use of `mach try` for Try
> submissions. This will ensure there is a single, well-defined, and
> easily-controlled mechanism for submitting to Try. This will allow greater
> flexibility and adaptability. It will provide better opportunities for user
> messaging. It will ensure that any new features are seen by everyone
> sooner. It will eventually lead to faster results on Try for everyone.
>
> Bug 1400330 ("require-mach-try") is on file to track requiring `mach try`
> to submit to Try.
>
> The mechanism for submitting to Try has remained relatively stable for
> years. `mach try` is relatively new - and I suspect unused by a sizeable
> population. This is a potentially highly disruptive transition. That's why
> we're not making it immediately and why we're sending this email today.
>
> You can mitigate the disruption by using `mach try` before the forced
> transition is made and reporting bugs as necessary. Have them block
> "require-mach-try" if you need them addressed before the transition or
> "mach-try" otherwise. We don't really have a good component for `mach try`
> bugs, so put them in TaskCluster :: Task Configuration for now and chain
> them to a tracking bug for visibility.
>
> FAQ
>
> Q: When will the transition be made?
> A: When we feel `mach try` is usable for important workflows (as judged by
> blockers on "require-mach-try"). Also, probably not before Firefox 57 ships
> because we don't want to interfere with that release.
>
> Q: What about old changesets?
> A: You will still be able to submit to Try using the current/legacy
> mechanism for old changesets. There will be a "flag day" of sorts on
> mozilla-central after which all Try submissions will require `mach try` or
> nothing will get scheduled.
>
> Q: Will Try Syntax continue to work?
> A: For the foreseeable future, yes. There is a long-term goal to replace
> Try Syntax with something more automatic and less effort - at least for
> most use cases. But there are no definite plans or a timetable to remove
> Try Syntax.
>
> Q: Are there any other major changes planned?
> A: Several. People are hacking on path-based selection, `mach try fuzzy`
> improvements, moz.build-based annotations influencing what gets scheduled,
> not using a traditional Mercurial repository for backing Try, and more.
> Some of these may not be available to legacy Try submission workflows,
> giving you additional reasons to adopt `mach try` sooner.
>
> Q: Should I be worried about this transition negatively impacting my
> workflow?
> A: As long as you file bugs blocking "require-mach-try" to make it known
> why `mach try` doesn't work for you, your voice will be heard and hopefully
> acted on. So as long as you file bugs, you shouldn't need to worry.
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New string types: nsAutoStringN<> and nsAutoCStringN<>

2017-08-22 Thread Nicholas Nethercote
On Wed, Aug 23, 2017 at 2:50 AM, Nicholas Alexander 
wrote:
>
>> I really wish our top-level namespace was "moz", rather than "mozilla".
>
> Can you say why?  Is it "just" shorter?  Is it more pleasant to
> s/std/moz/ and vice versa?

Mostly because it's shorter. It's good for common names to be short. And
the use of "moz" has plenty of precedent, e.g. the MOZ_ prefix is common in
macros.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New string types: nsAutoStringN<> and nsAutoCStringN<>

2017-08-22 Thread Nicholas Nethercote
I really wish our top-level namespace was "moz", rather than "mozilla".

Nick

On Tue, Aug 22, 2017 at 10:31 AM, Eric Rahm  wrote:

> On Mon, Aug 21, 2017 at 8:32 AM, Jonathan Kew  wrote:
>
> >
> > Wouldn't it be more "modern Gecko-ish", though, to drop the "ns" prefix,
> > and perhaps put Auto[C]String into the mozilla namespace?
> >
> >
> As Nick said, renaming all the things is a job for another day :)
>
> Coincidentally I have some pending changes that affect the internal naming
> of all of our strings. Externally (outside of xpcom/string) there will be
> no discernible change but I *could* move everything to the mozilla
> namespace and drop the 'ns' prefix. We could then gradually migrate to the
> new naming scheme externally. I think we'd eventually want to move the
> include path to 'mozilla/String.h' as well if we went this route, in the
> interim we could just have stubs that redirect to the mozilla path.
>
> I'm not sure how much backing that has -- we'd be going from nsString =>
> String which is pretty close to std::string -- it would be interesting to
> get some feedback.
>
> -e
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


New string types: nsAutoStringN<> and nsAutoCStringN<>

2017-08-20 Thread Nicholas Nethercote
Hi,

For a long time we have had types nsAutoString and nsAutoCString, which are
strings with 64 chars of inline storage. They are good for holding short
strings, most often on the stack, because they avoid the need to heap-allocate
a char buffer.

I recently landed patches (bug 1386103) that introduce nsAutoStringN and
nsAutoCStringN. These are like the existing types but their length is a
template parameter. So if you want an nsString with 128 chars of inline
storage, you'd use nsAutoStringN<128>. If you want an nsCString with enough
inline storage to store an nsID you'd use nsAutoCStringN.

nsAutoString and nsAutoCString have been redefined as typedefs for
nsAutoStringN<64> and nsAutoCStringN<64>, respectively.

I have updated the MDN docs appropriately:
https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Guide/Internal_strings
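
The core idea behind these types — a fixed amount of inline storage chosen at
compile time via a template parameter, with a heap fallback for longer data —
can be sketched in standalone C++ roughly like this (a simplified
illustration, not the real nsTString machinery):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Simplified sketch: N chars of inline storage, heap fallback beyond that.
template <std::size_t N>
class InlineString {
 public:
  InlineString() {
    mBuf = mInline;
    mBuf[0] = '\0';
  }
  ~InlineString() {
    if (mBuf != mInline) {
      delete[] mBuf;
    }
  }
  InlineString(const InlineString&) = delete;
  InlineString& operator=(const InlineString&) = delete;

  void Assign(const char* aStr) {
    std::size_t len = std::strlen(aStr);
    if (len + 1 > mCapacity) {
      if (mBuf != mInline) {
        delete[] mBuf;
      }
      mBuf = new char[len + 1];  // heap allocation only when needed
      mCapacity = len + 1;
    }
    std::memcpy(mBuf, aStr, len + 1);
    mLen = len;
  }

  const char* get() const { return mBuf; }
  std::size_t Length() const { return mLen; }
  bool IsInline() const { return mBuf == mInline; }

 private:
  char mInline[N];
  char* mBuf = nullptr;
  std::size_t mCapacity = N;
  std::size_t mLen = 0;
};

// Analogue of the typedefs described above: 64 chars of inline storage.
using AutoCString = InlineString<64>;
```

With this shape, nsAutoString and nsAutoCString naturally become typedefs for
the N = 64 instantiation, as described above.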

Nick

p.s. The "Auto" in these names is confusing. "Auto" in Mozilla code usually
refers to an RAII wrapper type such as AutoPtr or AutoLock. nsInlineString and
nsInlineCString would be better names for these types... but that's a change
for another day.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: ns*String vs. printf()

2017-08-14 Thread Nicholas Nethercote
There's documentation about this here:
https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Guide/Internal_strings#printf_and_a_UTF-16_string
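The gist of that documentation is: printf's %s needs a null-terminated char*
buffer, so you print an 8-bit string via its get() accessor (converting UTF-16
to UTF-8 first for wide strings). With a toy stand-in for an 8-bit string
class (not the real nsCString), the shape is:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Toy stand-in for an 8-bit string class with a get() accessor, as nsCString
// has; NOT the real Gecko class.
class ToyCString {
 public:
  explicit ToyCString(const char* aStr) : mData(aStr) {}
  const char* get() const { return mData.c_str(); }  // null-terminated buffer
 private:
  std::string mData;
};

// The printf pattern: always go through get() for a null-terminated char*.
int PrintIt(const ToyCString& aStr) {
  return std::printf("%s\n", aStr.get());
}
```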

Nick

On Tue, Aug 15, 2017 at 3:28 PM, Enrico Weigelt, metux IT consult <
enrico.weig...@gr13.net> wrote:

> Hi folks,
>
> what's the best way to print out ns*String classes via printf ?
> Using ToNewUTF8String or converting to nsCString to use it's get(),
> doesn't feel like the best approach ...
>
>
> --mtx
>
> --
>
> mit freundlichen Grüßen
> --
> Enrico, Sohn von Wilfried, a.d.F. Weigelt,
> metux IT consulting
> +49-151-27565287
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Changing .idl files

2017-08-07 Thread Nicholas Nethercote
On Tue, Aug 8, 2017 at 7:12 AM, Boris Zbarsky  wrote:

>
> So if right now we land a patch that breaks addons, and a nightly user
> updates, they get a broken browser and have to try to figure out whether
> it's because we broke an addon (and this may not be the first thing they
> think of) or because we introduced a bug that they should report.
>

I switched from Nightly to Release on my main work machine after
yesterday's Nightly update broke both Tree Style Tab and ChatZilla. I heard
that Tree Style Tab was fixed again shortly afterward.

I also heard that both of these may have been caused by disabling e10s
shims, or something like that, but I don't know if it's correct.


> So I strongly feel that to avoid wasting the time and effort of our
> nightly users we should not start landing addon-breaking changes (or at
> least ones that might cause exceptions in addons that break various browser
> functionality) until after we have disabled addons.
>

Is there going to be a clear point in time when legacy extensions stop
working in Nightly? I have the impression it'll be more of a slow
degradation.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: 64-bit Firefox progress report: 2017-07-18

2017-08-07 Thread Nicholas Nethercote
I think the 2GB "requirement" from Microsoft should be ignored, because
plenty of our users are ignoring it.

Nick

On Mon, Aug 7, 2017 at 5:51 PM, Chris Peterson 
wrote:

> On 2017-08-06 11:26 PM, Henri Sivonen wrote:
>
>> On Thu, Jul 20, 2017 at 10:42 AM, Chris Peterson
>> wrote:
>>
>>> Users with only 2 GB and 5 minute browser sessions would probably have a
>>> faster user experience with 32-bit Firefox than with 64-bit, but how do
>>> we
>>> weigh that experience versus the security benefits of ASLR?
>>>
>> Not giving users a security mechanism due to a non-obvious reason
>> feels bad. Furthermore, considering that Microsoft documents 2 GB as a
>> "requirement" for 64-bit Windows, is it really worthwhile for us to
>> treat three Windows pointer size combinations (32-bit on 32-bit,
>> 64-bit on 64-bit and 32-bit on 64-bit) as fully supported when one of
>> the combinations is in contradiction with the OS vendor's stated
>> requirements?
>>
>> Do we have any metrics on whether 32-bit on 64-bit exhibits bugs that
>> 32-bit on 32-bit and 64-bit on 64-bit don't? That is, what kind of bug
>> burden are we keeping by catering to users who've installed 64-bit
>> Windows with less than 2 GB of RAM in contradiction with what
>> Microsoft states as a requirement?
>>
>
> That's a fair question. 32-bit applications can only access 2 GB of
> virtual address space on Win32 OS, but can access 4 GB on Win64 OS. So in
> theory, some 32-bit pointer bugs could manifest differently on Win64 and
> Win32.
>
> Do we test 32-bit Firefox on Win32 or Win64 today? I know we build 32-bit
> Firefox on Win64. Since more people will run 32-bit Firefox on Win32 than
> on Win64, we should probably test on Win32 or at least test on Win64
> configured to only allow Firefox access to 2 GB of virtual address space.
>
> In our experiments with Win64 OS users, users with 2 GB or less had
> slightly worse retention and crash rates when running 64-bit Firefox than
> 32-bit Firefox.
>
> About 8% of Win64 users in our experiment had 2 GB or less, so we are
> talking about choosing a worse user experience for a fair number of people.
> (We didn't break out how many users had strictly less than 2 GB.) 64-bit
> Chrome's minimum memory requirement is 4 GB, so Google has similarly
> decided that supporting 32-bit on Win64 is worth the trouble.
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Stack walking in Gecko

2017-08-02 Thread Nicholas Nethercote
Hi,

We have multiple ways of getting stack traces, and it is hard to keep track of
them all. I wrote down some notes that reflect my understanding, and I thought
I'd share them in case they're useful, and in case people have ideas on how to
improve things.

Nick


-
mozglue/misc/StackWalk.{h,cpp}
-
The main stackwalk function is MozStackWalk:

> MFBT_API bool
> MozStackWalk(MozWalkStackCallback aCallback,
>  uint32_t aSkipFrames,
>  uint32_t aMaxFrames,
>  void* aClosure);

Here's how it is implemented on different platforms.

- Win32   Custom code using StackWalk64()
- Win64   Custom code using RtlVirtualUnwind()
- Linux32 FramePointerStackWalk?[1]
- Linux64 _Unwind_Backtrace
- Mac32?  FramePointerStackWalk?[1]
- Mac64   _Unwind_Backtrace[2]
- Android _Unwind_Backtrace?
- Other   (unimplemented, mostly)

[1] FramePointerStackWalk() is a straightforward FP-chasing unwind. It is a
public function and so can be called directly, too.

[2] _Unwind_Backtrace is really slow on some recent Mac versions, alas.
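
To make [1] concrete: each stack frame built with frame pointers holds the
saved caller frame pointer followed by the return address, and the walker
simply follows that chain. Here is a standalone simulation of the technique
using heap-independent frame records (an illustration of FP chasing, not the
actual FramePointerStackWalk code):

```cpp
#include <cassert>
#include <cstddef>

// A frame record as it sits on the stack when frame pointers are enabled:
// the saved caller FP, followed by the return address.
struct FrameRecord {
  FrameRecord* mCallerFp;  // what a real walker reads at *fp
  void* mReturnAddr;       // what it reads at *(fp + 1)
};

// Follow the FP chain, invoking a callback per frame, MozStackWalk-style.
template <typename Callback>
void WalkFrames(FrameRecord* aFp, std::size_t aMaxFrames, Callback&& aCb) {
  std::size_t n = 0;
  while (aFp && n < aMaxFrames) {
    aCb(aFp->mReturnAddr);
    aFp = aFp->mCallerFp;  // hop to the caller's frame record
    ++n;
  }
}
```

This is also why FP-based walking trivially works off-thread: given another
thread's saved register state, the chain can be followed without that thread's
cooperation.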

On Windows there is also MozStackWalkThread(), which differs from
MozStackWalk() by being able to walk the stack of a different thread.

-
Gecko Profiler
-
The Gecko Profiler has a couple of other stack walkers used on some
platforms.

- LUL: Linux-only stack walker that uses DWARF info to unwind.

- EHABI: Android-only stack walker that uses the ARM exception handling ABI.

-
Places where stack walking is used.
-
Lots of places use MozStackWalk() for all platforms: telemetry,
ah_crap_handler, HangMonitor, BlockingResourceBase, TlsAllocationTracker.cpp,
nsTraceRefCnt.cpp, etc. These are all places where the number of stack walks
performed is small, so speed isn't that important.

The Gecko Profiler and DMD use a mixture of stack walkers, as per this
table.

         Gecko Profiler            DMD
         --------------            ---
Win32    FramePointerStackWalk     FramePointerStackWalk
Win64    MozStackWalk              MozStackWalk
Linux32  LUL                       MozStackWalk
Linux64  LUL                       MozStackWalk
Mac32?   FramePointerStackWalk[1]  FramePointerStackWalk[1]
Mac64    FramePointerStackWalk[1]  FramePointerStackWalk[1]
Android  EHABI                     MozStackWalk

[1] requires MOZ_PROFILING to be set so that frame pointers are used.

The Gecko Profiler and DMD do *lots* of stack walks, so speed is important for
them.

Note that the Gecko Profiler needs to stackwalk threads other than the
current one.

- Windows' MozStackWalkThread() allows that by naming the thread.

- FramePointerStackWalk() allows that, because you just give it the
  to-be-walked thread's registers.

- _Unwind_Backtrace() doesn't allow it.

HangMonitor also requires off-thread stackwalking, but it's Windows-only
(perhaps exactly because it requires off-thread stackwalking). It could
potentially use profiler_suspend_and_sample() instead (see next paragraph).

Finally, the Gecko Profiler also has a public function
profiler_suspend_and_sample_thread() that walks the stack of another thread
(using the abovementioned stackwalkers), and interleaves that with
pseudostacks and JS stacks. BHR uses this function.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Extensions and Gecko specific APIs

2017-07-27 Thread Nicholas Nethercote
FWIW, I share Steve's broad concerns here. Mozilla's track record on
extension APIs has had many dead-ends and changes of direction. Now that
we're wiping the slate clean, it would be good to not repeat history.

Nick

On Fri, Jul 28, 2017 at 3:02 AM, Steve Fink  wrote:

> On 07/26/2017 10:45 PM, Andrew Swan wrote:
>
>>
>> On Wed, Jul 26, 2017 at 4:27 PM, Steve Fink <sf...@mozilla.com> wrote:
>>
>> This thread worries me greatly. Somebody tell me we have a plan
>> and policy around this already. Please?
>>
>>
>> We might, but I'm not sure what "this" you're concerned about.  Whether
>> API names should be prefixed?  Or whether we should expose things at all
>> that are unique to gecko/firefox to extensions?  There are a whole bunch of
>> things that get considered when new extension APIs are proposed including
>> safety, maintainability, performance, and yes, cross-browser compatibility.
>>
>
> "this" == exposing Gecko-specific functionality, or rather, what
> Gecko-specific functionality to expose and how to expose it in general.
> With emphasis on the how. The prefixing decision (answer: no) and
> do-it-at-all decision (answer: yes) are part of that.
>
> Unfortunately, there isn't anything written that explains actual criteria
>> in detail (its on our radar but somewhere behind a long list of engineering
>> tasks on the short-term priority list).
>>
>
> And I guess the parenthetical clause is what worries me. The people
> churning through that workload should be churning through that workload,
> and it's fine that they aren't spending time and mental space on the
> theoretical concerns of future compatibility issues or addon developer
> relations. But this is kind of a big deal for Mozilla strategically, so I
> would expect someone else to be working on the strategic plan before we
> reach the foot-shooting point.
>
> Hopefully, that someone would be in close contact with the engineers doing
> the work, since they have the best context and familiarity with large parts
> of the problem space, and hence their opinions deserve a lot of weight. As
> long as the consultation doesn't get in the way of getting stuff done.
> There's a ton of weight on you people's shoulders, and we don't want to add
> more.
>
> One person can do both strategy and tactics (or implementation) just fine,
> but it's usually not a good idea to do them at the same time. Different
> mindset, different tradeoffs.
>
>
>> My individual opinion is that something being unique to gecko or firefox
>> should not disqualify it from being exposed to extensions.  The webcompat
>> analogy doesn't really work here, the principle that the web should be open
>> and interoperable demands rigor in what gets exposed to content.  But a
>> browser extension isn't a web page, it is part of the browser itself, and
>> different browsers are inherently ... different.  They have different
>> features, different user interfaces, etc.  The fact that browser extensions
>> are built with web technology and that they modify or extend the very thing
>> that displays web content obscures this distinction, but it does make a big
>> difference.
>>
>
> I agree. But it isn't completely different from webcompat, either. We have
> used up much of developer's tolerance for breaking changes, so we really
> want to try hard to minimize changes that are going to break addons. (And
> minimize the pain of such breakage -- if we have a mechanism allowing us to
> easily identify the addons that will be affected, and provide a warning and
> documentation of the change in advance, we can probably get away with quite
> a bit.)
>
> Anyway, containers is a good example of something that we've exposed to
>> extensions that isn't likely to be supported in other browsers any time
>> soon.  Nevertheless, we need to design APIs in a way that doesn't
>> compromise on the other areas mentioned above: maintainability, safety,
>> performance.  And, to the extent we can, we should design APIs that could
>> be adopted by other browsers if they choose to.
>>
>
> Sure, and in my mind, that's the sort of tactical decisionmaking that
> *should* be done in the context of implementation. Which is different from
> the overall strategy of prefixing / opt-in imports / signing / versioning.
>
> Given our position, it's a bold move that says we're willing to
>> take the painful hit of pissing off addon authors and users
>> because we truly believe it is necessary to produce a top-quality
>> product.
>>
>>
>> There are certainly outraged addon authors out there but for what its
>> worth, we're also already getting a good amount of positive feedback from
>> both addon authors and users.
>>
>
> Sorry, don't take my ranting to imply that I'm somehow unhappy with the
> work you people are doing. To the contrary, it all seems to be going
> stunningly well, which is much to the credit of your team.
>
>
>> That's my argument for why the default 

Re: Announcing MozillaBuild 3.0 Release

2017-07-24 Thread Nicholas Nethercote
This thread is nominally about a new version of MozillaBuild 3.0, though it
has strayed significantly. I'm going to politely suggest that discussion of
other topics in this thread cease. If anybody wants to start other threads
on other topics, please do so.

But also be aware that this mailing list is read by a lot of people
(hundreds at least, I think) and it isn't really intended as a place for
new contributors to ask questions about anything and everything. In
contrast, the #introduction IRC channel is specifically designed for that.

Nick

On Tue, Jul 25, 2017 at 1:36 PM, Mike Hommey  wrote:

> On Tue, Jul 25, 2017 at 01:07:19AM +, Enrico Weigelt, metux IT consult
> wrote:
> > On 25.07.2017 00:34, Mike Hommey wrote:
> >
> > > Debian has 52esr.
> >
> > Maybe in experimental. But that doesn't help me much for jessie.
>
> No, it's in testing, and should reach stable at some point, and I guess
> the Debian LTS people will pull it for jessie, because it's already in
> wheezy.
>
> > > > And it still contains lots of dead weight, I'd like to get rid of.
> > >
> > > Which is out of scope for packaging something for Debian.
> >
> > Maybe for average Debian folks, who also have no problem w/ forcing
> > people into some special init system that wants to be an own OS.
>
> Oh, so you're also trolling, now.
>
> Mike
>


Re: More Rust code

2017-07-13 Thread Nicholas Nethercote
On Tue, Jul 11, 2017 at 11:03 AM, Nicholas Nethercote <
n.netherc...@gmail.com> wrote:

>
>
>> The first thing comes to my mind is crash reports. It currently doesn't
>> always include useful panic message from Rust, see for example [1] and [2].
>>
>
> I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1379857 for this.
> It's blocking https://bugzilla.mozilla.org/show_bug.cgi?id=1348896, which
> is the Rust crash tracking bug.
>

jryans just landed a fix for bug 1379857. Turns out that Rust panic
messages weren't being captured by crash reports in content processes.

Nick


Re: More Rust code

2017-07-11 Thread Nicholas Nethercote
On Tue, Jul 11, 2017 at 11:15 AM, Bobby Holley 
wrote:

> If I were the owner of that module I would consider implementing a policy
>> something like the following:
>>
>> "When a person writes a new compiled-code component, or majorly rewrites
>> an existing one, they should strongly consider writing it in Rust instead
>> of C or C++. Such a decision will depend on the nature of the component.
>> Components that are relatively standalone and have simple interfaces to
>> other components are good candidates to be written in Rust, as are
>> components such as parsers that process untrusted input."
>>
>
> I think this is pretty uncontroversial. The high-level strategic decision
> to bet on Rust has already been made, and the cost of depending on the
> language is already sunk. Now that we're past that point, I haven't heard
> anyone arguing why we shouldn't opt for memory safety when writing new
> standalone code. If there are people out there who disagree and think they
> have the arguments/clout/allies to make the case, please speak up.
>

As Gibson said: "The future is already here — it's just not very evenly
distributed."

The paragraph you wrote is obvious to you, but I suspect it's not obvious
to everyone -- Mozilla has a lot of contributors. There may still be some
Firefox developers who think that Rust is something that other people do, and
that isn't relevant to them or their component(s). There may be some
Firefox developers who are interested in Rust, but feel intimidated or
uncertain about using it. There may be some developers who are keen to use
Rust, but haven't realized that we are past the experimental stage.

Nick


Re: More Rust code

2017-07-10 Thread Nicholas Nethercote
On Tue, Jul 11, 2017 at 9:48 AM, Xidorn Quan  wrote:

> Tooling around debug info in Rust code still isn't great.
>

The good news is that tooling issues are generally among the easier ones to
solve. Even the existing Rust code that is present in Firefox will require
at least some of these things to be fixed.


> The first thing comes to my mind is crash reports. It currently doesn't
> always include useful panic message from Rust, see for example [1] and [2].
>

I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1379857 for this. It's
blocking https://bugzilla.mozilla.org/show_bug.cgi?id=1348896, which is the
Rust crash tracking bug.

For that, I just did an experiment. In my local repo, when I remove all
> linking-related files from objdir/toolkit/library (to do a fresh linking),
> it takes ~1.7min for linking.
>
> When I touch one cpp file (in my experiment, layout/style/nsCSSAnonBoxes.cpp
> which is a reasonably small cpp file I happened to have open), it takes 6s to
> link, and size of xul.ilk (Incremental Linker File) increases by 856KB.
>
> When I touch one rs file (in my experiment,
> servo/components/gecko_bindings/sugar/style_complex_color.rs, which is a
> reasonably small rs file I happen to be familiar with), it takes ~2min to
> link (even longer than the
> fresh linking!) in addition to the compile time, and size of xul.ilk
> increases by 27.44MB. And after that, when I touch that cpp file again, it
> takes ~2.1min to link and size of xul.ilk increases by 54.64MB this time.
>
> I suspect the more times you touch rs files, the longer linking would
> take, and I guess this is because we link all Rust code into a single
> library, and then static link it into xul.dll, which makes the linker do
> lots of wasteful work. There could also be some linker bug involved, though.
>
> To mitigate this, we can probably avoid incremental linking when Rust code
> is touched. I also wonder if it is possible to avoid linking the whole
> gkrust into xul.dll at all and see if that makes things better. It would
> also be good to see if Rust developers have any advice for this situation.
>

Would you mind filing a bug report for this issue?

I've posted a link to this thread on the Rust Internals board:
https://internals.rust-lang.org/t/dev-platform-discussion-on-getting-more-rust-code-into-firefox/5523

Nick


Re: More Rust code

2017-07-10 Thread Nicholas Nethercote
On Tue, Jul 11, 2017 at 2:33 AM, Bobby Holley  wrote:

>
> In other words, I think we should encourage people to consider Rust for
> new code, but also make sure that we have a thoughtful and informed
> discussion about whether a use-case makes sense and how best to support it.
>

Definitely! I deliberately described my "Rust should be considered /
preferred / mandated" goal as "provocative" :)  We should think big, and
then put engineering effort in where it makes sense and will have strong
benefits.


> Right now we have a module that's intended to govern decisions on these
> issues [1]. However, the peer list for that module seems oriented around
> C++ and build system expertise, and is sparse on people who have been
> deeply involved in the Rust integration efforts over the past year.
>
> So we could expand that group with expertise to tackle Rust integration
> issues. Or it could charter a committee of such experts to do so. Or some
> combination. Either way, I think we want some amount of steering here, and
> I can think of a handful of names whose input is likely required to get it
> right.
>

That module's name is "C++/Rust usage, tools, and style", and its
description is "Aspects of C++ use such as language feature usage, standard
library versions/usage, compiler/toolchain versions, formatting and naming
style, and aspects of Rust use as needs arise"

To me that feels orthogonal to the notion of encouraging the adoption of
more Rust components in Firefox. That kind of encouragement feels more
project-wide, and so IMO would fall under the "mozilla-toplevel" component.
IIRC Brendan resigned ownership of that module a while back but he's still
listed as the owner at https://wiki.mozilla.org/Modules/All#mozilla-toplevel.
So that leaves things in an uncertain state.

If I were the owner of that module I would consider implementing a policy
something like the following:

"When a person writes a new compiled-code component, or majorly rewrites an
existing one, they should strongly consider writing it in Rust instead of C
or C++. Such a decision will depend on the nature of the component.
Components that are relatively standalone and have simple interfaces to
other components are good candidates to be written in Rust, as are
components such as parsers that process untrusted input."

It could do with some tweaking and feedback, but in principle I don't see
why such a policy couldn't be implemented right now. I don't know where
this policy would be written down, though!

Nick


Re: More Rust code

2017-07-10 Thread Nicholas Nethercote
On Mon, Jul 10, 2017 at 11:41 PM, smaug  wrote:

>
> - Interop with existing components can be difficult. IPDL codegen rust
>> bindings could be a big help.
>>
>
> ipdl? or do you mean idl? or perhaps webidl?
>

bkelly suggested that item to me on Twitter :)  Probably all three of those
are relevant!

Nick


More Rust code

2017-07-10 Thread Nicholas Nethercote
Hi,

Firefox now has multiple Rust components, and it's on track to get a bunch
more. See https://wiki.mozilla.org/Oxidation for details.

I think this is an excellent trend, and I've been thinking about how to
accelerate it. Here's a provocative goal worth considering: "when writing a
new compiled-code component, or majorly rewriting an existing one, Rust
should be considered / preferred / mandated."

What are the obstacles? Here are some that I've heard.

- Lack of Rust expertise for both writing and reviewing code. We have some
pockets of expertise, but these need to be expanded greatly. I've heard
that there has been some Rust training in the Paris and Toronto offices.
Would training in other offices (esp. MV and SF, given their size) be a
good idea? What about remoties?

- ARM/Android is not yet a Tier-1 platform for Rust. See
https://forge.rust-lang.org/platform-support.html and
https://internals.rust-lang.org/t/arm-android-to-tier-1/5227 for some
details.

- Interop with existing components can be difficult. IPDL codegen rust
bindings could be a big help.

- Compile times are high, especially for optimized builds.

Anything else?

Nick


Re: Changing .idl files

2017-06-20 Thread Nicholas Nethercote
I just found https://bugzilla.mozilla.org/show_bug.cgi?id=1347507: "Stuff
we can remove when XPCOM extensions are no longer supported". A number of
the blocking bugs are for removing XPIDL interfaces. If you know of other
things that can be removed, please add them as blocking bugs.

Nick

On Wed, Jun 14, 2017 at 10:40 AM, Nicholas Nethercote <
n.netherc...@gmail.com> wrote:

> Hi,
>
> What must be considered when changing an XPIDL interface in a .idl file?
> As far as I know, it's the following.
>
> (1) Update browser C++ code that uses the interface. This is easy because
> the compiler will tell you the parts that need changing.
>
> (2) Update browser JS code that uses the interface. This is harder because
> it requires using grep or a similar tool to find all the occurrences. But
> it's usually not too bad.
>
> (3) Do extensions use it? If so, changing it probably isn't possible. This
> can be imperfectly determined by searching through addons/ in DXR.
>
> (4) Does Thunderbird use it? This is no longer a hard constraint, but is
> something to consider.
>
> Once Firefox 57 disallows all extensions other than WebExtensions, I think
> (3) will no longer be relevant. In which case, once 57 is under development
> (August ~7th) it will be *much* easier to change XPIDL interfaces.
>
> (Although, if DevTools are moved to their own repository, that repo will
> have to be checked as well?)
>
> Is all that correct? Have I missed anything? Thanks.
>
> Nick
>
>


Changing .idl files

2017-06-13 Thread Nicholas Nethercote
Hi,

What must be considered when changing an XPIDL interface in a .idl file? As
far as I know, it's the following.

(1) Update browser C++ code that uses the interface. This is easy because
the compiler will tell you the parts that need changing.

(2) Update browser JS code that uses the interface. This is harder because
it requires using grep or a similar tool to find all the occurrences. But
it's usually not too bad.

(3) Do extensions use it? If so, changing it probably isn't possible. This
can be imperfectly determined by searching through addons/ in DXR.

(4) Does Thunderbird use it? This is no longer a hard constraint, but is
something to consider.

Once Firefox 57 disallows all extensions other than WebExtensions, I think
(3) will no longer be relevant. In which case, once 57 is under development
(August ~7th) it will be *much* easier to change XPIDL interfaces.

(Although, if DevTools are moved to their own repository, that repo will
have to be checked as well?)

Is all that correct? Have I missed anything? Thanks.

Nick


Re: Is it OK to make allocations that intentionally aren't freed? (was: Re: Is Big5 form submission fast enough?)

2017-05-21 Thread Nicholas Nethercote
On Mon, May 22, 2017 at 10:06 AM, Andrew McCreight 
wrote:

>
> NS_FREE_PERMANENT_DATA.
>

That's it! (Thank you mccr8.)

Please use that one. Look at the existing uses for ideas.

Nick


Re: Is it OK to make allocations that intentionally aren't freed? (was: Re: Is Big5 form submission fast enough?)

2017-05-19 Thread Nicholas Nethercote
There's also a pre-processor constant that we define in Valgrind/ASAN/etc.
builds that you can check in order to free more stuff than you otherwise
would. But I can't for the life of me remember what it's called :(

Nick

On Sat, May 20, 2017 at 5:09 AM, Jeff Muizelaar 
wrote:

> We use functions like cairo_debug_reset_static_data() on shutdown to
> handle cases like this.
>
> -Jeff
>
> On Fri, May 19, 2017 at 1:44 AM, Henri Sivonen 
> wrote:
> > On Tue, May 16, 2017 at 7:03 AM, Tim Guan-tin Chien
> >  wrote:
> >> According to Alexa top 100 Taiwan sites and quick spot checks, I can
> >> only see the following two sites encoded in Big5:
> >>
> >> http://www.ruten.com.tw/
> >> https://www.momoshop.com.tw/
> >>
> >> Both are shopping sites (eBay-like and Amazon-like) so you get the idea
> >> how forms are used there.
> >
> > Thank you. It seems to me that encoder performance doesn't really
> > matter for sites like these, since the number of characters one would
> > enter in the search field at a time is very small.
> >
> >> Mike reminded me to check the Tax filing website:
> >> http://www.tax.nat.gov.tw/. Yes, it's unfortunately also in Big5.
> >
> > I guess I'm not going to try filing taxes there for testing. :-)
> >
> > - -
> >
> > One option I've been thinking about is computing an encode
> > acceleration table for JIS X 0208 on the first attempt to encode a CJK
> > Unified Ideograph in any of Shift_JIS, EUC-JP or ISO-2022-JP, for GBK
> > on the first attempt to encode a CJK Unified Ideograph in either GBK
> > or gb18030, and for Big5 on the first attempt to encode a CJK Unified
> > Ideograph in Big5.
> >
> > Each of the three tables would then remain allocated through to the
> > termination of the process.
> >
> > This would have the advantage of not bloating our binary footprint
> > with data that can be computed from other data in the binary while
> > still making legacy Chinese and Japanese encode fast without a setup
> > cost for each encoder instance.
> >
> > The downsides would be that the memory for the tables wouldn't be
> > reclaimed if the tables aren't needed anymore (the browser can't
> > predict the future) and executions where any of the tables has been
> > created wouldn't be valgrind-clean. Also, in the multi-process world,
> > the tables would be recomputed per-process. OTOH, if we shut down
> > rendered processes from time to time, it would work as a coarse
> > mechanism to reclaim the memory is case Japanese or Chinese legacy
> > encode is a relatively isolated event in the user's browsing pattern.
> >
> > Creating a mechanism for the encoding library to become aware of
> > application shutdown just in order to be valgrind-clean would be
> > messy, though. (Currently, we have shutdown bugs where uconv gets used
> > after we've told it can shut down. I'd really want to avoid
> > re-introducing that class of bugs with encoding_rs.)
> >
> > Is it OK to create allocations that are intentionally never freed
> > (i.e. process termination is what "frees" them)? Is valgrind's message
> > suppression mechanism granular enough to suppress three allocations
> > from a particular Rust crate statically linked into libxul?
> >
> > --
> > Henri Sivonen
> > hsivo...@hsivonen.fi
> > https://hsivonen.fi/
>


Re: Volunteer maintainer wanted: mac x86

2017-05-18 Thread Nicholas Nethercote
On Wed, May 17, 2017 at 5:28 PM, Nicholas Nethercote <n.netherc...@gmail.com
> wrote:

> https://developer.mozilla.org/en/docs/Supported_build_configurations
> still lists x86/Mac as a tier 1 platform.
>
> Should it be moved to tier 3, or removed altogether?
>

If anyone was wondering... I was about to remove it altogether from that
page, but I see that Ralph got in ahead of me. Thanks, Ralph!

Nick


Re: Volunteer maintainer wanted: mac x86

2017-05-17 Thread Nicholas Nethercote
https://developer.mozilla.org/en/docs/Supported_build_configurations still
lists x86/Mac as a tier 1 platform.

Should it be moved to tier 3, or removed altogether?

Nick

On Thu, Dec 1, 2016 at 2:29 AM, Benjamin Smedberg 
wrote:

> As of Firefox 53, we are intending to switch Firefox on mac from a
> universal x86/x86-64 build to a single-architecture x86-64 build.
>
> To simplify the build system and enable other optimizations, we are
> planning on removing support for universal mac build from the Mozilla build
> system.
>
> The Mozilla build and test infrastructure will only be testing the x86-64
> codepaths on mac. However, we are willing to keep the x86 build
> configuration supported as a community-supported (tier 3) build
> configuration, if there is somebody willing to step forward and volunteer
> as the maintainer for the port. The maintainer's responsibility is to
> periodically build the tree and make sure it continues to run.
>
> Please contact me directly (not on the list) if you are interested in
> volunteering. If I do not hear from a volunteer by 23-December, the Mozilla
> project will consider the Mac-x86 build officially unmaintained.
>
> --BDS
>


Re: Removing Jemalloc 4

2017-05-16 Thread Nicholas Nethercote
On Wed, May 17, 2017 at 4:28 AM, Steve Fink  wrote:

>
> But I'm curious if you know anything about the intended future directions
> of jemalloc, if there are any, and whether they align with anything we need.


As far as I know the short answer is "whatever Facebook needs", because
jemalloc's author works for Facebook and Facebook is the primary consumer.

Nick


Re: Removing Jemalloc 4

2017-05-15 Thread Nicholas Nethercote
Just to add some context: glandium is deeply familiar with jemalloc4's
internals, having submitted numerous patches and fixes to it. And he has
spent *significant* time and effort on multiple occasions, on multiple
versions of jemalloc4, trying to avoid the performance regressions, without
success. If he can't get it into a state suitable for our use, I have
trouble imagining who else could.

In other words, this is not a hasty or premature decision, but one made
reluctantly based on experience.

Nick

On Tue, May 16, 2017 at 10:14 AM, Mike Hommey  wrote:

> Hi,
>
> We've tried to get off mozjemalloc for, apparently, close to 5 years
> (date of the filing of bug 762449). We've had memory usage regressions
> (like bug 1219914), and we've had perf regressions as per talos numbers
> (things like bug 1138999), and those have never gone away (with
> variations with each update of jemalloc >= 3).
>
> My best bet at this point is that it's "just" a consequence of the
> difference in memory layout due to the differences in how the allocators
> work. That doesn't make it more okay.
>
> Many updates to recent versions of jemalloc have been painful (usually
> breaking everything except linux), and the current tip of the jemalloc
> dev branch is not making things any better (see bug 1357301).
>
> Furthermore, bug 1361258 has started to modify mozjemalloc with new
> features for stylo, further deepening the API surface between both
> allocators. While what was added in bug 1361258 is possible to implement
> for jemalloc 4, the burden of continuing to maintain both allocators is
> not really appealing considering the perspective of ever switching.
>
> As much as it pains me, it's time to admit that switching to jemalloc >=
> 3 is not going to happen, and pull the plug once and for all. This
> decision has taken way too long to be made, and I apologize for that.
>
> This will happen with the landing of bug 1363992 in a few hours.
>
> As for the things we were hoping jemalloc >= 3 would allow us to do
> (essentially, heap partitioning), we'll be working on getting that on
> mozjemalloc shortly (bug 1361258 and followups will actually get us
> close on the necessary infrastructure), as well as cleaning up its code
> (I have a local patch queue that removes 15% of jemalloc.c), and
> probably converting it to C++ (at least for some RAII benefits, and
> converting some of the gory macros to templates)
>
> Mike.
>


Re: Using references vs. pointers in C++ code

2017-05-09 Thread Nicholas Nethercote
On Wed, May 10, 2017 at 10:27 AM, Karl Tomlinson  wrote:

>
> Or is NotNull really too awkward IRL?
>

I wrote NotNull.h, and I've used it in various places. I'm ambivalent about
it. It does make things clear, but it also is a bit annoying to use. The
code tends to end up with WrapNotNull() and get() calls littered
throughout. It also doesn't currently work with UniquePtr.

The two big comments in mfbt/NotNull.h have some discussion about use cases
for references and pointers and NotNull that is relevant to this thread:
https://dxr.mozilla.org/mozilla-central/rev/b21b974d60d3075ae24f6fb1bae75d0f122f28fc/mfbt/NotNull.h#10-99

I also fiddled around with a MaybeNull type, to explicitly mark pointers
that are nullable, but it was more complex and less obviously useful than
NotNull, so I gave up on it.

Nick


Re: Quantum Flow Engineering Newsletter #6

2017-04-21 Thread Nicholas Nethercote
Judging from the incoming flow of bug reports, the number of people using
the Gecko Profiler has increased in the last week or two. I take this as a
good sign that it's being used increasingly heavily for Quantum Flow work.

Nick

On Fri, Apr 21, 2017 at 4:25 PM, Ehsan Akhgari 
wrote:

> Hi everyone,
>
> I would like to share some updates about some of the ongoing performance
> related work.
>
> We have started looking at the native stack traces that are submitted
> through telemetry from the Background Hang Reports that take more than 8
> seconds.  (We were hoping to have been able to reduce this threshold to
> 256ms  for a while
> now, but the road has been bumpy -- but this should land really soon now!)
> Michael Layzell put together a telemetry analysis job that creates a
> symbolicated version of this data here:
> https://people-mozilla.org/~mlayzell/bhr/. For example, this is the latest
> generated report.  The grouping of this data is unfortunate, since the data
> is collected based on the profiler pseudo-stack labels, which is captured
> after 128ms, and then native stack (if the hang continues for 8 seconds)
> gets captured after that, so the pseudo-stack and the native stack may or
> may not correspond, and this grouping also doesn't help going through the
> list of native stacks and triage them more effectively.  Work is under way
> to create a nice dashboard out of this data,
> but in the mean time this is an area where we could really use all of the
> help that we can get.  If you have some time, it would be really nice if
> you can take a look at this data and see if you can make sense of some of
> these call stacks and find some useful bug reports out of them.  If you do
> end up filing bugs, these are super important bugs to work on, so please
> make sure you add "[qf]" to the status whiteboard so that we can track the
> bug.
>
> Another item worthy of highlight is Mike Conley's Oh No! Reflow! add-on
> .  Don't let the simple web
> page behind this link deceive you, this add-on is really awesome!  It
> generates a beep every time that a long running reflow happens in the
> browser UI (which, of course, you get to turn off when you don't need to
> hunt for bugs!), and it logs the sync reflows that happened alongside the
> JS call stack to the code that triggered them, and it also gives you a
> single link that allows you to quickly file a bug with all of the right
> info in it, pre-filled!  In fact you can see the list of already filed
> bugs through this add-on!
>
> Another issue that I want to bring up is the [qf:p1] bugs.  As you have
> noticed, there are a lot of them.  :-)  It is possible that some of these
> bugs aren't important to work on, for example because they only affect edge
> case conditions that affects a super small subset of users and that wasn't
> obvious when the bug was triaged.  In some other cases it may turn out that
> fixing the bug requires massive amounts of work that is unreasonable to do
> in the amount of time we have, or that the right people for it are doing
> more important work and can't be interrupted, and so on.  Whatever the
> issue is, whether the bug was mis-triaged, or can't be fixed, please make
> sure to raise it on the bug!  In general the earlier these issues are
> uncovered the better it is, because everyone can focus their time on more
> important work.  I wanted to make sure that this wasn't lost in all of the
> rush around our communication for Quantum Flow, my apologies if this hasn't
> been clear before.
>
>
> On to the acknowledgement section, I hope I'm not forgetting to mention
> anyone's name here!
>
>
>- Bas Schouten made it so that we don't clear the compositor
>background immediately before drawing into it
>.  This made
>some painting and scrolling related benchmarks faster
>, and fixed
>a flickering issue
> in the meantime!
>- Mason Chang made the Youtube settings widget less janky
> on Windows with
>D2D when the video is fullscreen.
>- David Baron made the flushes caused by the code that watches your
>mouse to know which side of the window to put the status bar when you hover
>a link less severe
>.
>- Neil Deakin removed a synchronous layout flush
> that used to
>happen when closing a 

Re: A reminder about commit messages: they should be useful

2017-04-17 Thread Nicholas Nethercote
On Tue, Apr 18, 2017 at 11:34 AM, Ben Kelly  wrote:

> FWIW I agree with Olli.  I look for a good one line summary of the change,
> but beyond that I find you really do need to look at the bug to get the
> full context.
>

Huh, interesting. Thanks for the data point.


> I don't object to people writing longer commit messages, but that
> information needs to be in the bug.  Our tools today (splinter and
> mozreview) don't do that automatically AFAIK.  I think there is an hg
> extension you can get to do it with splinter.
>

Yes, 'hg bzexport' defaults to copying the commit message into a bug
comment.
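
For anyone setting this up, a minimal sketch of enabling the extension in
~/.hgrc (the checkout path below is an assumption; point it at wherever you
cloned version-control-tools):

```
[extensions]
bzexport = ~/version-control-tools/hgext/bzexport
```

With that in place, running `hg bzexport` attaches the current changeset to
the bug and, by default, copies the commit message into a bug comment.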

 Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: A reminder about commit messages: they should be useful

2017-04-17 Thread Nicholas Nethercote
On Tue, Apr 18, 2017 at 9:58 AM, smaug  wrote:

>
> That is why we have links to the bug. Bug should always be the unit of
> truth telling
> why some change was done. Bugs tend to have so much more context about the
> change than any individual commit message can or should have.
>

With all due respect, I think you have a different view on this to at least
some people (and perhaps most people).

In a recent similar dev-platform thread you said the following:

"In practice I tend to just read the first line of a commit message and the
most important part there is the bug number."

I was genuinely surprised to read that. Like bz and gps, I strongly prefer
getting context from commit messages than from bug reports. I appreciate
informative commit messages. I put effort into writing them myself, and I
would expect a reviewer to read them given that they are intended to make
their life easier. Obviously, conciseness is also desired, but I prefer
commit messages that err on the side of too much detail rather than not
enough.

(In general, IME the two main ways a patch author can help a reviewer are
(a) write small patches, and (b) write good commit messages.)
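
For concreteness, here is a sketch of the kind of commit message this
argues for, following the usual convention of a bug number and one-line
summary followed by context. The bug number, change, and reviewer name are
all made up for illustration:

```
Bug 123456 - Avoid a sync reflow when closing a tab. r=reviewer

The old code flushed layout on every tab close in order to measure the
tab strip. Cache the measurement instead, and invalidate it when the
window resizes. This removes a common source of jank when closing tabs.
```

The first line gives a reviewer (and future readers of `hg log`) the what;
the body gives the why and the how, without requiring a trip to the bug.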

Nick


Re: e10s-multi update and tests

2017-03-22 Thread Nicholas Nethercote
On Thu, Mar 23, 2017 at 12:12 PM, Andrew McCreight 
wrote:

>
> Though maybe you are asking which processes count against the limit of 4.
>

Yes, that's what I am asking.

Nick


Re: e10s-multi update and tests

2017-03-22 Thread Nicholas Nethercote
Do we have a clear definition of "content process"? I.e. does/will it
include:

- GMP processes (no?)
- GPU process (probably not?)
- file:// URL processes (probably should?)
- Web Extensions processes (probably should?)
- ServiceWorker processes (probably should?)

Thanks.

Nick

On Thu, Mar 23, 2017 at 10:45 AM, Blake Kaplan  wrote:

> Hello all,
>
> As some of you might have noticed, we are now defaulting to 4 content
> processes on Nightly builds! We're continuing to collect data and
> planning on running experiments with different numbers of processes to
> generate more data and allow us to fine-tune our defaults and
> strategies for process allocation.
>
> As part of our effort to enable e10s-multi on Nightly, we disabled a
> few tests that were misbehaving. We've re-enabled most of them and are
> on track to finish re-enabling them (after verifying that the problem
> was in the test and not the underlying code).
>
> As we get to the end of that effort, I'd like to ask everybody to
> think about any areas that they feel are not tested with multiple
> content processes that should be. Obviously, as we find regressions
> related to multiple content processes we'll add regression tests, but
> I would like to avoid having any last-minute panics over missing
> tests.
>
> Thanks.
> --
> Blake
> ___
> firefox-dev mailing list
> firefox-...@mozilla.org
> https://mail.mozilla.org/listinfo/firefox-dev
>


Re: The future of commit access policy for core Firefox

2017-03-10 Thread Nicholas Nethercote
On Sat, Mar 11, 2017 at 2:23 PM, smaug via governance <
governa...@lists.mozilla.org> wrote:
>
> I'd be ok to do a quick r+ if interdiff was working well.

Depending on the relative timezones of the reviewer and reviewee, that
could delay landing by 24 hours or even a whole weekend.

In general there seems to be a large amount of support in this thread for
continuing to allow the r+-with-minor-fixes option.

Nick


Re: All the processes

2017-03-06 Thread Nicholas Nethercote
On Tue, Mar 7, 2017 at 11:27 AM,  wrote:
>
> Intuitively I don't grasp how each content process can add that much more
> memory that it would become a "major problem" jumping from 4 to 8

Simple: lots of stuff gets duplicated in each process. Efforts have been
made to share data between processes to avoid duplication, but it's often
not easy, and there are lots of data structures.
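
The kind of sharing being referred to can be sketched with an anonymous
shared mapping: after fork(), parent and child reference the same physical
pages rather than each holding a private copy. This is a POSIX-only
illustration of the general technique, not Gecko code; all names here are
made up:

```python
import mmap
import os

def shared_mapping_demo() -> int:
    """Place a read-only table in an anonymous shared mapping, then fork.

    Parent and child see the same physical pages, so the data is not
    duplicated per process. POSIX-only sketch; not Firefox code.
    """
    data = b"read-only table shared across processes"
    buf = mmap.mmap(-1, len(data))  # fd -1 = anonymous; MAP_SHARED is the default
    buf[:] = data

    pid = os.fork()
    if pid == 0:
        # Child: reads the same pages without making a private copy.
        os._exit(0 if buf[:len(data)] == data else 1)

    _, status = os.waitpid(pid, 0)
    buf.close()
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print("child exit code:", shared_mapping_demo())
```

The hard part in practice is that most interesting data structures are
mutable or pointer-rich, which is why this is "often not easy".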

Nick


Re: All the processes

2017-03-06 Thread Nicholas Nethercote
On Tue, Mar 7, 2017 at 9:22 AM, Ben Kelly  wrote:

> These measurements are for full content processes.  Many of the processes
> in the above do not need all the chrome script we load in content processes
> today.
>

That's good to know. But it would still be good to get measurements of
these slimmer processes, and check that they don't contain unnecessary
stuff, and decide what coordination is necessary between the different
process types (as kmag suggested elsewhere in the thread).

Nick


Re: All the processes

2017-03-06 Thread Nicholas Nethercote
Thank you for all the responses. Here's an updated list:

- main process

- content process(es): 1 on release for most users; 2 on Nightly

- plugin process(es): just for Flash now? (Win32 involves two processes for
  Flash)

- GPU process (bug 1264543, in Fx53)

- Gecko Media Plugin process: one per origin doing EME and one shared by
  all users of WebRTC that need to encode/decode H.264.

- file:// URL access process (bug 1147911, in Fx54)

- extension process(es): 1 for now, possibly more in the future

- worker process: planned (bug 1231208)

- network stack process: just an idea for now (bug 1322426)

- audio subsystem process: work has just started

- JSPlugin processes: for PDFium and Pepper Flash, possibly 1 or 2 for each
  (bug 558184)

Now for the reason I raised this: the major downside of using multiple
processes is that it increases memory usage. Recent-ish measurements showed
that for e10s-multi we could probably go up to 4 content processes without
blowing it out too badly, but 8 would be a major problem.

However, it's obvious from the above that lots of different groups have
already or are planning to use separate processes for other things,
regardless of what number is chosen for e10s-multi. The apparent lack of
coordination here concerns me, and I'd like to have a concrete discussion
about the expected combined effects of so many separate processes.

Nick



On Sat, Mar 4, 2017 at 11:15 AM, Nicholas Nethercote <n.netherc...@gmail.com
> wrote:

> Hi,
>
> I want to understand all the different processes that we can and will have
> in Firefox. Here's a list I constructed off the top of my head.
>
> - main process
>
> - content process(es): 1 on release for most users; 2 on Nightly
>
> - plugin process: just for Flash now?
>
> - gfx compositor process (bug 1264543, in Fx53)
>
> - file:// URL access process (bug 1147911, in Fx53)
>
> IIRC there was a proposal for a thumbnail generation process a while back
> but judging by bug 1187441 that was scrapped.
>
> Do I have any of these details wrong? Have I missed any?
>
> Thanks.
>
> Nick
>


All the processes

2017-03-03 Thread Nicholas Nethercote
Hi,

I want to understand all the different processes that we can and will have
in Firefox. Here's a list I constructed off the top of my head.

- main process

- content process(es): 1 on release for most users; 2 on Nightly

- plugin process: just for Flash now?

- gfx compositor process (bug 1264543, in Fx53)

- file:// URL access process (bug 1147911, in Fx53)

IIRC there was a proposal for a thumbnail generation process a while back
but judging by bug 1187441 that was scrapped.

Do I have any of these details wrong? Have I missed any?

Thanks.

Nick

