Re: Tagged pointers

2018-07-12 Thread Alexis Beingessner
On Thu, Jul 12, 2018 at 11:03 PM, Robert O'Callahan 
wrote:

> On Fri, Jul 13, 2018 at 11:40 AM, Steve Fink  wrote:
>
> > On 07/12/2018 04:27 PM, Cameron McCormack wrote:
> >
> >> On Fri, Jul 13, 2018, at 6:51 AM, Kris Maglione wrote:
> >>
> >>> I actually have a patch sitting around with helpers to make it super
> >>> easy to
> >>> use smart pointers as tagged pointers :) I never wound up putting it up
> >>> for
> >>> review, since my original use case went away, but if you can think of
> any
> >>> specific cases where it would be useful, I'd be happy to try and get it
> >>> landed.
> >>>
> >> Speaking of tagged pointers, I've used lower one or two bits for tagging
> >> a number of times, but I've never tried packing things into the high
> bits
> >> of a 64 bit pointer.  Is that inadvisable for any reason?  How many bits
> >> can I use, given the 64 bit platforms we need to support?
> >>
> >
> > JS::Value makes use of this. We preserve the bottom 47 bits, but that's
> > starting to be problematic as some systems want 48. So, stashing stuff
> into
> > the high 16 bits is pretty safe!
> >
>
> 57-bit address space support is coming for x86-64.
>
> Rob
>

Last I heard, the 48-bit assumption had become so pervasive (and useful) that
OS devs were planning to expose the larger address space only to processes
that explicitly opted into it.

In the case of Linux, you would have to explicitly request high-address pages
to start receiving them: https://lwn.net/Articles/717293/

I always assumed Firefox simply wouldn't ever opt into high addresses, unless
Fission MemShrink doesn't work out and Firefox suddenly needs 300TB of RAM ;p
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Tagged pointers

2018-07-12 Thread Mike Hommey
On Fri, Jul 13, 2018 at 03:03:47PM +1200, Robert O'Callahan wrote:
> On Fri, Jul 13, 2018 at 11:40 AM, Steve Fink  wrote:
> 
> > On 07/12/2018 04:27 PM, Cameron McCormack wrote:
> >
> >> On Fri, Jul 13, 2018, at 6:51 AM, Kris Maglione wrote:
> >>
> >>> I actually have a patch sitting around with helpers to make it super
> >>> easy to
> >>> use smart pointers as tagged pointers :) I never wound up putting it up
> >>> for
> >>> review, since my original use case went away, but if you can think of any
> >>> specific cases where it would be useful, I'd be happy to try and get it
> >>> landed.
> >>>
> >> Speaking of tagged pointers, I've used lower one or two bits for tagging
> >> a number of times, but I've never tried packing things into the high bits
> >> of a 64 bit pointer.  Is that inadvisable for any reason?  How many bits
> >> can I use, given the 64 bit platforms we need to support?
> >>
> >
> > JS::Value makes use of this. We preserve the bottom 47 bits, but that's
> > starting to be problematic as some systems want 48. So, stashing stuff into
> > the high 16 bits is pretty safe!
> >
> 
> 57-bit address space support is coming for x86-64.

The high 16 bits are also used for user-space address space on some tier-3
platforms, and we've had to make the memory allocator avoid those
addresses to make JS::Value work there.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Tagged pointers

2018-07-12 Thread Robert O'Callahan
On Fri, Jul 13, 2018 at 11:40 AM, Steve Fink  wrote:

> On 07/12/2018 04:27 PM, Cameron McCormack wrote:
>
>> On Fri, Jul 13, 2018, at 6:51 AM, Kris Maglione wrote:
>>
>>> I actually have a patch sitting around with helpers to make it super
>>> easy to
>>> use smart pointers as tagged pointers :) I never wound up putting it up
>>> for
>>> review, since my original use case went away, but if you can think of any
>>> specific cases where it would be useful, I'd be happy to try and get it
>>> landed.
>>>
>> Speaking of tagged pointers, I've used lower one or two bits for tagging
>> a number of times, but I've never tried packing things into the high bits
>> of a 64 bit pointer.  Is that inadvisable for any reason?  How many bits
>> can I use, given the 64 bit platforms we need to support?
>>
>
> JS::Value makes use of this. We preserve the bottom 47 bits, but that's
> starting to be problematic as some systems want 48. So, stashing stuff into
> the high 16 bits is pretty safe!
>

57-bit address space support is coming for x86-64.

Rob
-- 
Su ot deraeppa sah dna Rehtaf eht htiw saw hcihw, efil lanrete eht uoy ot
mialcorp ew dna, ti ot yfitset dna ti nees evah ew; deraeppa efil eht. Efil
fo Drow eht gninrecnoc mialcorp ew siht - dehcuot evah sdnah ruo dna ta
dekool evah ew hcihw, seye ruo htiw nees evah ew hcihw, draeh evah ew
hcihw, gninnigeb eht morf saw hcihw taht.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Randell Jesup
>On 07/12/2018 11:08 PM, Randell Jesup wrote:
>> We may need to trade first-load time against memory use by lazy-initing
>> more things than now, though we did quite a bit on that already for
>> reducing startup time.
>
>One thing to remember is that some of the child processes will be more
>important than others. For example, all the processes used for browsing
>contexts in the foreground tab should probably prefer performance over
>memory (in cases where that is something we can choose from), but if a
>process is only used for browsing contexts in background tabs and isn't
>playing any audio or such, it can probably use less memory-hungry
>approaches.

Correct - we need to have observers/what-have-you for
background/foreground state (and we may want an intermediate state or
two: foreground-but-not-focused (for example a visible window that
isn't the focused window), recently-in-foreground (switching back and
forth), background-for-longer-than-delta, etc.).

Modules can use these to drop caches, shut down unnecessary threads,
change strategies, force GCs/CCs, etc.

Some of this certainly already exists, but may need to be extended (and
used a lot more).
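
A hypothetical sketch of that kind of priority fan-out (the names and states
below are made up for illustration, not the existing Gecko observer API); the
listener callbacks are where the cache-dropping and thread shutdown would
hang off:

  #include <functional>
  #include <vector>

  // Hypothetical names throughout -- illustration only, not Gecko's API.
  enum class ProcessPriority {
    Foreground,
    ForegroundNotFocused,  // visible window that isn't the focused window
    RecentlyForeground,    // recently switched away; keep things warm
    Background,
    BackgroundLongIdle,    // background for longer than some delta
  };

  class PriorityNotifier {
   public:
    using Listener = std::function<void(ProcessPriority)>;

    void AddListener(Listener aListener) {
      mListeners.push_back(std::move(aListener));
    }

    void SetPriority(ProcessPriority aPriority) {
      // Fan the new state out to every registered module.
      for (auto& listener : mListeners) {
        listener(aPriority);
      }
    }

   private:
    std::vector<Listener> mListeners;
  };

  int main() {
    PriorityNotifier notifier;
    notifier.AddListener([](ProcessPriority aPriority) {
      if (aPriority == ProcessPriority::BackgroundLongIdle) {
        // e.g. drop caches, shut down idle thread pools, force a GC/CC.
      }
    });
    notifier.SetPriority(ProcessPriority::BackgroundLongIdle);
  }
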

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using clang-cl to ship Windows builds

2018-07-12 Thread Anthony Jones
On Friday, 13 July 2018 11:26:07 UTC+12, Jörg Knobloch  wrote:
> On 10/07/2018 22:29, David Major wrote:
> > Bug 1443590 is switching our official Windows builds to use clang-cl
> > as the compiler.
> >
> > Please keep an eye out for regressions and file a blocking bug for
> > anything that might be fallout from this change. I'm especially
> > interested in hearing about the quality of the debugging experience.
> 
> Just out of interest, a question from the de facto Thunderbird maintainer:
> 
> Does clang-cl give an executable that does the same thing as the 
> executable created by MS VS C++?
> 
> After switching to clang-cl, one of our Windows 10-only test failures 
> magically disappeared 
> (https://bugzilla.mozilla.org/show_bug.cgi?id=1469188) and another one 
> magically appeared on Windows only 
> (https://bugzilla.mozilla.org/show_bug.cgi?id=1475166).
> 
> I'm not in a position to debug any of that, it's just an observation.
> 
> Jörg.

Changing the compiler will change all the timing characteristics. This can 
reliably produce different outcomes for race conditions. We've seen some of 
that. You're likely to be seeing the same thing.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Nicholas Nethercote
On Fri, Jul 13, 2018 at 1:56 AM, Andrew McCreight 
wrote:

> >
> > Just curious, is there a bug on file to measure excess capacity on
> > nsTArrays and hash tables?
>
> njn looked at that kind of issue at some point (he changed how arrays grow,
> for instance, to reduce overhead), but it has probably been around 5 years,
> so there may be room for improvement for things added in the meanwhile.
>

For a trip down memory lane, check out
https://blog.mozilla.org/nnethercote/2011/08/05/clownshoes-available-in-sizes-2101-and-up/.
The size classes described in that post are still in use today.

More usefully: if anyone wants to investigate slop -- which is only one
kind of wasted space, but an important one -- it's now really easy with DMD:
- Invoke DMD in "Live" mode (i.e. generic heap profiling mode, rather than
dark matter detection mode).
- Use the `--sort-by slop` flag with dmd.py.

Full instructions are at
https://developer.mozilla.org/en-US/docs/Mozilla/Performance/DMD.
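
For anyone unfamiliar with the term, a minimal sketch of what slop is: the
gap between the size a caller requests and the size class the allocator
actually returns. Illustrative only -- the exact size classes depend on the
allocator (mozjemalloc in Firefox's case), and malloc_usable_size is the
glibc/jemalloc way to ask for the real block size, not something DMD needs:

  #include <cstdio>
  #include <cstdlib>
  #include <malloc.h>  // malloc_usable_size() on glibc/jemalloc

  int main() {
    size_t requested = 4100;  // just past a plausible 4 KiB size class
    void* p = malloc(requested);
    size_t usable = malloc_usable_size(p);  // size of the block we really got
    printf("requested %zu, usable %zu, slop %zu\n",
           requested, usable, usable - requested);
    free(p);
    return 0;
  }
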

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Two tips for faster Firefox builds

2018-07-12 Thread Nicholas Nethercote
Hi,

Here are two things that might help you get faster builds.

TL;DR:

1. On Linux, make sure you have lld installed, because it's a *much* faster
linker than gold, and it's now used by default if installed.

2. Upgrade your sccache to version 0.2.7 to get faster rebuilds of changed
Rust files.


DETAILS:

1. lld was made the default linker on Linux (when installed) in
https://bugzilla.mozilla.org/show_bug.cgi?id=1473436. Thank you, glandium!

If you don't already have it installed, run `mach bootstrap`  to get a copy
in ~/.mozbuild/clang/bin/, which you can then add to your PATH.
Alternatively `mach artifact toolchain --from-build linux64-clang` will
download lld (and other clang tools) under the current directory.

To confirm that lld is being used, look for this output from configure:

  checking for linker... lld

On other platforms, lld is not yet the default but you *might* be able to
use it.

On Windows, again use `mach bootstrap` and add this to your mozconfig:

  export LINKER=lld-link

This used to cause problems with debugging (bug 1458109) but that has since
been fixed.

On Mac, if you have lld installed, add this to your mozconfig:

  export LDFLAGS=-fuse-ld=lld

but it might cause build errors, such as "No rule to make target
`libmozavutil_dylib.list', needed by `libmozavutil.dylib`".

https://bugzilla.mozilla.org/show_bug.cgi?id=1384434 is the tracking bug
for making lld the default on all platforms. Any bugs filed about problems
should block that bug.

Also, if you are building a Rust-only project, something like this might
work (on Linux; I'm not sure about other builds):

  RUSTFLAGS="-Clinker=clang -Clink-arg=-fuse-ld=lld" cargo build

See https://github.com/rust-lang/rust/issues/50584#issuecomment-398988026
for an example improvement.

https://github.com/rust-lang/rust/issues/39915 is the issue for making lld
the default linker for Rust.

On non-Linux platforms, I recommend testing with the resulting builds
before you place full confidence in them.


2. When the Rust compiler is invoked with incremental compilation (which
happens in any Firefox build that doesn't have --enable-release), sccache
0.2.7 will skip the file. That's good because incremental compilation has
much the same effect, and sccache used to cause significant slowdowns in
the cases where a cache miss occurred. Thank you to ted for this
improvement!

For example, on my fast Linux box, if I `touch`
servo/components/style/stylist.rs and rebuild (resulting in an sccache cache
hit), I get these times:
- old sccache disabled: 28s
- old sccache enabled: 25s
- new sccache enabled: 28s

I.e. sccache's new behaviour causes a tiny slowdown.

But if I insert a comment at the top of that file (resulting in an sccache
cache miss), I get these times:
- old sccache disabled: 37s
- old sccache enabled: 1m53s(!)
- new sccache enabled: 37s

I.e. sccache's new behaviour is a big win. And this "make a small change
and recompile" case is extremely common.

See https://github.com/mozilla/sccache/issues/257 for more details.

Nick
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Tagged pointers

2018-07-12 Thread Mike Hommey
On Thu, Jul 12, 2018 at 04:40:39PM -0700, Steve Fink wrote:
> On 07/12/2018 04:27 PM, Cameron McCormack wrote:
> > On Fri, Jul 13, 2018, at 6:51 AM, Kris Maglione wrote:
> > > I actually have a patch sitting around with helpers to make it super easy 
> > > to
> > > use smart pointers as tagged pointers :) I never wound up putting it up 
> > > for
> > > review, since my original use case went away, but if you can think of any
> > > specific cases where it would be useful, I'd be happy to try and get it
> > > landed.
> > Speaking of tagged pointers, I've used lower one or two bits for tagging a 
> > number of times, but I've never tried packing things into the high bits of 
> > a 64 bit pointer.  Is that inadvisable for any reason?  How many bits can I 
> > use, given the 64 bit platforms we need to support?
> 
> JS::Value makes use of this. We preserve the bottom 47 bits, but that's
> starting to be problematic as some systems want 48. So, stashing stuff into
> the high 16 bits is pretty safe!
> 
> The number of low bits available depends on your pointer alignment. But you
> can generally get away with 2 bits on 32-bit, 3 bits on 64-bit -- unless
> it's a char*, in which case it's quite common to have byte-aligned pointers
> (e.g. when sharing part of another string). You really do need to know the
> exact alignment, though, rather than guessing.
> 
> Bit ops are pretty cheap, and in these post-Spectre days, it's not an awful
> idea to xor with a type field in those high bits before (potentially
> speculatively) accessing pointers. I think you still get the benefits of
> speculation if it's the right type.

On the topic of tagged pointers, the approach taken in the LLVM code
base is interesting, as described in this talk by Chandler Carruth.
https://www.youtube.com/watch?v=vElZc6zSIXM
(the part relevant to tagged pointers starts at 22:36)

(relatedly, I have the beginning of something similar for rust)
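
For reference, a minimal hand-rolled sketch of the low-bit technique those
LLVM helpers package up (this is not the actual LLVM PointerIntPair API, just
the idea): alignment guarantees the low bits of a valid pointer are zero, so
they can carry a small tag.

  #include <cassert>
  #include <cstdint>

  // Illustration only, not LLVM's class: store a small integer tag in the
  // alignment bits of a pointer. alignof(T) must leave enough zero bits.
  template <typename T, unsigned Bits = 2>
  class TaggedPtr {
    static_assert(alignof(T) >= (1u << Bits),
                  "type is not aligned enough to spare the tag bits");
    static constexpr uintptr_t kTagMask = (uintptr_t(1) << Bits) - 1;
    uintptr_t mBits = 0;

   public:
    TaggedPtr(T* aPtr, unsigned aTag) {
      auto raw = reinterpret_cast<uintptr_t>(aPtr);
      assert((raw & kTagMask) == 0 && aTag <= kTagMask);
      mBits = raw | aTag;
    }
    T* ptr() const { return reinterpret_cast<T*>(mBits & ~kTagMask); }
    unsigned tag() const { return unsigned(mBits & kTagMask); }
  };

  int main() {
    double d = 1.0;
    TaggedPtr<double, 2> p(&d, 3);  // alignof(double) >= 4 on our platforms
    assert(p.ptr() == &d && p.tag() == 3);
  }
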

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Tagged pointers

2018-07-12 Thread Steve Fink

On 07/12/2018 04:27 PM, Cameron McCormack wrote:

On Fri, Jul 13, 2018, at 6:51 AM, Kris Maglione wrote:

I actually have a patch sitting around with helpers to make it super easy to
use smart pointers as tagged pointers :) I never wound up putting it up for
review, since my original use case went away, but if you can think of any
specific cases where it would be useful, I'd be happy to try and get it
landed.

Speaking of tagged pointers, I've used lower one or two bits for tagging a 
number of times, but I've never tried packing things into the high bits of a 64 
bit pointer.  Is that inadvisable for any reason?  How many bits can I use, 
given the 64 bit platforms we need to support?


JS::Value makes use of this. We preserve the bottom 47 bits, but that's 
starting to be problematic as some systems want 48. So, stashing stuff 
into the high 16 bits is pretty safe!


The number of low bits available depends on your pointer alignment. But 
you can generally get away with 2 bits on 32-bit, 3 bits on 64-bit -- 
unless it's a char*, in which case it's quite common to have 
byte-aligned pointers (e.g. when sharing part of another string). You 
really do need to know the exact alignment, though, rather than guessing.


Bit ops are pretty cheap, and in these post-Spectre days, it's not an 
awful idea to xor with a type field in those high bits before 
(potentially speculatively) accessing pointers. I think you still get 
the benefits of speculation if it's the right type.
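
To make the high-bits idea concrete, here is a minimal sketch (not JS::Value's
actual encoding) that assumes user-space pointers fit in the low 48 bits --
exactly the assumption being discussed in this thread -- and folds the type
check into the untagging step along the lines Steve describes:

  #include <cassert>
  #include <cstdint>

  // Illustration only; JS::Value's real encoding is more involved.
  constexpr unsigned kTagShift = 48;  // assumes <= 48-bit user pointers
  constexpr uint64_t kPtrMask = (uint64_t(1) << kTagShift) - 1;

  inline uint64_t Encode(void* aPtr, uint16_t aTag) {
    auto raw = reinterpret_cast<uint64_t>(aPtr);
    assert((raw & ~kPtrMask) == 0 && "pointer needs more than 48 bits");
    return (uint64_t(aTag) << kTagShift) | raw;
  }

  inline uint16_t TagOf(uint64_t aWord) { return uint16_t(aWord >> kTagShift); }

  inline void* Decode(uint64_t aWord, uint16_t aExpectedTag) {
    // XOR with the expected tag rather than masking it off: if the tag
    // matches, the high bits cancel and we recover the pointer; if it
    // doesn't, we get a garbage address rather than a usable one, which
    // is the post-Spectre property mentioned above.
    return reinterpret_cast<void*>(aWord ^ (uint64_t(aExpectedTag) << kTagShift));
  }

  int main() {
    int x = 42;
    uint64_t w = Encode(&x, 7);
    assert(TagOf(w) == 7 && Decode(w, 7) == &x);
  }
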


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Cameron McCormack
On Fri, Jul 13, 2018, at 6:51 AM, Kris Maglione wrote:
> I actually have a patch sitting around with helpers to make it super easy to 
> use smart pointers as tagged pointers :) I never wound up putting it up for 
> review, since my original use case went away, but if you can think of any 
> specific cases where it would be useful, I'd be happy to try and get it 
> landed.

Speaking of tagged pointers, I've used lower one or two bits for tagging a 
number of times, but I've never tried packing things into the high bits of a 64 
bit pointer.  Is that inadvisable for any reason?  How many bits can I use, 
given the 64 bit platforms we need to support?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using clang-cl to ship Windows builds

2018-07-12 Thread Jörg Knobloch

On 10/07/2018 22:29, David Major wrote:

Bug 1443590 is switching our official Windows builds to use clang-cl
as the compiler.

Please keep an eye out for regressions and file a blocking bug for
anything that might be fallout from this change. I'm especially
interested in hearing about the quality of the debugging experience.


Just out of interest, a question from the de facto Thunderbird maintainer:

Does clang-cl give an executable that does the same thing as the 
executable created by MS VS C++?


After switching to clang-cl, one of our Windows 10-only test failures 
magically disappeared 
(https://bugzilla.mozilla.org/show_bug.cgi?id=1469188) and another one 
magically appeared on Windows only 
(https://bugzilla.mozilla.org/show_bug.cgi?id=1475166).


I'm not in a position to debug any of that, it's just an observation.

Jörg.


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Xidorn Quan
On Fri, Jul 13, 2018, at 7:08 AM, smaug wrote:
> One thing to remember is that some of the child processes will be more
> important than others. For example, all the processes used for browsing
> contexts in the foreground tab should probably prefer performance over
> memory (in cases where that is something we can choose from), but if a
> process is only used for browsing contexts in background tabs and isn't
> playing any audio or such, it can probably use less memory-hungry
> approaches.
> Like, could Stylo use fewer threads when used in background-tabs-only
> processes, and create more threads once the process becomes foreground?

I've filed a bug for this after I saw this email thread: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1475091

- Xidorn
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread smaug

On 07/12/2018 11:08 PM, Randell Jesup wrote:

I do hope that the 100 process figure scenario that was given is a worst-case 
scenario though...


It's not.  Worst case is a LOT worse.

Shutting down threads/threadpools when not needed or off an idle timer
is a Good thing.  There may be some perf hit since it may mean starting
a thread instead of just sending a message at times; this may require
some tuning in specific cases, or leaving 1 thread or more running
anyways.

Stylo will be an interesting case here.

We may need to trade first-load time against memory use by lazy-initing
more things than now, though we did quite a bit on that already for
reducing startup time.




One thing to remember is that some of the child processes will be more
important than others. For example, all the processes used for browsing
contexts in the foreground tab should probably prefer performance over memory
(in cases where that is something we can choose from), but if a process is only
used for browsing contexts in background tabs and isn't playing any audio or
such, it can probably use less memory-hungry approaches.
Like, could Stylo use fewer threads when used in background-tabs-only
processes, and create more threads once the process becomes foreground?
We have a similar approach in many cases for performance and responsiveness
reasons, but less often for memory usage reasons.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Kris Maglione

On Thu, Jul 12, 2018 at 10:27:13PM +0200, Gabriele Svelto wrote:

On 12/07/2018 22:19, Kris Maglione wrote:

I've actually been thinking on filing a bug to do something similar, to
measure cumulative effects of excess padding in certain types since I
began looking into bug 1460674, and Sylvestre mentioned that
clang-analyzer can generate reports on excess padding.


I've encountered at least one structure where a boolean flag is 64 bits
in size on 64-bit builds. If we really want to go the last mile we
might want to also evaluate things like tagged pointers; there's
probably some KiBs to be saved there too.


I actually have a patch sitting around with helpers to make it super easy to 
use smart pointers as tagged pointers :) I never wound up putting it up for 
review, since my original use case went away, but if you can think of any 
specific cases where it would be useful, I'd be happy to try and get it 
landed.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Kris Maglione

On Thu, Jul 12, 2018 at 04:08:49PM -0400, Randell Jesup wrote:

I do hope that the 100 process figure scenario that was given is a worst-case 
scenario though...


It's not.  Worst case is a LOT worse.

Shutting down threads/threadpools when not needed or off an idle timer
is a Good thing.  There may be some perf hit since it may mean starting
a thread instead of just sending a message at times; this may require
some tuning in specific cases, or leaving 1 thread or more running
anyways.

Stylo will be an interesting case here.

We may need to trade first-load time against memory use by lazy-initing
more things than now, though we did quite a bit on that already for
reducing startup time.


This is a really important point: Memory usage and performance are deeply 
intertwined.


There are hard limits on the amount of memory we can use, and the more 
of it we waste needlessly, the less we have available for performance 
optimizations that need it. In the worst (performance) case, we wind up 
swapping, at which point performance may as well not exist.


We're going to have to make hard decisions about when/how often/how 
aggressively we flush caches, spin down threads, unload tabs, ... The 
more unnecessary overhead we save, the less extreme we're going to have 
to be about this. And the better we get at spinning down unused threads 
and evicting low impact cache entries, the less aggressive we're going 
to have to be about the high impact ones. Throwing those things away 
will have a performance impact, but not throwing them away will, in the 
end, have a bigger one.

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Yes=0, No=1

2018-07-12 Thread Justin Dolske
On Thu, Jul 12, 2018 at 1:28 PM, Jason Orendorff 
wrote:

>
> ...This is bad, right? Asking for a friend.
>
>
1

Justin
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to remove: the 'Memory usage of Subprocesses' table from about:performance

2018-07-12 Thread Kris Maglione

+1 for adding it back in the future.

Even if memory usage isn't as directly related to performance as 
CPU usage is, it has a *huge* effect on performance on memory 
constrained systems, if it causes them to have to swap.


Also, in my experience, the overlap between poorly-performing 
code and leaky code tends to be high, so it would really be nice 
to keep these numbers in one place.


On Thu, Jul 12, 2018 at 10:25:31AM -0700, Eric Rahm wrote:

Thanks Florian. Considering it's roughly unmaintained right now, leaking,
and showing up in perf profiles, it sounds reasonable to remove the memory
section. I've filed bug 1475301 [1] to allow us to measure USS off main
thread; we can deal with adding that back in the future if it makes sense.

-e

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1475301

On Thu, Jul 12, 2018 at 12:41 AM, Florian Quèze  wrote:


On Thu, Jul 12, 2018 at 1:18 AM, Eric Rahm  wrote:

> What performance issues are you seeing? RSS and USS should be relatively
> lightweight and the polling frequency isn't very high.

It seems ResidentUniqueDistinguishedAmount does blocking system calls,
resulting in blocking the main thread for several seconds in the worst
case. Here's a screenshot of a profile showing it:
https://i.imgur.com/DjRMQtY.png (unfortunately that profile is too big
and fails to upload with the 'share' feature).

There's also a memory leak in the implementation, after leaving
about:performance open for a couple hours, there was more than 300MB
of JS "Function" and "Call" objects (about:memory screenshot:
https://i.imgur.com/21YNDru.png ) and the devtools' Memory tool shows
that this is coming from the code queuing updates to that subprocess
memory table: https://i.imgur.com/04M71hg.png

Florian

--
Florian Quèze


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


--
Kris Maglione
Senior Firefox Add-ons Engineer
Mozilla Corporation

If we wish to count lines of code, we should not regard them as lines
produced but as lines spent.
--Edsger W. Dijkstra

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Kris Maglione

On Thu, Jul 12, 2018 at 08:56:28AM -0700, Andrew McCreight wrote:

On Thu, Jul 12, 2018 at 3:57 AM, Emilio Cobos Álvarez 
wrote:


Thanks for doing this!

Just curious, is there a bug on file to measure excess capacity on
nsTArrays and hash tables?


njn looked at that kind of issue at some point (he changed how arrays grow,
for instance, to reduce overhead), but it has probably been around 5 years,
so there may be room for improvement for things added in the meanwhile.
However, our focus here is really on reducing per-process memory overhead,
rather than generic memory improvements, because we've had a lot of focus
on the latter as part of MemShrink, but not the former, so there's likely
easier improvements to be had.


I kind of suspect that improving the storage efficiency of hashtables (and 
probably nsTArrays too) will have an out-sized effect on per-process memory. 
Just at startup, for a mostly empty process, we have a huge amount of memory 
devoted to hashtables that would otherwise be shared across a bunch of 
origins—enough that removing just 4 bytes of padding per entry would save 87K 
per process. And that number tends to grow as we populate caches that we need 
for things like layout and atoms.


As much as I'd like to be able to share many of those caches between 
processes, there are always going to need to be process-specific hashtables on top 
of the shared ones for things that can't be/shouldn't be/aren't yet shared. 
And that extra overhead tends to grow proportionally to the number of 
processes we have.



On 07/10/2018 08:19 PM, Kris Maglione wrote:


Welcome to the first edition of the Fission MemShrink newsletter.[1]

In this edition, I'll sum up what the project is, and why it matters to
you. In subsequent editions, I'll give updates on progress that we've made,
and areas that we'll need to focus on next.[2]


The Fission MemShrink project is one of the most easily overlooked
aspects of Project Fission (also known as Site Isolation), but is
absolutely critical to its success. And will require a company- and
community-wide effort to meet its goals.

The problem is thus: In order for site isolation to work, we need to be
able to run *at least* 100 content processes in an average Firefox session.
Each of those processes has its own base memory overhead—memory we use just
for creating the process, regardless of what's running in it. In the
post-Fission world, that overhead needs to be less than 10MB per process in
order to keep the extra overhead from Fission below 1GB. Right now, on our
best-case platform, Windows 10, it is somewhere between 17 and 21MB. Linux and
OS-X hover between 25 and 35MB. In other words, between 2 and 3.5GB for an
ordinary session.

That means that, in the best case, we need to reduce the memory we use in
content processes by *at least* 7MB. The problem, of course, is that there
are only so many places we can cut memory without losing functionality, and
even fewer places where we can make big wins. But, there are lots of places
we can make small and medium-sized wins.

So, to put the task into perspective, of all of the places we can cut a
certain amount of overhead, here are the number of each that we need to fix
in order to reach 1MB:

250KB:   4
100KB:  10
75KB:   13
50KB:   20
20KB:   50
10KB:  100
5KB:   200

Now remember: we need to do *all* of these in order to reach our goal.
It's not a matter of one 250KB improvement or 50 5KB improvements. It's 4
250KB *and* 200 5KB improvements. There just aren't enough places we can
cut 250KB. If we fall short in any of those areas, Project Fission will
fail, and Firefox will be the only major browser without site isolation.

But it won't fail, because all of you are awesome, and this is a totally
achievable goal if we all throw our effort behind it.

Essentially what this means, though, is that if we identify an area of
overhead that's 50KB[3] or larger that can be eliminated, it *has* to be
eliminated. There just aren't that many large chunks to remove. They all
need to go. And if an area of code has a dozen 5KB chunks that can be
eliminated, maybe they don't all have to go, but at least half of them do.
The more the better.


To help us triage these issues, we have a tracking bug (
https://bugzil.la/memshrink-content), and a per-bug whiteboard tag
([overhead:...]) which gives an estimate of how much per-process overhead
we believe fixing that bug would eliminate. Please feel free to add
blockers to the tracking bug if you think they're relevant, and to add or
update [overhead] tags if you have reasonable estimates.


With all of that said, here's a brief update of the progress we've made
so far:

In the past month, unique memory per process[4] has dropped 3-4MB[5], and
JS memory usage in particular has dropped 1.1-1.9MB.

Particular credit goes to:

* Eric Rahm added an AWSY test suite to track base content process memory
   (https://bugzil.la/1442361). Results:

Resident unique: 

Yes=0, No=1

2018-07-12 Thread Jason Orendorff
The codebase has a few bool-like enum classes like this:

enum class HolodeckSafetyProtocolsEnabled {
  Yes, No
};

Note that `bool(HolodeckSafetyProtocolsEnabled::Yes)` is false.

...This is bad, right? Asking for a friend.
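
One hedged suggestion (assuming the enum is meant to be tested as a boolean
at all): give the enumerators explicit values, or base the enum on bool, so
the conversion can't surprise anyone. A minimal sketch, reusing the name from
the example above:

  #include <cassert>

  // Sketch of a less surprising spelling of the same flag.
  enum class HolodeckSafetyProtocolsEnabled : bool {
    No = false,
    Yes = true,
  };

  int main() {
    assert(bool(HolodeckSafetyProtocolsEnabled::Yes));
    assert(!bool(HolodeckSafetyProtocolsEnabled::No));
  }
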

-j
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Gabriele Svelto
On 12/07/2018 22:19, Kris Maglione wrote:
> I've actually been thinking on filing a bug to do something similar, to
> measure cumulative effects of excess padding in certain types since I
> began looking into bug 1460674, and Sylvestre mentioned that
> clang-analyzer can generate reports on excess padding.

I've encountered at least one structure where a boolean flag is 64 bits
in size on 64-bit builds. If we really want to go the last mile we
might want to also evaluate things like tagged pointers; there's
probably some KiBs to be saved there too.
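
A minimal sketch of that effect (made-up structures, not ones from the tree):
field ordering alone decides whether a couple of bools cost extra padded
words or share a single one:

  #include <cstdio>

  // Hypothetical layouts, illustration only. On a typical 64-bit ABI the
  // pointers are 8-byte aligned, so each lone bool drags in 7 padding bytes.
  struct Padded {
    void* mPtr;    // 8 bytes
    bool mFlagA;   // 1 byte + 7 bytes padding before the next pointer
    void* mOther;  // 8 bytes
    bool mFlagB;   // 1 byte + 7 bytes tail padding
  };               // typically sizeof == 32

  struct Repacked {
    void* mPtr;
    void* mOther;
    bool mFlagA;   // the flags now share one 8-byte slot
    bool mFlagB;
  };               // typically sizeof == 24

  int main() {
    printf("Padded: %zu bytes, Repacked: %zu bytes\n",
           sizeof(Padded), sizeof(Repacked));
  }
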

There's also more than one place where we're using strings to identify
stuff where we could use enums/integers instead. And yeah, my much
delayed refactoring of the observer service got a lot higher on my
priority list after reading this thread.

 Gabriele



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Kris Maglione

On Thu, Jul 12, 2018 at 12:57:35PM +0200, Emilio Cobos Álvarez wrote:

Thanks for doing this!

Just curious, is there a bug on file to measure excess capacity on 
nsTArrays and hash tables?


I don't think so, but it's a good idea.

I've actually been thinking on filing a bug to do something similar, to 
measure cumulative effects of excess padding in certain types since I 
began looking into bug 1460674, and Sylvestre mentioned that 
clang-analyzer can generate reports on excess padding.


It would probably be a good idea to try to roll this into the same 
project.


One nice change coming up on this front is that bug 1402910 will probably 
allow us to increase the load factors of most of our hashtables without 
losing performance. Having up-to-date numbers for these things would 
probably help decide how to prioritize those sorts of bugs.
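
As a rough illustration of why the load factor matters for overhead (generic
numbers, not PLDHashTable's actual parameters, and ignoring the power-of-two
rounding real tables do): the slot array shrinks roughly in proportion to how
full we allow the table to get before growing.

  #include <cmath>
  #include <cstdio>

  // Generic illustration: slots needed to hold `entries` items if the table
  // may fill up to `maxLoad` before it has to grow.
  int main() {
    const double entries = 1000.0;
    const double entryBytes = 16.0;  // made-up per-entry size
    const double loads[] = {0.5, 0.75, 0.875};
    for (double maxLoad : loads) {
      double slots = std::ceil(entries / maxLoad);
      printf("max load %.3f -> %.0f slots (~%.0f bytes)\n",
             maxLoad, slots, slots * entryBytes);
    }
  }
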



On 07/10/2018 08:19 PM, Kris Maglione wrote:

Welcome to the first edition of the Fission MemShrink newsletter.[1]

In this edition, I'll sum up what the project is, and why it matters 
to you. In subsequent editions, I'll give updates on progress that 
we've made, and areas that we'll need to focus on next.[2]



The Fission MemShrink project is one of the most easily overlooked 
aspects of Project Fission (also known as Site Isolation), but is 
absolutely critical to its success. And will require a company- and 
community-wide effort to meet its goals.


The problem is thus: In order for site isolation to work, we need to 
be able to run *at least* 100 content processes in an average 
Firefox session. Each of those processes has its own base memory 
overhead—memory we use just for creating the process, regardless of 
what's running in it. In the post-Fission world, that overhead needs 
to be less than 10MB per process in order to keep the extra overhead 
from Fission below 1GB. Right now, on our best-case platform, 
Windows 10, it is somewhere between 
between 25 and 35MB. In other words, between 2 and 3.5GB for an 
ordinary session.


That means that, in the best case, we need to reduce the memory we 
use in content processes by *at least* 7MB. The problem, of course, 
is that there are only so many places we can cut memory without 
losing functionality, and even fewer places where we can make big 
wins. But, there are lots of places we can make small and 
medium-sized wins.


So, to put the task into perspective, of all of the places we can 
cut a certain amount of overhead, here are the number of each that 
we need to fix in order to reach 1MB:


250KB:   4
100KB:  10
75KB:   13
50KB:   20
20KB:   50
10KB:  100
5KB:   200

Now remember: we need to do *all* of these in order to reach our 
goal. It's not a matter of one 250KB improvement or 50 5KB 
improvements. It's 4 250KB *and* 200 5KB improvements. There just 
aren't enough places we can cut 250KB. If we fall short in any of 
those areas, Project Fission will fail, and Firefox will be the only 
major browser without site isolation.


But it won't fail, because all of you are awesome, and this is a 
totally achievable goal if we all throw our effort behind it.


Essentially what this means, though, is that if we identify an area 
of overhead that's 50KB[3] or larger that can be eliminated, it 
*has* to be eliminated. There just aren't that many large chunks to 
remove. They all need to go. And if an area of code has a dozen 5KB 
chunks that can be eliminated, maybe they don't all have to go, but 
at least half of them do. The more the better.



To help us triage these issues, we have a tracking bug 
(https://bugzil.la/memshrink-content), and a per-bug whiteboard tag 
([overhead:...]) which gives an estimate of how much per-process 
overhead we believe fixing that bug would eliminate. Please feel 
free to add blockers to the tracking bug if you think they're 
relevant, and to add or update [overhead] tags if you have 
reasonable estimates.



With all of that said, here's a brief update of the progress we've 
made so far:


In the past month, unique memory per process[4] has dropped 
3-4MB[5], and JS memory usage in particular has dropped 1.1-1.9MB.


Particular credit goes to:

* Eric Rahm added an AWSY test suite to track base content process memory
  (https://bugzil.la/1442361). Results:

   Resident unique: 
https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-central,1684862,1,4=mozilla-central,1684846,1,4=mozilla-central,1685133,1,4=mozilla-central,1685127,1,4

   Explicit allocations: 
https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-inbound,1706218,1,4=mozilla-inbound,1706220,1,4=mozilla-inbound,1706216,1,4

   JS: 
https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-central,1684866,1,4=mozilla-central,1685137,1,4=mozilla-central,1685131,1,4


* Andrew McCreight created a tool for tracking JS memory usage, and 
figuring

  out which scripts and objects are responsible for how much of it
  (https://bugzil.la/1463569).

* Andrew 

Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Randell Jesup
>I do hope that the 100 process figure scenario that was given is a worst-case 
>scenario though...

It's not.  Worst case is a LOT worse.

Shutting down threads/threadpools when not needed or off an idle timer
is a Good thing.  There may be some perf hit since it may mean starting
a thread instead of just sending a message at times; this may require
some tuning in specific cases, or leaving 1 thread or more running
anyways.

Stylo will be an interesting case here.

We may need to trade first-load time against memory use by lazy-initing
more things than now, though we did quite a bit on that already for
reducing startup time.

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to remove: the 'Memory usage of Subprocesses' table from about:performance

2018-07-12 Thread Eric Rahm
Thanks Florian. Considering it's roughly unmaintained right now, leaking,
and showing up in perf profiles, it sounds reasonable to remove the memory
section. I've filed bug 1475301 [1] to allow us to measure USS off main
thread; we can deal with adding that back in the future if it makes sense.

-e

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1475301

On Thu, Jul 12, 2018 at 12:41 AM, Florian Quèze  wrote:

> On Thu, Jul 12, 2018 at 1:18 AM, Eric Rahm  wrote:
>
> > What performance issues are you seeing? RSS and USS should be relatively
> > lightweight and the polling frequency isn't very high.
>
> It seems ResidentUniqueDistinguishedAmount does blocking system calls,
> resulting in blocking the main thread for several seconds in the worst
> case. Here's a screenshot of a profile showing it:
> https://i.imgur.com/DjRMQtY.png (unfortunately that profile is too big
> and fails to upload with the 'share' feature).
>
> There's also a memory leak in the implementation, after leaving
> about:performance open for a couple hours, there was more than 300MB
> of JS "Function" and "Call" objects (about:memory screenshot:
> https://i.imgur.com/21YNDru.png ) and the devtools' Memory tool shows
> that this is coming from the code queuing updates to that subprocess
> memory table: https://i.imgur.com/04M71hg.png
>
> Florian
>
> --
> Florian Quèze
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Andrew McCreight
On Thu, Jul 12, 2018 at 3:57 AM, Emilio Cobos Álvarez 
wrote:

> Thanks for doing this!
>
> Just curious, is there a bug on file to measure excess capacity on
> nsTArrays and hash tables?
>
> WebKit has a bunch of bugs like:
>
>   https://bugs.webkit.org/show_bug.cgi?id=186709
>
> Which seem relevant.
>

njn looked at that kind of issue at some point (he changed how arrays grow,
for instance, to reduce overhead), but it has probably been around 5 years,
so there may be room for improvement for things added in the meanwhile.
However, our focus here is really on reducing per-process memory overhead,
rather than generic memory improvements, because we've had a lot of focus
on the latter as part of MemShrink, but not the former, so there's likely
easier improvements to be had.

Andrew


>  -- Emilio
>
> On 07/10/2018 08:19 PM, Kris Maglione wrote:
>
>> Welcome to the first edition of the Fission MemShrink newsletter.[1]
>>
>> In this edition, I'll sum up what the project is, and why it matters to
>> you. In subsequent editions, I'll give updates on progress that we've made,
>> and areas that we'll need to focus on next.[2]
>>
>>
>> The Fission MemShrink project is one of the most easily overlooked
>> aspects of Project Fission (also known as Site Isolation), but is
>> absolutely critical to its success. And will require a company- and
>> community-wide effort to meet its goals.
>>
>> The problem is thus: In order for site isolation to work, we need to be
>> able to run *at least* 100 content processes in an average Firefox session.
>> Each of those processes has its own base memory overhead—memory we use just
>> for creating the process, regardless of what's running in it. In the
>> post-Fission world, that overhead needs to be less than 10MB per process in
>> order to keep the extra overhead from Fission below 1GB. Right now, on our
>> best-case platform, Windows 10, it is somewhere between 17 and 21MB. Linux and
>> OS-X hover between 25 and 35MB. In other words, between 2 and 3.5GB for an
>> ordinary session.
>>
>> That means that, in the best case, we need to reduce the memory we use in
>> content processes by *at least* 7MB. The problem, of course, is that there
>> are only so many places we can cut memory without losing functionality, and
>> even fewer places where we can make big wins. But, there are lots of places
>> we can make small and medium-sized wins.
>>
>> So, to put the task into perspective, of all of the places we can cut a
>> certain amount of overhead, here are the number of each that we need to fix
>> in order to reach 1MB:
>>
>> 250KB:   4
>> 100KB:  10
>> 75KB:   13
>> 50KB:   20
>> 20KB:   50
>> 10KB:  100
>> 5KB:   200
>>
>> Now remember: we need to do *all* of these in order to reach our goal.
>> It's not a matter of one 250KB improvement or 50 5KB improvements. It's 4
>> 250KB *and* 200 5KB improvements. There just aren't enough places we can
>> cut 250KB. If we fall short in any of those areas, Project Fission will
>> fail, and Firefox will be the only major browser without site isolation.
>>
>> But it won't fail, because all of you are awesome, and this is a totally
>> achievable goal if we all throw our effort behind it.
>>
>> Essentially what this means, though, is that if we identify an area of
>> overhead that's 50KB[3] or larger that can be eliminated, it *has* to be
>> eliminated. There just aren't that many large chunks to remove. They all
>> need to go. And if an area of code has a dozen 5KB chunks that can be
>> eliminated, maybe they don't all have to go, but at least half of them do.
>> The more the better.
>>
>>
>> To help us triage these issues, we have a tracking bug (
>> https://bugzil.la/memshrink-content), and a per-bug whiteboard tag
>> ([overhead:...]) which gives an estimate of how much per-process overhead
>> we believe fixing that bug would eliminate. Please feel free to add
>> blockers to the tracking bug if you think they're relevant, and to add or
>> update [overhead] tags if you have reasonable estimates.
>>
>>
>> With all of that said, here's a brief update of the progress we've made
>> so far:
>>
>> In the past month, unique memory per process[4] has dropped 3-4MB[5], and
>> JS memory usage in particular has dropped 1.1-1.9MB.
>>
>> Particular credit goes to:
>>
>> * Eric Rahm added an AWSY test suite to track base content process memory
>>(https://bugzil.la/1442361). Results:
>>
>> Resident unique: https://treeherder.mozilla.org
>> /perf.html#/graphs?series=mozilla-central,1684862,1,4
>> =mozilla-central,1684846,1,4=mozilla-central,
>> 1685133,1,4=mozilla-central,1685127,1,4
>> Explicit allocations: https://treeherder.mozilla.org
>> /perf.html#/graphs?series=mozilla-inbound,1706218,1,4
>> =mozilla-inbound,1706220,1,4=mozilla-inbound,1706216,1,4
>> JS: https://treeherder.mozilla.org/perf.html#/graphs?series=mozi
>> lla-central,1684866,1,4=mozilla-central,1685137,1,4&
>> series=mozilla-central,1685131,1,4
>>
>> * Andrew McCreight 

Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Tom Ritter
On Wed, Jul 11, 2018 at 6:25 PM, Karl Tomlinson  wrote:

> Is there a guideline that should be used to evaluate what can
> acceptably run in the same process for different sites?
>


This is on me to write. I have been slow at doing so mainly because there's
a lot of "What does X look like and where do its parts run" investigation I
feel I need to do to write it. (For X in at least { WebExtensions, WebRTC,
Compositing, Filters, ... })



> I assume the primary goal is to prevent one site from reading
> information that should only be available to another site?
>

Yep.



On Wed, Jul 11, 2018 at 6:56 PM, Robert O'Callahan 
wrote:

> On Thu, Jul 12, 2018 at 11:25 AM, Karl Tomlinson 
> wrote:
>
> > Would it be easier to answer the opposite question?  What should
> > not run in a shared process?  JS is a given.  Others?
> >
>
> Currently when an exploitable bug is found in content process code,
> attackers use JS to weaponize it with an arsenal of known techniques (e.g.
> heap spraying and shaping). An important question is whether, assuming a
> similar bug were found in a shared non-content process, how difficult would
> it be for content JS to apply those techniques remotely across the process
> boundary?


You're completely correct.


> That would be a pretty interesting problem for security
> researchers to work on.
>

It's always illustrative to have exploits that demonstrate this goal in the
target of interest - they may have created generic techniques that we can
address fundamentally (like with Memory Partitioning or Allocator
Hardening).  But people have been writing exploits for targets that don't
have a scripting environment for two decades or more, so all of those are
prior art for this sort of exploitation.  This isn't a reason not to pursue
this work, and it's not saying this work isn't a net security win though!

I have been pondering (and brainstorming with a few people) creating
something Google native-client-like to enforce process-like state
separation between threads in a single process. That might make it safer to
share utility processes between content processes. But it's considerably
less straightforward than I was hoping. Big open research question.


> > Use of system font, graphics, or audio servers is in a similar bucket I
> > guess.
> >
>
> Taking control of an audio server would let you listen into phone calls,
> which seems interesting.
>
> Another question is whether you can exfiltrate cross-origin data by
> performing side-channel attacks against those shared processes. You
> probably need to assume that Spectre-ish attacks will be blocked at process
> boundaries by hardware/OS mitigations, but there could be
> browser-implementation-specific timing attacks etc. E.g. do IPDL IDs
> exposed to content processes leak useful information about the activities
> of other processes? Of course there are cross-origin timing-based
> information leaks that are already known and somewhat unfixable :-(.


Yup!

-tom
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Fission MemShrink Newsletter #1: What (it is) and Why (it matters to you)

2018-07-12 Thread Emilio Cobos Álvarez

Thanks for doing this!

Just curious, is there a bug on file to measure excess capacity on 
nsTArrays and hash tables?


WebKit has a bunch of bugs like:

  https://bugs.webkit.org/show_bug.cgi?id=186709

Which seem relevant.

 -- Emilio

On 07/10/2018 08:19 PM, Kris Maglione wrote:

Welcome to the first edition of the Fission MemShrink newsletter.[1]

In this edition, I'll sum up what the project is, and why it matters to 
you. In subsequent editions, I'll give updates on progress that we've 
made, and areas that we'll need to focus on next.[2]



The Fission MemShrink project is one of the most easily overlooked 
aspects of Project Fission (also known as Site Isolation), but is 
absolutely critical to its success. And will require a company- and 
community-wide effort to meet its goals.


The problem is thus: In order for site isolation to work, we need to be 
able to run *at least* 100 content processes in an average Firefox 
session. Each of those processes has its own base memory overhead—memory 
we use just for creating the process, regardless of what's running in 
it. In the post-Fission world, that overhead needs to be less than 10MB 
per process in order to keep the extra overhead from Fission below 1GB. 
Right now, on our best-case platform, Windows 10, it is somewhere between 
17 and 21MB. Linux and OS-X hover between 25 and 35MB. In other words, 
between 2 and 3.5GB for an ordinary session.


That means that, in the best case, we need to reduce the memory we use 
in content processes by *at least* 7MB. The problem, of course, is that 
there are only so many places we can cut memory without losing 
functionality, and even fewer places where we can make big wins. But, 
there are lots of places we can make small and medium-sized wins.


So, to put the task into perspective, of all of the places we can cut a 
certain amount of overhead, here are the number of each that we need to 
fix in order to reach 1MB:


250KB:   4
100KB:  10
75KB:   13
50KB:   20
20KB:   50
10KB:  100
5KB:   200

Now remember: we need to do *all* of these in order to reach our goal. 
It's not a matter of one 250KB improvement or 50 5KB improvements. It's 
4 250KB *and* 200 5KB improvements. There just aren't enough places we 
can cut 250KB. If we fall short in any of those areas, Project Fission 
will fail, and Firefox will be the only major browser without site 
isolation.


But it won't fail, because all of you are awesome, and this is a totally 
achievable goal if we all throw our effort behind it.


Essentially what this means, though, is that if we identify an area of 
overhead that's 50KB[3] or larger that can be eliminated, it *has* to be 
eliminated. There just aren't that many large chunks to remove. They all 
need to go. And if an area of code has a dozen 5KB chunks that can be 
eliminated, maybe they don't all have to go, but at least half of them 
do. The more the better.



To help us triage these issues, we have a tracking bug 
(https://bugzil.la/memshrink-content), and a per-bug whiteboard tag 
([overhead:...]) which gives an estimate of how much per-process 
overhead we believe fixing that bug would eliminate. Please feel free to 
add blockers to the tracking bug if you think they're relevant, and to 
add or update [overhead] tags if you have reasonable estimates.



With all of that said, here's a brief update of the progress we've made 
so far:


In the past month, unique memory per process[4] has dropped 3-4MB[5], 
and JS memory usage in particular has dropped 1.1-1.9MB.


Particular credit goes to:

* Eric Rahm added an AWSY test suite to track base content process memory
   (https://bugzil.la/1442361). Results:

    Resident unique: 
https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-central,1684862,1,4=mozilla-central,1684846,1,4=mozilla-central,1685133,1,4=mozilla-central,1685127,1,4 

    Explicit allocations: 
https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-inbound,1706218,1,4=mozilla-inbound,1706220,1,4=mozilla-inbound,1706216,1,4 

    JS: 
https://treeherder.mozilla.org/perf.html#/graphs?series=mozilla-central,1684866,1,4=mozilla-central,1685137,1,4=mozilla-central,1685131,1,4 



* Andrew McCreight created a tool for tracking JS memory usage, and 
figuring

   out which scripts and objects are responsible for how much of it
   (https://bugzil.la/1463569).

* Andrew and Nika Layzell also completely rewrote the way we handle 
XPIDL type
   info so that it's statically compiled into the executable and shared 
between

   all processes (https://bugzil.la/1438688, https://bugzil.la/1444745).

* Felipe Gomes split a bunch of code out of frame scripts so that it 
could be
   lazily loaded only when needed (https://bugzil.la/1467278, ...) and 
added a
   whitelist of JSMs that are allowed to be loaded at content process 
startup

   (https://bugzil.la/1471066)

* I did a bit of this too, and also prevented us from loading some other 
JSMs
   before we need them 

Re: Intent to remove: the 'Memory usage of Subprocesses' table from about:performance

2018-07-12 Thread Florian Quèze
On Thu, Jul 12, 2018 at 1:18 AM, Eric Rahm  wrote:

> What performance issues are you seeing? RSS and USS should be relatively
> lightweight and the polling frequency isn't very high.

It seems ResidentUniqueDistinguishedAmount does blocking system calls,
resulting in blocking the main thread for several seconds in the worst
case. Here's a screenshot of a profile showing it:
https://i.imgur.com/DjRMQtY.png (unfortunately that profile is too big
and fails to upload with the 'share' feature).

There's also a memory leak in the implementation, after leaving
about:performance open for a couple hours, there was more than 300MB
of JS "Function" and "Call" objects (about:memory screenshot:
https://i.imgur.com/21YNDru.png ) and the devtools' Memory tool shows
that this is coming from the code queuing updates to that subprocess
memory table: https://i.imgur.com/04M71hg.png

Florian

-- 
Florian Quèze
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using clang-cl to ship Windows builds

2018-07-12 Thread Mike Hommey
On Wed, Jul 11, 2018 at 11:34:52PM -0700, Anthony Jones wrote:
> On Thursday, 12 July 2018 15:50:40 UTC+12, halivi...@gmail.com  wrote:
> > I hope that both Firefox and Chrome continue to keep the build and
> > tests running on MSVC. It would suck if for example we can't build
> > Firefox with MSVC.
> 
> I can't comment on Chrome.
> 
> > Will the Firefox team publish builds of Firefox from both MSVC and
> > Clang with symbols so we can profile ourselves and compare which is
> > faster for the webpages we use?
> 
> The MSVC nightly builds will likely continue until we fully commit to
> clang-cl. It is expensive to maintain MSVC workarounds and given that
> cross-language LTO[1] is compelling for Firefox, it is unlikely we'd
> return to MSVC.

Actually, I don't think we do produce MSVC nightly builds. We *do* MSVC
builds on automation, but they're not exactly nightlies. You won't find
them along other nightlies. And until bug 1474756 lands, those builds
aren't even with PGO enabled.

Mike
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Using clang-cl to ship Windows builds

2018-07-12 Thread Anthony Jones
On Thursday, 12 July 2018 15:50:40 UTC+12, halivi...@gmail.com  wrote:
> I hope that both Firefox and Chrome continue to keep the build and tests 
> running on MSVC. It would suck if for example we can't build Firefox with 
> MSVC.

I can't comment on Chrome.

> Will the Firefox team publish builds of Firefox from both MSVC and Clang with 
> symbols so we can profile ourselves and compare which is faster for the 
> webpages we use?

The MSVC nightly builds will likely continue until we fully commit to clang-cl. 
It is expensive to maintain MSVC workarounds and given that cross-language 
LTO[1] is compelling for Firefox, it is unlikely we'd return to MSVC.

Anthony

[1] https://github.com/rust-lang/rust/issues/49879
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charters: Accessibility (APA and ARIA Working Groups)

2018-07-12 Thread James Teh
I (and others in the accessibility team) think we should support these
charters. The ARIA working group is especially important in the future
evolution of web accessibility. I have some potential concerns/questions
regarding the personalisation semantics specifications from APA, but
they're more spec questions at this point and I don't think they need to be
raised with respect to the charter. Certainly, cognitive disabilities are an
area that definitely needs a great deal more attention on the web, and the
APA is seeking to address that.

Thanks.

Jamie

On Wed, Jul 11, 2018 at 3:57 PM, L. David Baron  wrote:

> The W3C is proposing revised charters for:
>
>   Accessible Platform Architectures (APA) Working Group
>   https://www.w3.org/2018/03/draft-apa-charter
>
>   Accessible Rich Internet Applications (ARIA) Working Group
>   https://www.w3.org/2018/03/draft-aria-charter
>
>   https://lists.w3.org/Archives/Public/public-new-work/2018Jun/0003.html
>
> Mozilla has the opportunity to send comments or objections through
> Friday, July 27.
>
> The changes relative to the previous charters are:
> https://services.w3.org/htmldiff?doc1=https%3A%2F%
> 2Fwww.w3.org%2F2015%2F10%2Fapa-charter=https%3A%
> 2F%2Fwww.w3.org%2F2018%2F03%2Fdraft-apa-charter
> https://services.w3.org/htmldiff?doc1=https%3A%2F%
> 2Fwww.w3.org%2F2015%2F10%2Faria-charter=https%3A%
> 2F%2Fwww.w3.org%2F2018%2F03%2Fdraft-aria-charter
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review, or if you think we should
> support or oppose it.
>
> -David
>
> --
> L. David Baron   http://dbaron.org/
> Mozilla   https://www.mozilla.org/
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform