Re: Firefox Profiler now supports recording IPC messages

2019-10-30 Thread Gerald Squelart
Unfortunately using the pref doesn't always work.

Instead, go to the Firefox hamburger menu -> Web Developer -> Enable Profiler 
Toolbar Icon. It shows up as a small stopwatch.

As Greg said, we're still in transition, sorry for the confusion! ;-)

- Gerald

On Thursday, October 31, 2019 at 8:45:10 AM UTC+11, smaug wrote:
> FWIW, apparently the UI is in the devtools profiler UI, not in the profiler 
> addon.
> https://profiler.firefox.com/ still tells users to install the addon from 
> there.
> 
> I was told that one can get the button similar to the addon by enabling
> devtools.performance.popup.enabled boolean pref and then using 'Customize...'
> 
> Anyhow, great stuff. Seeing IPC messages in the profiles can be really handy.
> (and so far I've been positively surprised that we don't seem to send that 
> many IPC messages)
> 
> 
> -Olli
> 
> 
> 
> On 10/30/19 10:14 PM, Jim Porter wrote:
> > Recently, we landed a new feature for the Firefox Profiler: the ability to
> > record IPC messages for monitored threads. This should be useful for
> > evaluating IPC-related performance issues as we make progress on Project
> > Fission. To enable this feature, just check the "IPC Messages" feature in
> > the profiler popup and collect a profile! Then, IPC messages on all
> > monitored threads will be recorded to the profile.
> >
> > For an example of what this looks like, see this profile of a user (me)
> > opening mozilla.org in a new tab: .
> >
> > Since IPC messages are (obviously) cross-process, each IPC message is
> > actually comprised of two profiler markers: one for the sending thread and
> > one for the receiving thread. The profiler frontend then examines all the
> > collected IPC markers and correlates the sending and receiving sides. After
> > correlating each side, we can then determine the latency of the IPC
> > message: this is defined to be the time between when the message is sent
> > (i.e. when `SendMessage` or similar is called) and when it's received
> > (i.e. once the recipient thread has constructed a `Message` object).
> >
> > Sometimes, IPC messages will have an unknown duration. This means that the
> > profiler marker for the other side of the IPC call wasn't recorded (either
> > the thread wasn't profiled at all or the other side occurred outside of the
> > time range of the profile).
> >
> > As you can probably see from the example profile, the user interface is
> > fairly basic for now: each thread just has a new timeline track to display
> > its IPC messages, with outgoing messages in teal and incoming messages in
> > purple. Of course, there's lots of room for improvement here, so if you
> > have ideas for a visualization that would be useful to you, just file a bug
> > and CC me on it!
> >
> > Happy profiling!
> > - Jim
> >
> > P.S.: For those who are curious about how we correlate each side of an IPC
> > message, we compare the source and destination PIDs, the message's type,
> > and its seqno. This is enough to uniquely identify any IPC message, though
> > it does mean that reply messages are considered a separate message. If
> > people find it useful, it should be straightforward to correlate initial
> > and reply messages with each other as well.



Re: Intent to ship: Promise.allSettled

2019-10-30 Thread Jan-Ivar Bruaroey

Promise.any is an inverse of Promise.all, right? There and back again:

Promise.all(promises.map(p => p.then(r => Promise.reject(r), e => e)))
  .then(r => Promise.reject(r), e => e)

https://jsfiddle.net/jib1/f085bcyk/
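
A quick sanity check of the round trip, with the expression above wrapped in 
a helper and some made-up inputs:

const any = ps =>
  Promise.all(ps.map(p => p.then(r => Promise.reject(r), e => e)))
    .then(r => Promise.reject(r), e => e);

// The first fulfillment wins...
any([
  Promise.reject(new Error("a failed")),
  Promise.resolve("b succeeded"),
  Promise.reject(new Error("c failed")),
]).then(v => console.log(v)); // logs "b succeeded"

// ...and if everything rejects, the result rejects with the array of reasons.
any([Promise.reject("x"), Promise.reject("y")])
  .catch(errs => console.log(errs)); // logs ["x", "y"]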

That said, Promise.any has a nice ring to it.

.: Jan-Ivar :.

On 10/30/19 6:39 PM, Jason Orendorff wrote:

Rather; but there is still a (rapidly closing) window to comment on the
*next* proposal in this vein: Promise.any
<https://github.com/tc39/proposal-promise-any/#ecmascript-proposal-promiseany>.
It is a stage 3 proposal now. Our representative on the ECMAScript standard
committee is Yulia Startsev (or you can just email me).

-j





Can we change how [hidden] and [collapsed] work on XUL elements to more closely match HTML boolean attributes?

2019-10-30 Thread Brian Grinstead
In order to support moving more of our frontend to HTML, I'd like to propose 
that we change the way `hidden` and `collapsed` attributes and properties work 
on XUL elements and rewrite frontend consumers to adapt.

Currently, hidden and collapsed in XUL behave as follows:
* Only a value of true hides or collapses the element in the XUL markup. This 
is done in CSS rules 
(https://searchfox.org/mozilla-central/rev/1fe0cf575841dbf3b7e159e88ba03260cd1354c0/toolkit/content/minimal-xul.css#30-38)
 and there's also platform code that specifically looks for "true" on the 
attribute.
* Setting el.hidden = true or any truthy value will set hidden="true" in the 
markup
* Setting el.hidden = false or any falsy value will remove the hidden attribute

If we want to make it easier to port non-Custom-Element XUL elements to HTML 
without requiring a Custom Element, I think we should switch the behavior to 
more closely match HTML:
* Setting any value on the hidden/collapsed attribute will hide the element 
(we'd add CSS rules matching those in minimal-xul.css, but in xul.css, 
targeting html elements by the presence of the attribute instead of checking 
for "true").
* el.collapsed is not a thing: we'd rewrite consumers from `el.collapsed = val` 
to `el.toggleAttribute("collapsed", val)` (see the sketch after this list)
* Setting el.hidden = true or any truthy value will set hidden="" in the DOM
* Setting el.hidden = false or any falsy value will remove the hidden attribute
* The recommended way to write things in HTML markup is the bare attribute 
(hidden). hidden="" and hidden="hidden" are also acceptable, but hidden="false" 
or any other value is _not_ acceptable as per 
https://html.spec.whatwg.org/multipage/common-microsyntaxes.html#boolean-attributes
* The recommended way to write things in XHTML markup is hidden="hidden", since 
XHTML doesn't allow attribute minimization.

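Here's a rough sketch of that rewrite (the element id and flag below are made 
up, not real frontend code):

// Hypothetical element and flag, just to illustrate the proposed pattern.
const el = document.getElementById("sidebar");
const shouldCollapse = true;

// Before: el.collapsed = shouldCollapse;  (XUL-style property)
// After: toggle the boolean attribute directly, matching HTML semantics.
el.toggleAttribute("collapsed", shouldCollapse);

// `hidden` keeps working as a property; truthy values set hidden="".
el.hidden = true;   // serializes as hidden=""
el.hidden = false;  // removes the attribute
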
Things I know we need to look more closely at before making a change:
* Check consumers that use XULStore to persist these values to make sure that 
they don't explicitly store [hidden=false] as a way to indicate that it's 
visible.

Are there reasons not to make this change, or things that need further 
investigation?

Note that this came out of a suggestion from ntim in 
https://phabricator.services.mozilla.com/D51168 - there's some more background 
in that patch.

Thanks,
Brian



Re: Intent to ship: Promise.allSettled

2019-10-30 Thread Jan-Ivar Bruaroey
Yes, it would have been clever of me to predict the semantics 3 years 
ago. ;-)


But as I mention in the stackoverflow post, depending on context and the 
type(s) of values expected, errors can often be distinguished easily 
enough in practice, if not in general.


That said, I've never actually had an opportunity to use this pattern 
for anything...


.: Jan-Ivar :.

On 10/30/19 6:39 PM, Boris Zbarsky wrote:

On 10/30/19 6:19 PM, Jan-Ivar Bruaroey wrote:

This always seemed trivial to me to do with:

 Promise.all(promises.map(p => p.catch(e => e)))


This has different semantics from Promise.allSettled, right?  In 
particular, it effectively treats rejected promises as resolved with the 
rejection value, forgetting the fact that they were rejected. 
Promise.allSettled, on the other hand, preserves information about the 
state.


I think you can still polyfill it, using something like:

Promise.all(promises.map(p => p.then(
   v => ({ status: "fulfilled", value: v }),
   e => ({ status: "rejected", reason: e }))));

though I think this might create more temporary Promises than allSettled 
does...


-Boris




Re: Intent to ship: Promise.allSettled

2019-10-30 Thread Boris Zbarsky

On 10/30/19 6:19 PM, Jan-Ivar Bruaroey wrote:

This always seemed trivial to me to do with:

     Promise.all(promises.map(p => p.catch(e => e)))


This has different semantics from Promise.allSettled, right?  In 
particular, it effectively treats rejected promises as resolved with the 
rejection value, forgetting the fact that they were rejected. 
Promise.allSettled, on the other hand, preserves information about the 
state.


I think you can still polyfill it, using something like:

Promise.all(promises.map(p => p.then(
  v => ({ status: "fulfilled", value: v }),
  e => ({ status: "rejected", reason: e }))));

though I think this might create more temporary Promises than allSettled 
does...
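
For reference, with some made-up inputs, the polyfill above produces the same 
result shape as the native method:

const settle = ps => Promise.all(ps.map(p => p.then(
  v => ({ status: "fulfilled", value: v }),
  e => ({ status: "rejected", reason: e }))));

settle([Promise.resolve(1), Promise.reject(new Error("nope"))])
  .then(results => console.log(results));
// [ { status: "fulfilled", value: 1 },
//   { status: "rejected", reason: Error("nope") } ]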


-Boris


Re: Intent to ship: Promise.allSettled

2019-10-30 Thread Jan-Ivar Bruaroey

This always seemed trivial to me to do with:

Promise.all(promises.map(p => p.catch(e => e)))

https://stackoverflow.com/questions/31424561/wait-until-all-promises-complete-even-if-some-rejected/36115549#36115549

But I guess it's too late to make that point. I guess the more 
primitives the merrier!


.: Jan-Ivar :.

On 10/30/19 5:44 PM, Jason Orendorff wrote:

In Firefox 71, we'll ship Promise.allSettled, a standard way to `await`
several promises at once. André Bargull [:anba] contributed the
implementation of this feature. It's in Nightly now.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1539694
Shipped in: https://bugzilla.mozilla.org/show_bug.cgi?id=1549176

Standard: https://tc39.es/ecma262/#sec-promise.allsettled

MDN:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/allSettled

Platform coverage: All, no pref

DevTools bug: N/A. The DevTools don't currently have any custom support
for peeking at the internal state of Promise objects.

Other browsers: Shipped in Chrome 76, Safari 13.

Testing: There are test262 tests covering this feature:
https://github.com/tc39/test262/tree/master/test/built-ins/Promise/allSettled

Use cases: Promise.allSettled is useful in async code. It's used to wait
for several tasks to finish in parallel. What sets it apart from the
existing methods Promise.race and Promise.all is that it *doesn't*
short-circuit as soon as a single task succeeds/fails.

Secure contexts: This is a JS language feature and is therefore present
in all contexts.

-j





Re: Firefox Profiler now supports recording IPC messages

2019-10-30 Thread Greg Tatum
Thanks for the clarification, Olli. We're in a bit of a transition process
of moving the addon's implementation into mozilla-central with a direct
integration. I'll try to prioritize finishing up that transition, as it's a
bit confusing that new features are landing in the popup but not in the
addon right now.

On Wed, Oct 30, 2019 at 4:50 PM smaug  wrote:

> FWIW, apparently the UI is in the devtools profiler UI, not in the
> profiler addon.
> https://profiler.firefox.com/ still tells users to install the addon from
> there.
>
> I was told that one can get the button similar to the addon by enabling
> devtools.performance.popup.enabled boolean pref and then using
> 'Customize...'
>
> Anyhow, great stuff. Seeing IPC messages in the profiles can be really
> handy.
> (and so far I've been positively surprised that we don't seem to send that
> many IPC messages)
>
>
> -Olli
>
>
>
> On 10/30/19 10:14 PM, Jim Porter wrote:
> > Recently, we landed a new feature for the Firefox Profiler: the ability to
> > record IPC messages for monitored threads. This should be useful for
> > evaluating IPC-related performance issues as we make progress on Project
> > Fission. To enable this feature, just check the "IPC Messages" feature in
> > the profiler popup and collect a profile! Then, IPC messages on all
> > monitored threads will be recorded to the profile.
> >
> > For an example of what this looks like, see this profile of a user (me)
> > opening mozilla.org in a new tab: .
> >
> > Since IPC messages are (obviously) cross-process, each IPC message is
> > actually comprised of two profiler markers: one for the sending thread and
> > one for the receiving thread. The profiler frontend then examines all the
> > collected IPC markers and correlates the sending and receiving sides. After
> > correlating each side, we can then determine the latency of the IPC
> > message: this is defined to be the time between when the message is sent
> > (i.e. when `SendMessage` or similar is called) and when it's received
> > (i.e. once the recipient thread has constructed a `Message` object).
> >
> > Sometimes, IPC messages will have an unknown duration. This means that the
> > profiler marker for the other side of the IPC call wasn't recorded (either
> > the thread wasn't profiled at all or the other side occurred outside of the
> > time range of the profile).
> >
> > As you can probably see from the example profile, the user interface is
> > fairly basic for now: each thread just has a new timeline track to display
> > its IPC messages, with outgoing messages in teal and incoming messages in
> > purple. Of course, there's lots of room for improvement here, so if you
> > have ideas for a visualization that would be useful to you, just file a bug
> > and CC me on it!
> >
> > Happy profiling!
> > - Jim
> >
> > P.S.: For those who are curious about how we correlate each side of an IPC
> > message, we compare the source and destination PIDs, the message's type,
> > and its seqno. This is enough to uniquely identify any IPC message, though
> > it does mean that reply messages are considered a separate message. If
> > people find it useful, it should be straightforward to correlate initial
> > and reply messages with each other as well.
>


Re: Firefox Profiler now supports recording IPC messages

2019-10-30 Thread smaug

FWIW, apparently the UI is in the devtools profiler UI, not in the profiler 
addon.
https://profiler.firefox.com/ still tells users to install the addon from there.

I was told that one can get the button similar to the addon by enabling
devtools.performance.popup.enabled boolean pref and then using 'Customize...'

Anyhow, great stuff. Seeing IPC messages in the profiles can be really handy.
(and so far I've been positively surprised that we don't seem to send that many 
IPC messages)


-Olli



On 10/30/19 10:14 PM, Jim Porter wrote:
> Recently, we landed a new feature for the Firefox Profiler: the ability to
> record IPC messages for monitored threads. This should be useful for
> evaluating IPC-related performance issues as we make progress on Project
> Fission. To enable this feature, just check the "IPC Messages" feature in the
> profiler popup and collect a profile! Then, IPC messages on all monitored
> threads will be recorded to the profile.
>
> For an example of what this looks like, see this profile of a user (me)
> opening mozilla.org in a new tab: .
>
> Since IPC messages are (obviously) cross-process, each IPC message is
> actually comprised of two profiler markers: one for the sending thread and
> one for the receiving thread. The profiler frontend then examines all the
> collected IPC markers and correlates the sending and receiving sides. After
> correlating each side, we can then determine the latency of the IPC message:
> this is defined to be the time between when the message is sent (i.e. when
> `SendMessage` or similar is called) and when it's received (i.e. once the
> recipient thread has constructed a `Message` object).
>
> Sometimes, IPC messages will have an unknown duration. This means that the
> profiler marker for the other side of the IPC call wasn't recorded (either
> the thread wasn't profiled at all or the other side occurred outside of the
> time range of the profile).
>
> As you can probably see from the example profile, the user interface is
> fairly basic for now: each thread just has a new timeline track to display
> its IPC messages, with outgoing messages in teal and incoming messages in
> purple. Of course, there's lots of room for improvement here, so if you have
> ideas for a visualization that would be useful to you, just file a bug and
> CC me on it!
>
> Happy profiling!
> - Jim
>
> P.S.: For those who are curious about how we correlate each side of an IPC
> message, we compare the source and destination PIDs, the message's type,
> and its seqno. This is enough to uniquely identify any IPC message, though
> it does mean that reply messages are considered a separate message. If
> people find it useful, it should be straightforward to correlate initial
> and reply messages with each other as well.




Intent to ship: Promise.allSettled

2019-10-30 Thread Jason Orendorff
In Firefox 71, we'll ship Promise.allSettled, a standard way to `await`
several promises at once. André Bargull [:anba] contributed the
implementation of this feature. It's in Nightly now.

Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1539694
Shipped in: https://bugzilla.mozilla.org/show_bug.cgi?id=1549176

Standard: https://tc39.es/ecma262/#sec-promise.allsettled

MDN:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/allSettled

Platform coverage: All, no pref

DevTools bug: N/A. The DevTools don't currently have any custom support
for peeking at the internal state of Promise objects.

Other browsers: Shipped in Chrome 76, Safari 13.

Testing: There are test262 tests covering this feature:
https://github.com/tc39/test262/tree/master/test/built-ins/Promise/allSettled

Use cases: Promise.allSettled is useful in async code. It's used to wait
for several tasks to finish in parallel. What sets it apart from the
existing methods Promise.race and Promise.all is that it *doesn't*
short-circuit as soon as a single task succeeds/fails.
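
For example (with made-up endpoints), you can fan out several requests and 
inspect each outcome individually, even if some of them fail:

// Sketch with made-up endpoints: fire off all requests, then report each
// outcome instead of bailing out on the first failure.
const urls = ["/api/a", "/api/b", "/api/c"];

async function fetchAll() {
  const results = await Promise.allSettled(urls.map(u => fetch(u)));
  for (const r of results) {
    if (r.status === "fulfilled") {
      console.log("ok:", r.value.status);
    } else {
      console.log("failed:", r.reason);
    }
  }
}

fetchAll();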

Secure contexts: This is a JS language feature and is therefore present
in all contexts.

-j


Firefox Profiler now supports recording IPC messages

2019-10-30 Thread Jim Porter
Recently, we landed a new feature for the Firefox Profiler: the ability 
to record IPC messages for monitored threads. This should be useful for 
evaluating IPC-related performance issues as we make progress on Project 
Fission. To enable this feature, just check the "IPC Messages" feature 
in the profiler popup and collect a profile! Then, IPC messages on all 
monitored threads will be recorded to the profile.


For an example of what this looks like, see this profile of a user (me) 
opening mozilla.org in a new tab: .


Since IPC messages are (obviously) cross-process, each IPC message is 
actually comprised of two profiler markers: one for the sending thread 
and one for the receiving thread. The profiler frontend then examines 
all the collected IPC markers and correlates the sending and receiving 
sides. After correlating each side, we can then determine the latency of 
the IPC message: this is defined to be the time between when the message 
is sent (i.e. when `SendMessage` or similar is called) and when it's 
received (i.e. once the recipient thread has constructed a `Message` 
object).


Sometimes, IPC messages will have an unknown duration. This means that 
the profiler marker for the other side of the IPC call wasn't recorded 
(either the thread wasn't profiled at all or the other side occurred 
outside of the time range of the profile).


As you can probably see from the example profile, the user interface is 
fairly basic for now: each thread just has a new timeline track to 
display its IPC messages, with outgoing messages in teal and incoming 
messages in purple. Of course, there's lots of room for improvement 
here, so if you have ideas for a visualization that would be useful to 
you, just file a bug and CC me on it!


Happy profiling!
- Jim

P.S.: For those who are curious about how we correlate each side of an 
IPC message, we compare the source and destination PIDs, the message's 
type, and its seqno. This is enough to uniquely identify any IPC 
message, though it does mean that reply messages are considered a 
separate message. If people find it useful, it should be straightforward 
to correlate initial and reply messages with each other as well.
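
In rough pseudo-JS (the marker field names below are assumptions, not the 
actual profiler data format), the pairing works roughly like this:

// Pseudo-JS sketch; srcPid, destPid, messageType, seqno, side, and time are
// illustrative field names, not the real marker schema.
function ipcKey(marker) {
  // The same key is produced by both halves of one IPC message.
  return [marker.srcPid, marker.destPid, marker.messageType, marker.seqno].join("|");
}

function correlate(markers) {
  const pending = new Map();
  const pairs = [];
  for (const marker of markers) {
    const key = ipcKey(marker);
    const other = pending.get(key);
    if (other && other.side !== marker.side) {
      const send = marker.side === "send" ? marker : other;
      const recv = marker.side === "send" ? other : marker;
      // Latency: time from the send call to the receiver constructing the
      // Message object.
      pairs.push({ send, recv, latency: recv.time - send.time });
      pending.delete(key);
    } else {
      pending.set(key, marker);
    }
  }
  // Anything left in `pending` shows up as a message with unknown duration.
  return { pairs, unpaired: [...pending.values()] };
}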



Re: Proposal: Replace NS_ASSERTION with MOZ_ASSERT and then remove it.

2019-10-30 Thread Bobby Holley
On Wed, Oct 30, 2019 at 8:47 AM Nathan Froyd  wrote:

> On Wed, Oct 30, 2019 at 11:36 AM Tom Ritter  wrote:
> >
> > I will claim that the most common behavior of developers is to leave
> > XPCOM_DEBUG_BREAK alone and not set it to any particular value. I bet
> most
> > people haven't even heard of this or know what it does.
> >
> > With that env var unset, in Debug mode, NS_ASSERTION will print to stderr
> > and otherwise do nothing. In non-Debug mode, it will just do nothing.
> >
> > Is that the best behavior for this? Should perhaps (most of) these
> claimed
> > assertions really be MOZ_ASSERT? Hence this proposal.
>
> You may be interested in
> https://bugzilla.mozilla.org/show_bug.cgi?id=1457813#c5, the links
> therein, and the following bug comments for why we have resisted a
> wholesale transition.
>

And because the discussion there references roc's arguments but doesn't
link to them, here they are:
https://robert.ocallahan.org/2011/12/case-for-non-fatal-assertions.html


>
> -Nathan


Re: Proposal: Replace NS_ASSERTION with MOZ_ASSERT and then remove it.

2019-10-30 Thread Nathan Froyd
On Wed, Oct 30, 2019 at 11:36 AM Tom Ritter  wrote:
>
> I will claim that the most common behavior of developers is to leave
> XPCOM_DEBUG_BREAK alone and not set it to any particular value. I bet most
> people haven't even heard of this or know what it does.
>
> With that env var unset, in Debug mode, NS_ASSERTION will print to stderr
> and otherwise do nothing. In non-Debug mode, it will just do nothing.
>
> Is that the best behavior for this? Should perhaps (most of) these claimed
> assertions really be MOZ_ASSERT? Hence this proposal.

You may be interested in
https://bugzilla.mozilla.org/show_bug.cgi?id=1457813#c5, the links
therein, and the following bug comments for why we have resisted a
wholesale transition.

-Nathan


Proposal: Replace NS_ASSERTION with MOZ_ASSERT and then remove it.

2019-10-30 Thread Tom Ritter
I will claim that the most common behavior of developers is to leave
XPCOM_DEBUG_BREAK alone and not set it to any particular value. I bet most
people haven't even heard of this or know what it does.

With that env var unset, in Debug mode, NS_ASSERTION will print to stderr
and otherwise do nothing. In non-Debug mode, it will just do nothing.

Is that the best behavior for this? Should perhaps (most of) these claimed
assertions really be MOZ_ASSERT? Hence this proposal.



Now, switching these assertions does reduce the flexibility here. Presently,
by controlling XPCOM_DEBUG_BREAK, you can make NS_ASSERTION suspend the
process, print out a stack trace, abort, print the stack and then abort, or
trigger a breakpoint in a debugger.

One full test run on linux64 produced over 1500 instances of 72 unique
assertions - while some of those are expected assertions because of the
expectAssertions API, they can't all be; so I'm skeptical that the
XPCOM_DEBUG_BREAK env var feature is actually usable without triggering instances
unrelated to what you're actually working on. It's probably much easier to
drop in a NS_DebugBreak(NS_DEBUG_BREAK).

And because some tests use the expectAssertions API (about 70), we couldn't
completely remove this until that got refactored to some new mechanism -
perhaps just minimizing NS_ASSERTION into a newly renamed thing that was
used just for this purpose.

But I believe the majority of ~5600 assertions today can be moved to
MOZ_ASSERT without consequence and that this would more accurately reflect
a developer's intention when they wrote the code.  (I think. The name at
least implies it.)  And for the ones that we're triggering unintentionally,
and can't become MOZ_ASSERT today - perhaps we can file issues, get them
investigated and changed to a NS_WARN_IF or a Log or some other behavior?
For the rest, we can cut over all of the ones except the ones we've seen
triggered on Taskcluster based on some heuristic of choosing taskcluster
runs.  (Which I'd be happy to hear suggestions for if anyone has thoughts
on something more sophisticated than 'the last week of -central runs'.)

PS: This is separate from a proposal to deal with NS_WARNING_ASSERTION
which could come next...


New Memory Tooling in the Profiler

2019-10-30 Thread Greg Tatum
The Firefox Profiler has been getting some new memory tooling support to
help diagnose memory issues. In the past few weeks we have added support for
native (C++, Rust) memory allocation tracking. The new tooling works with
the existing profiler UI, like the call tree and flame graph. In addition,
the memory stacks can be manipulated with call tree transforms. We have
documentation outlining how this feature works and how to use it here:

https://profiler.firefox.com/docs/#/./memory-allocations

Here is a snippet from the docs:

> The profiler has experimental support for analyzing allocations in native
> code (C++ and Rust) via stack sampling. These features require Nightly and
> the new Profiler Toolbar Icon (directions below).
>
> The Native Allocations feature works by collecting the stack and size of
> memory allocations from native (C++ or Rust) code. It does not collect
> every allocation, but only samples a subset of them. The sampling is biased
> towards larger allocations and larger frees. Larger allocations are more
> likely to show up in the profile, and will most likely be more representative
> of the actual memory usage. Keep in mind that since these allocations are
> only sampled, not all allocations will be recorded. This means that the
> memory track (the orange graph at the top) will most likely report different
> numbers for memory usage.
>
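
To illustrate the size-biased sampling described above (this is only a toy 
sketch, not the actual Gecko sampler, and the threshold below is made up):

const SAMPLE_BYTES = 64 * 1024; // made-up sampling interval

function shouldSampleAllocation(sizeInBytes) {
  // Big allocations are always kept; small ones are kept with probability
  // proportional to their size, which biases the profile toward the
  // allocations that matter most for memory usage.
  if (sizeInBytes >= SAMPLE_BYTES) {
    return true;
  }
  return Math.random() < sizeInBytes / SAMPLE_BYTES;
}

console.log(shouldSampleAllocation(1024 * 1024)); // always true
console.log(shouldSampleAllocation(512));         // true ~0.8% of the time
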
We're planning some new features to more accurately track memory usage, as
the current implementation has some limitations. These can be tracked in
Bug 1564474. Please let us know if these features are helpful and how we
can improve them for your workflows. Also let us know if they help you
solve real memory issues.

Example flame graph of loading DevTools: https://perfht.ml/36izsBC
Example flame graph filtered to JavaScript stacks: https://perfht.ml/2BX0f8o

Finally, special thanks to Nicholas Nethercote for his reviews on the
platform changes here, and also for letting us steal a lot of his work and
ideas from DMD.

~Greg Tatum and the Profiler Team


Re: To what extent is sccache's distributed compilation usable?

2019-10-30 Thread Jeff Muizelaar
On Tue, Oct 29, 2019 at 9:45 PM Marcos Caceres  wrote:

> Thanks Steve... so it does really sound like perhaps just investing in more 
> individual computing power might be the way to go. I think that's ok... even 
> if it means, in my own case, personally paying the extra "Apple Tax" for a 
> Mac Pro once they are released (I guess I'd be looking at ~US$10,000). The 
> increased productivity might make it worth it tho.
>

sccache supports cross-compilation, so you don't actually need to
buy a fast Mac to get fast Mac builds. A fast, cheap Linux desktop will
get you much better bang for your buck.

-Jeff


Re: To what extent is sccache's distributed compilation usable?

2019-10-30 Thread Gabriele Svelto
 Hi all,
when dealing with builds on slower machines I usually run a build every
morning (4AM) so that when I wake up I've got it ready (together with a
warm cache) and following builds won't be as bad.

This can be done easily on a Mac by setting up a cronjob that will run a
build on a freshly updated mozilla-central clone (you might want to
stash your changes before updating). I then set a schedule with the
energy saver preferences to wake up my Mac from sleep a few minutes
before the cronjob is supposed to run.

This is not a solution for changes that end up affecting large parts of
the build (such as touching a common header) but usually saves me a lot
of time.

 Gabriele

On 29/10/19 18:53, Steve Fink wrote:
> On 10/28/19 9:17 PM, Marcos Caceres wrote:
>> On Tuesday, October 29, 2019 at 3:27:52 AM UTC+11, smaug wrote:
>>> Quite often one has just a laptop. Not compiling tons of Rust stuff
>>> all the time would be really nice.
>>> (I haven't figured out when stylo decides to recompile itself - it
>>> seems to be somewhat random.)
>> Probably a gross misunderstanding on my part, but the sccache project
>> page states [1]: "It is used as a compiler wrapper and avoids
>> compilation when possible, storing a cache in a remote storage using
>> the Amazon Simple Cloud Storage Service (S3) API, the Google Cloud
>> Storage (GCS) API, or Redis."
>>
>> I'm still (possibly naively) imagining that we will leverage the "the
>> cloud"™️ to speed up compiles? Or am I totally misreading what the
>> above is saying?
>>
>> [1] https://github.com/mozilla/sccache#sccache---shared-compilation-cache
> 
> My experience with other distributed compilation tools (distcc, icecc)
> indicates that cloud resources are going to be of very limited use here.
> Compiles are just way too sensitive to network bandwidth and latency,
> especially when compiling with debuginfo which tends to be extremely
> large. Even if the network transfer takes way less time than the
> compile, the sending/receiving scheduling never seems to work out very
> well and things collapse down to a trickle.
> 
> Also, I've had very limited luck with using slow local machines. A CPU
> is not a CPU  -- even on a local gigabit network, farming off compiles
> to slow machines is more likely to slow things down than speed them up.
> Despite the fancy graphical tools, I was never completely satisfied with
> my understanding of exactly why that is. It could be that a lack of
> parallelism meant that everything ended up repeatedly waiting on the
> slow machine to finish the last file in a directory (or whatever your
> boundary of parallelism is). Or it could be network contention,
> especially when your object files have massive debuginfo portions. (I
> always wanted to have a way to generate split debuginfo, and not block
> on the debuginfo transfers.) The tools tended to show things working
> great for a while, and then slowing down to a snail's pace.
> 
> I've long thought [1] that predictive prefetching would be cool: when
> you do something (eg pull from mozilla-central), a background task
> starts prefetching cached build results that were generated remotely.
> Your local compile would use them if they were available, or generate
> them locally if not. That would at least do no harm (if you don't count
> network bandwidth).
> 
> sccache's usage of S3 makes sense when running from within AWS. I'm
> skeptical of its utility when running remotely. But I haven't tried
> setting up sccache on my local network, and my internet connectivity
> isn't great anyway.
> 
> I really ought to put my decade-old desktop into action again. My last
> attempt was with icecc, and though it worked pretty well when it worked,
> the pain in keeping it alive wasn't worth the benefit.
> 
> 
> [1] Ancient history -
> https://wiki.mozilla.org/Sfink/Thought_Experiment_-_One_Minute_Builds
> 
> 






Re: To what extent is sccache's distributed compilation usable?

2019-10-30 Thread Reuben Morais
> On 29. Oct 2019, at 18:53, Steve Fink  wrote:
> 
> On 10/28/19 9:17 PM, Marcos Caceres wrote:
>> On Tuesday, October 29, 2019 at 3:27:52 AM UTC+11, smaug wrote:
>>> Quite often one has just a laptop. Not compiling tons of Rust stuff all the 
>>> time would be really nice.
>>> (I haven't figured out when stylo decides to recompile itself - it seems to 
>>> be somewhat random.)
>> Probably a gross misunderstanding on my part, but the sccache project page 
>> states [1]: "It is used as a compiler wrapper and avoids compilation when 
>> possible, storing a cache in a remote storage using the Amazon Simple Cloud 
>> Storage Service (S3) API, the Google Cloud Storage (GCS) API, or Redis."
>> 
>> I'm still (possibly naively) imagining that we will leverage the "the 
>> cloud"™️ to speed up compiles? Or am I totally misreading what the above is 
>> saying?
>> 
>> [1] https://github.com/mozilla/sccache#sccache---shared-compilation-cache
> 
> My experience with other distributed compilation tools (distcc, icecc) 
> indicates that cloud resources are going to be of very limited use here. 
> Compiles are just way too sensitive to network bandwidth and latency, 
> especially when compiling with debuginfo which tends to be extremely large. 
> Even if the network transfer takes way less time than the compile, the 
> sending/receiving scheduling never seems to work out very well and things 
> collapse down to a trickle.
> 
> Also, I've had very limited luck with using slow local machines. A CPU is not 
> a CPU  -- even on a local gigabit network, farming off compiles to slow 
> machines is more likely to slow things down than speed them up. Despite the 
> fancy graphical tools, I was never completely satisfied with my understanding 
> of exactly why that is. It could be that a lack of parallelism meant that 
> everything ended up repeatedly waiting on the slow machine to finish the last 
> file in a directory (or whatever your boundary of parallelism is). Or it 
> could be network contention, especially when your object files have massive 
> debuginfo portions. (I always wanted to have a way to generate split 
> debuginfo, and not block on the debuginfo transfers.) The tools tended to 
> show things working great for a while, and then slowing down to a snail's 
> pace.

This research offers some insight: 
https://www.usenix.org/system/files/atc19-fouladi.pdf / 
https://github.com/StanfordSNR/gg

In particular:

> Existing remote compilation systems, including distcc and icecc, send data 
> between a master node and the workers frequently during the build. These 
> systems perform best on a local network, and add substantial latency when 
> building on more remote servers in the cloud. In contrast, gg uploads all the 
> build input once and executes and exchanges data purely within the cloud, 
> reducing the effects of network latency.

Their system using 8000 AWS Lambdas is 2x faster than an icecc cluster when 
building Chromium, although that is still a *remote* icecc cluster. Maybe the 
speedup is less significant with a local cluster like we're running in our 
offices.

-- reuben