On Wed, Jul 11, 2018 at 02:42:11PM +0200, David Bruant wrote:
2018-07-10 20:19 GMT+02:00 Kris Maglione <kmagli...@mozilla.com>:
The problem is thus: In order for site isolation to work, we need to be
able to run *at least* 100 content processes in an average Firefox session.
I've seen this figure of 100 content processes in a couple of places, but
I haven't been able to find the rationale for it. How was the 100 number
chosen?
So, the basic problem here is that we don't get to choose the number of
content processes we'll have. It will depend entirely on the number of
origins that we load documents from at any given time. In practice, the
biggest contributing factor to that number tends to be iframes (mostly
for things like ads and social widgets).
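The dependence on origins can be illustrated with a toy sketch. The URLs below are hypothetical, and the strict one-process-per-distinct-origin rule is a simplifying assumption, not necessarily Firefox's exact isolation granularity:

```python
from urllib.parse import urlsplit

def origin(url):
    """Return the (scheme, host, port) tuple identifying a URL's origin."""
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname, parts.port)

# Hypothetical frame URLs loaded by a single ad-heavy page.
frame_urls = [
    "https://news.example/article",
    "https://ads.adnetwork.example/frame?id=1",
    "https://ads.adnetwork.example/frame?id=2",
    "https://widgets.social.example/like-button",
    "https://cdn.video.example/player",
]

# Under strict per-origin isolation, each distinct origin needs its own
# content process, however many frames it contributes.
distinct_origins = {origin(u) for u in frame_urls}
print(len(distinct_origins))  # → 4
```

The point is that the page author, not the browser, controls this number: two ad iframes from the same network share a process, but every additional third-party origin adds one.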
The "100 processes" number was initially chosen based on experimentation
(basically, counting the number of origins loaded by typical pages on
certain popular sites) and our knowledge of typical usage patterns. It's
meant to be a conservative estimate of the number of processes typical
users are likely to hit on a regular basis, though hopefully not all at
once.
For heavy users, we expect the number to be much higher. And while those
users typically have more RAM to spare, they also tend not to be happy
when we waste it.
We also need to add to that number the Activity Stream process that
hosts things like about:newtab and about:home, the system extension
process, processes for any other extensions the user has installed
(each of which will likely need its own process, for the same reasons
each content origin does), and the pre-loaded web content process.
We've been working on improving our estimates by collecting telemetry on
the number of document groups per tab group, but we don't have enough
data to draw conclusions yet.
Would 90 prevent a release of project Fission?
This isn't really something we get to choose. The closest I can come is
something like "would an overhead of 1.1GB prevent a release of project
Fission". And, while the answer may turn out to be "no", I'd prefer not
to speculate, because that's a decision we'd wind up paying for later.
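As a back-of-envelope illustration of where a figure like 1.1GB comes from (the per-process cost and the count of extra processes below are my assumptions for illustration, picked so that ~100 processes lands near that total; they are not measured numbers):

```python
# Back-of-envelope estimate of total per-process overhead.
per_process_overhead_mb = 11   # assumed average overhead per process
content_processes = 100        # the working estimate from this thread
extra_processes = 5            # assumed: Activity Stream, system extension
                               # process, pre-loaded content process, etc.

total_mb = (content_processes + extra_processes) * per_process_overhead_mb
print(total_mb, "MB, i.e. about", round(total_mb / 1024, 2), "GB")
```

With these assumed figures the total comes out around 1.1GB, which is why even a ~1MB change in per-process overhead moves the total by over 100MB.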
There are some other hacks that we can use to decrease the overall
overhead, like aggressively unloading background tabs, and flushing
their resources. We're almost certainly going to wind up having to do
some of that regardless, but it comes at a performance cost. The more
aggressive we have to be about it, the less responsive the browser is
going to wind up being. So, again, the shorter we fall on our memory
goals, the more responsiveness we'll have to trade away.
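The background-tab-unloading idea can be sketched as a simple least-recently-used policy. This is an illustrative model of the trade-off, not Firefox's actual implementation:

```python
from collections import OrderedDict

class TabUnloader:
    """Toy LRU policy: keep at most `max_loaded` tabs fully loaded;
    unload the least recently used tab beyond that limit."""

    def __init__(self, max_loaded):
        self.max_loaded = max_loaded
        self.loaded = OrderedDict()  # tab_id -> True, ordered by recency

    def touch(self, tab_id):
        """Called when a tab is focused; returns any tabs unloaded."""
        self.loaded.pop(tab_id, None)
        self.loaded[tab_id] = True   # move to most-recent position
        unloaded = []
        while len(self.loaded) > self.max_loaded:
            # Evict the coldest tab. In a real browser this would flush
            # its decoded images, layout data, and perhaps its process,
            # at the cost of a reload delay when the user returns to it.
            cold, _ = self.loaded.popitem(last=False)
            unloaded.append(cold)
        return unloaded

u = TabUnloader(max_loaded=3)
for tab in ["a", "b", "c", "d"]:
    evicted = u.touch(tab)
print(evicted)  # → ['a']
```

The smaller `max_loaded` is, the more memory is reclaimed, but the more often a revisited tab has to be rebuilt, which is exactly the responsiveness cost described above.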
How will the rollout happen?
Will the rollout happen progressively (like 2 content processes soon, 4
soon after, 10 some time after, etc.) or does it have to be 1 (current
situation IIUC) then 100?
* Andrew McCreight created a tool for tracking JS memory usage, and figuring
out which scripts and objects are responsible for how much of it
How often is this code run? Is there a place to find the daily output of
this tool applied to a nightly build, for instance?
For the moment, it requires a patched build of Firefox, so we've been
running it locally as we try to track down and fix memory issues, and
Andrew has been periodically updating the numbers in the bug.
I believe Andrew has been working on updating the patch to a land-able
state (which is non-trivial), after which we'll hopefully be able to get
up-to-date numbers from automation.
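The kind of per-script accounting such a tool performs can be sketched like this; the log format and URLs are hypothetical stand-ins for whatever the patched build actually records:

```python
from collections import defaultdict

# Hypothetical allocation log: (script_url, bytes) pairs, standing in
# for the records a JS memory-tracking tool might emit.
allocations = [
    ("https://site.example/app.js", 4096),
    ("https://ads.example/ad.js", 65536),
    ("https://site.example/app.js", 8192),
    ("https://ads.example/ad.js", 131072),
]

# Aggregate total bytes attributed to each script.
by_script = defaultdict(int)
for script, nbytes in allocations:
    by_script[script] += nbytes

# Report scripts by descending memory use -- the kind of summary that
# shows which scripts are responsible for how much memory.
for script, total in sorted(by_script.items(), key=lambda kv: -kv[1]):
    print(script, total)
```

Grouping by script URL like this is what turns raw allocation data into an actionable "which code is costing us memory" report.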
: Particularly readers of TechCrunch, which regularly loads 30
origins on a single page.
: Essentially documents of different origin.
: Essentially sets of tabs that are tied together because they were
opened by things like window.open() calls or link clicks from other tabs.
: Which we currently have only one of, but which may need more in the
future in order to support loading several iframes in a given page
without noticeable lag or jank.
dev-platform mailing list