Re: An analysis of content process memory overhead

2016-04-14 Thread Nicholas Nethercote
On Mon, Mar 21, 2016 at 3:50 PM, Nicholas Nethercote wrote:
> - Heap overhead is significant. Reducing the page-cache size could save a couple of MiBs. Improvements beyond that are hard. Turning on jemalloc4 *might* help a bit, but I wouldn't bank on it, and…

Re: An analysis of content process memory overhead

2016-03-20 Thread Nicholas Nethercote
On Tue, Mar 15, 2016 at 2:34 PM, Nicholas Nethercote wrote:
> ----------
> Conclusion
> ----------
>
> The overhead per content process…

Re: An analysis of content process memory overhead

2016-03-20 Thread Till Schneidereit
I filed bug 876173[1] about this a long time ago. Recently, I talked to Gabor, who's started looking into enabling multiple content processes. One other thing we should be able to do is sharing the self-hosting compartment as we do between runtimes within a process. It's not that big, but it's

Re: An analysis of content process memory overhead

2016-03-19 Thread Nicolas B. Pierron
On 03/17/2016 08:05 AM, Thinker Li wrote:
> On Wednesday, March 16, 2016 at 10:22:40 PM UTC+8, Nicholas Nethercote wrote:
>> Even if we can fix that, it's just a lot of JS code.

We can lazily import JSMs; I wonder if we are failing to do that as much as we could, i.e. are all these modules…
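
The lazy-import idea mentioned above can be illustrated with a plain lazy getter. This is a minimal sketch of the general pattern, not the actual `XPCOMUtils.defineLazyModuleGetter` API; the `loadModule` loader below is a hypothetical stand-in for whatever really loads a JSM.

```javascript
// Minimal sketch of lazy module loading: the module is only loaded the
// first time the property is read, so startup work and memory are deferred
// for modules that a content process may never touch.
function defineLazyGetter(obj, name, loadModule) {
  Object.defineProperty(obj, name, {
    configurable: true,
    get() {
      const value = loadModule(); // load on first access only
      // Replace the getter with a plain data property so later reads are cheap.
      Object.defineProperty(obj, name, { value, writable: false });
      return value;
    },
  });
}

// Example: nothing is loaded until `scope.Heavy` is first touched.
let loads = 0;
const scope = {};
defineLazyGetter(scope, "Heavy", () => { loads++; return { ready: true }; });
console.log(loads);             // 0 -- not loaded yet
console.log(scope.Heavy.ready); // true -- loaded now
console.log(loads);             // 1 -- and only once
```

The point of replacing the getter after the first read is that subsequent accesses pay no indirection cost, which matches how lazy module getters are typically implemented.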

Re: An analysis of content process memory overhead

2016-03-19 Thread Gabriele Svelto
On 15/03/2016 04:34, Nicholas Nethercote wrote:
> - "heap-overhead" is 4 MiB per process. I've looked at this closely. The numbers tend to be noisy.
>
> - "page-cache" is pages that jemalloc holds onto for fast recycling. It is capped at 4 MiB per process and we can reduce that with a…

Re: An analysis of content process memory overhead

2016-03-19 Thread David Rajchenbach-Teller
I seem to remember that our ChromeWorkers (SessionWorker, PageThumbsWorker, OS.File Worker) were pretty memory-hungry, but I don't see any workers there. Does this mean that they have negligible overhead or that they are only in the parent process?

Cheers,
David

On 15/03/16 04:34, Nicholas…

Re: An analysis of content process memory overhead

2016-03-19 Thread Ben Kelly
On Thu, Mar 17, 2016 at 9:50 AM, Nicolas B. Pierron <nicolas.b.pier...@mozilla.com> wrote:
> Source compression should already be enabled. I think we do not do it for small sources, and for huge sources, as the compression would either be useless, or it would take a noticeable amount of…
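
The size-gated policy described above (skip compression for sources that are too small or too large) can be sketched as a simple predicate. The byte thresholds below are made-up illustrative values, not SpiderMonkey's real cutoffs.

```javascript
// Sketch of a size-gated source-compression policy: tiny sources are not
// worth compressing (negligible savings) and huge sources are skipped
// because compressing them takes a noticeable amount of time.
// NOTE: these thresholds are hypothetical, chosen only for illustration.
const MIN_COMPRESSIBLE = 256;             // below this, savings are negligible
const MAX_COMPRESSIBLE = 5 * 1024 * 1024; // above this, latency is too high

function shouldCompressSource(byteLength) {
  return byteLength >= MIN_COMPRESSIBLE && byteLength <= MAX_COMPRESSIBLE;
}

console.log(shouldCompressSource(100));              // false: too small
console.log(shouldCompressSource(64 * 1024));        // true
console.log(shouldCompressSource(50 * 1024 * 1024)); // false: too big
```

The trade-off encoded here is the one the message describes: compression only pays off when the memory saved outweighs both the fixed per-source cost and the time to (de)compress.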

Re: An analysis of content process memory overhead

2016-03-19 Thread Nicholas Nethercote
On Fri, Mar 18, 2016 at 2:29 AM, David Rajchenbach-Teller <dtel...@mozilla.com> wrote:
> I seem to remember that our ChromeWorkers (SessionWorker, PageThumbsWorker, OS.File Worker) were pretty memory-hungry, but I don't see any workers there. Does this mean that they have negligible…

Re: An analysis of content process memory overhead

2016-03-19 Thread Boris Zbarsky
On 3/17/16 9:50 AM, Nicolas B. Pierron wrote:
> Note, this worked on B2G, but this would not work for Gecko. For example all tabs addons have to use toSource to patch the JS functions.

Note that we do have the capability to lazily load the source from disk when someone does this, and we do use…

An analysis of content process memory overhead

2016-03-18 Thread Nicholas Nethercote
Greetings,

erahm recently wrote a nice blog post with measurements showing the overhead of enabling multiple content processes:

http://www.erahm.org/2016/02/11/memory-usage-of-firefox-with-e10s-enabled/

The overhead is high -- 8 content processes *doubles* our physical memory usage -- which…