Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-02 Thread Gregory Szorc
On Thu, Nov 2, 2017 at 3:43 PM, Nico Grunbaum  wrote:

> For rr I have an i7 desktop with a base clock of 4.0 GHz, and for building
> I use icecc to distribute the load (or rather I will be again when bug
> 1412240[0] is closed).  The i9 series has lower base clocks (2.8 GHz, and
> 2.6 GHz for the top SKUs)[1], but high boost clocks of 4.2 GHz.  If I were
> to switch over to an i9 for everything, would I see a notable difference in
> performance in rr?
>

Which i7? You should get better CPU efficiency with newer
microarchitectures. The i9's we're talking about are based on Skylake-X,
which is derived from Skylake (the i7-6XXX models in the consumer
line). It isn't enough to compare MHz: you also need to consider
microarchitecture, memory, and workload.

https://arstechnica.com/gadgets/2017/09/intel-core-i9-7960x-review/2/ has
some single-threaded benchmarks. The i7-7700K (Kaby Lake) seems to "win"
for single-threaded performance. But the i9's aren't far behind. Not far
enough behind to cancel out the benefits of the extra cores IMO.

This is because the i9's are pretty aggressive about using turbo. More
aggressive than the Xeons. As long as cooling can keep up, the top-end GHz
is great and you aren't sacrificing that much perf to have more cores on
die. You can counter by arguing that the consumer-grade i7's can yield more
speedups via overclocking. But for enterprise uses, having this all built
into the chip so it "just works" without voiding warranty is a nice trait :)

FWIW, the choice to go with Xeons always bothered me because we had to make
an explicit clock vs core trade-off. Building Firefox requires both many
cores for compiling and fast cores for linking. Since the i9's turbo so
well, we get the best of both worlds. And at a much lower price. Aside from
the loss of ECC, it is a pretty easy decision to switch.
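
To make the clock-vs-core trade-off concrete, here is a crude first-order
model using the numbers from this thread (the i9 core count is an assumption
for a top-end part, and this ignores memory, IPC, and turbo scaling):

    // Rough model only: parallel compile throughput ~ cores * clock,
    // serial link time ~ single-core clock.
    const i7 = { cores: 4, baseGHz: 4.2 };                  // i7-7700K-class part
    const i9 = { cores: 16, baseGHz: 2.8, boostGHz: 4.2 };  // assumed core count

    const parallelThroughput = cpu => cpu.cores * cpu.baseGHz;
    console.log(parallelThroughput(i7)); // 16.8 "core-GHz" for compiling
    console.log(parallelThroughput(i9)); // 44.8 "core-GHz" for compiling
    // For the mostly-serial link step, the i9's 4.2 GHz turbo keeps it
    // competitive with the consumer i7 on single-threaded clock.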


> -Nico
>
> [0] https://bugzilla.mozilla.org/show_bug.cgi?id=1412240 Build failure in
> libavutil (missing atomic definitions), when building with clang and icecc
>
> [1] https://ark.intel.com/products/series/123588/Intel-Core-X-
> series-Processors
>
> On 10/27/17 7:50 PM, Robert O'Callahan wrote:
>
>> BTW can someone forward this entire thread to their friends at AMD so AMD
>> will fix their CPUs to run rr? They're tantalizingly close :-/.
>>
>> Rob
>>
>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-02 Thread Nico Grunbaum
For rr I have an i7 desktop with a base clock of 4.0 GHz, and for 
building I use icecc to distribute the load (or rather I will be again 
when bug 1412240[0] is closed).  The i9 series has lower base clocks 
(2.8 GHz, and 2.6 GHz for the top SKUs)[1], but high boost clocks of 4.2 
GHz.  If I were to switch over to an i9 for everything, would I see a 
notable difference in performance in rr?


-Nico

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1412240 Build failure 
in libavutil (missing atomic definitions), when building with clang and 
icecc


[1] 
https://ark.intel.com/products/series/123588/Intel-Core-X-series-Processors


On 10/27/17 7:50 PM, Robert O'Callahan wrote:

BTW can someone forward this entire thread to their friends at AMD so AMD
will fix their CPUs to run rr? They're tantalizingly close :-/.

Rob


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Website memory leaks

2017-11-02 Thread Nick Fitzgerald
We have https://bugzilla.mozilla.org/show_bug.cgi?id=1243091 on file for
automatic leak detection in the DevTools' memory panel.

I'd have liked to try implementing
http://www.cs.utexas.edu/ftp/techreports/tr06-07.pdf because it can see
through frameworks/libraries (or claims to in a convincing way, at least)
and point to the "user" code that is transitively responsible. On today's
web, which is full of every kind of framework and library, that seems
like a very useful property to have.

As far as seeing the CC's heap, there is
https://bugzilla.mozilla.org/show_bug.cgi?id=1057057 on file for that. It
shouldn't be hard for someone who understands the CC and all of its macros.
You basically just make a specialization of JS::ubi::Concrete that can
enumerate outgoing edges and report shallow size.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Website memory leaks

2017-11-02 Thread Robert O'Callahan
Now that I'm writing a Web app for real, I realize just how easy it is to
accidentally leak :-(.
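
As a generic illustration (not taken from any particular site), a timer or
listener that closes over a detached DOM node is enough to keep that node,
and everything it references, alive indefinitely:

    // Hypothetical page script; the element id is made up.
    let bigPanel = document.getElementById("panel");

    // The interval callback closes over bigPanel...
    setInterval(() => {
      console.log("panel height was", bigPanel.clientHeight);
    }, 10000);

    // ...so even after the panel is removed from the document, the detached
    // subtree (and anything it references) can never be collected.
    bigPanel.remove();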

It would be useful, or at least cool, to be able to show users and
developers a graph of memory usage over time, one line per tab. You could
limit this to just show the top N memory-hungry tabs.

A UI intervention like the slow-script notification seems plausible. You
can probably come up with pretty good heuristics for identifying leaking
tabs. One signal would be steady memory growth during times without user
interaction. Then you can show the memory growth graph and offer to reload
the tab.
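
A rough sketch of that heuristic, assuming the browser can sample per-tab
memory periodically (the samples array and lastInteraction timestamp below
are hypothetical stand-ins for whatever instrumentation actually exists):

    // samples: array of { time, bytes } for one tab, oldest first.
    const WINDOW_MS = 10 * 60 * 1000;     // look at the last 10 minutes
    const MIN_GROWTH = 50 * 1024 * 1024;  // flag >50MB of steady growth

    function looksLikeLeak(samples, lastInteraction) {
      const recent = samples.filter(s => s.time > Date.now() - WINDOW_MS);
      if (recent.length < 2) return false;
      // Only consider growth while the user wasn't interacting with the tab.
      if (lastInteraction > recent[0].time) return false;
      const monotonic = recent.every((s, i) => i === 0 || s.bytes >= recent[i - 1].bytes);
      const growth = recent[recent.length - 1].bytes - recent[0].bytes;
      return monotonic && growth > MIN_GROWTH;
    }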

When devtools are open you could be a lot more aggressive about warning
developers if you think their app might be leaking. Actually, as an aside,
there's a more general devtools request here: when I'm using my own app,
even when devtools are not open, I'd like to be notified of JS errors and
other serious issues. One way to do this: if I ever open
devtools on a site, set a "warn on errors" flag for that site, and
ensure the warnings offer a "stop warning me" button.
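
Pages can already approximate the "notify me of errors even without devtools"
part from script; a minimal sketch (the /errors endpoint is made up):

    // Surface uncaught exceptions and unhandled promise rejections somewhere
    // the developer will actually see them.
    window.addEventListener("error", e => {
      navigator.sendBeacon("/errors", JSON.stringify({   // hypothetical endpoint
        message: e.message, source: e.filename, line: e.lineno,
      }));
    });
    window.addEventListener("unhandledrejection", e => {
      navigator.sendBeacon("/errors", JSON.stringify({ reason: String(e.reason) }));
    });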

There's a lot of research literature on tools for tracking down leaks using
periodic heap snapshots and other techniques. Most of it's probably junk
but it's worth consulting. Nick Mitchell and Gary Sevitsky did some good
work.

IIRC you already have a "this is probably an ad" URL blacklist, so bounding
the memory usage of those IFRAMEs and freezing them when they reach the limit
sounds good. You shouldn't need process isolation for IFRAMEs for this.

Rob
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Website memory leaks

2017-11-02 Thread smaug

On 11/02/2017 09:58 PM, Kris Maglione wrote:

Related: I've been thinking for a long time that we need better tools for 
tracking what sites/usage patterns are responsible for the time we spend in
CC (and possibly GC, but I think that tends to be less of a problem).

I've noticed, in particular, that having multiple Amazon tabs open, and keeping 
them open for a long time, tends to lead to many, long CC pauses. But
I have no idea why, or how to even begin profiling it. And the only reason I 
even know/suspect that Amazon is the problem is that I've noticed a
strong pattern.


Have you filed a bug? Web sites do often leak, and by leak I mean keeping alive tons 
of objects that they don't need anymore. If CC times go up
when a web site is leaking, it probably means a missing "black-bit-propagation" 
optimization.
Google Reader was a canonical example of a web page which leaked basically 
everything, and we did optimize things out of the CC graph so that
at least CC times stayed low enough, even though memory usage went up.



Leaks certainly make the problem worse, but I'm not sure that they're all of the 
problem. Either way, I think being able to say "this compartment used n
milliseconds of CC time in the last minute" would be a pretty good way for us 
to pick out memory problems in particular web pages.



Re: Website memory leaks

2017-11-02 Thread Randell Jesup
>On Thu, Nov 02, 2017 at 05:37:30PM +0200, smaug wrote:
>>This has been an issue forever, and there aren't really good tools on any 
>>browser, as far as
>>I know, for web devs to debug their leaks.
>>Internally we do have useful data (CC and GC graphs and such), but we would need 
>>quite some UX skill to design a good
>>UI to deal with leaks. Also, the data to deal with is often massive, so the 
>>tool should be implemented with that in mind.
>
>We do have memory tools in devtools now, and the dominator tree in
>particular can provide some useful data for tracking down leaks. But those
>tools aren't especially well-maintained at the moment, and I honestly think
>they're too flaky at this point to recommend to webdevs for general use :(
>
>It would be nice if we could prioritize them again.

Also, tools for developing a page are one thing, but usage in the field
is another.  The ability to know, in a loaded page, about memory use (and to
know about usage of an embedded iframe) would give web devs information
and a way to apply their own pressure (and maybe limits).

Plus, most devs won't go looking unless they're dealing with a report
about a problem.  Something that proactively pokes them is far more
likely to get action.

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Website memory leaks

2017-11-02 Thread smaug

On 11/02/2017 10:01 PM, Kris Maglione wrote:

On Thu, Nov 02, 2017 at 05:37:30PM +0200, smaug wrote:

This has been an issue forever, and there aren't really good tools on any 
browser, as far as
I know, for web devs to debug their leaks.
Internally we do have useful data (CC and GC graphs and such), but we would need 
quite some UX skill to design a good
UI to deal with leaks. Also, the data to deal with is often massive, so the 
tool should be implemented with that in mind.


We do have memory tools in devtools now, and the dominator tree

We've had that for quite some time, but that is missing the native side of the 
object graph, which is often rather crucial.
And I think it is hard to query various things from the tool.



in particular can provide some useful data for tracking down leaks. But those 
tools
aren't especially well-maintained at the moment, and I honestly think they're 
too flaky at this point to recommend to webdevs for general use :(

It would be nice if we could prioritize them again.



Re: Website memory leaks

2017-11-02 Thread Kris Maglione

On Thu, Nov 02, 2017 at 05:37:30PM +0200, smaug wrote:

This has been an issue forever, and there aren't really good tools on any 
browser, as far as
I know, for web devs to debug their leaks.
Internally we do have useful data (CC and GC graphs and such), but we would need 
quite some UX skill to design a good
UI to deal with leaks. Also, the data to deal with is often massive, so the 
tool should be implemented with that in mind.


We do have memory tools in devtools now, and the dominator tree in 
particular can provide some useful data for tracking down leaks. But 
those tools aren't especially well-maintained at the moment, and I 
honestly think they're too flaky at this point to recommend to webdevs 
for general use :(


It would be nice if we could prioritize them again.



Re: Website memory leaks

2017-11-02 Thread Kris Maglione
Related: I've been thinking for a long time that we need better tools 
for tracking what sites/usage patterns are responsible for the time we 
spend in CC (and possibly GC, but I think that tends to be less of a 
problem).


I've noticed, in particular, that having multiple Amazon tabs open, and 
keeping them open for a long time, tends to lead to many, long CC 
pauses. But I have no idea why, or how to even begin profiling it. And 
the only reason I even know/suspect that Amazon is the problem is that 
I've noticed a strong pattern.


Leaks certainly make the problem worse, but I'm not sure that they're 
all of the problem. Either way, I think being able to say "this 
compartment used n milliseconds of CC time in the last minute" would be a 
pretty good way for us to pick out memory problems in particular web 
pages.





Re: Website memory leaks

2017-11-02 Thread Randell Jesup
>Many of the pages causing these leaks are major sites, like nytimes.com,
>washington post, cnn, arstechnica, Atlantic, New Yorker, etc.
...
>Perhaps we can also push to limit memory use (CPU use??) in embedded
>ads/restricted-iframes/etc, so sites can stop ads from destroying the
>website performance for users over time.  I've often seen ads taking
>300MB-2GB.

So in support of this general concept of limiting ad memory use:

https://www.scientificamerican.com/article/should-iconic-lake-powell-be-drained/
doesn't leak if you load it -- until you scroll down, and the ads load.
Then it leaks forever... to the tune of 1GB leaked in 5-10 minutes.
Differential about:memory reports show that what's primarily leaking are
ads, in particular at this moment:

400.25 MB (43.45%) -- detached
  398.81 MB (43.30%) -- 
window(https://tpc.googlesyndication.com/safeframe/1-0-13/html/container.html)
seems like the worst culprit, plus
406.69 MB (44.15%) --  
top(https://www.scientificamerican.com/article/should-iconic-lake-powell-be-drained/,id=NNN)
  327.79 MB (35.59%) -- js-zone(0xNNN)

Other ads have leaked a few MB to 75MB each.

Also, as soon as I scrolled down CPU use went from ~0% for the process
to ~20% (on a 4-thread/core AMD CPU).

Worse yet (perhaps bug 1410381??), when I hit reload, CPU use dropped (not
to 0).  10 sec later, memory use climbed from 3.7GB to 4.5GB, then dropped to
3.7GB and climbed back to 4.5GB.  Then it dropped to 2.7GB - barely above
where it was before I scrolled down - and stayed stable.  (Note: I hit
reload with the current position near the bottom, with 1 ad visible (no
video).)

Using an additional GB+ of memory in order to free 1GB of memory
seems... excessive.

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Any intent to implement the W3C generic sensor API in Firefox?

2017-11-02 Thread Boris Zbarsky

On 11/2/17 12:54 PM, hoehn6...@gmail.com wrote:

note: Chrome 63 does support it in it early version already


I should note that this is slightly misleading.  According to 
https://groups.google.com/a/chromium.org/forum/?utm_medium=email_source=footer#!msg/blink-dev/2zPZt3watBk/M4qcI8wlBwAJ 
Chrome is doing an experiment, in which they will support this API in 
versions 63-65 only, as an origin trial, after which the experiment will 
end.



If someone would like to see a code snippet showing how it can be used with Google's Cardboard to 
detect the magnetic "button", I provided something at 
https://stackoverflow.com/questions/40270350/detect-google-cardboard-magnetic-button-click-in-javascript


Right, as this post notes, it's disabled by default unless you 
force-enable it or participate in the origin trial.  I'm not sure why 
your mail was worded to make it sound like Chrome 63 shipped actual 
support.


-Boris

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Any intent to implement the W3C generic sensor API in Firefox?

2017-11-02 Thread Boris Zbarsky

On 11/2/17 12:54 PM, hoehn6...@gmail.com wrote:

The W3C defined a generic sensor API for browsers which also allows support for 
sensors like the Magnetometer. The spec can be found here: 
https://www.w3.org/TR/generic-sensor/


This isn't a spec, nor did the W3C "define" it.  This is a working draft 
(a spec proposal), that may or may not end up turning into a spec.



Does Mozilla intend to support that API for Firefox?


Does this API avoid the problems described in 
https://groups.google.com/forum/#!topic/mozilla.dev.platform/45XApRxACaM ?


-Boris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Any intent to implement the W3C generic sensor API in Firefox?

2017-11-02 Thread hoehn6691
The W3C defined a generic sensor API for browsers which also allows support for 
sensors like the Magnetometer. The spec can be found here: 
https://www.w3.org/TR/generic-sensor/

Does Mozilla intend to support that API for Firefox? Is it maybe even already 
under development?

note: Chrome 63 does support it in its early version already and it works 
nicely. As a second note, since I saw some discussion around here about security 
and sensors: it does require the code to be delivered via HTTPS - otherwise you 
get security exceptions.


cheers
Stefan

If someone would like to see a code snippet showing how it can be used with Google's 
Cardboard to detect the magnetic "button", I provided something at 
https://stackoverflow.com/questions/40270350/detect-google-cardboard-magnetic-button-click-in-javascript
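
For readers who just want the shape of the API, a minimal sketch based on the
working draft (it only works where the API is enabled, in a secure context):

    // Draft Generic Sensor / Magnetometer API; availability varies by browser.
    const sensor = new Magnetometer({ frequency: 10 });  // readings per second
    sensor.addEventListener("reading", () => {
      console.log(`field: ${sensor.x}, ${sensor.y}, ${sensor.z} uT`);
    });
    sensor.addEventListener("error", e => console.error(e.error.name));
    sensor.start();
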
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Website memory leaks

2017-11-02 Thread David Durst
What you describe having seen is, I think, exactly what I've been
trying to reproduce in a now-blocking-57 bug (1398652).

By your description, the only thing that makes sense to me -- to account
for unknown/unknowable changes on the site -- is to track potential runaway
growth of the content process. I realize how stupid this sounds, but what I
observed was that when the content process was in trouble, it continued to
grow. And the effect/impact builds: first the tab, then the process, then
the browser, then the OS. So if there's a way to determine that "this
process has only grown in the past X amount of time" or past a certain
threshold, that's the best indicator I've come up with.
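
A sketch of that "has only grown for the past X amount of time" check, over
hypothetical periodic samples of a content process's memory:

    // samples: array of { time, bytes } for one content process, oldest first.
    function grownContinuouslyFor(samples, durationMs, now = Date.now()) {
      const cutoff = now - durationMs;
      let prev = null;
      for (const s of samples) {
        if (s.time < cutoff) continue;                      // recent window only
        if (prev !== null && s.bytes < prev) return false;  // it shrank at some point
        prev = s.bytes;
      }
      return prev !== null;  // true if it never shrank during the window
    }

    // e.g. flag a process that has done nothing but grow for 30 minutes:
    // if (grownContinuouslyFor(samples, 30 * 60 * 1000)) warnOrKill();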

I don't know what we'd do at that point -- force-killing the content
process sounds severe (though possibly correct) -- or maybe show some alert
similar to the dreaded slow-script one.


--
David Durst [:ddurst]

On Thu, Nov 2, 2017 at 11:31 AM, Randell Jesup 
wrote:

> >about:performance can provide the tab/pid mapping, with some performance
> >indexes.
> >This might help solve your issue listed in the side note.
>
> mconley told me in IRC that today's nightly has brought back the PID in
> the tooltip (in Nightly only); it was accidentally removed.
>
> about:performance can be useful, but 3 tabs vs all tabs is too coarse
> for me, and things like "site leaked memory and is slowing CC" I presume
> doesn't show up in the 'heat' for the tab there.
>
> --
> Randell Jesup, Mozilla Corp
> remove "news" for personal email
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Website memory leaks

2017-11-02 Thread smaug

This has been an issue forever, and there aren't really good tools on any 
browser, as far as
I know, for web devs to debug their leaks.
Internally we do have useful data (CC and GC graphs and such), but we would need 
quite some UX skill to design a good
UI to deal with leaks. Also, the data to deal with is often massive, so the 
tool should be implemented with that in mind.

Ads not destroying performance is a bit of a different issue in general, and is
somewhat part of the Quantum DOM and iframe throttling work. We don't have all of 
that enabled yet.
But sure, ads using too much memory is something to think about some more.

We do have pids in e10s-Nightly tab tooltips. There was just a regression for a 
couple of days.


Re: Website memory leaks

2017-11-02 Thread Randell Jesup
>about:performance can provide the tab/pid mapping, with some performance
>indexes.
>This might help solve your issue listed in the side note.

mconley told me in IRC that today's nightly has brought back the PID in
the tooltip (in Nightly only); it was accidentally removed.

about:performance can be useful, but 3 tabs vs all tabs is too coarse
for me, and things like "site leaked memory and is slowing CC" I presume
doesn't show up in the 'heat' for the tab there.

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Website memory leaks

2017-11-02 Thread Shih-Chiang Chien
about:performance can provide the tab/pid mapping, with some performance
indexes.
This might help solve your issue listed in the side note.

Best Regards,
Shih-Chiang Chien
Mozilla Taiwan


[Firefox Desktop] Issues found: October 23rd to October 27th

2017-11-02 Thread Cornel Ionce

Hi everyone,

Here's the list of new issues found and filed by the Desktop Release QA 
Team last week, October 23 - October 27 (week 43).


Additional details on the team's priorities last week, as well as the 
plans for the current week are available at:


   https://public.etherpad-mozilla.org/p/DesktopManualQAWeeklyStatus



*RELEASE CHANNEL*

ID | Summary | Product :: Component | Regression | Assigned to
1411533 | The https://web.skype.com/ text box displays only one space if the spacebar is used multiple times | Core :: DOM | NO | NOBODY


*BETA CHANNEL*

ID | Summary | Product :: Component | Regression | Assigned to
1410830 | Doorhangers are displayed on top even if origin window is out of focus | Toolkit :: Notifications and Alerts | YES | NOBODY
1410855 | Window buttons (minimize, maximize, close) are overlapping the text from the Menu Bar | Firefox :: Toolbars and Customization | YES | NOBODY
1411188 | The Accessibility instantiator does not display the installation folder of JAWS or NVDA | Core :: Disability Access APIs | NO | NOBODY
1411224 | [Ubuntu][Search bar] "Change Search Settings" is partially displayed if the window is on half of the screen | Firefox :: Search | YES | NOBODY
1411254 | The New Tab Preferences close button has a "Done" label on hover | Firefox :: Activity Streams: Newtab | TBD | NOBODY
1411569 | The snippets cover the New Tab Preferences Done button | Firefox :: Activity Streams: Newtab | TBD | NOBODY
1411919 | Transition from the library menu to the available sub menus is glitching | Firefox :: Menus | YES | NOBODY
1411948 | Onboarding Performance is not translated on local Romanian build | Core :: Localization | NO | NOBODY
1411964 | Overflow panel looks "duplicated" after unpinning an item from the panel | Firefox :: Toolbars and Customization | TBD | NOBODY
1412265 | Font issues in the messages displayed for empty Highlights and Top Stories sections | Firefox :: Activity Streams: Newtab | TBD | NOBODY
1412276 | Hover effect from side identity panel not applied when cursor focuses password key icon | Firefox :: Address Bar | NO | NOBODY
1411970 | [Intermittent] Top Stories bookmarked status cannot be removed | Firefox :: Activity Streams: Newtab | NO | Andrei Oprea


*NIGHTLY CHANNEL*

ID | Summary | Product :: Component | Regression | Assigned to
1411990 | [Form Autofill] Expiration month field is populated with both month and year even though the field supports two characters | Toolkit :: Form Manager | YES | NOBODY
1411190 | [Form Autofill] Credit card is not saved for a few top shopping sites | Toolkit :: Form Manager | YES | Ray Lin
1412232 | [Form Autofill] Credit Cards synced between stations are not working properly | Toolkit :: Form Manager | NO | NOBODY


*ESR CHANNEL*
none


Regards,
Cornel (:cornel_ionce)
Desktop Release QA
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Intent to remove: <children> elements matching hack

2017-11-02 Thread Emilio Cobos Álvarez
Hi,

In bug 1374247, I intend to remove the XBL compatibility hack introduced
in bug 653881 [1] for which <children> elements may be "transparent"
in selector-matching.

That means that a selector like .foo > .bar would match a tree like:

  <div class="foo">
    <children>
      <div class="bar">

The motivation is the following: This was not implemented in stylo, but
the Firefox UI depends on it, so we need to do something if we want
stylo on chrome documents.

I fear that it is the kind of thing that, if we ever implement it, will
stay there forever. Also, one-offs like these tend to be buggy and
overlooked when implementing optimizations.

Finally, this hack doesn't apply to Shadow DOM, so removing it would
likely help with the efforts of transitioning away from XBL.

Since just removing it could silently break parts of the
Firefox UI, I'm landing a diagnostic assertion to verify that we don't
use it. I've been browsing for a couple of days now with it without issues,
but if you get a crash pointing you to that bug, please file one and I'll
take care of it :)

Thanks!

 -- Emilio

[1]:
https://hg.mozilla.org/mozilla-central/rev/1b8207252e9ca1194eccd3adc14b961785fb4e8e
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Website memory leaks

2017-11-02 Thread Randell Jesup
[Note: I'm a tab-hoarder - but that doesn't really cause this problem]

tl;dr: we should look at something (roughly) like the existing "page is
making your browser slow" dialog for website leaks.


Over the last several months (and maybe the last year), I've noticed an
increasing problem in websites: runaway leaks, leading to
a poorly-performing browser or OOM crashes.  Yes, these have *always*
occurred (people write bad code, shock!), but the incidence seems to
have gotten much worse 'recently' (anecdotally).

This may be the result of ads/ad networks (in some cases it provably
is); in other cases it's the sites themselves (like the 4M copies of
":DIV" that facebook was creating over a day if you left a message
visible), plus GB of related leaks (80K copies of many strings/objects).
Many of the pages causing these leaks are major sites, like nytimes.com,
washington post, cnn, arstechnica, Atlantic, New Yorker, etc.

However, regardless of who's making the error or the details, I've been
noticing that I'm having to "hunt for the bad tab" more and more
frequently (usually via about:memory, which users wouldn't
do/understand).  Multi-e10s helps a bit and some tabs won't degrade, but
it also lets my system get into physical-memory pressure faster.  (The
machine I notice this on is Win10, 12GB RAM, quad-core AMD 8xxx (older,
but 3.5GHz).  I'm running Nightly (32-bit) with one profile (for
facebook), and beta (64-bit) with another profile.)

While physical-memory pressure causes problems (including for the system
as a whole), the leaking can make Content processes unresponsive, to the
point where about:memory doesn't get data for them (or it's incomplete).
This makes it hard-to-impossible to fix the problem; I kill Content
processes by hand in that case - regular users would just force-close the
browser (and perhaps start Chrome instead of restarting.)  We see an
insanely high number of OOMs in the field; likely this is a major
contributing factor.  Chrome will *tend* to have fewer tabs impacted by
one leaking, though the leaking tab will still be ugly.

Sometimes the leak manifests simply as a leak (like the washington post
tab I just killed, getting back 3.5 of the 5GB in use by a content
process).  Others (depending, I assume, on what is leaked and how it's kept
alive) cause a core (each) to be chewed doing continual GC/CC (or
processing events in app JS touching some insanely-long internal
structure), slowing the process (and system) to a crawl.


*Generally* these are caused by leaving a page up for a while - navigate
and leave, and you never notice it (part of why website developers don't
fix these, or perhaps don't care (much)).  But even walking away from a
machine with one or a couple of tabs, and coming back the next day can
present you with an unusable tab/browser, or a "this tab has crashed"
screen.


Hoping site owners (or worse, ad producers) will fix their leaks is a
losing game, though we should encourage it and offer tools where
possible.  However, we need a broader solution (or at least tools) for
dealing with this for users.


I don't have a magic-bullet solution, and I'm sure there isn't one given
JS and GC as a core of browsers.  I do think we need to talk about it
(and perhaps not just Mozilla), and find some way to help users here.


One half-baked idea is to provide a UI intervention similar to what we
do on JS that runs too-long, and ask the user if they want to freeze the
tab (or better yet in this case, reload it).  If we trigger before the
side-effects get out of hand, freezing may be ok.  We should also
identify the tab (in the message/popup, and visually in the tab bar - we
don't do either today).  This solution has issues (though freezing tabs
for the "slow down your browser" case does too), but it's still useful
overall.

We can also look at tools to characterize site leaks or identify
patterns that point to a leak before it gets out of hand.  Also tools
that help website builders identify if their pages are leaking in the
field (memory-use triggers? within-tab about:memory dumps available to
the page?)  
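
(There is no in-page equivalent of about:memory today; the closest existing
hook is Chrome's non-standard performance.memory, which a page can poll as a
crude field signal. A hedged sketch, with made-up thresholds and endpoint:)

    // Non-standard: performance.memory exists in Chrome only; treat this as a
    // rough field signal, not an about:memory replacement.
    setInterval(() => {
      if (!performance.memory) return;
      const usedMB = performance.memory.usedJSHeapSize / (1024 * 1024);
      if (usedMB > 1000) {                     // arbitrary trigger threshold
        navigator.sendBeacon("/memory-alert",  // hypothetical reporting endpoint
                             JSON.stringify({ usedMB, url: location.href }));
      }
    }, 60 * 1000);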

Perhaps we can also push to limit memory use (CPU use??) in embedded
ads/restricted-iframes/etc, so sites can stop ads from destroying the
website performance for users over time.  I've often seen ads taking
300MB-2GB.

I'm sure there are better/more ideas - suggestions?  What thought has
gone on in the past? (this *can't* be a new observation or idea)



Also, I suspect that website behaviors/leaks like this are part of why
many users have developed a habit of browsing and closing, opening few
tabs at any point in a session (since leaving them open *sometimes*
hurts you a lot).

Part of why I care about this is that I'm looking at how far we can go
with multiple content processes, site isolation, and moving (more)
security-sensitive and/or crashy services into separate processes -- all
of which come into play here.

Side-note: removing the process-id from tabs in multi-e10s has 

Re: Proposal to remove some preferences override support

2017-11-02 Thread Mike Kaply
When I first read this, I thought you meant the defaults/preferences
directory in Firefox.

Considering this only worked for true legacy extensions (not bootstrapped
extensions), this shouldn't be a problem for enterprise.

We should double-check to be sure that no system add-ons need this.

Mike

On Wed, Nov 1, 2017 at 6:41 PM, Nicholas Nethercote 
wrote:

> Greetings,
>
> In https://bugzilla.mozilla.org/show_bug.cgi?id=1413413 I am planning to
> remove a couple of things relating to preferences.
>
> 1) Remove the defaults/preferences directory support for extensions. This
> is a feature that was used by legacy extensions but is not used by
> WebExtensions.
>
> 2) Remove the "preferences" override directory in the user profile.
> This removes
> support for profile preferences override files other than user.js.
>
> The bug has a patch with r+. The specific things it removes include:
> - The "load-extension-default" notification.
> - The NS_EXT_PREFS_DEFAULTS_DIR_LIST/"ExtPrefDL" directory list, including
> the entry from the toolkit directory service.
>
> Does anybody foresee any problems with this change?
>
> Thanks.
>
> Nick
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
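
For context, the extension defaults being removed were plain prefs files; a
minimal sketch of what one looked like in an extension's defaults/preferences
directory (pref names invented for illustration):

    // defaults/preferences/prefs.js inside a legacy extension:
    pref("extensions.myaddon.enabled", true);
    pref("extensions.myaddon.pollInterval", 3600);
    pref("extensions.myaddon.serverURL", "https://example.com/api");
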
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Firefox Security Team Newsletter Q3 17

2017-11-02 Thread Paul Theriault
For anyone who clicked the link and was confused, NOW the wiki has the
latest newsletter. Apologies for that.

https://wiki.mozilla.org/SecurityEngineering/Newsletter

Firefox Security Team Newsletter Q3 17

2017-11-02 Thread ptheriault
[ See formatted version here: 
https://wiki.mozilla.org/SecurityEngineering/Newsletter ]

= Firefox Security Team Newsletter Q3 17 =

Firefox Quantum is almost here, and contains several important security 
improvements. Improved sandboxing, web platform hardening, crypto performance 
improvements and much more. Read on to find out all the security goodness 
coming through the Firefox pipeline.

- Sandbox work is seeing great progress. As of 57, Windows, Mac OS X, and Linux 
all have file system access restricted by the sandbox, which is a major 
milestone. Further restrictions are enabled for Windows in Firefox 58.

- Firefox 57 now treats data URLs as unique origins, reducing the risk of 
Cross-Site Scripting (XSS).

- The Firefox Multi-Account Containers Add-on shipped, allowing users to juggle 
multiple identities in a single browsing session.

- Increased AES-GCM performance in Firefox 56, and support for Curve25519 in 
Firefox 57 (the first formally verified cryptographic algorithm in a web 
browser)

- Experimental support for anti-phishing FIDO U2F “Security Key” USB devices 
landed behind a preference in Firefox 57. This feature is a forerunner to W3C 
Web Authentication, which will bring this anti-phishing technology to a wider 
market.

- The privacy WebExtension API can now be used to control the 
privacy.resistFingerprinting preference and first-party isolation (a short 
example follows below).
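
A minimal sketch of what that looks like from a WebExtension (an illustration
added here, not part of the newsletter), assuming the privacy.websites settings
exposed to extensions around Firefox 57/58 and the "privacy" permission in
manifest.json:

  // Sketch only; requires the "privacy" permission in manifest.json.
  browser.privacy.websites.resistFingerprinting.set({ value: true })
    .then(() => browser.privacy.websites.firstPartyIsolate.set({ value: true }))
    .then(() => console.log("fingerprinting resistance and FPI enabled"))
    .catch(err => console.error("could not apply privacy settings:", err));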



= Team Highlights =


= Security Engineering =
== Crypto Engineering ==
- AES-GCM performance is increased across the board, making large transfers 
more efficient in Firefox 56.
- Our implementation of Curve25519 in Firefox 57 is the first formally verified 
cryptographic algorithm in a web browser.
- Experimental support for anti-phishing FIDO U2F “Security Key” USB devices 
landed behind a preference in Firefox 57. This feature is a forerunner to W3C 
Web Authentication, which will bring this anti-phishing technology to a wider 
market.


== Privacy and Content Security ==
- The privacy WebExtension API can now be used to control the 
privacy.resistFingerprinting preference and first party isolation
- Containers launched as an extension available from AMO
- Containers have had a few improvements for web extensions: containers are 
now enabled when installing a contextual identity extension, there are events 
to monitor container changes, icon URLs and hex colour codes can be retrieved 
for containers, and the APIs are cleaner.
- Lightbeam was remade as a web extension.
- Firefox 57 treats data URLs as unique origins, which mitigates the risk of 
XSS and makes Firefox standards-compliant and consistent with the behavior of 
other browsers (see the illustrative snippet after this list).
- Shipped version 4 of the Safe Browsing protocol.
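
As a concrete illustration of the data: URL change (a contrived example added
here, not from the newsletter): a frame loaded from a data: URL is now
cross-origin to the page that embeds it, so reaching back into the embedder
fails.

  // Contrived example: with data: URLs as unique origins (Firefox 57+),
  // the inner script can no longer touch its embedder's document.
  const frame = document.createElement("iframe");
  frame.src = "data:text/html,<script>" +
              "try { parent.document.title; }" +
              " catch (e) { console.log('blocked: ' + e.name); }" +
              "<\/script>";
  document.body.appendChild(frame);
  // Previously the data: frame inherited the embedder's origin and the read
  // succeeded; now the access throws a SecurityError instead.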

== Firefox and Tor Integration ==
- Continued the Tor patch uplift work, focusing on browser fingerprinting 
resistance
- Landed 12 more anti-fingerprinting patches in 57
- The MinGW build has landed in mozilla-central and is available in treeherder

== Content Isolation ==
- Various Windows content process security features enabled over the quarter 
including disabling of legacy extension points (56), image load policy 
improvements (57), increased restrictions on job objects (58), and finally 
we've enabled the alternate desktop feature in Nightly after battling various 
problems with anti-virus software interfering with child process startup.
- The new 'default deny' read access policy for the Linux file access broker is 
now enabled by default for content processes and is rolling out in Firefox 57. 
The broker forwards content process file access requests to the parent process 
for approval, severely restricting what a compromised content process could do 
within the local file system.
- Numerous access rules associated with file system, operating system services, 
and device access have been removed from the OSX content process sandbox. In 
terms of file system access, we've reached parity with Chrome's renderer. 
Remaining print server access will be removed in Q4, removal of graphics and 
audio access is currently in planning.
- We continue to invest in cleaning up various areas of the code that have 
accumulated technical debt.
- We’ve completed our research on the scope of enabling the Win32k System Call 
Disable Policy feature. This feature will isolate content processes from a 
large class of Win32k kernel APIs commonly used to gain sandbox escape and 
privilege escalation. Planning for this long term project is currently underway 
with work expected to commence in Q4.
- As a result of the stability and process startup problems encountered due to 
3rd party code injection, a new internal initiative has formed to better 
address problems associated with unstable software injected into Firefox. This 
cross-team group will explore and improve policy revolving around outreach and 
blocking, data collection and research, and improved injection mitigation 
techniques within Firefox.


= Operations Security =
- addons.mozilla.org and Firefox Screenshots went 

Re: Proposal to remove some preferences override support

2017-11-02 Thread Axel Hecht
Looping in mkaply explicitly, in case this has an impact on organizational 
deployments.


Axel

On 02.11.17 at 00:41, Nicholas Nethercote wrote:

Greetings,

In https://bugzilla.mozilla.org/show_bug.cgi?id=1413413 I am planning to
remove a couple of things relating to preferences.

1) Remove the defaults/preferences directory support for extensions. This
is a feature that was used by legacy extensions but is not used by
WebExtensions.

2) Remove the "preferences" override directory in the user profile.
This removes
support for profile preferences override files other than user.js.

The bug has a patch with r+. The specific things it removes include:
- The "load-extension-default" notification.
- The NS_EXT_PREFS_DEFAULTS_DIR_LIST/"ExtPrefDL" directory list, including
the entry from the toolkit directory service.

Does anybody foresee any problems with this change?

Thanks.

Nick



___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to remove some preferences override support

2017-11-02 Thread o . e . ekker
On Thursday, November 2, 2017 at 1:08:21 AM UTC+1, Kris Maglione wrote:
> On Thu, Nov 02, 2017 at 01:06:09AM +0100, Jörg Knobloch wrote:
> >   On 02/11/2017 00:41, Nicholas Nethercote wrote:
> >
> > 1) Remove the defaults/preferences directory support for extensions. This
> > is a feature that was used by legacy extensions but is not used by
> > WebExtensions.
> >
> >   Is that the facility by which legacy extensions can have
> >   defaults/preferences/defaults.js to declare their preferences?
> >
> >   In Thunderbird we're trying to keep legacy extensions working, currently
> >   by having |ac_add_options "MOZ_ALLOW_LEGACY_EXTENSIONS=1"| in the
> >   mozconfigs.
> >
> >   So are the components that legacy extensions rely on being slowly
> >   dismantled? So some will keep working and some will stop working?
> 
> Yes. The components that non-bootstrapped add-ons rely on, 
> anyway.
> 
> >   If so, what is the replacement? Setting up the preference in JS on the fly
> >   in the add-on?
> 
> Yes.

So what's the alternative for localized extension names and/or descriptions? At 
the moment I have these two lines in my defaults/preferences/defaults.js:

pref("extensions.{CC3C233D-6668-41bc-AAEB-F3A1D1D594F5}.name", 
"chrome://mailredirect/locale/mailredirect.properties");
pref("extensions.{CC3C233D-6668-41bc-AAEB-F3A1D1D594F5}.description", 
"chrome://mailredirect/locale/mailredirect.properties");

And chrome/locale/<locale>/mailredirect.properties contains the localized 
name/description:

extensions.{CC3C233D-6668-41bc-AAEB-F3A1D1D594F5}.name=Mail Redirect
extensions.{CC3C233D-6668-41bc-AAEB-F3A1D1D594F5}.description=Allow to redirect 
(a.k.a. "remail") mail messages to other recipients.

Maybe this can temporarily be changed to a localized section in the extension's 
install manifest, but I suppose that functionality will go away too?
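
For what it's worth, here is a sketch (my illustration only, not an officially
documented replacement) of the "set the preference in JS on the fly" approach
mentioned earlier in the thread, run from a bootstrapped add-on's bootstrap.js.
Whether the add-ons manager still resolves the localized name/description from
prefs registered this way at runtime would need verifying:

  // Sketch only: register the same defaults from startup() instead of
  // relying on defaults/preferences/defaults.js being loaded for us.
  Components.utils.import("resource://gre/modules/Services.jsm");

  function startup(data, reason) {
    const defaults = Services.prefs.getDefaultBranch("");
    // Same values as before: the prefs point at the chrome .properties
    // file that holds the localized name and description.
    defaults.setCharPref(
      "extensions.{CC3C233D-6668-41bc-AAEB-F3A1D1D594F5}.name",
      "chrome://mailredirect/locale/mailredirect.properties");
    defaults.setCharPref(
      "extensions.{CC3C233D-6668-41bc-AAEB-F3A1D1D594F5}.description",
      "chrome://mailredirect/locale/mailredirect.properties");
  }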

Onno
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform