Quantum Flow Engineering Newsletter #17

2017-07-27 Thread Ehsan Akhgari
Hi everyone,

Next week, Nightly will switch to the 57 branch, beginning the development
cycle of what will be the last train leaving the station towards Firefox
57.  Around five months ago, I started writing the first of these
newsletters, which of course was well after the Quantum Flow project got
started.  It's probably a good time for a retrospective on how far we have
come.

During this time, many small and medium-sized performance projects were
started.  Some are finished (and even shipping!) and some are still
ongoing, but the rate of progress has been quite astonishing.  I have
tried to cover as many of these as possible in some detail in the
newsletter, and these are only one part of the overall performance work
happening.  Here is a list of some of these projects with an overall
status report:

   - We had many reports of performance issues, such as during page
   loads, that were either caused or exacerbated by long-running sync IPCs
   from the content to the chrome process.  We started a focused effort to
   deal with this problem, which began by restricting the addition of new
   synchronous messages and gradually worked towards removing the existing
   ones that were slow according to telemetry data.  I've reported on the
   status of this work periodically, and it will probably be an ongoing
   effort for some more time, but at this point we are well on the path to
   solving the majority of the severe issues that affect our users by
   Firefox 57.
   - The most important performance issues are the ones that affect real
   users.  In the past we had built an infrastructure called Background
   Hang Reports for reporting backtraces of hangs that users experience
   through telemetry, so that we can diagnose and fix them, but this setup
   hadn't survived the passage of time.  We started to stand up some
   Python scripts to process the data coming through the telemetry servers
   so we could start getting actionable data, while also starting to
   create an awesome new UI for it.  Many thanks to Michael Layzell and
   Doug Thayer for their great work on this so far.  Through this data, we
   have found and fixed a number of bugs.  The rate of discovering and
   fixing bugs from this data has been slower than I would have liked.
   The reason is that we didn't have enough engineers to look through the
   data and extract actionable bugs out of it.  This process is still
   manual, slow, and time-consuming.
   - We've kept up a rigorous process of continually measuring and
   tracking the performance of the browser in various workloads, with the
   goal of identifying the most severe performance issues and eliminating
   them where possible.  In order to get help from the broad group of
   engineers and contributors, it's important to communicate which issues
   we consider the most critical, so we have created an active bug triage
   process to identify the most important bugs, and you have probably all
   heard about this by now.  :-)  This may sound like a lot of process,
   but when I look back now, after several months, at the rate of
   performance fixes that have landed as a result, I think it has been
   fairly effective.  There is also a lot about this triage process that
   we could have done better, like maintaining more consistency in
   prioritizing bugs, communicating more clearly about the criteria being
   used, etc.  But time pressure and the sheer number of bugs to deal with
   forced our hand in a lot of cases anyway.
   - A few projects grew into their own independent parallel efforts.
   For example, patterns started to emerge from some of the initial bugs
   we had filed against various UI components in the front-end code, and
   they seemed to warrant mini-projects formed around them.  One example
   was synchronous layout and style flushes triggered by various code in
   the browser front-end (and sometimes by Gecko invoked by the front-end
   code), or code such as timers going off at random times in the
   front-end.  Chasing issues like this is now part of the Photon
   Performance project and is being actively worked on, and the difference
   this is making is quite noticeable in the performance of various parts
   of the UI.  Another example is reflow performance.  We had seen
   expensive reflows in many profiles, and even though we didn't have much
   concrete information about the sources of the problems, we reached out
   to the Layout team and asked for help.  That resulted in an effort to
   improve reflow performance, which is actively continuing.  Thanks to
   both the front-end and layout teams for leading these efforts!
   - We have continued to improve 

Re: Extensions and Gecko specific APIs

2017-07-27 Thread Bill McCloskey
On Thu, Jul 27, 2017 at 7:14 PM, Nicholas Nethercote  wrote:

> FWIW, I share Steve's broad concerns here. Mozilla's track record on
> extension APIs has had many dead-ends and changes of direction. Now that
> we're wiping the slate clean, it would be good to not repeat history.
>

I'm surprised it hasn't been mentioned here, but there is a process in
place for new APIs. This is my understanding of how it works:

1. The API is prototyped as a WebExtension experiment. There's a fair
amount of documentation on how to do this [1], including API guidelines
[2]. This would probably be a good place for people to add rules of thumb,
like roc's point about free-form JSON being a bad idea.

2. File a bug for the API and mark it [design-decision-needed]. The API
will be discussed first in the bug, then in the community meeting, and
possibly brought to the WebExtensions advisory group [3] if there are any
concerns.

3. The API will be either accepted or rejected. Quite a few do get
rejected, so it's not a rubber stamp.

I think it's common for step 1 to be omitted, in which case someone from
the WebExtension team has to implement the API. But the intention is to
allow add-on developers to propose an API without having to wait for us
to implement it.
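For anyone who hasn't looked at the experiments docs [1]: an experiment
pairs a JSON schema describing the new API's surface with a script
implementing it. Below is a rough sketch of the implementation half, with
made-up names: the `myapi` namespace and `getValue` function are invented
purely for illustration, and `ExtensionAPI` and `Services` are globals
supplied by the Firefox extension framework, so this fragment only runs
inside the browser, not standalone.

```js
// api.js -- hypothetical experiment implementation.  "myapi" and
// "getValue" are invented names for this sketch; ExtensionAPI and
// Services are provided by the WebExtensions framework at load time.
this.myapi = class extends ExtensionAPI {
  getAPI(context) {
    return {
      myapi: {
        // Exposed to the extension as browser.myapi.getValue(key).
        async getValue(key) {
          // Hypothetical behavior: read a string pref, "" as default.
          return Services.prefs.getStringPref(key, "");
        },
      },
    };
  }
};
```

A matching schema.json declares `myapi.getValue` so the framework knows to
expose it; an extension would then call it as
`browser.myapi.getValue("some.pref")`.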

If people have concerns, the best thing to do is to be a part of this
process. It's an open discussion, and I think that it would benefit from
more platform people participating--especially people who have experience
with web standards. There has also been a lot of discussion of this
process itself on dev-addons, which is where most of this stuff gets
talked about.

-Bill

[1] https://webextensions-experiments.readthedocs.io/en/latest/
[2] https://webextensions-experiments.readthedocs.io/en/latest/new.html
[3] https://wiki.mozilla.org/WebExtensions/AdvisoryGroup


> Nick
>
> On Fri, Jul 28, 2017 at 3:02 AM, Steve Fink  wrote:
>
> > On 07/26/2017 10:45 PM, Andrew Swan wrote:
> >
> >>
> >> On Wed, Jul 26, 2017 at 4:27 PM, Steve Fink <sf...@mozilla.com> wrote:
> >>
> >> This thread worries me greatly. Somebody tell me we have a plan
> >> and policy around this already. Please?
> >>
> >>
> >> We might, but I'm not sure what "this" you're concerned about.  Whether
> >> API names should be prefixed?  Or whether we should expose things at all
> >> that are unique to gecko/firefox to extensions?  There are a whole
> bunch of
> >> things that get considered when new extension APIs are proposed
> including
> >> safety, maintainability, performance, and yes, cross-browser
> compatibility.
> >>
> >
> > "this" == exposing Gecko-specific functionality, or rather, what
> > Gecko-specific functionality to expose and how to expose it in general.
> > With emphasis on the how. The prefixing decision (answer: no) and
> > do-it-at-all decision (answer: yes) are part of that.
> >
> > Unfortunately, there isn't anything written that explains actual criteria
> >> in detail (its on our radar but somewhere behind a long list of
> engineering
> >> tasks on the short-term priority list).
> >>
> >
> > And I guess the parenthetical clause is what worries me. The people
> > churning through that workload should be churning through that workload,
> > and it's fine that they aren't spending time and mental space on the
> > theoretical concerns of future compatibility issues or addon developer
> > relations. But this is kind of a big deal for Mozilla strategically, so I
> > would expect someone else to be working on the strategic plan before we
> > reach the foot-shooting point.
> >
> > Hopefully, that someone would be in close contact with the engineers
> doing
> > the work, since they have the best context and familiarity with large
> parts
> > of the problem space, and hence their opinions deserve a lot of weight.
> As
> > long as the consultation doesn't get in the way of getting stuff done.
> > There's a ton of weight on you people's shoulders, and we don't want to
> add
> > more.
> >
> > One person can do both strategy and tactics (or implementation) just
> fine,
> > but it's usually not a good idea to do them at the same time. Different
> > mindset, different tradeoffs.
> >
> >
> >> My individual opinion is that something being unique to gecko or firefox
> >> should not disqualify it from being exposed to extensions.  The
> webcompat
> >> analogy doesn't really work here, the principle that the web should be
> open
> >> and interoperable demands rigor in what gets exposed to content.  But a
> >> browser extension isn't a web page, it is part of the browser itself,
> and
> >> different browsers are inherently ... different.  They have different
> >> features, different user interfaces, etc.  The fact that browser
> extensions
> >> are built with web technology and that they modify or extend the very
> thing
> >> that displays web content obscures this distinction, but it does make a
> big
> >> difference.
> >>
> >
> 

Re: Extensions and Gecko specific APIs

2017-07-27 Thread Nicholas Nethercote
FWIW, I share Steve's broad concerns here. Mozilla's track record on
extension APIs has had many dead-ends and changes of direction. Now that
we're wiping the slate clean, it would be good to not repeat history.

Nick

On Fri, Jul 28, 2017 at 3:02 AM, Steve Fink  wrote:

> On 07/26/2017 10:45 PM, Andrew Swan wrote:
>
>>
>> On Wed, Jul 26, 2017 at 4:27 PM, Steve Fink <sf...@mozilla.com> wrote:
>>
>> This thread worries me greatly. Somebody tell me we have a plan
>> and policy around this already. Please?
>>
>>
>> We might, but I'm not sure what "this" you're concerned about.  Whether
>> API names should be prefixed?  Or whether we should expose things at all
>> that are unique to gecko/firefox to extensions?  There are a whole bunch of
>> things that get considered when new extension APIs are proposed including
>> safety, maintainability, performance, and yes, cross-browser compatibility.
>>
>
> "this" == exposing Gecko-specific functionality, or rather, what
> Gecko-specific functionality to expose and how to expose it in general.
> With emphasis on the how. The prefixing decision (answer: no) and
> do-it-at-all decision (answer: yes) are part of that.
>
> Unfortunately, there isn't anything written that explains actual criteria
>> in detail (its on our radar but somewhere behind a long list of engineering
>> tasks on the short-term priority list).
>>
>
> And I guess the parenthetical clause is what worries me. The people
> churning through that workload should be churning through that workload,
> and it's fine that they aren't spending time and mental space on the
> theoretical concerns of future compatibility issues or addon developer
> relations. But this is kind of a big deal for Mozilla strategically, so I
> would expect someone else to be working on the strategic plan before we
> reach the foot-shooting point.
>
> Hopefully, that someone would be in close contact with the engineers doing
> the work, since they have the best context and familiarity with large parts
> of the problem space, and hence their opinions deserve a lot of weight. As
> long as the consultation doesn't get in the way of getting stuff done.
> There's a ton of weight on you people's shoulders, and we don't want to add
> more.
>
> One person can do both strategy and tactics (or implementation) just fine,
> but it's usually not a good idea to do them at the same time. Different
> mindset, different tradeoffs.
>
>
>> My individual opinion is that something being unique to gecko or firefox
>> should not disqualify it from being exposed to extensions.  The webcompat
>> analogy doesn't really work here, the principle that the web should be open
>> and interoperable demands rigor in what gets exposed to content.  But a
>> browser extension isn't a web page, it is part of the browser itself, and
>> different browsers are inherently ... different.  They have different
>> features, different user interfaces, etc.  The fact that browser extensions
>> are built with web technology and that they modify or extend the very thing
>> that displays web content obscures this distinction, but it does make a big
>> difference.
>>
>
> I agree. But it isn't completely different from webcompat, either. We have
> used up much of developer's tolerance for breaking changes, so we really
> want to try hard to minimize changes that are going to break addons. (And
> minimize the pain of such breakage -- if we have a mechanism allowing us to
> easily identify the addons that will be affected, and provide a warning and
> documentation of the change in advance, we can probably get away with quite
> a bit.)
>
> Anyway, containers is a good example of something that we've exposed to
>> extensions that isn't likely to be supported in other browsers any time
>> soon.  Nevertheless, we need to design APIs in a way that doesn't
>> compromise on the other areas mentioned above: maintainability, safety,
>> performance.  And, to the extent we can, we should design APIs that could
>> be adopted by other browsers if they choose to.
>>
>
> Sure, and in my mind, that's the sort of tactical decisionmaking that
> *should* be done in the context of implementation. Which is different from
> the overall strategy of prefixing / opt-in imports / signing / versioning.
>
> Given our position, it's a bold move that says we're willing to
>> take the painful hit of pissing off addon authors and users
>> because we truly believe it is necessary to produce a top-quality
>> product.
>>
>>
>> There are certainly outraged addon authors out there but for what its
>> worth, we're also already getting a good amount of positive feedback from
>> both addon authors and users.
>>
>
> Sorry, don't take my ranting to imply that I'm somehow unhappy with the
> work you people are doing. To the contrary, it all seems to be going
> stunningly well, which is much to the credit of your team.
>
>
>> That's my argument for why the default 

New: QA Test Plan Sign-off Requirement

2017-07-27 Thread Lawrence Mandel
(cross posting to a few lists for visibility)


tl;dr Starting with the Firefox 57 release, test plans will require
sign-off from Engineering, Product and QA before testing begins.

Engineering, Product, and QA each have unique insights into risks
associated with feature development. We have the best chance of success
when we have a diverse set of feedback on our plans. For QA, we need that
feedback early while there is time to adjust our test plans. The intention
is to avoid the last-minute surprises and misunderstandings about test
scope that have at times been problematic when reporting test results.

In order to ensure QA receives the feedback they need, we are adding a
sign-off requirement. What we’re really asking is that you take an active
role in the quality assurance of your work by taking time to read the test
plan and provide feedback.

Three points related to sign-offs:

1. Granting a sign-off means:

   - High level testing objectives and testing scope are understood
   - Contents of ‘Risk Assessment and Coverage’ and ‘Test Areas’ meet
   your expectations

2. Sign-off is expected 1 week after it is requested. (Should typically be
near the beginning of the Nightly cycle.)

3. Sign-offs will be captured in each QA test plan as shown in the QA Test
Plan Template.

Thanks for your commitment to the quality of Firefox.

Lawrence
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


(reftest) Renaming reftest-print to reftest-paged

2017-07-27 Thread Tobias Schneider
In preparation for landing proper support for creating print tests via
https://bugzilla.mozilla.org/show_bug.cgi?id=1299848, we renamed
reftest-print to reftest-paged. The reason for this is that reftest-print
does not actually test real printed output, but rather makes sure layout
is done correctly in paged mode. We renamed it in
https://bugzilla.mozilla.org/show_bug.cgi?id=1382327 to avoid confusion
and to reflect what these types of tests are actually doing. If
reftest-print is used in future tests, the harness will throw an
exception.
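As a concrete reminder of what changes in a test file: the annotation is,
to my understanding, a class on the test document's root element, so
migrating an existing paged-mode test should be a one-word change (the
page content below is just placeholder markup):

```html
<!-- Old annotation: the harness now throws on this. -->
<!DOCTYPE html>
<html class="reftest-print">
  <body>placeholder test content</body>
</html>

<!-- New annotation: same paged-mode layout behavior. -->
<!DOCTYPE html>
<html class="reftest-paged">
  <body>placeholder test content</body>
</html>
```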


Re: Extensions and Gecko specific APIs

2017-07-27 Thread Steve Fink

On 07/26/2017 10:45 PM, Andrew Swan wrote:


On Wed, Jul 26, 2017 at 4:27 PM, Steve Fink wrote:


This thread worries me greatly. Somebody tell me we have a plan
and policy around this already. Please?


We might, but I'm not sure what "this" you're concerned about.  
Whether API names should be prefixed?  Or whether we should expose 
things at all that are unique to gecko/firefox to extensions?  There 
are a whole bunch of things that get considered when new extension 
APIs are proposed including safety, maintainability, performance, and 
yes, cross-browser compatibility.


"this" == exposing Gecko-specific functionality, or rather, what 
Gecko-specific functionality to expose and how to expose it in general. 
With emphasis on the how. The prefixing decision (answer: no) and 
do-it-at-all decision (answer: yes) are part of that.


Unfortunately, there isn't anything written that explains actual 
criteria in detail (its on our radar but somewhere behind a long list 
of engineering tasks on the short-term priority list).


And I guess the parenthetical clause is what worries me. The people 
churning through that workload should be churning through that workload, 
and it's fine that they aren't spending time and mental space on the 
theoretical concerns of future compatibility issues or addon developer 
relations. But this is kind of a big deal for Mozilla strategically, so 
I would expect someone else to be working on the strategic plan before 
we reach the foot-shooting point.


Hopefully, that someone would be in close contact with the engineers 
doing the work, since they have the best context and familiarity with 
large parts of the problem space, and hence their opinions deserve a lot 
of weight. As long as the consultation doesn't get in the way of getting 
stuff done. There's a ton of weight on you people's shoulders, and we 
don't want to add more.


One person can do both strategy and tactics (or implementation) just 
fine, but it's usually not a good idea to do them at the same time. 
Different mindset, different tradeoffs.




My individual opinion is that something being unique to gecko or 
firefox should not disqualify it from being exposed to extensions.  
The webcompat analogy doesn't really work here, the principle that the 
web should be open and interoperable demands rigor in what gets 
exposed to content.  But a browser extension isn't a web page, it is 
part of the browser itself, and different browsers are inherently ... 
different.  They have different features, different user interfaces, 
etc.  The fact that browser extensions are built with web technology 
and that they modify or extend the very thing that displays web 
content obscures this distinction, but it does make a big difference.


I agree. But it isn't completely different from webcompat, either. We 
have used up much of developer's tolerance for breaking changes, so we 
really want to try hard to minimize changes that are going to break 
addons. (And minimize the pain of such breakage -- if we have a 
mechanism allowing us to easily identify the addons that will be 
affected, and provide a warning and documentation of the change in 
advance, we can probably get away with quite a bit.)


Anyway, containers is a good example of something that we've exposed 
to extensions that isn't likely to be supported in other browsers any 
time soon.  Nevertheless, we need to design APIs in a way that doesn't 
compromise on the other areas mentioned above: maintainability, 
safety, performance.  And, to the extent we can, we should design APIs 
that could be adopted by other browsers if they choose to.


Sure, and in my mind, that's the sort of tactical decisionmaking that 
*should* be done in the context of implementation. Which is different 
from the overall strategy of prefixing / opt-in imports / signing / 
versioning.



Given our position, it's a bold move that says we're willing to
take the painful hit of pissing off addon authors and users
because we truly believe it is necessary to produce a top-quality
product.


There are certainly outraged addon authors out there but for what its 
worth, we're also already getting a good amount of positive feedback 
from both addon authors and users.


Sorry, don't take my ranting to imply that I'm somehow unhappy with the 
work you people are doing. To the contrary, it all seems to be going 
stunningly well, which is much to the credit of your team.




That's my argument for why the default answer here should be "Heck
yeah! If we can provide something that other browsers don't, DO
IT!" I could describe it as a fairness/good faith argument
instead: we just took away a bunch of powerful tools from our
users, claiming that it was for their own long-term good, so it
behooves us to give back whatever we can in a more manageable
form, in order to provide that promised good.


I think that's 

Re: Extensions and Gecko specific APIs

2017-07-27 Thread Enrico Weigelt, metux IT consult

On 26.07.2017 23:27, Steve Fink wrote:


Doing this at a time of weak market share is... courageous[2].


Remember when the whole FOSS movement started - the market share
was about zero.


In short: better to have fewer users now with a high quality product


ACK. Leave the toys to others.


But to make the sacrifice worthwhile, that means we have to *be* a high
quality product.


ACK. And to achieve that, there's a lot to clean up.


That's my argument for why the default answer here should be "Heck yeah!
If we can provide something that other browsers don't, DO IT!" I could


For example, a small footprint and easily manageable code.

--mtx


Re: Extensions and Gecko specific APIs

2017-07-27 Thread Enrico Weigelt, metux IT consult

On 26.07.2017 22:23, Karl Dubost wrote:


> As soon as some people are willing to adopt one of the browser
> "lab-style-features" in the open, because well it solves their
> issues and plays well with the ecosystem market shares, the
> vendor prefix strategy is falling apart for everyone else.
> It makes even things worse on a long term.

Maybe it's less painful when these things have to be explicitly
enabled by the user. So, arbitrary web coders (hopefully) don't
get the idea of relying on them. And they should have a limited
lifetime.


--mtx
