W3C and WebExtensions

2017-10-13 Thread Andrew McKay
We've been working on getting WebExtensions ready for Firefox 57, and
wanted to follow up with our status regarding the W3C Community Group
[1].

At this time Firefox implements a large part of the specification.
Deviations from the spec are being tracked in bug 1392434 [2].
Following the specification allows developers to write extensions and
know that the core of the extension will work across browsers, and
also understand where browsers may diverge from the specification.

We believe that extensions can do more than they do today, and will
continue to expand the API set Firefox supports as we move forward. As
we extend the APIs available to developers, we'll be marking the API
appropriately on MDN [3] so developers know what to expect. It is our
hope that other browsers will do the same, and that we'll collectively
grow the extension API.

This is documented on the wiki [4]; for more information on this or on
particular APIs, please join dev-addons [5] or #webextensions on IRC.

[1] https://www.w3.org/community/browserext/
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1392434
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1408494
[4] https://wiki.mozilla.org/WebExtensions/Spec
[5] https://mail.mozilla.org/listinfo/dev-addons


Re: How to get pretty stack trace on Linux?

2017-10-13 Thread Masayuki Nakano

On 10/14/2017 12:29 AM, Masayuki Nakano wrote:
Ted, I'm really sorry for the delay in saying "thank you"; my life has 
been too busy.


I tried this in my environment (Ubuntu), and I succeeded in getting a 
pretty stack trace even from a tryserver build.


1. Put |#include "nsTraceRefcnt.h"| and 
|nsTraceRefcnt::WalkTheStack(stderr);| where I want to get the stack trace.
2. Push it to the tryserver.
3. Get the Linux build ("target.tar.bz2") from the "B" job in Treeherder.
4. Get the build symbols ("target.crashreporter-symbols.zip") from the "B" 
job in Treeherder.
5. Extract all of them.
6. Run the tryserver build from a terminal.
7. Save the stack trace to a text file.
8. Run |/tools/rb/fix_linux_stack.py < stack.txt|


Step 8 is run in the directory where target.crashreporter-symbols.zip was 
unzipped.
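
For step 1, here is a minimal sketch of what the instrumented code can look
like. The wrapper function DumpStackHere() and its placement are hypothetical
and only for illustration; the two quoted lines from step 1 are the only parts
taken from the steps above, and nsTraceRefcnt.h is a Gecko-internal header, so
this builds only inside mozilla-central:

  #include <stdio.h>            // for stderr
  #include "nsTraceRefcnt.h"    // Gecko-internal; declares nsTraceRefcnt::WalkTheStack()

  // Hypothetical helper: call it from the code path under investigation.
  // It writes a raw (unsymbolicated) stack to stderr; that output, saved to
  // a text file in step 7, is then symbolicated offline by
  // fix_linux_stack.py in step 8, run next to the extracted symbols.
  static void DumpStackHere()
  {
    nsTraceRefcnt::WalkTheStack(stderr);
  }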

--
Masayuki Nakano 
Software Engineer, Mozilla


Re: Experimenting with a shared review queue for Core::Build Config

2017-10-13 Thread Nicholas Alexander
On Fri, Oct 13, 2017 at 7:47 AM, Andreas Tolfsen  wrote:

> Also sprach smaug:
>
>> How did the setup actually work?
>
> I think farre gave a satisfactory summary of the review tool above.
>
>> I've asked this from farre too before, and IIRC, the reply was
>> that it wasn't working that well. Certain devs still ended up
>> doing the majority of the reviews.  But perhaps I misremember, or
>> certainly I don't know the details.
>
> Shared review queues aren’t a silver bullet for balancing reviews
> between peers, for a couple of different reasons, but they arguably
> make it easier for peers to collaborate on (and be informed of)
> reviews.
>
> For certain parts of the codebase, it is a sad fact that the bus
> factor is low, and we shouldn’t be fooled into thinking that a system
> which practically allows any one of one’s peers to pick up the review
> will somehow magically improve the turnaround time for patches in
> those areas. You will still have patches that can practically only be
> reviewed by the single person who is the domain expert.
>
> On the flip side, when you have a patch for a piece of code multiple
> peers know well and feel comfortable accepting, turnaround time
> could be improved compared to the status quo, where the single
> r? could be travelling, on PTO, or otherwise preoccupied.


This last point is what I think matters.  As a build peer (for a subset of
the build system), I know exactly who has the expertise to review my build
system patches -- and I also know which of my patches don't require deep
expertise and can be reviewed by any build peer.  I manually balance my
requests to keep this chaff out of the busy build peers' queues.  I'm
hoping the shared queue can let the set of build peers opt in to this
balancing, keeping the simple patches out of the busy build peers' queues.

Nick


Re: How to get pretty stack trace on Linux?

2017-10-13 Thread Masayuki Nakano
Ted, I'm really sorry for the delay in saying "thank you"; my life has 
been too busy.


I tried this in my environment (Ubuntu), and I succeeded in getting a 
pretty stack trace even from a tryserver build.


1. Put |#include "nsTraceRefcnt.h"| and 
|nsTraceRefcnt::WalkTheStack(stderr);| where I want to get the stack trace.
2. Push it to the tryserver.
3. Get the Linux build ("target.tar.bz2") from the "B" job in Treeherder.
4. Get the build symbols ("target.crashreporter-symbols.zip") from the "B" 
job in Treeherder.
5. Extract all of them.
6. Run the tryserver build from a terminal.
7. Save the stack trace to a text file.
8. Run |/tools/rb/fix_linux_stack.py < stack.txt|

Thank you very much!

On 9/22/2017 9:55 AM, Ted Mielczarek wrote:

On Thu, Sep 21, 2017, at 08:51 PM, Masayuki Nakano wrote:

I'd like to get a pretty stack trace which shows method names rather than
only addresses with a tryserver build on Linux. However,
nsTraceRefcnt::WalkTheStack() cannot get method names on Linux, as you
know.

The reason why I need this is that I have a bug report which depends on
the environment, and I cannot reproduce it in any of my environments.
Therefore, I'd like the reporter to log the stack trace with MOZ_LOG when
the bug occurs.

My questions are: is it possible to get a pretty stack trace on Linux
with MOZ_LOG, and if so, how? And/or do you have a better idea for getting
similar information to check which path causes the bug?

If it's impossible, I'll create a tryserver build in which each ancestor
caller logs the path, though.



Hi Masayuki,

Our test harnesses accomplish this by piping the output of Firefox
through one of the stack fixing scripts in tools/rb[1].
fix_linux_stack.py uses addr2line, which should at least give you
function symbols on Nightly. You could use my GDB symbol server
script[2] to fetch the actual debug symbols from the symbol server if
you want full source line information.

Regards,
-Ted


1. https://dxr.mozilla.org/mozilla-central/source/tools/rb
2. https://gist.github.com/luser/193572147c401c8a965c




--
Masayuki Nakano 
Software Engineer, Mozilla


Re: Experimenting with a shared review queue for Core::Build Config

2017-10-13 Thread Andreas Tolfsen

Also sprach smaug:


How did the setup actually work?


I think farre gave a satisfactory summary of the review tool above.


I've asked this from farre too before, and IIRC, the reply was
that it wasn't working that well. Certain devs still ended up
doing the majority of the reviews.  But perhaps I misremember, or
certainly I don't know the details.


Shared review queues aren’t a silver bullet for balancing reviews
between peers, for a couple of different reasons, but they arguably
make it easier for peers to collaborate on (and be informed of)
reviews.

For certain parts of the codebase, it is a sad fact that the bus
factor is low, and we shouldn’t be fooled into thinking that a system
which practically allows any one of one’s peers to pick up the review
will somehow magically improve the turnaround time for patches in
those areas. You will still have patches that can practically only be
reviewed by the single person who is the domain expert.

On the flip side, when you have a patch for a piece of code multiple
peers know well and feel comfortable accepting, turnaround time
could be improved compared to the status quo, where the single
r? could be travelling, on PTO, or otherwise preoccupied.


Re: W3C Proposed Recommendation: Cooperative Scheduling of Background Tasks (requestIdleCallback)

2017-10-13 Thread Andreas Farre
I see no reason for us to not support this.

On Tue, Oct 10, 2017 at 5:37 PM, L. David Baron  wrote:
> A W3C Proposed Recommendation is available for the membership of
> W3C (including Mozilla) to vote on, before it proceeds to the final
> stage of being a W3C Recommendation:
>
>   Cooperative Scheduling of Background Tasks
>   https://www.w3.org/TR/requestidlecallback/
>   https://w3c.github.io/requestidlecallback/
>   Deadline for responses: Tuesday, November 7, 2017
>
> If there are comments you think Mozilla should send as part of the
> review, please say so in this thread.  Ideally, such comments should
> link to github issues filed against the specification.  (I'd note,
> however, that there have been previous opportunities to make
> comments, so it's somewhat bad form to bring up fundamental issues
> for the first time at this stage.)
>
> Given that this is something that I believe we implement, we should
> be voting on this, even if that vote is just to support without any
> comments.  But I'd definitely like to hear from somebody
> knowledgable about the spec and our implementation before just doing
> that.
>
> -David
>
> --
> 𝄞   L. David Baron http://dbaron.org/   𝄂
> 𝄢   Mozilla  https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>


Firefox data platform & tools update, Q3 2017

2017-10-13 Thread Georg Fritzsche
[Image: Mission Control showing content crashes per 1k usage hours.]


As the data platform & tools team, we provide other teams with core tools
for working with data. This spans Firefox Telemetry, data storage and
analysis, and some central data-viewing tools. To make new developments
more visible, we publish a quarterly update on our work.

In the last quarter we continued focusing on decreasing data latency,
supporting analytics and experimentation workflows, improving stability,
and building Mission Control.

Let's go faster

To enable faster decision making, we worked on improving latency for
important use cases.

Most notable is that main pings, which power most of our main dashboards
and analyses, now arrive much faster. The new rule of thumb is 2 days
until 95% of the main ping data is available, measured from activity in
the browser to availability for analysis.

In Firefox Telemetry we can now record new probes from add-ons without
having to ride the trains, which greatly reduces shipping times for
instrumentation. This is available first for events in Firefox 56 and for
scalars in Firefox 58.

The new update ping provides a lower-latency signal for when updates are
staged and successfully applied. It is queryable through the
telemetry_update_parquet dataset.

Similarly, the new-profile ping is a signal for new profiles and
installations, which is now queryable through the
telemetry_new_profile_parquet dataset.

The new first-shutdown ping helps us better understand users who churn
after the first session, by sending a user's first-session data
immediately on Firefox shutdown.

Enabling experimentation

This year saw a lot of cross-team work on enabling experimentation
workflows in Firefox. A focus was on enabling various SHIELD studies.

The experiments viewer, which provides a front-end view for inspecting
how various metrics perform in an experiment, saw a lot of improvements.

An experiments dataset, which includes data for SHIELD opt-in experiments
and is based on the main_summary dataset, is now available in Redash and
Spark.

The experiment_aggregates dataset now includes metadata about the
experiment, and its reliability and speed have improved significantly.

Other use cases can build on the ping data from most experiments using
experiment annotations, which is available within 15 minutes in the
telemetry-cohort data source.

Tools for exploring data

Our data tools make it easier to access and query the data we have. Here
our Redash installation at sql.telemetry.mozilla.org saw many
improvements, including:

- Query revision control and reversion.
- Better security and usability for templated queries.
- Schema browser and autocomplete usability and performance improvements.
- Better support for Athena data sources.


Mission Control is a new tool which makes key measures of a Firefox
release, like crash counts, available with low latency. An early version
of it is now available here.

On the Firefox side, about:telemetry got a major redesign, which makes it
easier to navigate, adds a global search, and aligns it with the Photon
design.

Powering data analysis

To make analysis more effective, two new datasets were added:

- clients_daily, which summarizes main ping data into one row per client
  and day.
- heavy_users (docs), which has a similar format but contains only
  clients that match our definition of "heavy users".


For analysis jobs run through ATMO