Re: Testing Rust code in tree

2020-05-12 Thread Chris Hutten-Czapski
Glean Team here. Can confirm that libxul-provided symbols aren't in
rusttests builds at present (Rust stuff is built first then wrapped in the
loving embrace of libxul). We do write rusttests in our crates that
(currently) have no Gecko symbols (see toolkit/components/glean/api/), but
have ended up having to rely on GTest to exercise our one-and-only FFI
entrypoint (see toolkit/components/glean/gtest/). This leaves a hole in our
testing capabilities: anything in a crate that has Gecko symbols but isn't
reachable from FFI goes untested.
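
To make the shape of that hole concrete, here's a hypothetical Rust sketch
(invented names, not the actual Glean code):

```rust
// Hypothetical sketch, not the actual Glean code: an extern "C"
// entrypoint like this gets linked into libxul and calls into code
// that needs Gecko symbols, so it can only be exercised from a C++
// GTest. `cargo test`-style rusttests can't link it at all.
#[no_mangle]
pub extern "C" fn my_component_init() -> bool {
    uses_gecko_symbols()
}

fn uses_gecko_symbols() -> bool {
    // Imagine calls to libxul-provided functions here. Anything in
    // this crate that isn't reachable from the entrypoint above is
    // what falls into the testing hole.
    true
}
```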

:glandium (+Cc) tells us that it might be possible to fix this by using
build.rs to build the parts of Gecko needed for tests[1], but that's both
beyond my current knowledge (am a Rust neophyte) and outside my current
focus (gotta bring Glean to Firefox) so that's as far as I've gotten.

A specific wrinkle you may run across is that tests written for crates that
use Gecko symbols _can_ run as rusttests so long as the tests don't exercise
any code that needs Gecko symbols... But Only On Not-Windows. On Windows,
any crate with Gecko symbols will fail to link in the rusttests
configuration. Stylo gets around this by disabling its tests on Windows[2],
so if you want to use rusttests to cover the non-Gecko-needing pieces of
your Gecko-symbol-having crates, maybe that'll help.
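
For illustration, a minimal sketch of that approach (the attribute placement
and test are invented; see [2] for the real thing):

```rust
// At the top of the test crate's lib.rs: skip compiling the tests
// entirely on Windows, where the rusttests link step would fail for
// any crate with Gecko symbols.
#![cfg(not(windows))]

#[test]
fn logic_that_needs_no_gecko_symbols() {
    assert_eq!(1 + 1, 2);
}
```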

Hope this helps!

:chutten

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1628074#c2
[2]:
https://searchfox.org/mozilla-central/rev/4d2a9d5dc8f0e65807ee66e2b04c64596c643b7a/servo/ports/geckolib/tests/lib.rs#5-12

On Tue, May 12, 2020 at 6:00 AM James Graham  wrote:

> On 11/05/2020 23:54, Mike Hommey wrote:
> > On Mon, May 11, 2020 at 03:37:07PM -0700, Dave Townsend wrote:
> >> Do we have any standard way to test in-tree Rust code?
> >>
> >> Context: We're building a standalone binary in Rust that in the future
> will
> >> be distributed with Firefox and of course we want to test it. It lives
> >> in-tree and while we could use something like xpcshell to drive the
> >> produced executable and verify its effects it would be much nicer to be
> >> able to use Rust tests themselves. But I don't see a standard way to do
> >> that right now.
> >>
> >> Is there something, should we build something?
> >
> >
> https://searchfox.org/mozilla-central/rev/446160560bf32ebf4cb7c4e25d7386ee22667255/python/mozbuild/mozbuild/frontend/context.py#1393
>
> If it helps to have an example, Geckodriver is using RUST_TESTS
>
>
> https://searchfox.org/mozilla-central/source/testing/geckodriver/moz.build#9-20
>
> It doesn't depend on Gecko, so it's a simpler case than the one Lina
> mentioned.


Re: New and improved stack-fixing

2020-03-12 Thread Chris Hutten-Czapski
This is wonderful news!

The most recent time I interacted with this was while tracking down a
refcount leak. I was following the instructions at [1], which still mention
`fix_linux_stack.py`.

Would you like me to file a bug for that to be fixed? I'd edit it directly,
but I'm not confident I'd get the usage right.

:chutten

[1]:
https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Refcount_tracing_and_balancing#Post-processing_step_2_filtering_the_log

On Fri, Mar 6, 2020 at 1:14 PM Gabriele Svelto  wrote:

> On 06/03/20 16:46, Andrew Sutherland wrote:
> > Thank you both very much for the clarifications and your excellent work
> > here!
> >
> > In terms of the Sentry crates, I presume that means the crates in
> > https://github.com/getsentry/symbolic repo?  Are there still reasons to
> > use/pay attention to Ted's https://github.com/luser/rust-minidump repo?
> > For example, I use slightly modified versions of
> > `get-minidump-instructions` and `minidump_dump` from the latter, but I
> > want to make sure I'm hitching my metaphorical tooling wagon to the
> > right metaphorical tooling horse.
>
> Yes, we will soon be improving over that to replace the
> minidump-specific functionality of Breakpad. Sentry still relies on
> Breakpad for that stuff and I intend to replace that functionality with
> an all-Rust implementation based on Ted's stuff. Bug 1588530 [1] is the
> meta that tracks all this work.
>
>  Gabriele
>
> [1] [meta] Crash reporting Rust rewrite
> https://bugzilla.mozilla.org/show_bug.cgi?id=1588530
>


How To: Data Review in Components

2020-01-21 Thread Chris Hutten-Czapski
Hello!


We here in Data Stewardship have been receiving inquiries about how the
Data Collection Review process[0] works now that more products are being
built out of reusable components. What follows is a memo about how to
approach Data Review when you're adding a data collection to a reusable
component, or adding a reusable component to a product. The current home
for the living document version of this is here:
https://mana.mozilla.org/wiki/pages/viewpage.action?spaceKey=DATAPRACTICES&title=Data+Review+in+Components


Data Review was designed assuming that the Product was responsible for both
the data collection and reporting. The measurement code and the submission
code all lived in the same place so the developer instrumenting the probe
(using Telemetry.scalarSet or Telemetry::Accumulate or what-have-you) knew
not only what they were instrumenting, but across what populations this
probe would be reported.

This was because the code being instrumented was only ever a part of
Firefox[1].


With Android Components we radically shifted how we would build Firefox
(and other things) on Android. Instead of having all the pieces live
together and only ever being used for one product, we'd be developing the
pieces separately and using them in any number of products.


This means that when a data collection is added, chances are it's being
added to a Component, not a Product[2]. The developer adding the data
collection may not be aware of all the Products currently using their
Component, and can't know of future Products that might integrate it. This
makes Data Collection Review difficult as Question 7 tries to ascertain
what population is being measured with this new collection.

To solve this, the developer adding the data collection should list all the
Products they know of that currently embed their Component, along with a
phrase like "Users of products that embed $MyComponent" (where $MyComponent
is replaced with the name of their Component). This will help the Data Steward
understand where this collection is expected to be collected today, and
help any interested person in the future learn what names they should use
when looking these things up.

If a Product that submits data (usually by initializing the Glean SDK) adds
a Component that collects data (these can be identified by their metrics
documentation, usually in docs/metrics.md), then this is an expansion of
the population of a data collection. This means the Product needs to submit
a Data Collection Review to expand the scope of the Component's Data
Collection to the population using the Product.

To complete the review some questions (like why the data is being
collected) will not need firm answers (as those will have been provided
when the collections were added). The list of metrics can be found in the
Component's documentation. The population is the population using the
Product, and this is an answer the Product is most suited to give. As is
the description of the opt-out mechanism.

With these small allowances, Data Review is adaptable to the new
component-based development situation on Android and wherever reusable
components are included. This is new, and we will make mistakes. Please do
ask questions of the Data Stewards along the way, and let them know if you
find anything they've missed.

Things that require Data Collection Review:

1. A new data collection.

2. A Product integrating a Component that collects data.

3. A Product adding a new Data Collection System (by integrating the Glean
SDK, for instance). In most cases merely integrating a new system will add
collection, so this will be covered under (1). In other cases, you may need
special permission to start using a new system.

Things that do not require Data Collection Review:

1. A Product upgrading an integrated Component to a new version that has
new data collections. (This is covered by (1) above. The Product could be
included in the review by name, or as a product that embeds $MyComponent.
If clarification is desired, we can amend the data collection review to
specifically include the Product by name. No biggie.)

Assumptions:

* All of the Products and Components engaging in this process are subject
to Mozilla's Privacy Policy.


If you have any questions, please find us at fx-datastewa...@mozilla.com,
at #data-stewards on chat.mozilla.org (when available), or reach out to any
Data Steward listed on the wiki[0].


Thanks!

[0]: https://wiki.mozilla.org/Firefox/Data_Collection

[1]: This isn't actually true. It could also be a part of Thunderbird or
GeckoView, but let's keep it simple for now.

[2]: Data collections can be added to Products, too. In those cases, the
old mental model from Firefox still applies.


Shorter Data Collection Review Form For Renewals

2019-12-20 Thread Chris Hutten-Czapski
Hello,

  In an effort to reduce the number of permanent data collections, the Data
Stewardship Steering Committee has approved the addition of a new, short
form (3 questions) for the purpose of making Data Collection renewals
simpler.[1]

  Please feel welcome to use this form if you are renewing an expired or
expiring data collection. You can continue to use the long form instead if
you wish, but that might make your Data Steward grumpy as the new renewal
response [2] is much shorter for them, too.

  While you're here, please do consider making any new data collection an
expiring one. Checking in on it every six months or so is a good way to
ensure that it continues to meet your needs. It also leads to good data
hygiene, as unused collections will then turn themselves off, saving our
users' (and Mozilla's!) bandwidth and storage costs.

-The Data Stewardship Steering Committee
Chris H-C, Martin Lopatka, Megan McCorquodale, Nicole Shadowen, Teon Brooks

[1]: https://github.com/mozilla/data-review/blob/master/renewal_request.md
[2]: https://github.com/mozilla/data-review/blob/master/renewal_review.md


"products" key now required for new Telemetry metrics

2019-07-16 Thread Chris Hutten-Czapski
Hello,

  As part of the project to report GeckoView metrics in Fenix, we have made
the `products` key required for metrics definitions in Histograms.json,
Scalars.yaml, and Events.yaml.

  The products key identifies which "products" (for a hand-wavy definition
of the word 'product') you want your probe to be recorded on and reported
through. If the `products` key for a metric doesn't contain the current
"product", then the data for that metric isn't recorded or reported.

  The full documentation is (or soon will be, if this reaches you before
the code lands either tonight or tomorrow in Nightly 70) available in the
usual places [1] [2] [3], but here's a summary of what you probably need to
know:

  For many cases, the correct value for `products` is `["firefox"]`. This
means Firefox Desktop only. Many metrics are only relevant on Firefox
Desktop, and many decisions are based only on Firefox Desktop data. If the
metric is like that, this will ensure we only collect the data we'll use.
(Mozilla Data Privacy Principle #3 - Limited data) (and we'll save memory
on every other product)

  In other cases the correct value will be the broadest one of `["firefox",
"fennec", "geckoview"]`. This will ensure that the metric is reported by
Firefox Desktop, Firefox for Android, and GeckoView-based products that use
GeckoViewTelemetryController.jsm to submit telemetry (like Focus. Don't
worry, it's okay if you don't know what this stuff is.). This is the full
list of supported products, and is the value that every currently-present
Telemetry metric has been given. This is often too broad, but without
performing a full survey we were unable to scope it more narrowly.

(( If you ever wish to narrow the population reporting your probe (by,
e.g., removing some products from the `products` key), you do not need to
seek Data Collection Review. If you wish to broaden the population (by,
e.g., adding products to the `products` key), you will require Data
Collection Review. ))

  Soon we will be adding a new acceptable value for the "products" key
(currently named "geckoview_streaming") which will enable GeckoView to pass
some Telemetry out to Glean to be reported by products like Fenix. Expect
more on that in the near future.

  If you have any questions, please do not hesitate to reach out. We can be
found on email, IRC#telemetry, and Slack#fx-metrics.

Your Friendly Neighbourhood Firefox Telemetry Team
( :chutten, :Dexter, :janerik, :gfritzsche, and :travis_ )

[1]:
https://firefox-source-docs.mozilla.org/toolkit/components/telemetry/telemetry/collection/histograms.html#products
[2]:
https://firefox-source-docs.mozilla.org/toolkit/components/telemetry/telemetry/collection/scalars.html#required-fields
[3]:
https://firefox-source-docs.mozilla.org/toolkit/components/telemetry/telemetry/collection/events.html#the-yaml-definition-file


Fwd: Changes to about:telemetry -- Now With Processes!

2019-05-10 Thread Chris Hutten-Czapski
Hello!

about:telemetry, the UI that allows you to browse current and historical
Telemetry data in Firefox, is changing slightly. Starting with bug 1437446
(presently on autoland) it will default to showing you the Telemetry
collected in all process types (previously it would show only the "current"
process' data, which was confusing to users).

This means you no longer need to use the drop-down at the top-right near
the search bar to select the process you're interested in, _and_ search
will comb through all probes in all processes by default.

(that drop-down now controls which "store" of data you're viewing.)

This is all thanks to contributor :dalc who took this design across the
finish line. Thank you :dalc!

If you find any bugs, please file them as usual in Toolkit::Telemetry.

:chutten
Firefox Telemetry Team


Re: Input Delay Metric proposal

2018-09-25 Thread Chris Hutten-Czapski
For in-the-wild input delay not specifically during pageload, we also
measure INPUT_EVENT_RESPONSE_DELAY_MS[1] and (from Firefox 53-60, then
resuscitated in 64) INPUT_EVENT_RESPONSE_COALESCED_MS[2], which record[3]
from the time the OS created the input event until the time the process is
done with the event.

So, for content processes it measures, starting from when the OS creates
the input event: the time the event remained undelivered by the OS, the
time it remained unhandled in the parent process, the IPC time, the time it
remained unhandled in the content process, and the time the content process
took to process it (which, IIRC, includes synchronous JS event handlers).

It didn't attempt to measure how long our style and layout engines would
take to manipulate structure and presentation, or how long our rendering
and compositing machinery would take to raster it to the screen so a user
could see any visible results. But it was the closest thing to a holistic
"how long after the user tries to do something have we actually completed
doing that something?" measurement we had at the time (around the
more-modern e10s timeframe (Firefox 44) and, for the COALESCED variant,
around the Quantum timeframe).

(The COALESCED variant tries to capture a more relevant sampling of jank
magnitude, on the idea that multiple input events arriving while input
events are just queueing aren't a good unit of user frustration with
responsiveness. It coalesces any overlapping input events so that an OS
serving us 20 keypresses during a sync GC or other jank incident counts as
a single sample instead of 20. In practice this results in an order of
magnitude fewer samples in the histogram.)
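
If it helps, here's a sketch of the coalescing idea (invented types; not the
actual Gecko code):

```rust
// Merge input events whose handling windows overlap, then take one
// duration sample per merged run: 20 keypresses queued behind one
// sync GC become a single (long) sample instead of 20.
#[derive(Clone, Copy)]
struct InputEvent {
    created_ms: u64, // when the OS created the event
    handled_ms: u64, // when the process finished handling it
}

fn coalesced_samples(mut events: Vec<InputEvent>) -> Vec<u64> {
    events.sort_by_key(|e| e.created_ms);
    let mut samples = Vec::new();
    let mut iter = events.into_iter();
    let Some(mut run) = iter.next() else {
        return samples;
    };
    for e in iter {
        if e.created_ms <= run.handled_ms {
            // Queued while the previous event(s) were still being
            // handled: extend the run, don't take a new sample.
            run.handled_ms = run.handled_ms.max(e.handled_ms);
        } else {
            samples.push(run.handled_ms - run.created_ms);
            run = e;
        }
    }
    samples.push(run.handled_ms - run.created_ms);
    samples
}
```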

(There were also variants for measuring user-experienced jank during and
after application startup, but they weren't used as much)

Adding a variant that is only recorded between FCP and TTI would give us
yet another number we could consider... but something tells me that having
more numbers isn't necessarily a solution at this point :)

:chutten

[1]: https://mzl.la/2N0ECHy
[2]: https://mzl.la/2N2QMiT
[3]:
https://telemetry.mozilla.org/probe-dictionary/?search=INPUT_EVENT_RESPONSE


On Fri, Sep 21, 2018 at 11:30 AM  wrote:

> On Chrome, we've played around some with TTI-FCP.
>
> Another metric we've experimented a bit with is the length of the longest
> long task between FCP and TTI. You can think of this a bit like the "Max
> FID" - if someone tapped at the worst possible time, what would the FID be.
>
> We've also played some with Expected Queueing Time (
> https://docs.google.com/document/d/1Vgu7-R84Ym3lbfTRi98vpdspRr1UwORB4UV-p9K1FF0/)
> between FCP & TTI, which sounds similar to the MID proposal above.
>
> For delays during page load, I think focusing more on the worst case makes
> sense, so I lean towards something along the lines of the longest long task
> between FCP & TTI.
>
> We did a correlation study between FID, TTI and the longest long task
> between FCP & TTI. The longest long task approach correlated very slightly
> better with FID than TTI. We haven't run the correlation study on EQT
> between FCP & TTI, but I'm pretty confident it won't perform as well.
>
> Tim
>
> On Thursday, September 20, 2018 at 10:42:55 AM UTC-4, Ted Mielczarek wrote:
> > On Wed, Sep 19, 2018, at 2:42 PM, Randell Jesup wrote:
> > > Problem:
> > > Various measures have been tried to capture user frustration with
> having
> > > to wait to interact with a site they're loading (or to see the site
> > > data).  This includes:
> > >
> > > FID - First Input Delay --
> > > https://developers.google.com/web/updates/2018/05/first-input-delay
> > > TTI - Time To Interactive --
> > >
> https://developers.google.com/web/fundamentals/performance/user-centric-performance-metrics#time_to_interactive
> > > related to: FCP - First Contentful Paint and FMP - First Meaningful
> > > Paint --
> > >
> https://developers.google.com/web/fundamentals/performance/user-centric-performance-metrics#first_paint_and_first_contentful_paint
> > > TTVC (Time To Visually Complete), etc.
> > >
> > > None of these do a great job capturing the reality around pageload and
> > > interactivity.  FID is the latest suggestion, but it's very much based
> > > on watching user actions and reporting on them, and thus depends on how
> > > much they think the page is ready to interact with, and dozens of other
> > > things. It's only good for field measurements in bulk of a specific
> > > site, by the site author.  In particular, FID cannot reasonably be used
> > > in automation (or before wide deployment).
> > >
> > > Proposal:
> > >
> > > We should define a new measure based on FID named MID, for Median Input
> > > Delay, which is measurable in automation and captures the expected
> delay
> > > a user experiences during a load.  We can run this in automation
> against
> > > a set of captured pages, while also 

Re: Event Telemetry now sent on "event" pings instead of "main" pings

2018-07-04 Thread Chris Hutten-Czapski
More details in a more narrative-focused format can be found in the blog
post here:
https://chuttenblog.wordpress.com/2018/07/04/faster-event-telemetry-with-event-pings/

:chutten


On Wed, Jun 27, 2018 at 10:02 AM Chris Hutten-Czapski 
wrote:

> Hello,
>
>   Due to a forecast increase in demand for Event Telemetry, we have (via
> bug 1460595) moved Event Telemetry transmission from the "main" ping to
> special-purpose "event" pings.
>
>   This has a couple of benefits, most notably that they are sent more
> often (less latency) and can also send more event records (less/no
> truncation).
>
>   After a few days to a week I will be performing some validation analysis
> (via bug 1463410) of the contents and behaviours of these pings. If it all
> checks out we plan to uplift it to Beta 62.
>
>   This news should mostly only matter to you if you analyze events in situ
> within a ping. If the words "Spark," "ATMO," and "Databricks" mean little
> or nothing to you, then you can probably just take from this that we are
> just, once again, making things a bit better. Which we really enjoy doing :)
>
> As always, please do reach out if you have any questions or concerns.
>
> Your Friendly Neighbourhood Firefox Telemetry Team
> (:gfritzsche, :Dexter, :chutten, :janerik)
>


Event Telemetry now sent on "event" pings instead of "main" pings

2018-06-27 Thread Chris Hutten-Czapski
Hello,

  Due to a forecast increase in demand for Event Telemetry, we have (via
bug 1460595) moved Event Telemetry transmission from the "main" ping to
special-purpose "event" pings.

  This has a couple of benefits, most notably that they are sent more often
(less latency) and can also send more event records (less/no truncation).

  After a few days to a week I will be performing some validation analysis
(via bug 1463410) of the contents and behaviours of these pings. If it all
checks out we plan to uplift it to Beta 62.

  This news should mostly only matter to you if you analyze events in situ
within a ping. If the words "Spark," "ATMO," and "Databricks" mean little
or nothing to you, then you can probably just take from this that we are
just, once again, making things a bit better. Which we really enjoy doing :)

As always, please do reach out if you have any questions or concerns.

Your Friendly Neighbourhood Firefox Telemetry Team
(:gfritzsche, :Dexter, :chutten, :janerik)


Re: Update on rustc/clang goodness

2018-05-15 Thread Chris Hutten-Czapski
Recently I found myself using the mozbuild-supplied clang for compiling
Gecko on Linux (https://bugzilla.mozilla.org/show_bug.cgi?id=1451312). It
works just peachily for me, and I find myself liking their error message
decorators better than gcc's.

Since we supply a compiler during bootstrap, it seems a little odd we don't
default to using it.

:chutten


On Mon, May 14, 2018 at 4:58 PM David Major  wrote:

> We've confirmed that this issue with debug symbols comes from lld-link and
> not from clang-cl. This will likely need a fix from the LLVM side, but in
> the meantime I'd like to encourage people not to be deterred from using
> clang-cl as your compiler.
>
> On Thu, May 10, 2018 at 9:12 PM Xidorn Quan  wrote:
>
> > On Fri, May 11, 2018, at 10:35 AM, Anthony Jones wrote:
> > > I have some specific requests for you:
> > >
> > > Let me know if you have specific Firefox related cases where Rust
> is
> > > slowing you down (thanks Jeff [7])
> > > Cross language inlining is coming - avoid duplication between Rust
> > > and C++ in the name of performance
> > > Do developer builds with clang
>
> > Regarding the last item about building with clang on Windows, I'd not
> recommend people who use Visual Studio for debugging Windows builds to build
> with clang at this moment.
>
> I've tried using lld-link as the linker (while continuing to use cl rather
> than clang-cl) for my local Windows build, and it seems to cause problems
> when debugging with Visual Studio. Specifically, you may not be able to
> invoke debugging functions like DumpJSStack, DumpFrameTree in Immediate
> Windows, and variable value watching doesn't seem to work well either.
>
> > I've filed a bug[1] for the debugging function issue (and probably should
> file another for the watching issue as well).
>
> > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1458109
>
>
> > - Xidorn


Re: Use Counters were previously over-reporting, are now fixed.

2018-01-17 Thread Chris Hutten-Czapski
It depends on what you're measuring. The aggregates are recorded by
build_id, submission_date, OS, OS version, channel, major version, e10s
enabled setting, application name, and architecture. They can be returned
aggregated against any of those dimensions except the first two. So the
error depends on the values distributed amongst those dimensions... which
is a lot of things to check.

Generally speaking we don't add a use counter for anything that is used a
lot. Even people who see these features being used don't see them used a
lot. The only other candidate for largest change would be GetPreventDefault
(https://mzl.la/2mQqYMR), in the sub-2% category. Most use counters are
less than 0.1% usage.

So on the count of "baseline sense for how much I should care"... if you
have put in a Use Counter and thought usage was too high to remove that
feature, look again. Re-evaluate with the new, correct data.

If you have considered using Use Counters but heard they couldn't be
trusted, they can. Now.

If neither of those things apply, this is just a note that Use Counters
exist and can be useful. Caring is optional :)

:chutten

On Wed, Jan 17, 2018 at 12:51 PM, Steve Fink <sf...@mozilla.com> wrote:

> On 1/17/18 7:57 AM, Chris Hutten-Czapski wrote:
>
>> Hello,
>>
>>Use Counters[0] as reported by the Telemetry Aggregator (via the HTTPS
>> API, and the aggregates dashboards on telemetry.mozilla.org) have been
>> over-reporting usage since bug 1204994[1] (about the middle of September,
>> 2015). They are now fixed [2], and in the course of fixing it, :gfritzsche
>> prepared a nifty view [3] of them that performs the fix client-side.
>>
>
> Can you give a sense for the size of the error in observed counts? (Is
> this a 15% should-have-been 10% type of thing, or almost always a 15.001%
> should-have-been 15% type of thing?)
>
> Could you order all of the counts by percentage error and send out a list
> of the top 10 or so? (eg in your blog post, you had PROPERTY_FILL_DOCUMENT
> going from 10.00% -> 9.46%, for an error of 0.54%. If that 0.54% is in the
> top 10, report the measure with its old and new percentages.) Really, it
> ought to be the list of measures, if any, that crossed over some "decision
> threshold", but I'm assuming that for the most part we don't have specific
> thresholds.
>
> I'm just trying to get a baseline sense for how much I should care. :-)
>
>
> Thanks,
>
> Steve
>
>


Use Counters were previously over-reporting, are now fixed.

2018-01-17 Thread Chris Hutten-Czapski
Hello,

  Use Counters[0] as reported by the Telemetry Aggregator (via the HTTPS
API, and the aggregates dashboards on telemetry.mozilla.org) have been
over-reporting usage since bug 1204994[1] (about the middle of September,
2015). They are now fixed [2], and in the course of fixing it, :gfritzsche
prepared a nifty view [3] of them that performs the fix client-side.

  Of all the problems to have with use-counters, _over_estimating usage is
the kind of problem that hurts the web least, as at least we weren't
retiring features that were used more than we reported.

  For a narrative-form description of the events, I wrote this blog post:
[4]. In short, we goofed. But it's fixed now, and the aggregator (and,
thus, telemetry.mozilla.org's telemetry aggregates dashboards) now report
the correct values for all queries, current and historical.

  So go forth and enjoy your use counters! They're pretty neat, actually.

:chutten

[0]:
https://firefox-source-docs.mozilla.org/toolkit/components/telemetry/telemetry/collection/use-counters.html
[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1204994
[2]: https://github.com/mozilla/python_mozaggregator/pull/59
[3]: http://georgf.github.io/usecounters/index.html
[4]:
https://chuttenblog.wordpress.com/2018/01/17/firefox-telemetry-use-counters-over-estimating-usage-now-fixed/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Changes to tab min-width

2017-10-05 Thread Chris Hutten-Czapski
I prefer the old behaviour, but I don't have a strong opinion on the
matter. I think it's because I'm used to tab navigation by keyboard
shortcut more than by mouse. I rearrange tabs so that they're close
together.

For everyone curious about how much of an outlier your subsessions are...
on Nightly 58: https://mzl.la/2ge6Bpk
Over 20% of subsessions have only one tab.
50% of subsessions have 3 or fewer.
Over 90% of subsessions have 20 or fewer.
99% of subsessions have 218 or fewer tabs open concurrently.

Of course this is wildly different on other channels:
Beta 57's 99%ile is at 22[1]
Release 56's 99%ile is at 148, with 94% of subsessions having 20 or fewer
tabs[2]

Note: telemetry.mozilla.org is per-subsession aggregation, not per-client,
so clients with more subsessions are weighted more heavily
(pseudoreplication). Just something to keep in mind.
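
A toy illustration of that caveat, with made-up numbers:

```rust
use std::collections::BTreeMap;

// Per-subsession aggregation counts a client once per subsession, so
// one heavy user with many subsessions drags the aggregate toward
// their behaviour. All data below is invented for illustration.
fn main() {
    // (client id, max concurrent tabs in one subsession)
    let subsessions = [(1u32, 100.0f64), (1, 100.0), (1, 100.0), (2, 2.0), (3, 2.0)];

    let per_subsession_mean: f64 =
        subsessions.iter().map(|&(_, t)| t).sum::<f64>() / subsessions.len() as f64;

    // Collapse to one mean per client before averaging.
    let mut by_client: BTreeMap<u32, Vec<f64>> = BTreeMap::new();
    for &(c, t) in &subsessions {
        by_client.entry(c).or_default().push(t);
    }
    let per_client_mean: f64 = by_client
        .values()
        .map(|v| v.iter().sum::<f64>() / v.len() as f64)
        .sum::<f64>()
        / by_client.len() as f64;

    println!("per-subsession mean: {per_subsession_mean:.1}"); // 60.8
    println!("per-client mean:     {per_client_mean:.1}");     // 34.7
}
```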

:chutten

[1]: https://mzl.la/2gdB1YP
[2]: https://mzl.la/2gdXftM


On Wed, Oct 4, 2017 at 4:46 PM, Girish Sharma 
wrote:

> +1 to 75px.
> All the points that I wanted to say about 50px being too small have
> already been said by now.
>
> On Thu, Oct 5, 2017 at 1:29 AM, Dirkjan Ochtman 
> wrote:
>
>> On Tue, Oct 3, 2017 at 10:36 PM, Jeff Griffiths 
>> wrote:
>>
>>> 1. do you prefer the existing behaviour or the new behaviour?
>>> 2. if you prefer a value for this pref different than 50 or 100, what
>>> is it? Why?
>>>
>>
>> Like others, I really like ~75 pixels. This allows me to see the first
>> 5-6 characters of the page's title, which I find really helpful in
>> distinguishing tabs. At 50 pixels, it's really only the favicon, which
>> seems much less helpful in my usage.
>>
>> Cheers,
>>
>> Dirkjan
>>
>
>
> --
> Girish Sharma
> B.Tech(H), Civil Engineering,
> Indian Institute of Technology, Kharagpur
>


Re: Retaining Nightly users after disabling of legacy extensions

2017-08-23 Thread Chris Hutten-Czapski
For those interested, preliminary data shows a continuing increase in the
Nightly population since 57. The number of users using Nightly 57 on August
17 was the highest number of users on any day on any Nightly version
since... well, our data retention policy cuts out at 6 months, so since at
least February 23. This continues the general growth of Nightly we've been
seeing since July with 56.

For the buildids, I just did a quick count of Nightly57 "main" pings
submitted on Aug 17 and found that over 90% are from after the 11th. Not
sure how normal that is (some people take forever to update their nightly),
but it puts a limit on how many users are purposefully "holding back".

Yes, I'm being purposefully vague on the numbers. I did say "preliminary
data" you'll notice. :)

:chutten

On Mon, Aug 14, 2017 at 9:28 AM, Andrew Swan  wrote:

> On Mon, Aug 14, 2017 at 6:16 AM, Honza Bambas  wrote:
>
> > Ed already mentioned that the addons manager doesn't automatically
> suggest
> > or even update to webext alternatives.  We really should have something
> > like this SOON - the more automatic or fluent the better.
>
>
> There is a "find a replacement" button next to disabled legacy extensions
> in about:addons.  Ed's original comment was that if, eg, Adblock Plus has a
> legacy version and a webextension version, we don't automatically direct
> users with the disabled legacy version to the webextension.  That's mostly
> because we don't actually have a straightforward way to do so, but it's
> also only an issue for users on non-release channels.
>
> -Andrew


Re: GPU Process Experiment Results

2017-01-16 Thread Chris Hutten-Czapski
My one-sentence summary of the article: if anything, the test cohort with
the GPU process saw improved stability, especially for graphics crashes.
Which is awesome!

The figures may need error bars so we can see how much of the results
might have to be attributed to the noise of reported crash figures.

Also, I can't quite tell what the units are. The initial graphs seem to be
#crashes-reported-to-Socorro, but later ones talk of
#crashes-per-1000-usage-hours. The axes aren't labelled, so it's difficult
to be precise.
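
For what it's worth, here's the definition I'd assume for the latter unit
(my assumption, not taken from the report):

```rust
// Crashes normalized by usage, so cohorts of different sizes and
// activity levels are comparable.
fn crashes_per_1000_usage_hours(crashes: u64, usage_hours: f64) -> f64 {
    crashes as f64 / usage_hours * 1000.0
}
// e.g. 300 crashes over 150,000 usage hours -> 2.0
```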

Have you attempted to measure crash counts and types via Telemetry instead
of Socorro? If all you need is a count and some metadata, the submission
rate to Telemetry is much (25x) higher than to Socorro. (actually, if all you
need is a count, crash_aggregates would be a good place to start, as most
of the counting has been done for you).

All in all, very interesting and I can't wait to see future experiments in
Aurora and on Beta, when the time is right.

:chutten


On Mon, Jan 16, 2017 at 4:11 PM, Anthony Hughes  wrote:

> Hello Platform folks,
>
> Over the Christmas break I rolled out a Telemetry Experiment on Nightly to
> measure the stability impact of the GPU Process. This experiment concluded
> on January 11. Having had time to analyze the data I've published a report
> on my blog:
> https://ashughes.com/?p=374
>
> It should come up on Planet shortly but I wanted to post here for increased
> visibility. Feel free to send me questions, comments, and feedback.
>
> Cheers
>
> --
> Anthony Hughes
> Senior Quality Engineer
> Mozilla Corporation


Re: Windows XP and Vista Long Term Support Plan

2016-11-02 Thread Chris Hutten-Czapski
Over the past two months there has been no absolute decline in the number
of Windows XP installs. (Source: Tableau data, which is sadly not public,
so I cannot link it: it reveals more data from our users than we feel
comfortable sharing.)

Over the past two months there has been an absolute increase in the number
of Windows 7 and Windows 10 installs. (Tableau)

Thus, as a percentage it is decreasing. As a number of users, it isn't.

It is important to be clear and precise, and I apologize that my blogpost
failed in those respects.

So, to recap: there are costs to supporting Windows XP and Windows Vista.
The userbase is shrinking as a percentage of the overall Firefox userbase.
There is a plan that will ensure support to April of 2018, two years after
Chrome dropped support and a full decade after SP3 was released.

In my opinion, this is an excellent and generous plan. I expect the Windows
XP userbase to continue to shrink as a proportion of Firefox users, and to
restart shrinking as an absolute value in the near term.

Once again I am sorry for any confusion I may have caused.

:chutten


On Tue, Nov 1, 2016 at 11:08 PM, Peter Dolanjski 
wrote:

> Chutten is not as categoric as you are:
>>
>>   It is also possible that we’ve seen some ex-Chrome users fleeing
>>   Google’s drop of support from earlier this year.
>>
> This is possible, but I'd still expect to see the biggest impact when
> Chrome started including the scary persistent notification that the user
> will no longer get updates.
>
>
>>   Deseasonalized numbers for just WinXP users are hard to come by, so
>>   this is fairly speculative. One thing that’s for certain is that the
>>   diminishing Windows XP userbase trend I had previously observed (and
>>   was counting on seeing continue) is no longer in evidence.
>
>
> Chutten, if you have some other stats on this, I'd love to take a look.
> The longitudinal data still shows the following trend:
>
> *Changes to **Daily Active User proportion of WinXP to total Windows
> population from previous month:*
> Week of Jan. 20th, 2016: -4.3%
> Week of Feb. 20th, 2016: -2.5%
> Week of Mar. 20th, 2016: -2.6%
> Week of Apr. 20th, 2016: -3.2%
> Week of May 20th, 2016: -1.3%
> Week of June 20th, 2016: -3.3%
> Week of July 20th, 2016: -2.3%
> Week of Aug. 20th, 2016: -4.9%
> Week of Sept. 20th, 2016: -1.1%
> Week of Oct. 20th, 2016: -1.2%
>
> Sure, there were larger drops in the summer that seemed to have eased off
> in Sept./Oct. but it's too early to tell if that's just some weirdness from
> seasonality.
>
> Peter
>
>
>
>
>
> On Tue, Nov 1, 2016 at 9:56 PM, Mike Hommey  wrote:
>
>> On Wed, Nov 02, 2016 at 09:28:40AM +0800, Peter Dolanjski wrote:
>> > On 10/31/2016 3:54 PM, juar...@gmail.com wrote:
>> >
>> > >
>> > > Discontinuing support for 10% of users sounds like shrinking 10% of
>> > > customers, lay off 10% of employees, reduce 10% of funds for
>> > > investments.
>> >
>> >
>> > I can tell you that the evidence we have does not support the notion
>> > that end of life (or the approach we are proposing) will actually
>> > result in the attrition of those users.  We examined the impact of
>> > Chrome's end of life on Windows XP users.  The majority of users
>> > planned to stick with Chrome even without security updates.  We also
>> > saw almost zero evidence of Chrome's end of life causing an uptick in
>> > Firefox usage or downloads among XP users.
>>
>> Chutten is not as categoric as you are:
>>
>>   It is also possible that we’ve seen some ex-Chrome users fleeing
>>   Google’s drop of support from earlier this year.
>>
>>   Deseasonalized numbers for just WinXP users are hard to come by, so
>>   this is fairly speculative. One thing that’s for certain is that the
>>   diminishing Windows XP userbase trend I had previously observed (and
>>   was counting on seeing continue) is no longer in evidence.
>>
>>   https://chuttenblog.wordpress.com/2016/10/28/firefox-windows-xp-exit-plan/
>>
>> Mike
>>
>
>


Re: Help ridding tests of unsafe CPOWs

2016-10-19 Thread Chris Hutten-Czapski
Things that would help me help with this endeavour:

1: A bug to file patches against
2: A method for detecting if our fix actually fixes the problem

I presume a skeleton of what we're looking for is:
1) Use DXR/ls -r/whatever to find the test file in the tree
2) On the line number(s) mentioned, replace the existing use of a CPOW with
something better. This may involve writing things in terms of ContentTask
(see documentation here[X]), or by simply finding a non-CPOW-using
alternative (like using browser.webNavigation.sessionHistory instead of
browser.sessionHistory)
3) Run the test to make sure it still passes using ./mach test
path/to/test.js
4) Write an informative commit message linking back to bug 
5) Based on what kind of test it is, send it to try to make sure it isn't
broken on other platforms
6) 
7) Get the patch reviewed on bug 

Is this correct?

:chutten

On Wed, Oct 19, 2016 at 5:00 AM, Gabriele Svelto 
wrote:

>  Hi Blake,
>
> On 19/10/2016 00:28, Blake Kaplan wrote:
> > I've been seeing a pattern of "unsafe" CPOWs causing our browser-chrome
> > mochitests to go intermittently orange. Generally, it seems that a test
> > randomly turns orange, gets starred and ignored until RyanVM or another
> one
> > of our sheriffs gets tired of seeing it and needinfos somebody in the
> hopes
> > of fixing it.
>
> I remember we did a big push in the War on Orange maybe three (or four?)
> years ago. We could do it again; calling out to everybody to take
> intermittent tests in the modules they're familiar with and start fixing
> them. Personally I'd happily dedicate a chunk of my time to doing it for
> a couple of weeks; having to re-trigger stuff every single time I make a
> try run drives me nuts.
>
>  Gabriele
>
>


Re: W3C Proposed Recommendation: HTML 5.1

2016-10-12 Thread Chris Hutten-Czapski
Can you provide any details (either inline, or a sampling of links) to
summarize the broader concerns that might not be encapsulated in the
document itself?

On Mon, Oct 10, 2016 at 9:46 PM, L. David Baron  wrote:

> A W3C Proposed Recommendation is available for the membership of W3C
> (including Mozilla) to vote on, before it proceeds to the final
> stage of being a W3C Recomendation:
>
>   HTML 5.1
>   W3C TR draft: https://www.w3.org/TR/html/
>   W3C Editor's draft: https://w3c.github.io/html/
>   deadline: Thursday, October 13, 2016
>
> If there are comments you think Mozilla should send as part of the
> review, please say so in this thread.  (I'd note, however, that
> there have been many previous opportunities to make comments, so
> it's somewhat bad form to bring up fundamental issues for the first
> time at this stage.)
>
> Note that this specification is somewhat controversial for various
> reasons, mainly related to the forking of the specification from the
> WHATWG copy, the quality of the work done on it since the fork, and
> some of the particular modifications that have been made since that
> fork.
>
> -David
>
> --
> 𝄞   L. David Baron                         http://dbaron.org/   𝄂
> 𝄢   Mozilla                          https://www.mozilla.org/   𝄂
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>