Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-15 Thread William Lachance

On 2016-02-14 8:30 PM, Valentin Gosu wrote:

>Great!  It'll be interesting to track how these change over time as well
>(or as versions get added to the list).  Again, medians/means/etc may
>help with evaluating and tracking this (or automating notices, ala Talos)
>

I thought about doing this, but Talos always uses static content on a
local connection, whereas this goes over a real network whose performance
may vary, and loads real websites which may change content or optimize
for different situations. I expect it's useful for confirming certain
properties, such as whether page loads are faster on Fx, Chrome or Nightly,
and by how much, but probably can't produce results that make sense over a
longer period of time.


These things are true of Talos too, at least to an extent -- machine 
configurations change, we modify tests, etc. As long as the numbers 
remain relatively stable on a day-to-day basis, Perfherder might well be 
able to generate useful alerts 
(https://treeherder.allizom.org/perf.html#/alerts?status=-1=1) 
when someone checks something in that improves or regresses performance.


If you're interested in getting your performance framework submitting to 
treeherder/perfherder, let me know. It's pretty trivial.


Will
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-14 Thread Martin Thomson
On Mon, Feb 15, 2016 at 1:12 PM, Martin Thomson  wrote:
> On Mon, Feb 15, 2016 at 12:30 PM, Valentin Gosu  
> wrote:
>>> Thumbnails, or columns on the right for each selected browser with
>>> median (or mean), with the best (for that site) in green, the worst in
>>> red would allow eyeballing the results and finding interesting
>>> differences without clicking on 100 links...  (please!)  Or to avoid
>>> overloading the page, one page with graphs like today, another with the
>>> columns I indicated (where clicking on the row takes you to the graph
>>> page for that side).
>>>
>>
>> What I noticed is that pages with lots of elements, and elements that come
>> from different sources, seem to have higher variability: pages such as
>> flickr, with lots of images of various sizes, or pages that load various
>> ads.
>
> You currently graph every test result, sorted.  This can be reduced to
> a single measurement.  Here I think that you can take the 5th, 50th
> and 95th percentiles (mean isn't particularly interesting, and you
> want to avoid extreme outliers).  The x axis can then be used for
> something else.  The obvious choice is that you turn this into a bar
> graph with browsers on that x-axis.  You could probably remove the
> browser selector then.

Oh, to be a little less obtuse, I think that means that you get a
column graph with error bars on each column.  Your x-axis is by
browser (and version) with two columns for each (first view and
refresh).
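Martin's suggestion can be sketched in a few lines of Python. This is a hypothetical illustration, not what the plot tool actually computes: `percentile` is a simple nearest-rank-style implementation, and `summarize` collapses one browser's raw run times into the three values that would become a column plus its error bars.

```python
def percentile(sorted_values, p):
    """Nearest-rank-style percentile of an already-sorted list (p in 0..100)."""
    if not sorted_values:
        raise ValueError("empty sample")
    # Index into the sorted data; truncation keeps us inside the list.
    k = int(p / 100 * (len(sorted_values) - 1))
    return sorted_values[k]

def summarize(run_times_ms):
    """Collapse raw run times into 5th/50th/95th percentiles for one bar."""
    data = sorted(run_times_ms)
    return {
        "p5": percentile(data, 5),    # lower error bar
        "p50": percentile(data, 50),  # bar height (median)
        "p95": percentile(data, 95),  # upper error bar
    }
```

Each browser/version then contributes one `p50` bar with error bars at `p5` and `p95`, freeing the x-axis for the browser labels as described above.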


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-14 Thread Martin Thomson
On Mon, Feb 15, 2016 at 12:30 PM, Valentin Gosu  wrote:
>> Thumbnails, or columns on the right for each selected browser with
>> median (or mean), with the best (for that site) in green, the worst in
>> red would allow eyeballing the results and finding interesting
>> differences without clicking on 100 links...  (please!)  Or to avoid
>> overloading the page, one page with graphs like today, another with the
>> columns I indicated (where clicking on the row takes you to the graph
>> page for that side).
>>
>
> What I noticed is that pages with lots of elements, and elements that come
> from different sources, seem to have higher variability: pages such as
> flickr, with lots of images of various sizes, or pages that load various
> ads.

You currently graph every test result, sorted.  This can be reduced to
a single measurement.  Here I think that you can take the 5th, 50th
and 95th percentiles (mean isn't particularly interesting, and you
want to avoid extreme outliers).  The x axis can then be used for
something else.  The obvious choice is that you turn this into a bar
graph with browsers on that x-axis.  You could probably remove the
browser selector then.


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-14 Thread Valentin Gosu
On 12 February 2016 at 17:11, Randell Jesup  wrote:

> >- You can click on each individual point to go to the WPT run and view
> >the results in greater detail
>
> What does "run index" mean in the graphs?  The values appear to be
> sorted from best to worst; so it's comparing best to best, next-best to
> next-best, etc?  Ah, and I see "sorted" undoes the sorting.
>

In the sorted version, it is the index of that run in the sorted array and
mostly meaningless.
The run index in the unsorted version is the index of the run in
chronological order, but the firstRun and repeatRun will share the same
index.


> I'd think displaying mean/median and std-deviation (or a bell-curve-ish)
> might be easier to understand.  But I'm no statistician. :-)  It also
> likely is easier to read when the numbers of samples don't match (or you
> need to stretch them all to the same "width"); using a bell-curve plot of
> median/mean/std-dev avoids that problem.
>

I also think it would be quite useful, but my knowledge of statistics is
pretty basic.
I tried to get the number of samples to match, but for a few domains that
didn't work. I assume there's a bug in my scripts.


>
> Thumbnails, or columns on the right for each selected browser with
> median (or mean), with the best (for that site) in green, the worst in
> red would allow eyeballing the results and finding interesting
> differences without clicking on 100 links...  (please!)  Or to avoid
> overloading the page, one page with graphs like today, another with the
> columns I indicated (where clicking on the row takes you to the graph
> page for that side).
>

What I noticed is that pages with lots of elements, and elements that come
from different sources, seem to have higher variability: pages such as
flickr, with lots of images of various sizes, or pages that load various
ads.


>
> >--
> >Error sources:
> >
> >Websites may return different content depending on the UA string. While
> >this optimization makes sense for a lot of websites, in this situation it
> >is difficult to determine if the browser's performance or the website's
> >optimizations have more impact on the page load.
>
> Might be interesting to force our UA for a series of tests to match
> theirs, or vice-versa, just to check which sites appear to care (and
> then mark them).
>

I've tried it for a few domains and it didn't make much of a difference.
I'll try it for all the domains to see if there is a pattern we could make
out.


>
> Great!  It'll be interesting to track how these change over time as well
> (or as versions get added to the list).  Again, medians/means/etc may
> help with evaluating and tracking this (or automating notices, ala Talos)
>

I thought about doing this, but Talos is always using static content on a
local connection, whereas this goes over a real network which may vary
performance, and load real websites which may change content or optimize
for different situations. I expect it's useful for confirming certain
properties, such as if page loads are faster on Fx, Chrome or Nightly, and
by how much, but probably can't get results that make sense over a longer
period of time.


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-12 Thread Patrick Meenan
On Thursday, February 11, 2016 at 8:27:46 PM UTC-5, Eric Rahm wrote:
> On Thursday, February 11, 2016 at 5:03:05 PM UTC-8, Patrick Meenan wrote:
> > "Memory Usage" is complicated, especially when you try to compare 
> > different architectures.
> 
> Sure, but this is all Windows for desktop at least.

Sorry, I meant different browser architectures.  Even moving from monolithic to 
e10s won't be directly comparable.  Could be useful for comparing a single 
browser/architecture to itself over time though.

WPT also injects itself into the parent process and adds somewhat to the memory 
stats (in particular, it keeps a copy of all of the response bodies and GDI 
memory for screenshots), so it's not a completely clean read, but it should be 
consistent over time for a given browser.

> 
> > Working set? Virtual memory? Accounting for shared pages, etc.
> 
> Working set (RSS) and private working set (USS) are the most interesting 
> numbers. This gets tricky with multi-process setups, but a reasonable 
> baseline I've been looking at is |total_memory = parent_rss + sum(child_uss)|
> 
> For example with Firefox I would be interested in the RSS of the parent 
> process (firefox.exe) and the USS of the child processes 
> (plugin-container.exe). For Chrome it would be more along the lines of the 
> RSS of the main chrome process, and the USS of the renderer/gpu/plugin 
> processes (and probably the RSS of the nacl process if that's still around).

I can grab a one-time snapshot of these at the end of the test/page load.  
Walking the process list was too expensive to do every 100ms (which is how 
often I collect CPU utilization).  Should be able to add it today and report 
both the parent RSS and sum(child_uss) as separate numbers.


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-12 Thread Randell Jesup
>TL;DR - Firefox does pretty well when compared to Chrome.

Excellent!  Cool stats!

>We've begun this project by doing a set of performance runs on
>WebPageTest.com in order to compare Firefox's performance against Chrome on
>the home pages of the Alexa 100 (the top 100 web sites worldwide). We've
>made a visualization tool which plots all of the runs on a graph, so we may
>easily compare them:
>
>http://valenting.github.io/presto-plot/plot.html
>
>- click on a domain link to see results for each site.
>- you are able to view the results sorted or unsorted, or filter by the
>browser version
>- you can also compare metrics other than load time - such as speed
>Index, number of DNS queries, etc
>- you can also compare the browsers on several connectivity profiles
>(Cable, 3G, Dialup)
>- You can click on each individual point to go to the WPT run and view
>the results in greater detail

What does "run index" mean in the graphs?  The values appear to be
sorted from best to worst; so it's comparing best to best, next-best to
next-best, etc?  Ah, and I see "sorted" undoes the sorting.

I'd think displaying mean/median and std-deviation (or a bell-curve-ish)
might be easier to understand.  But I'm no statistician. :-)  It also
likely is easier to read when the numbers of samples don't match (or you
need to stretch them all to the same "width"); using a bell-curve plot of
median/mean/std-dev avoids that problem.

Thumbnails, or columns on the right for each selected browser with
median (or mean), with the best (for that site) in green, the worst in
red would allow eyeballing the results and finding interesting
differences without clicking on 100 links...  (please!)  Or to avoid
overloading the page, one page with graphs like today, another with the
columns I indicated (where clicking on the row takes you to the graph
page for that side).

>--
>Error sources:
>
>Websites may return different content depending on the UA string. While
>this optimization makes sense for a lot of websites, in this situation it
>is difficult to determine if the browser's performance or the website's
>optimizations have more impact on the page load.

Might be interesting to force our UA for a series of tests to match
theirs, or vice-versa, just to check which sites appear to care (and
then mark them).

Great!  It'll be interesting to track how these change over time as well
(or as versions get added to the list).  Again, medians/means/etc may
help with evaluating and tracking this (or automating notices, ala Talos)

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-12 Thread Eric Rahm
On Friday, February 12, 2016 at 9:41:02 AM UTC-8, Patrick Meenan wrote:
> > > For example with Firefox I would be interested in the RSS of the parent 
> > > process (firefox.exe) and the USS of the child processes 
> > > (plugin-container.exe). For Chrome it would be more along the lines of 
> > > the RSS of the main chrome process, and the USS of the 
> > > renderer/gpu/plugin processes (and probably the RSS of the nacl process 
> > > if that's still around).
> > 
> > I can grab a one-time snapshot of these at the end of the test/page load.  
> > Walking the process list was too expensive to do every 100ms (which is how 
> > often I collect CPU utilization).  Should be able to add it today and 
> > report both the parent RSS and sum(child_uss) as separate numbers.
> 
> Just pushed support in WPT for collecting memory stats at the end of a test: 
> https://github.com/WPO-Foundation/webpagetest/commit/629f48ea5c57be57b50e1f97942c98dface593b2
> 
> Sample result: http://www.webpagetest.org/xmlResult/160212_3Z_12D1/
> 
> Specifically:
> browser_process_count - Number of browser processes
> browser_main_memory_kb - Full working set of the "main" browser process in KB 
> (main for WPT is whatever is doing the network communications)
> browser_other_private_memory_kb - Sum of the private working sets for all 
> other browser processes (in KB)
> browser_working_set_kb - Both working set numbers combined

This is great, thanks for adding it. Getting memory usage on "real" sites 
should be quite interesting.


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-12 Thread Patrick Meenan
> > For example with Firefox I would be interested in the RSS of the parent 
> > process (firefox.exe) and the USS of the child processes 
> > (plugin-container.exe). For Chrome it would be more along the lines of the 
> > RSS of the main chrome process, and the USS of the renderer/gpu/plugin 
> > processes (and probably the RSS of the nacl process if that's still around).
> 
> I can grab a one-time snapshot of these at the end of the test/page load.  
> Walking the process list was too expensive to do every 100ms (which is how 
> often I collect CPU utilization).  Should be able to add it today and report 
> both the parent RSS and sum(child_uss) as separate numbers.

Just pushed support in WPT for collecting memory stats at the end of a test: 
https://github.com/WPO-Foundation/webpagetest/commit/629f48ea5c57be57b50e1f97942c98dface593b2

Sample result: http://www.webpagetest.org/xmlResult/160212_3Z_12D1/

Specifically:
browser_process_count - Number of browser processes
browser_main_memory_kb - Full working set of the "main" browser process in KB 
(main for WPT is whatever is doing the network communications)
browser_other_private_memory_kb - Sum of the private working sets for all other 
browser processes (in KB)
browser_working_set_kb - Both working set numbers combined
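The four field names above come straight from Patrick's commit; a consumer could pull them out of the xmlResult payload by tag name. The XML nesting in `SAMPLE` below is a hypothetical stand-in (the real document nests these deeper), which is why the sketch searches by tag name rather than by path:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment using the field names Patrick lists; values invented.
SAMPLE = """
<response>
  <data>
    <browser_process_count>5</browser_process_count>
    <browser_main_memory_kb>148200</browser_main_memory_kb>
    <browser_other_private_memory_kb>210500</browser_other_private_memory_kb>
    <browser_working_set_kb>358700</browser_working_set_kb>
  </data>
</response>
"""

def memory_stats(xml_text):
    """Extract the WPT memory fields from an xmlResult payload by tag name."""
    root = ET.fromstring(xml_text)
    fields = ("browser_process_count", "browser_main_memory_kb",
              "browser_other_private_memory_kb", "browser_working_set_kb")
    # iter() walks the whole tree, so the exact nesting doesn't matter.
    return {f: int(next(root.iter(f)).text) for f in fields}
```

As a sanity check, `browser_working_set_kb` should equal the sum of the other two memory fields for a given run.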


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-11 Thread Eric Rahm
Really interesting project! Is this currently Windows only? It would be great 
if we could get memory usage as well.

Also just to clarify, this is WPT that runs on webpagetest.org with code from 
https://github.com/WPO-Foundation/webpagetest?

-e


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-11 Thread Patrick Meenan
On Thursday, February 11, 2016 at 7:12:07 PM UTC-5, mcaste...@mozilla.com wrote:
> It would be interesting to know the specifications of the system running the 
> tests and to run them on systems with differing characteristics (e.g. 
> different graphics card, different amount of RAM, etc.).
> 
> - Marco.

The "Dulles" machines that appear to have been used for these tests are Windows 
Server 2008 VMs on VMware ESXi with 2 GB of RAM and 1 core allocated (SSD 
storage).  If you want to test on physical machines with GPUs you'll want the 
"Dulles_Thinkpad" location, which runs on Thinkpad T430 laptops running Windows 
7 with Core i5s and Intel integrated GPUs (there are much fewer VMs than 
physical machines though so tests will take longer).

If you want to run tests on arbitrary hardware you're also welcome to deploy 
the agent software on whatever test hardware you have available.


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-11 Thread Patrick Meenan
On Thursday, February 11, 2016 at 7:57:57 PM UTC-5, Patrick Meenan wrote:
> On Thursday, February 11, 2016 at 7:12:07 PM UTC-5, mcaste...@mozilla.com 
> wrote:
> > It would be interesting to know the specifications of the system running 
> > the tests and to run them on systems with differing characteristics (e.g. 
> > different graphics card, different amount of RAM, etc.).
> > 
> > - Marco.
> 
> The "Dulles" machines that appear to have been used for these tests are Windows 
> Server 2008 VMs on VMware ESXi with 2 GB of RAM and 1 core allocated (SSD 
> storage).  If you want to test on physical machines with GPUs you'll want 
> the "Dulles_Thinkpad" location, which runs on Thinkpad T430 laptops running 
> Windows 7 with Core i5s and Intel integrated GPUs (there are much fewer VMs 
> than physical machines though so tests will take longer).
> 
> If you want to run tests on arbitrary hardware you're also welcome to deploy 
> the agent software on whatever test hardware you have available.

Sorry, had that backwards - far fewer physical machines than VMs - sorry for 
the confusion.  The Thinkpads also run SSDs and minimal software (really only 
Microsoft Security Essentials running/installed other than the browser).


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-11 Thread Mark Hammond

On 11/02/2016 11:38 AM, Valentin Gosu wrote:

TL;DR - Firefox does pretty well when compared to Chrome.

The Presto project is a Mozilla platform initiative that looks into
performance differences between Firefox and other user agents, in order to
highlight areas we should improve and to dispel any prejudice caused by
FUD or by past performance differences that no longer hold.


Bug 1239709 seems related - it was opened in response to a recent 
article [1] which shows us as having worse page-load performance than 
Chrome, particularly with Flash involved.


Mark

[1] 
http://www.cio.com/article/2974303/browsers/the-best-web-browser-of-2015-firefox-chrome-edge-ie-and-opera-compared.html


We've begun this project by doing a set of performance runs on
WebPageTest.com in order to compare Firefox's performance against Chrome on
the home pages of the Alexa 100 (the top 100 web sites worldwide). We've
made a visualization tool which plots all of the runs on a graph, so we may
easily compare them:

http://valenting.github.io/presto-plot/plot.html

- Click on a domain link to see results for each site.
- You can view the results sorted or unsorted, or filter by the
browser version.
- You can also compare metrics other than load time, such as Speed
Index, number of DNS queries, etc.
- You can also compare the browsers on several connectivity profiles
(Cable, 3G, Dialup).
- You can click on each individual point to go to the WPT run and view
the results in greater detail.

---
Initial results:

The results consistently show that Firefox is faster than Chrome for both
of our major metrics (load time and speedIndex). On a majority of domains
Firefox is faster on both first loads and reloads. When we analyze 3G
connectivity runs, Firefox's speedup is less substantial, and we encounter
additional domains where Chrome has an edge.

Dial-up connectivity is a bit more difficult to analyze. Because of the
large size of most websites, browsers are often unable to load a page
within 5 minutes, so the run times out. Chrome seems to have more successful
data points, and faster loads on dial-up, but it is difficult to reach a
conclusion based on such a limited data set.

WPT also has Nightly builds - which default to e10s ON. The several data
points we have show similar performance to non-e10s builds (which is good,
considering Nightly builds have more checks and assertions).

---
Interesting metrics:

Load time - is measured as the time from the start of the initial
navigation until the beginning of the window load event (onload).

Speed Index - is the metric recommended by PageSpeedTest.com. Speed Index
measures how fast a page is displayed on screen. Details:
https://goo.gl/7ha6eE

Activity time - measured as the time from the start of the initial
navigation until there was 2 seconds of no network activity after Document
Complete.  This will usually include any activity that is triggered by
javascript after the main page loads.

For most metrics a lower score is better (faster).

--
Error sources:

Websites may return different content depending on the UA string. While
this optimization makes sense for a lot of websites, in this situation it
is difficult to determine if the browser's performance or the website's
optimizations have more impact on the page load.

Since we do not log onto any sites, the data here for sites which rely on
logins to display full data (Facebook, etc) are not representative of most
users' experience and will generally have a different (much more heavily JS
and XHR-based) profile in reality.

For several data sets we may observe a certain spike in the data points –
runs that take 3x-7x longer than usual. This may be due to a number
of reasons: a higher load on the network in the data center, a higher load
on the VMs on which the browsers are running, or the fact that websites
serve variable content – different images, different ads.
It may be that some of the page loads don't load the website completely, or
encounter errors. WPT also saves a screenshot of the page, along with data
on all of the resources it loads, so we may look at really fast loads to
make sure they are complete.

WPT has a maximum load time of 5 minutes. Any page load that takes longer
than that will be recorded as a 0 – so while it may seem to load faster,
that is not the case. Other errors may also be recorded as a 0.
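Because a 0 means "timed out or errored" rather than "loaded instantly", any aggregation over these runs should drop zeros first. A minimal sketch (function name and the defensive upper bound are my own, not part of the tooling):

```python
def usable_load_times(raw_times_ms, timeout_ms=5 * 60 * 1000):
    """Drop runs recorded as 0 (WPT's marker for timeouts/errors).

    Treating a 0 as a real measurement would make failed loads look like
    the fastest runs and skew every mean, median, and percentile.
    The upper bound mirrors WPT's 5-minute cap as a sanity check.
    """
    return [t for t in raw_times_ms if 0 < t <= timeout_ms]
```

Statistics would then be computed over the filtered list, with the number of dropped runs reported separately as a failure rate.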

WPT only loads one page in one tab at a time, whereas the load time on a
user's machine may be affected by their activity in other tabs. e10s
certainly helps in this area.

We only tested the performance of the landing page. It would be possible to
simulate user activity and navigation once the landing page is loaded, but
for this we'd need to write separate scripts for each domain we test.

Location may be an issue, as the RTT to the server or the CDN may vary. All
of the tests we ran were made on the WPT datacenter in 

Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-11 Thread mcastelluccio
It would be interesting to know the specifications of the system running the 
tests and to run them on systems with differing characteristics (e.g. different 
graphics card, different amount of RAM, etc.).

- Marco.


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-11 Thread Patrick Meenan
On Thursday, February 11, 2016 at 6:37:40 PM UTC-5, Valentin Gosu wrote:
> On 11 February 2016 at 19:46, Eric Rahm  wrote:
> 
> > Really interesting project, is this currently Windows only? It would be
> > great if we could get memory usage as well.
> >
> >
> Judging by the UA string - Windows NT 6.1; WOW64 - and the fact that we can
> run IE tests, it seems this is Windows only at the moment.
> Unfortunately, I don't think WPT tracks memory usage.
> 

Windows only for desktop agents and Android and iOS for mobile (though Android 
only supports Chrome currently, haven't had a chance to look at remote 
controlling Firefox on Android).

"Memory Usage" is complicated, especially when you try to compare 
different architectures.  Working set? Virtual memory? Accounting for shared 
pages, etc.  Optimizing for the wrong thing can have negative impacts: it is 
easy to shrink the working set displayed in Task Manager by forcing a process 
to page out periodically, but that's artificial and not good for anybody.

WebPageTest used to track it at one point but the data wasn't actually useful 
so I removed it. If anyone has suggestions on how to do it in a useful way I'd 
be happy to add it.

> 
> > Also just to clarify, this is WPT that runs on webpagetest.org with code
> > from https://github.com/WPO-Foundation/webpagetest?
> >
> 
> Yes



Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-11 Thread Eric Rahm
On Thursday, February 11, 2016 at 5:03:05 PM UTC-8, Patrick Meenan wrote:
> "Memory Usage" is complicated, especially when you try to compare 
> different architectures.

Sure, but this is all Windows for desktop at least.

> Working set? Virtual memory? Accounting for shared pages, etc.

Working set (RSS) and private working set (USS) are the most interesting 
numbers. This gets tricky with multi-process setups, but a reasonable baseline 
I've been looking at is |total_memory = parent_rss + sum(child_uss)|

For example with Firefox I would be interested in the RSS of the parent process 
(firefox.exe) and the USS of the child processes (plugin-container.exe). For 
Chrome it would be more along the lines of the RSS of the main chrome process, 
and the USS of the renderer/gpu/plugin processes (and probably the RSS of the 
nacl process if that's still around).
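Eric's baseline can be sketched with the third-party psutil package. The pure helper is just the formula above; the `snapshot` function is a hypothetical illustration of collecting it (process names aside, note that USS is only available where psutil can read per-page sharing info, which is why the one-time snapshot Patrick mentions is the practical approach):

```python
def total_memory_kb(parent_rss_kb, child_uss_kb):
    """Eric's baseline: total = parent RSS + sum of child USS (all in KB)."""
    return parent_rss_kb + sum(child_uss_kb)

def snapshot(parent_pid):
    """One-time snapshot of a browser's memory under the scheme above.

    Requires psutil (third-party). For Firefox the parent would be
    firefox.exe and the children plugin-container.exe; for Chrome, the
    main chrome process and its renderer/gpu/plugin children.
    """
    import psutil  # imported lazily: third-party, platform-dependent
    parent = psutil.Process(parent_pid)
    parent_rss_kb = parent.memory_info().rss // 1024
    child_uss_kb = [c.memory_full_info().uss // 1024
                    for c in parent.children(recursive=True)]
    return total_memory_kb(parent_rss_kb, child_uss_kb)
```

Reporting the parent RSS and the child-USS sum separately, as Patrick suggests below, keeps the raw inputs available even if the combined formula changes later.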

> Optimizing for the wrong thing can have negative impacts (it is easy to 
> shrink the working set displayed in Task Manager by forcing a process to 
> page out periodically, but that's artificial and not good for anybody).

Artificial optimizations are an unfortunate side effect of every benchmark 
(particularly in js-land), I'm not sure it's our place to not measure something 
because we think people might game it. 

> WebPageTest used to track it at one point but the data wasn't actually useful 
> so I removed it. If anyone has suggestions on how to do it in a useful way 
> I'd be happy to add it.

See above.

-e


Re: Presto: Comparing Firefox performance with other browsers (and e10s with non-e10s)

2016-02-11 Thread Valentin Gosu
On 11 February 2016 at 19:46, Eric Rahm  wrote:

> Really interesting project, is this currently Windows only? It would be
> great if we could get memory usage as well.
>
>
Judging by the UA string - Windows NT 6.1; WOW64 - and the fact that we can
run IE tests, it seems this is Windows only at the moment.
Unfortunately, I don't think WPT tracks memory usage.


> Also just to clarify, this is WPT that runs on webpagetest.org with code
> from https://github.com/WPO-Foundation/webpagetest?
>

Yes