On 12 February 2016 at 17:11, Randell Jesup <rje...@jesup.org> wrote:

> >    - You can click on each individual point to go to the WPT run and view
> >the results in greater detail
>
> What does "run index" mean in the graphs?  The values appear to be
> sorted from best to worst; so it's comparing best to best, next-best to
> next-best, etc?  Ah, and I see "sorted" undoes the sorting.
>

In the sorted version it is the index of that run in the sorted array,
which is mostly meaningless.
In the unsorted version it is the index of the run in chronological
order, though the firstRun and repeatRun of a given run share the same
index.
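
To make that concrete, here is a minimal sketch (with made-up numbers
and field names) of how the two index schemes relate:

    # Minimal sketch of the two x-axis schemes, with hypothetical data.
    # Each chronological run yields a firstView and a repeatView time (ms).
    runs = [
        {"run": 1, "firstView": 3200, "repeatView": 1400},
        {"run": 2, "firstView": 2900, "repeatView": 1600},
        {"run": 3, "firstView": 3500, "repeatView": 1300},
    ]

    # Unsorted: the x value is the chronological run number, so the
    # firstView and repeatView points of one run share the same index.
    unsorted_first = [(r["run"], r["firstView"]) for r in runs]
    unsorted_repeat = [(r["run"], r["repeatView"]) for r in runs]

    # Sorted: each series is ranked independently from best to worst, so
    # the x value is just the position in the sorted array and means
    # nothing beyond "n-th fastest run of this series".
    sorted_first = list(enumerate(sorted(r["firstView"] for r in runs)))
    sorted_repeat = list(enumerate(sorted(r["repeatView"] for r in runs)))

    print(sorted_first)   # [(0, 2900), (1, 3200), (2, 3500)]
    print(sorted_repeat)  # [(0, 1300), (1, 1400), (2, 1600)]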


> I'd think displaying mean/median and std-deviation (or a bell-curve-ish)
> might be easier to understand.  But I'm no statistician. :-)  It also
> likely is easier to read when the numbers of samples don't match (or you
> need to stretch them all to the same "width"; using a bell-curve plot of
> median/mean/std-dev avoids that problem).
>

I also think it would be quite useful, but my knowledge of statistics is
pretty basic.
I tried to get the number of samples to match, but for a few domains that
didn't work. I assume there's a bug in my scripts.
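
For what it's worth, the summary statistics themselves are cheap to
compute with Python's standard library; a minimal sketch with made-up
samples:

    import statistics

    # Hypothetical page-load samples (ms) for one site and one browser.
    samples = [3200, 2900, 3500, 3100, 3050, 4200, 2950]

    mean = statistics.mean(samples)
    median = statistics.median(samples)
    stdev = statistics.stdev(samples)  # sample standard deviation

    # Unlike the per-run plot, these summaries don't require the number
    # of samples to match across browsers.
    print(f"mean={mean:.0f}ms median={median:.0f}ms stdev={stdev:.0f}ms")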


>
> Thumbnails, or columns on the right for each selected browser with
> median (or mean), with the best (for that site) in green, the worst in
> red would allow eyeballing the results and finding interesting
> differences without clicking on 100 links.......  (please!)  Or to avoid
> overloading the page, one page with graphs like today, another with the
> columns I indicated (where clicking on the row takes you to the graph
> page for that site).
>

What I noticed is that pages with lots of elements, especially elements
coming from different sources, seem to have higher variability: pages
such as flickr, with lots of images of various sizes, or pages that load
various ads.
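
One way to quantify that observation would be the coefficient of
variation per site; a sketch with made-up numbers:

    import statistics

    # Hypothetical load-time samples (ms) per site; sites with many
    # third-party resources (images, ads) tend to spread more.
    sites = {
        "flickr.com": [4100, 5600, 3900, 6800, 4400],
        "example.org": [1200, 1250, 1180, 1220, 1210],
    }

    for site, samples in sites.items():
        # Coefficient of variation: stdev relative to the mean, so sites
        # with different absolute load times stay comparable.
        cv = statistics.stdev(samples) / statistics.mean(samples)
        print(f"{site}: cv={cv:.2f}")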


>
> >------------------
> >Error sources:
> >
> >Websites may return different content depending on the UA string. While
> >this optimization makes sense for a lot of websites, in this situation it
> >is difficult to determine if the browser's performance or the website's
> >optimizations have more impact on the page load.
>
> Might be interesting to force our UA for a series of tests to match
> theirs, or vice-versa, just to check which sites appear to care (and
> then mark them).
>

I've tried it for a few domains and it didn't make much of a difference.
I'll try it for all the domains to see if there is a pattern we could make
out.
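
For reference, this is roughly how a run with an overridden UA string
could be requested through the WebPageTest REST API (via the uastring
parameter; the server URL, API key, and UA value below are
placeholders):

    import requests  # third-party; pip install requests

    # Sketch: kick off a WPT run with a custom UA string via runtest.php.
    WPT_SERVER = "https://www.webpagetest.org"
    API_KEY = "..."

    params = {
        "url": "https://www.flickr.com/",
        "k": API_KEY,
        "f": "json",   # JSON response instead of HTML
        "runs": 5,
        # Force Chrome's UA string (example value) in a Firefox run to
        # see whether the server sends different content per UA.
        "uastring": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                    "AppleWebKit/537.36 (KHTML, like Gecko) "
                    "Chrome/48.0.2564.97 Safari/537.36",
    }

    resp = requests.get(f"{WPT_SERVER}/runtest.php", params=params)
    data = resp.json()
    print(data.get("data", {}).get("testId"))  # id if the run was accepted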


>
> Great!  It'll be interesting to track how these change over time as well
> (or as versions get added to the list).  Again, medians/means/etc may
> help with evaluating and tracking this (or automating notices, à la Talos)
>

I thought about doing this, but Talos always uses static content over a
local connection, whereas this goes over a real network, whose
performance may vary, and loads real websites, which may change their
content or optimize for different situations. I expect it's useful for
confirming certain properties, such as whether page loads are faster on
Fx, Chrome or Nightly, and by how much, but it probably can't produce
results that stay meaningful over a longer period of time.
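
That said, even noisy real-network data might support a coarse automated
check, e.g. flagging when the median of recent runs drifts well above
the historical spread; a sketch with arbitrary thresholds and made-up
numbers:

    import statistics

    def check_regression(history, recent, threshold=2.0):
        """Flag if the median of recent runs sits more than `threshold`
        standard deviations above the historical median."""
        base_median = statistics.median(history)
        base_stdev = statistics.stdev(history)
        return statistics.median(recent) > base_median + threshold * base_stdev

    # Hypothetical medians of past and recent test batches (ms).
    history = [3100, 3000, 3250, 3150, 3050, 3200]
    recent = [3900, 4100, 3950]
    if check_regression(history, recent):
        print("possible page-load regression")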