Hi Jonathan,
On Jul 27, 2014, at 15:00, Jonathan Morton <[email protected]> wrote:
> A marketing number? Well, as we know, consumers respond best to "bigger is
> better" statistics. So anything reporting delay or ratio in the ways
> mentioned so far is doomed to failure - even if we convince the industry (or
> the regulators, more likely) to adopt them.
I still have hope that educating the end users is the best strategy.
>
> Another problem that needs solving is that marketing statistics tend to get
> gamed a lot. They must therefore be defined in such a way that gaming them
> is difficult without actually producing a corresponding improvement in the
> service. That's similar in nature to a security problem, by the way.
Easy: just report the mean latency increase for a fixed number of
measurements (say 100), also report the standard deviation, and teach the
users that both numbers need to stay small.
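A minimal sketch of what that could look like (the probe function, its units,
and the idle-baseline handling are my assumptions, not a spec):

import statistics

def latency_report(probe_ms, n=100):
    # Take n latency samples (in milliseconds) while the link is under load
    # and report their mean and standard deviation; probe_ms() stands in for
    # whatever measurement the test actually uses (ICMP, UDP echo, ...).
    samples = [probe_ms() for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

# The "latency increase" would then be the mean under load minus the mean
# measured on the idle link, and both that increase and the standard
# deviation under load should stay small.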
>
> I have previously suggested defining a "responsiveness" measurement as a
> frequency. This is the inverse of latency, so it gets bigger as latency goes
> down. It would be relatively simple to declare that responsiveness is to be
> measured under a saturating load.
>
> Trickier would be defining where in the world/network the measurement should
> be taken from and to.
If responsiveness is to be measured under load, there is only one
reasonable location for the test server: directly upstream of the CPE. All
networks are oversubscribed, especially home networks. Having all the users on
a node run full saturation tests at the same time (or even enough of them
overlapping in time) is going to expose that (usually unproblematic)
oversubscription in a way an ISP cannot accept as a public benchmark… For DSL,
link saturation would only affect the line currently testing its speed,
leaving all other users on the node alone. For setups where already the first
access link is shared, like cable or GPON, the congestion cannot be fully
hidden (though it can be ameliorated by staggering the measurements), but at
least uplink saturation can be avoided…
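Staggering could be as simple as a random start offset per CPE; a rough
sketch (the six-hour window is a made-up number, not a proposal):

import random
import time

def staggered_start(run_test, window_s=6 * 3600):
    # Spread test starts uniformly over a measurement window so that most
    # saturation tests on a shared cable/GPON segment do not overlap.
    time.sleep(random.uniform(0, window_s))
    run_test()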
> An ISP which hosted a test server on its internal network would hold an
> unfair advantage over other ISPs, so the sane solution is to insist that test
> servers are at least one neutral peering hop away from the ISP.
Again, we would very much expose the oversubscription of the ISP's
uplink to the peering point, burning good bandwidth for what gain? (I know for
what gain, but how would you convince an ISP that it is in their interest as
well?)
> ISPs that are geographically distant from the nearest test server would be
> disadvantaged, so test servers need to be provided throughout the densely
> populated parts of the world - say one per timezone and ten degrees of
> latitude if there's a major city in it.
The fewer test servers, the worse this issue gets; if anything, we need
CDNs to distribute the traffic caused by potentially the whole internet's end
nodes running these tests repeatedly :) .
>
> At the opposite end of the measurement, we have the CPE supplied with the
> connection. That will of course be crucial to the upload half of the
> measurement.
>
> While we're at it, we could try redefining bandwidth as an average, not a
> peak value.
I think that Neil and Martin are working hard to teach us that averages
do not tell the right story… So maybe report the worst 5% quantile of latency
and bandwidth per month instead?
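As a sketch of such a report (interpreting "worst 5%" as the 95th-percentile
latency and the 5th-percentile bandwidth over one month of samples; that
interpretation is mine):

import statistics

def monthly_report(latencies_ms, bandwidths_kbps):
    # 20-quantiles give cut points in 5% steps; the last one is roughly the
    # value 95% of latency samples stay below, the first one roughly the
    # bandwidth that is exceeded 95% of the time.
    lat_cuts = statistics.quantiles(latencies_ms, n=20)
    bw_cuts = statistics.quantiles(bandwidths_kbps, n=20)
    return {"latency_ms_worst_5pct": lat_cuts[-1],
            "bandwidth_kbps_worst_5pct": bw_cuts[0]}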
> If the ISP has a "fair usage cap" of 300GB per 30 days, then they aren't
> allowed to claim an average bandwidth greater than 926kbps. National
> broadband availability initiatives can then be based on that figure.
Question: would the proposed tests count against the cap? Also, doesn't
that assume that not having caps is in the interest of the state? If that were
the case, legislators could simply have disallowed them.
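(For reference, the 926 kbps figure above is just the cap spread evenly
over the 30 days, assuming decimal gigabytes:)

cap_bits = 300e9 * 8               # 300 GB expressed in bits
period_s = 30 * 24 * 3600          # 30 days in seconds
print(cap_bits / period_s / 1e3)   # ~925.9 kbps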
Best Regards
Sebastian
>
> - Jonathan Morton
>
_______________________________________________
Cerowrt-devel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cerowrt-devel