On 20/09/16 21:27, dpr...@reed.com wrote:
I constantly see the claim that >50% of transmitted data on the Internet are 
streaming TV. However, the source seems to be as hard to nail down


I don't think the source is hard to identify. It's Sandvine press releases. That's what the periodic stories on Ars Technica are always derived from.

https://www.sandvine.com/pr/2015/12/7/sandvine-over-70-of-north-american-traffic-is-now-streaming-video-and-audio.html

  as the original claim that >50% of Internet traffic was pirated music being 
sent over bittorrent.

You recently repeated that statistic as if it were a verified fact.

I remember that in the early days of WiFi DSSS availability, the claim was repeatedly made 
from podiums at conferences I attended that "the amount of WiFi in parking lots on 
Sand Hill Road [then the location of most major Silicon Valley VC firms] had made it so 
that people could not open their car doors with their remote keys".  This was not 
intended as hyperbole or a joke.  I got into the habit of asking the speakers how they 
knew this, and they told me that their VC friends had all had it happen to them...

Propaganda consists of clever stories that "sound plausible" and that people spread 
because the stories seem to support something they *wish* were true.

I suspect that this 70% number is more propaganda of this sort.

In case it is not obvious, the beneficiaries of this particular propaganda are those who want to 
claim various things - for example, that the Internet is now just TV broadcasting and thus should 
be treated that way (Internet Access Providers should select "channels", charge for allowing them 
through to customers, improve the "quality of programming", and censor anything offensive).

So I am extremely curious as to an actual source of such a number, how it was 
measured, and how its validity can be tested reproducibly.

Some may remember that the original discovery of "bufferbloat" was due to Comcast 
deploying Sandvine gear in its network to send RST packets for any connections that 
involved multiple concurrent TCP uploads (using DPI to guess which TCP connections 
to RST and what header data to put on the forged RST packets).
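
To make the mechanism concrete, here is a minimal sketch of the kind of forged RST
such a DPI box injects (using scapy; the addresses, ports, and sequence number below
are made up - a real middlebox has to snoop them from the live connection for the
RST to be accepted):

    from scapy.all import IP, TCP, send

    # Hypothetical values; a DPI box learns these by watching the flow.
    client, server = "192.0.2.10", "198.51.100.20"
    sport, dport, guessed_seq = 51000, 6881, 123456789

    # A bare TCP segment with the RST flag set, addressed as if it came
    # from the server.  An in-window sequence number makes the client's
    # stack tear the connection down.
    rst = IP(src=server, dst=client) / TCP(sport=dport, dport=sport,
                                           flags="R", seq=guessed_seq)
    send(rst)  # emitting raw packets requires root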

Their argument for why they *had* to do that was that they "had data" that said 
that their network was being overwhelmed by bittorrent pirates.

In fact, the problem was bufferbloat - DOCSIS 2.0 gear that was designed to 
fail miserably under any intense upload.  The part about bittorrent piracy was 
based on claimed measurements of the types of packets causing the problem - 
measurements that apparently were never in fact performed.

Hence: I know it is a quixotic thing on my part, but the scientist in me wants 
to see the raw data and see the methods used to obtain it.

I have friends who actually measure Internet traffic (kc claffy, for example), 
and they do a darn good job.  The difficulty of getting data that could support 
the 70% statistic is *so high* that it seems highly likely that no such 
measurement has ever actually been done.

But if someone has done such a measurement (directly or indirectly), defining 
their terms and methodology sufficiently so that it is a reproducible result, 
it would probably merit an award for technical excellence.

Otherwise, please, please, please don't lend your name to promulgating 
nonsense, even if it seems useful to argue your case.  Verify your sources.



On Monday, September 19, 2016 4:26pm, "Dave Taht" <dave.t...@gmail.com> said:

ok, I got BBR built with net-next + v2 of the BBR patch. If anyone
wants .deb files for ubuntu, I can put them up somewhere. Some quick
results:

http://blog.cerowrt.org/post/bbrs_basic_beauty/
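
A minimal sketch of opting a single socket into bbr, for anyone who wants to
poke at it without the .debs (assumes Linux with the tcp_bbr module available,
Python 3.6+, and - at this stage - sch_fq as the qdisc, since bbr relies on fq
for pacing):

    import socket

    # Ask the kernel to use BBR for just this connection.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")

    # Read it back to confirm (the returned bytes are NUL-padded).
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))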

I haven't got around to testing cubic vs bbr in a drop-tail
environment.  My take on matters is that with fq (fq_codel) in place, bbr
will work beautifully against cubic, and I just wanted to enjoy the
good bits for a while before tearing apart the bad... and to stay
focused on fixing wifi.

I had to go and rip out all the wifi patches to get here... as some
code landed in the ath10k driver that looks to break everything there,
so I need to test that as a baseline first - and I wanted to see if
sch_fq+bbr did anything to make the existing ath9k driver work any
better.




On Sat, Sep 17, 2016 at 2:33 PM, Dave Taht <dave.t...@gmail.com> wrote:
On Sat, Sep 17, 2016 at 2:11 PM,  <dpr...@reed.com> wrote:
The assumption that each flow on a path has a minimum, stable RTT fails in
wireless and multipath networks.
Yep. But we're getting somewhere serious on having stabler RTTs for
wifi, and achieving airtime fairness.

http://blog.cerowrt.org/flent/crypto_fq_bug/airtime_plot.png



However, it's worth remembering two things: buffering above a certain level is
never an improvement,
which BBR recognizes by breaking things up into separate bandwidth and
RTT analysis phases.
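
(To make that concrete, here is the quantity BBR is steering toward -
inflight data near the bandwidth-delay product, built from a windowed-max
bandwidth sample and a windowed-min RTT sample.  Illustrative numbers, not
anything from the patch:)

    # BBR's operating point, conceptually: inflight ~= max_bw * min_rtt.
    def bdp_bytes(max_bw_bps: float, min_rtt_s: float) -> float:
        return max_bw_bps * min_rtt_s / 8

    # e.g. a 20 Mbit/s bottleneck with a 40 ms RTT floor:
    print(bdp_bytes(20e6, 0.040))  # 100000.0 bytes, ~69 full-size segments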

and flows through any shared router come and go quite frequently on the real
Internet.
This is very much why I remain an advocate of fq on the routers: your
congestion control algorithm for your particular flow becomes more independent
of the other flows, and ~0 latency and jitter for sparse flows is
meaningful.
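
As a toy sketch of why that holds (loosely the shape of fq_codel's new/old
flow lists, with the DRR quantum accounting and the CoDel dropper omitted):

    from collections import deque

    class ToyFQ:
        # Hash packets to per-flow queues; serve flows that just became
        # active before long-running backlogged ones.  Real fq_codel adds
        # DRR quantums and CoDel dropping on top of this skeleton.
        def __init__(self, nqueues=1024):
            self.queues = [deque() for _ in range(nqueues)]
            self.new_flows = deque()   # flows that were idle a moment ago
            self.old_flows = deque()   # backlogged (bulk) flows

        def enqueue(self, pkt, flow_key):
            i = hash(flow_key) % len(self.queues)
            if not self.queues[i]:     # was idle -> treat as sparse
                self.new_flows.append(i)
            self.queues[i].append(pkt)

        def dequeue(self):
            while self.new_flows or self.old_flows:
                src = self.new_flows or self.old_flows
                i = src.popleft()
                if self.queues[i]:
                    pkt = self.queues[i].popleft()
                    if self.queues[i]:  # still backlogged -> bulk list
                        self.old_flows.append(i)
                    return pkt
            return None

A sparse flow's packet enqueued while a bulk flow is backlogged lands on the
new_flows list and goes out on the very next dequeue - that is the ~0 latency
and jitter referred to above.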

Thus RTT on a single flow is not a reasonable measure of congestion. ECN marking
is far better and packet drops are required for bounding time to recover after
congestion failure.
Aww, give the code a chance, it's only been public for a day! I
haven't even got it to compile yet!

I think it is a *vast* improvement over cubic, and possibly the first
delay-sensitive tcp that can compete effectively with it. I'm dying to test
it thoroughly, but have a whole bunch of other patches for wifi in my queue.

The authors suffer from typical naivete in thinking that all flows are file
transfers and that file transfer throughput is the right basic perspective,
rather than end-to-end latency/jitter due to sharing, and fair-sharing
stability.
While I agree *strongly* that lots of short flows is how the internet
mostly operates (I used to cite a paper on this a lot), a huge number of
bulk flows exists, and those have been messing up the short flows. I think
the number was something like "70% of internet traffic has become
netflix-like". *Anything* e2e that can reduce the negative impact of the
big fat flows on everything else is a win.




-----Original Message-----
From: "Jonathan Morton" <chromati...@gmail.com>
Sent: Sat, Sep 17, 2016 at 4:11 pm
To: "Maciej Soltysiak" <mac...@soltysiak.com>
Cc: "Maciej Soltysiak" <mac...@soltysiak.com>,
"cerowrt-devel@lists.bufferbloat.net" <cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] BBR congestion control algorithm for TCP
in net-next


On 17 Sep, 2016, at 21:34, Maciej Soltysiak  wrote:

Cake and fq_codel work on all packets and aim to signal congestion early to
network stacks by dropping; BBR works on TCP and aims to prevent packet loss.
By dropping, *or* by ECN marking.  The latter avoids packet loss.
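
(For concreteness: the ECN signal is the low two bits of the IP TOS byte.
A minimal sketch of making a socket's traffic ECN-capable - UDP here for
simplicity; for TCP the kernel negotiates ECN itself, per net.ipv4.tcp_ecn:)

    import socket

    # RFC 3168 codepoints for the two-bit ECN field in the IP header.
    NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

    # Mark outgoing datagrams ECT(0); an AQM like fq_codel or cake can
    # then signal congestion by rewriting the field to CE instead of
    # dropping the packet.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)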

  - Jonathan Morton


--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org

_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
