Re: [Cerowrt-devel] Google working on experimental 3.8 Linux kernel for Android

2013-03-01 Thread Ketan Kulkarni
Hi David,

While I tend to agree with most of this, the complexity and the many knobs
in mobile networks do come with the added technology.

Consider the end-user perspective: getting a voice call while
surfing or downloading on 2G/3G interrupts the whole download, which is
annoying.
So when LTE provides a spec to handle VoIP and internet traffic
simultaneously, that is a great benefit to the end user.
When roaming on LTE and moving to a 2G/3G network, or vice versa, the
handoff occurs seamlessly and the internet traffic is not interrupted. This
was not the case in previous mobile generations, and it matters for the
daily usage of mobile phones.
Similarly, going ahead we might very well have handoff from wifi to LTE -
why not?

Now, for (non-technical) mobile users these are good features that should
simply be there. But from the network's perspective, where and how will this
complexity be handled? Some nodes in the network will definitely have to
worry about LTE, UMTS, CDMA, eHRPD and what not.
This gives some idea of how complex the network really is:
http://www.trilliumposter.com/lte.php

From the mobile ISP's perspective, they invest heavily in getting spectrum
licenses from governments, and it takes years to recover the amount
invested. Moreover, consider e.g. Facebook embedding ads matched to a
user's interests. The ad revenue benefits Facebook; however, the mobile
ISPs who carry the same data, and who potentially hold (more relevant)
information about the subscriber, cannot gain from this.
This is a real problem for ISPs to solve. Monetization is becoming more
relevant to ISPs, which again will definitely lead to more complexity in
the network.

As for field engineers, I think many of them are bound by what they are
"asked" to do. They have a certain task to complete, with exactly what to
test, and in what time. Not all field engineers will think beyond that. If
their senior technical leads ask them to test packet drops, they will test
drops. If they are later asked to test latency, they will do so with
whatever resources and knowledge they have. We cannot expect the average
engineer to think beyond a certain level and do what is not expected of
them by their seniors. Not all of them will think of the internet as a
whole.

Even network vendors like Cisco, ALU or Ericsson will test latency and all
the other stuff only when there is a real push from customers like VzW and
AT&T. The vendors can certainly invest, research and come up with the
latest measurement methods and techniques, and they are doing so to some
extent. Still, it will take some time to arrive at a really good picture
here.

I think tests like RRUL are definitely a good start and are most relevant
to ISPs like VzW and AT&T, because ultimately they own the network. If these
companies are convinced, it becomes a little easier to push the vendors to
do the right thing, and it sets the correct target for field engineers to
accomplish.
We have to accept the fact that no standard so far specifies what latencies
to test or how to test them. This is one of the many reasons why beating
bloat has been such a daunting task in this complex world.
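
To make "what and how to test latencies" concrete, here is a minimal sketch
of a latency-under-load measurement in the spirit of RRUL: saturate the path
with bulk TCP streams while timing small round trips. The sink address is a
placeholder, and a real test would use netperf/RRUL against a proper server:

# Minimal latency-under-load sketch: bulk uploads plus a delay probe.
import socket
import threading
import time

SINK = ("192.0.2.10", 5001)   # placeholder bulk-traffic sink
STREAMS = 4                   # parallel uploads, enough to fill the queue
DURATION = 30                 # seconds of load

def upload():
    """One greedy TCP stream pushing data as fast as the path allows."""
    payload = b"x" * 65536
    with socket.create_connection(SINK) as s:
        while True:
            s.sendall(payload)

def probe(samples):
    """Time a small TCP connect once a second: a crude delay-under-load probe."""
    while True:
        t0 = time.monotonic()
        try:
            socket.create_connection(SINK, timeout=5).close()
            samples.append((time.monotonic() - t0) * 1000.0)
        except OSError:
            samples.append(float("inf"))  # a timeout counts as unbounded delay
        time.sleep(1)

samples = []
for _ in range(STREAMS):
    threading.Thread(target=upload, daemon=True).start()
threading.Thread(target=probe, args=(samples,), daemon=True).start()
time.sleep(DURATION)
print("median latency under load: %.1f ms" % sorted(samples)[len(samples) // 2])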

From what I know, every ISP already has a latency and jitter budget for
every node in their network, and these are well communicated to vendors.
But how one measures, and under what scenarios, is still not clearly
defined or understood. So everyone interprets these budgets according to
their own knowledge and understanding, and an internal consensus is
reached.
Who looks at e2e latency? Who looks at the complex interactions of these
nodes and their effects? I really don't know.
Add to that the dynamic behavior of the cell and radio access, and all of
this really complicates things in the network.
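
As a rough sketch of how such a budget composes, and of one way to compute
jitter from delay samples (inter-packet delay variation, in the spirit of
RFC 3393) - the node names and numbers below are invented for illustration,
not anyone's real budget:

# Per-node one-way budgets summed into an e2e figure (illustrative values).
budgets_ms = {"eNodeB": 5.0, "backhaul": 10.0, "SGW": 2.0, "PGW": 2.0, "peering": 15.0}
print("e2e budget: %.1f ms" % sum(budgets_ms.values()))  # nodes in series

def jitter(delays_ms):
    """Mean absolute difference between consecutive one-way delays."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# One bloated sample (80 ms) dominates the jitter figure:
print("jitter: %.1f ms" % jitter([34.0, 36.5, 33.1, 80.2, 35.0]))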

While RRUL is the right step, there is still a long way to go for
a better picture of mobile networks.

Sometimes "LTE = Long Term Employment" is an apt description.

Thanks,
Ketan

On Fri, Mar 1, 2013 at 9:57 PM,  wrote:

> I don't doubt that they test.  My point was different - there are too many
> knobs and too big a parameter space to test effectively.  And that's the
> point.
>
>
>
> I realize that it's extremely fun to invent parameters in "standards
> organizations" like 3GPP.  Everybody has their own favorite knob, and a
> great rationale for some unusual, but critically "important" customer
> requirement that might come up some day.  Hell, Linux has a gazillion (yes,
> that's a technical term in mathematics!) parameters, almost none of which
> are touched.  This reflects the fact that nothing ever gets removed once
> added.  LTE is now going into release 12, and it's completely ramified into
> "solutions" to problems that will never be fixed in the field with those
> solutions.  It's great for European Publicly Funded Academic-Industry
> research - lots for those "Professors" to claim they invented.
>
>
>
> I've worked with telco contractors in the field.   They don't read
> manuals, and they don't read specs.  They have a job to do, a

Re: [Cerowrt-devel] [Bloat] some good bloat related stuff on the ICCRG agenda, IETF #86 Tuesday, March 12 2013, 13:00-15:00, room Caribbean 6

2013-03-01 Thread dpreed

+1
 
-Original Message-
From: "Wesley Eddy" 
Sent: Friday, March 1, 2013 1:29pm
To: "Matt Mathis" 
Cc: dpr...@reed.com, bloat-annou...@lists.bufferbloat.net, "bloat" 
, cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Bloat] [Cerowrt-devel] some good bloat related stuff on the ICCRG 
agenda, IETF #86 Tuesday, March 12 2013, 13:00-15:00, room Caribbean 6



On 2/28/2013 2:55 PM, Matt Mathis wrote:
> Two of the tests in my model based metrics draft (for IPPM) are for
> AQM (like) tests.   One we have pretty good theory for (preventing
> standing queues in congestion avoidance) and the other we don't
> (exiting from slowstart at a reasonable window).
> 
> See: draft-mathis-ippm-model-based-metrics-01.txt
> 
> My intent is that these tests will become part of a future IPPM
> standard on what a network must do in order to support modern
> applications at specific performance levels. Although the draft
> will not specify AQM algorithms at all, it will forbid some non-AQM
> behaviors such as unreasonable standing queues.   To the extent that
> it gets traction as a standard, it will strongly encourage deployment,
> even if we are not totally convinced that our current AQM algorithms
> are 100% correct.


I like the idea.


> However, it is not clear that we need to standardize AQM - it strikes
> me as one area where we can permit pretty much unfettered diversity in
> the operational Internet as long as it meets a pretty low "it seems
> to work" bar.


Fully agreed!  Publishing specs is only useful to get some
known-good algorithm(s) that folks can safely implement
without thinking too hard, and also to burn off any possible
ambiguities in the descriptions of the algorithms, catch any
corner cases, etc.


> For this reason it is important to deploy your favorite algorithm(s)
> ASAP, because they are all infinitely better than none, and future
> improvements will be relatively minor by comparison.
> 


Agreed, with the caveat that not *all* conceivable algorithms
are good :).  One of the things I think might be useful rather
than (or in addition to) specifying algorithms, is specifying
test setups or metrics that allow any algorithm to be checked
for sanity, as a black box.

-- 
Wes Eddy
MTI Systems
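
Matt's "forbid unreasonable standing queues" test can be checked
mechanically from delay samples. A rough sketch, with an invented 100 ms
allowance rather than any figure from the draft:

# "No unreasonable standing queue" check in the spirit of
# draft-mathis-ippm-model-based-metrics: under sustained load, steady-state
# delay should stay near the unloaded baseline.
def has_standing_queue(idle_rtts_ms, loaded_rtts_ms, max_extra_ms=100.0):
    baseline = min(idle_rtts_ms)             # best-case path RTT
    loaded = sorted(loaded_rtts_ms)
    steady = loaded[len(loaded) // 2]        # median RTT under sustained load
    return steady - baseline > max_extra_ms  # True => the queue is standing

print(has_standing_queue([20, 21, 20], [28, 31, 29]))     # False: queue drains
print(has_standing_queue([20, 21, 20], [410, 425, 418]))  # True: bloated buffer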


Re: [Cerowrt-devel] [Bloat] some good bloat related stuff on the ICCRG agenda, IETF #86 Tuesday, March 12 2013, 13:00-15:00, room Caribbean 6

2013-03-01 Thread Wesley Eddy
On 2/28/2013 2:55 PM, Matt Mathis wrote:
> Two of the tests in my model based metrics draft (for IPPM) are for
> AQM (like) tests.   One we have pretty good theory for (preventing
> standing queues in congestion avoidance) and the other we don't
> (exiting from slowstart at a reasonable window).
> 
> See: draft-mathis-ippm-model-based-metrics-01.txt
> 
> My intent is that these tests will become part of a future IPPM
> standard on what a network must do in order to support modern
> applications at specific performance levels. Although the draft
> will not specify AQM algorithms at all, it will forbid some non-AQM
> behaviors such as unreasonable standing queues.   To the extent that
> it gets traction as a standard, it will strongly encourage deployment,
> even if we are not totally convinced that our current AQM algorithms
> are 100% correct.


I like the idea.


> However, it is not clear that we need to standardize AQM - it strikes
> me as one area where we can permit pretty much unfettered diversity in
> the operational Internet as long as it meets a pretty low "it seems
> to work" bar.


Fully agreed!  Publishing specs is only useful to get some
known-good algorithm(s) that folks can safely implement
without thinking too hard, and also to burn off any possible
ambiguities in the descriptions of the algorithms, catch any
corner cases, etc.


> For this reason it is important to deploy your favorite algorithm(s)
> ASAP, because they are all infinitely better than none, and future
> improvements will be relatively minor by comparison.
> 


Agreed, with the caveat that not *all* conceivable algorithms
are good :).  One of the things I think might be useful rather
than (or in addition to) specifying algorithms, is specifying
test setups or metrics that allow any algorithm to be checked
for sanity, as a black box.

-- 
Wes Eddy
MTI Systems
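
The black-box sanity check Wes suggests might be outlined like this: drive
the queue with load and judge only externally visible behaviour, never the
algorithm's internals. run_load() is a stand-in for a real traffic generator
such as RRUL, and the pass criteria are invented placeholders:

# Outline of a black-box AQM sanity check: treat the queue as opaque.
from dataclasses import dataclass

@dataclass
class LoadResult:
    median_delay_ms: float    # queueing delay measured under sustained load
    loss_pct: float           # packet loss under the same load
    throughput_pct: float     # achieved share of link capacity

def run_load(device: str) -> LoadResult:
    """Stand-in: drive `device` with real traffic and measure the results."""
    raise NotImplementedError("hook up netperf/RRUL or similar here")

def aqm_sane(r: LoadResult) -> bool:
    return (r.median_delay_ms < 100.0     # no large standing queue...
            and r.loss_pct < 5.0          # ...without excessive drops...
            and r.throughput_pct > 85.0)  # ...or starving the link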


Re: [Cerowrt-devel] Google working on experimental 3.8 Linux kernel for Android

2013-03-01 Thread dpreed

I don't doubt that they test.  My point was different - there are too many 
knobs and too big a parameter space to test effectively.  And that's the point.
 
I realize that it's extremely fun to invent parameters in "standards 
organizations" like 3GPP.  Everybody has their own favorite knob, and a great 
rationale for some unusual, but critically "important" customer requirement 
that might come up some day.  Hell, Linux has a gazillion (yes, that's a 
technical term in mathematics!) parameters, almost none of which are touched.  
This reflects the fact that nothing ever gets removed once added.  LTE is now 
going into release 12, and it's completely ramified into "solutions" to 
problems that will never be fixed in the field with those solutions.  It's 
great for European Publicly Funded Academic-Industry research - lots for 
those "Professors" to claim they invented.
 
I've worked with telco contractors in the field.   They don't read manuals, and 
they don't read specs.  They have a job to do, and so much money to spend, and 
time's a-wasting.  They don't even work for Verizon or AT&T.  They follow 
"specs" handed down, and charge more if you tell them that the specs have 
changed.
 
This is not how brand-new systems get tuned.
 
It's a Clown Circus out there, and more parameters don't help.
 
This is why "more buffering is better" continues to be the law of the land - 
the spec is defined to be "no lost packets under load".   I'm sure that the 
primary measure under load for RRUL will be "no lost packets" by the time it 
gets to field engineers in the form of "specs" - because that's what they've 
*always* been told, and they will disregard any changes as "typos".
 
A system with more than two control parameters that interact in complex ways is 
ungovernable - and no control parameters in LTE are "orthogonal", much less 
"linear" in their interaction.
 
 
 
-Original Message-
From: "Jim Gettys" 
Sent: Friday, March 1, 2013 11:09am
To: "David P Reed" 
Cc: "Ketan Kulkarni" , 
"cerowrt-devel@lists.bufferbloat.net" 
Subject: Re: [Cerowrt-devel] Google working on experimental 3.8 Linux kernel 
for Android








On Fri, Mar 1, 2013 at 10:40 AM, dpr...@reed.com wrote:

One wonders why all this complexity is necessary, and how likely it is to be 
"well tuned" by operators and their contract installers.
 
I'm willing to bet $1000 that all the testing that is done is "Can you hear me 
now" and a "speed test".  Not even something as simple and effective as RRUL.
Actually, at least some of the carriers do much more extensive testing; but 
not with the test tools we would like to see used (yet).
An example is AT&T, where, in research, KK Ramakrishnan has a van with 20 or so 
laptops so he can go driving around and load up a cell in the middle of the 
night and get data.   And he's in research; the operations guys do lots of 
testing I gather, but more at the radio level.
Next up, is to educate KK to run RRUL.
And in my own company, I've seen data, but it is too high level: e.g. 
performance of "web" video: e.g. Silverlight, Flash, YouTube, etc.
A common disease that has complicated all this is the propensity for companies 
to use Windows XP internally for everything: since window scaling is turned 
off, you can't saturate an LTE link the way you might like to do with a single 
TCP connection.
- Jim



 
-Original Message-
From: "Ketan Kulkarni" <[mailto:ketku...@gmail.com] ketku...@gmail.com>
Sent: Friday, March 1, 2013 3:00am
To: "Jim Gettys" <[mailto:j...@freedesktop.org] j...@freedesktop.org>
 Cc: "[mailto:cerowrt-devel@lists.bufferbloat.net] 
cerowrt-devel@lists.bufferbloat.net" 
<[mailto:cerowrt-devel@lists.bufferbloat.net] 
cerowrt-devel@lists.bufferbloat.net>
 Subject: Re: [Cerowrt-devel] Google working on experimental 3.8 Linux kernel 
for Android





On Fri, Mar 1, 2013 at 1:33 AM, Jim Gettys <j...@freedesktop.org> wrote:

I've got a bit more insight into LTE than I did in the past, courtesy of the 
last couple days.
To begin with, LTE runs with several classes of service (they call them 
bearers).  Your VOIP traffic goes into one of them.
And I think there is another as well that is for guaranteed bit rate traffic.  
One transmit opportunity may have a bunch of chunks of data, and that data may 
be destined for more than one device (IIRC).  It's substantially different than 
WiFi.
Just thought to shed more light on the bearer stuff:

There are two ways bearers are set up: 
1. UE initiated - where the User Equipment sets up the "parameters" for the bearer 
2. Network initiated - where a node like the PCRF or PGW sets up the "parameters". 
Parameters include the guaranteed bit-rates and maximum bit-rates. Something 
called a QCI is associated with each bearer. The QCI parameters are authorized at 
the PCRF (Policy and Charging Rules Function), and a mapping is maintained at 
either the PCRF or the PGW between QCI values, DSCP markings and MBRs.
Enforcement of these parameters

Re: [Cerowrt-devel] Google working on experimental 3.8 Linux kernel for Android

2013-03-01 Thread Jim Gettys
On Fri, Mar 1, 2013 at 10:40 AM,  wrote:

> One wonders why all this complexity is necessary, and how likely it is to
> be "well tuned" by operators and their contract installers.
>
>
>
> I'm willing to bet $1000 that all the testing that is done is "Can you
> hear me now" and a "speed test".  Not even something as simple and
> effective as RRUL.
>

Actually, at least some of the carriers do much more extensive testing;
but not with the test tools we would like to see used (yet).

An example is AT&T, where, in research, KK Ramakrishnan has a van with 20 or
so laptops so he can go driving around and load up a cell in the middle of
the night and get data.   And he's in research; the operations guys do lots
of testing I gather, but more at the radio level.

Next up, is to educate KK to run RRUL.

And in my own company, I've seen data, but it is too high level: e.g.
performance of "web" video: e.g. Silverlight, Flash, YouTube, etc.

A common disease that has complicated all this is the propensity for
companies to use Windows XP internally for everything: since window scaling
is turned off, you can't saturate an LTE link the way you might like to do
with a single TCP connection.
  - Jim
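
The window-scaling point is simple arithmetic: without RFC 1323 window
scaling the receive window tops out at 64 KB, capping a single TCP stream
at window/RTT. The 50 ms RTT below is an assumed, illustrative figure:

# Throughput ceiling of one un-scaled TCP stream: window / RTT.
window_bytes = 65535          # max receive window without RFC 1323 scaling
rtt_s = 0.050                 # assumed round-trip time
print("max single-stream rate: %.1f Mbit/s" % (window_bytes * 8 / rtt_s / 1e6))
# -> ~10.5 Mbit/s, well under what an LTE cell can deliver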




>
>
> -Original Message-
> From: "Ketan Kulkarni" 
> Sent: Friday, March 1, 2013 3:00am
> To: "Jim Gettys" 
> Cc: "cerowrt-devel@lists.bufferbloat.net" <
> cerowrt-devel@lists.bufferbloat.net>
> Subject: Re: [Cerowrt-devel] Google working on experimental 3.8 Linux
> kernel for Android
>
>
>
> On Fri, Mar 1, 2013 at 1:33 AM, Jim Gettys  wrote:
>
>> I've got a bit more insight into LTE than I did in the past, courtesy of
>> the last couple days.
>> To begin with, LTE runs with several classes of service (they call them
>> bearers).  Your VOIP traffic goes into one of them.
>> And I think there is another as well that is for guaranteed bit rate
>> traffic.  One transmit opportunity may have a bunch of chunks of data, and
>> that data may be destined for more than one device (IIRC).  It's
>> substantially different than WiFi.
>>
>  Just thought to shed more light on the bearer stuff:
>
> There are two ways bearers are set up:
> 1. UE initiated - where the User Equipment sets up the "parameters" for the
> bearer
> 2. Network initiated - where a node like the PCRF or PGW sets up the
> "parameters".
> Parameters include the guaranteed bit-rates and maximum bit-rates. Something
> called a QCI is associated with each bearer. The QCI parameters are authorized
> at the PCRF (Policy and Charging Rules Function), and a mapping is maintained
> at either the PCRF or the PGW between QCI values, DSCP markings and MBRs.
> Enforcement of these parameters is done at the PGW (in which case it is
> termed the PCEF - Policy and Charging Enforcement Function). So PGWs,
> depending on bearers, can certainly modify DSCP bits, though these can also
> be modified by other nodes in the network.
>
> There are two types of bearers: 1. Dedicated bearers - to carry traffic
> which needs "special" treatment 2. Default or general purpose bearers - to
> carry all general purpose data.
> So generally VoIP and streaming video are passed over dedicated bearers,
> which (generally) get higher GBRs, MBRs and the correct DSCP markings,
> while other, non-latency-sensitive traffic follows the default bearer.
>
> The theoretical limit on bearers is 11, though in practice most
> deployments use at most 3.
>
> Note that these parameters may very well vary based on the subscriber
> profiles. Premium/corporate subscribers can well have higher GBRs and MBRs.
> ISPs are generally very sensitive to correct markings at the gateways
> for obvious reasons.
>
>> But most of what we think of as Internet stuff (web surfing, dns, etc)
>> all gets dumped into a single best effort ("BE") class.
>> The BE class is definitely badly bloated; I can't say how much because I
>> don't really know yet; the test my colleague ran wasn't run long enough to
>> be confident it filled the buffers.  But I will say worse than most cable
>> modems I've seen.  I expect this will be true to different degrees on
>> different hardware.  The other traffic classes haven't been tested yet for
>> bufferbloat, though I suspect they will have it too.  I was told that those
>> classes have much shorter queues, and when they grow, they dump the whole
>> queues (because delivering late real time traffic is useless).  But trust
>> *and* verify  Verification hasn't been done for anything but BE
>> traffic, and that hasn't been quantified.
>> But each device gets a "fair" shot at bandwidth in the cell (or sector of
>> a cell; they run 3 radios in each cell), where fair is basically time
>> based; if you are at the edge of a cell, you'll get a lot less bandwidth
>> than someone near a tower; and this fairness is guaranteed by a scheduler
>> that runs in the base station (called an eNodeB, IIRC).  So the base
>> station guarantees some sort of "fairness" between devices (a place where
>> Linux's wifi stac

Re: [Cerowrt-devel] Google working on experimental 3.8 Linux kernel for Android

2013-03-01 Thread dpreed

One wonders why all this complexity is necessary, and how likely it is to be 
"well tuned" by operators and their contract installers.
 
I'm willing to bet $1000 that all the testing that is done is "Can you hear me 
now" and a "speed test".  Not even something as simple and effective as RRUL.
 
-Original Message-
From: "Ketan Kulkarni" 
Sent: Friday, March 1, 2013 3:00am
To: "Jim Gettys" 
Cc: "cerowrt-devel@lists.bufferbloat.net" 
Subject: Re: [Cerowrt-devel] Google working on experimental 3.8 Linux kernel 
for Android





On Fri, Mar 1, 2013 at 1:33 AM, Jim Gettys <j...@freedesktop.org> wrote:

I've got a bit more insight into LTE than I did in the past, courtesy of the 
last couple days.
To begin with, LTE runs with several classes of service (they call them 
bearers).  Your VOIP traffic goes into one of them.
And I think there is another as well that is for guaranteed bit rate traffic.  
One transmit opportunity may have a bunch of chunks of data, and that data may 
be destined for more than one device (IIRC).  It's substantially different than 
WiFi.

Just thought to shed more light on the bearer stuff:

There are two ways bearers are set up: 
1. UE initiated - where the User Equipment sets up the "parameters" for the bearer 
2. Network initiated - where a node like the PCRF or PGW sets up the "parameters". 
Parameters include the guaranteed bit-rates and maximum bit-rates. Something 
called a QCI is associated with each bearer. The QCI parameters are authorized at 
the PCRF (Policy and Charging Rules Function), and a mapping is maintained at 
either the PCRF or the PGW between QCI values, DSCP markings and MBRs.
Enforcement of these parameters is done at the PGW (in which case it is termed 
the PCEF - Policy and Charging Enforcement Function). So PGWs, depending on 
bearers, can certainly modify DSCP bits, though these can also be modified by 
other nodes in the network. 

There are two types of bearers: 1. Dedicated bearers - to carry traffic which 
needs "special" treatment 2. Default or general purpose bearers - to carry all 
general purpose data.
So generally VoIP and streaming video are passed over dedicated bearers, which 
(generally) get higher GBRs, MBRs and the correct DSCP markings, while other, 
non-latency-sensitive traffic follows the default bearer.

The theoretical limit on bearers is 11, though in practice most deployments 
use at most 3.

Note that these parameters may very well vary based on the subscriber profiles. 
Premium/corporate subscribers can well have higher GBRs and MBRs.
ISPs are generally very sensitive to correct markings at the gateways for 
obvious reasons.



But most of what we think of as Internet stuff (web surfing, dns, etc) all gets 
dumped into a single best effort ("BE") class.
The BE class is definitely badly bloated; I can't say how much because I don't 
really know yet; the test my colleague ran wasn't run long enough to be 
confident it filled the buffers.  But I will say worse than most cable modems 
I've seen.  I expect this will be true to different degrees on different 
hardware.  The other traffic classes haven't been tested yet for bufferbloat, 
though I suspect they will have it too.  I was told that those classes have 
much shorter queues, and when they grow, they dump the whole queues (because 
delivering late real time traffic is useless).  But trust *and* verify  
Verification hasn't been done for anything but BE traffic, and that hasn't been 
quantified.
But each device gets a "fair" shot at bandwidth in the cell (or sector of a 
cell; they run 3 radios in each cell), where fair is basically time based; if 
you are at the edge of a cell, you'll get a lot less bandwidth than someone 
near a tower; and this fairness is guaranteed by a scheduler that runs in the 
base station (called an eNodeB, IIRC).  So the base station guarantees some 
sort of "fairness" between devices (a place where Linux's wifi stack today 
fails utterly, since there is a single queue per device, rather than one per 
station).
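
The time-based fairness described above can be sketched as a toy scheduler
(not the real eNodeB scheduler): always serve the device that has used the
least airtime, charging each transmission bytes/rate, so a cell-edge device
gets equal time but far fewer bytes:

# Toy model of time-based fairness: least accumulated airtime goes first.
import heapq

def schedule(rates_bps, transmissions=1000, chunk=1500):
    """rates_bps: device name -> achievable rate.  Returns bytes delivered."""
    heap = [(0.0, name) for name in rates_bps]   # (airtime used, device)
    heapq.heapify(heap)
    delivered = {name: 0 for name in rates_bps}
    for _ in range(transmissions):
        used, name = heapq.heappop(heap)
        delivered[name] += chunk
        # charge this device for the airtime the chunk consumed at its rate
        heapq.heappush(heap, (used + chunk * 8 / rates_bps[name], name))
    return delivered

# Near-tower device at 50 Mbit/s vs cell-edge at 5 Mbit/s: same airtime,
# roughly 10x the bytes.
print(schedule({"near": 50e6, "edge": 5e6}))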
Whether there are bloat problems at the link level in LTE due to error 
correction I don't know yet; but it wouldn't surprise me; I know there was in 
3g.  The people I talked to this morning aren't familiar with the HARQ layer in 
the system.
The base stations are complicated beasts; they have both a Linux system in them 
as well as a real time operating system based device inside.  We don't know 
where the bottleneck(s) are yet.  I spent lunch upping their paranoia and 
getting them through some conceptual hurdles (e.g. multiple bottlenecks that 
may move, and the like).  They will try to get me some of the data so I can 
help them figure it out.  I don't know if the data flow goes through the Linux 
system in the eNodeB or not, for example.
Most carriers are now trying to ensure that their backhauls from the base 
station are never congested, though that is another known source of problems.  
And then there is the lack of AQM at peering po

Re: [Cerowrt-devel] Google working on experimental 3.8 Linux kernel for Android

2013-03-01 Thread Ketan Kulkarni
On Fri, Mar 1, 2013 at 1:33 AM, Jim Gettys  wrote:

> I've got a bit more insight into LTE than I did in the past, courtesy of
> the last couple days.
>
> To begin with, LTE runs with several classes of service (they call them
> bearers).  Your VOIP traffic goes into one of them.
> And I think there is another as well that is for guaranteed bit rate
> traffic.  One transmit opportunity may have a bunch of chunks of data, and
> that data may be destined for more than one device (IIRC).  It's
> substantially different than WiFi.
>

Just thought to shed more light on the bearer stuff:

There are two ways bearers are set up:
1. UE initiated - where the User Equipment sets up the "parameters" for the
bearer
2. Network initiated - where a node like the PCRF or PGW sets up the
"parameters".
Parameters include the guaranteed bit-rates and maximum bit-rates. Something
called a QCI is associated with each bearer. The QCI parameters are authorized
at the PCRF (Policy and Charging Rules Function), and a mapping is maintained
at either the PCRF or the PGW between QCI values, DSCP markings and MBRs.
Enforcement of these parameters is done at the PGW (in which case it is
termed the PCEF - Policy and Charging Enforcement Function). So PGWs,
depending on bearers, can certainly modify DSCP bits, though these can also
be modified by other nodes in the network.

There are two types of bearers: 1. Dedicated bearers - to carry traffic
which needs "special" treatment 2. Default or general purpose bearers - to
carry all general purpose data.
So generally VoIP and streaming video are passed over dedicated bearers,
which (generally) get higher GBRs, MBRs and the correct DSCP markings,
while other, non-latency-sensitive traffic follows the default bearer.

The theoretical limit on bearers is 11, though in practice most deployments
use at most 3.

Note that these parameters may very well vary based on the subscriber
profiles. Premium/corporate subscribers can well have higher GBRs and MBRs.
ISPs are generally very sensitive to correct markings at the gateways
for obvious reasons.
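
The QCI-to-DSCP mapping is operator-specific, but as a sketch, such a table
at the PGW/PCEF might look like the following. The QCI meanings follow 3GPP
TS 23.203; the DSCP values are one plausible convention, not a standardized
mapping:

# Illustrative QCI -> DSCP table of the kind kept at the PCRF/PGW.
QCI_TO_DSCP = {
    1: 46,  # GBR, conversational voice     -> EF
    2: 34,  # GBR, conversational video     -> AF41
    5: 40,  # non-GBR, IMS signalling       -> CS5
    6: 18,  # non-GBR, buffered video (TCP) -> AF21
    8: 0,   # non-GBR, default bearer       -> best effort
    9: 0,   # non-GBR, default bearer       -> best effort
}

def mark(qci):
    """DSCP to write into the outer IP header for a bearer with this QCI."""
    return QCI_TO_DSCP.get(qci, 0)  # unknown QCI falls back to best effort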


> But most of what we think of as Internet stuff (web surfing, dns, etc) all
> gets dumped into a single best effort ("BE") class.
>
> The BE class is definitely badly bloated; I can't say how much because I
> don't really know yet; the test my colleague ran wasn't run long enough to
> be confident it filled the buffers.  But I will say worse than most cable
> modems I've seen.  I expect this will be true to different degrees on
> different hardware.  The other traffic classes haven't been tested yet for
> bufferbloat, though I suspect they will have it too.  I was told that those
> classes have much shorter queues, and when they grow, they dump the whole
> queues (because delivering late real time traffic is useless).  But trust
> *and* verify  Verification hasn't been done for anything but BE
> traffic, and that hasn't been quantified.
>
> But each device gets a "fair" shot at bandwidth in the cell (or sector of
> a cell; they run 3 radios in each cell), where fair is basically time
> based; if you are at the edge of a cell, you'll get a lot less bandwidth
> than someone near a tower; and this fairness is guaranteed by a scheduler
> that runs in the base station (called an eNodeB, IIRC).  So the base
> station guarantees some sort of "fairness" between devices (a place where
> Linux's wifi stack today fails utterly, since there is a single queue per
> device, rather than one per station).
>
> Whether there are bloat problems at the link level in LTE due to error
> correction I don't know yet; but it wouldn't surprise me; I know there was
> in 3g.  The people I talked to this morning aren't familiar with the HARQ
> layer in the system.
>
> The base stations are complicated beasts; they have both a Linux system in
> them as well as a real time operating system based device inside.  We don't
> know where the bottleneck(s) are yet.  I spent lunch upping their paranoia
> and getting them through some conceptual hurdles (e.g. multiple bottlenecks
> that may move, and the like).  They will try to get me some of the data so
> I can help them figure it out.  I don't know if the data flow goes through
> the Linux system in the eNodeB or not, for example.
>
> Most carriers are now trying to ensure that their backhauls from the base
> station are never congested, though that is another known source of
> problems.  And then there is the lack of AQM at peering point routers
>  You'd think they might run WRED there, but many/most do not.
>  - Jim
>
>
>
>
>
> On Thu, Feb 28, 2013 at 2:08 PM, Dave Taht  wrote:
>
>>
>>
>> On Thu, Feb 28, 2013 at 1:57 PM,  wrote:
>>
>>> Doesn't fq_codel need an estimate of link capacity?
>>>
>>
>> No, it just measures delay. Since so far as I know the outgoing portion
>> of LTE is not soft-rate limited, but sensitive to the actual available link
>> bandwidth, fq_codel should work pretty well (if the underlying