Hi Michael -
 
Specifically, what my code did was this:
 
It observed the IPv4 headers of *large* TCP/IP datagrams going upstream, so 
that it could construct "no-op", content-free datagrams that would certainly 
pass muster with all the filters and be routed exactly the same as the TCP/IP 
datagrams that were carrying large flows.  It would remember only the most 
recent one.
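 
Very roughly, that observation step could look like the following (a minimal 
user-space sketch in Python with scapy; the interface name "eth0" and the 
1000-byte "large" threshold are placeholders, and a production version would 
sit in the kernel's netfilter path instead):
 
    from scapy.all import sniff, IP, TCP
 
    LARGE = 1000       # bytes; treat anything this big as part of a large flow
    last_flow = None   # header fields of the latest large upstream datagram
 
    def observe(pkt):
        global last_flow
        if IP in pkt and TCP in pkt and pkt[IP].len >= LARGE:
            last_flow = (pkt[IP].src, pkt[IP].dst,
                         pkt[TCP].sport, pkt[TCP].dport, pkt[TCP].seq)
 
    sniff(iface="eth0", filter="tcp", prn=observe, store=False)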
 
Every K bytes of upstream traffic (K chosen so that the overhead [= minimal 
TCP/IP datagram size divided by K] is a tiny percentage; for example, a 
40-byte minimal datagram with K = 8000 costs 0.5%), it would construct a 
NO-OP TCP/IP datagram that appeared to be part of that flow (same source/dest 
addr/port info and, just for grins, a duplicate sequence number and no content 
bytes at all), set its TTL to make it time out very close to the "other side" 
of the CMTS, and queue it normally.
 
The TTL expiration causes an ICMP Time Exceeded packet to be sent back.  My 
code intercepts that packet based on its contents, and removes it as 
"handled" before it gets processed by the TCP/IP state machines.
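 
Recognizing the returning ICMP means matching the quoted header it carries 
against the probe we sent.  In user-space scapy terms (a real implementation 
would do this in a netfilter hook so the ICMP never reaches the stack):
 
    from scapy.all import ICMP, IPerror, TCPerror
 
    def is_our_probe(pkt, flow):
        src, dst, sport, dport, seq = flow
        # ICMP Time Exceeded (type 11) quotes the offending IP header plus
        # the first 8 bytes of its payload: enough for ports and seq number.
        if ICMP in pkt and pkt[ICMP].type == 11 and TCPerror in pkt:
            return (pkt[IPerror].src == src and pkt[IPerror].dst == dst and
                    pkt[TCPerror].sport == sport and pkt[TCPerror].seq == seq)
        return False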
 
The time between the queueing of the TCP/IP NO-OP and the return of the ICMP 
packet is a direct measure of the queueing delays through the cable modem and 
CMTS.  When this grows by around one full datagram's transmission time over 
its minimum, the upload queue is becoming congested, and it's time to stop 
sending content for a bit.  As soon as content is being held on the egress 
link from the router into the cable modem, we send another NO-OP with the 
short TTL, and as soon as its ICMP comes back, you know the queue in the CMTS 
is drained, so you can resume sending into an empty CMTS at a lower rate 
(you've just gotten a good estimate of the rate you should reduce to, if 
you've been keeping track of how many bytes are flowing over the egress 
link).
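 
The congestion test itself is a few lines of arithmetic (a sketch; the 
1500-byte MTU and the nominal uplink rate are assumptions a real 
implementation would configure or measure):
 
    MTU_BITS = 1500 * 8
    uplink_bps = 2_000_000     # nominal uplink rate; placeholder value
    min_rtt = float("inf")     # observed floor of the probe round-trip
 
    def congested(send_ts, icmp_ts):
        """True once the queue has grown by about one full datagram."""
        global min_rtt
        rtt = icmp_ts - send_ts
        min_rtt = min(min_rtt, rtt)
        one_datagram = MTU_BITS / uplink_bps   # seconds to serialize one MTU
        return (rtt - min_rtt) >= one_datagram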
 
Symmetrically, you can periodically (less frequently) experiment with a 
possible rate *increase* by sending a small NO-OP packet immediately followed 
by a large/maximal-size NO-OP datagram, and using the "packet pair" concept: 
the time between the two ICMP responses is an estimator of the achievable 
peak rate through the upstream path.
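 
In code the packet-pair estimate is one division (sketch; the timestamps are 
taken when the two ICMP responses arrive, and large_bytes is the size of the 
second, maximal probe):
 
    def packet_pair_rate(icmp_ts_small, icmp_ts_large, large_bytes=1500):
        # If the probes left back to back, the bottleneck had to finish
        # serializing the large one before its ICMP could be triggered, so
        # the gap between the two responses is its serialization time.
        gap = icmp_ts_large - icmp_ts_small      # seconds
        return (large_bytes * 8) / gap           # achievable upstream bit/s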
 
This assumes that the downstream (incoming) path is uncongested.  But you can 
elaborate this scheme further.
 
The goal of the "tcptraceroute" method is to get a "loopback" that follows 
the same path as an existing TCP connection, in order to get the timing 
right.
 
If options exist to get intermediate timestamps along a route, you can also 
apply the same "NO-OP" datagram technique under TCP in a similar way.
 
-----Original Message-----
From: "Michael Richardson" <[email protected]>
Sent: Monday, November 26, 2012 1:11pm
To: [email protected]
Cc: [email protected], [email protected]
Subject: Re: [Cerowrt-devel] [Cerowrt-users] QOS settings vs speedboost and 
random bandwidth



>>>>> "dpreed" == dpreed  <[email protected]> writes:
 dpreed> But I've thought about coding it again for cerowrt.  Where
 dpreed> to modularly slot it in seems to be worth thinking about.
 dpreed> Perhaps in two key pieces: an iptables/xfilter module and a
 dpreed> routing/traffic control module - with some direct
 dpreed> interaction between the two using some appropriate
 dpreed> intermodule bus/link/coordination link. 

So an uplink bitrate value with an easy-to-reach sysctl that
userspace can toggle?  It would be an enhancement to existing tc/qos code.

 dpreed> I'd be happy to think about defining the pieces, but I
 dpreed> really don't have time to code it, given all the other stuff
 dpreed> I've done.  I wonder if by putting it in these modules, one
 dpreed> can use existing kernel APIs. 

How precise does the timing need to be, do you think?

As I understand what you are saying, by periodically sending a few ICMP
messages (does it help if they are back to back?) and looking at when they
are returned, one can calculate the uplink bandwidth?

Or are you saying that we are measuring the point in uplink usage where
the latency begins to peak?

-- 
]       He who is tired of Weird Al is tired of life!           |  firewalls  [
]   Michael Richardson, Sandelman Software Works, Ottawa, ON    |net architect[
] [email protected] http://www.sandelman.ottawa.on.ca/ |device driver[
 Kyoto Plus: watch the video <http://www.youtube.com/watch?v=kzx1ycLXQSE>
 then sign the petition.
_______________________________________________
Cerowrt-devel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cerowrt-devel
