I’m in the process, as I volunteered to do, of reviewing the CoDel draft. It is 
well written and very readable, so I don’t expect to have a lot to complain 
about. I do have three points, though.

One is a few spelling errors, noted in this email.

A second is the matter of marking and dropping. The word “drop” is used in one 
or another of its forms 120 times, while ECN marking is mentioned twice, both 
times in passing. I’d like to understand better how ECN is handled, if only to 
have a statement that it is handled the same way as dropping.

The third is the definition of a “normal TCP session”, which is referred to a 
number of times but, as far as I can find, not nailed down other than that it 
appears to have an RTT of less than 100 ms. Later in the draft, it talks about 
accommodations for data center use, which tells me (consistent with our 
testing) that there is a lower bound below which CoDel is less than optimal, 
and in our testing RTTs greater than 100 ms didn’t behave so well. The sweet 
spot might have been around 25 ms, with minor degradation at 50 ms, and more 
degradation at longer RTTs.

I’ll come back if I find more issues.

*** draft/draft-ietf-aqm-codel-00.txt   Fri Oct 24 08:58:40 2014
--- draft-ietf-aqm-codel-00.txt Wed Apr  1 18:43:33 2015
***************
*** 104,111 ****
      5.1.  Data Types  . . . . . . . . . . . . . . . . . . . . . . .  18
      5.2.  Per-queue state (codel_queue_t instance variables)  . . .  19
      5.3.  Constants . . . . . . . . . . . . . . . . . . . . . . . .  19
!      5.4.  Enque routine . . . . . . . . . . . . . . . . . . . . . .  19
!      5.5.  Deque routine . . . . . . . . . . . . . . . . . . . . . .  19



--- 104,111 ----
      5.1.  Data Types  . . . . . . . . . . . . . . . . . . . . . . .  18
      5.2.  Per-queue state (codel_queue_t instance variables)  . . .  19
      5.3.  Constants . . . . . . . . . . . . . . . . . . . . . . . .  19
!      5.4.  Enqueue routine . . . . . . . . . . . . . . . . .  19
!      5.5.  Dequeue routine . . . . . . . . . . . . . . . . .  19



***************
*** 283,289 ****


    is much slower than the link that feeds it (say, a high-speed
!    ethernet link into a limited DSL uplink) a 20 packet buffer at the
    bottleneck might be necessary to temporarily hold the 20 packets in
    flight to keep the utilization high.  The burst of packets should
    drain completely (to 0 or 1 packets) within a round trip time and
--- 283,289 ----


    is much slower than the link that feeds it (say, a high-speed
!    Ethernet link into a limited DSL uplink) a 20 packet buffer at the
    bottleneck might be necessary to temporarily hold the 20 packets in
    flight to keep the utilization high.  The burst of packets should
    drain completely (to 0 or 1 packets) within a round trip time and
***************
*** 326,332 ****

    The use of queue length is further complicated in networks that are
    subject to both short and long term changes in available link rate
!    (as in wifi).  Link rate drops can result in a spike in queue length
    that should be ignored unless it persists.  The length metric is
    problematic when what we really want to control is the amount of
    excess delay packets experience due to a persistent or standing
--- 326,332 ----

    The use of queue length is further complicated in networks that are
    subject to both short and long term changes in available link rate
!    (as in WiFi).  Link rate drops can result in a spike in queue length
    that should be ignored unless it persists.  The length metric is
    problematic when what we really want to control is the amount of
    excess delay packets experience due to a persistent or standing
***************
*** 455,461 ****
    As Kleinrock observed, the best operating point, in terms of
    bandwidth / delay tradeoff, is the peak power point since points off
    the peak represent a higher cost (in delay) per unit of bandwidth.
!    The power vs. _f_ curve for any AIMD TCP is monotone decreasing.  But
    the curve is very flat for _f_ < 0.1 followed by a increasing
    curvature with a knee around .2 then a steep, almost linear fall off
    [TSV84] [VJTARG14].  Since the previous equation showed that goodput
--- 455,461 ----
    As Kleinrock observed, the best operating point, in terms of
    bandwidth / delay tradeoff, is the peak power point since points off
    the peak represent a higher cost (in delay) per unit of bandwidth.
!    The power vs. _f_ curve for any AIMD TCP is monotonically decreasing.  But
    the curve is very flat for _f_ < 0.1 followed by a increasing
    curvature with a knee around .2 then a steep, almost linear fall off
    [TSV84] [VJTARG14].  Since the previous equation showed that goodput
***************
*** 929,935 ****
    management as described here or to adapt its principles to other
    applications.

!    Implementors are strongly encouraged to also look at Eric Dumazet's
    Linux kernel version of CoDel - a well-written, well tested, real-
    world, C-based implementation.  As of this writing, it is at:

--- 929,935 ----
    management as described here or to adapt its principles to other
    applications.

!    Implementers are strongly encouraged to also look at Eric Dumazet's
    Linux kernel version of CoDel - a well-written, well tested, real-
    world, C-based implementation.  As of this writing, it is at:

***************
*** 999,1006 ****

    "packet_t*" is a pointer to a packet descriptor.  We assume it has a
    tstamp field capable of holding a time_t and that field is available
!    for use by CoDel (it will be set by the enque routine and used by the
!    deque routine).



--- 999,1006 ----

    "packet_t*" is a pointer to a packet descriptor.  We assume it has a
    tstamp field capable of holding a time_t and that field is available
!    for use by CoDel (it will be set by the enqueue routine and used by the
!    dequeue routine).



***************
*** 1033,1039 ****
    u_int maxpacket = 512; // Maximum packet size in bytes
                     // (should use interface MTU)

! 5.4.  Enque routine

    All the work of CoDel is done in the deque routine.  The only CoDel
    addition to enque is putting the current time in the packet's tstamp
--- 1033,1039 ----
    u_int maxpacket = 512; // Maximum packet size in bytes
                     // (should use interface MTU)

! 5.4.  Enqueue routine

    All the work of CoDel is done in the deque routine.  The only CoDel
    addition to enque is putting the current time in the packet's tstamp
***************
*** 1046,1052 ****
        queue_t::enque(pkt);
    }

! 5.5.  Deque routine

    This is the heart of CoDel.  There are two branches: In packet-
    dropping state (meaning that the queue-sojourn time has gone above
--- 1046,1052 ----
        queue_t::enque(pkt);
    }

! 5.5.  Dequeue routine

    This is the heart of CoDel.  There are two branches: In packet-
    dropping state (meaning that the queue-sojourn time has gone above
***************
*** 1260,1266 ****

    An experiment by Stanford graduate students successfully used the
    linux CoDel to duplicate our published simulation work on CoDel's
!    ability to following drastic link rate changes which can be found at:
    http://reproducingnetworkresearch.wordpress.com/2012/06/06/solving-
    bufferbloat-the-codel-way/ .

--- 1260,1266 ----

    An experiment by Stanford graduate students successfully used the
    linux CoDel to duplicate our published simulation work on CoDel's
!    ability to follow drastic link rate changes which can be found at:
    http://reproducingnetworkresearch.wordpress.com/2012/06/06/solving-
    bufferbloat-the-codel-way/ .

_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm