Hi Anil, 

Thank you for your comment. 
The only problem is that we received your email right after the submission 
deadline, so we cannot update the document before the next IETF. 
[The cut-off for I-D submissions was 2015-07-06 23:59 UTC.]

Please see in-line,

> On 06 Jul 2015, at 21:11, Agarwal, Anil <[email protected]> wrote:
> 
> Nicolas et al,
> 
> Few minor comments and suggestions on draft-ietf-tsvwg-rtcweb-qos-04.txt -
> 
> 1.    We should require all tests to be first conducted with AQM disabled.
>       This will provide a good reference point for comparison and also help 
> to verify that
>       test network and traffic configurations are consistent across different 
> test groups.

Indeed. This is why we specify in the Methodology section: 
“
14.1.  Methodology

  One key objective behind formulating the guidelines is to help
  ascertain whether a specific AQM is not only better than drop-tail
  but also safe to deploy.  Testers therefore need to provide a
  reference document for their proposal discussing performance and
  deployment compared to those of drop-tail.
"

> 2.    In tests, where network parameters are changed dynamically (e.g., link 
> capacity, congestion level), 
>       it would be useful to capture and compare metrics during the first few 
> seconds after the change.
>       To evaluate how quickly and gracefully the algorithms respond to such 
> changes.

I agree that considering such metrics is of interest. Indeed, looking at how 
quickly the AQM regains control of the queue after such a change is relevant; as 
one example, it has been observed that PIE tends to adapt more slowly to drastic 
changes than CoDel. This is because PIE updates its dropping probability based on 
its previous probability values, whereas CoDel (when in non-dropping mode) keeps 
no such state. 
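To make this difference concrete, here is a minimal sketch of the two control laws. This is illustrative only, not the actual PIE or CoDel specifications: the constants (alpha, beta, target, interval) and function signatures are simplified assumptions.

```python
def pie_update(p, qdelay, qdelay_old, alpha=0.125, beta=1.25, target=0.015):
    """PIE-style update: the new drop probability depends on the PREVIOUS
    probability p, so the controller adapts incrementally after a sudden
    change in link conditions."""
    return max(0.0, p + alpha * (qdelay - target) + beta * (qdelay - qdelay_old))

def codel_next_drop_interval(count, interval=0.1):
    """CoDel-style schedule: while in dropping state, the time to the next
    drop depends only on the number of drops since entering that state;
    outside the dropping state, no probability history is carried over."""
    return interval / (count ** 0.5)
```

Because PIE's next probability is anchored to its previous one, a drastic capacity change is absorbed over several update intervals, while CoDel simply re-enters its dropping state from scratch.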

However, the guidelines do not impose many requirements on the metrics chosen 
for a given scenario. We rather let the tester decide which metrics are of 
interest for a given scenario: 
“
It is therefore not
  necessary to measure all of the following metrics: the chosen metric
  may not be relevant to the context of the evaluation scenario (e.g.,
  latency vs. goodput trade-off in application-limited traffic
  scenarios).  Guidance is provided for each metric.
“

Therefore, we believe that any tester would record the metrics dynamically for 
this specific use case. 

> 3.    Would be useful to add to the metrics list - the number of packets 
> dropped due to tail-drop.

Measuring the number of packets dropped by tail drop, as opposed to the packets 
dropped by the AQM, may be quite tricky on a real testbed; it is easier in 
simulations. However, we do not want the guidelines to be platform-dependent: 
“ 
  The proposals SHOULD be evaluated on real-life systems, or they MAY
  be evaluated with event-driven simulations (such as ns-2, ns-3,
  OMNET, etc).  The proposed scenarios are not bound to a particular
  evaluation toolset.
“ 

Whenever possible, the guidelines advise recording this type of metric: 
“
In
  addition to the end-to-end metrics, the queue-level metrics (normally
  collected at the device operating the AQM) provide a better
  understanding of the AQM behavior under study and the impact of its
  internal parameters.  Whenever it is possible (e.g., depending on the
  features provided by the hardware/software), these guidelines advice
  to consider queue-level metrics, such as link utilization, queuing
  delay, queue size or packet drop/mark statistics in addition to the
  AQM-specific parameters.
"

> 4.    We should add link rates of 1 Gbps and 10 Gbps.

The problem with adding such link rates is that anyone could then ask, 
“What about 1 Mbps links?”, and so on. 

“
However, these guidelines do not present context-
  dependent scenarios (such as 802.11 WLANs, data-centers or rural
  broadband networks).
"

The main point of these guidelines is to present the key aspects of AQMs that
must be considered (burst absorption, impact of unresponsive flows, etc.), 
whatever the application context. 

> 5.    In the traffic mix test in section 8.1, it would be useful to test with 
> a large number of users, 
>       as specified in section 7.2.4, especially for web traffic.

The main goal of Section 8.1 is to present the different types of traffic that 
must be considered. 
The number of applications may be context-dependent, and we do not want to 
impose any specific traffic load. 

“
These guidelines RECOMMEND
  to examine at least the following example: 1 bi-directional VoIP; 6
  Webs pages download (such as detailed in Section 6.2); 1 CBR; 1
  Adaptive Video; 5 bulk TCP.  Any other combinations could be
  considered and should be carefully documented.
“
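For concreteness, the recommended baseline mix could be written down as a simple test configuration. This is a hypothetical sketch; the key names are illustrative and do not come from the draft.

```python
# Hypothetical configuration capturing the draft's RECOMMENDED example mix;
# key names are illustrative, not taken from the draft.
baseline_mix = {
    "voip_bidirectional": 1,   # 1 bi-directional VoIP flow
    "web_page_downloads": 6,   # as detailed in Section 6.2 of the draft
    "cbr": 1,                  # 1 constant-bit-rate flow
    "adaptive_video": 1,
    "bulk_tcp": 5,
}
total_flows = sum(baseline_mix.values())
```

Any other combination can be expressed the same way, as long as it is documented alongside the results.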

A “highly loaded traffic mix” scenario would indeed be of interest, but we 
prefer to leave the door open for the tester to look at it. 

> 6.    Should we require that AQM algorithm parameter settings be identical 
> across all tests?
>       Or at least be a small set of profiles?

The AQM algorithm parameter settings should be identical across all tests. 
On that point, we refer to the recommendation document:
"
  [I-D.ietf-aqm-recommendation] states "AQM algorithms SHOULD NOT
  require tuning of initial or configuration parameters in common use
  cases."  A scheme ought to expose only those parameters that control
  the macroscopic AQM behavior such as queue delay threshold, queue
  length threshold, etc.
"

> 7.    It's not clear how FQ-AQM should be handled. Section 13.3 seems to punt 
> on the issue.
>       An AQM algorithm with FQ will exhibit quite different results compared 
> to one without FQ.
>       Should one compare a test scenario with AQM and a single queue vs one 
> with FQ-AQM?
> 

The combination of scheduling and AQM is complex and depends on the scheduling 
strategy. 
The guidelines cannot focus on how any given AQM coexists with a FlowQueue 
scheduler:

“
The rest of this memo refers to the AQM as a
  dropping/marking policy as a separate feature to any interface
  scheduling scheme.  This document may be complemented with another
  one on guidelines for assessing combination of packet scheduling and
  AQM.  We note that such a document will inherit all the guidelines
  from this document plus any additional scenarios relevant for packet
  scheduling such as flow starvation evaluation or impact of the number
  of hash buckets.
"

> My apologies if some of these suggestions have been discussed and disposed of 
> before. 
>       In which case, please ignore them.
> 

Thank you for your comments. We believe that most of them make sense and are 
already discussed in the document. Let us know where you think the document 
could be made more explicit, in case it is not clear enough. 

Kind regards,

Nicolas

> Regards,
> Anil
> 

_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm