Thank you for detailing the content of the CableLabs document and where the
700 kB figure comes from.
Concerning your last point:
> As such I would be strongly in favour of changing the draft to actually
> describe realistic web client behaviour, rather than just summarising it
> as "repeated downloads of 700KB".
I understand that it is a drastic simplification to summarise the web
client behaviour as just repeated downloads of 700 kB. However, I am not
sure the draft should detail realistic web client behaviour: I believe it
may be out of scope, and the draft cannot contain that level of complexity
for every protocol/traffic type it covers.
I propose the following change:
Was:
- Realistic HTTP web traffic (repeated download of 700kB);
Changed to:
- Realistic HTTP web page downloads: the tester should at least
consider repeated downloads of 700 kB; for more accurate web traffic,
the single-user web page download model of [White] may be used;
What do you think?
Regards,
Nicolas
On Apr 15, 2014, at 12:28 PM, Toke Høiland-Jørgensen <[email protected]> wrote:
> Nicolas KUHN <[email protected]> writes:
>
>> and realistic HTTP web traffic (repeated download of 700kB). As a reminder,
>> please find here the comments of Shahid Akhtar regarding these values:
>
> The Cablelabs work doesn't specify web traffic as simply "repeated
> downloads of 700KB", though. Quoting from [0], the actual wording is:
>
>> "Webs" indicates the number of simultaneous web users (repeated
>> downloads of a 700 kB page as described in Appendix A of [White]),
>
> Where [White] refers to [1] which states (in the Appendix):
>
>> The file sizes are generated via a log-normal distribution, such that
>> the log10 of file size is drawn from a normal distribution with mean =
>> 3.34 and standard deviation = 0.84. The file sizes (yi) are calculated
>> from the resulting 100 draws (xi) using the following formula, in
>> order to produce a set of 100 files whose total size =~ 600 kB (614400
>> B):
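
(For illustration, that file-size generation is straightforward to
reproduce in Python. Since the formula itself is not reproduced in the
quote above, the final scaling step below is an assumption:

import random

# log10 of each file size is drawn from N(mean=3.34, sd=0.84)
draws = [random.gauss(3.34, 0.84) for _ in range(100)]   # the x_i
sizes = [10 ** x for x in draws]                         # raw sizes in bytes

# Assumed normalisation: scale so the 100 files total 614400 B (~600 kB)
scale = 614400.0 / sum(sizes)
sizes = [s * scale for s in sizes]                       # the y_i

print("total: %d B over %d files" % (sum(sizes), len(sizes)))
)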
>
> And in the main text it specifies (in section 3.2.3) the actual model
> for the web traffic used:
>
>> Model single user web page download as follows:
>>
>> - Web page modeled as single HTML page + 100 objects spread evenly
>> across 4 servers. Web object sizes are currently fixed at 25 kB each,
>> whereas the initial HTML page is 100 kB. Appendix A provides an
>> alternative page model that may be explored in future work.
>>
>> - Server RTTs set as follows (20 ms, 30 ms, 50 ms, 100 ms).
>>
>> - Initial HTTP GET to retrieve a moderately sized object (100 kB HTML
>> page) from server 1.
>>
>> - Once initial HTTP GET completes, initiate 24 simultaneous HTTP GETs
>> (via separate TCP connections), 6 connections each to 4 different
>> server nodes
>>
>> - Once each individual HTTP GET completes, initiate a subsequent GET
>> to the same server, until 25 objects have been retrieved from each
>> server.
>
>
> Which is a pretty far cry from just saying "repeated downloads of 700
> KB" and, while still somewhat bigger, matches the numbers from Google
> better in terms of distribution between page sizes and other objects.
> And, more importantly, it features the kind of parallelism and
> interactions that a real web browser does; which, as Shahid mentioned, is
> (or can be) quite important for the treatment it receives from an AQM.
>
> As such I would be strongly in favour of changing the draft to actually
> describe realistic web client behaviour, rather than just summarising it
> as "repeated downloads of 700KB".
>
>
> -Toke
>
>
> [0]
> http://www.cablelabs.com/wp-content/uploads/2013/11/Active_Queue_Management_Algorithms_DOCSIS_3_0.pdf
>
> [1]
> http://www.cablelabs.com/downloads/pubs/PreliminaryStudyOfCoDelAQM_DOCSISNetwork.pdf
>
> [2] https://developers.google.com/speed/articles/web-metrics
_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm