Dave,

Thanks for posting the helpful critique; I hope it helps shape the eval-guidelines draft.
I think simulations are always a "work-in-progress" of sorts - there is always something you can do better next time. By their nature, simulations are simplifications of reality. One of the challenges is to spend time developing fidelity where it is needed and not where it isn't (we've tried to be very conscious of that, but it is always good to get another set of eyes on the problem); another is to keep the combinatorics from getting out of hand. And you are right that simulations can have bugs (but, as you've noted, so does the RW).

Some of the items in your list we've already addressed since publishing that white paper last spring, some we may address in future work, and some we may need to agree to disagree on the importance of.

Also, we are in the process of making a number of our tools public (in ns-2.36) so that the research community can potentially tackle some areas that we're not able to get to. Once we have real systems to experiment with, we'll be doing some RW experiments too (both to validate implementations, and to validate the approach). I hope we can leverage some of the work you've been doing in your testbed (e.g. RRUL) when we get to that point.

I responded to a couple of Mikael's points below.

-Greg

On 2/9/14, 11:13 PM, "Mikael Abrahamsson" <[email protected]> wrote:

>On Sun, 9 Feb 2014, Dave Taht wrote:
>
>> The second largest problem with the original CableLabs study is that it
>> only analyzed traffic at one specific (although common) setting for
>> cable operators: 20 Mbit/s down and 5 Mbit/s up. A common lower setting
>> should be analyzed, as well as more premium services. Some tweaking of
>> codel-derived technologies (flows and quantum), and of PIE (alpha and
>> beta), is indicated at both lower and higher bandwidths for optimum
>> results. Additionally, the effects of classification, notably of
>> background traffic, have not been explored.
>
>Just as a data point, currently Comhem in Sweden offers a DOCSIS-based
>cable-Internet service of up to 500/50 (500 down, 50 up).
>
>What would be of great interest would be to test this kind of service in
>several modes, including having 10 of those on the same CMTS port and
>congesting both the upstream and downstream of the cable system itself
>(because there isn't 500 megabit/s of total upstream capacity to support
>10 subscribers all trying to use 50 megabit/s per user; same thing
>downstream). It would be very interesting to see how these users
>interacted with each other, combined with LEDBAT, and having TCP flows
>that didn't incur too much latency for the individual user, whilst
>assuring that each subscribed line got its fair share of bandwidth from
>the system.

In our recent simulation work (unpublished), we've been using the following configurations: 20/5 (the original config), 100/20, and 1000/200. One finding for us was that the fq benefits diminished considerably as the data rates went up.

Also, we model shared-link congestion by modulating the capacity available to our tested user; this was described in our April 2013 paper. Since one user's queuing latency doesn't impact another user, the interaction between users is largely limited to how much capacity is available to our tested user at any point in time. Our model was, admittedly, a simple one, but I'm not aware of any better models at the moment. We did collect data on some real systems to validate the model, but had difficulty finding a real system that exhibited significant congestion (I'm sure several hands will go up to volunteer their own broadband connection as the poster child for this ;-).

>
>> **** The upload saturation problem shown in the study
>>
>> Bittorrent clients have evolved to the point where, out of the box,
>> there is a very low rate limit set, typically in the range of
>> 50-150 KBytes/sec. This makes bittorrent uploads a non-problem for
>> most people.
>>
>> Still, benchmarking each of these phases would be worthwhile.
>> Torrent can be fixed.
>
>Well, users would still like to set the upload rate to something close to
>what they have purchased, and still have it yield to other traffic that
>happens to need the upstream bandwidth, so further study would be great.
>
>> Another problem with VOIP is "creeping delay", where a voip queue
>> builds and builds and then delivers or drops a full boatload of packets
>> to catch up. I have experienced this on multiple wifi-based voip
>> sessions where I ended up with seconds of delay on the line over
>> time...
>
>This was a big problem 5+ years ago, when I think the PDV (jitter) buffer
>just grew and never shrank again even if network conditions improved. I
>haven't experienced this lately, so Skype, for instance, must have fixed
>this.
>
>--
>Mikael Abrahamsson    email: [email protected]
>_______________________________________________
>aqm mailing list
>[email protected]
>https://www.ietf.org/mailman/listinfo/aqm
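P.S. For anyone curious about the capacity-modulation idea mentioned above, here is a toy sketch of the general approach - this is an invented illustration (the rates, activity probabilities, and function names are made up), not our actual ns-2 model:

```python
import random

def run(schedule, offered_load_bps, sim_seconds):
    """Toy model: the tested user's bottleneck capacity is modulated
    over time to mimic congestion from other users sharing the link.
    The user's queue drains at whatever capacity is currently left,
    and we track the resulting queuing delay each second."""
    backlog_bits = 0.0
    delays = []
    for t in range(sim_seconds):
        capacity_bps = schedule(t)        # capacity left for our user
        backlog_bits += offered_load_bps  # arrivals this second
        backlog_bits -= min(backlog_bits, capacity_bps)  # drain
        # delay the remaining backlog would see at the current rate
        delays.append(backlog_bits / capacity_bps)
    return delays

# Hypothetical schedule: 9 other users randomly active on a 50 Mb/s
# upstream, each taking ~5 Mb/s when active (all numbers invented).
def other_users(t):
    active = sum(random.random() < 0.3 for _ in range(9))
    return max(50e6 - active * 5e6, 5e6)

delays = run(other_users, offered_load_bps=6e6, sim_seconds=60)
```

The point is simply that the other users never appear as competing queues; they only shrink the capacity seen by the one modeled user, which is why cross-user latency interactions fall out of the model.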
