On Fri, Aug 05, 2005 at 03:47:57PM -0600, Chris 'Xenon' Hanson wrote:
> >We're certainly not the first ones discussing this, there must be
> >volumes of papers about dynamics of TCP like these, maybe someone can
> >comment on whether this simple strategy is supposed to work like that :)
>
> Exactly 100% dead-on.
>
> I presume it works, it probably doesn't work optimally, but it's still
> better than nothing at all. And, I think it's worth setting up on almost
> any host that has less-than-unlimited bandwidth -- basically everyone. ;)
Actually, discussing the theory is not really necessary. You can verify
the theory with a rather simple empirical test. Set up three boxes like
this

  client <--- 100 mbps ---> fw <--- 100 mbps ---> server

without any other connections, so the links are all idle except for
traffic you explicitly cause during the test.

Run 'netstat -anbi if' for both interfaces of the firewall every second
and log the input/output byte counters over time, so you can later draw
nice graphs of throughput per second (with gnuplot or such); a rough
sketch of such a logging loop is at the end of this mail.

Then try downloading a large file (taking a minute or so each) from
server to client, through HTTP. Enable queuing on the firewall's
interface to the client. Try 50 mbps, 10 mbps and then 1 mbps. Each
time, download the same file, and keep the logged counter data.

If the theory is correct, the graphs will nicely show so, and you can
make a nice little web page which we can refer to the next time someone
argues about rate-limiting incoming traffic. If the graph for the
server interface deviates noticeably from the one for the client
interface (i.e. the server does not converge to a steady stream), that
would lay the theory to rest.

I think it's time someone did this in the proper amateurish fashion.
There are complicated theoretical papers and naive guesses, but no nice
middle ground. :)

Daniel
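
Something like this should do for the logging loop. It assumes an
OpenBSD-style 'netstat -nbI <interface>' output with Ibytes and Obytes
in the fifth and sixth columns; the interface names are just
placeholders, adjust to taste.

  #!/bin/sh
  # Log per-interface byte counters once a second.
  # Assumes 'netstat -nbI <if>' prints:
  #   Name Mtu Network Address Ibytes Obytes
  # (adjust the awk field numbers / row selection if yours differs).
  EXT=fxp0          # interface facing the server (placeholder name)
  INT=fxp1          # interface facing the client (placeholder name)
  OUT=counters.log

  while :; do
      t=$(date +%s)
      # first data row is usually the link-level entry
      ext=$(netstat -nbI $EXT | awk 'NR==2 {print $5, $6}')
      int=$(netstat -nbI $INT | awk 'NR==2 {print $5, $6}')
      # log: time ext_ibytes ext_obytes int_ibytes int_obytes
      echo "$t $ext $int" >> $OUT
      sleep 1
  done

Afterwards, per-second throughput is just the difference between
successive samples. For a server-to-client download, column 2 (bytes in
from the server) and column 5 (bytes out to the client) are the
interesting ones, e.g.

  awk 'NR>1 {print $1, $2-p2, $5-p5} {p2=$2; p5=$5}' counters.log > rates.log
  gnuplot -persist -e "plot 'rates.log' u 1:2 w l t 'in from server', '' u 1:3 w l t 'out to client'"

which gives the two curves to compare.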
