Quoting Ian McDonald:
| > It is comparable but not equivalent, since in the above you are using a
| > TBF, which will change the nature of the traffic. I use the following
| > (with qlen=10000):
| >
| > tc qdisc add dev eth0 root handle 1:0 netem delay ${delay}ms
| > tc qdisc add dev eth0 parent 1:1 pfifo limit ${qlen}
| >
| OK - I must confess to not studying the difference in queue
| disciplines greatly, but I did find quickly the default queue lengths
| were no use really.
I have done a little more testing and found that, in this setting, the FIFO
queue lengths apparently do not have a big impact. But I recall from earlier
experience with TBF and rate-limited traffic that an under-dimensioned queue
can lead to drops when the traffic is heavy.
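As a rough cross-check (this is a rule of thumb, not a value from the thread):
one common way to dimension such a FIFO is to make it hold at least one
bandwidth-delay product of MTU-sized packets. The 10 Mbit/s bottleneck rate
below is a hypothetical example; the 150ms RTT is the one used in these tests.

```shell
# Back-of-the-envelope pfifo sizing (assumption: BDP rule of thumb,
# hypothetical 10 Mbit/s bottleneck).
rate_bps=$((10 * 1000 * 1000))    # bottleneck rate in bit/s
rtt_ms=150                        # round-trip time in ms
pkt_bytes=1500                    # MTU-sized packets
qlen=$(( rate_bps * rtt_ms / 1000 / 8 / pkt_bytes ))
echo "BDP-sized pfifo limit: ${qlen} packets"   # -> 125
```

By that measure Ian's qlen=10000 is generously over-dimensioned, which would
explain why queue drops are not a factor in these runs.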
| > | The reason for me putting zero % loss on return path is that we are
| > | testing unidirectional flows. As such I wanted all feedback to come
| > | back and if we lost feedback packets it would increase variability. We
| > | should test loss of feedback packets for protocol reasons but not so
| > | much for performance testing.
| > Ah - I see, so this means that for performance testing I'd have to use:
| >
| > tc qdisc add dev eth0 root netem delay 75ms loss 10%   # forward path
| > tc qdisc add dev eth1 root netem delay 75ms             # reverse path (no feedback loss)
| >
| Yes - I just happened to have a line with loss 0% from my python code
| which is functionally equivalent to above.
I have added that to my script and tested a bit. It didn't decrease the
variability drastically (it is still 130 Kbps ... 244 Kbps for the 150ms RTT /
10% loss case and t=20 second runs), but then, we are not dealing with a
scientific instrument here, and there are lots of other factors which
influence the traffic.
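Putting the pieces of this thread together, the setup can be sketched roughly
as follows (a sketch only, assuming a two-NIC testbed with eth0 on the forward
path and eth1 on the reverse path; needs root and the sch_netem module):

```shell
#!/bin/sh
# Sketch of the emulation setup discussed above, using the values from
# the thread: 75ms one-way delay (150ms RTT), 10% forward loss, and
# Ian's qlen=10000 for the explicit FIFO behind netem.
delay=75      # one-way delay in ms
loss=10       # forward-path loss in percent
qlen=10000    # pfifo limit

# Forward path: delay plus loss, with an explicit FIFO as netem's child.
tc qdisc add dev eth0 root handle 1:0 netem delay ${delay}ms loss ${loss}%
tc qdisc add dev eth0 parent 1:1 pfifo limit ${qlen}

# Reverse path: delay only, so no feedback packets are dropped.
tc qdisc add dev eth1 root netem delay ${delay}ms

# Queue statistics; drops would show up here if qlen were too small.
tc -s qdisc show dev eth0
```

Tear-down between runs is just `tc qdisc del dev eth0 root` (and likewise for
eth1).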
I have put the script online at
http://www.erg.abdn.ac.uk/users/gerrit/dccp/testing_dccp/tfrc_test.tc
Suggestions for improvements (a pythonised version would certainly be nice)
are welcome.
-
To unsubscribe from this list: send the line "unsubscribe dccp" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html