On Fri, Aug 05, 2005 at 12:58:19PM -0600, Chris 'Xenon' Hanson wrote:

> > Think of it this way: Queueing says 'If you need to drop packets, drop
> > these packets before those packets.' That's all it says. And the simple
> > fact is that by the time the packets have reached your external interface
> > *no more packets need to be dropped.* This is because the only reason to
> > drop them is because you couldn't fit them on the connection, and you have
> > only received the ones that *would* fit.
>
> But, isn't this a fallacy?
>
> I thought the theory behind it was that by dropping packets (and
> therefore preventing the return ACK from the receiver) you incite the
> transmitting end's TCP stack to back off a little bit on the transmission
> rate? Isn't that the idea behind TCP rate control?
Well, it has this _side effect_ with well-behaved peers. You can drop
outgoing packets in the hope that the external peer is well-behaved and
will properly back off its TCP. But that's merely a hope. The peer can
completely ignore you and continue to send at full rate, which, in a DoS
scenario, it surely will.

Also, if you rely on well-behaved peers getting their throughput cut by
you for behaving well, you can expect more aggressive applications to
start misbehaving to achieve higher throughput. You're actually punishing
well-behaved peers in favor of aggressive ones. For instance, a P2P
application might very well start sending redundant retransmissions in
the hope of getting one of several copies through your queue, and it
will win against a nice player.
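To see the dynamic in numbers, here's a toy Python simulation I just
sketched (the capacity, round count and AIMD constants are all invented,
it's not modelling any real stack): two senders share a drop-tail link;
one halves its rate on loss, TCP-style, and the other keeps pushing
regardless of what gets dropped.

# Toy model: two senders share a drop-tail link of fixed capacity.
# The "fair" sender does TCP-style AIMD (additive increase,
# multiplicative decrease on loss); the "aggressive" one ignores
# loss entirely.  All names and numbers here are made up.

CAPACITY = 10.0   # packets the link can carry per round
ROUNDS = 50

def next_rate(rate, lost, plays_fair):
    if plays_fair and lost > 0:
        return max(1.0, rate / 2)   # back off: multiplicative decrease
    return rate + 1.0               # probe upward: additive increase

fair_rate = rude_rate = 1.0
fair_got = rude_got = 0.0
for _ in range(ROUNDS):
    offered = fair_rate + rude_rate
    share = min(1.0, CAPACITY / offered)     # fraction that fits
    fair_got += fair_rate * share
    rude_got += rude_rate * share
    fair_rate = next_rate(fair_rate, fair_rate * (1.0 - share), True)
    rude_rate = next_rate(rude_rate, rude_rate * (1.0 - share), False)

print("fair sender delivered:       %.1f packets" % fair_got)
print("aggressive sender delivered: %.1f packets" % rude_got)

Run it and the aggressive sender ends up with nearly the whole link
while the fair one is squeezed down to a trickle, which is exactly why
dropping packets only rate-limits peers that choose to cooperate.

Daniel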
