> This is a novel idea, but it leads to attacks against systems behind a
> firewall. Consider the situation:
>
> attacker -> pf firewall -> server
>
> Attacker sends the first fragment of a huge TCP segment, say 64k. It has
> enough information to be routed properly. Stock pf using "fragment
> reassemble" would wait until the rest of the TCP segment arrived and not
> forward the TCP segment until then.
>
> Your proposed modification would forward the fragment, causing the server
> to cache the fragment until it expired or until the rest of the segment
> arrived.
>
> Suppose now that the rest of the fragment never does arrive; instead the
> first fragment of another huge TCP segment is sent. Again, both the
> firewall and the server must cache the fragment. And so on; by this
> technique an attacker can exhaust resources not only on the firewall, which
> is expected to be hardened against such things, but also on any machine
> behind it, even though those machines are supposed to be protected.
>
> While the prospect of reducing latency is nice, I don't think the potential
> vulnerabilities make this a good idea.
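To make the quoted attack concrete: each "connection" in the flood is just a first IPv4 fragment (More-Fragments bit set, offset 0) under a fresh IP ID, with the promised follow-up fragments never sent, so every reassembler on the path holds state until the frag timeout fires. A minimal sketch of that first-fragment header shape, using only the stdlib (addresses and checksum are zeroed placeholders, not a working packet):

```python
import struct

def first_fragment_header(ip_id: int, payload_len: int) -> bytes:
    """Minimal IPv4 header for the FIRST fragment of a larger datagram:
    More-Fragments (MF) set, fragment offset 0. Placeholder addresses,
    checksum left zero; illustration only."""
    version_ihl = (4 << 4) | 5           # IPv4, 20-byte header
    total_len = 20 + payload_len
    MF = 0x2000                          # More-Fragments bit in flags/offset field
    frag_field = MF | 0                  # offset 0: this piece carries the TCP
                                         # header, so it is routable/filterable
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, 0, total_len,
                       ip_id, frag_field,
                       64, 6, 0,         # TTL, proto=TCP, checksum (zeroed)
                       bytes(4), bytes(4))

# Each new IP ID opens another pending reassembly with no completion in sight.
hdrs = [first_fragment_header(ip_id, 1480) for ip_id in range(3)]
for h in hdrs:
    flags_off = struct.unpack("!H", h[6:8])[0]
    assert flags_off & 0x2000            # MF set: more fragments promised
    assert flags_off & 0x1FFF == 0       # offset 0: first fragment
```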
What about adding a new check on the fragment path to limit states? e.g.
"set limit fragment.start 100"

The only issue I can see with this is what to do when the limit is reached:

A) drop new connections
B) fall back to normal fragment reassembly
C) drop the oldest, with the added issue of telling the machine behind the
   firewall to also drop that connection.

Other idea - maybe tie a "set timeout fragment.start" in with the adaptive
timeouts?

--
Craig
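For reference, stock pf.conf already has knobs that bound fragment reassembly state, which the proposed limit would sit alongside. A sketch of how the pieces might line up; the `fragment.start` directives are the hypothetical syntax from this thread, not something pf actually parses:

```
# Existing knobs in stock pf.conf that bound reassembly state:
set limit frags 5000           # max entries in the fragment reassembly buffer
set timeout frag 30            # seconds an incomplete fragment is held

# Hypothetical syntax for the proposal above (NOT in stock pf):
# set limit fragment.start 100
# set timeout fragment.start 15

# Adaptive timeouts already shrink state expiry as the state table fills;
# the suggestion is to apply the same idea to fragment.start:
set timeout { adaptive.start 6000, adaptive.end 12000 }
```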
