On 2018-08-29 18:34, Tom Herbert wrote:

> Joe,
> 
> End hosts are already quite capable of dealing with reassembly,

Regardless, middleboxes shouldn't avoid their own effort by creating
work for others. A corollary to the Postel Principle should be "you
make the mess, you clean it up".

FWIW, the idea of dumping just the first fragment and letting the
receiver clean it up was tried in ATM in the late 1980s and it failed
badly. It turns out that congestion isn't always a point problem: when
multiple routers in a path are overloaded (which can and does happen),
not dropping the rest of the fragments can cause downstream congestion
that wouldn't have happened otherwise, and then cause drops of other
"real" packets.

> I
> think you'll find the average middlebox is not prepared to handle it.

Sure, but that's a problem I'm hoping we can fix rather than encourage
continued bad behavior. 

> In truth, for this case it really doesn't save the hosts much at all.

It won't prevent endpoint attacks, but it does mitigate the effect of
useless fragment processing. And, as per above, it avoids drops of
other packets that could and should have made it through.

> A DOS attack on fragmentation is still possible by the attacker 
> sending all but the last fragment to a port that is allowed by the
> firewall. Also, a destination host will receive all the fragments for
> reassembly by virtue of it having the destination address in the
> packets. As discussed previously, there's no guarantee that a firewall
> will see all the packets in a fragment train in a multihomed
> environment-- routing may take packets along different paths so they
> hit different firewalls for a site. The answer to that seems to be
> to somehow coordinate across all the firewalls for a site to act as a
> single host-- I suppose that's possible, but it would be nice to see
> the interoperable protocol that makes that generally feasible at any
> scale.
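The all-but-the-last-fragment attack described above can be sketched with a toy reassembly cache (names and structure are illustrative, not from any real IP stack): each incomplete fragment train pins buffer state until the final fragment arrives or a timer fires, and that held state is exactly what an attacker consumes by withholding the last fragment.

```python
import time

class ReassemblyCache:
    """Toy fragment reassembly cache (illustrative only)."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        # (src, dst, ident) -> {"frags": {offset: data}, "ts": arrival, "total": None}
        self.trains = {}

    def add(self, src, dst, ident, offset, data, more_fragments, now=None):
        """Buffer one fragment; return the reassembled payload when complete."""
        now = time.monotonic() if now is None else now
        self._expire(now)
        key = (src, dst, ident)
        train = self.trains.setdefault(key, {"frags": {}, "ts": now, "total": None})
        train["frags"][offset] = data
        if not more_fragments:                    # last fragment fixes total length
            train["total"] = offset + len(data)
        if train["total"] is not None:
            have = sum(len(d) for d in train["frags"].values())
            if have >= train["total"]:            # naive completeness check
                del self.trains[key]
                return b"".join(d for _, d in sorted(train["frags"].items()))
        return None                               # still waiting -- state is held

    def _expire(self, now):
        stale = [k for k, t in self.trains.items() if now - t["ts"] > self.timeout]
        for k in stale:
            del self.trains[k]

    def pending(self):
        return len(self.trains)
```

An attacker sending only first fragments for many distinct IDs inflates `pending()` until the timeout sweep runs; the timeout bounds the damage but cannot eliminate it, which is why the firewall seeing only part of a train doesn't remove the host's exposure.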

Compared to other solutions proposed in this thread, that one is nearly
trivial to design. The issue is having operators - who deploy these
devices in ways that they should know need this feature - enable it
properly (i.e., point them all at each other). 

>> Further, acting as a host is always the right thing for any node that
>> sources packets with its own IP address -- that includes NATs and regular
>> proxies. The behavior of transparent proxies is more complex, but can be
>> similarly reasoned from the appropriate equivalence model.
> 
> Proxies aren't quite the same though.

They are three different things, as noted in the paper I posted earlier,
but they all are variants of requiring host behavior of some sort. 

> An explicit proxy at least is
> both receiving and sourcing packets based on its own address. A NAT only
> sources or receives packets with its own address half the time.

Sure, but there's more to it than just using the address...(see next
note) 

> Firewalls never do, and don't even need a host address.

Transport protocols are endpoint demultiplexers and state managers;
anything that uses that info and/or state is also acting as a host and
needs to follow at least some of the host requirements (all those that
apply to transports, including translation of signaling related to
transport protocols and ports).
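A minimal sketch of the point above, with hypothetical names: any box that demultiplexes on the transport 5-tuple is maintaining per-connection state, which is precisely what a host's transport layer does.

```python
# Per-connection state table, keyed exactly as a host's transport layer
# would key it. A firewall or NAT doing this is performing host behavior
# regardless of whether it owns an address in the packets.
connections = {}

def demux_key(proto, src_ip, src_port, dst_ip, dst_port):
    """Transport 5-tuple: the demultiplexing key a host would compute."""
    return (proto, src_ip, src_port, dst_ip, dst_port)

def classify(pkt):
    """Look up (or create) connection state for a packet."""
    key = demux_key(pkt["proto"], pkt["src"], pkt["sport"],
                    pkt["dst"], pkt["dport"])
    state = connections.setdefault(key, {"packets": 0})
    state["packets"] += 1
    return state
```

The state table is the tell: once a middlebox keeps it, signaling that a host would have to handle (ICMP errors, fragment trains carrying the ports only in the first fragment, and so on) becomes the middlebox's problem too.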

Joe
_______________________________________________
Int-area mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/int-area
