> On Aug 30, 2018, at 8:56 AM, Tom Herbert <t...@herbertland.com> wrote:
> 
>> On Wed, Aug 29, 2018 at 7:58 PM, Joe Touch <to...@strayalpha.com> wrote:
>> 
>> 
>> 
>> 
>> On 2018-08-29 18:34, Tom Herbert wrote:
>> 
>> 
>> Joe,
>> 
>> End hosts are already quite capable of dealing with reassembly,
>> 
>> 
>> Regardless, middleboxes shouldn't avoid their own effort by creating work
>> for others. A corollary to the Postel Principle should be: "you make the
>> mess, you clean it up."
>> 
>> FWIW, the idea of dropping just the first fragment and letting the receiver
>> clean up the rest was tried in ATM in the late 1980s, and it failed badly. It
>> turns out that congestion isn't always a point problem - when multiple routers
>> in a path are overloaded (which can and does happen), forwarding the rest of
>> the fragments can cause downstream congestion that wouldn't have happened
>> otherwise, which then drops other "real" packets.
>> 
>> 
>> I think you'll find the average middlebox is not prepared to handle it.
>> 
>> 
>> Sure, but that's a problem I'm hoping we can fix rather than encourage
>> continued bad behavior.
>> 
>> 
>> In truth, for this case it really doesn't save the hosts much at all.
>> 
>> 
>> It won't prevent endpoint attacks, but it does mitigate the effect of
>> useless fragment processing. And, as per above, it avoids drops to other
>> packets that could/should have made it through.
>> 
>> 
>> A DoS attack on fragmentation is still possible: the attacker sends all
>> but the last fragment to a port that is allowed by the firewall. Also, a
>> destination host will receive all the fragments for reassembly by virtue
>> of having the destination address in the packets. As discussed previously,
>> there's no guarantee that a firewall will see all the packets in a
>> fragment train in a multihomed environment - routing may take packets
>> along different paths, so they hit different firewalls for a site. The
>> answer to that seems to be to somehow coordinate all the firewalls for a
>> site to act as a single host - I suppose that's possible, but it would be
>> nice to see the interoperable protocol that makes that generally feasible
>> at any scale.
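A minimal sketch of the all-but-the-last-fragment attack, assuming an idealized reassembler (names invented; real stacks bound this state with timers, e.g. the 60-second reassembly timeout in RFC 8200). Incomplete trains simply accumulate:

```python
# Hypothetical model of a fragment reassembler (illustrative only).
# An attacker who sends every fragment *except* the last one leaves an
# incomplete train pinned in memory until a timer reclaims it.
class Reassembler:
    def __init__(self):
        self.pending = {}  # (src, dst, frag_id) -> [(offset, data, more), ...]

    def add(self, src, dst, frag_id, offset, data, more_fragments):
        key = (src, dst, frag_id)
        frags = self.pending.setdefault(key, [])
        frags.append((offset, data, more_fragments))
        return self._try_complete(key, frags)

    def _try_complete(self, key, frags):
        # Complete only when the final fragment has arrived and the
        # byte range is contiguous from offset 0.
        if not any(not more for _, _, more in frags):
            return None
        end = 0
        for offset, data, _ in sorted(frags):
            if offset != end:
                return None
            end = offset + len(data)
        del self.pending[key]
        return b"".join(data for _, data, _ in sorted(frags))

r = Reassembler()
for i in range(1000):  # 1000 attack trains, each missing its last fragment
    r.add("attacker", "victim", i, 0, b"x" * 8, True)
done = r.add("h1", "h2", 9999, 0, b"hello", False)  # one legitimate packet
print(len(r.pending), done)  # 1000 b'hello' -- 1000 incomplete trains held
```

The legitimate single-fragment packet completes and frees its state; the 1000 attack trains sit in `pending` until a timeout fires, which is exactly the state exhaustion being described.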
>> 
>> 
>> Compared to other solutions proposed in this thread, that one is nearly
>> trivial to design. The issue is getting operators - who deploy these devices
>> in ways they should know require this feature - to enable it properly (i.e.,
>> point them all at each other).
>> 
> Joe,
> 
> I would be amazed if firewall vendors consider this "nearly trivial to
> design".

The coordination protocol is, and that’s all I claimed. 
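One illustrative form such coordination could take (a sketch, not any deployed or proposed vendor protocol): each firewall at a site deterministically maps a fragment train's key to one "owner" firewall and tunnels stray fragments there, so the owner sees the whole train no matter which path each fragment took. `FIREWALLS`, `owner`, and `handle_fragment` are all invented names:

```python
import hashlib

FIREWALLS = ["fw-a", "fw-b", "fw-c"]  # hypothetical firewalls at one site

def owner(src, dst, frag_id):
    # Deterministic mapping: every firewall computes the same owner for
    # a given (src, dst, fragment-id) train, with no per-train signaling.
    key = f"{src}|{dst}|{frag_id}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return FIREWALLS[digest % len(FIREWALLS)]

def handle_fragment(ingress_fw, src, dst, frag_id):
    fw = owner(src, dst, frag_id)
    if fw == ingress_fw:
        return ("reassemble-here", fw)
    return ("tunnel-to", fw)  # forward the stray fragment to its owner

# Whichever firewall a fragment hits, the train converges on one owner.
owners = {handle_fragment(fw, "2001:db8::1", "2001:db8::2", 42)[1]
          for fw in FIREWALLS}
print(len(owners))  # 1
```

The mapping itself is the "nearly trivial" part; the hard part, as noted above, is operational - getting every firewall in the set configured consistently.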

> Reassembly requires memory to hold packets and a non-work-conserving
> datapath, requires state to be maintained, and the aforementioned problem
> of consistent routing of fragments needs to be resolved. A middlebox would
> be performing reassembly on behalf of some number of backend hosts, so its
> memory requirement is some multiple of that needed by an individual host.
> Non-work-conserving means packets need to be queued at the device, which
> requires cache management and introduces delay. Requiring state in
> _stateless_ devices is a problem,

It would be a problem if they were stateless, but they’re not. So I’m suggesting 
these devices need to add more state and do the work to clean up the mess they make. 
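Back-of-the-envelope arithmetic on that state multiplier: the per-host figure below is modeled on Linux's default `ipfrag_high_thresh` (4 MB of reassembly buffer), and the host count is made up for illustration:

```python
# Rough scale of the reassembly state a middlebox takes on when it
# reassembles for a whole site. Per-host budget mirrors Linux's default
# ipfrag_high_thresh (4 MB); the host count is invented.
PER_HOST_BUDGET = 4 * 1024 * 1024   # bytes of reassembly buffer per host
BACKEND_HOSTS = 10_000              # hypothetical site size

middlebox_budget = PER_HOST_BUDGET * BACKEND_HOSTS
print(middlebox_budget // (1024 ** 3))  # 39 (GiB of worst-case state)
```

Whether ~40 GiB of worst-case buffer is "a lot" depends on the box, but it makes concrete why the memory requirement is a multiple of a single host's.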

> it's likely they have neither the
> mechanisms nor the memory to support reassembly. And then there are the
> denial-of-service considerations... the middlebox is now an obvious
> target for a DoS attack on reassembly. We need to deal with this on
> hosts, but the attacks are going to be worse on middleboxes. Consider
> that a middlebox wouldn't normally know all possible hosts in the
> network, so it may very well end up reassembling packets for
> destinations that don't even exist! And on top of all of this,
> applications are still motivated to avoid fragmentation for other
> reasons, so I suspect vendors will view this as a lot of work for very
> little benefit.

As they did for devices that don’t support v6. 

> 
>> 
>> Further, acting as a host is always the right thing for any node that
>> sources packets with its own IP address -- that includes NATs and regular
>> proxies. The behavior of transparent proxies is more complex, but can be
>> similarly reasoned from the appropriate equivalence model.
>> 
>> 
>> Proxies aren't quite the same though.
>> 
>> 
>> They are three different things, as noted in the paper I posted earlier, but
>> they all are variants of requiring host behavior of some sort.
>> 
>> 
>> 
>> An explicit proxy, at least, is
>> both receiving and sourcing packets based on its own address. NATs only
>> source or receive packets with their own address half the time.
>> 
>> 
>> Sure, but there's more to it than just using the address...(see next note)
>> 
>> 
>> 
>> Firewalls never do, and don't even need a host address.
>> 
>> 
>> Transport protocols are endpoint demultiplexers and state managers; anything
>> that uses that info and/or state is also acting as a host and needs to
>> follow at least some host requirements too (all that apply to transports,
>> including translation of signaling related to transport protocols and
>> ports).
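The quoted point about transports being endpoint demultiplexers and state managers can be made concrete: any device that keys behavior off the transport 5-tuple is maintaining the same kind of state a host's transport layer does. A minimal sketch, with invented names:

```python
# Transport-style demultiplexing in a middlebox (illustrative names only).
# Keying per-connection state off the 5-tuple is what a host's transport
# layer does -- the sense in which such a device "acts as a host" and
# inherits at least some host requirements.
conn_table = {}  # 5-tuple -> per-connection state

def demux(src_ip, src_port, dst_ip, dst_port, proto):
    key = (src_ip, src_port, dst_ip, dst_port, proto)
    return conn_table.setdefault(key, {"state": "NEW", "bytes": 0})

c = demux("2001:db8::1", 49152, "2001:db8::2", 443, "tcp")
c["state"] = "ESTABLISHED"
# Later packets with the same 5-tuple find the same state.
print(demux("2001:db8::1", 49152, "2001:db8::2", 443, "tcp")["state"])  # ESTABLISHED
```

Whether the device is a firewall, NAT, or proxy, once it does this lookup it is tracking endpoint state, not merely forwarding.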
> 
> Maybe so, but at best middleboxes can only approximate host behavior.
> Requiring them to perform reassembly is only addressing one symptom of
> the disease. The real disease is intermediate devices that try to
> insert themselves into transport layer protocols by DPI or trying to
> infer transport layer state. Calling them "hosts" doesn't change the
> fact that such devices will break the end-to-end model and ossify the
> Internet. As they say, "You can put lipstick on a pig, but it's still
> a pig!" :-).

That’s their problem, and it will increasingly render them less useful over time. 

Joe

> 
> Tom
> 
>> 
>> Joe

_______________________________________________
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area
