> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On 
> Behalf Of Robin Whittle
> Sent: Thursday, January 28, 2010 2:44 AM
> To: RRG
> Subject: Re: [rrg] SEAL critique, PMTUD, RFC4821 = vapourware
> 
> Short version:      In the absence of better documented evidence that
>                     filtering of incoming ICMP PTBs is anything more
>                     than sporadic and quickly corrected, I can't
>                     believe what Tony wrote.
>
> 
> Hi Tony,
> 
> You wrote:
> 
> >>                  I argue against Fred Templin's position that
> >>                  ordinary RFC1191 DF=1 Path MTU Discovery (and
> >>                  therefore its RFC1981 IPv6 equivalent) is
> >>                  "busted".
> >>
> >>                  Where is the evidence that networks filtering out
> >>                  PTB (Packet Too Big) messages is a significant
> >>                  problem?
> > 
> > 
> > This happens.  Consult some operator folks, privately and quietly.
> > Many enterprises blocked all inbound ICMP when DDoS attacks started
> > happening.
> 
> So a company XX does this, and to the extent that:
> 
>    1 - Any host in its network initiating a session involving "long"
>        packets being sent (the TCP layer or application tries to send
>        as long a packet as it can), where the PMTU on the path to the
>        destination host is lower outside XX's network than within,
>        will find RFC1191 PMTUD unusable and will therefore be unable
>        to communicate - unless the application backs off the packet
>        length due to this failure.
> 
>    2 - Any host outside XX's network which initiates the session
>        and which involves XX's host sending "long" packets will
>        similarly be unable to communicate unless the XX hosts have
>        their own, non-RFC1191 approach to reducing packet size.
> 
> It seems like 1 would be immediately noticed within XX and presumably
> lead to an end to the filtering of incoming PTBs.
> 
> 2 could be trickier for XX's administrators to notice, since those
> who are most likely to notice it are scattered far and wide - unless
> it is noticed by whoever runs the XX side of these communications.
> 
> Where is the occurrence of this documented in anything more than the
> anecdotal way you describe it above?  I don't see any reason why it
> would always be hush-hush.

A recent accidental block was discussed in the thread
http://www.gossamer-threads.com/lists/nsp/ipv6/20779, where Google
was found to be blocking ICMPv6 PTB messages.

I have heard of many similar problems where ICMPv6 PTB messages are
blocked; these take hours, and occasionally days, to resolve.  Some
of them can be attributed to the growing pains of IPv6, though --
perhaps a router's ICMPv6-per-second limit was set too low and simply
needed to be raised, or an ICMPv6 filter was overly aggressive.

All modern routers can limit the number of ICMP messages they send
per second.  So if a router has reason to send a lot of ICMPs --
during an attack, say, or when an interface is saturated with
non-attack traffic -- it may be unable to send a legitimate PTB.
Forwarding real traffic is more important than generating ICMPs.

The Cisco command is "ip icmp rate-limit unreachable"; Juniper's is
"icmpv4-rate-limit".  These limits are actively discussed on
operations lists, which implies they are actively used on the
Internet.
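As a concrete sketch (the numeric values here are illustrative, not
recommendations -- check your platform's documentation), such limits
might look like this:

```
! Cisco IOS: send at most one unreachable/PTB per 500 ms
ip icmp rate-limit unreachable 500

# Juniper JUNOS: cap ICMPv4 generation at 100 packets per second
set system internet-options icmpv4-rate-limit packet-rate 100
```

When the limit is exceeded -- say during an attack that elicits many
unreachables -- a legitimate PTB can be silently dropped along with
the rest.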
 
> 
> >>                  To the extent that any such problem exists, why
> >>                  should this be accepted and further protocols
> >>                  created to work around it?  I think the networks
> >>                  which do this are non-compliant with the inter-
> >>                  working requirements of the Internet.  These
> >>                  networks should change their ways.
> > 
> > You have the cart before the horse.  Reality is not warped to fit
> > the needs of the network architecture; the network architecture must
> > be molded to deal with reality.  The fact of the matter is that
> > today, we have a hostile environment.  Anything and everything that
> > can be used to create an attack will be used.  To protect themselves,
> > many people will happily throw the baby (or at least its signaling
> > protocol) out with the bathwater.
> > 
> > As a result, ICMP is dead.  Long live ICMP.
> 
> From what you wrote, I understand you regard RFC 1191 and RFC 1981
> PMTUD to be so unreliable that an alternative must be developed and
> implemented on all hosts.
> 
> Assuming this problem has been around for a while, then why wasn't
> there greater support for RFC 4821?
> 
> In the time the PMTUD WG existed, a handful of people exchanged
> about a hundred emails.  A standards-track RFC emerged, and it is
> the only thing around which can solve the problem you refer to,
> other than applications backing off packet lengths themselves.
> 
> Maybe many UDP-based applications do this already - locally doing the
> same thing that RFC 4821 suggests, but without sharing any
> information with other packetization layers.

draft-petithuguenin-behave-stun-pmtud does this for UDP.  I know of
one video application that plans to use the technique described in
that I-D.
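The probing idea is easy to sketch.  Below is a minimal, hypothetical
model of RFC 4821-style search over UDP -- `probe_acked` stands in for
sending a padded, DF-set probe datagram and waiting for an
acknowledgement; it is not the I-D's actual API:

```python
# Sketch of packetization-layer PMTU discovery (RFC 4821 style) for
# UDP.  Instead of relying on ICMP PTB delivery, the endpoint sends
# padded probe datagrams with DF set and binary-searches on which
# sizes are acknowledged by the peer.

def discover_pmtu(probe_acked, floor=1280, ceiling=9000):
    """Binary-search for the largest probe size that gets through.

    probe_acked(size) -> bool is a hypothetical callback: True if a
    probe of the given total size reached the peer and was acked.
    """
    if not probe_acked(floor):
        raise RuntimeError("even the minimum probe size was lost")
    lo, hi = floor, ceiling          # lo is always acked; hi untested
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if probe_acked(mid):
            lo = mid                 # mid got through: raise the floor
        else:
            hi = mid                 # mid was lost: lower the ceiling
    return hi if probe_acked(hi) else lo

# Simulated path with a 1492-byte MTU (e.g. PPPoE), no probe loss:
print(discover_pmtu(lambda size: size <= 1492))  # -> 1492
```

The real protocol also has to cope with probe loss unrelated to MTU
(hence retransmitting probes), but the search itself is this simple.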

> But TCP is a protocol which frequently is ready to send "long"
> packets - as long as its local MSS allows. 

And fixing PMTUD is often done by clamping the TCP MSS.  Cisco
equipment has long supported that functionality (and it was mentioned
as a quick fix in the thread I cited above).  This "fixes" PMTUD
failures.  The Cisco command is "ip tcp adjust-mss"; the Juniper
commands are "set flow all-tcp-mss" and "set flow tcp-mss".
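For illustration (the 1452 here assumes a hypothetical 1492-byte
PPPoE path, minus 40 bytes of IPv4+TCP headers), MSS clamping might
look like:

```
! Cisco IOS: rewrite the MSS in TCP SYNs crossing this interface
interface FastEthernet0/0
 ip tcp adjust-mss 1452

# Juniper ScreenOS: clamp the MSS for all TCP sessions
set flow all-tcp-mss 1452
```

Because the MSS is carried in the SYN, a middlebox can shrink it
without either endpoint ever needing to see a PTB.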

> It is reasonable to assume that essentially no TCP sessions today
> use RFC 4821 PMTUD, since the only recorded implementation of it is
> a minimal one in the Linux kernel, which is turned off by default.
> 
> So on one hand, you are telling us this PTB filtering happens often
> enough to be a significant problem and is not going to go away even
> though it must be causing some noticeable communication failure for
> the networks which do it.
> 
> On the other, I see there is no alternative PMTUD mechanism in use
> for TCP exchanges.
> 
> Yet if what you say is true, then the large TCP-dependent subset of
> Internet communications must be going to hell in a handbasket right
> now, as we speak, and the only hope of fixing it is by implementing
> RFC 4821 PMTUD on every host, if only for TCP.
> 
> I look around and observe that TCP always seems to work for me and
> everyone I know - and these TCP sessions are frequently using the
> biggest packets the RFC 1191 PMTUD-controlled stack allows.
>
> If what you wrote is true, then why isn't there a much greater
> interest in PMTUD beyond RFC 1191 / 1981?  Why didn't a large number
> of people work on a much better RFC 4821 than we have now - which
> merely advises what should be done, without giving any framework for
> how it should actually be implemented in the stack and in apps, with
> the requisite API?
> 
> Even Matt Mathis http://staff.psc.edu/mathis/ writes:
> 
>   MTU is still a huge bottleneck, but this project is on hold.
>
> He seems primarily concerned with using packets longer than 1500
> bytes.
>
> I just read the archives of the MTU mailing list - 12 messages since
> 2005.  There was only cursory mention of the filtering problem you
> insist is so real and significant.
> 
> I just scanned each message in the archives of the IETF PMTUD WG:
> 
>   http://www.ietf.org/mail-archive/web/pmtud/current/maillist.html
> 
> from August 2005 to its closure in 2008.  I saw occasional references
> to black holes (presumably due to the filtering you mention) but no
> references to how prevalent this was.
> 
> Where is the evidence that PTB filtering is ever more than a
> transitory, mistaken, condition?

Sounds like you want a research paper?

-d

_______________________________________________
rrg mailing list
[email protected]
http://www.irtf.org/mailman/listinfo/rrg
