Re: IPv6 day and tunnels

2012-06-04 Thread Jimmy Hess
On 6/3/12, Jeroen Massar jer...@unfix.org wrote:
 If one is so stupid to just block ICMP then one should also accept that one
 loses functionality.
ICMP tends to get blocked by firewalls by default; there are
legitimate reasons to block ICMP, especially with v6. Security device
manufacturers tend to indicate all the "lost functionality" is
"optional functionality" not required for a working device.

 If the people in the IETF would have decided to inline the headers that are
 ICMPv6 into the IPv6 header then there for sure would have been people who
 would have blocked the equivalent of PacketTooBig in there too. As long as

Over-reliance on PacketTooBig is a source of the problem: the idea
that too-large packets should be blindly generated under ordinary
circumstances, carried many hops, and dropped with an error returned
over a potentially long distance, which the sender in each direction is
expected to see and act upon, at the expense of high latency for both
peers during initial connection establishment.

Routers don't always know when a packet is too big to reach their next
hop, especially in the case of broadcast traffic, so they don't know to
return a PacketTooBig error. Especially in the case of L2 tunneling,
PPPoE for example, there may be an L2 bridge on the network in between
routers with a lower MRU than either of the routers' immediate links,
e.g. because PPP, 802.1p/q, MPLS labels, or other overhead are affixed
to Ethernet frames somewhere on the switched path between routers.
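As a rough illustration of the overhead arithmetic above (the sizes are the textbook values for these encapsulations; this is a sketch, not a survey of real deployments):

```python
# Usable MRU left for IP once L2 encapsulation overhead is subtracted from
# a fixed 1500-byte Ethernet payload. An L2 bridge with a fixed maximum
# frame size loses payload to every header it adds.

ETHERNET_PAYLOAD = 1500

OVERHEAD = {
    "pppoe": 8,        # PPPoE header (6 bytes) + PPP protocol field (2 bytes)
    "dot1q": 4,        # one 802.1Q/p VLAN tag
    "mpls_label": 4,   # each MPLS label pushed on the stack
}

def effective_mru(*encaps: str) -> int:
    """MRU available to IP after each listed encapsulation takes its share."""
    return ETHERNET_PAYLOAD - sum(OVERHEAD[e] for e in encaps)

print(effective_mru("pppoe"))                              # 1492
print(effective_mru("dot1q", "mpls_label", "mpls_label"))  # 1488
```

Neither router on such a path sees the reduced MRU on its own interface, which is exactly why no PacketTooBig gets generated.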

The problem is not that tunneling is bad; the problem is that the IP
protocol has issues. The protocol should be designed so that there
will not be issues with tunneling or Ethernet links with different MRUs.

The real solution is for reverse-path MTU (MRU) information to be
discovered between L3 neighbors by L2 probing, with the discovered MRU
exchanged using NDP so routers know the lowest MRU on each directly
connected interface, and then for the worst-case reduction in reverse-path
MTU to be included in the routing information passed via L3 routing
protocols (both IGPs and EGPs) to the next hop.
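A hypothetical sketch of that propagation rule (no real IGP or EGP carries such an attribute today; everything here is invented for illustration):

```python
# Hypothetical: each router that re-advertises a route folds the MRU of the
# interface it learned the route on into a "worst-case reverse MTU"
# attribute, so the value a distant router finally learns is the minimum
# along the whole path.

def readvertise(route_min_mtu: int, learning_interface_mru: int) -> int:
    """Attribute value to attach when passing the route to the next hop."""
    return min(route_min_mtu, learning_interface_mru)

# A route originated on a jumbo-frame (9000) network, crossing a plain
# Ethernet hop and a PPPoE hop, is learned with a usable MTU of 1492.
mtu = 9000
for hop_mru in (1500, 1492):
    mtu = readvertise(mtu, hop_mru)
print(mtu)  # 1492
```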

That is, no router should be allowed to enter a route into its
forwarding table until the worst-case reverse MTU to reach that
network is discovered, with the exception that a device may be
configured with a default route and some directly connected networks.

The need for Too Big  messages is then restricted to nodes connected
to terminal networks.  And there should be no such thing as packet
fragmentation.


--
-JH



Re: IPv6 day and tunnels

2012-06-04 Thread Jeroen Massar
On 3 Jun 2012, at 22:41, Masataka Ohta mo...@necom830.hpcl.titech.ac.jp wrote:

 Joe Maimon wrote:
 
 So IPv6 fixes the fragmentation and MTU issues of IPv4 by how exactly?
 
 Completely wrongly.

Got a better solution? ;)

 Or was the fix incorporating the breakage into the basic design?
 
 Yes.
 
 Because IPv6 requires ICMP packet too big generated against
 multicast, it is designed to cause ICMP implosions, which
 means ISPs must filter ICMP packet too big at least against
 multicast packets and, as distinguishing them from unicast
 ones is not very easy, often against unicast ones.

I do not see the problem that you are seeing; to address the two issues in your
slides:
 - for multicast just set your max packet size to 1280; no need for pMTU and
thus no need for this implosion you think might happen. The sender controls
the packet size anyway, and one does not want to frag packets for multicast,
thus 1280 solves all of it.

 - when doing IPv6 inside IPv6, the outer path has to be 1280+tunnel overhead;
if it is not, then you need to use a tunneling protocol that knows how to frag
and reassemble, as it is acting as a medium with an MTU less than the minimum
of 1280.
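The arithmetic in that second point, as a small sketch (assuming plain IPv6-in-IPv6, i.e. a 40-byte outer header, and ignoring extension headers):

```python
# Outer-path MTU needed so that a 1280-byte inner IPv6 packet still fits
# after encapsulation; below that, the tunnel protocol itself must
# fragment and reassemble.

IPV6_MIN_MTU = 1280
IPV6_HEADER = 40  # fixed IPv6 header: the overhead of plain IPv6-in-IPv6

def required_outer_mtu(tunnel_overhead: int = IPV6_HEADER) -> int:
    return IPV6_MIN_MTU + tunnel_overhead

def tunnel_must_fragment(outer_path_mtu: int,
                         tunnel_overhead: int = IPV6_HEADER) -> bool:
    """True when the medium's MTU is too small and the tunnel has to frag."""
    return outer_path_mtu < required_outer_mtu(tunnel_overhead)

print(required_outer_mtu())        # 1320
print(tunnel_must_fragment(1500))  # False: ordinary Ethernet suffices
print(tunnel_must_fragment(1280))  # True: tunnel must frag and reassemble
```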

Greets,
 Jeroen




Re: IPv6 day and tunnels

2012-06-04 Thread Jeroen Massar
On 3 Jun 2012, at 23:20, Jimmy Hess mysi...@gmail.com wrote:

 On 6/3/12, Jeroen Massar jer...@unfix.org wrote:
 If one is so stupid to just block ICMP then one should also accept that one
 loses functionality.
 ICMP tends to get blocked by firewalls by default

Which firewall product does that?

 ; There are
 legitimate reasons to block ICMP, esp w V6.

The moment one decides to block ICMPv6 you are likely breaking features of
IPv6, so choose wisely. There are several RFCs pointing out what one could
block and what one must never block. Packet Too Big is a very well known one
that one should not block.
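For reference, RFC 4890 is the usual document for this guidance; the set below is my summary of its "must not drop" error messages for transit firewalls, not a complete policy:

```python
# ICMPv6 error messages that RFC 4890 says transit firewalls must not
# drop, keyed by type number. Illustration only; consult the RFC for the
# full classification (NDP, echo, and the "normally drop" sets).

MUST_NOT_DROP = {
    1: "Destination Unreachable",
    2: "Packet Too Big",      # dropping this breaks PMTUD outright
    3: "Time Exceeded",
    4: "Parameter Problem",
}

def breaks_ipv6_if_dropped(icmpv6_type: int) -> bool:
    """True if filtering this ICMPv6 type endangers core IPv6 behavior."""
    return icmpv6_type in MUST_NOT_DROP

print(breaks_ipv6_if_dropped(2))    # True: Packet Too Big, the case at hand
print(breaks_ipv6_if_dropped(139))  # False: Node Information Query
```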

If you decide to block anyway then well, your problem that your network breaks.

   Security device
 manufacturers tend to indicate all the  lost functionality  is
 optional functionality  not required for a working device.

I suggest that you vote with your money and choose a different vendor if they
shove that down your throat. Upgrading braincells is another option though ;)

 If the people in the IETF would have decided to inline the headers that are
 ICMPv6 into the IPv6 header then there for sure would have been people who
 would have blocked the equivalent of PacketTooBig in there too. As long as
 
 Over reliance on PacketTooBig is a source of the problem;  the idea
 that too large packets should be blindly generated under ordinary
 circumstances, carried many hops, and dropped with an error returned a
 potentially long distance that the sender in each direction is
 expected to see and act upon, at the expense of high latency for both
 peers, during initial connection establishment.

High latency? You do realize that it is at most one extra round trip, and
that there is no shorter way to inform your side of this situation?

 Routers don't always know when a packet is too big to reach their next
 hop,  especially in case of Broadcast traffic,  

You do realize that IPv6 does not have the concept of broadcast, don't you?! ;)

There is only: unicast, multicast and anycast
(and anycast is just unicast as it is a routing trick)

 so they don't know to
 return a PacketTooBig error, especially  in the case of L2 tunneling
 PPPoE for example,  there may be a L2 bridge on the network in between
 routers with a lower MRU than either of the router's immediate links,
 eg  because PPP, 802.1p,q + MPLS labels, or other overhead are affixed
 to Ethernet frames,  somewhere on the switched path between routers.

If you have a broken L2 network there is nothing that an L3 protocol can do
about it. Please configure it properly; stuff tends to work better that way.

 The problem is not that Tunneling is bad;  the problem is  the IP
 protocol has issues.  The protocol should be designed so that there
 will not be issues with tunnelling or different MRU  Ethernet links.

There is no issue as long as you properly respond with PtB and process them
when received. If your medium is less than 1280 then your medium has to solve
the fragging of packets.

 The real solution is for reverse path MTU (MRU) information to be
 discovered between L3 neighbors by L2 probing,  and discovered MRU
 exchanged using NDP, so routers know the lowest MRU on each directly
 connected interface, then for the worst case reduction in reverse path
 MTU to be included in the routing information  passed via L3 routing
 protocols both IGPs and EGPs  to the next hop.

You do realize that NDP only works on the local link and not further?! ;)

Also, carrying MTU and full routing info to end hosts is definitely not 
something a lot of operators would like to do let alone see in their networks. 
Similar to you not wanting ICMP in your network even though that is the agreed 
upon standard.

 That is, no router should be allowed to enter a route into its
 forwarding table, until the worst case reverse MTU is discovered, to
 reach that network,   with the exception,  that a  device may be
 configured with a default route, and some directly connected networks.

If you want this in your network just configure it everywhere to 1280 and then 
process and answer PtBs on the edge. Your network, your problem that you will 
never use jumbo frames.

 The need for Too Big  messages is then restricted to nodes connected
 to terminal networks.  And there should be no such thing as packet
 fragmentation.

The fun thing is though that this Internet thing is quite a bit larger than 
your imaginary network...

Greets,
 Jeroen




Re: IPv6 day and tunnels

2012-06-04 Thread Owen DeLong

On Jun 3, 2012, at 11:20 PM, Jimmy Hess wrote:

 On 6/3/12, Jeroen Massar jer...@unfix.org wrote:
 If one is so stupid to just block ICMP then one should also accept that one
 loses functionality.
 ICMP tends to get blocked by firewalls by default; There are
 legitimate reasons to block ICMP, esp w V6.   Security device
 manufacturers tend to indicate all the  lost functionality  is
 optional functionality  not required for a working device.
 

If you feel the need to block ICMP (I'm not convinced this is an actual need),
then you should do so very selectively in IPv6.

Blocking Packet Too Big messages especially is definitely harmful in IPv6,
and PMTU-D is _NOT_ optional functionality.

Any firewall/security device manufacturer that says it is will not get any
business from me (or anyone else who considers their requirements
properly before purchasing).

 If the people in the IETF would have decided to inline the headers that are
 ICMPv6 into the IPv6 header then there for sure would have been people who
 would have blocked the equivalent of PacketTooBig in there too. As long as
 
 Over reliance on PacketTooBig is a source of the problem;  the idea
 that too large packets should be blindly generated under ordinary
 circumstances, carried many hops, and dropped with an error returned a
 potentially long distance that the sender in each direction is
 expected to see and act upon, at the expense of high latency for both
 peers, during initial connection establishment.
 

Actually, this generally will NOT affect initial connection establishment,
and due to slow start it usually adds only a very small amount of latency,
about 3-5KB into the conversation.
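Owen's slow-start point, sketched numerically (classic doubling slow start; the 2-segment initial window and 1440-byte MSS are assumptions for illustration):

```python
# How much data is on the wire by the time a Packet Too Big detour can
# matter: with slow start, only a few KB have been sent in the first
# round trips, so one extra RTT costs little.

MSS = 1440  # assumed segment size once PMTUD settles

def bytes_after_rounds(rounds: int, initial_window: int = 2) -> int:
    """Total bytes delivered after `rounds` doubling slow-start RTTs."""
    segments = sum(initial_window * 2 ** r for r in range(rounds))
    return segments * MSS

print(bytes_after_rounds(1))  # 2880 - roughly where the PTB exchange lands
print(bytes_after_rounds(2))  # 8640
```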

 Routers don't always know when a packet is too big to reach their next
 hop,  especially in case of Broadcast traffic,  so they don't know to
 return a PacketTooBig error, especially  in the case of L2 tunneling
 PPPoE for example,  there may be a L2 bridge on the network in between
 routers with a lower MRU than either of the router's immediate links,
 eg  because PPP, 802.1p,q + MPLS labels, or other overhead are affixed
 to Ethernet frames,  somewhere on the switched path between routers.

That is a misconfiguration of the routers. Any routers in such a circumstance
need their interface configured for the lower MTU or things are going to break
with or without ICMP Packet Too Big messages because even if you didn't
have the DF bit, the router has no way to know to fragment the packet.

An L2 device should not be fragmenting L3 packets.

 The problem is not that Tunneling is bad;  the problem is  the IP
 protocol has issues.  The protocol should be designed so that there
 will not be issues with tunnelling or different MRU  Ethernet links.

And there are not issues so long as things are configured correctly.
Misconfiguration will cause issues no matter how well the protocol
is designed. The problem you are describing so far is not a problem
with the protocol, it is a problem with misconfigured devices.

 The real solution is for reverse path MTU (MRU) information to be
 discovered between L3 neighbors by L2 probing,  and discovered MRU
 exchanged using NDP, so routers know the lowest MRU on each directly
 connected interface, then for the worst case reduction in reverse path
 MTU to be included in the routing information  passed via L3 routing
 protocols both IGPs and EGPs  to the next hop.

This could compensate for some amount of misconfiguration, but you're
adding a lot of overhead and a whole bunch of layering violations in
order to do it.  I think it would be much easier to just fix the configuration
errors.

 That is, no router should be allowed to enter a route into its
 forwarding table, until the worst case reverse MTU is discovered, to
 reach that network,   with the exception,  that a  device may be
 configured with a default route, and some directly connected networks.

I don't see how this would not cause more problems than you claim it
will solve.

 The need for Too Big  messages is then restricted to nodes connected
 to terminal networks.  And there should be no such thing as packet
 fragmentation.

There should be no such thing as packet fragmentation in the current
protocol. What is needed is for people to simply configure things
correctly and allow PTB messages to pass as designed.

Owen




RE: IPv6 day and tunnels

2012-06-04 Thread Matthew Huff
 An L2 device should not be fragmenting L3 packets.

Layer 2 fragmentation used to be a common thing (20+ years ago) with bridged
topologies like token-ring to Ethernet source-route bridging. Obviously, not
so much anymore (at least I hope not), but it can and does happen.

I think part of the problem is that ISPs, CDNs, hosting companies, etc. have
assumed IPv6 is just IPv4 with longer addresses and haven't spent the time
learning the differences, like the point made earlier that ICMPv6 is a required
protocol for IPv6 to work correctly. MTU issues are an annoyance with IPv4 but
are outright brokenness with IPv6. Knowledge will come, but it may take a bit
of beating over the head for a while.





NOC presentations

2012-06-04 Thread Stefan Liström

Hi all,

In TF-NOC we have been collecting information about NOCs for some time 
now[1]. Most of the NOCs are from research and educational organizations 
and we think it would also be very interesting to get the same kind of 
information from commercial NOCs. I understand that many commercial 
companies might not be able to share this information, but I thought it 
might be worth asking.


If you would like to share information about your NOC please check out 
our presentation template[2] for inspiration and let me know.


Even if you are not able to share information about your NOC the 
information we have gathered will hopefully still be interesting for you.


[1] http://www.terena.org/activities/tf-noc/nocs.html
[2] http://www.terena.org/activities/tf-noc/TF-NOC-flashpresentation-v2.ppt

--
Best regards
Stefan Liström



Re: IPv6 day and tunnels

2012-06-04 Thread Masataka Ohta
Jeroen Massar wrote:

 So IPv6 fixes the fragmentation and MTU issues of IPv4 by how exactly?

 Completely wrongly.
 
 Got a better solution? ;)

IPv4 without PMTUD, of course.

 Because IPv6 requires ICMP packet too big generated against
 multicast, it is designed to cause ICMP implosions, which
 means ISPs must filter ICMP packet too big at least against
 multicast packets and, as distinguishing them from unicast
 ones is not very easy, often against unicast ones.
 
 I do not see the problem that you are seeing, to adress the two
 issues in your slides:
   - for multicast just set your max packetsize to 1280, no
 need for pmtu and thus this implosion

It is the sender of a multicast packet, not you as some ISP,
who sets the max packet size to 1280B or 1500B.

You can do nothing against a sender who consciously (not
necessarily maliciously) sets it to 1500B.

The only protection is not to generate packet too big and
to block packet too big at least against multicast packets.

If you don't want to inspect packets so deeply (beyond the first
64B, for example), packet too big against unicast packets
also get blocked.

That you don't enable multicast in your network does not
mean you have nothing to do with packet too big against
multicast, because you may be on a path of returning ICMPs.
That is, you should still block them.

  You think might happen. The sender controls the packetsize
  anyway and one does not want
  to frag packets for multicast thus 1280 solves all of it.

That's what I said in IETF IPv6 WG more than 10 years ago, but
all the other WG members insisted on having multicast PMTUD,
ignoring the so obvious problem of packet implosions.

Thus, RFC2463 requires:

   Sending a Packet Too Big Message makes an exception to one
   of the rules of when to send an ICMPv6 error message, in that
   unlike other messages, it is sent in response to a packet
   received with an IPv6 multicast destination address, or a
   link-layer multicast or link-layer broadcast address.

They have not yet obsoleted the feature.

So, you should assume some, if not all, of them still insist
on using multicast PMTUD to make multicast packet size larger
than 1280B.

In addition, there should be malicious guys.

   - when doing IPv6 inside IPv6 the outer path has to be
 1280+tunneloverhead, if it is not then

Because PMTUD is not expected to work, you must assume the MTU
of the outer path is 1280B, as is specified ("simply restrict
itself to sending packets no larger than 1280 octets" in
RFC2460).

  you need to use a tunneling protocol that knows how to
  frag and reassemble as is acting as a
  medium with an mtu less than the minimum of 1280

That's my point in my second last slide.

Considering that many inner packets will be just 1280B long,
many packets will be fragmented as a result of the stupid attempt
to make multicast PMTUD work, unless you violate RFC2460
by blindly sending packets a little larger than 1280B.

Masataka Ohta



test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Jason Fesler
I know a lot of people are using / pointing to test-ipv6.com .  The hardware 
picked a bad week to quit sniffing glue.

I'll be working on trying to get it back up today; I need to source hardware.
Also looking at borrowing a VM for the short term.

(speaking only for @test-ipv6.com, not for $employer  - my personal mail 
address is down too).





Re: IPv6 day and tunnels

2012-06-04 Thread Jeroen Massar
On 4 Jun 2012, at 06:36, Masataka Ohta mo...@necom830.hpcl.titech.ac.jp wrote:

 Jeroen Massar wrote:
 
 So IPv6 fixes the fragmentation and MTU issues of IPv4 by how exactly?
 
 Completely wrongly.
 
 Got a better solution? ;)
 
 IPv4 without PMTUD, of course.

We are (afaik) discussing IPv6 in this thread, I assume you typo'd here ;)

 Because IPv6 requires ICMP packet too big generated against
 multicast, it is designed to cause ICMP implosions, which
 means ISPs must filter ICMP packet too big at least against
 multicast packets and, as distinguishing them from unicast
 ones is not very easy, often against unicast ones.
 
 I do not see the problem that you are seeing, to adress the two
 issues in your slides:
  - for multicast just set your max packetsize to 1280, no
need for pmtu and thus this implosion
 
 It is a sender of a multicast packet, not you as some ISP,
 who set max packet size to 1280B or 1500B.

If a customer already miraculously has the rare capability of sending
multicast packets, in the rare case that a network is multicast enabled, then
they will also have been told to use a max packet size of 1280 to avoid any
issues when it is expected that some endpoint might have that max MTU.

I really cannot see the problem with this as multicast networks tend to be
rare and very much closed. Heck, for that matter the m6bone has been pretty
much in a dead state for quite a while already :(

 You can do nothing against a sender who consciously (not
 necessarily maliciously) set it to 1500B.

Of course you can: the first hop into your network can generate a single PtB
and presto, the issue becomes a problem of the sender. As the sender's
intention is likely to reach folks, they will adhere to that advice instead of
just sending packets which get rejected at the first hop.

 The only protection is not to generate packet too big and
 to block packet too big at least against multicast packets.

No need, as above, reject and send PtB and all is fine.

 If you don't want to inspect packets so deeply (beyond first
 64B, for example), packet too big against unicast packets
 are also blocked.

Routing (forwarding packets) is in no way inspection.

 That you don't enable multicast in your network does not
 mean you have nothing to do with packet too big against
 multicast, because you may be on a path of returning ICMPs.
 That is, you should still block them.

Blocking returning ICMPv6 PtB, where you are looking at the original packet
which is echoed inside the data of the ICMPv6 packet, would indeed require one
to look quite deep; but if one is so determined to firewall them, well, then
you would have to indeed.

I do not see a reason to do so though. Please note that the src/dst of the 
packet itself is unicast even if the PtB will be for a multicast packet.

I guess one should not be so scared of ICMP, there are easier ways to overload 
a network. Proper BCP38 goes a long way.

 You think might happen. The sender controls the packetsize
 anyway and one does not want
 to frag packets for multicast thus 1280 solves all of it.
 
 That's what I said in IETF IPv6 WG more than 10 years ago, but
 all the other WG members insisted on having multicast PMTUD,
 ignoring the so obvious problem of packet implosions.

They did not ignore you; they realized that not everybody has the same
requirements. With the current spec you can go your way and break pMTU,
requiring manual 1280 settings, while other networks can use pMTU in their
networks. Everybody wins.


 So, you should assume some, if not all, of them still insist
 on using multicast PMTUD to make multicast packet size larger
 than 1280B.

As networks become more and more jumbo frame enabled, what exactly is the 
problem with this?

 In addition, there should be malicious guys.
 
  - when doing IPv6 inside IPv6 the outer path has to be
1280+tunneloverhead, if it is not then
 
 Because PMTUD is not expected to work,

You assume it does not work, but as long as per the spec people do not filter 
it, it works.

 you must assume MTU
 of outer path is 1280B, as is specified simply restrict
 itself to sending packets no larger than 1280 octets in
 RFC2460.

While for multicast enabled networks that might hit the minimum MTU this might 
be true-ish, it does not make it universally true.

 you need to use a tunneling protocol that knows how to
 frag and reassemble as is acting as a
 medium with an mtu less than the minimum of 1280
 
 That's my point in my second last slide.

Then you word it wrongly. It is not the problem of IPv6 that you chose to
layer it inside so many stacks that the underlying medium cannot transport
packets bigger than 1280; that medium has to take care of it.

 Considering that many inner packet will be just 1280B long,
 many packets will be fragmented, as a result of stupid attempt
 to make multicast PMTUD work, unless you violate RFC2460
 to blindly send packets a little larger than 1280B.

Your 

Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Jeroen Massar
On 4 Jun 2012, at 06:50, Jason Fesler jfes...@yahoo-inc.com wrote:

 I know a lot of people are using / pointing to test-ipv6.com .  The hardware 
 picked a bad week to quit sniffing glue.

You got a bunch of mirrors for it, right? Should not be too tricky to get
someone to let theirs act as the real thing for a bit.

Greets,
 Jeroen




Re: IPv6 day and tunnels

2012-06-04 Thread Joel Maslak
On Jun 4, 2012, at 1:01 AM, Owen DeLong o...@delong.com wrote:

 Any firewall/security device manufacturer that says it is will not get any
 business from me (or anyone else who considers their requirements
 properly before purchasing).

Unfortunately many technology people seem to have the idea, "If I don't
understand it, it's a hacker," when it comes to network traffic. And often they
don't understand ICMP (or at least PMTU). So anything not understood gets
blocked. Then there is the Law of HTTP...

The Law of HTTP is pretty simple: Anything that isn't required for *ALL* HTTP 
connections on day one of protocol implementation will never be able to be used 
universally.

This includes, sadly, PMTU.  If reaching all possible endpoints is important to 
your application, you better do it via HTTP and better not require PMTU.  It's 
also why protocols typically can't be extended today at any layer other than 
the HTTP layer.

As for the IETF trying to not have people reset DF...good luck with that 
one...besides, I think there is more broken ICMP handling than there are paths 
that would allow a segment to bounce around for 120 seconds...



Re: IPv6 day and tunnels

2012-06-04 Thread Jared Mauch

On Jun 4, 2012, at 10:07 AM, Jeroen Massar wrote:

 On 4 Jun 2012, at 06:36, Masataka Ohta mo...@necom830.hpcl.titech.ac.jp 
 wrote:
 
 Jeroen Massar wrote:
 
 So IPv6 fixes the fragmentation and MTU issues of IPv4 by how exactly?
 
 Completely wrongly.
 
 Got a better solution? ;)
 
 IPv4 without PMTUD, of course.
 
 We are (afaik) discussing IPv6 in this thread, I assume you typo'd here ;)

He is comparing and contrasting with the behavior of IPv4 vs IPv6.

If your PMTU is broken for v4 because people do wholesale blocks of ICMP, there 
is a chance they will have the same problem with wholesale blocks of ICMPv6 
packets.

The interesting thing about IPv6 is that it's just close enough to IPv4 in
many ways that people don't realize all the technical details. People are
still getting it wrong with IPv4 today; they will repeat the same mistakes in
IPv6 as well.

-

I've observed that if you avoid providers that rely upon tunnels, you can
sometimes observe significant performance improvements in IPv6 bit rates.
Those that are tunneling are likely to take a software path at one end,
whereas native (or native-like/6PE) tends not to see this behavior. Those
doing native tend to have more experience debugging it as well, as they have
already committed business resources to it.

- Jared


RE: IPv6 day and tunnels

2012-06-04 Thread Templin, Fred L
Hi,

There was quite a bit of discussion on IPv6 PMTUD on the v6ops
list within the past couple of weeks. Studies have shown
that PTB messages can be dropped due to filtering even for
ICMPv6. There was also concern about the one (or more) RTTs
required for PMTUD to work, and about dealing with bogus
PTB messages.

The concerns were explicitly linked to IPv6 tunnels, so
I drafted a proposed solution:

https://datatracker.ietf.org/doc/draft-generic-v6ops-tunmtu/

In this proposal the tunnel ingress performs the following
treatment of packets of various sizes:

1) For IPv6 packets no larger than 1280, admit the packet
   into the tunnel w/o fragmentation. Assumption is that
   all IPv6 links have to support a 1280 MinMTU, so the
   packet will get through.

2) For IPv6 packets larger than 1500, admit the packet
   into the tunnel w/o fragmentation. Assumption is that
   the sender would only send a 1501+ packet if it has
   some way of policing the PMTU on its own, e.g. through
   the use of RFC 4821.

3) For IPv6 packets between 1281-1500, break the packet
   into two (roughly) equal-sized pieces and admit each
   piece into the tunnel. (In other words, intentionally
   violate the IPv6 deprecation of router fragmentation.)
   Assumption is that the final destination can reassemble
   at least 1500, and that the 32-bit Identification value
   inserted by the tunnel provides sufficient assurance
   against reassembly mis-associations.
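The three clauses can be sketched as ingress logic (my paraphrase of the draft's rules, not code from it; fragment-header overhead is ignored for simplicity):

```python
# Tunnel-ingress treatment per the proposal: sizes are inner IPv6 packet
# lengths in bytes; clause 3 deliberately splits mid-size packets in two.

def ingress_pieces(inner_len: int) -> list:
    """Sizes of the piece(s) admitted into the tunnel for one packet."""
    if inner_len <= 1280:
        return [inner_len]          # clause 1: MinMTU guarantees delivery
    if inner_len > 1500:
        return [inner_len]          # clause 2: sender polices PMTU (RFC 4821)
    first = (inner_len + 1) // 2    # clause 3: two roughly equal pieces
    return [first, inner_len - first]

print(ingress_pieces(1280))  # [1280]
print(ingress_pieces(1400))  # [700, 700]
print(ingress_pieces(1401))  # [701, 700]
print(ingress_pieces(9000))  # [9000]
```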

I presume no one here would object to clauses 1) and 2).
Clause 3) is obviously a bit more controversial - but,
what harm would it cause from an operational standpoint?

Thanks - Fred
fred.l.temp...@boeing.com



Re: IPv6 day and tunnels

2012-06-04 Thread Joe Maimon



Owen DeLong wrote:



There should be no such thing as packet fragmentation in the current
protocol. What is needed is for people to simply configure things
correctly and allow PTB messages to pass as designed.

Owen



You are absolutely correct. Are you talking about IPv4 or IPv6?


Joe



Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Jason Fesler

On Jun 4, 2012, at 7:09 AM, Jeroen Massar wrote:

 You got a bunch of mirrors for it, right? Should not be too tricky to get
 someone to let theirs act as the real thing for a bit.

I've got redirects up now to spread the load across VMs.   For the next couple 
of days, I don't expect a single VM to handle the load.

Thanks to all who've sent me a response; and thanks to Host Virtual and to 
Network Design GmbH, for taking the immediate load.

Once we're stable, and I get my *official* day job requirements met for World
IPv6 Launch, I'll come back to getting the original gear replaced. I've got a
couple hardware offers in (Alex, Mark, thank you), and this might just be the
reason to flat-out refresh the hardware if ixSystems has something suitable
already built.


-jason





WaPo: SHODAN search engine exposes insecure SCADA

2012-06-04 Thread Jay Ashworth
... among other things.

  
http://www.washingtonpost.com/investigations/cyber-search-engine-exposes-vulnerabilities/2012/06/03/gJQAIK9KCV_story.html

If the gov is worried about 'cyber'-attacks on critical infrastructure,
at least now we know what tool they can use to find the low hanging fruit.

As I asked once before (:-), any black-sunglasses types hanging around?

Don't answer; just go to work.

Cheers,
-- jra
-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth  Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274



Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Jeroen Massar
On 2012-06-04 08:13, Jason Fesler wrote:
 
 On Jun 4, 2012, at 7:09 AM, Jeroen Massar wrote:
 
 You got a bunch of mirrors for it, right? Should not be too tricky to
 get someone to let theirs act as the real thing for a bit.
 
 I've got redirects up now to spread the load across VMs.   For the
 next couple of days, I don't expect a single VM to handle the load.

I am actually not expecting that much of the hype to come out, just like
last year it will easily be forgotten unless somebody is able to spin
that PR engine really really really hard.

 Thanks to all who've sent me a response; and thanks to Host Virtual
 and to Network Design GmbH, for taking the immediate load.
 
 Once we're stable, and I get my *official* day job requirements met
 for World IPV6 Launch, Ill come back to getting the original gear
 replaced.  I've got a couple hardware offers in (Alex, Mark, thank
 you), and this might just be the reason to flat out refresh the
 hardware if ixSystems has something suitable already built.

Awesome!

Greets,
 Jeroen



Re: IPv6 day and tunnels

2012-06-04 Thread Masataka Ohta
Jeroen Massar wrote:

 IPv4 without PMTUD, of course.
 
 We are (afaik) discussing IPv6 in this thread,

That's your problem of insisting on a very narrow solution
space, which is why you can find no solution and are
trying to ignore the problem.

 It is a sender of a multicast packet, not you as some ISP,
 who set max packet size to 1280B or 1500B.
 
 If a customer already miraculously has the rare capability
 of sending multicast packets in the rare case that a
 network is multicast enabled

That is the case IPv6 WG insisted on.

 then they will also have been told to use a max packet size
 of 1280 to avoid any issues when it is expected that some
 endpoint might have that max MTU.

Those who insisted on the case won't tell so nor do so.

 I really cannot see the problem with this

because you insist on IPv6.

 You can do nothing against a sender who consciously (not
 necessarily maliciously) set it to 1500B.
 
 Of course you can, the first hop into your network can
 generate a single PtB

I can, but I can't expect others will do so.

I, instead, know those who insisted on the case won't.

 No need, as above, reject and send PtB and all is fine.

As I wrote:

 That you don't enable multicast in your network does not
 mean you have nothing to do with packet too big against
 multicast, because you may be on a path of returning ICMPs.
 That is, you should still block them.

you are wrong.

 If you don't want to inspect packets so deeply (beyond first
 64B, for example), packet too big against unicast packets
 are also blocked.
 
 Routing (forwarding packets) is in no way an exception.

What?

 Blocking returning ICMPv6 PtB where you are looking at the
 original packet which is echoed inside the data of the
 ICMPv6 packet would indeed require one to look quite deep,
 but if one is so determined to firewall them well, then
 you would have to indeed.

As I already filter packets required by RFC2463, why, do you
think, do I have to bother only to reduce performance?

 I do not see a reason to do so though. Please note that the
 src/dst of the packet itself is unicast even if the PtB
 will be for a multicast packet.

How can you ignore the implosion of unicast ICMP?

 They did not ignore you, they realized that not everybody
 has the same requirements. With the current spec you can
 go your way and break pMTU requiring manual 1280 settings,
 while other networks can use pMTU in their networks. Everybody wins.

What? Their networks? The Internet is interconnected.

 So, you should assume some, if not all, of them still insist
 on using multicast PMTUD to make multicast packet size larger
 than 1280B.
 
 As networks become more and more jumbo frame enabled,
 what exactly is the problem with this?

That makes things worse.

It will encourage people to try multicast with jumbo frames.

 Because PMTUD is not expected to work,
 
 You assume it does not work, but as long as per the spec
 people do not filter it, it works.

Such operation leaves the network vulnerable and should be corrected.

 you must assume MTU
 of outer path is 1280B, as is specified simply restrict
 itself to sending packets no larger than 1280 octets in
 RFC2460.
 
 While for multicast enabled networks that might hit the 
 minimum MTU this might be true-ish, it does not make
 it universally true.

The Internet is interconnected.

  you need to use a tunneling protocol that knows how to
  frag and reassemble as is acting as a
  medium with an mtu less than the minimum of 1280

 That's my point in my second last slide.
 
 Then you word it wrongly. It is not the problem of IPv6

You should read RFC2473, an example in my slide.

 Please fix your network instead, kthx.

It is a problem of RFC2463 and networks of people who insist
on the current RFC2463 for multicast PMTUD.

If you want the problem to disappear, change RFC2463.

Masataka Ohta



Re: IPv6 day and tunnels

2012-06-04 Thread Brett Frankenberger
On Mon, Jun 04, 2012 at 07:39:58AM -0700, Templin, Fred L wrote:

 https://datatracker.ietf.org/doc/draft-generic-v6ops-tunmtu/
 
 3) For IPv6 packets between 1281-1500, break the packet
into two (roughly) equal-sized pieces and admit each
piece into the tunnel. (In other words, intentionally
violate the IPv6 deprecation of router fragmentation.)
Assumption is that the final destination can reassemble
at least 1500, and that the 32-bit Identification value
inserted by the tunnel provides sufficient assurance
against reassembly mis-associations.

Fragmenting the outer packet, rather than the inner packet, gets around
the problem of router fragmentation of packets.  The outer packet is a
new packet and there's nothing wrong with the originator of that packet
fragmenting it.

Of course, that forces reassembly on the other tunnel endpoint, rather
than on the ultimate end system, which might be problematic with some
endpoints and traffic volumes.
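A toy numeric sketch (my own illustration, assuming a 20-byte IPv4 outer header with no options and a 1500-byte path MTU) of why the 1520-byte outer packet ends up as two fragments:

```python
# Toy sketch of outer-packet fragmentation for an IPv4-in-IPv4 tunnel.
# Assumptions (mine, for illustration): 20-byte outer header, no options,
# 1500-byte path MTU. The encapsulator originates the outer packet, so
# fragmenting it does not violate the end host's DF bit on the inner packet.

OUTER_HDR = 20   # assumed IPv4 encapsulation header size
PATH_MTU = 1500

def outer_fragments(inner_len, mtu=PATH_MTU, hdr=OUTER_HDR):
    """Return the outer-fragment payload sizes for an encapsulated packet.

    IPv4 fragment offsets are in 8-byte units, so every fragment payload
    except the last must be a multiple of 8 bytes.
    """
    max_payload = (mtu - hdr) // 8 * 8  # largest 8-byte-aligned payload
    frags = []
    off = 0
    while off < inner_len:
        take = min(max_payload, inner_len - off)
        frags.append(take)
        off += take
    return frags

# A 1500-byte inner packet becomes a 1520-byte outer packet, which no
# longer fits the 1500-byte link and must be split by the encapsulator.
print(outer_fragments(1500))  # [1480, 20] -> outer packets of 1500 and 40 bytes
```

The second fragment carries only 20 bytes of payload, which is part of why the tunnel egress, not the end host, pays the reassembly cost.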

(With IPv4 in IPv4 tunnels, this is what I've always done.  1500 byte
MTU on the tunnel, fragment the outer packet, let the other end of the
tunnel do the reassembly.  Not providing 1500 byte end-to-end (at least
within the network I control) for IPv4 has proven to consume lots of
troubleshooting time; fragmenting the inner packet doesn't work unless
you ignore the DF bit that is typically set by TCP endpoints who want
to do PMTU discovery.)
 
 I presume no one here would object to clauses 1) and 2).
 Clause 3) is obviously a bit more controversial - but,
 what harm would it cause from an operational standpoint?

 -- Brett



RE: IPv6 day and tunnels

2012-06-04 Thread Templin, Fred L
Hi Brett,

 -Original Message-
 From: Brett Frankenberger [mailto:rbf+na...@panix.com]
 Sent: Monday, June 04, 2012 9:35 AM
 To: Templin, Fred L
 Cc: nanog@nanog.org
 Subject: Re: IPv6 day and tunnels
 
 On Mon, Jun 04, 2012 at 07:39:58AM -0700, Templin, Fred L wrote:
 
  https://datatracker.ietf.org/doc/draft-generic-v6ops-tunmtu/
 
  3) For IPv6 packets between 1281-1500, break the packet
 into two (roughly) equal-sized pieces and admit each
 piece into the tunnel. (In other words, intentionally
 violate the IPv6 deprecation of router fragmentation.)
 Assumption is that the final destination can reassemble
 at least 1500, and that the 32-bit Identification value
 inserted by the tunnel provides sufficient assurance
 against reassembly mis-associations.
 
 Fragmenting the outer packet, rather than the inner packet, gets around
 the problem of router fragmentation of packets.  The outer packet is a
 new packet and there's nothing wrong with the originator of that packet
 fragmenting it.
 
 Of course, that forces reassembly on the other tunnel endpoint, rather
 than on the ultimate end system, which might be problematic with some
 endpoints and traffic volumes.

There are a number of issues with fragmenting the outer packet.
First, as you say, fragmenting the outer packet requires the
tunnel egress to perform reassembly. This may be difficult for
tunnel egresses that are configured on core routers. Also, when
IPv4 is used as the outer encapsulation layer, the 16-bit ID
field can result in reassembly errors at high data rates
[RFC4963]. Additionally, encapsulating a 1500 inner packet in
an outer IP header results in a 1500+ outer packet - and the
ingress has no way of knowing whether the egress is capable
of reassembling larger than 1500.
 
 (With IPv4 in IPv4 tunnels, this is what I've always done.  1500 byte
 MTU on the tunnel, fragment the outer packet, let the other end of the
 tunnel do the reassembly.  Not providing 1500 byte end-to-end (at least
 with in the network I control) for IPv4 has proven to consume lots of
 troubleshooting time; fragmenting the inner packet doesn't work unless
 you ignore the DF bit that is typically set by TCP endpoints who want
 to do PMTU discovery.)

Ignoring the (explicit) DF bit for IPv4 and ignoring the
(implicit) DF bit for IPv6 is what I am suggesting.

Thanks - Fred
fred.l.temp...@boeing.com

  I presume no one here would object to clauses 1) and 2).
  Clause 3) is obviously a bit more controversial - but,
  what harm would it cause from an operational standpoint?
 
  -- Brett



Re: IPv6 day and tunnels

2012-06-04 Thread Cameron Byrne
On Sun, Jun 3, 2012 at 11:20 PM, Jimmy Hess mysi...@gmail.com wrote:
 On 6/3/12, Jeroen Massar jer...@unfix.org wrote:
 If one is so stupid to just block ICMP then one should also accept that one
 loses functionality.
 ICMP tends to get blocked by firewalls by default; There are
 legitimate reasons to block ICMP, esp w V6.   Security device
 manufacturers tend to indicate all the "lost functionality" is
 "optional functionality" not required for a working device.


In case security policy folks need a reference on what ICMPv6
functionality is required for IPv6 to work correctly, please reference
http://www.ietf.org/rfc/rfc4890.txt
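For quick reference, the transit-filtering core of that RFC can be sketched as a small table (my paraphrase of section 4.3.1; per-code exceptions are omitted, so see the RFC for the authoritative list):

```python
# Paraphrase of RFC 4890 section 4.3.1: ICMPv6 types that transit
# firewalls must NOT drop, or PMTUD and basic diagnostics break.
# Per-code subtleties (e.g. Time Exceeded codes) are omitted here.
MUST_NOT_DROP = {
    1: "Destination Unreachable",
    2: "Packet Too Big",        # the PtB messages discussed in this thread
    3: "Time Exceeded",
    4: "Parameter Problem",
    128: "Echo Request",
    129: "Echo Reply",
}

def transit_drop_ok(icmpv6_type):
    """True if dropping this type in transit is safe per this reading."""
    return icmpv6_type not in MUST_NOT_DROP

print(transit_drop_ok(2))    # False: never drop Packet Too Big in transit
```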

CB



Re: Questions about anycasting setup

2012-06-04 Thread Mehmet Akcin

On Jun 3, 2012, at 2:11 PM, Bill Woodcock wrote:

 On Jun 3, 2012, at 12:35 PM, Anurag Bhatia wrote:
 I tried doing anycasting with 3 nodes, and it seems like it didn't work well
 at all. It seems like ISPs prefer their own or their customer route (which
 is our transit provider) and there is almost no short/local route effect.
 
 Correct.  That's why you need to use the same transit providers at each 
 location.

It could be a nightmare to try to balance the traffic when you are using 
different providers. 

You can go ahead and try using path prepending but you will always find some 
strangeness going on regardless.

As Bill mentioned, using the same transit will help. Especially if you use a 
transit provider that has some pre-defined communities which allow you to 
control advertisement automatically (even geographically) and to path-prepend 
simply by sending communities, you will save lots of time.

Mehmet





bgp best practice question

2012-06-04 Thread jon Heise
I need to make one of our data centers internet accessible. I plan to advertise 
a /24 out of our existing /22 network block at our new site. My question is for 
our main datacenter: is it a better idea to continue to advertise the full /22, 
or to advertise the remaining /23 and /24 networks?

- Jon Heise 


Re: Wacky Weekend: The '.secure' gTLD

2012-06-04 Thread Eric Brunner-Williams
On 6/4/12 12:30 AM, Keith Medcalf wrote:
 The greatest advantage of .SECURE is that it will help ensure that all the 
 high-value targets are easy to find.

one of the rationalizations for imposing a dnssec mandatory to
implement requirement (by icann staff driven by dnssec evangelists) is
that all slds benefit equally from the semantic.

restated, the value of protecting some bank.tld is indistinguishable
from protecting some junk.tld.

re-restated, new tlds will offer no economic, or political,
incentives to attack mitigated by dnssec.

i differed from staff-and-dnssec-evangelists, and obviously lost.

see also all possible locations for registries already have native v6,
or can tunnel via avian carrier, another position of staff driven by
ipv6 evangelists, who couldn't defer the v6 mandatory to implement
requirement until availability was no longer hypothetical, or
scheduled, for which difference again availed naught.

as a marketing message, sld use of .secure as a tld may be sufficient
to ensure that a sufficient density of high-value targets are indeed
slds of that tld. staff has not discovered a stability and security
requirement which is contra-indicated by such a common fate / point of
failure.

note also that the requirements for new tlds are significantly greater
than for the existing set, so whatever the .com operator does, it is
not driven by the contract compliance regime which contains either the
dnssec or v6 mandatory upon delegation bogies.

-e

p.s. the usual -sec and -6 evangelicals can ... assert their inerrant
correctness as a matter of faith -- faith based policy seems to be the
norm.



Re: bgp best practice question

2012-06-04 Thread Chris Grundemann
Depends on a few things, but the main questions are probably:

Are the data-centers connected on the backside (VPN, etc. - could the
new dc failover through the main dc)?
Yes - /22
Will that /24 ever be used in the main datacenter?
Yes - /22

$0.02
~Chris


On Mon, Jun 4, 2012 at 12:36 PM, jon Heise j...@smugmug.com wrote:
 I need to make one of our data centers internet accessible, i plan to 
 advertise a /24 out of our existing /22 network block at our new site. My 
 question is for our main datacenter, is it a better idea to continue to 
 advertise the full /22 or advertise the remaining /23 and /24 networks ?

 - Jon Heise



-- 
@ChrisGrundemann
http://chrisgrundemann.com



Cat Humor

2012-06-04 Thread Eric Wieling

I'm not looking for help, just thought this was hilarious.

Mark called in from XO he stated a tech was on site and found out that client 
used a CAT 6 cable instead of a CAT 5 cable and XO doesn't have a connecting 
piece for the CAT 6 cable. he stated if client gets a wire/cable guy out there 
to fix issue, XO can send out a tech to make sure they hook up everything 
correctly.




Re: bgp best practice question

2012-06-04 Thread Saku Ytti
On (2012-06-04 11:36 -0700), jon Heise wrote:

 I need to make one of our data centers internet accessible, i plan to 
 advertise a /24 out of our existing /22 network block at our new site. My 
 question is for our main datacenter, is it a better idea to continue to 
 advertise the full /22 or advertise the remaining /23 and /24 networks ?

/22 + /24. /23 + /24 + /24 makes no sense. There is no reason to fear getting
extra traffic, as traffic will plummet once hosts stop responding.
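The advice above (keep advertising the /22 and add the new site's /24) relies on longest-prefix match: the more-specific /24 attracts the new site's traffic while the covering /22 still backstops everything. A small sketch with Python's `ipaddress` module, using illustrative RFC 1918 prefixes rather than the poster's real block:

```python
# Sketch of why "advertise the /22 plus the more-specific /24" works.
# Prefixes are illustrative (RFC 1918), not the poster's actual space.
import ipaddress

aggregate = ipaddress.ip_network("10.0.0.0/22")   # main datacenter
new_site = ipaddress.ip_network("10.0.2.0/24")    # new datacenter

assert new_site.subnet_of(aggregate)  # the /24 is covered by the /22

def best_match(addr, routes):
    """Longest-prefix match over a list of advertised networks."""
    addr = ipaddress.ip_address(addr)
    candidates = [r for r in routes if addr in r]
    return max(candidates, key=lambda r: r.prefixlen)

routes = [aggregate, new_site]
print(best_match("10.0.2.5", routes))   # 10.0.2.0/24 -> new datacenter
print(best_match("10.0.1.5", routes))   # 10.0.0.0/22 -> main datacenter
```

If the /24 ever stops being advertised, the /22 silently absorbs its traffic, which is exactly the fallback behavior the answer counts on.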

-- 
  ++ytti



Re: IPv6 day and tunnels

2012-06-04 Thread Masataka Ohta
Templin, Fred L wrote:

 Also, when
 IPv4 is used as the outer encapsulation layer, the 16-bit ID
 field can result in reassembly errors at high data rates
 [RFC4963].

As your proposal, too, gives up on having unique IDs, does that
matter?

Note that, with your draft, a route change between two
tunnels with same C may cause block corruption.

 Additionally, encapsulating a 1500 inner packet in
 an outer IP header results in a 1500+ outer packet - and the
 ingress has no way of knowing whether the egress is capable
 of reassembling larger than 1500.

Operators are responsible for having tunnel endpoints with
sufficient capabilities.

 
 (With IPv4 in IPv4 tunnels, this is what I've always done.  1500 byte
 MTU on the tunnel, fragment the outer packet, let the other end of the
 tunnel do the reassembly.  Not providing 1500 byte end-to-end (at least
 with in the network I control) for IPv4 has proven to consume lots of
 troubleshooting time; fragmenting the inner packet doesn't work unless
 you ignore the DF bit that is typically set by TCP endpoints who want
 to do PMTU discovery.)
 
 Ignoring the (explicit) DF bit for IPv4 and ignoring the
 (implicit) DF bit for IPv6 is what I am suggesting.
 
 Thanks - Fred
 fred.l.temp...@boeing.com
 
 I presume no one here would object to clauses 1) and 2).
 Clause 3) is obviously a bit more controversial - but,
 what harm would it cause from an operational standpoint?

   -- Brett
 
 
 




RE: IPv6 day and tunnels

2012-06-04 Thread Templin, Fred L


 -Original Message-
 From: Masataka Ohta [mailto:mo...@necom830.hpcl.titech.ac.jp]
 Sent: Monday, June 04, 2012 12:06 PM
 To: nanog@nanog.org
 Subject: Re: IPv6 day and tunnels
 
 Templin, Fred L wrote:
 
  Also, when
  IPv4 is used as the outer encapsulation layer, the 16-bit ID
  field can result in reassembly errors at high data rates
  [RFC4963].
 
 As your proposal, too, gives up to have unique IDs, does that
 matter?

This is taken care of by rate limiting at the tunnel
ingress. For IPv4-in-(foo) tunnels, rate limit is 11Mbps
which may be a bit limiting for some applications. For
IPv6-in-(foo) tunnels, rate limit is 733Gbps which
should be acceptable for most applications.
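The 11 Mbps and 733 Gbps figures can be reproduced with a back-of-envelope calculation: to avoid Identification reuse while earlier fragments may still be alive, send at most (ID space × packet size) per maximum fragment lifetime. The 1280-byte packet size and 60-second reassembly timeout below are my assumptions, chosen because they reproduce the quoted numbers:

```python
# Back-of-envelope reproduction of the quoted ID-wrap rate limits.
# Assumptions (mine): 1280-byte packets, 60-second fragment lifetime.
PKT_BYTES = 1280
FRAG_LIFETIME_S = 60

def id_safe_rate_bps(id_bits):
    """Max send rate before the Identification field wraps within
    the fragment lifetime (RFC 4963-style reasoning)."""
    ids = 2 ** id_bits
    return ids * PKT_BYTES * 8 / FRAG_LIFETIME_S

print(f"IPv4 16-bit ID: {id_safe_rate_bps(16) / 1e6:.1f} Mbps")   # ~11.2 Mbps
print(f"IPv6 32-bit ID: {id_safe_rate_bps(32) / 1e9:.0f} Gbps")   # ~733 Gbps
```

Larger packets or a shorter assumed lifetime raise the limit proportionally, which is why such figures are always quoted with caveats.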

 Note that, with your draft, a route change between two
 tunnels with same C may cause block corruption.

There are several built-in mitigations for this. First,
the tunnel ingress does not assign Identification values
sequentially but rather skips around to avoid synchronizing
with some other node that is sending fragments to the same
(src,dst) pair. Secondly, the ingress chooses random fragment
sizes for the A and B portions of the packet so that the A
portion of packet 1 does not match up properly with the B
portion of packet 2 and hence will be dropped. Finally, even
if the A portion of packet 1 somehow matches up with the B
portion of packet 2 the Internet checksum provides an
additional line of defense.
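As a sketch only (the parameters are hypothetical, not taken from the draft), the first two mitigations — a randomly seeded, randomly skipping Identification and a randomized A/B split point — might look like:

```python
# Illustrative sketch of the mitigations described above; skip range,
# split jitter, and the tuple layout are hypothetical, not the draft's.
import random

class TunnelIngress:
    def __init__(self):
        # random initial 32-bit Identification, independent per ingress
        self.ident = random.getrandbits(32)

    def admit(self, packet: bytes):
        """Split one too-big inner packet into two tunnel fragments.

        Fragment-offset alignment rules are ignored in this sketch.
        """
        # skip around rather than increment sequentially
        self.ident = (self.ident + random.randrange(1, 256)) % 2**32
        # random split near the middle, so the A piece of one packet
        # does not line up with the B piece of another
        mid = len(packet) // 2
        split = random.randrange(mid - 64, mid + 64)
        a, b = packet[:split], packet[split:]
        return (self.ident, 0, a), (self.ident, split, b)

ingress = TunnelIngress()
f1, f2 = ingress.admit(bytes(1400))
assert len(f1[2]) + len(f2[2]) == 1400  # pieces reassemble to the packet
assert f1[0] == f2[0]                   # both carry the same 32-bit ID
```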

  Additionally, encapsulating a 1500 inner packet in
  an outer IP header results in a 1500+ outer packet - and the
  ingress has no way of knowing whether the egress is capable
  of reassembling larger than 1500.
 
 Operators are responsible to have tunnel end points with
 sufficient capabilities.

It is recommended that IPv4 nodes be able to reassemble
as much as their connected interface MTUs. In the vast
majority of cases that means that the nodes should be
able to reassemble 1500. But, there is no assurance
of anything more!

Thanks - Fred
fred.l.temp...@boeing.com 

  (With IPv4 in IPv4 tunnels, this is what I've always done.  1500 byte
  MTU on the tunnel, fragment the outer packet, let the other end of the
  tunnel do the reassembly.  Not providing 1500 byte end-to-end (at least
  with in the network I control) for IPv4 has proven to consume lots of
  troubleshooting time; fragmenting the inner packet doesn't work unless
  you ignore the DF bit that is typically set by TCP endpoints who want
  to do PMTU discovery.)
 
  Ignoring the (explicit) DF bit for IPv4 and ignoring the
  (implicit) DF bit for IPv6 is what I am suggesting.
 
  Thanks - Fred
  fred.l.temp...@boeing.com
 
  I presume no one here would object to clauses 1) and 2).
  Clause 3) is obviously a bit more controversial - but,
  what harm would it cause from an operational standpoint?
 
-- Brett
 
 
 
 




Re: Wacky Weekend: The '.secure' gTLD

2012-06-04 Thread Andrew Sullivan
On Mon, Jun 04, 2012 at 02:49:37PM -0400, Eric Brunner-Williams wrote:
 
 one of the rationalizations for imposing a dnssec mandatory to
 implement requirement (by icann staff driven by dnssec evangelists) is

Well, I note that at least the .secure promoters haven't decided it's
a good idea:

; <<>> DiG 9.7.3-P3 <<>> @NS15.IXWEBHOSTING.COM -t DNSKEY dot-secure.co +dnssec 
+norec +noall +comment
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27872
;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

Best,

A


-- 
Andrew Sullivan
Dyn Labs
asulli...@dyn.com



Re: IPv6 day and tunnels

2012-06-04 Thread Masataka Ohta
Templin, Fred L wrote:

 As your proposal, too, gives up to have unique IDs, does that
 matter?
 
 This is taken care of by rate limiting at the tunnel

No, I'm talking about:

   Note that a possible conflict exists when IP fragmentation has
   already been performed by a source host before the fragments arrive
   at the tunnel ingress.

 Note that, with your draft, a route change between two
 tunnels with same C may cause block corruption.
 
 There are several built-in mitigations for this. First,
 the tunnel ingress does not assign Identification values
 sequentially but rather skips around to avoid synchronizing
 with some other node that is sending fragments to the same

I'm talking about two tunnels with the same skip value.

 Secondly, the ingress chooses random fragment
 sizes for the A and B portions of the packet so that the A
 portion of packet 1 does not match up properly with the B
 portion of packet 2 and hence will be dropped.

You can do so with outer fragments, too. Moreover, it does not
have to be random but regular, which effectively extends the ID
length.

 Finally, even
 if the A portion of packet 1 somehow matches up with the B
 portion of packet 2 the Internet checksum provides an
 additional line of defense.

Thus, don't insist on having unique IDs so much.

 It is recommended that IPv4 nodes be able to reassemble
 as much as their connected interface MTUs. In the vast
 majority of cases that means that the nodes should be
 able to reassemble 1500. But, there is no assurance
 of anything more!

I'm talking not about the protocol recommendation but about
proper operation.

Masataka Ohta



Re: Wacky Weekend: The '.secure' gTLD

2012-06-04 Thread Eric Brunner-Williams
On 6/4/12 3:28 PM, Andrew Sullivan wrote:
 Well, I note that at least the .secure promoters haven't decided it's
 a good idea:

the _known_ .secure-and-all-confusingly-similar-labels promoters.

the reveal is weeks away, followed by the joys of contention set
formation.

there may be more than one .secure application, and who knows, perhaps
a .sec in the bag, or a .cure, or a .seeker, or .sequre, or ...

however, yeah, the requirement bites at contract / delegation time, so
about a year in the future.

-e



Re: IPv6 day and tunnels

2012-06-04 Thread Jeroen Massar
On 2012-06-04 07:31, Jared Mauch wrote:
 
 On Jun 4, 2012, at 10:07 AM, Jeroen Massar wrote:
 
 On 4 Jun 2012, at 06:36, Masataka Ohta
 mo...@necom830.hpcl.titech.ac.jp wrote:
 
 Jeroen Massar wrote:
 
 So IPv6 fixes the fragmentation and MTU issues of IPv4 by
 how exactly?
 
 Completely wrongly.
 
 Got a better solution? ;)
 
 IPv4 without PMTUD, of course.
 
 We are (afaik) discussing IPv6 in this thread, I assume you typo'd
 here ;)
 
 He is comparing  contrasting with the behavior of IPv4 v IPv6.
 
 If your PMTU is broken for v4 because people do wholesale blocks of
 ICMP, there is a chance they will have the same problem with
 wholesale blocks of ICMPv6 packets.

Yep, people who act stupid will remain stupid...

 The interesting thing about IPv6 is it's just close enough to IPv4
 in many ways that people don't realize all the technical details.
 People are still getting it wrong with IPv4 today, they will repeat
 their same mistakes in IPv6 as well.

IMHO they should not have to need to know about technical details.

But if one is configuring firewalls one should know what one is blocking
and that things might break. If one does block PtB one should realize
that one is breaking connectivity in some cases and that that is one's
own problem to resolve, not other people's. There are various 'secure
firewall' examples for people who are unable to think for themselves and
figure out what kind of firewalling is appropriate for their environment.


 I've observed that if you avoid providers that rely upon tunnels, you
 can sometimes observe significant performance improvements in IPv6
 bitrates.  Those that are tunneling are likely to take a software
 path at one end, whereas native (or native-like/6PE) tends to not see
 this behavior.  Those doing native tend to have more experience
 debugging it as well as they already committed business resources to
 it.

Tunnels therefore should only exist at the edge, where native IPv6
cannot be made possible without significant investments in hardware
and/or other resources. Of course every tunnel should at some point in
time be replaced by native where possible; hopefully the folks planning
expenses and hardware upgrades have finally realized that they cannot
get around it any more and have put this IPv6 feature on the list for
the next round of upgrades.


Note that software-based tunnels can be extremely quick nowadays too,
especially given how abundant hardware can be. During tests for sixxsd
v4 I have been able to stuff 10GE through it with ease. The trick there
is primarily that we do not need to do an expensive full IPv6 address
lookup: we know how the addresses are structured, so instead of a
128-bit lookup we can restrict it to a 12-bit lookup for those tunnels,
which is just a direct jump table, much cheaper than generic silicon
that needs to handle the full 128 bits. (That same trick would of
course be even faster in hardware built specifically to apply it.) It
is also much faster than the software tunnels you would normally find
in e.g. a Linux or BSD kernel, because those look up tunnels based on
the IPv4 address, thus the full 32-bit address space, instead of using
the knowledge that the 128-bit lookup can be reduced to the 12 bits
that we use. The advantage of knowing one's field and being less
generic ;)

Greets,
 Jeroen



Recommendation for OOB management via IP

2012-06-04 Thread Hiten J. Thakkar

Hello!

My work place is looking for an OOB management solution over IP. We have 
a Lantronix KVM in our datacenter with nearly 100% uptime, and Lantronix 
SLC-8/16/48 units with 2 NICs deployed across various MDFs on campus and 
at remote locations (5). On our main campus we have a parallel net, but 
for the remote locations we are looking to access the Lantronix SLCs via 
the second NIC using an IP-based solution. Can you kindly make 
suggestions? I supremely appreciate your time and inputs in advance.


--

Thanks and regards,
Hiten J. Thakkar




RE: IPv6 day and tunnels

2012-06-04 Thread Templin, Fred L
 -Original Message-
 From: Masataka Ohta [mailto:mo...@necom830.hpcl.titech.ac.jp]
 Sent: Monday, June 04, 2012 1:08 PM
 To: Templin, Fred L; nanog@nanog.org
 Subject: Re: IPv6 day and tunnels
 
 Templin, Fred L wrote:
 
  As your proposal, too, gives up to have unique IDs, does that
  matter?
 
  This is taken care of by rate limiting at the tunnel
 
 No, I'm talking about:
 
Note that a possible conflict exists when IP fragmentation has
already been performed by a source host before the fragments arrive
at the tunnel ingress.
 
  Note that, with your draft, a route change between two
  tunnels with same C may cause block corruption.
 
  There are several built-in mitigations for this. First,
  the tunnel ingress does not assign Identification values
  sequentially but rather skips around to avoid synchronizing
  with some other node that is sending fragments to the same
 
 I'm talking about two tunnels with same skip value.

There are several factors to consider. First, each tunnel
ingress chooses its initial Identification value (or values)
randomly and independent of all other tunnel ingresses.
Secondly, the packet arrival rates at the various tunnel
ingresses are completely independent and in no way
correlated. So, while an occasional reassembly collision
is possible, the 32-bit Identification value would make it
extremely rare. And the variability of packet arrivals
between the tunnel endpoints would make it such that a
string of consecutive collisions would never happen. So,
I'm not sure that a randomly-chosen skip value is even
necessary.
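A back-of-envelope birthday estimate (my own, not from the draft) supports the "extremely rare" claim for a 32-bit Identification:

```python
# Birthday-style estimate (my own back-of-envelope, not the draft's):
# probability that two of N concurrently-alive fragmented packets
# between the same (src,dst) pair share a 32-bit Identification.
import math

ID_SPACE = 2 ** 32

def collision_probability(n_inflight):
    # P(collision) ~= 1 - exp(-n^2 / (2 * ID_SPACE))
    return 1 - math.exp(-n_inflight ** 2 / (2 * ID_SPACE))

for n in (100, 10_000, 100_000):
    print(f"{n:>7} in-flight packets: P ~ {collision_probability(n):.2e}")
```

With a realistic number of concurrently-alive fragments between one endpoint pair, the probability stays tiny; it only approaches certainty with on the order of 10^5 packets simultaneously in reassembly.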
 
  Secondly, the ingress chooses random fragment
  sizes for the A and B portions of the packet so that the A
  portion of packet 1 does not match up properly with the B
  portion of packet 2 and hence will be dropped.
 
 You can do so with outer fragment, too. Moreover, it does not
 have to be random but regular, which effectively extend ID
 length.

Outer fragmentation cooks the tunnel egresses at high
data rates. End systems are expected and required to
reassemble on their own behalf.

  Finally, even
  if the A portion of packet 1 somehow matches up with the B
  portion of packet 2 the Internet checksum provides an
  additional line of defense.
 
 Thus, don't insist on having unique IDs so much.

Overlapping fragments are disallowed for IPv6, but I
think are still allowed for IPv4. So, IPv4 still needs
the unique IDs by virtue of rate limiting.

  It is recommended that IPv4 nodes be able to reassemble
  as much as their connected interface MTUs. In the vast
  majority of cases that means that the nodes should be
  able to reassemble 1500. But, there is no assurance
  of anything more!
 
 I'm talking about not protocol recommendation but proper
 operation.

I don't see any operational guidance recommending that the
tunnel ingress configure an MRU of 1520 or larger.

Thanks - Fred
fred.l.temp...@boeing.com

   Masataka Ohta



Re: IPv6 day and tunnels

2012-06-04 Thread Joe Maimon



Jeroen Massar wrote:



Tunnels therefor only should exist at the edge where native IPv6 cannot
be made possible without significant investments in hardware and or
other resources. Of course every tunnel should at one point in time be
replaced by native where possible, thus hopefully the folks planning
expenses and hardware upgrades have finally realized that they cannot
get around it any more and have put this ipv6 feature on the list for
the next round of upgrades.



IPv4 is pretty mature. Are there more or less tunnels on it?

Why do you think a maturing IPv6 means less tunnels as opposed to more?

Does IPv6 contain elegant solutions to all the issues for which one 
would resort to tunnels with IPv4?


Does successful IPv6 deployment require obsoleting tunneling?

Fail.

Today, most people can't even get IPv6 without tunnels.

And tunnels are far from the only cause of an MTU lower than what has 
become the only valid MTU of 1500, thanks in no small part to people who 
refuse to acknowledge operational reality and are quite satisfied with 
the state of things once they find a "them" to blame it on.


I just want to know if we can expect IPv6 to devolve into 1280 standard 
mtu and at what gigabit rates.



Joe



RE: IPv6 day and tunnels

2012-06-04 Thread Templin, Fred L
 I just want to know if we can expect IPv6 to devolve into 1280 standard
 mtu and at what gigabit rates.

The vast majority of hosts will still expect 1500, so
we need to find a way to get them at least that much.

Fred
fred.l.temp...@boeing.com



Re: Cat Humor

2012-06-04 Thread Joel Esler
But a Cat 6 is one more, isn't it?

http://www.youtube.com/watch?v=EbVKWCpNFhY



-- 
Joel Esler


On Monday, June 4, 2012 at 2:58 PM, Eric Wieling wrote:

 
 I'm not looking for help, just thought this was hilarious.
 
 Mark called in from XO he stated a tech was on site and found out that 
 client used a CAT 6 cable instead of a CAT 5 cable and XO doesn't have a 
 connecting piece for the CAT 6 cable. he stated if client gets a wire/cable 
 guy out there to fix issue, XO can send out a tech to make sure they hook up 
 everything correctly. 



Re: IPv6 day and tunnels

2012-06-04 Thread Jeroen Massar
On 2012-06-04 14:26, Joe Maimon wrote:
 
 
 Jeroen Massar wrote:
 

 Tunnels therefor only should exist at the edge where native IPv6 cannot
 be made possible without significant investments in hardware and or
 other resources. Of course every tunnel should at one point in time be
 replaced by native where possible, thus hopefully the folks planning
 expenses and hardware upgrades have finally realized that they cannot
 get around it any more and have put this ipv6 feature on the list for
 the next round of upgrades.
 
 
 IPv4 is pretty mature. Are there more or less tunnels on it?

I would hazard to state that there are more IPv4 tunnels than IPv6
tunnels, because tunneling is what most people simply call VPN and
there are large swaths of those.

 Why do you think a maturing IPv6 means less tunnels as opposed to more?

More native instead of tunneling IPv6 over IPv4. Note that tunneling in
this context is used for connecting locations that do not have IPv6 but
have IPv4, not for connecting, VPN-style, to networks where you need to
gain access to a secured/secluded network.

If people want to use a tunnel for the purpose of a VPN, then they will,
be that IPv4 or IPv6 or both inside that tunnel.

 Does IPv6 contain elegant solutions to all the issues one would resort
 to tunnels with IPv4?

Instead of having a custom VPN protocol one can do IPSEC properly now as
there is no NAT that one has to get around. Microsoft's Direct Access
does this btw and is an excellent example of doing it correctly.

 Does successful IPv6 deployment require obsoleting tunneling?

No, why should it? But note that IPv6 tunnels (not VPNs) are a
transition technique from IPv4 to IPv6 and thus should not remain around
forever; the transition will end somewhere, sometime, likely far away in
the future at the speed that IPv6 is being deployed ;)

 Fail.
 
 Today, most people cant even get IPv6 without tunnels.

In time that will change, that is simply transitional.

 And tunnels are far from the only cause of MTU lower than what has
 become the only valid MTU of 1500, thanks in no small part to people who
 refuse to acknowledge operational reality and are quite satisfied with
 the state of things once they find a them to blame it on.
 
 I just want to know if we can expect IPv6 to devolve into 1280 standard
 mtu and at what gigabit rates.

1280 is the minimum IPv6 MTU. If people allow pMTU to work, aka accept
and process ICMPv6 Packet-Too-Big messages, everything will just work.

This whole thread is about people who cannot be bothered to know what
they are filtering and that they might just randomly block PtB as they
are doing with IPv4 today. Yes, in that case their network breaks if the
packets are suddenly larger than a link somewhere else, that is the same
as in IPv4 ;)

Greets,
 Jeroen



RE: IPv6 day and tunnels

2012-06-04 Thread Templin, Fred L
  I just want to know if we can expect IPv6 to devolve into 1280 standard
  mtu and at what gigabit rates.
 
 1280 is the minimum IPv6 MTU. If people allow pMTU to work, aka accept
 and process ICMPv6 Packet-Too-Big messages everything will just work.
 
 This whole thread is about people who cannot be bothered to know what
 they are filtering and that they might just randomly block PtB as they
 are doing with IPv4 today. Yes, in that case their network breaks if the
 packets are suddenly larger than a link somewhere else, that is the same
 as in IPv4 ;)

But, it is not necessarily the person that filters the PTBs
that suffers the breakage. It is the original source that
may be many IP hops further down the line, who would have
no way of knowing that the filtering is even happening.

Thanks - Fred
fred.l.temp...@boeing.com 

 Greets,
  Jeroen




Re: IPv6 day and tunnels

2012-06-04 Thread Jeroen Massar
On 2012-06-04 14:55, Templin, Fred L wrote:
 I just want to know if we can expect IPv6 to devolve into 1280 standard
 mtu and at what gigabit rates.

 1280 is the minimum IPv6 MTU. If people allow pMTU to work, aka accept
 and process ICMPv6 Packet-Too-Big messages everything will just work.

 This whole thread is about people who cannot be bothered to know what
 they are filtering and that they might just randomly block PtB as they
 are doing with IPv4 today. Yes, in that case their network breaks if the
 packets are suddenly larger than a link somewhere else, that is the same
 as in IPv4 ;)
 
 But, it is not necessarily the person that filters the PTBs
 that suffers the breakage. It is the original source that
 may be many IP hops further down the line, who would have
 no way of knowing that the filtering is even happening.

It is not too tricky to figure that out actually:

$ tracepath6 www.nanog.org
 1?: [LOCALHOST]   0.078ms pmtu 1500
 1:  2620:0:6b0:a::1   0.540ms
 1:  2620:0:6b0:a::1   1.124ms
 2:  ge-4-35.car2.Chicago2.Level3.net 56.557ms asymm 13
 3:  vl-52.edge4.Chicago2.Level3.net  57.501ms asymm 13
 4:  2001:1890:1fff:310:192:205:37:149   61.910ms asymm 10
 5:  cgcil21crs.ipv6.att.net  92.067ms asymm 12
 6:  sffca21crs.ipv6.att.net  94.720ms asymm 12
 7:  cr81.sj2ca.ip.att.net90.068ms asymm 12
 8:  sj2ca401me3.ipv6.att.net 90.605ms asymm 11
 9:  2001:1890:c00:3a00::11fb:8591   89.888ms asymm 12
10:  no reply
11:  no reply
12:  no reply

and you'll at least have a good guess where it happens.

Not something for non-techy users, but good enough hopefully for people
working in the various NOCs.

Now the tricky part is where to complain to get that fixed though ;)

Greets,
 Jeroen

(tracepath6 is available on your favourite Linux, eg in the
iputils-tracepath package for Debian, for the various *BSD's one can use
scamper from: http://www.wand.net.nz/scamper/ )



Google Public DNS contact

2012-06-04 Thread alex-lists-nanog
Hello,

If anyone has a contact in the Google Group that deals with Google's
Public DNS servers ( i.e. the 8.8.8.8/8.8.4.4 creatures ) could that person
kindly drop me an email off list? 

I believe there might be an issue with some of the servers.

Thanks,
Alex




Re: IPv6 day and tunnels

2012-06-04 Thread Owen DeLong

On Jun 4, 2012, at 2:26 PM, Joe Maimon wrote:

 
 
 Jeroen Massar wrote:
 
 
 Tunnels therefor only should exist at the edge where native IPv6 cannot
 be made possible without significant investments in hardware and or
 other resources. Of course every tunnel should at one point in time be
 replaced by native where possible, thus hopefully the folks planning
 expenses and hardware upgrades have finally realized that they cannot
 get around it any more and have put this ipv6 feature on the list for
 the next round of upgrades.
 
 
 IPv4 is pretty mature. Are there more or less tunnels on it?
 

There are dramatically fewer IPv4 tunnels than IPv6 tunnels to the best of my 
knowledge.

 Why do you think a maturing IPv6 means less tunnels as opposed to more?
 

Because a maturing IPv6 eliminates many of the present-day needs for IPv6
tunnels, chief of which is the need to span IPv4-only areas of the network
when connecting IPv6 end points.

 Does IPv6 contain elegant solutions to all the issues one would resort to 
 tunnels with IPv4?

Many of the issues I would resort to tunnels for involve working around NAT, 
so, yes, IPv6
provides a much more elegant solution -- End-to-end addressing.

However, for some of the other cases, no, tunnels will remain valuable in IPv6. 
However, as
IPv6 end-to-end native connectivity becomes more prevalent, much of the current 
need for
IPv6 over IPv4 tunnels will become deprecated.

 Does successful IPv6 deployment require obsoleting tunneling?

No, it does not, but, it will naturally obsolete many of the tunnels which 
exist today.

 Fail.
 

What, exactly are you saying is a failure? The single word here even in context 
is
very ambiguous.

 Today, most people can't even get IPv6 without tunnels.

Anyone can get IPv6 without a tunnel if they are willing to bring a circuit to 
the right place.
As IPv6 gets more ubiquitously deployed, the number of right places will grow 
and the cost
of getting a circuit to one of them will thus decrease.

 And tunnels are far from the only cause of MTU lower than what has become the 
 only valid MTU of 1500, thanks in no small part to people who refuse to 
 acknowledge operational reality and are quite satisfied with the state of 
 things once they find a "them" to blame it on.

Meh... Sour grapes really don't add anything useful to the discussion.

Breaking PMTU-D is bad. People should stop doing so.

Blocking PTB messages is bad in IPv4 and worse in IPv6.

This has been well known for many years. If you're breaking PMTU-D, then stop 
that. If not, then you're not part of them.

If you have a useful alternative solution to propose, put it forth and let's 
discuss the merits.

 I just want to know if we can expect IPv6 to devolve into 1280 standard mtu 
 and at what gigabit rates.

I hope not. I hope that IPv6 will cause people to actually re-evaluate their 
behavior WRT PMTU-D and correct the actual problem. Working PMTU-D allows not 
only 1500, but also 1280, and 9000 and >9000 octet datagrams to be possible and
segments that support >1500 work almost as well as segments that support jumbo
frames. Where jumbo frames offer an end-to-end advantage, that advantage can be 
realized. Where there is a segment with a 1280 MTU, that can also work with a 
relatively small performance penalty.

Where PMTU-D is broken, nothing works unless the MTU end-to-end happens to 
coincide with the smallest MTU.

For links that carry tunnels and clear traffic, life gets interesting if one of 
them is the one with the smallest MTU regardless of the MTU value chosen.

Owen

 
 
 Joe




Re: Google Public DNS contact

2012-06-04 Thread Joe Provo
On Mon, Jun 04, 2012 at 06:08:47PM -0400, alex-lists-na...@yuriev.com wrote:
 Hello,
 
 If anyone has a contact in the Google Group that deals with Google's
 Public DNS servers ( i.e. the 8.8.8.8/8.8.4.4 creatures ) could that person
 kindly drop me an email off list? 
 
 I believe there might be an issue with some of the servers.
 
Have you already visited https://developers.google.com/speed/public-dns/
and contacted the public group mentioned?  See also 
https://developers.google.com/speed/public-dns/faq#support ...


-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE / NewNOG



Re: IPv6 day and tunnels

2012-06-04 Thread Joe Maimon



Jeroen Massar wrote:



If people want to use a tunnel for the purpose of a VPN, then they will,
be that IPv4 or IPv6 or both inside that tunnel.





Instead of having a custom VPN protocol one can do IPSEC properly now as
there is no NAT that one has to get around. Microsoft's Direct Access
does this btw and is an excellent example of doing it correctly.


Microsoft has had this capability since win2k. I didn't see any 
enterprises use it, even those who used their globally unique and routed 
IPv4 /16 internally. NAT was not why they did not use it.


They did not use it externally, they did not use it internally.

In fact, most of them were involved in projects to switch to NAT internally.

Enterprises also happen not to be thrilled with the absence of NAT in IPv6.

Don't expect huge uptake there.



No why should it? But note that IPv6 tunnels (not VPNs) are a
transition technique from IPv4 to IPv6 and thus should not remain around
forever, the transition will end somewhere, sometime, likely far away in
the future with the speed that IPv6 is being deployed ;)



So VPN is the _only_ acceptable use of sub-1500 encapsulation?



Today, most people can't even get IPv6 without tunnels.


In time that will change, that is simply transitional.



If turning it on with a tunnel breaks things, it won't make native 
transition happen sooner.





1280 is the minimum IPv6 MTU. If people allow pMTU to work, aka accept
and process ICMPv6 Packet-Too-Big messages everything will just work.


If things break with MTUs higher than 1280 but less than 1500, there 
really is no reason at all not to use 1280; the efficiency difference is 
trivial. And on the IPv4 internet, we generally cannot control what most 
of the rest of the people on it do. Looks like we are not going to be 
doing any better on the IPv6 internet.
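For what it's worth, the "efficiency difference is trivial" claim is easy to quantify; a rough back-of-the-envelope sketch, assuming a 40-byte IPv6 header plus a 20-byte TCP header with no options:

```python
# Back-of-the-envelope check of the efficiency claim. Header sizes assume a
# plain 40-byte IPv6 header plus a 20-byte TCP header (no options): an
# illustration, not a measurement.

HDR = 40 + 20  # octets of per-packet header

def payload_efficiency(mtu):
    """Fraction of each packet that is payload rather than header."""
    return (mtu - HDR) / mtu

for mtu in (1280, 1500, 9000):
    print(mtu, round(payload_efficiency(mtu), 4))
```

At 1280 about 95.3% of each packet is payload versus 96.0% at 1500, under one percentage point apart, which supports the "trivial" characterization; jumbo frames at 9000 push it past 99%.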




This whole thread is about people who cannot be bothered to know what
they are filtering and that they might just randomly block PtB as they
are doing with IPv4 today. Yes, in that case their network breaks if the
packets are suddenly larger than a link somewhere else, that is the same
as in IPv4 ;)

Greets,
  Jeroen




This whole thread is all about how IPv6 has not improved any of the 
issues that are well known with IPv4 and in many cases makes them worse.


This whole thread is all about showcasing how IPv6 makes them worse, 
simply because it is designed with a "this time they will do what we 
want" mentality.


Joe



Re: IPv6 day and tunnels

2012-06-04 Thread Joe Maimon



Owen DeLong wrote:




Fail.



What, exactly are you saying is a failure? The single word here even in context 
is
very ambiguous.


The failure is that even now, when tunnels are critical to transition, a 
proper solution that improves on the IPv4 problems does not exist.


And if tunnels do become less prevalent there will be even less impetus 
than now to make things work better.






Today, most people cant even get IPv6 without tunnels.


Anyone can get IPv6 without a tunnel if they are willing to bring a circuit to 
the right place.


Today most people can't even get IPv6 without tunnels, or without paying 
excessively more for their internet connection, or without having their 
pool of vendors shrink dramatically, sometimes to the point of none.





Breaking PMTU-D is bad. People should stop doing so.

Blocking PTB messages is bad in IPv4 and worse in IPv6.


It has always been bad and people have not stopped doing it. And 
intentional blocking is not the sole cause of pmtud breaking.





If you have a useful alternative solution to propose, put it forth and let's 
discuss the merits.


PMTU-D probing, as recently standardized, seems a more likely solution. 
Having CPE capable of TCP MSS adjustment on v6 is another one. Being 
able to fragment when you want to is another good one as well.
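The TCP MSS adjustment mentioned above amounts to simple arithmetic at the CPE; a hedged sketch, where the header sizes assume plain IPv6 + TCP with no extension headers or options:

```python
# Hedged sketch of TCP MSS adjustment ("MSS clamping") at a CPE: cap the MSS
# option in a transiting SYN at what the narrow link can carry. Header sizes
# assume plain IPv6 + TCP with no extension headers or options.

IPV6_HDR = 40
TCP_HDR = 20

def clamp_mss(advertised_mss, link_mtu):
    """Never let the advertised MSS exceed link MTU minus fixed headers."""
    return min(advertised_mss, link_mtu - IPV6_HDR - TCP_HDR)

# A host advertising MSS 1440 behind a 1480-byte tunnel gets clamped to 1420.
print(clamp_mss(1440, 1480))  # 1420
```

Because the clamp acts on the SYN itself, it works even when PTB messages are filtered somewhere upstream, which is why it keeps coming up in this thread.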





I hope not. I hope that IPv6 will cause people to actually re-evaluate their behavior 
WRT PMTU-D and correct the actual problem. Working PMTU-D allows not only 1500, but 
also 1280, and 9000 and >9000 octet datagrams to be possible and segments that 
support >1500 work almost as well as segments that support jumbo frames. Where 
jumbo frames offer an end-to-end advantage, that advantage can be realized. Where 
there is a segment with a 1280 MTU, that can also work with a relatively small 
performance penalty.

Where PMTU-D is broken, nothing works unless the MTU end-to-end happens to 
coincide with the smallest MTU.

For links that carry tunnels and clear traffic, life gets interesting if one of 
them is the one with the smallest MTU regardless of the MTU value chosen.

Owen



I don't share your optimism that it will go any better this time around 
than last. If it goes at all.


Joe



RE: IPv6 day and tunnels

2012-06-04 Thread Templin, Fred L
 PMTU-D probing, as recently standardized, seems a more likely solution.
 Having CPE capable of TCP MSS adjustment on v6 is another one. Being
 able to fragment when you want to is another good one as well.

I'll take a) and c), but don't care so much for b).

About fragmenting, any tunnel ingress (VPNs included) can
do inner fragmentation today independently of all other
ingresses and with no changes necessary on the egress.
It's just that they need to take precautions to avoid
messing up the final destination's reassembly buffers.
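The inner-fragmentation sizing Fred describes might be sketched as follows; the 40-byte outer header and 8-byte Fragment-header overheads are assumptions for illustration, not a spec:

```python
# Hedged sketch of inner fragmentation at a tunnel ingress: split the inner
# packet so each piece, once encapsulated, fits the tunnel MTU. The 40-byte
# outer header and 8-byte Fragment-header overheads are assumptions for
# illustration only.

OUTER_HDR = 40  # e.g. an IPv6-in-IPv6 outer header
FRAG_HDR = 8    # IPv6 Fragment extension header

def inner_fragment_sizes(inner_len, tunnel_mtu):
    """Fragment payload sizes; all but the last sit on 8-octet boundaries."""
    room = tunnel_mtu - OUTER_HDR - FRAG_HDR
    step = room - room % 8
    sizes = []
    while inner_len > step:
        sizes.append(step)
        inner_len -= step
    sizes.append(inner_len)
    return sizes

print(inner_fragment_sizes(1500, 1280))  # [1232, 268]
```

The egress never sees the difference; reassembly happens at the final destination, which is why the precautions about that host's reassembly buffers matter.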

Fred
fred.l.temp...@boeing.com 




IPv6 evolution

2012-06-04 Thread Matthew Luckie
IPv6 paths that are the same as an IPv4-level path are correlated with 
better IPv6 performance according to "Assessing IPv6 Through Web 
Access - A Measurement Study and Its Findings":


http://repository.upenn.edu/ese_papers/602/

At the Feb NANOG I gave a lightning talk on trends involving dual-stack 
ASes, beginning with the fraction of AS-level paths that are the same in 
IPv4 and IPv6.  As of Jan 2012, 40-50% of AS-level paths are the same in 
v4 and v6 for dual-stacked origin ASes.


http://www.nanog.org/meetings/nanog54/presentations/Monday/Luckie_LT.pdf

We've looked further into the evolution of IPv6 since then.  In 
particular we looked at what could be.  We find that 95% of AS-level 
paths could be the same today: where an IPv4 path contains ASes that are 
not found in an IPv6 path to the same dual-stacked origin, we check to 
see if the AS is observed in an IPv6 BGP path, and thus the AS is IPv6 
capable at a minimum.  A brief blog post on this is at:


http://blog.caida.org/best_available_data/2012/06/04/ipv6-what-could-be-but-isnt-yet/

Matthew



Re: IPv6 day and tunnels

2012-06-04 Thread Owen DeLong

On Jun 4, 2012, at 3:34 PM, Joe Maimon wrote:

 
 
 Owen DeLong wrote:
 
 
 Fail.
 
 
 What, exactly are you saying is a failure? The single word here even in 
 context is
 very ambiguous.
 
 The failure is that even now, when tunnels are critical to transition, a 
 proper solution that improves on the IPv4 problems does not exist
 

A proper solution does exist... Stop blocking PTB messages. That's the proper 
solution. It was the proper solution in IPv4 and it is the proper solution in 
IPv6.

 And if tunnels do become less prevalent there will be even less impetus than 
 now to make things work better.

True, perhaps, but, I don't buy that tunnels are the only sub-1500 octet MTU 
out there, so, I think your premise here is somewhat flawed.

 
 
 
 Today, most people cant even get IPv6 without tunnels.
 
 Anyone can get IPv6 without a tunnel if they are willing to bring a circuit 
 to the right place.
 
 Today most people cant even get IPv6 without tunnels, or without paying 
 excessively more for their internet connection, or without having their pool 
 of vendors shrink dramatically, sometimes to the point of none.

It never shrinks to none, but, yes, the cost can go up dramatically. You can, 
generally, get a circuit to somewhere that HE has presence from almost anywhere 
in the world if you are willing to pay for it. Any excessive costs would be 
what the circuit vendor charges. HE sells transit pretty cheap and everywhere 
we sell, it's dual-stack native. Sure, we wish we could magically have POPs 
everywhere and serve every customer with a short local loop. Unfortunately, 
that's not economically viable at this time, so, we build out where we can when 
there is sufficient demand to cover our costs. Pretty much like any other 
provider, I would imagine. Difference is, we've been building everything native 
dual stack for years. IPv6 is what we do. We're also pretty good at IPv4, so we 
deliver legacy connectivity to those that want it as well.

 
 Breaking PMTU-D is bad. People should stop doing so.
 
 Blocking PTB messages is bad in IPv4 and worse in IPv6.
 
 It has always been bad and people have not stopped doing it. And intentional 
 blocking is not the sole cause of pmtud breaking.
 

I guess that depends on how you define the term "intentional". I don't care 
whether it was the administrator's intent, or a default intentionally placed 
there by the firewall vendor or what, it was someone's intent, therefore, yes, 
it is intentional. If you can cite an actual case of accidental dropping of PTB 
messages that was not the result of SOMEONE's intent, then, OK. However, at 
least on IPv6, I believe that intentional blocking (regardless of whose intent) 
is, in fact, the only source of PMTUD breakage at this point. In IPv4, there is 
some breakage in older software that didn't do PMTUD right even if it received 
the correct packets, but, that's not relevant to IPv6.

 
 If you have a useful alternative solution to propose, put it forth and let's 
 discuss the merits.
 
 PMTU-D probing, as recently standardized, seems a more likely solution. Having 
 CPE capable of TCP MSS adjustment on v6 is another one. Being able to 
 fragment when you want to is another good one as well.
 

Fragments are horrible from a security perspective and worse from a network 
processing perspective. Having a way to signal path MTU is much better. Probing 
is fine, but, it's not a complete solution and doesn't completely compensate 
for the lack of PTB message transparency.

 I hope not. I hope that IPv6 will cause people to actually re-evaluate their 
 behavior WRT PMTU-D and correct the actual problem. Working PMTU-D allows 
 not only 1500, but also 1280, and 9000 and >9000 octet datagrams to be 
 possible and segments that support >1500 work almost as well as segments that 
 support jumbo frames. Where jumbo frames offer an end-to-end advantage, that 
 advantage can be realized. Where there is a segment with a 1280 MTU, that 
 can also work with a relatively small performance penalty.
 
 Where PMTU-D is broken, nothing works unless the MTU end-to-end happens to 
 coincide with the smallest MTU.
 
 For links that carry tunnels and clear traffic, life gets interesting if one 
 of them is the one with the smallest MTU regardless of the MTU value chosen.
 
 Owen
 
 
 I don't share your optimism that it will go any better this time around than 
 last. If it goes at all.
 

It is clearly going, so, the "if it goes at all" question is already answered. 
We're already seeing a huge ramp in IPv6 traffic leading up to ISOC's big 
celebration of my birthday (aka World IPv6 Launch) since early last week. I 
have no reason to expect that that traffic won't remain at the new higher 
levels after June 6. There are too many ISPs, Mobile operators, Web site 
operators and others committed at this point for it not to actually go. Also, 
since there's no viable alternative if it doesn't go, that pretty well ensures 
that it will go one way or another.

RE: IPv6 day and tunnels

2012-06-04 Thread Templin, Fred L
Hi Owen,

I am 100% with you on wanting to see an end to filtering
of ICMPv6 PTBs. But, tunnels can take matters into their
own hands today to make sure that 1500 and smaller gets
through no matter if PTBs are delivered or not. There
doesn't really even need to be a spec as long as each
tunnel takes the necessary precautions to avoid messing
up the final destination.

The next thing is to convince the hosts to implement
RFC4821...
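For readers unfamiliar with RFC 4821, its packetization-layer PMTUD amounts to probing rather than trusting PTB delivery; a minimal sketch, with `path_passes` standing in for a real probe transaction:

```python
# Hedged sketch of RFC 4821-style packetization-layer PMTUD: rather than
# trusting PTB delivery, probe with progressively sized packets and binary-
# search on the outcome. `path_passes` stands in for a real probe (e.g. a
# TCP segment that either gets ACKed or is lost).

def plpmtud(path_passes, floor=1280, ceiling=9000):
    """Binary-search the largest packet size a path will carry."""
    low, high = floor, ceiling
    while low < high:
        probe = (low + high + 1) // 2
        if path_passes(probe):
            low = probe       # probe delivered: raise the floor
        else:
            high = probe - 1  # probe lost: lower the ceiling
    return low

# A path whose true MTU is 1492 (PPPoE somewhere in the middle, say):
print(plpmtud(lambda size: size <= 1492))  # 1492
```

The price is extra round trips and the ambiguity of loss (congestion looks like an MTU limit), which is why it complements rather than replaces working PTB delivery.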

Thanks - Fred
fred.l.temp...@boeing.com

 -Original Message-
 From: Owen DeLong [mailto:o...@delong.com]
 Sent: Monday, June 04, 2012 4:00 PM
 To: Joe Maimon
 Cc: nanog@nanog.org
 Subject: Re: IPv6 day and tunnels
 
 
 On Jun 4, 2012, at 3:34 PM, Joe Maimon wrote:
 
 
 
  Owen DeLong wrote:
 
 
  Fail.
 
 
  What, exactly are you saying is a failure? The single word here even in
 context is
  very ambiguous.
 
  The failure is that even now, when tunnels are critical to transition, a
 proper solution that improves on the IPv4 problems does not exist
 
 
 A proper solution does exist... Stop blocking PTB messages. That's the
 proper solution. It was the proper solution in IPv4 and it is the proper
 solution in IPv6.
 
  And if tunnels do become less prevalent there will be even less impetus
 than now to make things work better.
 
 True, perhaps, but, I don't buy that tunnels are the only sub-1500 octet
 MTU out there, so, I think your premise here is somewhat flawed.
 
 
 
 
  Today, most people cant even get IPv6 without tunnels.
 
  Anyone can get IPv6 without a tunnel if they are willing to bring a
 circuit to the right place.
 
  Today most people cant even get IPv6 without tunnels, or without paying
 excessively more for their internet connection, or without having their
 pool of vendors shrink dramatically, sometimes to the point of none.
 
 It never shrinks to none, but, yes, the cost can go up dramatically. You
 can, generally, get a circuit to somewhere that HE has presence from
 almost anywhere in the world if you are willing to pay for it. Any
 excessive costs would be what the circuit vendor charges. HE sells transit
 pretty cheap and everywhere we sell, it's dual-stack native. Sure, we wish
 we could magically have POPs everywhere and serve every customer with a
 short local loop. Unfortunately, that's not economically viable at this
 time, so, we build out where we can when there is sufficient demand to
 cover our costs. Pretty much like any other provider, I would imagine.
 Difference is, we've been building everything native dual stack for years.
 IPv6 is what we do. We're also pretty good at IPv4, so we deliver legacy
 connectivity to those that want it as well.
 
 
  Breaking PMTU-D is bad. People should stop doing so.
 
  Blocking PTB messages is bad in IPv4 and worse in IPv6.
 
  It has always been bad and people have not stopped doing it. And
 intentional blocking is not the sole cause of pmtud breaking.
 
 
 I guess that depends on how you define the term intentional. I don't care
 whether it was the administrators intent, or a default intentionally
 placed there by the firewall vendor or what, it was someone's intent,
 therefore, yes, it is intentional. If you can cite an actual case of
 accidental dropping of PTB messages that was not the result of SOMEONE's
 intent, then, OK. However, at least on IPv6, I believe that intentional
 blocking (regardless of whose intent) is, in fact, the only source of
 PMTUD breakage at this point. In IPv4, there is some breakage in older
 software that didn't do PMTUD right even if it received the correct
 packets, but, that's not relevant to IPv6.
 
 
  If you have a useful alternative solution to propose, put it forth and
 let's discuss the merits.
 
  PMTU-d probing, as recently standardizes seems a more likely solution.
 Having CPE capable of TCP mss adjustment on v6 is another one. Being able
 to fragment when you want to is another good one as well.
 
 
 Fragments are horrible from a security perspective and worse from a
 network processing perspective. Having a way to signal path MTU is much
 better. Probing is fine, but, it's not a complete solution and doesn't
 completely compensate for the lack of PTB message transparency.
 
  I hope not. I hope that IPv6 will cause people to actually re-evaluate
 their behavior WRT PMTU-D and correct the actual problem. Working PMTU-D
 allows not only 1500, but also 1280, and 9000 and >9000 octet datagrams to
 be possible and segments that support >1500 work almost as well as segments
 that support jumbo frames. Where jumbo frames offer an end-to-end
 advantage, that advantage can be realized. Where there is a segment with a
 1280 MTU, that can also work with a relatively small performance penalty.
 
  Where PMTU-D is broken, nothing works unless the MTU end-to-end happens
 to coincide with the smallest MTU.
 
  For links that carry tunnels and clear traffic, life gets interesting
 if one of them is the one with the smallest MTU regardless of the MTU
 value chosen.
 
  Owen
 

Re: IPv6 day and tunnels

2012-06-04 Thread Masataka Ohta
Templin, Fred L wrote:

 I'm not sure that a randomly-chosen skip value is even
 necessary.

It is not necessary because, for ID uniqueness fundamentalists, a
single event is bad enough and, for most operators, a slight
possibility is acceptable.

 Outer fragmentation cooks the tunnel egresses at high
 data rates.

Have egresses with proper performance. That's the proper
operation.

 End systems are expected and required to
 reassemble on their own behalf.

That is not a proper operation of tunnels.

 Thus, don't insist on having unique IDs so much.
 
 Non-overlapping fragments are disallowed for IPv6, but
 I think are still allowed for IPv4. So, IPv4 still needs
 the unique IDs by virtue of rate limiting.

Even though there is no well defined value of MSL?

 I'm talking about not protocol recommendation but proper
 operation.
 
 I don't see any operational guidance recommending the
 tunnel ingress to configure an MRU of 1520 or larger.

I'm talking about not operation guidance but proper
operation.

Proper operators can, without any guidance, perform proper
operation.

Masataka Ohta



Re: IPv6 day and tunnels

2012-06-04 Thread Jeroen Massar
On 2012-06-04 15:27, Joe Maimon wrote:
 
 
 Jeroen Massar wrote:
 
 
 If people want to use a tunnel for the purpose of a VPN, then they will,
 be that IPv4 or IPv6 or both inside that tunnel.

 
 
 Instead of having a custom VPN protocol one can do IPSEC properly now as
 there is no NAT that one has to get around. Microsoft's Direct Access
 does this btw and is an excellent example of doing it correctly.
 
 Microsoft has had this capability since win2k. I didnt see any
 enterprises use it, even those who used their globally unique and routed
 ipv4 /16 internally. NAT was not why they did not use it.
 
 They did not use it externally, they did not use it internally.
 
 In fact, most of them were involved in projects to switch to NAT
 internally.
 
 Enterprises also happen not to be thrilled with the absence of NAT in IPv6.

What I read that you are saying is that you know a lot of folks who do
not understand the concept of end-to-end reachability and think that NAT
is a security feature and that ICMP is evil.

That indeed matches most of the corporate world quite well. That they
are heavily misinformed does not make it the correct answer though.


 Dont expect huge uptake there.

Every problem has its own solution depending on the situation.

Direct Access is a just another possible solution to a problem.

If NATs would not have existed and the IPSEC key infra was better
integrated into Operating Systems the uptake for IPSEC based VPNs would
have likely been quite a bit higher by now. But all guess work.

 No why should it? But note that IPv6 tunnels (not VPNs) are a
 transition technique from IPv4 to IPv6 and thus should not remain around
 forever, the transition will end somewhere, sometime, likely far away in
 the future with the speed that IPv6 is being deployed ;)
 
 
 So VPN is the _only_ acceptable use of sub 1500 encapsulation?

Why would anything be 'acceptable'? If you have a medium that only can
carry X bytes per packet, then that is the way it is, you'll just have
to be able to frag IPv6 packets on that medium if you want to support IPv6.

And the good thing is that if you can support jumbo frames, just turn it
on and let pMTU do its work. Happy 9000's ;)


 Today, most people cant even get IPv6 without tunnels.

 In time that will change, that is simply transitional.
 
 
 If turning it on with a tunnel breaks things, it wont make native
 transition happen sooner.

Using tunnels does not break things. Filtering PTB's (which can happen
anywhere in the network, thus also remotely) can break things though.

Or better said: mis-configuring systems break things.

 This whole thread is all about how IPv6 has not improved any of the
 issues that are well known with IPv4 and in many cases makes them worse.

You cannot unteach stupid people to do stupid things.

Protocol changes will not suddenly make people understand that what they
want to do is wrong and breaks said protocol.

Greets,
 Jeroen




Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Mark Andrews

What's really needed is a service that looks up a given web page
over IPv6 from behind a 1280 byte MTU link and reports if all the
elements load or not.   It dumps a list of elements with success/fail.

This would be useful to send the idiots that block ICMPv6 PTB yet
send packets bigger than 1280 bytes out too.

Mark
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org



RE: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Frank Bulk
Much of that can be found here: http://www.wand.net.nz/pmtud/

Frank

-Original Message-
From: Mark Andrews [mailto:ma...@isc.org] 
Sent: Monday, June 04, 2012 6:54 PM
To: Jeroen Massar
Cc: nanog@nanog.org
Subject: Re: test-ipv6.com / omgipv6day.com down


What's really needed is a service that looks up a given web page
over IPv6 from behind a 1280 byte MTU link and reports if all the
elements load or not.   It dumps a list of elements with success/fail.

This would be useful to send the idiots that block ICMPv6 PTB yet
send packets bigger than 1280 bytes out too.

Mark
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org






Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Owen DeLong
http://ipv6chicken.net

Owen

On Jun 4, 2012, at 4:54 PM, Mark Andrews wrote:

 
 What's really needed is a service that looks up a given web page
 over IPv6 from behind a 1280 byte MTU link and reports if all the
 elements load or not.   It dumps a list of elements with success/fail.
 
 This would be useful to send the idiots that block ICMPv6 PTB yet
 send packets bigger than 1280 bytes out too.
 
 Mark
 -- 
 Mark Andrews, ISC
 1 Seymour St., Dundas Valley, NSW 2117, Australia
 PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org




Re: IPv6 day and tunnels

2012-06-04 Thread Mark Andrews

What we need is World 1280 MTU day where *all* peering links are
set to 1280 bytes for IPv4 and IPv6 and are NOT raised for 24 hours
regardless of the complaints.  This needs to be done annually.

-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org



Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Jeroen Massar
On 2012-06-04 16:58, Owen DeLong wrote:
 http://ipv6chicken.net

$ dig -t any ipv6chicken.net

; <<>> DiG 9.8.1-P1 <<>> -t any ipv6chicken.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 16935

The chicken cannot cross the road as the chicken does not exist.

Greets,
 Jeroen



Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Mark Andrews

In message c8343920-c2bc-4e2d-bd1f-df1268486...@delong.com, Owen DeLong 
writes:
 http://ipv6chicken.net
 
 Owen

doesn't exist.

; <<>> DiG 9.9.1 <<>> ipv6chicken.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 5059
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ipv6chicken.net.   IN  A

;; AUTHORITY SECTION:
net.879 IN  SOA a.gtld-servers.net. 
nstld.verisign-grs.com. 1338855235 1800 900 604800 86400

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Tue Jun  5 10:14:40 2012
;; MSG SIZE  rcvd: 117

 
 On Jun 4, 2012, at 4:54 PM, Mark Andrews wrote:
 
  
  What's really needed is a service that looks up a given web page
  over IPv6 from behind a 1280 byte MTU link and reports if all the
  elements load or not.   It dumps a list of elements with success/fail.
  
  This would be useful to send the idiots that block ICMPv6 PTB yet
  send packets bigger than 1280 bytes out too.
  
  Mark
  -- 
  Mark Andrews, ISC
  1 Seymour St., Dundas Valley, NSW 2117, Australia
  PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
 
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org



Re: IPv6 day and tunnels

2012-06-04 Thread Owen DeLong
I kind of like the idea. I suspect that $DAYJOB would be less enthusiastic.

Owen

On Jun 4, 2012, at 5:13 PM, Mark Andrews wrote:

 
 What we need is World 1280 MTU day where *all* peering links are
 set to 1280 bytes for IPv4 and IPv6 and are NOT raised for 24 hours
 regardless of the complaints.  This needs to be done annually.
 
 -- 
 Mark Andrews, ISC
 1 Seymour St., Dundas Valley, NSW 2117, Australia
 PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org




Re: IPv6 day and tunnels

2012-06-04 Thread Joe Maimon



Jeroen Massar wrote:




That indeed matches most of the corporate world quite well. That they
are heavily misinformed does not make it the correct answer though.


Either you are correct and they are all wrong, or they have a 
perspective that you don't or won't see.


Either way, I don't see them changing their minds anytime soon.

So how about we both accept that they exist and start designing the 
network to welcome rather than ostracize them, unless that is your intent.




And the good thing is that if you can support jumbo frames, just turn it
on and let pMTU do its work. Happy 9000's ;)


pMTU has been broken in IPv4 since the early days.

It is still broken. It is also broken in IPv6. It will likely still be 
broken for the foreseeable future. This is


a) a problem that should not be ignored

b) a failure in imagination when designing the protocol

c) a missed opportunity to correct a systemic issue with IPv4





Or better said: mis-configuring systems break things.


Why do switches auto-mdix these days?

Because insisting that things will work properly if you just configure 
them correctly turns out to be inferior to designing a system that 
requires less configuration to achieve the same goal.


Automate.




This whole thread is all about how IPv6 has not improved any of the
issues that are well known with IPv4 and in many cases makes them worse.


You cannot unteach stupid people to do stupid things.

Protocol changes will not suddenly make people understand that what they
want to do is wrong and breaks said protocol.

Greets,
  Jeroen




You also cannot teach protocol people that there is protocol and then 
there is reality.


Relying on ICMP exception messages was always wrong for normal network 
operation.


Best,

Joe



Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Bryan Irvine
's/net/com'



On Mon, Jun 4, 2012 at 5:15 PM, Mark Andrews ma...@isc.org wrote:

 In message c8343920-c2bc-4e2d-bd1f-df1268486...@delong.com, Owen DeLong 
 writes:
 http://ipv6chicken.net

 Owen

 doesn't exist.

 ; <<>> DiG 9.9.1 <<>> ipv6chicken.net
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 5059
 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

 ;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ;; QUESTION SECTION:
 ;ipv6chicken.net.               IN      A

 ;; AUTHORITY SECTION:
 net.                    879     IN      SOA     a.gtld-servers.net. 
 nstld.verisign-grs.com. 1338855235 1800 900 604800 86400

 ;; Query time: 0 msec
 ;; SERVER: 127.0.0.1#53(127.0.0.1)
 ;; WHEN: Tue Jun  5 10:14:40 2012
 ;; MSG SIZE  rcvd: 117


 On Jun 4, 2012, at 4:54 PM, Mark Andrews wrote:

 
  What's really needed is a service that looks up a given web page
  over IPv6 from behind a 1280 byte MTU link and reports if all the
  elements load or not.   It dumps a list of elements with success/fail.
 
  This would be useful to send the idiots that block ICMPv6 PTB yet
  send packets bigger than 1280 bytes out too.
 
  Mark
  --
  Mark Andrews, ISC
  1 Seymour St., Dundas Valley, NSW 2117, Australia
  PHONE: +61 2 9871 4742                 INTERNET: ma...@isc.org

 --
 Mark Andrews, ISC
 1 Seymour St., Dundas Valley, NSW 2117, Australia
 PHONE: +61 2 9871 4742                 INTERNET: ma...@isc.org




Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Owen DeLong
My bad... It's .com not .net.

http://www.ipv6chicken.com

Owen

On Jun 4, 2012, at 5:14 PM, Jeroen Massar wrote:

 On 2012-06-04 16:58, Owen DeLong wrote:
 http://ipv6chicken.net
 
 $ dig -t any ipv6chicken.net
 
 ; <<>> DiG 9.8.1-P1 <<>> -t any ipv6chicken.net
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 16935
 
 The chicken cannot cross the road as the chicken does not exist.
 
 Greets,
 Jeroen




Re: IPv6 day and tunnels

2012-06-04 Thread Masataka Ohta
Joe Maimon wrote:

 pMTU has been broken in IPv4 since the early days.

 It is still broken. It is also broken in IPv6. It will likely
 still be broken for the foreseeable future. This is

 Relying on ICMP exception messages was always wrong for normal network 
 operation.

Agreed.

The proper solution is to have a field in the IPv7 header to
measure PMTU. It can be an 8-bit field, if fragment granularity
is 256 bytes.
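A rough sketch of how such a field might behave (the 8-bit width and 256-byte granularity come from the proposal above; the function names and the example path are purely illustrative):

```python
FIELD_MAX = 255  # 8-bit field


def mtu_to_field(link_mtu: int) -> int:
    """Encode a link MTU into the 8-bit field (256-byte granularity)."""
    return min(link_mtu // 256, FIELD_MAX)


def field_to_mtu(field: int) -> int:
    """Decode the field back into a usable path MTU."""
    return field * 256


def path_mtu(link_mtus: list[int]) -> int:
    """Every router min-updates the field as the packet transits.

    The sender learns the PMTU from the value echoed back by the
    receiver, with no reliance on ICMP Packet Too Big messages.
    """
    field = FIELD_MAX
    for mtu in link_mtus:  # each hop clamps the field to its own link
        field = min(field, mtu_to_field(mtu))
    return field_to_mtu(field)


# e.g. a 9000 / 1500 / 4352 path yields a usable PMTU of 1280,
# since 1500 // 256 == 5 and 5 * 256 == 1280.
```

The granularity costs a little headroom (a 1500-byte link is rounded down to 1280 usable bytes), which is the trade-off for fitting the measurement into a single octet.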

Masataka Ohta



ATT DSL IPv6

2012-06-04 Thread Grant Ridder
Hi Everyone,

Does anyone know about IPv6 on ATT residential DSL circuits?  About 8 or 9
months ago I ran through several IPv6 tests (http://test-ipv6.iad.vr.org/)
and they all passed.  With all the talk of IPv6 day over the past week I
decided to run it again just out of curiosity.  However, to my surprise, it
is returning the result of IPv4 only now.  Any ideas why they would have
rolled back IPv6?

Thanks,
Grant


Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Mark Andrews

http://ipv6chicken.com/ tests the path to me.  It doesn't check the
path back to the sites I want to reach, though it does provide an
independent third party if there are complaints that PTBs are not
being generated.  It would be useful if it reported the MTU that
was eventually used.  Most OSes have a hook to retrieve this.
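On Linux, for instance, that hook is the IP_MTU / IPV6_MTU socket option read from a connected socket; a minimal sketch (the numeric option values 14 and 24 are Linux-specific and not exposed as stdlib constants, and other OSes expose the cached PMTU differently):

```python
import socket


def discovered_mtu(host: str, port: int = 9) -> int:
    """Return the kernel's current path-MTU estimate toward host.

    Connecting a UDP socket sends no packets; it only binds a route,
    after which getsockopt(IP_MTU) reads the cached PMTU for it.
    """
    family = socket.getaddrinfo(host, port)[0][0]
    with socket.socket(family, socket.SOCK_DGRAM) as s:
        s.connect((host, port))
        if family == socket.AF_INET6:
            # IPV6_MTU: Linux option value 24 (no stdlib constant)
            return s.getsockopt(socket.IPPROTO_IPV6, 24)
        # IP_MTU: Linux option value 14 (no stdlib constant)
        return s.getsockopt(socket.IPPROTO_IP, 14)


# e.g. discovered_mtu("127.0.0.1") reports the loopback MTU
```

After a transfer completes, reading this on the connection's socket would give exactly the "MTU that was eventually used" figure a test site could report.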

-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org



Re: test-ipv6.com / omgipv6day.com down

2012-06-04 Thread Matthew Luckie
 What's really needed is a service that looks up a given web page
 over IPv6 from behind a 1280 byte MTU link and reports if all the
 elements load or not.   It dumps a list of elements with success/fail.

 This would be useful to send the idiots that block ICMPv6 PTB yet
 send packets bigger than 1280 bytes out too.

http://www.wand.net.nz/scamper/

Works on Mac OS X and FreeBSD.  It uses IPFW and rules 1-500 as
necessary.  Example below, showing a website sending > 1280 but
ignoring PTBs sent to it.

$ sudo scamper -F ipfw -I tbit -t pmtud -u 'http://www.sapo.pt/'
2001:8a0:2104:ff:213:13:146:140
tbit from 2001:470:d:4de:21f:3cff:fe20:bf4e to 2001:8a0:2104:ff:213:13:146:140
 server-mss 1460, result: pmtud-fail
 app: http, url: http://www.sapo.pt/
 [  0.048] TX SYN 64  seq = 0:0 
 [  0.254] RX SYN/ACK 64  seq = 0:1 
 [  0.255] TX 60  seq = 1:1 
 [  0.255] TX230  seq = 1:1(170)
 [  0.450] RX 60  seq = 1:171   
 [  0.469] RX   1460  seq = 1:171(1400) 
 [  0.469] TX PTB   1280  mtu = 1280
 [  0.470] RX   1460  seq = 1401:171(1400)  
 [  3.467] RX   1460  seq = 1:171(1400) 
 [  3.467] TX PTB   1280  mtu = 1280
 [  9.468] RX   1460  seq = 1:171(1400) 
 [  9.468] TX PTB   1280  mtu = 1280
 [ 21.471] RX   1460  seq = 1:171(1400) 
 [ 21.471] TX PTB   1280  mtu = 1280
 [ 31.933] RX RST 60  seq = 1:4294923802   



Re: Recommendation for OOB management via IP

2012-06-04 Thread Ameen Pishdadi
What's wrong with a DSL connection? It doesn't need to be anything fancy, just 
reliable enough to be up when your other stuff is down.

Thanks,
Ameen Pishdadi


On Jun 4, 2012, at 3:45 PM, Hiten J. Thakkar hthak...@ucsc.edu wrote:

 Hello!
 
 My work place is looking for an OOB management solution over IP. We have a 
 Lantronix KVM in our Datacenter with nearly 100% uptime, and Lantronix 
 SLC-8/16/48 ports with 2 NICs deployed across various MDFs on campus and at 
 remote locations (5). On our main campus we have a parallel net, but for the 
 remote locations we are looking to access the Lantronix SLCs via the second 
 NIC using an IP-based solution. Can you kindly make suggestions? I greatly 
 appreciate your time and input in advance.
 
 -- 
 
 Thanks and regards,
 Hiten J. Thakkar
 
 



Re: IPv6 day and tunnels

2012-06-04 Thread Owen DeLong

On Jun 4, 2012, at 5:33 PM, Masataka Ohta wrote:

 Joe Maimon wrote:
 
 pMTU has been broken in IPv4 since the early days.
 
 It is still broken. It is also broken in IPv6. It will likely
 still be broken for the forseeable future. This is
 
 Relying on ICMP exception messages was always wrong for normal network 
 operation.
 
 Agreed.
 
 The proper solution is to have a field in IPv7 header to
 measure PMTU. It can be a 8 bit field, if fragment granularity
 is 256B.
 
   Masataka Ohta

If you're going to redesign the header, I'd be much more interested in having 
32 bits for the destination ASN so that IDR can ignore IP prefixes altogether.

Owen




Re: IPv6 day and tunnels

2012-06-04 Thread Owen DeLong

On Jun 4, 2012, at 5:21 PM, Joe Maimon wrote:

 
 
 Jeroen Massar wrote:
 
 
 
 That indeed matches most of the corporate world quite well. That they
 are heavily misinformed does not make it the correct answer though.
 
 Either you are correct and they are all wrong, or they have a perspective 
 that you dont or wont see.
 

He is correct. I have seen their perspective and it is, in fact, misinformed 
and based largely on superstition.

 Either way I dont see them changing their mind anytime soon.

Very likely true, unfortunately. Zealots are rarely persuaded by facts, 
science, or anything based
in reality, choosing instead to maintain their bubble of belief even to the 
point of historically killing
those that could not accept their misguided viewpoint.

Nonetheless, over time, even humans eventually figured out that Galileo was 
right and the world is, indeed, round; does, in fact, orbit the sun (and not 
the other way around); and is not, in fact, at the center of the universe.

Given that we were able to overcome the Catholic Church with those facts 
eventually, I suspect that overcoming corporate IT mythology will be somewhat 
sooner and easier. It might even take less than 100 years instead of several 
hundred.

 So how about we both accept that they exist and start designing the network 
 to welcome rather than ostracize them, unless that is your intent.

I would rather educate them and let them experience the error of their ways 
until they learn than damage the network in the pursuit of inclusion in this 
case. If you reward bad behavior with adaptation and accommodation, you get 
more bad behavior. This was proven with the appeasement of Hitler in the 40s 
(hey, someone had to feed Godwin's law, right?) and has been confirmed with 
the recent corporate bail-outs, bank bail-outs, and the mortgage crisis.

One could even argue that the existing corporate attitudes about NAT are a 
reflection of this behavior being rewarded with ALGs and other code constructs 
aimed at accommodating that bad behavior.

 
 
 And the good thing is that if you can support jumbo frames, just turn it
 on and let pMTU do it's work. Happy 9000's ;)
 
 pMTU has been broken in IPv4 since the early days.
 

PMTUD is broken in IPv4 since the early days because it didn't exist in the 
early days. PMTUD is a relatively recent feature for IPv4.

PMTUD has been getting progressively less broken in IPv4 since it was 
introduced.

 It is still broken. It is also broken in IPv6. It will likely still be broken 
 for the foreseeable future. This is
 

PMTU-D itself is not broken in IPv6, but some networks do break PMTU-D. 

 a) a problem that should not be ignored

True. Ignoring ignorance is no better than accommodating it. The correct answer 
to ignorance is education.

 b) a failure in imagination when designing the protocol

Not really. In reality, it is a failure of implementers to follow the published 
standards. The protocol, as designed, works as expected if deployed in 
accordance with the specifications.

 c) a missed opportunity to correct a systemic issue with IPv4

There are many of those (the most glaring being the failure to address 
scalability of the routing system). However, since, as near as I can tell, 
PMTU-D was a new feature for IPv6 which was subsequently back-ported to IPv4, I 
am not sure that statement really applies in this case.

Many of the features we take for granted in IPv4 today were actually introduced 
as part of IPv6 development, including IPSEC, PMTU-D, CIDR notation for prefix 
length, and more.

 Or better said: mis-configuring systems break things.
 
 Why do switches auto-mdix these days?

Because it makes correct configuration easier. You can turn this off on most 
switches, in fact, and if you do, you can still misconfigure them.

Any device with a non-buggy IPv6 implementation, by default does not block 
ICMPv6 PTB messages.  If you subsequently deliberately misconfigure it to block 
them, then, you have taken deliberate action to misconfigure your network.

 Because insisting that things will work properly if you just configure them 
 correctly turns out to be inferior to designing a system that requires less 
 configuration to achieve the same goal.

Breaking PMTU-D in IPv6 requires configuration unless the implementation is 
buggy. Don't get me started on how bad a buggy Auto MDI/X implementation can 
make your life. Believe me, it is far worse than PMTU-D blocking.

 Automate.

Already done. Most PMTU-D blocks in IPv6 are the result of operators taking 
deliberate configuration action to block packets that should not be blocked. 
That is equivalent to turning off Auto-MDI/X or Autonegotiation on the port.


 This whole thread is all about how IPv6 has not improved any of the
 issues that are well known with IPv4 and in many cases makes them worse.
 
 You cannot unteach stupid people to do stupid things.
 

I disagree. People can be educated. It may take more effort 

Re: IPv6 day and tunnels

2012-06-04 Thread Jeroen Massar
On 2012-06-04 17:57, Owen DeLong wrote:
[..]
 If you're going to redesign the header, I'd be much more interested
 in having 32 bits for the destination ASN so that IDR can ignore IP
 prefixes altogether.

One can already do that: route your IPv6 over IPv4; IPv4 has 32-bit
destination addresses, remember? :)

It is also why it is fun if somebody uses a 32-bit ASN to route IPv4, as
one is not making the problem smaller that way. ASNs are used more as
identifiers to avoid routing loops than as actual routing parameters.

Greets,
 Jeroen




Re: IPv6 day and tunnels

2012-06-04 Thread Jimmy Hess
On 6/4/12, Owen DeLong o...@delong.com wrote:
[snip]
 Probing as you have proposed requires you to essentially do a binary search
 to arrive
 at some number n where 1280≤n≤9000, so, you end up doing something like
 this:
[snip]
 So, you waited for 13 timeouts before you actually passed useful
 traffic? Or, perhaps you putter along at the lowest possible MTU until you
[snip]
Instead of waiting for 13 timeouts, start with 4 initial probes in
parallel, and react rapidly to the responses you receive; say
9000, 2200, 1500, 830.

Don't wait for any timeouts until the possible MTUs are narrowed.


FindLocalMTU(B, T)
   Let B := Minimum_MTU
   Let T := Maximum_MTU
   Let D := Max(1, Floor( ( (T - 1) - (B + 1) ) / 4 ))
   Let R := T
   Let Attempted_Probes := []

   While ( ( (B + D) < T ) or Attempted_Probes is not Empty ) do
      If R is not a member of Attempted_Probes or Retries < 1 then
         AsynchronouslySendProbeOfSize(R)
         Append (R, Retries) to Attempted_Probes if not present,
         or if (R, Retries) is already in the list, increment Retries.
      else
         T := R - 1
         Delete R from Attempted_Probes
      end

      if ( (B + D) < T )   AsynchronouslySendProbeOfSize(B + D)
      if ( (B + 2*D) < T ) AsynchronouslySendProbeOfSize(B + 2*D)
      if ( (B + 3*D) < T ) AsynchronouslySendProbeOfSize(B + 3*D)
      if ( (B + 4*D) < T ) AsynchronouslySendProbeOfSize(B + 4*D)

      Wait_For_Next_Probe_Response_To_Arrive()
      Wait_For_Additional_Probe_Response_Or_Short_Subsecond_Delay()
      Add_Probe_Responses_To_Queue(Q)

      R := Get_Largest_Received_Probe_Size(Q)
      If ( R < T ) then
         T := R
      end

      If ( R > B ) then
         B := R
         D := Max(1, Floor( ( (T - 1) - (B + 1) ) / 4 ))
      end
   done

   Result := B


#

If you receive the response at n=830 first, then wait 1ms and send the
next 4 probes: 997, 1164, 1331, 1498, and resend the n=1500 probe.
If 1280 is what the probe needs to detect, you'll receive a
response for 1164, so wait 1ms, then retry n=1498;
the next 4 probes are 1247, 1330, 1413, 1496.
If 1280 is what the probe needs to detect, you'll receive a
response for 1247, so wait 1ms and resend n=1496;
the next 4 probes are 1267, 1307, 1327, 1347.
If 1280 is what you need to detect, you'll receive a
response for 1267, so retry n=1347 and wait 1ms;
the next 4 probes are: 1276, 1285, 1294, 1303;
the next 4 probes are: 1277, 1278, 1279, 1280;
the next 2 parallel probes are: 1281, 1282.

You hit after 22 probes, but you only needed to wait for n=1281 and n=1282
and their retry to time out.


--
-JH