RE: www.RT.com bad dns record

2016-07-08 Thread Tony Hain
Matt Palmer wrote:
> On Thu, Jul 07, 2016 at 06:36:23PM -0700, Ca By wrote:
> > On Thursday, July 7, 2016, Spencer Ryan  wrote:
> >
> > > Dotted-quad notation is completely valid, and works fine.
> > >
> > > https://en.wikipedia.org/wiki/IPv6_address#Presentation
> > >
> > > http://[::ffff:37.48.108.112] loads fine in my browsers.
> >
> > It may be legit on your network, but people generally don't do
> > that. If they publish a AAAA record, it usually has a legit v6
> > address in it.
> 
> That is a legit IPv6 address.  That it won't work on a host that is
> IPv6-only is a
> different issue, and one I agree is probably an unexpected and unwanted
> side effect.

This doesn't sound like a host issue, but a broken dns64 implementation. If
it checked the content of the AAAA response for a ::ffff:... answer and
treated that as an A-only response, the host would never be involved.
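
For illustration, a minimal sketch (Python, hypothetical; not any particular
DNS64 implementation) of the check described above -- inspect the AAAA
answers and fall back to synthesis when they are only IPv4-mapped:

    import ipaddress

    def aaaa_is_usable(answers):
        """Return False if every AAAA answer is an IPv4-mapped address
        (::ffff:a.b.c.d); the resolver should then treat the name as
        A-only and synthesize a AAAA from the A record instead."""
        return any(ipaddress.IPv6Address(a).ipv4_mapped is None
                   for a in answers)

    print(aaaa_is_usable(["::ffff:37.48.108.112"]))  # False -> synthesize
    print(aaaa_is_usable(["2001:db8::1"]))           # True  -> use as-is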

Tony





RE: Netflix banning HE tunnels

2016-06-08 Thread Tony Hain
Matthew, 

I was not complaining about the business model, or the need to comply with 
content provider requirements. The issue is the pathetic implementation choice 
that Netflix made when a trivial alternative was available. I agree that 
setting up rwhois and trusting the 3rd party tunnel providers to provide valid 
information is substantially more effort than the ROI on this would justify, 
but a redirect to IPv4-only requires no more 3rd-party trust for geo-loc than 
an IPv4 connection does to begin with, would still catch the bad actors, yet 
works correctly for those trying to move the Internet forward. 
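
As a rough sketch of that alternative (Python/WSGI, purely illustrative; the
hostname and the HE prefix check are assumptions, not Netflix's actual
implementation), a front end could bounce IPv6 clients it does not trust for
geo-location to a name that publishes only A records:

    import ipaddress

    HE_TUNNELS = ipaddress.ip_network("2001:470::/32")  # HE tunnel space

    def app(environ, start_response):
        client = ipaddress.ip_address(environ["REMOTE_ADDR"])
        if client.version == 6 and client in HE_TUNNELS:
            # Hypothetical IPv4-only name; the retry arrives over IPv4 and
            # the existing IPv4 geo-location logic applies unchanged.
            start_response("302 Found",
                           [("Location", "https://ipv4only.example.net/")])
            return [b""]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"stream as usual\n"]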

Tony


> -Original Message-
> From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Matthew
> Huff
> Sent: Wednesday, June 08, 2016 12:45 PM
> To: Laszlo Hanyecz; nanog@nanog.org
> Subject: RE: Netflix banning HE tunnels
> 
> The content providers wouldn't care if it was a very small number of people
> evading their region restrictions, but it isn't a small number. Those avoiding
> it are already not in good faith. While I don't agree with the content
> providers business model, it's their content, their rules.
> 
> If you don't think it's right that Netflix is blocking VPNs and tunnels, then
> switch to Hulu and/or Amazon, however it's just matter of time before they
> start blocking VPNs and tunnels themselves.
> 
> I agree that matching Geolocation with source IP addresses is a bad idea, but
> until someone comes up with a better idea and gets it implemented ( one
> that can't be modified by the end user), people with a business model that
> depends on it will continue to block based on IP. "Good faith" will be
> laughed at, and rightly so.
> 
> 
> 
> 
> Matthew Huff | 1 Manhattanville Rd
> Director of Operations   | Purchase, NY 10577
> OTA Management LLC   | Phone: 914-460-4039
> aim: matthewbhuff| Fax:   914-694-5669
> 
> 
> > -Original Message-
> > From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Laszlo
> > Hanyecz
> > Sent: Wednesday, June 8, 2016 3:34 PM
> > To: nanog@nanog.org
> > Subject: Re: Netflix banning HE tunnels
> >
> >
> >
> > On 2016-06-08 18:57, Javier J wrote:
> > > Tony, I agree 100% with you. Unfortunately I need ipv6 on my media
> > > subnet because it's part of my lab. And now that my teenage daughter is
> > > complaining about Netflix not working on her Chromebook I'm starting to
> > > think consumers should just start complaining to Netflix. Why should I
> > > have to change my damn network to fix Netflix?
> > >
> > > In her eyes it's "daddy fix Netflix" but the heck with that. The man
> > > hours of the consumers who are affected to work around this issue is
> > > less than the man hours it would take for Netflix to redirect you with
> > > a 301 to an ipv4 only endpoint.
> > >
> > > If Netflix needs help with this point me in the right direction. I'll
> > > be happy to fix it for them and send them a bill.
> > >
> >
> > They're doing the same thing with IPv4 (banning people based on the
> > apparent IP address).  Your IPv4 numbers may not be on their blacklist
> > at the moment, and disabling IPv6 might work for you, but the
> > underlying problem is the practice of GeoIP/VPN blocking, and the
> > HE.net tunnels are just one example of the collateral damage.
> >
> > I don't know why Netflix and other GeoIP users can't just ask
> > customers where they are located, instead of telling them.  It is
> > possible that some user might lie, but what about "assume good faith"?
> > It shows how much they value you as a customer if they would rather
> > dump you than trust you to tell them where you are located.
> >
> > -Laszlo
> >




RE: Netflix banning HE tunnels

2016-06-08 Thread Tony Hain
Ca By wrote:
> On Tuesday, June 7, 2016, chris  wrote:
> 
> > it really feels alot like what net neutrality was supposed to avoid.
> > making a policy where there is different treatment of one set of bits
> > over another
> >
> > "your ipv6 bits are bad but if you turn it off the ipv4 bits are just fine"
> >
> > someone mentioned the fact that netflix is not just a content company
> > but also acting as a network operator maybe the two should be separate
> >
> > i also find it ironic that they arent big fans of ISPs who use NAT or
> > CGN and dont have 1 customer per IP yet they're stifling ipv6 and
> > telling users to turn it off. you really cant have it both ways and
> > complain about NAT and also say you recommend shutting off ipv6 :)
> >
> > hopefully they will realize imposing their own policy on how customers
> > use their networks and the internet  this isnt worth losing customers
> > over
> >
> > chris
> >
> >
> 
> Again. An HE tunnel is not production ipv6. It is a toy.

Well, "service that works" from an OTT provider vs. "useless crap that is 
unsupported" from the L2 provider would beg to differ about the definition of 
toy. While there has been substantial effort by the participants on this list 
to get IPv6 deployed across their national network, the local support team from 
my ISP continues to give me the "IPv6 is not supported" crap response when I 
complain that all I am getting for a business class connection is a /64, and I 
need a /48. 

> 
> Telling people to turn of HE tunnel is NOT the same as turning off
> production ipv6.

Rather than telling people to turn off IPv6, Netflix should have just 
redirected to an IPv4-only name and let that geo-loc deal with it. If the 
account was trying to use a vpn to bypass geo-loc, it would still fail, but 
those trying to bypass lethargic ISP deployment/support of IPv6 would not 
notice unless they looked. Given that they are likely watching the Netflix 
content at the time, they would be very unlikely to notice the packet headers 
so this would never have become an issue. 

Fortunately in my case since I view Netflix through Chromecasts, I can turn off 
IPv6 on the media subnet and not impact the rest of my IPv6 use. I shouldn't 
have to do that, but the ability to isolate traffic is one reason people on  
this list need to get over the historic perception that a customer network is a 
single flat subnet. Allocating space on that assumption simply perpetuates the 
problems that come along with it. There is no technical reason to allocate 
anything longer than a /48, but for those that insist on doing so, please, 
please, please, don't go longer than a /56. Even a phone is a router that 
happens to have a voice app built in, so mobile providers need to stop the 
assumption that "it only needs a single subnet". 

Tony


> 
> CB
> 
> 
> > On Tue, Jun 7, 2016 at 6:35 PM, Elvis Daniel Velea  > > wrote:
> >
> > > apparently, all they see is 3 people complaining on this mailing list..
> > > well, this makes it 4 with me (and I have a bunch of people in
> > > various countries complaining on facebook that they have been banned
> > > from using netflix because they use an HE tunnel.
> > >
> > > their answer - TURN IPV6 OFF!!! you're a techie so if you know how
> > > to setup a tunnel, you must know how to redirect netflix to use IPv4
> only...
> > > really?
> > > the answer just pisses me off!
> > >
> > > Netflix, YOU are the ones forcing people to turn IPv6 off... this is
> > > just insane. tens (if not hundred) of thousands of people chose to
> > > use HE tunnels because their ISP does not offer IPv6..
> > > do you really expect all of them to turn it off? do you really want
> > > IPv6 usage in the world to go down by a few percent because you are
> > > unable to figure out how to serve content?
> > >
> > > I know nobody at Netflix will even answer to the e-mails on this list..
> > > but I hope that they will at least acknowledge the problem and
> > > figure an other way to block content by country.
> > > ie: they could try to talk to HE to register each tunnel in a
> > > database that points to the country of the user..
> > >
> > > cheers,
> > > elvis
> > >
> > >
> > > On 6/8/16 1:01 AM, chris wrote:
> > >
> > >> I am also in the same boat with a whole subnet affected even
> > >> without a tunnel, tried multiple netflix support channels starting
> > >> in early march and the ranges is still blocked 3 months later.
> > >>
> > >> I was a big fan of the service and somewhat of an addict up till
> > >> this
> > but
> > >> I've really been shocked how this has been (mis)handled
> > >>
> > >> chris
> > >>
> > >> On Tue, Jun 7, 2016 at 7:23 AM, Davide Davini  > >
> > >> wrote:
> > >>
> > >> Today I discovered Netflix flagged my IPv6 IP block as "proxy/VPN"
> > >> and I
> > >>> can't use it if I don't disable the HE tunnel, which is the only
> > >>> way
> > for
> > >>> me 

RE: Binge On! - And So This is Net Neutrality?

2015-11-26 Thread Tony Hain
Keenan Tims wrote:
> To: nanog@nanog.org
> Subject: Re: Binge On! - And So This is Net Neutrality?
> 
> I'm surprised you're supporting T-Mob here Owen. To me it's pretty
> clear: they are charging more for bits that are not streaming video.
> That's not neutral treatment from a policy perspective, and has no basis in
> the cost of operating the network.

I have no visibility into what the line
"T‐Mobile will work with content providers to ensure that our networks work 
together to properly"
actually means, but they could/should be using this as a tool to drive content 
sources to IPv6. 

Trying to explain to consumers why an unlimited data plan only works for a tiny 
subset of content is a waste of energy. Picking a category and "encouraging" 
that content to move, then after the time limit, pick the next category, 
rinse/repeat, is a way to move traffic away from the 6/4 nat infrastructure 
without having to make a big deal about the IP version to the consumer, and at 
the same time remove "it costs too much" complaints from the sources. If I were 
implementing such a plan, I would walk the list of traffic sources based on 
volume to move traffic as quickly as possible, so it makes perfect sense to me 
that they would start with video.

Tony


> 
> Granted, the network itself is neutral, but the purported purpose of NN in
> my eyes is twofold: take away the influence of the network on user and
> operator behaviour, and encourage an open market in network services
> (both content and access). Allowing zero-rating based on *any* criteria
> gives them a strong influence over what the end users are going to do with
> their network connection, and distorts the market for network services.
> What makes streaming video special to merit zero-rating?
> 
> I like Clay's connection to the boiling frog. Yes, it's "nice" for most
> consumers now, but it's still distorting the market.
> 
> I'm also not seeing why they have to make this so complicated. If they can
> afford to zero-rate high-bandwidth services like video and audio streaming,
> clearly there is network capacity to spare. The user behaviour they're
> encouraging with free video streaming is *precisely* what the incumbents
> claimed was causing congestion to merit throttling a few years ago, and still
> to this day whine about constantly. I don't have data, but I would expect
> usage of this to align quite nicely with their current peaks.
> 
> Why not just raise the caps to something reasonable or make it unlimited
> across the board? I could even get behind zero-rating all 'off-peak-hours'
> use like we used to have for mobile voice; at least that makes sense for the
> network. What they're doing though is product differentiation where none
> exists; in fact the zero-rating is likely to cause more load on the system
> than just doubling or tripling the users' caps. That there seems to be
> little obvious justification for it from a network perspective makes me
> very wary.
> 
> Keenan
> 
> On 2015-11-23 18:05, Owen DeLong wrote:
> >
> >> On Nov 23, 2015, at 17:28 , Baldur Norddahl
>  wrote:
> >>
> >> On 24 November 2015 at 00:22, Owen DeLong 
> wrote:
> >>
> >>> Are there a significant number (ANY?) streaming video providers
> >>> using UDP to deliver their streams?
> >>>
> >>
> >> What else could we have that is UDP based? Ah voice calls. Video calls.
> >> Stuff that requires low latency and where TCP retransmit of stale
> >> data is bad. Media without buffering because it is real time.
> >>
> >> And why would a telco want to zero rate all the bandwidth heavy media
> >> with certain exceptions? Like not zero rating media that happens to
> >> compete with some of their own services, such as voice calls and video
> calls.
> >>
> >> Yes sounds like net neutrality to me too (or not!).
> >>
> >> Regards,
> >>
> >> Baldur
> >
> > All T-Mobile plans include unlimited 128kbps data, so a voice call is
> > effectively already zero-rated for all practical purposes.
> >
> > I guess the question is: Is it better for the consumer to pay for
> > everything equally, or, is it reasonable for carriers to be able to
> > give away some free data without opening it up to everything?
> >
> > To me, net neutrality isn’t as much about what you charge the customer
> > for the data, it’s about whether you prioritize certain classes of
> > traffic to the detriment of others in terms of service delivery.
> >
> > If T-Mobile were taking money from the video streaming services or
> > only accepting certain video streaming services, I’d likely agree with
> > you that this is a neutrality issue.
> >
> > However, in this case, it appears to me that they aren’t trying to
> > give an advantage to any particular competing streaming video service
> > over the other, they aren’t taking money from participants in the program,
> and consumers stand to benefit from it.
> >
> > If you see an actual way in which it’s better for everyone if 

RE: Extraneous "legal" babble--and my reaction to it.

2015-09-09 Thread Tony Hain
Dovid Bender wrote:
> I would. Once I see legal stuff I know to stop reading. It does not hurt
> anyone. Not sure why this hurts so much. Some things will remain a
> mystery.
> 

No mystery ... It wastes bits that could otherwise be used to watch cat videos. 
 ;)

Tony




RE: Remember Internet-In-A-Box?

2015-07-17 Thread Tony Hain
Ricky Beam wrote:
 On Wed, 15 Jul 2015 22:32:19 -0400, Mark Andrews ma...@isc.org wrote:
  You can blame the religious zealots that insisted that everything DHCP
  does has to also be done via RA's.
 
 I blame the anti-DHCP crowd for a lot of things. RAs are just dumb.
 There's a reason IPv4 can do *everything* through DHCP -- hell, even boot
 menu lists are sent in dhcp packets.

The reason is that DHC was the longest lived working group in IETF history.
It took over 15 years of changes to get what you consider a working
implementation. At the point the IPv6 RA was specified, it was very
difficult for people to get addressing and routers consistently configured
via dhcp, let alone everything else that was added. 

 
  The XP box is in an even worse situation if you try to run it on a
  v6-only network.
 
  Which is fixable with a third party DHCPv6 client / manual
  configuration of the nameservers.
 
 Just like no IP stack was fixable in the 80's. No. Just, No. There are
 millions upon millions of internet users I wouldn't trust to double click
 setup.exe.
 
  None of which is the fault of the protocol.
 
 Actually, it's 100% the fault of the protocol. IPv6-only networking has
 been a cluster-f*** from day one. And it still doesn't f'ing work today.
 Until there is *A* standard to implement, that stands still for more than
 an hour before something else critical gets bolted on to it, people are
 going to continue to ignore IPv6.

So if you want to wait for a stable specification, why did you ever
implement IPv4? Here we are 35+ years later and there are still changes to
the base IPv4 header in the works.
http://tools.ietf.org/rfcmarkup?doc=draft-dreibholz-ipv4-flowlabel  How
could anyone ever implement a target that has continued to move for that
long a period? With over 5,000 documents describing the continuous changes
to IPv4, there is obviously A standard to implement in there somewhere.

Clearly some people have figured out how to deploy IPv6, but if you want to
wait, that is your choice. 

 
 Yes, my XP machines work fine with IPv6... on a network using SLAAC,
 where
 IPv4 (DHCPv4) is still enabled and providing the various bits necessary to
 do anything other than ping my gateway.

The XP implementation was never expected to last as long as it did. The
delay in shipping the Vista/W7 stack resulted in quite a bit of
functionality being late. The entire point of the XP implementation was to
put a working API in the hands of app developers. It was never intended to
be used in IPv6-only networks 15 years after its release. 

Tony




RE: Speaking of NTP...

2015-07-16 Thread Tony Hain
I have had a consistent 10ms offset on a set of servers for the last 5 years. 
After extensive one-way tracing, it turns out there is a 20ms asymmetry 
within the Seattle Westin colo between HE and Comcast, causing all the IPv6 
peers appearing over the HE tunnel to be 10ms offset from everything else. 
There may be other instances of indirect peering causing a static asymmetric 
path delay, and NTP will report that as an offset of half of the difference. 
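
That half-the-asymmetry result falls straight out of NTP's offset formula; a
quick worked example (Python, made-up timestamps purely for illustration):

    def ntp_offset(t1, t2, t3, t4):
        """Standard NTP clock offset: ((T2 - T1) + (T3 - T4)) / 2."""
        return ((t2 - t1) + (t3 - t4)) / 2.0

    # Clocks actually agree, but the outbound path is 20 ms longer than
    # the return path (values in seconds).
    t1 = 0.000           # client sends
    t2 = t1 + 0.030      # server receives, 30 ms out
    t3 = t2 + 0.001      # server replies
    t4 = t3 + 0.010      # client receives, 10 ms back
    print(ntp_offset(t1, t2, t3, t4) * 1000, "ms")  # ~10 ms apparent offset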

Tony


 -Original Message-
 From: NANOG [mailto:nanog-bounces+alh-ietf=tndh@nanog.org] On
 Behalf Of Rafael Possamai
 Sent: Thursday, July 16, 2015 8:53 AM
 To: Matthew Huff
 Cc: nanog@nanog.org
 Subject: Re: Speaking of NTP...
 
 Depending on how exactly you have these servers configured with relation
 to one another, small variations from one single source can be augmented
 down the line.
 
 https://en.wikipedia.org/wiki/Propagation_of_uncertainty
 
 
 
 On Mon, Jul 13, 2015 at 8:17 AM, Matthew Huff mh...@ox.com wrote:
 
  We have 5 NTP server:  2 x stratum 1 rubidium oscillator time servers
  with GPS sync, and 3 servers running NTP 4.2.6p5-3 synced to external
  internet based NTP stratum 1 servers. We monitor our NTP environment
  closely, and over the last 10+ years, normally all of our NTP servers
  are sync'ed within
  +/- 2 msec. Starting last Friday, we started seeing some remote NTP
  +servers
  with GPS reference consistently offset by 10 msec.
 
  Any one else seeing this?
 
  
  Matthew Huff | 1 Manhattanville Rd
  Director of Operations   | Purchase, NY 10577
  OTA Management LLC   | Phone: 914-460-4039
  aim: matthewbhuff| Fax:   914-694-5669
 
 



RE: Dual stack IPv6 for IPv4 depletion

2015-07-15 Thread Tony Hain
Joe Maimon wrote:
 Jared Mauch wrote:
 
 
  This isn’t really a giant set of naysayers IMHO, but there is enough
 embedded logic in devices that it doesn’t make that much sense.
 
 Enough to scuttle all previous drafts.
 
  linux
 
 a little google comes up with this
 
 http://www.gossamer-threads.com/lists/linux/kernel/866043
 
 It defies reason to compare that kind of update to ipv6.
 
  various *bsd flavors
 
  That effort would need to have everyone moving in the same direction
 now which seems unlikely.
 
  - Jared
 
 All I ever wanted to see was that the (minimal) effort was made possible.
 No guarantee of its success should be required for that. Even now.
 
 Because by doing so, you guarantee failure.


Joe, 

It appears you are asking for the world to sanction your local efforts. There 
is nothing stopping you from deploying and using that space if you can. Asking 
for a change in the standards status though will only lead to confusion and 
anguish. If it had been changed to unicast 15 years ago, or were changed now, 
people would expect to be able to use it as they use the rest of the space. 
Those with access to source for all their devices could accomplish that, but 
everyone else would have to beat on vendors and wait an indeterminate time to 
get usable code, and that still would not fix rom based devices. On the other 
hand people with source don't need any standards change, they can just turn it 
on. 

If you want the additional effort to manage a global distribution of the space 
so it is not just an extension of 1918, then you have to acknowledge that it 
would only last a few weeks at best. While ARIN managed to change policy and 
slow things down, when APNIC flamed out they burned through 6 /8's in 8 weeks 
and were accelerating, while RIPE was burning through one every 3 months, and 
LACNIC was accelerating through their last one over 4 months. So ignoring 
pent-up demand, since they have all been out for a while now, and assuming 
that the space was generically usable, you get 8 weeks tops. Recognizing that 
they are not generically usable, though, it will likely take quite a bit 
longer than that. 

This is not being a naysayer, it is simply presenting issues that have been 
raised and considered many times over the last 15 years. There is a lot of work 
to make that space usable, and as you pointed out above the smallest part of 
that is the code change. In the context of the amount of work required in 
relation to the few weeks of gain that would result, it has always been 
difficult to establish much interest. At the end of the day it is not that much 
more work to fix all the devices to run IPv6. At that point you have no 
limitations, while 240/4 still leads to the place where the IPv4 pool is 
exhausted. 

Tony





RE: Dual stack IPv6 for IPv4 depletion

2015-07-15 Thread Tony Hain
George Metz wrote:
   snip
   Split the difference, go with a /52
 
 
  That's not splitting the difference. :)  A /56 is half way between a
  /48 and a /64. That's 256 /64s, for those keeping score at home.
 
 
 It's splitting the difference between a /56 and a /48. I can't imagine short
 of the Nanotech Revolution that anyone really needs eight thousand separate
 networks, and even then... Besides, I recall someone at some point being
 grumpy about oddly numbered masks, and a /51 is probably going to trip
 that. :)
 
 I think folks are missing the point in part of the conservationists, and all
 the math in the world isn't going to change that. While the... let's call
 them IPv6 Libertines... are arguing that there's no mathematically
 foreseeable way
 we're going to run out of addresses even at /48s for the proverbial soda
 cans, the conservationists are going, Yes, you do math wonderfully.
 Meantime is it REALLY causing anguish for someone to only get
 256 (or 1024, or 4096) networks as opposed to 65,536 of them? If not, why
 not go with the smaller one? It bulletproofs us against the unforeseen to an
 extent.

You are looking at this from the perspective of a network manager, and not 
considering the implications of implementing plug-n-play for consumers. A 
network manager can construct a very efficient topology with a small number of 
bits, but automation has to make gross waste trade-offs to just work when a 
consumer plugs things together without understanding the technology 
constraints. 

Essentially the conservationist argument is demanding waste, because the 
unallocated prefixes will still be sitting on the shelf in 400 years. It would 
be better to allocate them now and allow innovation at the cpe level, rather 
than make it too costly for cpe vendors to work around all the random 
allocation sizes in addition to the random ways people plug the devices 
together. 

 
 As an aside, someone else has stated that for one reason or another IPv6 is
 unlikely to last more than a couple of decades, and so even if something
 crazy happened to deplete it, the replacement would be in place anyhow
 before it could. I would like to ask what about the last 20 years of IPv6
 adoption in the face of v4 exhaustion inspires someone to believe that just
 because it's better that people will be willing to make the change over?

TDM voice providers had 100 years of history on their side, but voip won, 
because cheaper always wins. 






RE: Overlay broad patent on IPv6?

2015-07-14 Thread Tony Hain
There is prior art here, and likely patents held by HP
http://tools.ietf.org/html/draft-bound-dstm-exp-04


 -Original Message-
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Baldur
 Norddahl
 Sent: Monday, July 13, 2015 10:10 AM
 To: nanog@nanog.org
 Subject: Fwd: Overlay broad patent on IPv6?
 
 Nah what you describe is a different invention. Someone probably already
 has a patent on that.
 
 The browser will do a DNS lookup on slashdot.org and then cache that -
 forever (or until you restart the browser). Yes it will ignore the TTL (apps
 don't get the TTL at all, so apps don't know). Same happens if you ssh to
 yourserver.someplace.com. One DNS lookup, the traffic sticks there forever
 or until the session is terminated. DNS is horrible for this.
 
 If they had a IPv4 internal private network going you would not need to
 hook unto the DNS at all. Just get IP address when something wants to be
 routed out the WAN port. Also the NAT table is a good indicator of when
 you can release the address again.
 
 On other words, that would work, but the system described in the patent
 app wont.
 
 Of course both systems are useless. I can not imagine any end user that
 wont have a ton of IPv4 going on for the next decade to come. And when
 time comes, we are more likely to NAT64 than this.
 
 Regards,
 
 Baldur
 
 
 
 
 
 On 13 July 2015 at 18:04, Blake Dunlap iki...@gmail.com wrote:
 
  The point is you'd already have a 192 address or something, and it
  would only grab the external address for a short duration for use as
  an external PAT address, thus oversubscribing the ip4 pool to users
  who need it (based on dns). Its still pretty broken, but less broken
  than you describe.
 
  On Mon, Jul 13, 2015 at 8:55 AM,  a.l.m.bu...@lboro.ac.uk wrote:
   Hi,
   This is actually a good idea. Roll out an IPV6 only network and
   only
  pass
   out an IPV4 address if it's needed based on actual traffic.
  
   yes...shame someones applied for a patent on that! ;-)
  
   alan
 



RE: ARIN IPV4 Countdown

2015-07-14 Thread Tony Hain
Owen DeLong wrote:
 I vote for a /24 lotto to get rid of the rest!

That would take too long to get organized. Just suspend fees and policy
requirements and give one to each of the first 400 requestors. Overall it
would reduce costs related to evaluating need, so the lack of fee income
would not be a major loss. 

 
 (just kidding)

I am not ... It is long past time to move on, so getting rid of the
distraction might help with those still holding out hope.

Tony

 
 Owen
 
  On Jul 14, 2015, at 04:37 , Scott, Robert D. rob...@ufl.edu wrote:
 
  If you have been keeping an eye on the ARIN IPV4 countdown, they
 allocated their last /23 yesterday. There are only 400 /24s in the pool
now.
 
  https://www.arin.net/resources/request/ipv4_countdown.html
 
  Robert D. Scottrob...@ufl.edu
  Network Engineer 3 352-273-0113 Phone
  UF Information Technology  321-663-0421 Cell
  Network Services   352-273-0743 FAX
  University of Florida
  Florida Lambda Rail352-294-3571 FLR NOC
  Gainesville, FL  32611 3216630...@messaging.sprintpcs.com
 
 



RE: ARIN IPV4 Countdown

2015-07-14 Thread Tony Hain
Randy Bush wrote:
  I am not ... It is long past time to move on, so getting rid of the
  distraction might help with those still holding out hope.
 
 i think that is unfair to the ipv6 fanboys (and girls).  ipv6 use is
increasing
 slowly.  i bet it hits 10% by the time we retire.

Are you planning to retire this year? Select a logistic curve for 1800 days
forward at:

https://www.vyncke.org/ipv6status/project.php

While the base curve it runs on is running ahead of the measured traffic
curve, the measure of IPv6 enabled browsers is a reasonable indicator for
what is happening.
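
For readers who want to see the shape of such a projection, a minimal
logistic-curve sketch (Python; the parameters are arbitrary and purely
illustrative of the S-curve type, not the model behind the site above):

    import math

    def logistic(day, ceiling=100.0, midpoint=900, rate=0.003):
        """Adoption (%) on a given day for a logistic (S-shaped) curve."""
        return ceiling / (1.0 + math.exp(-rate * (day - midpoint)))

    for day in (0, 600, 1200, 1800):   # 1800 days is roughly 5 years forward
        print(day, round(logistic(day), 1), "%")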

Tony




RE: Dual stack IPv6 for IPv4 depletion

2015-07-14 Thread Tony Hain
Mel Beckman wrote:
 Owen,
 
 By the same token, who 30 years ago would have said there was anything
 wrong with giving single companies very liberal /8 allocations? 

Actually 30 years ago it was very difficult to get a /8 even for a US Gov
organization. I have firsthand experience with being refused. As much as
people on this list like to paint a fantasy about 'the liberal policies of
the good-old-days', it was not as wild and loose as it is often made out to
be. 40 years ago it was easier to get a /8 than it was 30 years ago, but
there were still restrictions. At the end of the day, your impact on the
routing system determined which bucket you were put in, because the global
routing table was the scarce resource that needed management.


 Companies
 that for the most part wasted that space, leading to a faster exhaustion
of
 IPv4 addresses. History cuts both ways.

Call it waste if you want, but it is more likely that it was just allocation
a decade ahead of need, and that need would likely not have developed if the
global routing system collapsed due to too many /16's being allocated before
routers could handle that. 

 
 I think it's reasonable to be at least somewhat judicious with our
spanking
 new IPv6 pool. That's not IPv4-think. That's just reasonable caution.

Reasonable caution was only allocating 1/8th of the space up front, and
recommending that end sites be limited to a /48 without justifying more (rfc
3177). IPv4-think is refusing to acknowledge the math, and insisting that
just because the average consumer has been limited to a single subnet for
the last 15 years, that it was all they will ever need. Rewind the clock 16
years and you found that the restriction was a single mac-address, because
'nobody needs anything more than a single computer'. 

CPE developers have to manage their costs, and they will build to the limits
of what is available across the majority of providers. When that is
artificially restricted by unnecessary  IPv4-think conservation, you will
build a deployed base that has limited capability. Just as it is still
taking time to remove the deployed base of IPv4-only cpe, getting rid of
limitations will be slow, difficult, and costly. Fast forward 30 years, and
the network managers of the day will be asking why the clowns who insisted
on such an artificially restricted allocation model could be so short
sighted because they will not have been tainted by or understand
IPv4-think. 

IPv6 is not the last protocol known to mankind. IF it burns out in 400-500
years, something will have gone terribly wrong, because newer ideas about
networking will have been squashed along the way. 64 bits for both hosts and
routing was over 3 orders of magnitude more than sufficient to meet the
design goals for the IPv4 replacement, but in the context of the dot-com
bubble there was a vast outcry from the ops community that it would be
insufficient for the needs of routing. So the entire 64 bits of the original
proposal was given to routing, and the IETF spent another year arguing about
how many bits more to add for hosts. Now, post bubble burst, we are left
with 32,768x the already more than sufficient number of routing prefixes,
but IPv4-think conservation believes we still need to be extremely
conservative about allocations. 
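
To put rough numbers on that "more than sufficient" point (straightforward
arithmetic, shown here in Python only for convenience):

    # End-site allocations available in just the first /3 (2000::/3)
    sites_in_first_third = 2 ** (48 - 3)    # /48s per /3: ~3.5e13
    subnets_per_site = 2 ** (64 - 48)       # /64s per /48: 65536

    print(f"{sites_in_first_third:e} /48 end sites in 2000::/3")
    print(f"{subnets_per_site} /64 subnets per /48")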

Tony

 
 We can always be more generous later.
 
  -mel beckman
 
  On Jul 14, 2015, at 10:04 AM, Owen DeLong o...@delong.com wrote:
 
  30 years ago, if you'd told anyone that EVERYONE would be using the
  internet 30 years ago, they would have looked at you like you were stark
 raving mad.
 
  If you asked anyone 30 years ago will 4 billion internet addresses be
  enough if everyone ends up using the internet?, they all would have
told
 you no way..
 
  I will again repeat. Let's try liberal allocations until we use up the
  first /3. I bet we don't finish that before we hit other scaling limits
of IPv6.
 
  If I'm wrong and we burn through the first /3 while I am still alive,
  I will happily help you get more restrictive policy for the remaining
  3/4 of the IPv6 address space while we continue to burn through the
 second /3 as the policy is developed.
 
  Owen
 
 
  On Jul 14, 2015, at 06:23 , George Metz george.m...@gmail.com wrote:
 
  That's all well and good Owen, and the math is compelling, but 30 years
 ago if you'd told anyone that we'd go through all four billion IPv4
addresses
 in anyone's lifetime, they'd have looked at you like you were stark raving
 mad. That's what's really got most of the people who want (dare I say more
 sane?) more restrictive allocations to be the default concerned; 30 years
ago
 the math for how long IPv4 would last would have been compelling as well,
 which is why we have the entire Class E block just unusable and large
blocks
 of IP address space that people were handed for no particular reason than
it
 sounded like a good idea at the time.
 
  It's always easier to be prudent from the get-go than it is to rein in
the
 insanity 

RE: How long will it take to completely get rid of IPv4 or will it happen at all?

2015-06-27 Thread Tony Hain
Bob Evans wrote:
 
 Our fundamental issue is that an IPv4 address has no real value as
networks
 still give them away, it's pennies in your pocket. Everything of use needs
to
 have a cost to motivate for change. Establishing that now won't create
 change it will first create greater conservation. There will be a cost
that will
 be reached before change takes place on a scale that matters.
 
 Networks set the false perception and customer expectation that address
 space is free and readily available. Networks with plenty, still land many
 customers today by handing over a class C to customer with less than 10
 servers and 5 people in an office.
 
 We have a greater supply for packets to travel than we do for addresses
 required to move packets. Do you know how many packets a single IP
 address can generate or utilize, if it was attached too The World's
Fastest
 Internet in someplace like Canadaland or Sweden on init7's Fiber7 ?  No
 matter how large the pipe the answer is always, all of it. It's address
space
 we should now place a price upon. Unlike, My Space's disappearance when
 Facebook arrived there is no quick jump to IPv6. There is no coordinated
 effort required that involves millions of people to change browser window
 content.
 
 But to answer your question...
 
 Everything that is handed over for free is perceived as having no value.
 Therefore, IPv4 has to cost much more than the cost to change to IPv6
today.
 While the IPv6 addresses are free, it is expensive to change.
 Businesses spend lots of money on a free lunches. It's going to take at
least
 the price of one good lunch per IP address per month to create the
 consideration for change. That's about $30 for 2 people in California.
 Offering a /48 of free IPv6 space to everyone on the planet didn't make it
 happen.
 
 There is no financial incentive to move to IPv6. In fact there is more
reason
 not to change than to change. The new gear cost $$$ (lots of it didn't
 work well and required exploration to learn that),  IT people need hours
to
 implement (schedules are full of day-to-day issues), networks keep growing
 with offerings that drop Internet costs and save everyone money, business
 as usual is productive on IPv4 (business doesn't have time for
distraction),
 many of us get distracted by something more immediate and interesting
 than buying a new wi-fi router for the home.
 
 What will come first ?
 A) the earths future core rotation changes altering the ionosphere in such
a
 way that we are all exposed to continuous x-rays that shorten our lifespan
  OR
 B) the last IPv4 computer running will be reconfigured to IPv6
 
 Thank You
 Bob Evans
 CTO
 

Rewind the clock 20 years s/ipv4/sna/  s/ipv6/ipv4/   and/or
rewind the clock 15 years s/ipv4/tdm/ s/ipv6/voip/ 
and your rant is exactly what was coming out of enterprises and carriers at
those times. The only thing more constant than change in this industry is
the intransigence of the luddites that believe they are the masters of the
universe and will refuse to move with the tide. Sometimes (like in the case
of IPv4) they can build a strong seawall that will hold the tide back for a
decade, but rest assured that the tide always wins. 

I have looked and can't find the references, but I distinctly remember
Businessweek or Fortune magazine covers in the late 90's with phrases to the
effect of 'SNA Forever' or 'SNA is for real business/IPv4 is an experimental
toy'. I have also been in meetings with carriers and been told "No end
customer will ever fill a DS-3. Those are inter-city exchange circuits, and
there isn't enough data in the world to fill one", having just told them we
were connecting CERN to Caltech. 

To the point of the original question, look to history for some indication.
While people in the late 90's were busy trying to figure out how to
translate web pages to SNA terminals, within ~ 5 years, the noise was gone.
I am sure you will still find pockets of legacy SNA in use, but nobody
cares. Then look at the education system. Once you retire-out the tenured
dinosaurs that are still teaching classfull IPv4, followed by a generation
of upstarts that never learned about those tiny 32-bit locators which could
only possibly identify 1% of the connected devices they are aware of, it
will die off. Until then, it will move to the backwaters where nobody cares.

When you ignore the costs of maintaining an ever crumbling foundation, and
just look at the cost of replacement, then you can mentally justify staying
in the past. If you are honest about the TCO, and include both the wizardry
created by the network masters and the difficult to quantify increased cost
of all the software that has to work around that, then a cost based analysis
is valid. Unfortunately there has been enough myopic focus on
network-specific costs on this list that a decade has been lost that could
have been used to update software and reduce the future  timeframe that IPv4
needs to be supported.


RE: Android (lack of) support for DHCPv6

2015-06-10 Thread Tony Hain


From: Lorenzo Colitti [mailto:lore...@colitti.com] 
Sent: Tuesday, June 09, 2015 11:47 PM
To: Tony Hain
Cc: Mikael Abrahamsson; Chris Adams; NANOG
Subject: Re: Android (lack of) support for DHCPv6

On Wed, Jun 10, 2015 at 3:38 PM, Tony Hain alh-i...@tndh.net wrote:

I claim 
that there is a platform bug, because there is never a reason to
ignore the WiFi RA. Use the other flag to set a preference if that is
needed, but ignoring the RA just breaks things in unexpected ways. LC did
a hand-wave that the ignore RA flag is needed for battery life, but
beyond that we appear to be stuck in a world where Clueless OEMs believe in
breaking one network when another might exist.

This is not how current Android works. Each network can run IPv4, IPv6 or both 
independently of any other network. If you can reproduce this on a device 
running current Android (preferably a Nexus device), please file a bug.
 
There is indeed an issue with OEMs dropping RAs when the screen is off. Because 
it is the OEM that provides the wifi firmware and not Android, it's not really 
fair to say it's an Android bug. FWIW, recent Nexus devices do not have that 
bug.


My Nexus tablet does not have a Cell interface, and T-Mobile has stopped 
releasing updates for my phone, so I can't test that. For the issue I saw in 
the past, there was no screen-off event. All I had to do was enable the IPv6 
APN, and given that I live on the edge of the service area the link would drop 
at some point shortly after. At that point the expected behavior is that IPv6 
would still work via wifi, but no. While it still has an address, and can talk 
to anything on the wire, it has no router because that was removed and the RA 
is being ignored. 

I agree the OEMs are likely the problem here, but the platform should not 
allow them to create an invalid network state. Doing so only ensures that they 
will pick the wrong options and break the network unnecessarily.

Tony




RE: Android (lack of) support for DHCPv6

2015-06-10 Thread Tony Hain
 -Original Message-
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Mikael
 Abrahamsson
 Sent: Tuesday, June 09, 2015 10:39 PM
 To: Chris Adams
 Cc: nanog@nanog.org
 Subject: Re: Android (lack of) support for DHCPv6
 
 On Tue, 9 Jun 2015, Chris Adams wrote:
 
  Android devices (Samsung and LG) upgraded to Lollipop, I no longer
  have functioning IPv6 on wifi.  They connect and get an address (with
  privacy extensions even), but do not install an IPv6 default route.
  They can talk to local IPv6 devices, but not the Internet.
 
 My Nexus4 with Android 5.1.1 works just fine with IPv6 on wifi.
 
 So talk to your handset manufacturer, they must have broke something.

I filed a platform bug on this back in the ICS timeframe, and it still
persists. As I recall, there are 2 flags provided by the OS related to RA
handling. Rather than using the one that sets a preference between the Cell
vs. Wifi interface, at least Samsung (possibly others) have chosen to use
the other flag that says to completely ignore the WiFi RA if an RA on the
Cell interface has ever occurred. This means devices that have no IPv6 on
their Cell interface will appear to work fine on WiFi. 

I claim that there is a platform bug, because there is never a reason to
ignore the WiFi RA. Use the other flag to set a preference if that is
needed, but ignoring the RA just breaks things in unexpected ways. LC did
a hand-wave that the ignore RA flag is needed for battery life, but
beyond that we appear to be stuck in a world where Clueless OEMs believe in
breaking one network when another might exist.

As a general comment about this thread; people need to treat the handset as
a ROUTER and get over it. Just do a PD and treat it like any other router.
Ignore routing protocol announcements from it when it is run by a
customer, but that is no different than any other CPE router. Most handsets
nowadays are more capable than most consumer CPE routers, so moving past the
'it is just a voice endpoint' mindset is appropriate.

Tony


 
 --
 Mikael Abrahamssonemail: swm...@swm.pp.se



RE: Android (lack of) support for DHCPv6

2015-06-10 Thread Tony Hain
Ray Soucy  wrote:
 
 Respectfully disagree on all points.
 
 The statement that Android would still not implement DHCPv6 NA, but it would
 implement DHCPv6 PD. is troubling because you're not even willing to
 entertain the idea for reasons that are rooted in idealism rather than
 pragmatism.

In Lorenzo's defense, I believe he is taking the long term pragmatic position, 
while you appear to be taking the short term idealistic position. 

For argument's sake... let's assume that a shiny new browser comes along that is 
designed to limit third party cross site correlation and tracking. It does this 
by using a different source address for every destination. This browser works 
fine on any network that allows N > 1, but is stuck in the myopic historical 
world of older browsers on networks where N=1. To the pragmatism point, would 
you rather have a device like that do N NA requests (creating N ND state 
entries), or have it do PD (creating 1 ND + 1 Routing entry)?
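
To make that concrete, here is a sketch of what such a browser could do with
a delegated prefix (Python, hypothetical; the prefix is a documentation
example and the derivation scheme is an assumption, not any real browser's):

    import hashlib, ipaddress, os

    DELEGATED = ipaddress.ip_network("2001:db8:abcd:1::/64")  # via DHCPv6 PD
    SECRET = os.urandom(16)   # per-boot secret so addresses are not linkable

    def source_for(destination: str) -> ipaddress.IPv6Address:
        """Derive a distinct source address per destination inside the /64."""
        digest = hashlib.sha256(SECRET + destination.encode()).digest()
        iid = int.from_bytes(digest[:8], "big")      # 64-bit interface ID
        return DELEGATED.network_address + iid

    print(source_for("slashdot.org"))
    print(source_for("example.com"))

The network sees one delegated prefix (one route, one ND entry for the next
hop) no matter how many source addresses the host invents behind it.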

Tony

 
 Very disappointing to see that this is the position of Google.
 
 
 On Wed, Jun 10, 2015 at 10:58 AM, Lorenzo Colitti lore...@colitti.com
 wrote:
 
  On Wed, Jun 10, 2015 at 10:06 PM, Ray Soucy r...@maine.edu wrote:
 
  Actually we do support DHCPv6-PD, but Android doesn't even support
  DHCPv6 let alone PD, so that's the discussion here, isn't it?
 
 
  It is possible to implement DHCPv6 without implementing stateful
  address assignment.
 
  If there were consensus that delegating a prefix of sufficient size
  via
  DHCPv6 PD of a sufficient size is an acceptable substitute for
  stateful
  IPv6 addressing in the environments that currently insist on stateful
  DHCPv6 addressing, then it would make sense to implement it. In that
  scenario, Android would still not implement DHCPv6 NA, but it would
  implement DHCPv6 PD.
 
  What needs to be gauged about that course of action is how much
  consensus would be achieved, whether network operators would actually
  use it (IPv6 has a long and distinguished history of people claiming
  I can't support
  IPv6 until I get feature X, feature X appearing, and people changing
  their claim to I can't support IPv6 until I get feature Y), and how
  much of this discussion would be put to bed.
 
  That course of action would seem most feasible if it were accompanied
  by an IETF document that explained the deployment model and clarified
  what sufficient size is.
 
 
  Universities see a constant stream of DMCA violation notices that
  need to be dealt with and not being able to associate a specific IPv6
  address to a specific user is a big enough liability that the only
  option is to not use IPv6.
 
 
  It's not the *only* option. There are large networks - O(100k) IPv6
  nodes
  - that do ND monitoring for accountability, and it does work for them.
  Many devices support this via syslog, even. As you can imagine, my
  Android device gets IPv6 at work, even though it doesn't support
  DHCPv6. Other universities, too. It's obviously  not your chosen or
  preferred mechanism, but it does work.
 
 
 
 
 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System
 
 T: 207-561-3526
 F: 207-561-3531
 
 MaineREN, Maine's Research and Education Network www.maineren.net



RE: Android (lack of) support for DHCPv6

2015-06-10 Thread Tony Hain
Ray Soucy wrote:
 I don't really feel I was trying to take things out of context, but the full 
 quote
 would be:
 
 If there were consensus that delegating a prefix of sufficient size via
 DHCPv6 PD of a sufficient size is an acceptable substitute for stateful
 IPv6 addressing in the environments that currently insist on stateful
 DHCPv6 addressing, then it would make sense to implement it. In that scenario,
 Android would still not implement DHCPv6 NA, but it would implement DHCPv6
 PD.
 
 To me, that's essentially saying:
 
 EVEN IF we decided to support DHCPv6-PD, and that's a big IF, we will never
 support stateful address assignment via DHCPv6.
 
 This rings especially true when compared against the context of everything 
 else
 you've written on the subject.
 
 I think that's how most others on this list would read it as well.
 
 If that isn't what you meant to say, then I'm sorry.  I'm certainly not 
 trying to put
 words in your mouth.
 
 I still feel that it's a very poor position to take.
 
 Given that you don't speak for Google on the subject, if you're not the 
 decision
 maker for this issue on Android, could you pull in the people at Google who 
 are,
 or at least point us to them?
 
 A lot of us would like the chance to make our case and expose the harm that
 Android is doing by not supporting DHCPv6.
 
 I think the Android team is very overconfident in their ability to shape the
 direction of IPv6 adoption, especially with years old Android releases being 
 still
 in production and it taking incredibly long for changes to trickle down 
 through
 the Android ecosystem.
 
 That delay is also why we have a hard time accepting the mindset that IF you
 see a need for it in the future you'll add it.  That will be 4 to 8 years too 
 late.
 
 
 

So the flip side of that point is: how many decades does it take to trickle 
things through the operational networks? Having seen this first hand at the 
operator, infrastructure vendor and OS vendor perspectives, the general network 
operations community considers anything that makes Application Development 
harder to be their problem. Persistent messages like "don't waste time on 
IPv6 development because we are only going to deploy IPv4 and I need shiny 
feature X NOW" caused at least one decade of delay in infrastructure products 
doing anything. Now we appear to be stuck in another decade of delay based on 
"it is not exactly the same as IPv4". 

Like it or not, the OS vendors actually cater to the Application Developers, as 
they are the ones that produce something useful to the end user. Their job is 
to be the intermediary between the needs of the apps, and the availability (or 
lack) of network resources. (FWIW: as much as people on this list don’t like 
them this is exactly why I made sure XP did automated IPv6 over IPv4 tunneling) 
Fault the OS vendors if you want to, but they are often trying to make the 
networks appear more capable and consistent than they really are. To a first 
order this is the primary innovation of the iPhone, in telling the carriers 
"no, you don't get to fragment the OS or application functionality". 

At the end of the day though, N = 1 is the most likely result of an NA 
deployment today. Once that is engrained in the next generation of network 
operators, they will do everything they can to resist change, because their 
security architecture and all their tools assume N = 1 (or are we already 
there?). Taking the opportunity of change to also change the mindset toward PD 
allows N > 1. Enforcing an NA model where N > 1 eventually fails as N blows out 
the ND cache. 
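
A rough sense of scale for that last point (illustrative numbers only,
assumed for the example):

    hosts = 500              # hosts on one segment
    addrs_per_host = 32      # N addresses each (privacy, per-app, per-VM...)

    nd_entries_with_na = hosts * addrs_per_host   # every address is a neighbor
    entries_with_pd = hosts * 2                   # ~1 ND entry + 1 route each

    print(nd_entries_with_na, "ND cache entries with per-address NA")  # 16000
    print(entries_with_pd, "table entries with per-host PD")           # 1000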

Tony


 
 
 On Wed, Jun 10, 2015 at 12:29 PM, Lorenzo Colitti lore...@colitti.com
 wrote:
 
  On Thu, Jun 11, 2015 at 12:36 AM, Jeff McAdams je...@iglou.com wrote:
 
  Then you need to be far more careful about what you say. When you
  said Android would still not support... you, very clearly, made a
  statement of product direction for a Google product.
 
 
  Did you intentionally leave the in that scenario, words that came
  right before the ones you quoted?
 
  How does a sentence that says in that scenario, android would X
  constitute a statement of direction?
 
 
 
 
 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System
 
 T: 207-561-3526
 F: 207-561-3531
 
 MaineREN, Maine's Research and Education Network www.maineren.net



RE: certification (was: eBay is looking for network heavies...)

2015-06-07 Thread Tony Hain
Randy Bush wrote:
  
  but you can't move packets on pieces of paper.

Or can you?  RFCs 6214, 2549, 1149
;)



RE: AWS Elastic IP architecture

2015-06-01 Thread Tony Hain


 -Original Message-
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of
 Christopher Morrow
 Sent: Monday, June 01, 2015 7:24 AM
 To: Matt Palmer
 Cc: nanog list
 Subject: Re: AWS Elastic IP architecture
 
 On Mon, Jun 1, 2015 at 1:19 AM, Matt Palmer mpal...@hezmatt.org
 wrote:
  On Sun, May 31, 2015 at 10:46:02PM -0400, Christopher Morrow wrote:
  So... ok. What does it mean, for a customer of a cloud service, to be
  ipv6 enabled?
 
  IPv6 feature-parity with IPv4.
 
  My must-haves, sorted in order of importance (most to least):
 
  o Is it most important to be able to terminate ipv6 connections (or
  datagrams) on a VM service for the public to use?
 
 
 and would a headerswapping 'proxy' be ok? there's (today) a 'header
 swapping proxy' doing 'nat' (sort of?) for you, so I imagine that whether the
 'headerswapping' is v4 to v4 or v6 to v4 you get the same end effect:
 People can see your kitten gifs.
 
  o Is it most important to be able to address every VM you create with
  an ipv6 address?
 
 why is this bit important though? I see folk, I think, get hung up on this, 
 but I
 can't figure out WHY this is as important as folk seem to want it to be?
 
 all the vms have names, you end up using the names not the ips... and thus
 the underlying ip protocool isn't really important? Today those names
 translate to v4 public ips, which get 'headerswapped' into v4 private
 addresses on the way through the firehedge at AWS. Tomorrow they may
 get swapped from v6 to v4... or there may be v6 endpoints.
 
  o Is it most important to be able to talk to backend services
  (perhaps at your prem) over ipv6?
 
  If, by backend services, you mean things like RDS, S3, etc, this is
  in the right place.
 
 
 I meant 'your oracle financials installation at $HOMEBASE'. Things like
 'internal amazon services' to me are a named endpoint and:
   1) the name you use could be resolving to something different than the
 external view
   2) it's a name not an ip version... provided you have the inny and it's an
 outy, I'm not sure that what ip protocol you use on the RESTful request
 matters a bunch.
 
  o Is it most important that administrative interfaces to the VM
  systems (either REST/etc interfaces for managing vms or 'ssh'/etc) be
  ipv6 reachable?
 
  I don't see, especially if the vm networking is unique to each
  customer, that 'ipv6 address on vm' is hugely important as a
  first/important goal. I DO see that landing publicly available
  services on an ipv6 endpoint is super helpful.
 
  Being able to address VMs over IPv6 (and have VMs talk to the outside
  world over IPv6) is *really* useful.  Takes away the need to NAT anything.
 
 but the nat isn't really your concern right (it all happens magically for 
 you)?
 presuming you can talk to 'backend services' and $HOMEBASE over ipv6
 you'd also be able to make connections to other v6 endpoints as well.
 there's little difference REALLY between v4 and v6 ... and jabbing a
 connection through a proxy to get v6 endpoints would work 'just fine'.
 (albeit protocol limitations at the higher levels could be interesting if the
 connection wasn't just 'swapping headers')
 
  Would AWS (or any other cloud provider that's not currently up on the
  v6 bandwagon) enabling a loadbalanced ipv6 vip for your public
  service (perhaps not just http/s services even?) be enough to relieve
  some of the pressure on other parties and move the ball forward
  meaningfully enough for the cloud providers and their customers?
 
  No.  I'm currently building an infrastructure which is entirely
  v6-native internally; the only parts which are IPv4 are public-facing
  incoming service endpoints, and outgoing connections to other parts of
  the Internet, which are proxied.  Everything else is talking amongst
  themselves entirely over IPv6.
 
 that's great, but I'm not sure that 'all v6 internally!' matters a whole 
 bunch? I
 look at aws/etc as bunch of goo doing
 computation/calculation/storage/etc with some public VIP (v4, v6,
 etc) that are well defined and which are tailored to your userbase's
 needs/abilities.
 
 You don't actually ssh to 'ipv6 literal' or 'ipv4 literal', you ssh to
 'superawesome.vm.mine.com' and provide http/s (or whatever) services via
 'external-service-name.com'. Whether the 1200 vms in your private network
 cloud are ipv4 or ipv6 isn't important (really) since they also talk to
 eachother via names, not literal ip numbers. There isn't NAT that you care
 about there either, the name/ip translation does the right thing (or should)
 such that 'superawesome.vm.availzone1.com' and
 'superawesome.vm.availzone2.com' can chat freely by name without
 concerns for underlying ip version numbers used (and even without caring
 that 'chrissawesome.vm.availzone1.com' is 10.0.0.1 as well.

Look at the problem in the other direction, and you will see that addresses 
often matter. What if you want to deny ssh connections from a particular 
address range? The source 

RE: AWS Elastic IP architecture

2015-06-01 Thread Tony Hain
 snip

  What I read in your line of comments to Owen is that the service only does
 a header swap once and expects the application on the VM to compensate.
 In that case there is an impact on the cost of deployment and overall utility.
 
 'compensate' ? do you mean 'get some extra information about the real
 source address for further policy-type questions to be answered' ?

Yes. Since that is not a required step on a native machine, there would be 
development / extra configuration required. While people that are interested in 
IPv6 deployment would likely do the extra work, those who just want it to 
work would delay IPv6 services until someone created the magic. Unfortunately 
that describes most of the people that use hosted services, so external proxy / 
nat approaches really do nothing to further any use of IPv6.
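
As a concrete example of the "compensation" in question (a sketch only;
header names and trust policy vary by deployment), an application behind a
header-swapping proxy has to recover the real client address itself rather
than trusting the socket peer:

    def client_address(environ):
        """Prefer the proxy-supplied X-Forwarded-For (if the proxy is
        trusted) over the socket peer address the VM actually sees."""
        forwarded = environ.get("HTTP_X_FORWARDED_FOR")
        if forwarded:
            return forwarded.split(",")[0].strip()  # left-most = original client
        return environ.get("REMOTE_ADDR", "")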

 
 I would hope that in the 'header swap' service there's as little overhead
 applied to the end system as possible... I'd like my apache server to answer
 v6 requests without having a v6 address-listening-port on my machine. For
 'web' stuff 'X-forwarded-for' seems simple, but breaks for https :(

So to avoid the exceedingly simple config change of "Listen 80" rather than 
"Listen x.x.x.x:80", you would rather not open the IPv6 port? If the service 
internal transport is really transparent, https would work for free. I don't 
have any data to base it on, but I always thought that scaling an e-commerce 
site was the primary utility in using a hosted VM service. If that is true, it 
makes absolutely no sense to do a proxy VIP thingy for IPv6 port 80 to fill the 
cart, then fail the connection when trying to check-out. As IPv4 becomes more 
fragile with the additional layering of nats, the likelihood of that situation 
goes up, causing even more people to want to turn off the IPv6 vip. It is 
better for the service to appear to be down at the start than to have customers 
spend time then fail at the point of gratification, because they are much more 
likely to forget about an apparent service outage than to forgive wasting their 
time.
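
To make the point concrete, a minimal sketch of the Apache change being 
discussed (assuming a stock httpd.conf on a dual-stack build; the address is 
a placeholder):

# listens only on one IPv4 address, so a v6 VIP has nothing native to hand off to
Listen 192.0.2.10:80

# listens on all addresses, v4 and v6, for both the cart and the https checkout
Listen 80
Listen 443

The dual-stack listener has to cover 443 as well, or the cart-fills-then-
checkout-fails scenario above is exactly what the customer sees.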

 
 Oh, so what if the 'header swap' service simply shoveled the v6 into a gre
 (or equivalent) tunnel and dropped that on your doorstep?
 potentially with an 'apt-get install aws-tunnelservice'  ? I would bet in the
 'vm network' you could solve a bunch of this easily enough, and provide a v6
 address inside the tunnel on the vm providing the services.
 
 loadbalancing is a bit rougher (more state management) but .. is doable.

I think tunneling would be more efficient and manageable overall. I have not 
thought through the trade-offs between terminating it on the host vs inside the 
VM, but gut feel says that for the end-user / application it might be better 
inside the vm so there is a clean interface, while for service manageability it 
would be better on the host, even though some information might get lost in the 
interface translation. As long as the IP header that the VM stack presents to 
the application is the same as the one presented to the vip (applies outbound 
as well), the rest is a design detail that is best left to each organization.
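
As a rough sketch of the 'terminate inside the VM' option (Linux iproute2 
syntax; the IPv4 endpoints and the IPv6 prefix are placeholders, and the 
provider-side concentrator is assumed to already exist):

# v6-in-GRE tunnel from the guest to the provider's tunnel concentrator
ip tunnel add gre6 mode gre local 10.0.0.5 remote 10.0.0.1 ttl 255
ip link set gre6 up
# the VM's global IPv6 address and default route live inside the tunnel
ip -6 addr add 2001:db8:1234::2/64 dev gre6
ip -6 route add default via 2001:db8:1234::1 dev gre6

At that point the application sees an ordinary IPv6 interface, which is the 
"clean interface" argument for putting the endpoint in the VM rather than on 
the host.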

Tony




RE: AWS Elastic IP architecture

2015-06-01 Thread Tony Hain
Hugo  Slabbert wrote:
 snip
 
 On this given point, though: Facebook -ne generic hosting platform

True, but it does represent a business decision to choose IPv6. The relevant
point here is that the NEXT facebook/twitter/snapchat/... is likely being
pushed by clueless investors into outsourcing their infrastructure to
AWS/Azure/Google-cloud. This will prevent them from making the same business
decision about system efficiency and long term growth that Facebook made due
to decisions made by the cloud service operator.

From my perspective, most of this conversation has centered on the needs of
the service, and tried very hard to ignore the needs of the customer despite
Owen and others repeatedly raising the point. While the needs of the service
do impact the cost of delivery, a broken service is still broken. Personally
I would consider free to be overpriced for a broken service, but maybe
that is just me. 

In any case, if the VM interface doesn't present what looks like a native
IPv6 service to the application developer, IPv6 usage will be curtailed and
IPv4 growing pains will continue to get worse. 

Tony



RE: AWS Elastic IP architecture

2015-06-01 Thread Tony Hain


 -Original Message-
 From: christopher.mor...@gmail.com
 [mailto:christopher.mor...@gmail.com] On Behalf Of Christopher Morrow
 Sent: Monday, June 01, 2015 5:10 PM
 To: Tony Hain
 Cc: Hugo Slabbert; Matt Palmer; nanog list
 Subject: Re: AWS Elastic IP architecture
 
 On Mon, Jun 1, 2015 at 7:20 PM, Tony Hain alh-i...@tndh.net wrote:
  True, but it does represent a business decision to choose IPv6. The
  relevant point here is that the NEXT facebook/twitter/snapchat/...
  is likely being pushed by clueless investors into outsourcing their
  infrastructure to AWS/Azure/Google-cloud.
 
 ;; ANSWER SECTION:
 www.snapchat.com.   3433IN  CNAME   ghs.google.com.
 ghs.google.com. 21599   IN  CNAME   ghs.l.google.com.
 ghs.l.google.com.   299 IN  A   64.233.176.121
 
 snapchat seems to be doing just fine on 'google cloud services' though? oh:
 
 ;; ANSWER SECTION:
 www.snapchat.com.   3388IN  CNAME   ghs.google.com.
 ghs.google.com. 21599   IN  CNAME   ghs.l.google.com.
 ghs.l.google.com.   299 IN  AAAA    2607:f8b0:4002:c06::79
 
 ha!

Try https://snapchat.com and see if you ever get an IPv6 connection... Yes, an 
application-aware proxy can hack some services into appearing to work, but they 
really fail the service customer, because a site may appear to be up over IPv6 
until the user switches to https; then, having to fall back to IPv4, it can end 
up appearing dead because IPv4 routing is having a bad hair day. 





RE: Multiple vendors' IPv6 issues

2015-05-27 Thread Tony Hain
David,

While I agree with you that there is no excuse for the general IPv6 brokenness 
across all vendors, they are just doing what participants on lists like this 
one tell them. Name & shame may help a little, but until a large number of people 
get serious and stop prioritizing IPv4 in their purchasing demands, the vendors 
are not going to prioritize IPv6. Until the vendors clearly hear a collective 
"we are not buying this product because IPv6 is broken", everyone will get 
exactly the behavior you are witnessing. 

While I appreciate the challenges you are facing, it is likely that you will be 
helped by documenting the percentage of IPv6 traffic you see when things do 
work. While it may not be much now, that can change quickly and will provide 
internal ammunition when you try to take a stand about refusing to use a 
product. If your IPv6 percentage grows anywhere near the 2x/yr rate that 
Google has been seeing, it won't take long before IPv6 is the driving protocol. 
For fun, project this 
http://www.google.com/intl/en/ipv6/statistics.html   forward 4 years and hand 
it to the vendors that can't get their IPv6 act together. Then ask them how 
they plan to still be in business at that point ..

Tony


 -Original Message-
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of David
 Sotnick
 Sent: Tuesday, May 26, 2015 4:19 PM
 To: NANOG
 Subject: Multiple vendors' IPv6 issues
 
 Hi NANOG,
 
 The company I work for has no business case for being on the IPv6-Internet.
 However, I am an inquisitive person and I am always looking to learn new
 things, so about 3 years ago I started down the IPv6 path. This was early
 2012.
 
 Fast forward to today. We have a /44 presence for our company's multiple
 sites; All our desktop computers have been on the IPv6 Internet since June,
 2012 and we have a few AAAAs in our external DNS for some key services —
 and, there have been bugs. *Lots* of bugs.
 
 Now, maybe (_maybe_) I can have some sympathy for smaller network
 companies (like Arista Networks at the time) to not quite have their act
 together as far as IPv6 goes, but for larger, well-established companies to
 still have critical IPv6 bugs is just inexcusable!
 
 This month has just been the most disheartening time working with IPv6.
 
 Vendor 1:
 
 Aruba Networks. Upon adding an IPv6 address to start managing our WiFi
 controller over IPv6, I receive a call from our Telecom Lead saying that our
 WiFi VoIP phones have just gone offline. WHAT? All I did was add an IPv6
 address to a management interface which has *nothing* to do with our VoIP
 system or SSID, ACLs, policies, roles, etc.
 
 Vendor 2:
 
 Palo Alto Networks: After upgrading our firewalls from a version which has a
 nasty bug where the IPv6 neighbor table wasn't being cleaned up properly
 (which would overflow the table and break IPv6), we now have a *new*
 IPv6 neighbor discovery bug where one of our V6-enabled DMZ hosts just
 falls of the IPv6 network. The only solution: clear the neighbor table on the
 Palo Alto or the client (linux) host.
 
 Vendor 3:
 
 Arista Networks: We are seeing a very similar ND bug with Arista. This one is
 slightly more interesting because it only started after upgrading our Arista
 EOS code — and it only appears to affect Virtual Machines which are behind
 our RedHat Enterprise Virtualization cluster. None of the hundreds of
 VMware-connected hosts are affected. The symptom is basically the same
 as the Palo Alto bug. Neighbor table gets in some weird state where ND
 breaks and the host is unreachable until the neighbor table is cleared.
 
 Oh, and the final straw today, which is *almost* leading me to throw in the
 IPv6 towel completely (for now): On certain hosts (VMs), scp'ing a file over
 the [Arista] LAN (10 gigabit LAN) takes 5 minutes over IPv6 and 1 second
 over IPv4. What happened?
 
 It really saddens me that it is still not receiving anywhere near the kind of
 QA (partly as a result of lack of adoption) that IPv4 has.
 
 Oh, and let's not forget everybody's favorite vendor, Cisco. Why is it,
 Cisco, that I have to restart my IPv6 OSPFv3 process on my ASA every time my
 Palo Alto firewall crashes and fails over, otherwise none of my VPN clients
 can connect via IPv6?
 
 Why do you hurt me so, IPv6? I just wanted to be friends, and now I just
 want to break up with you. Maybe we can try to be friends again when your
 vendors get their shit together.
 
 -David



RE: Comcast thinks it ok to install public wifi in your house

2014-12-11 Thread Tony Hain
 -Original Message-
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Bob Evans
 Sent: Thursday, December 11, 2014 7:30 AM
 To: nanog@nanog.org
 Subject: Re: Comcast thinks it ok to install public wifi in your house
 
 
 I think it's more than AC power issue ... who knows what strength level they
 program that SSID to work at?  More wifi signal you are exposed to without
 your knowledge and more...read on.
 
The CPU would be running the idle loop if it wasn't handling these packets,
so power consumption outside the RF transmitter is irrelevant.

Given it is a part-15 consumer device, you can assume no more than 100 mW on
the signal level. Assume someone lights that up 24x7x365.25  ... (an
unrealistic continuous broadcast from a source on the wired side, but for a
worst case back-of-the-envelope calculation it is close enough). The
transmitter is not going to be 100% efficient, so let's pick 33% to make the
calculation easier to follow.

0.3 W x 24 hrs = 7.2 Whrs/day
7.2 Whrs/day x $0.00011/Whr* = $0.000792/day
$0.000792/day x 365.25 days/yr = $0.289278/yr

*YMMV based on the local rate per kWhr.


So for any realistic local kWhr rate in the coverage area, the result is
less than $1/yr. This case is arguing that a substantial burden has been imposed
as the result of consuming vastly more electricity, but any realistic use
of that additional signal over an entire year costs less than the stamp
used to mail in just one month's bill payment. 

The lawyers in this case need a substantial fine for abusing the court
system. 
Tony




RE: Linux: concerns over systemd adoption and Debian's decision to switch

2014-10-23 Thread Tony Hain
Randy wrote:
 I've enjoyed kernel hot patches (ksplice) until now.
 
 So my primary concern is that updates to systemd appears to require a full
 reboot:
 
 http://forums.fedoraforum.org/showthread.php?t=300166
 
 Is systemd really like a 2nd 'kernel' -- demanding mass reboots every time
a
 security issue is discovered?
 
 I hope not!
 
 --
 ~Randy

Given that their focus is on reducing boot-time, why wouldn't they want to
highlight that point by making you do it often??

It is clear that the systemd developers are on a solid track to catch up
with Windows-9x/ME. At the timeframe of Win9x/ME development, Gates was
hammering boot-time : boot-time : boot-time on a regular basis. My
feedback to him and his direct reports was that getting rid of reasons to
reboot was much more important than making an all too common activity less
unpleasant. Clearly a monolithic super ker-daemon will be at least as easy
to use and maintain as a Win9x machine ...  ;)

Tony





RE: Dealing with abuse complaints to non-existent contacts

2014-08-10 Thread Tony Hain
I have found the scaling is better if you make it the abusing provider's problem 
to contact you. Whenever a range gets blocked, the bounce message tells the 
mail originator to take their money and find a new hosting provider that does 
not support/tolerate spam. When legitimate originators have contacted their 
provider about that message, the sources that were inadvertently hosting the 
abuse are happy to find out more so they can resolve the problem, and they 
provide a working contact in the process, even if the registered one fails. 

The down side is that it requires the legitimate originator to pay attention to 
the bounce and decide they want to take action. The hope is that eventually 
more money will flow toward those hosting providers that are diligent about 
resolving issues. 
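
One way to implement that bounce, as a sketch only (Postfix syntax; the range, 
text, and contact address are placeholders, and any MTA that can attach text to 
a reject will do the same job):

/etc/postfix/main.cf:
  smtpd_client_restrictions = check_client_access cidr:/etc/postfix/blocked_ranges, permit

/etc/postfix/blocked_ranges:
  192.0.2.0/24  REJECT Your hosting provider tolerates abuse from this range. Take your money to a provider that handles abuse reports, or have them contact postmaster@example.net.

The text rides back to the legitimate originator in the bounce, which is what 
creates the pressure on the hosting provider.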

Tony


 -Original Message-
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Suresh
 Ramasubramanian
 Sent: Monday, August 11, 2014 11:04 AM
 To: Mark Andrews
 Cc: goe...@anime.net; nanog@nanog.org
 Subject: Re: Dealing with abuse complaints to non-existent contacts
 
 Good luck getting action from foreign LE through the mlat system
 
 You might get a response, oh, in the next two years or so. IF you can find
 local LE willing to push the case forward.
 
 Beyond that while RIRs are not the internet police they do owe it to the
 community to be more vigilant on dud contact addresses, and also do a
 lot^W bit more due diligence when allocating IP space, and when appointing
 LIRs.
  On 11-Aug-2014 6:37 am, Mark Andrews ma...@isc.org wrote:
 
 
  In message cb3ca09e-b16f-4101-aec2-aee12c982...@delong.com,
 Owen
  DeLong
  writes:
  
   On Aug 10, 2014, at 1:28 PM, goe...@anime.net wrote:
  
On Mon, 11 Aug 2014, Paul S. wrote:
It would appear you've done your part in trying to reach out (and
subsequently failed), so the next step to go is dropping all
traffic
  from
it.
   
Nothing wrong with trying to protect your own customer from
people who cannot be bothered to do their own due diligence.
   
It would be nice if allocations would be revoked due to
invalid/fake contact info.
   
-Dan
  
   I kind of agree, but past efforts in this regard have not met with
   consensus from the ARIN community.
  
   If you believe this to be the case, I suggest putting it into
   template format and submitting to pol...@arin.net.
  
   I'm happy to help if you would like. Subscribing to arin-ppml will
   allow you to participate in the community discussion of the policy
 proposal.
  
   Owen
 
  It really isn't the RIR's job to withdraw allocations due to bad
  behaviour as much as many of us would like it to be.  Failure to
  maintain valid contact details however is within the purview of the
  RIRs.
 
  If you are being attacked, report the attack to your LEA.  Let the
  LEA's maintain intellegence on which networks are permitting attacks
  to be launched from their address space.  They can work with LEA in
  the network's juristiction to get the attacks stops and offenders
  prosecuted.  LEA's can in theory also get courts to issue orders to
  filter offending address blocks by all ISP's in their juristiction.
 
  Mark
  --
  Mark Andrews, ISC
  1 Seymour St., Dundas Valley, NSW 2117, Australia
  PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
 



RE: Need trusted NTP Sources

2014-02-06 Thread Tony Hain
 -Original Message-
 From: Notify Me [mailto:notify.s...@gmail.com]
 Sent: Thursday, February 06, 2014 4:54 AM
 To: Aled Morris
 Cc: nanog@nanog.org; Martin Hotze
 Subject: Re: Need trusted NTP Sources
 
 Raspberries! Not common currency here either, but let's see!

While I would be using a Pi if I were doing it now, a few years ago I put
together a circuit that used a $100 outdoor mast-mount GPS receiver* with a
PPS out, to feed an RS232 connection to 3 FreeBSD 8.1 systems compiled with:
options PPS_SYNC
I don't know if that is still required in 10.0, and I understand Linux has
since fixed the kernel time resolution issues it was having, so research
into current OS configuration is required. To make the local time reference
preferred over external references, in ntp conf:
server 127.127.20.1 mode 8 minpoll 4 maxpoll 4 prefer
The diagram is at http://tndh.net/~tony/GPS-PPS-5v-ttl_232-box.pdf
While there is 'some assembly required', the components to feed existing
servers may be easier to come by than a Pi, and an outdoor receiver will
have better reception than the Adafruit one stuck inside a datacenter. 
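
For reference, the matching refclock fragment looks roughly like this (driver 
type 20 is the generic NMEA driver, so 127.127.20.1 is unit 1; the fudge line 
is a sketch: flag1 turns on PPS processing, and the time2 serial-delay value is 
a placeholder you have to measure for your own receiver and cable):

server 127.127.20.1 mode 8 minpoll 4 maxpoll 4 prefer
fudge  127.127.20.1 flag1 1 time2 0.050    # flag1 = use PPS, time2 = measured serial offset
server bigben.cac.washington.edu iburst    # keep a few network servers as sanity checks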

As others have said, several external references help protect against any
one source having a bad day, but you should also be aware that network
asymmetry WILL impact your results so factor topology into your source
selection. Using this setup and OWAMP** I was able to track down a ~20ms
peering asymmetry between HE  Comcast inside the Seattle Westin colo,
which still persists.*** It would appear from the time delay that one of
their intermediaries is not really present in the building, but using a
fiber loop to a city about 400 miles away (Boise, or Medford ??). I am not
aware of the specific topology, other than traceroute shows different
intermediaries in each direction at one IP hop, with one taking 20ms longer
than the other to move between the same HE  Comcast routers inside that
colo. What I can see is the impact it has of showing the IPv6 connected NTP
peers as ~10ms off of the local IPv4 ones  the GPS receiver. 

Good luck


* MR-350P
http://www.amazon.com/Globalsat-Waterproof-External-Receiver-without/dp/B001ENYWJC/ref=sr_sp-atf_title_1_1?ie=UTF8&qid=1391734470&sr=8-1&keywords=mr-350p

** OWAMP  http://software.internet2.edu/owamp/

*** ntpq -p
     remote           refid       st t when poll reach   delay   offset  jitter
================================================================================
xPPS(1)          .PPS.             0 l    7   16  377    0.000    0.001   0.002
oGPS_NMEA(1)     .GPS.             0 l    7   16  377    0.000    0.001   0.002
*bigben.cac.wash .GPS.             1 u   69   64  372   13.058    1.638  36.654
+clock.fmt.he.ne .CDMA.            1 u   15   64  373   32.641    1.938  28.828
-chronos6.es.net .CDMA.            1 u    9   64  377   92.321   10.473   2.335
-2001:4f8:2:d::1 129.6.15.29       2 u   31   64  377   35.545    9.912  43.519
-time0.apple.com 17.150.142.121    2 u    2   64  377   44.922   -1.275  26.193


 grateful for all the input and responses, this list is amazing as usual.
 
 On Thu, Feb 6, 2014 at 1:41 PM, Aled Morris al...@qix.co.uk wrote:
  On 6 February 2014 12:30, Martin Hotze m.ho...@hotze.com wrote:
 
   I'm trying to help a company I work for to pass an audit, and we've
   been told we need trusted NTP sources (RedHat doesn't cut it).
   Being located in Nigeria, Africa,
 
   [...]
 
  So build your own stratum 1 server (maybe a second one with DCF77 or
  whatever you can use for redundancy),
 
 
  I don't think DCF77 is going to reach Nigeria.
 
  Aled




RE: turning on comcast v6

2013-12-31 Thread Tony Hain
(Yes this is a top post ... get over it)

Thank you Leo for doing such a great job in this scenario of explaining why
acronym familiarity has much more to do with people's entrenched positions
than the actual network manageability they claim to be worried about. The
hyperbolic nonsense in "replace every ethernet switch in your entire
network with new hardware that supports RA Guard, and then deploy new
configuration on every single port of every single device in your network.
Please develop a capital justification plan for Mr MoneyBagsCEO for
replacing every switch in your network so you can safely deploy IPv6."
clearly shows that it is the spooky acronym RA that is more important to
focus on than reality. It also does a nice job of wrapping up the point
about why an IPv6 rollout needs a long term plan with appropriate multi-year
budgeting.  ;)

For starters, in the scenario described, you only need 1 port protected, and
that is for the person that would be doing the configuration, so it is
likely pointless. Do you really believe that dhcp messages picked up by the
rogue router wouldn't end up answering with the wrong values and breaking
both IPv4 & IPv6? Next, do you really believe that DHCP Guard for an IPv4-aware
switch will do anything when an IPv6 DHCP message goes by? Don't you
have to replace every switch and reconfigure anyway? Or is rogue DHCP
service a problem that goes away with IPv4? Why do people continue to insist
that a cornerstone of their network security model is tied to an inherently
insecure protocol that was never intended to be used as a security tool? ...
but I digress ...

There are two very different models for IPv6 address/information allocation,
and each needs to be fully functional and independent of the other; period.
Unfortunately there have been too many voices demanding a 'one size fits
all' approach within the IETF, and we have gotten to the current situation
where you need to deploy parts of both models to have a functional network.
RFC 6106 is a half-baked concession from the 'dhcp is the only true way'
crowd to allow home networks to be functional, but if you want anything more
than DNS you have to return to the one-true-way, simply because getting
consensus for a more generic dhcp-options container in the RA was not going
to happen. The Routing Information DHCP option has been held hostage by
those that might be described as a 'dhcp is broken by design' crowd, because
many saw that as a bargaining point for consensus around a more feature rich
RA. Both hard line positions preventing utility in the other model are
wrong, but in the presence of a leadership mantra of one-size-fits-all,
neither side was willing to allow complete independent functionality to the
other. 

Making progress on the Routing Information option requires a clear scenario
to justify it, because the vast swamp of dhcp options that have ever been used
in IPv4 is not brought forward without some current usage case. Lee was
asking for that input, and while the scenario you paint below might be
helped by that option, it presumes that every device on the network has
additional configuration to ignore an errant RA sent from the router being
configured simply because the network is supposed to be using DHCP. The only
device I know of that attempts to ignore an RA is Samsung's Android image,
which does the stupid thing of configuring an address from the RA on the LAN,
but refuses to create a routing entry from it if there has ever been a route
via the 4G radio. (That is fundamentally a platform bug, because Google lets
them set the knobs that way instead of doing the right thing: a metric
bias between the available routes for fallback when one or the other goes
away.)

Ryan's different dhcp answers based on auth state is a use case, and if it is in
widespread use as a way around 802.1X, it might get enough support by itself to
carry the day. If there are other use cases which are not self-contradictory
justifications of maintaining acronym familiarity, they will spread the
support base and make it easier to get past the objections. This is not
about which model is 'right', if anything limiting it is about minimizing
the different ways people can hang themselves without realizing the risks
beforehand. At the end of the day, the IETF's job is to document
technologies so vendors can implement for consistent behavior in
independently managed networks. Vendors will build whatever they are paid to
build, but if you want generic COTS, then lots of people need to justify a
specific behavior with some level of consistency to get that documented as
the consensus approach. So far there have not been enough consistent
scenarios to get an RI option passed.

Tony


 -Original Message-
 From: Leo Bicknell [mailto:bickn...@ufp.org]
 Sent: Monday, December 30, 2013 3:25 PM
 To: Lee Howard
 Cc: Jamie Bowden; North American Network Operators' Group
 Subject: Re: turning on comcast v6
 
 
 On Dec 30, 2013, at 2:49 PM, Lee Howard 

RE: turning on comcast v6

2013-12-31 Thread Tony Hain
Ryan Harden wrote:
...
 
 IMO, being able to hand out gateway information based on $criteria via
 DHCPv6 is a logical feature to ask for. Anyone asking for that isn't
trying to tell
 you that RA is broken, that you're doing things wrong, or that their way
of
 thinking is more important that yours. They're asking for it because they
have
 a business need that would make their deployment of IPv6 easier. Which,
 IMO, should be the goal of these discussions. How do we make it so
 deploying IPv6 isn't a pain in the butt? No one is asking to change the
world,
 they're asking for the ability to manage their IPv6 systems the same way
they
 do IPv4.

As I said in the response to Leo, this issue has been raised before and
couldn't get traction, because the combination of a one-size-fits-all mantra
from the leadership with a concession that the dhcp model would be
self-contained would have led to the end of the RA model. You are correct,
neither way is better, and both need to operate independently or in
combination, but getting there requires a clear use case, or many similar
cases, to make progress. 

I believe you are correct in that many people do use the dhcp option to
assign the router, but quantifying that is a very difficult task because
that community rarely worries about driving standards to get their way. I
find that most of this community finds innovative ways to reuse tools
defined for a different purpose, but its close enough to accomplish the task
at hand while avoiding the cost of getting a vendor to build something
specific. That is all fine until the original backer of the tool goes a
different direction, and ongoing evolution requires someone to justify its
continued support. The scattered community has so many different corner-case
uses it is hard to make a clear and quantified need for what the tool should
become. 

The primary reason that this is even a discussion is that the decision was
made long ago in the DHCP WG to avoid bringing forward unused baggage from
the evolution of IPv4 and dhcp by not bringing any options forward until
someone documented an ongoing use for it. That remains the only real
requirement I am aware of for getting a dhcp option copied forward from IPv4
to IPv6; document a widespread use case. This one has had an artificial
requirement of getting past the dhcp vs. RA model wars, but that would have
been, and still is easy enough to beat down with sufficiently documented
use. Documented use is where things fail, because we loop back to the point
about the people using it don't participate in driving the process to
demonstrate how widespread the use actually is, and what specific
functionality is being used to make sure the new definition is sufficient. 

Lee asked the question about use cases, and you were the only one that
offered one with substance. Compound that with the point that nobody else
jumped in with a 'me too', and the case could be made that you are looking
for a standard to be defined around your local deployment choices. Not to
say your deployment is isolated, wrong, or shouldn't be considered
best-practice, rather that it is hard to demonstrate consensus from a single
voice. Besides documenting the use case, it will help to fight off
objections by also documenting why this innovative use is deployed rather
than another widely deployed choice (in the case you present, why not
802.1X?, not that it is better, just 'why not' ; and I personally consider
pre-dated or inconsistent implementations at deployment as a perfect
justification, but that is just my take).  At the end of the day, if
operators don't actively participate in the standards process, the outcome
will not match their expectations. 

Tony






RE: Naive IPv6 (was ATT UVERSE Native IPv6, a HOWTO)

2013-12-04 Thread Tony Hain
Brian Dickson wrote:
  And root of the problem was brought into existence by the insistence
  that every network (LAN) must be a /64.

Get your history straight. The /64 was an outcome of operators deciding
there was not enough room for hierarchy in the original proposal for the
IPv6 address as 64 bits (including hosts), despite its ability to deliver 3
orders of magnitude more than the IAB goal statement. Given that position,
the entire 64 bits was given to *ROUTING* and we argued for another year
about how many bits to add for hosts on the lan. The fact it came out to 64
was an artifact of emerging 64 bit processors and the desire to avoid any
more bit shifting than necessary. Yes, autoconfig using the mac was a
consideration, but that was a convenient outcome, not the driving factor.

Yet here we are 15 years later and the greedy, or math challenged, still
insist on needing even more bits. Stop worrying about how many bits there
are in the lan space. That abundance allows for technical innovation that
will never be possible in the stingy world of centralized control. Consider
a world where the 'central control' crowd only allows one application on the
network (voice), and innovation is defined as 'only deploy something with an
immediate income stream' (caller id). In an environment like that, where do
new things come from? You can't prove a demand exists without deployment,
yet you can't get deployment without a proven demand. Enter the ott Internet
which leveraged the only allowed app via an audio modulation hack and built
something entirely different, where innovation was allowed to flourish. Now
go back to the concept of miserly central control of lan bits and figure out
how one might come up with something like RFC3971 (SEND) that would work in
any network. 


Rob Seastrom wrote:
 
 Re-working your conclusion statement without redoing the math, This
 leaves room for 2^15 such ISPs (a mere 16384), from the current /3.

Interesting; that was the IAB design goal for total end system count.  
2^12 networks supporting 2^15 end systems.

 
 Oddly enough, I'm OK with that.  :)

So am I, and if we do burn through that before a replacement network
technology comes along, there are still 6 more buckets that large to do it
differently.

Tony








RE: ATT UVERSE Native IPv6, a HOWTO

2013-12-02 Thread Tony Hain
Ricky Beam wrote:
 On Fri, 29 Nov 2013 08:39:59 -0500, Rob Seastrom r...@seastrom.com
 wrote:
  So there really is no excuse on ATT's part for the /60s on uverse
6rd...
 
 Except for a) greed (we can *sell* larger slices) and b) demonstrable
user
 want/need.
 
 How many residential, home networks, have you seen with more than one
 subnet?  The typical household (esp Uverse) doesn't even customize the
 provided router.  Even a CCIE friend of mine has made ZERO changes to his
 RG -- ATT turned off WiFi and added the static block at install. (I know
 NANOG is bad sample as we're all professionals and setup all kinds of
weird
 configurations at home. I have 3 nets in continuous use... a legacy
public
 subnet from eons ago (I never renumbered), an RFC1918 subnet overlapping
 that network (because it's too small), and a second RFC1918 net from a
 second ISP)
 
 I wouldn't use the word generous, but a /60 (16 LANs) is way more than
 what 99% of residential deployments will need for many years.  We've
 gotten by with a single, randomly changing, dynamic IP for decades.  Until
 routers come out-of-the-box setup for a dozen networks, non-networking
 pros aren't going to need it, or even know that it's possible. (and the
default
 firewalling policy in Windows is going to confuse a lot of people when
 machines start landing in different subnets can see each other.)
 
 Handing out /56's like Pez is just wasting address space -- someone *is*
 paying for that space. Yes, it's waste; giving everyone 256 networks when
 they're only ever likely to use one or two (or maybe four), is
intentionally
 wasting space you could've assigned to someone else. (or
 **sold** to someone else :-)) IPv6 may be huge to the power of huge, but
 it's still finite. People like you are repeating the same mistakes from
the early
 days of IPv4... the difference is, we won't be around when
 people are cursing us for the way we mismanaged early allocations.
 Indeed, a /64 is too little (aka bare minimum) and far too restrictive,
but it
 works for most simple (default) setups today. Which leads to DHCPv6 PD...
a
 /60 is adequate -- it's the minimal space for the rare cases where
multiple
 nets are desirable or necessary. The option for /56 or even /48 should
exist
 (esp. for business), but the need for such large address spaces are an
 EXCEPTION in residential settings. (and those are probably non-residential
 users anyway.) [FWIW, HE.net does what they do as marketing. And it works,
 btw.]

The rant above represents a braindead short-sighted thought process. If you
focus on the past as justification for current actions, you guarantee that
the future will always look exactly like the past. If you even hint at a /64
as the standard for residential deployment, you will find that all the CPE
vendors will hard code that, and you will never get it undone when you
change your mind. All because you stated up front that they will only ever
need what they have been using in the past. 

You don't see multi-subnet residential today from the consumer viewpoint,
but multiple subnets do widely exist supporting deployment of "watch your dvr
from any set-top", where a premises subnet handles that traffic off of the
consumer lan. That one example is why there should NEVER be a /64, because you
are already at 2 subnets that should be using the same shorter prefix. Trying to
develop the automation necessary for consumer plug-n-play subnets shows that
even a /56 is virtually unusable. A /55 makes more sense for a topology with
moderate constraints, but if you are already shorter than a 56, it doesn't
make sense to stop there. This is a hard concept for professional network
engineers, because their market place value is based on the ability to
efficiently manage topologies to fit within address resource constraints.
Consumers have no desire to understand the technology, they just want to
plug stuff together and have it sort out what it needs to do. That
unconstrained topology coupled with unmanaged device automation requires
excess address resource. 

YES THAT IS A WASTE. But having the address space sitting on the shelf at
IANA when someone comes along with a better idea in the next few hundred
years is also a waste. Get over it, the address space is excessively larger
than we will ever deploy, so it is wasted ... The only open issue is how we
utilize the resource until the next thing comes along. If it sits on the
shelf, you constrain innovation. If you 'waste it' by deploying it before
people can really use it, you piss-off the existing engineering staff. From
my perspective, the latter will die off, but stifling innovation robs future
generations of capabilities they could/should have had to make the world a
better place. 

Tony





RE: NAT64 and matching identities

2013-11-23 Thread Tony Hain
So it turns out that in many cases a missing "www" is causing the "no IPv4"
response, and someone from Alexa does need to explain what is going on. For
the entire top-1m.csv file, 35,554 entries returned "no IPv4". For each
entry in the csv file:
some return NXDOMAIN:
discart.ru  --  NXDOMAIN
 www.discart.ru resolves, but Alexa file missing www.
 Alexa position:4721,discart.ru

some return without any answer:
bp.blogspot.com  -- No Answer
 www.bp.blogspot.com resolves, but Alexa file missing www.
 Alexa position:87,bp.blogspot.com

while others point to MX-only entries:
akamaihd.net  --  MX-only
 www.akamaihd.net resolves, but Alexa file missing www.
 Alexa position:74,akamaihd.net

The version of the code I have been using strips the "www." and tries again,
but obviously it also needs to add the "www." and retry. In any case, the
Alexa file points to names that do not serve web content, so the entire 'top
1M' list is suspect.
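
For anyone who wants to repeat the exercise, a minimal sketch of the lookup
logic with the missing-"www" retry added (Python, not the original script; the
five attempts mirror the behavior described above):

import socket

def has_record(name, family, tries=5):
    # True if getaddrinfo yields at least one address of the given family
    for _ in range(tries):
        try:
            if socket.getaddrinfo(name, 80, family, socket.SOCK_STREAM):
                return True
        except socket.gaierror:
            pass
    return False

def lookup(name, family):
    # try the name as listed, then with "www." toggled the other way
    if has_record(name, family):
        return True
    alt = name[4:] if name.startswith("www.") else "www." + name
    return has_record(alt, family)

# example: classify one Alexa entry
for fam, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
    print(label, "ok" if lookup("bp.blogspot.com", fam) else "missing")

Run over the csv, an entry only counts as dead when both the listed name and
its "www" variant fail for both address families.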

Tony


 -Original Message-
 From: Tony Hain [mailto:alh-i...@tndh.net]
 Sent: Friday, November 22, 2013 3:50 PM
 To: 'Owen DeLong'
 Cc: sherfe...@amazon.com; 'NANOG List'
 Subject: RE: NAT64 and matching identities
 
 Someone from Alexa really needs to answer how that list is created because
 their web site discussion is way too hand-wavy, but given that neither of
 those appear to be currently valid names, and 1.1.1.1 is on the list at
all, there
 must be some measure of cross link and redirection occurrences. For the
 entire top-1m, I show today's file has 2815 as dotted-quad. In the top
 50,000 there are 1790 with no IPv4   no IPv6.  Clearly they don't bother
 to prune the list for validity. ~4% of the next 25,000 names are dead
(50,000-
 75,000), and one can only guess that as you get further down the list the
 percentage of dead names will continue to go up. I have a full 1M run in
 process, but would not count on it completing before Monday.
 
 Just to add a level of 'extra effort' to the process, I increased the
number of
 attempts to 10, and the time between attempts to 10 seconds. With that,
 dead names in the top 1000:
 akamaihd.net                    no IPv4   no IPv6
 bp.blogspot.com                 no IPv4   no IPv6
 delta-search.com                no IPv4   no IPv6
 bannersdontwork.com             no IPv4   no IPv6
 cloudfront.net                  no IPv4   no IPv6
 doorblog.jp                     no IPv4   no IPv6
 uimserv.net                     no IPv4   no IPv6
 linksynergy.com                 no IPv4   no IPv6
 lipixeltrack.com                no IPv4   no IPv6
 australianbrewingcompany.com    no IPv4   no IPv6
 searchfun.in                    no IPv4   no IPv6
 greatappsdownload.com           no IPv4   no IPv6
 klikbca.com                     no IPv4   no IPv6
 jobfindgold.info                no IPv4   no IPv6
 adnxs.com                       no IPv4   no IPv6
 rakuten.ne.jp                   no IPv4   no IPv6
 sweetpacks-search.com           no IPv4   no IPv6
 yomiuri.co.jp                   no IPv4   no IPv6
 incredibar-search.com           no IPv4   no IPv6
 searchgol.com                   no IPv4   no IPv6
 livedoor.biz                    no IPv4   no IPv6
 workercn.cn                     no IPv4   no IPv6
 
 FWIW: in the top 50,000, I show 1525 "has IPv4 & has IPv6" and 0 "no IPv4 &
 has IPv6". In other words, there are more dead names than there are AAAA
 records, and there are not any IPv6-only sites in that group.
 
 Tony
 
 
  -Original Message-
  From: Owen DeLong [mailto:o...@delong.com]
  Sent: Friday, November 22, 2013 1:48 PM
  To: Tony Hain
  Cc: joel jaeggli; valdis.kletni...@vt.edu; NANOG List
  Subject: Re: NAT64 and matching identities
 
  So one has to wonder how those names made it into the top 100 list if
  it's supposed to be a top 100 web sites, since they are obviously not
  web
 sites.
  (at least in the case of the two in the top 100)
 
  Owen
 
  On Nov 22, 2013, at 1:28 PM, Tony Hain alh-i...@tndh.net wrote:
 
   The only thing it explicitly strips out are dotted-quads, which
   don't occur until # 4255. The code makes five passes at
   getaddrinfo() for
   IPv4 before giving up, and then it checks for a leading www and if
   that exists it strips it off and does the 5 tries loop again, then
   later the same process for IPv6. For the top 100 run:
   akamaihd.netno IPv4   no IPv6
   bp.blogspot.com no IPv4   no IPv6
  
   FWIW :::
   Dotted-quad's in the top 10,000
   4255,92.242.195.24
   4665,1.1.1.1
   5079,92.242.195.231
   6130,1.254.254.254
   9518,208.98.30.70
  
   whois 92.242.195.24
   ...
   netname:Respina
   descr:  BroadBand IP Pool
   country:IR
   ...
   route:  92.242.195.0/24
  
   Respina BroadBand IP Pool in the top 100,000
   4255,92.242.195.24
   5079,92.242.195.231
   10059,92.242.195.233
   23912,92.242.195.30
   31520,92.242.195.111
   35867,92.242.195.235
   95233,92.242.195.129

RE: NAT64 and matching identities

2013-11-22 Thread Tony Hain
Lee Howard wrote:
...
  There is obviously a long tail of ip4 destinations, but nearly all
  of 500 of the Alexa global 500 have ip6 listeners,
 
  Do you have a data source for that?  I see no indication of IPv6
  listeners on 85% of the top sites.
 
 A slightly different metric, 44% of USA content available on IPv6:
 
 http://6lab.cisco.com/stats/
 
 Right, weighted by DNS queries.
 Compare to http://www.vyncke.org/ipv6status/detailed.php?country=us
 and http://www.employees.org/~dwing/-stats/
 
 Not equivalent to nearly all of Alexa 500.

Using a derivative of Dan Wing's code from a couple of years back, I get:

The top 5 websites: AAAA records and IPv6 connectivity
   count with A:          5   (100.000%)
   count with AAAA:       4   ( 80.000%)
Of the 4 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:    4   (100.000%)

The top 10 websites: AAAA records and IPv6 connectivity
   count with A:         10   (100.000%)
   count with AAAA:       6   ( 60.000%)
Of the 6 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:    6   (100.000%)

The top 25 websites: AAAA records and IPv6 connectivity
   count with A:         25   (100.000%)
   count with AAAA:      10   ( 40.000%)
Of the 10 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:   10   (100.000%)

The top 50 websites: AAAA records and IPv6 connectivity
   count with A:         50   (100.000%)
   count with AAAA:      21   ( 42.000%)
Of the 21 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:   21   (100.000%)

The top 100 websites: AAAA records and IPv6 connectivity
   count with A:         98   ( 98.000%)
   count with AAAA:      30   ( 30.000%)
Of the 30 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:   30   (100.000%)

The top 250 websites: AAAA records and IPv6 connectivity
   count with A:        248   ( 99.200%)
   count with AAAA:      56   ( 22.400%)
Of the 56 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:   56   (100.000%)

The top 500 websites: AAAA records and IPv6 connectivity
   count with A:        494   ( 98.800%)
   count with AAAA:      91   ( 18.200%)
Of the 91 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:   91   (100.000%)

The top 1000 websites: AAAA records and IPv6 connectivity
   count with A:        990   ( 99.000%)
   count with AAAA:     132   ( 13.200%)
Of the 132 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:  132   (100.000%)

The top 2500 websites: AAAA records and IPv6 connectivity
   count with A:       2479   ( 99.160%)
   count with AAAA:     216   (  8.640%)
Of the 216 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:  214   ( 99.074%)

The top 5000 websites: AAAA records and IPv6 connectivity
   count with A:       4959   ( 99.220%)
   count with AAAA:     354   (  7.083%)
Of the 354 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:  347   ( 98.023%)

The top 10000 websites: AAAA records and IPv6 connectivity
   count with A:       9918   ( 99.230%)
   count with AAAA:     600   (  6.003%)
Of the 600 hosts with AAAA records, testing connectivity to TCP/80:
   count with IPv6 ok:  575   ( 95.833%)

Original code developed by dw...@employees.org.
manual run by tony on arabian.tndh.net using ./IPv6-check .
on Fri Nov 22 09:48:17 PST 2013  (elapsed: 00:08:33, t: 15).
Top 10000 websites based on Alexa top-1m.csv.





RE: NAT64 and matching identities

2013-11-22 Thread Tony Hain
The only thing it explicitly strips out are dotted-quads, which don't occur
until # 4255. The code makes five passes at getaddrinfo() for IPv4 before
giving up, and then it checks for a leading www and if that exists it strips
it off and does the 5 tries loop again, then later the same process for
IPv6. For the top 100 run:
akamaihd.net    no IPv4   no IPv6
bp.blogspot.com no IPv4   no IPv6

FWIW :::
Dotted-quad's in the top 10,000
4255,92.242.195.24
4665,1.1.1.1
5079,92.242.195.231
6130,1.254.254.254
9518,208.98.30.70

 whois 92.242.195.24
...
netname:Respina
descr:  BroadBand IP Pool
country:IR
...
route:  92.242.195.0/24

Respina BroadBand IP Pool in the top 100,000
4255,92.242.195.24
5079,92.242.195.231
10059,92.242.195.233
23912,92.242.195.30
31520,92.242.195.111
35867,92.242.195.235
95233,92.242.195.129


 -Original Message-
 From: Owen DeLong [mailto:o...@delong.com]
 Sent: Friday, November 22, 2013 12:16 PM
 To: joel jaeggli
 Cc: valdis.kletni...@vt.edu; Tony Hain; NANOG List
 Subject: Re: NAT64 and matching identities
 
 It would be way more than 2 if it were CNAME, methinks.
 
 Owen
 
 On Nov 22, 2013, at 12:12 PM, joel jaeggli joe...@bogus.com wrote:
 
  On 11/22/13, 12:01 PM, valdis.kletni...@vt.edu wrote:
  On Fri, 22 Nov 2013 10:18:27 -0800, Tony Hain said:
 
   The top 100 websites: AAAA records and IPv6 connectivity
      count with A:       98   ( 98.000%)
      count with AAAA:    30   ( 30.000%)
   Of the 30 hosts with AAAA records, testing connectivity to TCP/80:
      count with IPv6 ok: 30   (100.000%)
 
  Statistics whoopsie, or are there actually 2 sites in the top100 that
  are IPv6-only?
 
  IN CNAME ? or is that being accounted for.
 
 
 




RE: NAT64 and matching identities

2013-11-22 Thread Tony Hain
Someone from Alexa really needs to answer how that list is created because
their web site discussion is way too hand-wavy, but given that neither of
those appear to be currently valid names, and 1.1.1.1 is on the list at all,
there must be some measure of cross link and redirection occurrences. For
the entire top-1m, I show today's file has 2815 as dotted-quad. In the top
50,000 there are 1790 with no IPv4   no IPv6.  Clearly they don't bother
to prune the list for validity. ~4% of the next 25,000 names are dead
(50,000-75,000), and one can only guess that as you get further down the
list the percentage of dead names will continue to go up. I have a full 1M
run in process, but would not count on it completing before Monday.

Just to add a level of 'extra effort' to the process, I increased the number
of attempts to 10, and the time between attempts to 10 seconds. With that,
dead names in the top 1000:
akamaihd.net                    no IPv4   no IPv6
bp.blogspot.com                 no IPv4   no IPv6
delta-search.com                no IPv4   no IPv6
bannersdontwork.com             no IPv4   no IPv6
cloudfront.net                  no IPv4   no IPv6
doorblog.jp                     no IPv4   no IPv6
uimserv.net                     no IPv4   no IPv6
linksynergy.com                 no IPv4   no IPv6
lipixeltrack.com                no IPv4   no IPv6
australianbrewingcompany.com    no IPv4   no IPv6
searchfun.in                    no IPv4   no IPv6
greatappsdownload.com           no IPv4   no IPv6
klikbca.com                     no IPv4   no IPv6
jobfindgold.info                no IPv4   no IPv6
adnxs.com                       no IPv4   no IPv6
rakuten.ne.jp                   no IPv4   no IPv6
sweetpacks-search.com           no IPv4   no IPv6
yomiuri.co.jp                   no IPv4   no IPv6
incredibar-search.com           no IPv4   no IPv6
searchgol.com                   no IPv4   no IPv6
livedoor.biz                    no IPv4   no IPv6
workercn.cn                     no IPv4   no IPv6

FWIW: in the top 50,000, I show 1525 "has IPv4 & has IPv6" and 0 "no IPv4 &
has IPv6". In other words, there are more dead names than there are AAAA
records, and there are not any IPv6-only sites in that group.

Tony


 -Original Message-
 From: Owen DeLong [mailto:o...@delong.com]
 Sent: Friday, November 22, 2013 1:48 PM
 To: Tony Hain
 Cc: joel jaeggli; valdis.kletni...@vt.edu; NANOG List
 Subject: Re: NAT64 and matching identities
 
 So one has to wonder how those names made it into the top 100 list if it's
 supposed to be a top 100 web sites, since they are obviously not web
sites.
 (at least in the case of the two in the top 100)
 
 Owen
 
 On Nov 22, 2013, at 1:28 PM, Tony Hain alh-i...@tndh.net wrote:
 
  The only thing it explicitly strips out are dotted-quads, which don't
  occur until # 4255. The code makes five passes at getaddrinfo() for
  IPv4 before giving up, and then it checks for a leading www and if
  that exists it strips it off and does the 5 tries loop again, then
  later the same process for IPv6. For the top 100 run:
  akamaihd.netno IPv4   no IPv6
  bp.blogspot.com no IPv4   no IPv6
 
  FWIW :::
  Dotted-quad's in the top 10,000
  4255,92.242.195.24
  4665,1.1.1.1
  5079,92.242.195.231
  6130,1.254.254.254
  9518,208.98.30.70
 
  whois 92.242.195.24
  ...
  netname:Respina
  descr:  BroadBand IP Pool
  country:IR
  ...
  route:  92.242.195.0/24
 
  Respina BroadBand IP Pool in the top 100,000
  4255,92.242.195.24
  5079,92.242.195.231
  10059,92.242.195.233
  23912,92.242.195.30
  31520,92.242.195.111
  35867,92.242.195.235
  95233,92.242.195.129
 
 
  -Original Message-
  From: Owen DeLong [mailto:o...@delong.com]
  Sent: Friday, November 22, 2013 12:16 PM
  To: joel jaeggli
  Cc: valdis.kletni...@vt.edu; Tony Hain; NANOG List
  Subject: Re: NAT64 and matching identities
 
  It would be way more than 2 if it were CNAME, methinks.
 
  Owen
 
  On Nov 22, 2013, at 12:12 PM, joel jaeggli joe...@bogus.com wrote:
 
  On 11/22/13, 12:01 PM, valdis.kletni...@vt.edu wrote:
  On Fri, 22 Nov 2013 10:18:27 -0800, Tony Hain said:
 
   The top 100 websites: AAAA records and IPv6 connectivity
      count with A:       98   ( 98.000%)
      count with AAAA:    30   ( 30.000%)
   Of the 30 hosts with AAAA records, testing connectivity to TCP/80:
      count with IPv6 ok: 30   (100.000%)
 
  Statistics whoopsie, or are there actually 2 sites in the top100
  that are IPv6-only?
 
  IN CNAME ? or is that being accounted for.
 
 
 




RE: Reverse DNS RFCs and Recommendations

2013-10-31 Thread Tony Hain
John Levine wrote:
 Right.  Spam filtering depends on heuristics.  Mail from hosts without
 matching forward/reverse DNS is overwhelmingly bot spam, so checking for
 it is a very effective heuristic.

Leading digits are clearly in widespread use beyond 3com & 1and1. One of the most 
effective heuristics in my acl list is:
\N^.*@\d{3,}\.(cn|com|net|org|us|asia)

In the last few hours it has picked off multiple messages from each of these:
caro...@8447.com
jef...@3550.com
ronal...@0785.com
kevi...@2691.com
debora...@3585.com
kimberl...@5864.com
sara...@0858.com
zav...@131.com
qgmklyy...@163.com
pjp...@163.com
fahu...@163.com
danie...@4704.com
hele...@2620.com





RE: It's the end of the world as we know it -- REM

2013-04-24 Thread Tony Hain
Lee Howard wrote:
 On 4/23/13 7:44 PM, Geoff Huston g...@apnic.net wrote:
 
 On 24/04/2013, at 8:10 AM, Andrew Latham lath...@gmail.com wrote:
 
  On Tue, Apr 23, 2013 at 5:41 PM, Valdis Kletnieks
  valdis.kletni...@vt.edu wrote:
  I didn't see any mention of this Tony Hain paper:
 
  http://tndh.net/~tony/ietf/ARIN-runout-projection.pdf
 
  ARIN predicted to run out of IP space to allocate in August this year.
 
  Are you ready?
 
 
 The prediction of runout business is extremely hard. All of these
 predictions are based on the basic premise that what happened yesterday
 will most likely happen tomorrow.
 
 If I were any good at predicting things, I would use my powers for evil.
 Your model and Tony's differ largely on how many yesterdays are
 considered; and, Tony's new model weights yesterday more heavily than
 yesteryear, on the guess that recent history is more predictive than
distant
 past history.

Indeed, the current set of actors appears to be different than the
historical set, with a very different deployment-model/demand-curve. 

 
 Meanwhile. . .
 
 
 actors. In the address world it was observed that less than 1% (its
 closer to around 0.5%)  individual allocations account for more than
 half of the number of allocated addresses. This becomes a problem in
 the predictive models, as the dominant factor in address consumption is
 now the actions of some 20 or so very large entities.
 
 Fortunately, very large companies are slow to change.
 Also, John Curran said during discussion at PPML of extra-regional
 allocations: At the current rate, this is the majority of allocations
we're
 making.  So, a different 0.5% than most people are probably thinking of.
 
 I believe he said this growth trend Leads to a runout Q4-2013 or Q1-2014,
 with certainty.
 
 
 
 
 
 Following a single largish allocation in early 2012 we've seen the ARIN
 address consumption rate increase somewhat, and the average rate of
 address consumption is currently around 2M addresses per month. If this
 rate of address consumption continues, the ARIN will reach its last /8
 in early 2014, and if this rate persists, then the registry will
 exhaust its pool around the end of that year, or early 2015.
 
 
 Sorry, is this to say, If this rate of consumption continues or If this
rate of
 increase continues?  I believe the difference is that several
organizations are
 rapidly progressing through ARIN slow start, using their space in
significantly
 less than three months.

I only looked at organizations that had multiple allocations larger than a
/20 in the last 9 months. There may well be as many, or more that have had
multiple /22,/21,/20 sequences in that window, but if they are that small at
this point, they might never get to a /16 before the pool runs out if the
larger ones keep going. 

 
 
 
 However, personally I find it a little hard to place a high probability
 on Tony's projected exhaustion date of August this year.

I was not trying to place any probability on the outcome, and would tend to
agree with you that August 2013 is not particularly likely, but would say it is
much more likely than having anything left by August 2014. That said, a new
set of players showing compound growth in short timeframes is not what the
historical-model projections are based on, so we do need to look more at
current behavior than the distant past.

  I also have to
 qualify that by noting that while I think that a runout of the
 remaining
 40 M addresses within 4 months is improbable, its by no means impossible.
 If we saw a re-run of the address consumption rates that ARIN
 experienced in 2010, then it's not outside the bounds of plausibility
 that ARIN will be handing out its last address later this year.
 
 It largely depends on whether the new organizations getting address space
 hit a growth ceiling (or plateau).  If they do so soon, we return to the
nearly
 linear Potaroo Projection.  If they continue to grow (especially if they
 represent a new business model and others follow suit) then the Hain
 Hypothesis holds.

There is another open question about the growth rate in the number of new
players showing compound growth in deployments. It may not be that any of
them individually gets large enough to make a significant dent on their own,
but if there is compound growth in the number of new slow-start actors, you
still have compound growth in demand, but you may not be looking in the
right place to see why the numbers are large enough to matter.


The really troubling thing that I don't get is why RR got a pile of little
blocks rather than a /12 up front. I don't know if that is an impact of
broken policy, internal deployment decisions about 'right size' allocations
rather than intentional deaggregation, or trying to 'fly under the radar'.
If it is a policy problem it might be worth trying to understand and maybe
fix any long term impact on market transfers. 

Tony


 
 Lee
 
 
 
 
 thanks,
 
 Geoff
 
 
 
 
 





RE: How far must muni fiber operators protect ISP competition?

2013-02-05 Thread Tony Hain
IMHO:   level of clue is a minor point, as that can be bought. The fundamental 
issues for a project like this are funding, and intent. Well-funded 
organizations that lack intent are just problem children that like to tie up 
the courts to keep others from making progress. The target for a project like 
you describe is the organization with intent, but lacks funding. Yes some of 
those will have an easier time by not having to acquire the appropriate level 
of clue, but they may not last long if they don't. Part of your calculation has 
to be the level of churn you are willing to impose on the city as the low-price 
competitors come and go.

Tony


 -Original Message-
 From: Jay Ashworth [mailto:j...@baylink.com]
 Sent: Tuesday, February 05, 2013 8:09 AM
 To: NANOG
 Subject: How far must muni fiber operators protect ISP competition?
 
 - Original Message -
  From: Owen DeLong o...@delong.com
 
  Actually, as I understood what was proposed, you would bring Cable
  Coop and/or other such vendors into the colo space adjacent to the MMR
  and let them sell directly to the other service providers and/or
  customers.
 
 I am of two minds at this point, on this topic.
 
 The goal of this project, lying just atop improving the city's position in the
 world, is to do so by making practical competition between service providers,
 to keep prices as low as possible.
 
 when I delve into the realm of things like this, some people could make a
 relatively defensible argument that I am disadvantaging ISPs who are smart
 enough to know about this sort of service on their own, by helping out those
 who are not.
 
 I'm not sure if that argument outweighs the opposing one, which is that I
 should be *trying* to advantage those smaller, less savvy operators, as
 they're the sort I want as providers.
 
 I think this particular point is one of opinion; I solicit such.
 
 Cheers,
 -- jra
 --
 Jay R. Ashworth  Baylink   
 j...@baylink.com
 Designer The Things I Think   RFC 2100
 Ashworth  Associates http://baylink.pitas.com 2000 Land Rover DII
 St Petersburg FL USA   #natog  +1 727 647 1274




RE: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-28 Thread Tony Hain
Dobbins, Roland wrote:
 On Nov 28, 2012, at 11:18 AM, Andrew Sullivan wrote:
 
  If the entire deployment path automatically requires 84 layers of NAT
 sludge, that's what gets tested, cause it works for everybody.
 
 Hence my questions regarding the actual momentum behind end-to-end
 native IPv6 deployment.  Inertia is generally only overcome when there's a
 clear positive economic benefit to doing so - 'savings', assuming there
 actually are any, are a) almost always exaggerated and b) generally not a
 powerful enough incentive to alter the status quo.

That is why the preference is biased toward IPv6 when it is available. If
you expect the end users to make a conscious choice it will never happen. If
the underlying OS components make that choice for them, you end up with a
transition.
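
A rough sketch of what that OS-level choice looks like to an application
(Python here, with example.com standing in for any dual-stacked name; the
ordering comes from the host's default address-selection policy, RFC 6724,
not from anything the user or the application decides):

    import socket

    # On a dual-stack host the resolver typically returns the IPv6 result first,
    # so an app that simply walks the list prefers IPv6 without anyone deciding.
    results = socket.getaddrinfo("example.com", 443, type=socket.SOCK_STREAM)
    for family, _, _, _, sockaddr in results:
        print("IPv6" if family == socket.AF_INET6 else "IPv4", sockaddr[0])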

Open the page that Tim Chown sent out: http://6lab.cisco.com/stats/
Select World-scale data, then open the IPv6 Prefix & User graphs. Look at
the correlation between IPv6-alive prefixes & user %.

Those users never made a conscious choice; the OS did it as soon as it had a
path to the target. As more prefixes light up, the 'unconscious pent-up
demand' will make that User curve even steeper. The primary bottleneck at
this point is, and will remain, CPE. Fixing that will likely require a
financial incentive to get consumers to 'upgrade' their working box. Normal
lifecycle replacements will take a long time and require larger investments
in cgn's, so as soon as the new cpe is available in sufficient quantity at a
reasonable price point, any MBA can make the case you are looking for about
why it is cheaper to do a cpe subsidy than it is to invest in a never-ending
cgn saga (if they can't figure it out, have someone hire an MBA from the
mobile providers, who transition handsets off the old network all the time).

Getting the cpe vendors to ship in quantity requires the ISP engineering
organizations to say in unison "we are deploying IPv6 and will only
recommend products that pass testing". As long as there are voices calling
for 444nat in the flavor-of-the-week, cpe vendors will not focus on the long
term goal, because they will see the interim steps as opportunity to extract
more cash for short-life products. So will infrastructure vendors for that
matter. Indecision and scatter-shot approaches only increase the number of
things that need to be bought, deployed, and operated. That overall
additional cost is a complete waste to the operator / end user, and clear
profit for the vendors. 

You claim to be looking for the economic incentive, but are looking with
such a short time horizon that all you see are the 'waste' products vendors
are pushing to make a quick sale, knowing that you will eventually come back
for yet-another-hack to delay transition, and prop up your expertise in a
legacy technology. The same thing happened with the SNA faithful 15 years
ago, and history shows what happened there.

Tony





RE: Big day for IPv6 - 1% native penetration

2012-11-26 Thread Tony Hain
Dobbins, Roland wrote:

 On Nov 26, 2012, at 10:36 PM, Cameron Byrne wrote:
 
  Ipv6 is not important for users, it is important for network operators
who
 want to sustain their business.
 
 I agree with the first part; not sure I agree with the second part.

Operators are all free to choose their own planning horizons. History is
littered with the remnants of those with limited vision.

 
  Nope. Nobody will leave money on the table by alienating users.
 
 I think it may be possible to make money with compelling IPv6-only
 content/services/applications.

If you believe that is true you should do it and prove the point.
Unfortunately most people that actually deploy and support applications
can't make the math come out right when the access providers don't provide a
path to 99% of the paying customers, and then do just about everything they can
to hobble bypass approaches.

 
  Apple and msft os's now make a clear judgement on that. So, you need to
 update your perspective.
 
 I'm not very interested in their judgement.  So, I'm quite happy with my
 perspective, thanks.

The overall system includes the perspective of app developers, not just BGP
knob twisters, so the point of having a widespread api base is critical to
making progress. 

 
  Does not matter. And it will not happen.
 
 Proof by repeated assertion doesn't sway me.

It will happen, just not anytime soon. As the access networks get deployed,
traffic will shift, so eventually the question about the expense of
maintaining an ever more complex IPv4 version of the app to deal with
multi-layer nat to support a dwindling user base will have to be answered. 

 
  The better question, for an isp, is what kind of ipv4 secondary market
 budget do you have? How hot is your cgn running?  Like ALGs much ?
 Security and attribute much ?
 
 These are important, yes.
 
  Again , users dont care or know about v4 or v6. This is purely a network
 operator and app issue (cough cough ... skype).
 
 It's my contention that IPv6 won't be widely deployed unless/until end-
 customers call up their ISPs demanding this 'IPv6 or whatever' thing they
 need to accomplish some goal they have.

And it is the contention of app developers that they can't make money on an
app that can't reach 99% of the intended user base.  The entire point
of tunnels is to break this absurd deadlock where access won't deploy
without apps and apps won't deploy without access. Instead of getting on
with it, there is an ongoing entrenchment and search for the utopian
one-size-fits-all zero-cost transition plan. All this does is show how
widespread the denial is, where people are refusing to let go of an entire
career's worth of 'expertise' to keep up with the technology changes.
Fortunately some have moved on, and are deploying despite the extra effort
required in the short term. 

Once there are a substantial number of IPv6 access networks, the traffic
volume will shift rapidly and people will start asking why the core is even
aware of IPv4. At that point maintaining IPv4 will become the end user's
problem, and they will have to find legacy tunnel providers if they want to
keep that going. IPv4 won't die, it will just become an edge problem because
the only reason to keep it running will be devices with embedded IPv4-only
stacks which won't be replaced for 10 years. 

Tony







RE: Big day for IPv6 - 1% native penetration

2012-11-20 Thread Tony Hain
Tomas Podermanski wrote:
 
 Hi,
 
 It seems that today is a big day for IPv6. It is the very first time
when
 native IPv6 on google statistics
 (http://www.google.com/intl/en/ipv6/statistics.html) reached 1%. Some
 might say it is tremendous success after 16 years of deploying IPv6 :-)
 
 T.

Or one could look at it as: despite 16 years of lethargy and lack of
deployment by access networks, the traffic still finds a way.  ;)

Tony







RE: Big day for IPv6 - 1% native penetration

2012-11-20 Thread Tony Hain
Mike Jones wrote:
 
 On 20 November 2012 16:05, Patrick W. Gilmore patr...@ianai.net wrote:
  On Nov 20, 2012, at 08:45 , Owen DeLong o...@delong.com wrote:
 
  It is entirely possible that Google's numbers are artificially low
  for a number of reasons.
 
  AMS-IX publishes stats too:
  https://stats.ams-ix.net/sflow/
 
  This is probably a better view of overall percentage on the Internet than a
 specific company's content.  It shows order of 0.5%.
 
  Why do you think Google's numbers are lower than the real total?
 
 
 They are also different stats which is why they give such different numbers.
 
 In a theoretical world with evenly distributed traffic patterns if 1% of users
 were IPv6 enabled it would require 100% of content to be IPv6 enabled
 before your traffic stats would show 1% of traffic going over IPv6.
 
 If these figures are representative (google saying 1% of users and AMSIX
 saying 0.5% of traffic) then it would indicate that dual stacked users can 
 push
 ~50% of their traffic over IPv6. If this is even close to reality then that 
 would
 be quite an achievement.

If you assume that Youtube/Facebook/Netflix are 50% of the overall traffic, why 
wouldn't a dual stacked end point have half of its traffic as IPv6 after June???
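
The arithmetic behind those two measurements is simple enough to check
(assuming, as quoted above, that roughly 1% of users are usefully dual
stacked and that about half of the traffic they generate goes to
IPv6-enabled content):

    # Share of total traffic carried over IPv6 under those assumptions.
    dual_stack_users = 0.01     # Google's user measurement
    v6_content_share = 0.50     # assumed share of their traffic to dual-stacked content
    print(f"{dual_stack_users * v6_content_share:.2%}")   # ~0.50%, roughly the AMS-IX figure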

Tony







RE: IPv4 address length technical design

2012-10-03 Thread Tony Hain
 Sadiq Saif [mailto:sa...@asininetech.com] wrote:

 On Wed, Oct 3, 2012 at 12:13 PM, Chris Campbell ch...@ctcampbell.com
 wrote:
  Is anyone aware of any historical documentation relating to the choice of 32
 bits for an IPv4 address?
 
  Cheers.
 
 I believe the relevant RFC is RFC 791 - https://tools.ietf.org/html/rfc791

Actually that was preceded by RFC 760, which in turn was a derivative of IEN 
123. I believe the answer to the original question is partially available on a 
series of pages starting at :   
http://www.networksorcery.com/enp/default1101.htm 
IEN 2 is likely to be of particular interest ... 






RE: The Department of Work and Pensions, UK has an entire /8

2012-09-21 Thread Tony Hain
 -Original Message-
 From: Nick Hilliard [mailto:n...@foobar.org]
 Sent: Friday, September 21, 2012 9:13 AM
 To: Tony Hain
 Cc: nanog@nanog.org
 Subject: Re: The Department of Work and Pensions, UK has an entire /8
 
 On 21/09/2012 00:47, Tony Hain wrote:
  You are comparing IPv6 to the historical deployment of IPv4. Get with
  the times and realize that CGN/LSN breaks all those wonderful
  location-aware apps people are so into now, not to mention raising the
  cost for operating the network which eventually get charged back to the
 user.
 
 Address translation (already commonplace on many networks) is the
 consequence of the lack of a scalable addressing model.  Yup, NAT breaks
 lots of things.  Piles.  It sucks.
 
  Nanog in general has a problem taking the myopic viewpoint 'the only
  thing that matters is the network'.
 
 Networking people build and (in some cases) care about networks.  It's not
 the job of nanog people to fret about software development.
 
  The real costs are in app development and support
 
 It's certainly one of the costs.  And application developers have only
just
 begun to realise that they now need to be aware of the network.
 Previously, they could just open up sockets and fling data around.  Now
they
 need to handle protocol failover and multiple connectivity addresses and
the
 like.  Yep, it's an extra cost point - one which has been studiously
ignored by
 most ipv6 evangelists over the lifetime of ipv6.

App developers have never wanted to be aware of the network. As far as they
are concerned, it is the network manager's job to get bits from the endpoint
they are on to the endpoint they want to get to. Making them do contortions
to figure out that they need to, and then how to, tell the network to do
that adds complexity to their development and support. This is not an IPv6
issue, it is historic reality. The only place IPv6 gets involved is that it
offers a way back to the transparent end-to-end consistent addressing model.
The actual path may have firewalls which prevent communication, but that
happens on both versions and has nothing to do with the simplicity of a
consistent addressing model.

 
  That depends on your time horizon and budget cycles. If your org
  suffers from the short-term focus imposed by Wall Street,
 
 Most organisations are in this category, not just those beholden to the
 whims of Wall Street.
 
  If operators would put less effort into blocking transition
  technologies and channel that energy into deploying real IPv6, the
  sorry state wouldn't be there.
 
 There are never shortages of fingers when failures happen, whether they be
 used for wagging or pointing.
 
  For all the complaints about 6to4, it was never intended to be the
  mainstay, it was supposed to be the fall back for people that had a
  lame ISP that was not doing anything about IPv6.
 
 6to4 is full of fail.  Inter-as tunnelling is a bad idea.  

And something that is easy to fix by simply deploying a 6to4 relay in each
AS and announcing the correct IPv6 prefix set to make it symmetric. 

 Asymmetric inter-as
 tunnelling is worse, and asymmetric inter-as tunnelling based on anycast
 addresses with no hope of tracing blackholes is complete protocol fail.
 
 Despite the total failure that it causes the ipv6 world, we couldn't even
agree
 on v6ops@ietf that 6to4 should be recategorised as historical.  My
facepalm
 ran over my wtf.
 
 But really, 6to4 is a minor player.
 
  All the complaining about 6rd-waste
  is just another case of finding excuses because an
  ISP-deployed-6to4-router with a longer than /16 announcement into the
  IPv6 table does a more efficient job, and would not have required new
  CPE ... Yes that violates a one-liner in an RFC, but changing that
  would have been an easier fix than an entirely new protocol definition
and
 allocation policy discussion.
 
 I'm not understanding the 6rd hate here.  Intra-as tunnelling is fine,
because
 the network operator has control over all points along the way. It fixes
the
 problem of having eyeball access devices which don't support v6 properly.
 Don't hate it - it's useful for some operators, and quite good for
deploying v6
 over an older infrastructure.

There is no 6rd hate here. I personally spent many hours helping Remi tune
up the original doc and get it into the IETF process. My point was that we
didn't need to go through that entire process and have extended policy
discussions about what size prefix each org needs when they are deploying
6rd. At the end of the day, a 6to4 relay at every point that has a 6rd
router does exactly the same thing at the tunneling level (except that 6to4
always results in a /48 for the customer). It may have resulted in more
prefixes being announced into the IPv6 table, but given the ongoing
proliferation of intentional deaggregation for traffic engineering, there
may eventually be just as many IPv6 prefixes announced with 6rd.

 
  So far neither MSFT or AAPL has been willing

RE: Big Temporary Networks

2012-09-20 Thread Tony Hain
 -Original Message-
 From: Masataka Ohta [mailto:mo...@necom830.hpcl.titech.ac.jp]
 Sent: Wednesday, September 19, 2012 11:21 PM
 To: David Miller
 Cc: nanog@nanog.org
 Subject: Re: Big Temporary Networks
 
 David Miller wrote:
 
  So, a single example of IPv4 behaving in a suboptimal manner would be
  enough to declare IPv4 not operational?
 
 For example?

Your own example ---

 -Original Message-
 From: Masataka Ohta [mailto:mo...@necom830.hpcl.titech.ac.jp]
 Sent: Wednesday, September 19, 2012 1:26 AM
 To: nanog@nanog.org
 Subject: Re: Big Temporary Networks

 ...  that a very crowded train arrives at a station and all the smart
phones of passengers try to connect to APs ...

IPv4 has a trainload of devices unicasting and retransmitting all the
dropped packets, whereas an IPv6 multicast RA allows all the devices to
configure based on reception of a single packet. Therefore IPv4 is
suboptimal in its abuse of the air link, which could have been used for
real application traffic instead of being wasted on device configuration.
Thus, by extension of your logic, it is not operational.


Just because you personally want IPv6 to be nothing more than IPv4 in every
aspect is no reason to troll the nanog list and create confusion that causes
others to delay their IPv6 deployment. Your complaints about IPv6 behavior
on wifi ignore the point that IPv6 ND behavior was defined before or in
parallel with wifi, which was defined by a different committee. There will always be
newer L2 technologies that arrive after an L3 protocol is defined, and the
behavior of the L3 will be 'suboptimal' for the new L2. When the issue is
serious enough to warrant documentation, addendum documents are issued. When
it is simply a matter of personal preference, it is hard to get enough
support to get those documents published. 

Tony





RE: The Department of Work and Pensions, UK has an entire /8

2012-09-20 Thread Tony Hain
 -Original Message-
 From: Joe Maimon [mailto:jmai...@ttec.com]
 Sent: Thursday, September 20, 2012 7:11 AM
 To: George Herbert
 Cc: nanog@nanog.org
 Subject: Re: The Department of Work and Pensions, UK has an entire /8

 ...
 
 Baking in bogonity is bad.

Really ???  If stack vendors had not taken the statement about 'future use'
at face value and had instead built their stacks assuming 240/4 was just like
all the other unicast space, and then someone came up with a clever idea that
was incompatible with that deployed assumption such that the vast deployed
base would be confused by any attempt to deploy it, wouldn't your argument be
that 'the stack vendors broke what I want by not believing the text that said
future use'? Undefined means undefined, so there is no reasonable way to test
that the behavior is consistent with some future definition. The only thing a
stack vendor can do is make sure the space is unused until it is defined. At
that point they can fix future products, but there is no practical fix for
the deployed base.

 
 Predicting the (f)utility of starting multi-year efforts in the present
for future
 benefit is self-fulfilling.

To some degree, yes. In this particular case, why don't you personally go out
and tell all those people globally (who have what they consider to be
perfectly working machines) that they need to pay for an upgrade to a
yet-to-be-shipped version of software so you can make use of a handful of
addresses and buy yourself a couple of years' delay of the inevitable? If you
can accomplish that, I am sure the list would bow down to your claims that
there was not enough effort put into reclamation.

 
 Let us spin this another way. If you cannot even expect mild change such
as
 240/4 to become prevalent enough to be useful, on what do you base your
 optimism that the much larger changes IPv6 requires will?

240/4 is only 'mild' to someone that doesn't have to pay for the changes.
IPv6 does require more change, but in exchange it provides longevity that
240/4 can't. 

Denial is a hard thing to get over, but it is only the first stage in the
process of grieving. IPv4 is dead, and while the corpse is still wandering
about, it will collapse soon enough. No amount of bargaining or negotiation
will prevent that. Just look back to the claims in the '90s about
SNA-Forever and 'Serious Business doesn't operate on research protocols' to
see what is ahead. Once the shift starts it will only take 5 years or so
before people start asking what all the IPv4 fuss was about. 

IPv6 will happen with or without the ISPs, just like IPv4 happened despite
telco efforts to constrain it. The edges need functionality, and they will
get it by tunneling over the ISPs if necessary, just like the original
Internet deployed as a tunnel over the voice network. You can choose to be a
roadblock, or choose to be part of the solution. History shows that those
choosing to be part of the solution win out in the end.

Tony

 
 Joe




RE: The Department of Work and Pensions, UK has an entire /8

2012-09-20 Thread Tony Hain
 -Original Message-
 From: Nick Hilliard [mailto:n...@foobar.org]
 Sent: Thursday, September 20, 2012 2:37 PM
 To: Tony Hain
 Cc: nanog@nanog.org
 Subject: Re: The Department of Work and Pensions, UK has an entire /8
 
 On 20/09/2012 20:14, Tony Hain wrote:
  Once the shift starts it will only take 5 years or so before people
  start asking what all the IPv4 fuss was about.
 
 Tony, ipv4 succeeded because it was compelling enough to do so (killer
apps
 of the time: email / news / ftp, later www instead of limited BBS's,
teletext,
 etc), because the billing model was right for longhaul access (unmetered
 instead of the default expensive models at the time) and because it worked
 well over both LANs and WANs (unlike SNA, IPX, decnet, etc).
 
 ipv6 has none of these benefits over ipv4. 

You are comparing IPv6 to the historical deployment of IPv4. Get with the
times and realize that CGN/LSN breaks all those wonderful location-aware
apps people are so into now, not to mention raising the cost of operating
the network, which eventually gets charged back to the user.

 The only thing in its favour is a
 scalable addressing model.  Other than that, it's a world of pain with
 application level support required for everything, poor CPE connectivity,
lots
 of ipv6-incapable hardware out in the world, higher support costs due to
 dual-stacking, lots of training required, roll-out costs, licensing costs
(even on
 service provider equipment - and both vendors C and J are guilty as
accused
 here),

Nanog in general has a problem taking the myopic viewpoint 'the only thing
that matters is the network'. The real costs are in app development and
support, so crap in the middle of the network (which is only there to keep
the network managers from learning something new) will be worked around.
While shifting apps to IPv6 has a cost, doing that is a one-time operation
vs. having to do it over and over for each app class and new wart that the
network managers throw into the middle. IPv4 even with all its warts makes a
reasonable global layer-2 network which IPv6 will run over just fine. (Well
mostly ... I am still chasing a 20ms tunnel asymmetry which is causing all
my IPv6 NTP peers to appear to be off by 10ms)


 poor application failover mechanisms (ever tried using outlook when
 ipv6 connectivity is down?), etc.

Outlook fails when IPv4 service is lame (but I could have stopped at the
second word). I use Outlook over IPv6 regularly, and have had more problems
with exhausted IPv4 DHCP pools than I have with Outlook over IPv6 in the
last 10 years. 

 
 The reality is that no-one will seriously move to ipv6 unless the pain of
 address starvation substantially outweighs all these issues from a
business /
 financial perspective.  It may be happening in places in china - where
there is
 ipv4 significant address starvation and massive growth, but in places of
 effectively full internet penetration and relatively plentiful
 ipv4 addresses (e.g. the US + Europe + large parts of asia), its
disadvantages
 substantially outweigh its sole advantage.

That depends on your time horizon and budget cycles. If your org suffers
from the short-term focus imposed by Wall Street, then you will have a hard
time making a case before the customers have started jumping ship in
significant enough numbers that you will never get them back. If your time
horizon is measured in decade units, you will have an easier time explaining
how a 5 year roll out will alleviate costs and minimize pain down the road.
Most of the press and casual observers didn't get the point of the 2003
DoD/US Fed mandates for 2008. That date was not picked because they believed
they needed IPv6 in production by 2008, it was picked because they had
significant new equipment purchases starting at that point that would be in
production well past the point when it becomes likely those devices will
find themselves in some part of the world where 'IPv6-only' is the network
that got deployed. The only way you turn a ship that big is to set a hard
date and require things that won't make sense to most people until much
later. 

 
 I wish I shared your optimisation that we would soon be living in an ipv6
 world, but the sad reality is that its sorry state bears more than a
passing
 resemblance to the failure of the OSI protocol stack.

If operators would put less effort into blocking transition technologies and
channel that energy into deploying real IPv6, the sorry state wouldn't be
there. For all the complaints about 6to4, it was never intended to be the
mainstay, it was supposed to be the fall back for people that had a lame ISP
that was not doing anything about IPv6. All the complaining about 6rd-waste
is just another case of finding excuses because an ISP-deployed-6to4-router
with a longer than /16 announcement into the IPv6 table does a more
efficient job, and would not have required new CPE ... Yes that violates a
one-liner in an RFC, but changing that would have been

RE: using reserved IPv6 space

2012-07-14 Thread Tony Hain
Randy Bush wrote:
  The fact that your prefix is a Secret Sauce that isn't known to the
  rest of the world won't matter much to an attacker.  One 'ifconfig' on
  whatever beachhead machine the attacker has inside your net, and it's
  not Secret Sauce anymore, it's just another bottle of Thousand Island
  dressing...
 
 security through obscurity is such tempting koolaid.  people fall for it
 continually and repeatedly.

Some people have different Layer 8-9 requirements than others. I am not
saying they are 'right', just that 'easier' is a relative term based on what
part of the problem is generating the most heat at the moment.

 
 i especially like the one where filtering ula at your border is thought to
be any
 different than filtering a bit of global at your border.

There is no difference in the local filtering function, but *IF* all transit
providers put FC00::/7 in bogon space and filter it at every border, there
is a clear benefit when someone fat-fingers the config script and announces
what should be a locally filtered prefix (don't we routinely see unintended
announcements in the global BGP table?). I realize that is a big IF, but
bogon filtering happens fairly consistently in IPv4, so there is no reason
to believe it will be less so in IPv6. 
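
A minimal sketch of that bogon check, assuming Python's ipaddress module and
a couple of made-up prefixes; an operator would express the same test as a
prefix filter on border sessions:

    import ipaddress

    ULA = ipaddress.ip_network("fc00::/7")

    def is_ula(prefix: str) -> bool:
        # True when an announced prefix falls inside the ULA space that should
        # never appear in the global table.
        net = ipaddress.ip_network(prefix)
        return net.version == 6 and net.subnet_of(ULA)

    print(is_ula("fd12:3456:789a::/48"))   # True  -> drop at the border
    print(is_ula("2001:db8::/32"))         # False -> normal processing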

Tony







RE: IPv6 /64 links (was Re: ipv6 book recommendations?)

2012-06-12 Thread Tony Hain
Masataka Ohta wrote:
 Karl Auer wrote:
 
  : I've seen links with up to 15k devices where ARP represented
  : a significant part of the link usage, but most weren't (yet) IPv6.
 
  MLD noise around a router is as bad as ARP/ND noise.
 
  Possibly true, but that's another discussion.
 
 Then, you could have simply argued that there is no ARP problem with IPv6,
 because ND, not ARP, were another discussion.
 
  That's how IPv6 along with SLAAC is totally broken.
 
  I think we have different ideas of what constitutes totally broken.
 
 It is because you avoid to face the reality of MLD.


MLD != ND
MLD == IGMP
ND ~= ARP

ND is less overhead on end systems than ARP because it is only received by
nodes that are subscribed to a specific multicast group, rather than being
broadcast to and received by all. There is no difference in L2 resolution traffic
at the packet level on the network. There are multicast join messages for
groups specific to ND use, but those should not be frequent, and were a
specific tradeoff in minor additional network load to reduce significant end
system load. There are DAD messages that impact group members, but in IPv4
there are gratuitous ARP broadcasts which impact all nodes, so while the
number of messages for that function is the same, the system-wide impact is
much lower.
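
The mechanism that makes ND targeted rather than broadcast is the
solicited-node multicast group, derived from the low-order 24 bits of the
address being resolved (RFC 4291); a small sketch of the derivation, using a
documentation address:

    import ipaddress

    def solicited_node(addr: str) -> ipaddress.IPv6Address:
        # ff02::1:ff00:0/104 plus the low 24 bits of the unicast address; only
        # hosts subscribed to this group ever see the neighbor solicitation.
        low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
        base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
        return ipaddress.IPv6Address(base | low24)

    print(solicited_node("2001:db8::abc:1234"))   # ff02::1:ffbc:1234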

Multicast group management is inherently noisy, but a few more bits on the
wire reduces the load on the significantly larger number of end systems. Get
over it ... 

Tony





RE: IPv6 /64 links (was Re: ipv6 book recommendations?)

2012-06-12 Thread Tony Hain
Masataka Ohta
 Tony Hain wrote:
 
  It is because you avoid to face the reality of MLD.
 
  MLD != ND
  MLD == IGMP
 
 OK.
 
  ND ~= ARP
 
 Wrong, because ND requires MLD while ARP does not.

Note the ~ ...  And ARP requires media level broadcast, which ND does not.
Not all media support broadcast. 

 
  ND is less overhead on end systems than ARP
 
 Today, overhead in time is more serious than that in processor load.
 
 As ND requires MLD and DAD, overhead in time when addresses are
 assigned is very large (several seconds or more if multicast is not very
 reliable), which is harmful especially for quicking moving mobile hosts.

So leveraging broadcast is why just about every implementation does a
gratuitous ARP-and-wait multiple times, which is no different than DAD
timing? MLD does not need to significantly increase time for address
assignment. If hosts are moving quickly the fabric needs to be able to keep
up with that anyway, so adding a new multicast member needs to be fast
independent of IPv6 address assignment.

 
  because it is only received by
  nodes that are subscribed to a specific multicast group rather than
  broadcast reception by all.
 
 Broadcast reception by all is good because that's how ARP can detect
 duplicated addresses without DAD overhead in time.

BS ... Broadcasts are dropped all the time, so some nodes miss them and they
need to be repeated which causes further delay. On top of that, the
widespread practice of a gratuitous ARP was the precedent for the design of
DAD. 

 
  Multicast group management is inherently noisy,
 
 Thus, IPv6 is inherently noisy while IPv4 is not.
 
  but a few more bits on the
  wire reduces the load on the significantly larger number of end
  systems. Get over it ...
 
 First of all, with CATENET model, there is no significantly large number
of end
 systems in a link.

Clearly you have never looked at some networks with > 64k nodes on a link.
Not all nodes move, and not all networks are a handful of end systems per
segment.

 
 Secondly, even if there are significantly large number of end systems in a
 link, with the end to end principle, network equipments must be dumb while
 end systems must be intelligent, which means MLD snooping is unnecessary
 and end systems must take care of themselves, violation of which results
in
 inefficiencies and incompleteness of ND.

MLD snooping was a recent addition to deal with intermediate network devices
that want to insert themselves into a process that was designed to bypass
them. That is not a violation of the end systems taking care of themselves,
it is an efficiency issue some devices chose to assert that isn't strictly
required for end-to-end operation. 

Just because you have never liked the design choices and tradeoffs made in
developing IPv6 doesn't make them wrong. I don't know anybody that is happy
with all aspects of the process, but that is also true for all the bolt-on's
developed to keep IPv4 running over the last 30 years. IPv4 had its day, and
it is time to move on. Continuing to complain about existing IPv6 design
does nothing productive. If there are constructive suggestions to make the
outcome better, take them to the IETF just like all the constructive changes
made to IPv4.

Tony





RE: Yahoo and IPv6

2011-05-10 Thread Tony Hain
Igor Gashinsky wrote:
 ::  In any case, the content side can mitigate all of the latency
 related issues
 ::  they complain about in 6to4 by putting in a local 6to4 router and
 publishing
 ::  the corresponding 2002:: prefix based address in DNS for their
 content. They
 ::  choose to hold their breath and turn blue, blaming the network
 for the lack
 ::  of 5-9's access to the eyeballs when they hold at least part of a
 solution
 ::  in their own hands.
 :: 
 ::  Looking at that from the content provider side for a second, what
 is their motivation for doing it? The IETF created 6to4, and some
 foolish OS and/or hardware vendors enabled it by default. So you're
 saying that it's up to the content providers to spend money to fix a
 problem they didn't create, when the easy/free solution is simply not
 to turn on IPv6 at all? I completely fail to see an incentive for the
 content providers to do this, but maybe I'm missing something.
 :: 
 
 So, just for the record, I am not speaking for my employer, and am
 speaking strictly for myself here, and I'm going to try to keep this my
 one and only message about finger-pointing :)
 
 The time for finger-pointing is over, period, all we are all trying to
 do
 now is figure out how to deal with the present (sucky) situation. The
 current reality is that for a non-insignificant percentage of users
 when
 you enable dual-stack, they are gong to drop off the face of the
 planet.
 Now, for *you*, 0.026% may be insignificant (and, standalone, that
 number
 is insignificant), but for a global content provider that has ~700M
 users,
 that's 182 *thousand* users that *you*, *through your actions* just
 took
 out.. 182,000 - that is *not* insignificant
 
 *That* is what world ipv6 day is about to me -- getting enough
 attention
 at the problem so that all of us can try to move the needle in the
 right
 direction. If enough users realize that they are broken, and end up
 fixing themselves, then it will be a resounding success. And, yes, to
 me, disabling broken ipv6 *is* fixing themselves. If they turn broken
 ipv6 into working ipv6, even better, I just hope all the access
 networks
 staffed up their helpdesk to deal with the call volumes..
 
 And, if the breakage stats remain bad, well, that's what DNS
 whitelists/blacklists are going to be for..
 
 :: While we're not directly a content provider, we do host several of
 them and we do
 :: run the largest network of 6to4 relays that I am aware of. In our
 experience at HE,
 :: this has dramatically improved the IPv6 experience for our clients.
 As such, I would
 :: think that providing a better user experience should serve as
 reasonable motivation
 :: for any rational content provider. It's not like running 6to4 relays
 is difficult or
 :: expensive.
 
 No, running *return* 6to4 relays is not difficult at all, in fact, some
 content providers have a ton of them up right now. The problem is that
 content providers can't control the forward relays, 

So take the relays out of the path by putting up a 6to4 router and a 2002::
prefix address on the content servers. Longest match will cause 6to4
connected systems to prefer that prefix while native connected systems will
prefer the current prefix. The resulting IPv4 path will be exactly what it
is today door-to-door. Forcing traffic through a third party by holding to a
purity principle for dns, and then complaining about the results is not
exactly the most productive thing one could do.
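
The 2002:: prefix in question is purely mechanical to derive: 2002::/16
followed by the 32 bits of the server's public IPv4 address gives the site's
/48 (RFC 3056). A small sketch, using the documentation address 192.0.2.1 as
a stand-in for a content server:

    import ipaddress

    def sixto4_prefix(v4_literal: str) -> ipaddress.IPv6Network:
        # 2002: plus the IPv4 address yields the site's 6to4 /48.
        v4 = ipaddress.IPv4Address(v4_literal)
        return ipaddress.IPv6Network(((0x2002 << 112) | (int(v4) << 80), 48))

    print(sixto4_prefix("192.0.2.1"))   # 2002:c000:201::/48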

 or protocol 41
 filtering that's out in the wild. 

Putting 2002:: in dns will not fix this, but it is not clear to me where
this comes from. The argument is that enterprise firewalls are blocking it,
but that makes no sense because many/most enterprises are in 1918 space so
6to4 will not be attempted to begin with, and for those that have public
space internally the oft-cited systems that are domain members will have
6to4 off by default. To get them to turn it on would require the IT staff to
explicitly enable it for the end systems but then turn around and block it
at the firewall ... Not exactly a likely scenario.

The most likely source of public space for non-domain joined systems would
be universities, but no one that is complaining about protocol 41 filtering
has shown that the source addresses are coming from those easily
identifiable places. 

That leaves the case of networks that use public addresses internally, but
nat those at the border. This would confuse the client into thinking 6to4
should be viable, only to have protocol 41 blocked by the nat. These
networks do exist, and the only way to detect them would be to have an
instrumented 6to4 router or relay that compared the IPv4-bits in the source
address between the two headers. They don't have to match exactly because a
6to4 router would use its address as a source, but if the embedded bits said
25.25.25.25 while the external IPv4 header said 18.25.25.25 one might
suspect there was a nat in the path.
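
A sketch of that comparison, assuming an instrumented relay that can see both
the outer IPv4 source and the inner 6to4 source address (the addresses are the
example values used above):

    import ipaddress

    def embedded_v4(src6: str) -> ipaddress.IPv4Address:
        # Pull the IPv4 address embedded in a 2002::/16 (6to4) source address.
        bits = int(ipaddress.IPv6Address(src6)) >> 80
        return ipaddress.IPv4Address(bits & 0xFFFFFFFF)

    def suspect_nat(inner_v6_src: str, outer_v4_src: str) -> bool:
        # A 6to4 router may legitimately use its own address as the outer source,
        # so this is only a hint; wildly different bits suggest a NAT in the path.
        return embedded_v4(inner_v6_src) != ipaddress.IPv4Address(outer_v4_src)

    # 25.25.25.25 embedded in the v6 source vs. 18.25.25.25 on the wire.
    print(suspect_nat("2002:1919:1919::1", "18.25.25.25"))   # True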

 Also, not all breakage is 

RE: Yahoo and IPv6

2011-05-09 Thread Tony Hain
 -Original Message-
 From: Doug Barton [mailto:do...@dougbarton.us]
 Sent: Monday, May 09, 2011 12:11 PM
 To: Jared Mauch
 Cc: nanog@nanog.org; Arie Vayner
 Subject: Re: Yahoo and IPv6
 
 On 05/09/2011 10:27, Jared Mauch wrote:
  I do feel the bar that Yahoo is setting is too high.  There are a lot
 of network elements that are broken, either DNS servers, home
 'gateway/nat' devices, or other elements in the delegation chain.
 
 Publicly held corporations are responsible to their shareholders to get
 eyeballs on their content. *That* is their job, not promoting cool new
 network tech. When you have millions of users hitting your site every
 day losing 1/2000 is a large chunk of revenue. The fact that the big
 players are doing world IPv6 day at all should be celebrated, promoted,
 and we should all be ready to take to heart the lessons learned from
 it.
 
 The content providers are not to be blamed for the giant mess that IPv6
 deployment has become. If 6to4 and Teredo had never happened, in all
 likelihood we wouldn't be in this situation today.

Which situation ??? The one where the content can demonstrate how broken the
networks really are? Or the one where the content sites are exposed for
their lack of prior planning? 

The entire point of those technologies you are complaining about was to
break the stalemate between content and network, because both sides will
always wait and blame the other. The fact that the content side chose to
wait until the last possible minute to start is where the approach falls
down. Expecting magic to cover for lack of proactive effort 5-10 years ago
is asking a bit much, even for the content mafia. 

In any case, the content side can mitigate all of the latency related issues
they complain about in 6to4 by putting in a local 6to4 router and publishing
the corresponding 2002:: prefix based address in DNS for their content. They
choose to hold their breath and turn blue, blaming the network for the lack
of 5-9's access to the eyeballs when they hold at least part of a solution
in their own hands.

We are about the witness the most expensive, complex, blame-fest of a
transition that one could have imagined 10 years ago. This is simply due to
the lack of up-front effort that both sides have demonstrated in getting to
this point. Now that time has expired, all that is left to do is sit back
and watch the fireworks.

Tony

 
 
 --
 
   Nothin' ever doesn't change, but nothin' changes much.
   -- OK Go
 
   Breadth of IT experience, and depth of knowledge in the DNS.
   Yours for the right price.  :)  http://SupersetSolutions.com/





RE: IPv6 - a noobs prespective

2011-02-09 Thread Tony Hain
Franck Martin wrote:
 This is dual stack, my recommendation is disable IPv6 on your servers
 (so your clients will still talk to them on IPv4 only), and let your
 client goes IPv6 first. Once you understand what is happening, get on
 IPv6 on your servers.

You don't have to disable IPv6 on the servers, just don't put a AAAA in dns.
The simplest way to move forward is to get the entire path in place without the
key to knowing it is there, then for a few test subjects either provide a
different dns response, or distribute a host file. Making the mass change of 
enabling the servers at the point you expect service to work is just asking for 
support calls...
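
One way to confirm the whole v6 path works before the AAAA ever goes into
public DNS is to hit the server by literal address (a sketch only, with a
documentation address and a placeholder host name):

    import socket

    def v6_reachable(addr: str, port: int = 80, host: str = "www.example.com") -> bool:
        # Connect to the server's IPv6 literal and confirm it answers HTTP.
        req = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        try:
            with socket.create_connection((addr, port), timeout=5) as s:
                s.sendall(req.encode())
                return s.recv(12).startswith(b"HTTP/")
        except OSError:
            return False

    print(v6_reachable("2001:db8::80"))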

 
 Alternatively, use someone else network to understand IPv6. Attend,
 NANOG, ICANN, IETF, they always have IPv6 enabled, you can better
 understand how your machine reacts, what tools you have, how to do
 ping, debug, packet capture,...
 
 For the firewall, shorewall does IPv4 and IPv6, with a relatively
 simple interface and is free...
 
 - Original Message -
 From: William Herrin b...@herrin.us
 To: Robert Lusby nano...@gmail.com
 Cc: nanog@nanog.org
 Sent: Thursday, 10 February, 2011 7:03:01 AM
 Subject: Re: IPv6 - a noobs prespective
 
 On Wed, Feb 9, 2011 at 6:00 AM, Robert Lusby nano...@gmail.com wrote:
  I also get why we need IPv6, that it means removing the NAT (which,
 surprise
  surprise also runs our Firewall), and I that I might need new kit for
 it.
 
  I am however *terrified* of making that move. There is so many new
 phrases,
  words, things to think about etc
 
 The thing that terrifies me about deploying IPv6 is that apps
 compatible with both are programmed to attempt IPv6 before IPv4. This
 means my first not-quite-correct IPv6 deployments are going to break
 my apps that are used to not having and therefore not trying IPv6. But
 that's not the worst part... as the folks my customers interact with
 over the next couple of years make their first not-quite-correct IPv6
 deployments, my access to them is going to break again. And again. And
 again. And I won't have the foggiest idea who's next until I get the
 call that such-and-such isn't working right.
 
 Regards,
 Bill Herrin
 
 
 
 --
 William D. Herrin  her...@dirtside.com  b...@herrin.us
 3005 Crane Dr. .. Web: http://bill.herrin.us/
 Falls Church, VA 22042-3004





RE: ipv4's last graph

2011-02-02 Thread Tony Hain
So in the interest of 'second opinions never hurt', and I just can't get my
head around APnic sitting at 3 /8's, burning 2.3 /8's in the last 2 months
and the idea of a 50% probability that their exhaustion event occurs Aug.
2011, here are a couple other graphs to consider.
http://www.tndh.net/~tony/ietf/IPv4-rir-pools.pdf
http://www.tndh.net/~tony/ietf/IPv4-rir-pools-zoom.pdf

Tony


 -Original Message-
 From: Geoff Huston [mailto:g...@apnic.net]
 Sent: Tuesday, February 01, 2011 12:12 PM
 To: Randy Bush
 Cc: NANOG Operators' Group
 Subject: Re: ipv4's last graph
 
 
 On 01/02/2011, at 7:02 PM, Randy Bush wrote:
 
  with the iana free pool run-out, i guess we won't be getting those
 nice
  graphs any more.  might we have one last one for the turnstiles?  :-
 )/2
 
  and would you mind doing the curves now for each of the five rirs?
  gotta give us all something to repeat endlessly on lists and in
 presos.
 
 but of course.
 
 http://www.potaroo.net/tools/ipv4/rir.jpg
 
 This is a different graph - it is a probabilistic graph that shows the
 predicted month when the RIR will be down to its last /8 policy
 (whatever that policy may be), and the relative probability that the
 event will occur in that particular month.
 
 The assumption behind this graph is that the barricades will go up
 across the regions and each region will work from its local address
 pools and service only its local client base, and that as each region
 gets to its last /8 policy the applicants will not transfer their
 demand to those regions where addresses are still available. Its not
 possible to quantify how (in)accurate this assumption may be, so beyond
 the prediction of the first exhaustion point (which is at this stage
 looking more likely to occur in July 2011 than not) the predictions for
 the other RIRs are highly uncertain.
 
 Geoff





RE: ipv4's last graph

2011-02-02 Thread Tony Hain
 -Original Message-
 From: Vincent Hoffman [mailto:jh...@unsane.co.uk]
 Sent: Wednesday, February 02, 2011 9:44 AM
 To: nanog@nanog.org
 Subject: Re: ipv4's last graph
 
 On 02/02/2011 17:22, Matthew Petach wrote:
  On Wed, Feb 2, 2011 at 9:01 AM, Tony Hain alh-i...@tndh.net wrote:
  So in the interest of 'second opinions never hurt', and I just can't
 get my
  head around APnic sitting at 3 /8's, burning 2.3 /8's in the last 2
 months
  and the idea of a 50% probability that their exhaustion event occurs
 Aug.
  2011, here are a couple other graphs to consider.
  http://www.tndh.net/~tony/ietf/IPv4-rir-pools.pdf
  http://www.tndh.net/~tony/ietf/IPv4-rir-pools-zoom.pdf
 
  Tony
  Two things:
 
  1) you'll get better uptake of your graph if it's visible as a simple
   image, rather than requiring a PDF download.  :/
 Not wishing to advertise google but
 
 http://docs.google.com/viewer?url=http://www.tndh.net/~tony/ietf/IPv4-
 rir-pools.pdf
 and
 http://docs.google.com/viewer?url=http://www.tndh.net/~tony/ietf/IPv4-
 rir-pools-zoom.pdf
 
 
 works for me without needing to download a pdf viewer

For some reason that viewer didn't work here, so I added jpg's to the site.
http://www.tndh.net/~tony/ietf/IPv4-rir-pools.jpg
http://www.tndh.net/~tony/ietf/IPv4-rir-pools-zoom.jpg


 
 Vince
 
 
 
  2) labelling the Y axis would help; I'm not sure what the scale
  of 1-8 represents, unless it's perhaps the number of slices of
  pizza consumed per staff member per address allocation request?

I thought about leaving it off completely, but figured I would be asked for
scale. It is /8's remaining until they drop into their 'last allocation'
policy. I will see if I can figure out how to fit that into something
readable. 


 
  But I do agree with what seems to be your driving message, which
  is that Geoff could potentially be considered optimistic.  ^_^;

Geoff has always been the optimist ...  ;0


 
  Matt





RE: ipv4's last graph

2011-02-02 Thread Tony Hain
 -Original Message-
 From: Richard Barnes [mailto:richard.bar...@gmail.com]
 Sent: Wednesday, February 02, 2011 10:44 AM
 To: Tony Hain
 Cc: Vincent Hoffman; nanog@nanog.org
 Subject: Re: ipv4's last graph
 
 Note that the ARIN, APNIC, and RIPE lines should all basically level
 out to asymptotes after they hit 1 /8 left, due to the soft run out
 policies in place [1][2][3].  Either that, or just consider arriving
 at 1 /8 left as depletion.

The /8 that applies to those policies has not been allocated yet ... ask
again tomorrow.

Would it make more sense to mark the graph at 1 with an asterisk, or just
leave those out of this graph altogether? If you care about how well the
policy is managing the end of the pool, then marking 1 is the right thing,
while if you only care about when 'old policy' stops then it makes more
sense to just leave them off.

Tony

 
 Geoff: How are your graphs dealing with these policies?
 
 [1] https://www.arin.net/policy/nrpm.html#four10
 [2] http://www.apnic.net/policy/add-manage-policy#9.10.1
 [3] http://ripe.net/ripe/policies/proposals/2010-02.html
 
 
 
 On Wed, Feb 2, 2011 at 1:11 PM, Tony Hain alh-i...@tndh.net wrote:
  -Original Message-
  From: Vincent Hoffman [mailto:jh...@unsane.co.uk]
  Sent: Wednesday, February 02, 2011 9:44 AM
  To: nanog@nanog.org
  Subject: Re: ipv4's last graph
 
  On 02/02/2011 17:22, Matthew Petach wrote:
   On Wed, Feb 2, 2011 at 9:01 AM, Tony Hain alh-i...@tndh.net
 wrote:
   So in the interest of 'second opinions never hurt', and I just
 can't
  get my
   head around APnic sitting at 3 /8's, burning 2.3 /8's in the
 last 2
  months
   and the idea of a 50% probability that their exhaustion event
 occurs
  Aug.
   2011, here are a couple other graphs to consider.
   http://www.tndh.net/~tony/ietf/IPv4-rir-pools.pdf
   http://www.tndh.net/~tony/ietf/IPv4-rir-pools-zoom.pdf
  
   Tony
   Two things:
  
   1) you'll get better uptake of your graph if it's visible as a
 simple
        image, rather than requiring a PDF download.  :/
  Not wishing to advertise google but
 
 
 http://docs.google.com/viewer?url=http://www.tndh.net/~tony/ietf/IPv4-
  rir-pools.pdf
  and
 
 http://docs.google.com/viewer?url=http://www.tndh.net/~tony/ietf/IPv4-
  rir-pools-zoom.pdf
 
 
  works for me without needing to download a pdf viewer
 
  For some reason that viewer didn't work here, so I added jpg's to the
 site.
  http://www.tndh.net/~tony/ietf/IPv4-rir-pools.jpg
  http://www.tndh.net/~tony/ietf/IPv4-rir-pools-zoom.jpg
 
 
 
  Vince
 
 
 
   2) labelling the Y axis would help; I'm not sure what the scale
   of 1-8 represents, unless it's perhaps the number of slices of
   pizza consumed per staff member per address allocation request?
 
  I thought about leaving it off completely, but figured I would be
 asked for
  scale. It is /8's remaining until they drop into their 'last
 allocation'
  policy. I will see if I can figure out how to fit that into something
  readable.
 
 
  
   But I do agree with what seems to be your driving message, which
   is that Geoff could potentially be considered optimistic.  ^_^;
 
  Geoff has always been the optimist ...  ;0
 
 
  
   Matt
 
 
 
 




RE: ipv4's last graph

2011-02-01 Thread Tony Hain
The individual RIR graphs won't be around long enough to be worth the
effort... ;) 

FWIW: the Jan. 2011 global burn rate (outbound from the RIRs) was one
/24-equivalent every 18.97 seconds. At the Jan. rate, APnic won't last to June
and Ripe might make it to the end of August, then chaos ensues. Is there really
any value in trying to distribute graphs that will all be flat before the
end of the year?

Tony


 -Original Message-
 From: Randy Bush [mailto:ra...@psg.com]
 Sent: Tuesday, February 01, 2011 12:02 AM
 To: Geoff Huston
 Cc: NANOG Operators' Group
 Subject: ipv4's last graph
 
 with the iana free pool run-out, i guess we won't be getting those nice
 graphs any more.  might we have one last one for the turnstiles?  :-)/2
 
 and would you mind doing the curves now for each of the five rirs?
 gotta give us all something to repeat endlessly on lists and in presos.
 
 randy




RE: Using IPv6 with prefixes shorter than a /64 on a LAN

2011-01-25 Thread Tony Hain
Owen DeLong wrote:
 ..
 I suspect that there are probably somewhere between 30,000
 and 120,000 ISPs world wide that are likely to end up with a /32
 or shorter prefix.

A /32 is the value that a start-up ISP would have. Assuming that there is a
constant average rate of startups/failures per year, the number of /32's in
the system should remain fairly constant over time. 

Every organization with a *real* customer base should have something
significantly shorter than a /32. In particular, every organization that says
"I can't give my customers prefix length X because I only have a /32" needs to
go back to ARIN today and trade that in for a *real block*. There should be at
least 10 organizations in the ARIN region that qualify for a /20 or shorter,
and most would likely be /24 or shorter.

As Owen said earlier, proposal 121 is intended to help people through the
math. Please read the proposal, and even if you don't want to comment on the
PPML list about it, take that useless /32 back to ARIN and get a *real
block* today.
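
The underlying math (assuming the /48 per customer that policy allows) is
simple enough to check:

    # /48 customer sites available per ISP allocation size.
    for isp_prefix in (32, 28, 24, 20):
        print(f"/{isp_prefix}: {2 ** (48 - isp_prefix):,} /48 customer sites")
    # /32: 65,536   /28: 1,048,576   /24: 16,777,216   /20: 268,435,456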

Tony








RE: IPv6 - real vs theoretical problems

2011-01-10 Thread Tony Hain
... yes I know you understand operational issues.

While managed networks can 'reverse the damage', there is no way to fix that
for consumer unmanaged networks. Whatever gets deployed now, that is what
the routers will be built to deal with, and it will be virtually impossible
to change it due to the 'installed base' and lack of knowledgeable
management. 

It is hard enough getting the product teams to accept that it is possible to
build a self-configuring home network without having that be crippled by
braindead conservation. The worst possible value I can see for delegation to
the home is /56, yet that is the most popular value because people have
their heads so far into the dark void of conservation they can't accept
that the space will be 'wasted sitting on the shelf at IANA when somebody
comes along with a better idea in the next 500 years'. 

I understand the desire to 'do it like we do with IPv4', because that
reduces the learning curve, but it also artificially restricts IPv6, ensures
that the work is doubled to remove the restraints later, and makes it even
harder to show value in the short term because 'it is just like IPv4 with a
different bit pattern'. IPv6 is not just IPv4 with bigger addresses no
matter what the popular mantra is. The only way you can even get close to
that kind of argument is if you are totally myopic on BGP, and even then there
are differences. 

Bottom line, just fix the tools to deal with the reality of IPv6, and move
on. 
Tony


 -Original Message-
 From: Deepak Jain [mailto:dee...@ai.net]
 Sent: Thursday, January 06, 2011 2:01 PM
 To: NANOG list
 Subject: IPv6 - real vs theoretical problems
 
 
 Please, before you flame out, recognize I know a bit of what I am
 talking about. You can verify this by doing a search on NANOG archives.
 My point is to actually engage in an operational discussion on this and
 not insult (or be insulted).
 
 While I understand the theoretical advantages of /64s and /56s and /48s
 for all kinds of purposes, *TODAY* there are very few folks that are
 actually using any of them. No typical customer knows what do to do
 (for the most part) with their own /48, and other than
 autoconfiguration, there is no particular advantage to a /64 block for
 a single server -- yet. The left side of the prefix I think people and
 routers are reasonably comfortable with, it's the host side that
 presents the most challenge.
 
 My interest is principally in servers and high availability equipment
 (routers, etc) and other things that live in POPs and datacenters, so
 autoconfiguration doesn't even remotely appeal to me for anything. In a
 datacenter, many of these concerns about having routers fall over exist
 (high bandwidth links, high power equipment trying to do as many things
 as it can, etc).
 
 Wouldn't a number of problems go away if we just, for now, follow the
 IPv4 lessons/practices like allocating the number of addresses a
 customer needs --- say /122s or /120s that current router architectures
 know how to handle -- to these boxes/interfaces today, while just
 reserving /64 or /56 spaces for each of them for whenever the magic day
 comes along where they can be used safely?
 
 As far as I can tell, this crippling of the address space is
 completely reversible, it's a reasonable step forward and the only
 operational loss is you can't do all the address jumping and
 obfuscation people like to talk about... which I'm not sure is a loss.
 
 In your enterprise, behind your firewall, whatever, where you want
 autoconfig to work, and have some way of dealing with all of the dead
 space, more power to you. But operationally, is *anything* gained today
 by giving every host a /64 to screw around in that isn't accomplished
 by a /120 or so?
 
 Thanks,
 
 DJ
 
 




RE: IPv6 - real vs theoretical problems

2011-01-10 Thread Tony Hain
*requested anonymous* wrote:
 (I don't post on public mailing lists, so, please consider this
 private.
 That is, I don't care if the question/reply are public, just, not the
 source.)
 
 On 1/10/11 11:46 AM, Tony Hain wrote:
  ... yes I know you understand operational issues.
 
  While managed networks can 'reverse the damage', there is no way to
 fix that
  for consumer unmanaged networks. Whatever gets deployed now, that is
 what
  the routers will be built to deal with, and it will be virtually
 impossible
  to change it due to the 'installed base' and lack of knowledgeable
  management.
 
  It is hard enough getting the product teams to accept that it is
 possible to
  build a self-configuring home network without having that be crippled
 by
  braindead conservation. The worst possible value I can see for
 delegation to
  the home is /56, yet that is the most popular value because people
 have
   ^
 Why would you say /56 is the worst possible value?  Just curious --

I am actually trying to develop a simple set of 'auto conf' rules for all the 
CPE vendors to build against, and for a Joe-sixpack plug-n-play network 
configuration a /56 means there is only one topology option beyond single 
subnet. 

 my provider doesn't offer IPv6 yet, but, I think they will soon.
 I was going to ask for a /56 for my home net.  If I ever get around
 to using them to set up a domain for my wife's business, I will ask
 for a /48, but, for a house without a private domain, /56 seems
 perfect.

You are thinking of a managed network. Connect a random graph of boxes, then 
figure out a subnet scheme that all cpe vendors can implement that will 
correctly deal with prefix delegation and hierarchical routing. 


 I don't expect to run out in my lifetime, or even my children's
 or grandchildren's lifetimes if somehow the house stays in the family
 ;-)
 How many subnets will they really need, no matter if every lightbulb
 is on the net?

Wrong question. In a managed network that would be the right question, but in 
an unmanaged one the right question is how many sub-delegations and how many 
branches per sub-delegate are going to be automatically figured out. 

 
 My frame of reference is that while we need to make the addresses big
 enough, we also need to preserve the hierarchy.  There is no shortage
 of addresses, nor will there be, ever, but there could be a shortage
 of levels in the hierarchy. I assume you would like a home to have a
 /48?  But, from my provider's /32, that is only 4 levels at the
 assumed nibble boundary.  I think my provider could use another
 two levels.

If your provider has more than 10,000 customers they should never have gotten a
/32. The braindead notion that everyone needed to rush out and get a /32 has 
not helped get IPv6 deployed. The /32 value was the default one for a startup 
provider. Every provider with a customer base should have done a plan for a /48 
per customer, then gotten the right size block to start with. Any provider with 
a /32 and more than 10k customers needs to do that now and swap for 'a real 
block', instead of trying to squeeze their customers into a tiny block due to 
their insufficient initial request. 

 
 I also think ~256 subnets has stood the test of time -- seldom in
 the last 25 years has a geographically contiguous enterprise network
 (such as a university or company) required more than 256 subnets --
 except for cisco, microsoft, et al., but not, e.g. most colleges,
 universities, research centers, etc.  More addresses, sure, but,
 not usually more than 256 subnets.  So, even in a world where
 every possible device has its own set of addresses -- how many
 subnets will I really need?

Again, wrong question. Most of the possible subnets in a Joe-sixpack 
configuration will be 'wasted'. So what? That space will be wasted sitting on 
the shelf at IANA in 500 years when someone comes up with a better idea. IPv6 
is not the last protocol known to mankind (unless the 2012 predictions are 
true), so most of its potential space will be wasted. Get over that point and 
accept that innovation requires thinking differently than the limited myopia of 
the past.

 
 Also from my frame of reference -- we need to work on making addressing
 and re-addressing easier and more automatic for consumers anyway, so,
 if /56 is not enough, we can easily and painlessly switch to a /52
 with no problems.  

Easy in a managed network where it is possible to update code and expect that 
things will happen in a timeframe that makes development worth the effort. 
Impossible in consumer land where it is well documented that things are never 
updated, and all vendors need to play by the same simple rules because there is 
no hope that the consumer will know how to tweak them.

 And, if I decide to grow an enterprise from home,
 I feel that I should be able to re-address as needed over the course
 of time anyway, so, I would rather make re-addressing easier than
 put all my

RE: Interesting IPv6 viral video

2010-10-28 Thread Tony Hain
No idea where this came from, and no I didn't have any part in it. If I had,
the rental rates on addresses would have been much more in the range of
extortion... ;)

 -Original Message-
 From: Kevin Oberman [mailto:ober...@es.net]
 Sent: Thursday, October 28, 2010 1:59 PM
 To: Zaid Ali
 Cc: NANOG list
 Subject: Re: Interesting IPv6 viral video
 
  Date: Thu, 28 Oct 2010 13:08:02 -0700
  From: Zaid Ali z...@zaidali.com
 
  Not quite accurate and a bit too dramatic on the panic side but the
 approach
  is interesting to put C-Level folks in the hot seat about v6. Would
 be
  interesting also to see if folks here get asked by C-Level folks bout
 IPv6.
 
  http://www.youtube.com/watch?v=eYffYT2y-Iw
 
 Cute. Silly, but cute. I think I see Tony Hain's hand somewhere in
 this.
 
 Tony???
 --
 R. Kevin Oberman, Network Engineer
 Energy Sciences Network (ESnet)
 Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
 E-mail: ober...@es.net    Phone: +1 510 486-8634
 Key fingerprint:059B 2DDF 031C 9BA3 14A4  EADA 927D EBB3 987B 3751




RE: IPv6 Routing table will be bloated?

2010-10-26 Thread Tony Hain
You didn't miss anything, past ARIN practice has been broken, though using
sparse allocation it is not quite as bad as you project. In any case, ISP's
with more than 10k customers should NEVER get a /32, yet that is what ARIN
insisted on giving even the largest providers in the region. Every ISP
should go back to ARIN, turn in the lame /32 nonsense they were given (that
allocation size is for a startup ISP with 0 customers), follow that with an
'initial allocation' request that is based on your pop structure with a /48
per customer including projected growth. I don't care what you actually
allocate to your customers at this point, just get a large enough block to
begin with that you could give everyone a /48 the way policy allows. There
is absolutely no reason to have to grovel at ARIN's feet every few months as
you grow your IPv6 deployment. Get a 'real block' up front.

Tony


 -Original Message-
 From: Jack Bates [mailto:jba...@brightok.net]
 Sent: Tuesday, October 26, 2010 6:58 AM
 To: nanog@nanog.org
 Subject: IPv6 Routing table will be bloated?
 
 So, the best that I can tell (still not through debating with RIR), the
 IPv6 routing table will see lots of bloat. Here's my reasoning so far:
 
 1) RIR (ARIN in this case, don't know other RIR interpretations) only
 does initial assignments to barely cover the minimum. If you need more
 due to routing, you'll need to provide every pop, counts per pop, etc,
 to show how v6 will require more than just the minimums (full routing
 plan and customer counts to justify routing plan). HD-Ratio has NO
 bearing on initial allocation, and while policy dictates that it
 doesn't
 matter how an ISP assigns to customer so long as HD-Ratio is met, that
 is not the case when providing justification for the initial
 allocation.
 
 2) Subsequent requests only double in size according to policy (so just
 keep going back over and over since HD is met immediately due to the
 minimalist initial assignment?)
 
 So I conclude that since I get a bare minimum, I can only assign a bare
 minimum. Since everything is quickly maxed out, I must request more
 (but
 only double), which in turn I can assign, but my customer assignments
 (Telcos/ISPs in this case) will be non-contiguous due to the limited
 available space I have to hand out. This will lead to IGP bloat, and in
 cases of multi-homed customers whom I provide address space for, BGP
 bloat.
 
 I'm small, so my bloat factor is small, but I can quickly see this
 developing exactly as my v4 network did (if it was years ago when I
 first got my v4 allocation, growing to today, for each allocation I got
 for v4, I'd expect similar out of v6). Sure, the end user gets loads of
 space with those nice /48's, but the space within ISPs and their ISP
 customers is force limited by initial allocations which will create
 fragmentation of address space. This is brought about due to the dual
 standard of initial vs subsequent allocations (just enough to cover
 existing vs HD Ratio).
 
 As an example, Using HD-Ratios as an initial assignment metric can
 warrant a /27, whereas the minimalist approach may only warrant a
 heavily utilized /30. 3 bits doesn't seem like much, but it's a huge
 difference in growth room. Bare minimums, as provided by me, only
 included the /24 IPv4 DHCP pools converted with a raw conversion as /32
 IPv4 = /48 IPv6 network
 
 Am I missing something, or is this minimalist approach going to cause
 issues in BGP the same as v4 did?
 
 
 Jack
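
For reference, the HD-Ratio arithmetic behind the quoted complaint can be worked out directly. A sketch; the 0.94 threshold and counting in /48 units are assumptions about the policy of that era:

import math

def hd_ratio(used, total):
    # RFC 3194: HD = log(objects allocated) / log(objects allocatable)
    return math.log(used) / math.log(total)

def full_point(total, hd=0.94):
    # Number of units that counts as "full" at a given HD-Ratio threshold
    return total ** hd

slash48s_in_a_32 = 2 ** (48 - 32)
slash48s_in_a_27 = 2 ** (48 - 27)
print(round(full_point(slash48s_in_a_32)))   # ~33,700 of 65,536 /48s
print(round(full_point(slash48s_in_a_27)))   # ~876,000 of ~2.1M /48s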




RE: ARIN recognizes Interop for return of more than 99% of 45/8 address block

2010-10-20 Thread Tony Hain
John Curran wrote:
 On Oct 20, 2010, at 11:35 AM, Christopher Morrow wrote:
 
  yes, sorry.. since this was returned to ARIN, I assumed the ARIN
  region drain rate.
 
 Ah, good point.  It may end up in the global pool, so comparison to
 either drain rate is quite reasonable.

For what it's worth, at this point it really doesn't matter much if 45/8
stays at ARIN or goes back to IANA.
RIPE is due for a pair around 12/15 -- IANA @ 10.
APnic burned through the last 2 in 67 days, so given end-of-year slowing
they should be back for another pair around 1/2/11 -- IANA @ 8.
If 45/8 goes back to IANA, ARIN is due to get a pair around 2/1 -- IANA @ 6 + 45/8.
APnic comes back for the last one + 45/8 around 3/1 -- IANA @ 5, which triggers the
end of the pool.

If ARIN keeps 45/8, the only difference would be they would probably not
qualify for the estimated 2/1/11 allocation, so that event would be delayed
and when 45/8 was used up so they would qualify, there would only be 1 left
because APnic would have gotten their pair around 3/1 first. Any way you cut
it, around 3/1/11 APnic/ARIN/RIPE will each be holding around 4 /8's + their
remaining share of 'Various Registries' (ARIN's last one could sit at IANA
for an additional couple of weeks, but they would still be next in line
unless they take too long). 

The wild card to the above scenario is Afrinic. They are close enough for an
unusual event to qualify them for another /8 from IANA within this
timeframe. If that happens, the disposition of 45/8 could impact how many
APnic is holding around 3/1/11. Given that APnic is burning through a /8 on
an accelerating pace currently at 33 days, +/- one will impact how soon the
address market takes off. If Afrinic gets one and 45/8 stays at ARIN, APnic
will pick up the last pair at IANA around 3/1 and be completely out within 6
months. If 45/8 goes back, ARIN picks up 2 in early Feb, so with Afrinic
getting one there would only be one left for APnic. 

Tony







RE: Definitive Guide to IPv6 adoption

2010-10-18 Thread Tony Hain
This 'get a /32' BAD ADVICE has got to stop. There are way too many people
trying to force fit their customers into a block that is intended for a
start-up with ZERO customers.

Develop a plan for /48 per customer, then go to ARIN and get that size
block. Figure out exactly what you are going to assign to customers later,
but don't tie your hands by asking for a block that is way too small to
begin with. Any ISP with more than 30k customers SHOULD NOT have a /32, and
if they got one either trade it in or put it in a lab and get a REAL block. 

Tony


 -Original Message-
 From: Brandon Kim [mailto:brandon@brandontek.com]
 Sent: Saturday, October 16, 2010 1:59 PM
 To: nanog@nanog.org
 Subject: RE: Definitive Guide to IPv6 adoption
 
 
 Thanks everyone who responded. This list is such a valuable wealth of
 information.
 
 Apparently I was wrong about the /64 as that should be /32 so thanks
 for that correction
 
 Thanks again especially on a Saturday weekend!
 
 
 
  From: rdobb...@arbor.net
  To: nanog@nanog.org
  Date: Sat, 16 Oct 2010 16:09:43 +
  Subject: Re: Definitive Guide to IPv6 adoption
 
 
  On Oct 16, 2010, at 10:56 PM, Joel Jaeggli wrote:
 
   Then move on to the Internet which as with most things is where the
  most current if not helpful information resides.
 
 
  Eric Vyncke's IPv6 security book is definitely worthwhile, as well,
 in combination with Schudel & Smith's infrastructure security book (the
 latter isn't IPv6-specific, but is the best book out there on
 infrastructure security):
 
  http://www.ciscopress.com/bookstore/product.asp?isbn=1587055945
 
  http://www.ciscopress.com/bookstore/product.asp?isbn=1587053365
 
  -
 --
  Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com
 
 Sell your computer and buy a guitar.
 
 
 
 
 




RE: Only 5x IPv4 /8 remaining at IANA

2010-10-18 Thread Tony Hain
Owen DeLong wrote:
 ...
 
 It's really unfortunate that most people don't understand the
 distinction.
 If they did, it would help them to realize that NAT doesn't actually do
 anything for security, it just helps with address conservation
 (although
 it has some limits there, as well).

Actually nat does something for security, it decimates it. Any 'real'
security system (physical, technology, ...) includes some form of audit
trail. NAT explicitly breaks any form of audit trail, unless you are the one
operating the header mangling device. Given that there is no limit to the
number of nat devices along a path, there can be no limit to the number of
people operating them. This means there is no audit trail, and therefore NO
SECURITY. 

 
 IPv6 with SI is no less secure than IPv4 with SI+NAT. If you're worried
 about address and/or topological obfuscation, then, IPv6 offers you
 privacy addresses with rotating numbers. However, that's more a
 privacy issue than a security issue, unless you believe in the idea
 of security through obscurity which is pretty well proven false.

A different way to look at this is less about obscurity, and more about
reducing your overall attack surface. A node using a temporal address is
vulnerable while that address is live, but as soon as it is released that
attack vector goes away. Attackers that harvest addresses through the
variety of transactions that a node may conduct will have a limited period of
time to try to exploit that. 

This is not to say that you don't want stateful controls, just that if
something inside the stateful firewall has been compromised there will be a
limited period of time to use the dated knowledge.
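
As a rough illustration of what a temporary address is (a simplified sketch, not the full RFC 4941 algorithm; the prefix is a placeholder), it is just a short-lived, randomized interface ID hung off the advertised /64:

import os, ipaddress

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Simplified RFC 4941-style temporary address: a randomized interface
    ID (universal/local bit forced to 'local') appended to the /64.  Real
    implementations also track lifetimes and regenerate the ID before the
    preferred lifetime expires."""
    net = ipaddress.ip_network(prefix)
    iid = bytearray(os.urandom(8))
    iid[0] &= 0b11111101              # clear the universal/local bit
    return net[int.from_bytes(iid, "big")]

print(temporary_address("2001:db8:1:2::/64"))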

Tony







RE: [Re: http://tools.ietf.org/search/draft-hain-ipv6-ulac-01]

2010-04-26 Thread Tony Hain
While I appreciate Bill's attempt to raise attention to the draft, I needed
to update it anyway with the intent to greatly simplify things and hopefully
clarify at the same time. Given the interest level in this thread, I will
ask for comments here before publishing the updated I-D.

Replace intro paragraph 2 with:
In any case, the prefixes defined here are not expected to appear in the
routing system for the global Internet.  A frequent question is how this
prefix differs from a PI assignment. The simple answer is in expectation of
global public routability.  A PI assignment has the expectation that it
could appear in the global DFZ, whereas a ULA-C registration is expected to
have limited reach, since service providers are free to, and expected to,
filter the entire FC00::/7 prefix as bogon space.  The appropriate use of these
prefixes is within a single network administration, or between privately
interconnected networks that want to ensure that private communications do
not accidentally get routed over the public Internet as might happen with
PI.


Replace sections 3.2 & 3.3 with:
Global IDs MUST be allocated under a single allocation and registration
authority.  The IAB SHOULD designate IANA as the registration authority. As
policies differ around the world, IANA SHOULD delegate to the RIRs in a
manner similar to the /12 approach used for the 2000::/3 prefix.  The RIRs
SHOULD establish registration policies which recognize that ULA-C prefixes
are not a threat to the global DFZ, and therefore easier to justify.
Organizations that don't want an ongoing relationship with the RIRs SHOULD
be directed to RFC 4193.

The requirements for ULA-C allocation and registrations are:

   - Globally Unique.
   - Available to anyone in an unbiased manner.
   - Available as a single prefix when justified to align subnet structures
with PA or PI.


Other clean up as necessary to align with this simplified text. The point is
to remove as much policy as possible from the IETF text, and leave that to
each region.
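
For contrast with the registered ULA-C prefixes above, the existing RFC 4193 locally generated prefix works roughly like this (a sketch of the published algorithm; the EUI-64 input here is just random bytes standing in for a real interface identifier):

import hashlib, os, time, ipaddress

def ula_prefix(eui64: bytes) -> ipaddress.IPv6Network:
    """Locally generated ULA /48 per the RFC 4193 algorithm: SHA-1 over
    (NTP-format time || EUI-64), keep the least significant 40 bits as
    the Global ID, and prepend fd00::/8."""
    ntp_time = int((time.time() + 2208988800) * 2**32).to_bytes(8, "big")
    digest = hashlib.sha1(ntp_time + eui64).digest()
    global_id = digest[-5:]                      # low 40 bits of the digest
    prefix_bytes = bytes([0xfd]) + global_id + bytes(10)
    return ipaddress.IPv6Network((prefix_bytes, 48))

print(ula_prefix(os.urandom(8)))    # e.g. fd5c:1a2b:3c4d::/48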

Comments welcome, and to the extent they are operationally related to the
DFZ could remain on nanog, but otherwise should be on the IETF 6man list.

Tony



 -Original Message-
 From: Owen DeLong [mailto:o...@delong.com]
 Sent: Monday, April 26, 2010 8:33 AM
 To: Stephen Sprunk
 Cc: North American Noise and Off-topic Gripes
 Subject: Re: [Re: http://tools.ietf.org/search/draft-hain-ipv6-ulac-01]
 
 
 On Apr 26, 2010, at 7:20 AM, Stephen Sprunk wrote:
 
  On 24 Apr 2010 21:01, Mark Smith wrote:
  On Thu, 22 Apr 2010 01:48:18 -0400
  Christopher Morrow morrowc.li...@gmail.com wrote:
 
  On Wed, Apr 21, 2010 at 5:47 PM, Mark Smith
  na...@85d5b20a518b8f6864949bd940457dc124746ddc.nosense.org wrote:
 
  So what happens when you change providers? How are you going to
 keep using globals that now aren't yours?
 
  use pi space, request it from your local friendly RIR.
 
  I was hoping that wasn't going to be your answer. So do you expect
 every residential customer to get a PI from an RIR?
 
 
  The vast majority of residential customers have no idea what
 globals
  or PI are.  They use PA and they're fine with that--despite being
  forcibly renumbered every few hours/days.  (Many ISPs deliberately
 tune
  their DHCP servers to give residential customers a different address
  each time for market segmentation reasons.)
 
  The majority of residential customers bitch about paying $20/month for
 what they have and are not planning to multihome.
 
 This was a comment about multihoming.
 
 FWIW, this residential user has PI from an RIR (v4 and v6) and is
 multihomed
 using it.  It works fine.
 
  The only semi-rational justification for ULA-C is that organizations
  privately internetworking with other organizations are scared of ULA-
 R
  collisions.  However, PI solves that problem just as readily.  If one
  cannot afford or qualify for PI, or one wants a non-PI prefix due to
  delusions of better security, one can use a private deconfliction
  registry, e.g. http://www.sixxs.net/tools/grh/ula/.
 
 The claim being made which I was attempting to refute had nothing to
  do with residential. It was that ULA-C with NAT at the border would
 allow an organization to semi-transparently switch back and forth
 between providers. This is a (somewhat) common practice in IPv4
 for delivering (degraded) multihoming.
 
 Owen




RE: IPv6 Deployment for the LAN

2009-10-22 Thread Tony Hain
David Conrad wrote:
  Ok, lets start with not breaking the functionality we have today
  in IPv4.  Once you get that working again we can look at new
  ideas (like RA) that might have utility. Let the new stuff live/die
 on
  it's own merits.  The Internet is very good at sorting out the useful
  technology from the crap.
 
 Right.  I'll admit some confusion here.  If the IETF, due to religion
 or aesthetics, is blocking attempts at making DHCPv6 do what network
 operators _need_ (as opposed to want), why haven't network operators
 routed around the problem and gone and funded folks like ISC,
 NLNetLabs, Cisco, Juniper, et al., to implement what they need?
 
   At conferences I keep hearing "It would be great if the IETF had
   more operator input."  Yet whenever we try to provide operationally
  useful advice we are ridiculed for not being smart enough to know
  how things should work.
 
  How do we fix that?
 
 You seem to be asking how do we make people not stupid.  Folks tend
 to simplify reality so that it fits their world view.  Stupid people
 attempt to force that simplified reality onto others.  You can either
 play their game, attempting to get them to understand reality is often
 more complicated than we'd like, or route around them.  Or you can post
 to NANOG... :-)


The root of the argument I see in this entire thread is the assumption that
'what we have for IPv4 has *always* been there'. There is lots of finger
pointing at the IETF for failure to define 15 years ago, what we have
evolved as the every-day assumptions about the IPv4 network of today. SLAAC
was presented specifically to deal with the DHCP failure modes of the time,
and the very real possibility that DHCP might not make it as a technology
that operators would want to deploy. While lots of evolution happened in the
DHCP space to deal with changing operational requirements, it is still a
complex approach (which may be why it is favored by those that like to
maintain a high salary).  To be fair, there were failures in the IETF, as
the SLAAC and DHCP sides couldn't get out of each other's way; so now
instead of having 2 independent models for provisioning, we have 2
interdependent models. RFC 5006 is one step at breaking that
interdependence, and more are needed on the DHCPv6 side, yet these steps
can't happen if people sit in the corner and do the 'they won't listen to
me' routine. 

For those that insist that DHCP is 'the only way to know who is using an
address', have you considered dDNS? Oh wait, that moves the trust point to a
different service that in the enterprise is typically run by the host admin,
not the network admin, or in the SP case crosses the trust boundary in the
wrong direction ... my bad. Seriously, there are ways to figure this out, as
Ron Broersma pointed out on Monday. I am not arguing for one model over the
other, as they each have their benefits and trade-offs. The real issue here
is that some people are so locked into one approach that they refuse to even
consider that there might be an alternative way to achieve the same goal, or
that the actual goal will change over time in the face of external
requirements. 

IPv4 continues to be a work-in-progress 30 years later, and IPv6 will be no
different. The fact that there is not functional parity between the
operational aspects is due to operators insisting until very recently that
'the only thing that matters is IPv4'. It should not be a surprise that IPv4
is where the majority of the work in the IETF happened. Despite my attempts
to get the IETF to stop wasting effort on what is clearly a dead-end, this
is still true today. As drc points out, you can continue to post complaints
on the nanog list, or if you want real change make sure your vendors get a
clear message about IPv6 being the priority, then make your point on the
appropriate IETF WG list.

At the end of the day, the way networks are operated today is not the way
they will be operated in 5 years, just as it is not the same as they way
they were operated 5 year ago. Asserting a snap-shot perspective about
'requirements' has its place, but everyone needs to recognize that this is
an evolving environment. Changing the base protocol version is just one more
step on that evolutionary path. 

Tony






RE: Practical numbers for IPv6 allocations

2009-10-06 Thread Tony Hain
Doug Barton wrote:
 [ I normally don't say this, but please reply to the list only, thanks.
 ]
 
  I've been a member of the "let's not assume the IPv6 space is
  infinite" school from day 1, even though I feel like I have a pretty
 solid grasp of the math. Others have alluded to some of the reasons
 why I have concerns about this, but they mostly revolve around the
 concepts that the address space is not actually flat (i.e., it's going
 to be carved up and handed out to RIRs, LIRs, companies, individuals,
 etc.) and that both the people making the requests and the people
 doing the allocations have a WIDE (pardon the pun) variety of
 motivations, not all of which are centered around the greater good.
 
  I'm also concerned that the two main pillars of what I semi-jokingly
  refer to as the "profligate" school of IPv6 allocation actually
  conflict with one another (even if they both had valid major premises,
  which I don't think they do). On the one hand people say, "The address
  space is so huge, we should allocate and assign with a 50-100 year
  time horizon" and on the other they say, "The address space is so
  huge, even if we screw up 2000::/3 we have 7 more bites at the apple."
 I DO believe that the space is large enough to make allocation
 policies with a long time horizon, but relying on trying again if we
 screw up the first time has a lot of costs that are currently
 undefined, and should not be assumed to be trivial. 

I agree with the point about undefined costs, but the biggest one that is
easy to see is that 100-300 years from now when someone thinks about moving
on to the second /3, this entire discussion will have been lost and there
will be an embedded-for-generations expectation that the model is cast in
stone for all of the /3's.

 It also ignores
 the fact that if we reduce the pool of /3s because we do something
 stupid with the first one we allocate from it reduces our
 opportunities to do cool things with the other 7 that we haven't
 even thought of yet.

www.tndh.net/~tony/ietf/draft-hain-ipv6-geo-addr-00.txt shows a different
way to allocate space, using only 1/16 the total space to achieve a /48
globally on a 6m grid. Other ideas will emerge, so you are correct that we
can't assume we have 8 shots at this, but if the first pass is really bad
the second will be so draconian in restrictions that you will never get to
the third.
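
The arithmetic behind that claim is easy to check (a sketch; the 6m grid and the 1/16-of-space figure come from the draft, while the Earth-surface constant is a round number I am supplying):

EARTH_SURFACE_M2 = 5.1e14              # ~510 million km^2, land + ocean
cells = EARTH_SURFACE_M2 / (6 * 6)     # one /48 per 6m x 6m grid cell
slash48s_in_a_slash4 = 2 ** (48 - 4)   # 1/16 of the space is a /4
print(f"{cells:.2e} cells vs {slash48s_in_a_slash4:.2e} /48s:",
      cells < slash48s_in_a_slash4)    # True: the grid fits in a /4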

 
 In regards to the first of the profligate arguments, the idea that
 we can do anything now that will actually have even a 25 year horizon
 is naively optimistic at best. It ignores the day-to-day realities of
 corporate mergers and acquisitions, residential customers changing
 residences and/or ISPs, the need for PI space, etc. IPv6 is not a set
 it and forget it tool any more than IPv4 is because a lot of the same
 realities apply to it.

Well mostly. It is not set & forget, but a lot of the day-to-day in IPv4 is
wrapped up in managing subnet sizes to 'avoid waste'. In an IPv6 environment
the only concern is the total number of subnets needed to meet
routing/access policy, avoiding the nonsense of continually shifting the
subnet size to align with the number of endpoints over time.

 
 You also have to keep in mind that even if we could come up with a
 theoretically perfect address allocation scheme at minimum the
 existing space is going to be carved up 5 ways for each of the RIRs to
 implement. (When I was at IANA I actually proposed dividing it along
 the 8 /6 boundaries, which is sort of what has happened subsequently
 if you notice the allocations at 2400::/12 to APNIC, 2800::/12 to
 LACNIC and 2c00::/12 to AfriNIC.)
 
 Since it's not germane to NANOG I will avoid rehashing the why RA and
 64-bit host IDs were bad ideas from the start argument. :)

People need to get over it... the original design was 64 bits for both hosts
and routing exceeding the design goal by 10^3, then routing wanted more so
it was given the whole 64 bits. The fact that 64 more bits was added is not
routing's concern, but the IPv4-conservation mindset can't seem to let it go
despite having 10^6 more space to work with than the target. It could have
been 32 bits (resulting in a 96 bit address), but given that 64 bit
processors were expected to be widespread, it makes no sense to use less
than that.
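
Those low-order 64 bits are what modified EUI-64 fills in under SLAAC; a minimal sketch (the prefix and MAC below are placeholders picked for illustration):

import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    """Modified EUI-64: split the MAC, insert ff:fe, and flip the
    universal/local bit -- the 64-bit interface ID appended to the /64."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                        # invert the universal/local bit
    return bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])

prefix = ipaddress.ip_network("2001:db8:0:1::/64")   # learned from the RA
iid = eui64_interface_id("00:1b:21:3c:4d:5e")        # hypothetical MAC
print(prefix[int.from_bytes(iid, "big")])            # 2001:db8:0:1:21b:21ff:fe3c:4d5e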

 
 In the following I'm assuming that you're familiar with the fact that
 staying on the 4-byte boundaries makes sense because it makes reverse
 DNS delegation easier. It also makes the math easier.

I assume you meant 4-bit.   ;)
 ^^^

 
 As a practical matter we're stuck with /64 as the smallest possible
 network we can reliably assign. A /60 contains 16 /64s, which
 personally I think is more than enough for a residential customer,
 even taking a long view into consideration. 

Stop looking backward. To achieve the home network of the last millennium a
small number of subnets was appropriate. Constraining the world to that
through allocation is a self-fulfilling way to 

RE: AH or ESP

2009-05-26 Thread Tony Hain
Merike Kaeo wrote:
...
   ESP-Null came about when folks
 realized AH could not traverse NATs.

This is the absolute reason why people should promote AH: to kill off the 66nat
nonsense. Just because you can't use it for IPv4 is no reason to avoid using
it for IPv6 now and let its momentum suppress the 66CGN walled garden
mindset. 

Tony






RE: IPv6 Confusion

2009-02-19 Thread Tony Hain
Randy Bush wrote:
  The fact that the *nog community stopped participating in the IETF
 has
  resulted in the situation where functionality is missing, because
 nobody
  stood up and did the work to make it happen.
 
 the ops gave up on the ietf because it did no good to participate.  so
 the choice was spend the time accomplishing nothing or do something
 else
 with one's time.
 
 this is a slight exaggeration.  it took me less than five years to get
 rid of NLAs, TLAs, ...  wooo wooo!

Those were put in at the insistence of the ops / routing community as a way
to constrain the routing table, by using the technology definition as a way
to enforce a no-PI policy. The fact that it moved policy control from the
RIRs to the IETF was later recognized as a problem, and moving it back was
what took the time. 

The 'give-up' attitude is now coming home as a set of definitions that are
not meeting the operational needs. This is not a criticism of anyone, but
the general global expectation of instant gratification is causing people to
give up on long cycle issues that need active feedback to keep the system in
check. Many in the *nog community criticize their management for having a
long-range vision that only reaches to the end of the next quarter, and this
is a case where the engineering side of the house is not looking far enough
forward. If you don't give the vendors a couple of years notice that you
require IPv6, don't expect it to be what you want. Then if you expect
multiple vendors to implement something close to the same and the way you
want it, you need to engage at the IETF to make sure the definition goes the
right way. Working group chairs are supposed to be facilitators for the work
of the group, not dictators. If you are having a problem with a WG chair,
inform the AD. If that doesn't help, inform the nomcom that the AD is not
responsive. 

Giving up will only let the system run open-loop, and you should not be
surprised when the outcome is not what you expect.

Tony






RE: IPv6 Confusion

2009-02-18 Thread Tony Hain
David Conrad wrote:
 Tony,
 
 On Feb 17, 2009, at 12:17 PM, Tony Hain wrote:
  This being a list of network engineers, there is a strong bias
  toward tools
  that allow explicit management of the network. This is a fine
  position, and
  those tools need to exist. There are others that don't want, or need
  to know
  about every bit on the wire, where 'as much automation as possible'
  is the
  right set of tools.
 
 No question.  However, as this is a list of network engineers who are
 the folks who need to deploy IPv6 in order for others who may not care
 about every bit on the wire to make (non-internal) use of it, I'd
 think the bias displayed here something that might carry some weight.

Automated tunneling works around those who choose not to deploy native
support.

 
  Infighting at the IETF kept the RA from informing the
  end systems about DNS, and kept DHCPv6 from informing them about
 their
   router. The result is that you have to do both DHCP & RA, when each
  should
  be capable of working without the other.
 
 Yeah.  Rants about the IETF should probably be directed elsewhere.

That was not a rant, just an informational observation.

 
  As far as dnssec, while the question is valid, blaming the IPv6
  design for
  not considering something that 10+ years later is still not
  deployed/deployable, is a bit of a stretch.
 
 Uh, no.  That's not what I was saying.  I was saying that stateless
 auto-configuration made certain assumptions about how naming and
 addressing worked that weren't necessarily well thought out (clients
 updating the reverse directly in a DNSSEC-signed environment was just
 an example).  Perhaps it's just me, but it feels like there was a
 massive case of NIH syndrome in the IPv6 working groups that network
 operators are now paying the price for.  However, as I said, rants
 about the IETF should probably be directed elsewhere.

Actually this should be flipped as a rant against the *nog community. If you
didn't participate in defining it, you can't complain about the outcome. The
only way the IETF works well is with an active feedback loop that injects
operational reality into the process. That used to exist in the joman WG,
but stopped when the *nogs splintered off and stopped participating. I can
already hear Randy complaining about being shouted down, and yes that
happens, but that is really a call for -more- active voices, not
disengagement. The bottom line is, if you want something to be defined in a
way that works for you, you have to participate in the definition. 

 
  Or, we simply continue down the path of more NATv4.
  While this is the popular position, those that have thought about it
  realize
  that what works for natv4 at the edge, does not work when that nat
  is moved
  toward the core.
 
 Yeah, multi-layer NAT sucks.  I was amazed when I was speaking with
 some African ISPs that had to go this way today because their telecoms
 regulatory regime required them to obtain addresses from the national
 PTT and that PTT only gave them a single address.  I would argue that
 if we want to avoid this outcome (and make no mistake, there are those
 who like this outcome as it means end users are only content
 consumers, which fits into their desired business models much more
 nicely), we need to make IPv6 look more like IPv4 so that network
 operators, end users, content providers, network application
 developers, etc., have minimal change in what they do, how they do it,
 or how they pay for it. Part of that is getting familiar tools (e.g.,
 DHCP, customer provisioning, management, etc.) working the way it
 works in IPv4.  Taking advantage of all the neato features IPv6 might
 provide can come later.

People have to stand up and put money on the table if they expect things to
get fixed. The working parts of IPv6 that exist are due to the ISPs in Japan
and the US DoD putting their money where their mouth is, and they got what
they needed. The *nog community appears to be holding their breath waiting
for 1:1 parity before they start, which will never happen.

 
 However, I have a sneaking suspicion it might already be too late...

CGN will be deployed, but can be used as a tool to wean customers off of
IPv4. If the world goes the way of current-price==IPv6+CGN, with
IPv6+publicIPv4 costing substantially more, there will be a drop off in use
of IPv4 because the CGN breaks lots of stuff and people won't pay extra to
work around it for any longer than they need to. 

Tony 





RE: IPv6 Confusion

2009-02-18 Thread Tony Hain
Justin Shore wrote:
 ...
 At this point I'm looking at doing 6to4 tunnels far into the future.

You can forget that, as CGN will break 6to4. Get used to teredo (miredo),
and if that is impeded don't be surprised when IPv6 over SOAP shows up. 
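
The reason is mechanical: 6to4 embeds the host's public IPv4 address in its 2002::/16 prefix, so a host that only sees a private or shared address behind a CGN has nothing valid to embed, while Teredo tunnels over UDP and survives the NAT. A quick check (sketch; the sample addresses are placeholders, with one well-known public address for contrast):

import ipaddress

def usable_for_6to4(v4: str) -> bool:
    """6to4 (RFC 3056) derives 2002:<v4>::/48 from the host's own IPv4
    address, so it only works from a public, globally routable address."""
    return ipaddress.ip_address(v4).is_global

for addr in ("8.8.8.8", "10.1.2.3", "100.72.0.9"):
    print(addr, usable_for_6to4(addr))   # True, False, False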

Tony 






RE: IPv6 Confusion

2009-02-18 Thread Tony Hain
Owen DeLong wrote:
 ...
 If you want SLAAC or RA or whatever, more power to you.  Some
 installations
 do not.  They want DHCP equivalent functionality with the same
 security model.

It is always amusing when people equate DHCP with security...  Outside of
that, I do agree  with you that the operational model around DHCP needs to
be complete and stand-alone, just as the RA model needs to be. Right now
neither works stand-alone.

FWIW: there is SEND (RFC 3971) to deal with rogue RAs and other miscreant
behavior. Implementations have been slow to come to market because network
operators are not demanding it from their vendors.

Tony 






RE: IPv6 Confusion

2009-02-18 Thread Tony Hain
Leo Bicknell wrote:
 ...
 But, when DHCPv6 was developed the great minds of the world decided
 less functionality was better.  There /IS NO OPTION/ to send a default
  route in DHCPv6, making DHCPv6 fully dependent on RAs being turned on!
 So the IETF and other great minds have totally removed the capability
 for operators to work around this problem.

No, the decision was to not blindly import all the excess crap from IPv4. If
anyone has a reason to have a DHCPv6 option, all they need to do is specify
it. The fact that the *nog community stopped participating in the IETF has
resulted in the situation where functionality is missing, because nobody
stood up and did the work to make it happen.

Tony 







RE: IPv6 Confusion

2009-02-18 Thread Tony Hain
Daniel Senie wrote:
 ...
  No, the decision was to not blindly import all the excess crap from
 IPv4. If
  anyone has a reason to have a DHCPv6 option, all they need to do is
 specify
  it. The fact that the *nog community stopped participating in the
 IETF has
  resulted in the situation where functionality is missing, because
 nobody
  stood up and did the work to make it happen.
 
 Because clearly everything done in IPv4 space was crap, or should be
 assumed to be crap. Therefore, everything that's been worked out and
 made to function well in the last 25+ years in IPv4 space should be
 tossed and re-engineered. OSI anyone?

That is not what the decision said. The point was that the DHCP WG was not
going to decide for you what was necessary or appropriate to carry forward.
Rather than add baggage that nobody actually uses, there is nothing until
someone says 'I need that'. Never mind that DHCP wasn't defined when the
IPng work started, and wasn't in widespread use yet when DHCPv6 was being
started ... 

 
 The point, which seems to elude many, is that rightly or wrongly there
 is an assumption that going from IPv4 to IPv6 should not involve a step
 back in time, not on  security, not on central configuration
 capability,
 not on the ability to multihome, and so forth. The rude awakening is
 that the IPv6 evangelists insisting everyone should get with the
 program failed to understand that the community at large would expect
 equivalent or better functionality.

Yes people expect 1:1 functionality, but how many of them are stepping up to
the table with $$$ to make that happen... In the US, it is only the DoD. In
the ISP space, most of it comes from Japan. If you are not finding what you
want, put money in front of a vendor and see what happens... ;)

 
 Ultimately the only bit of light emerging above all the heat generated
 by this thread is a simple observation: Engineers make lousy
 salespeople.

;)

Tony







RE: IPv6 Confusion

2009-02-18 Thread Tony Hain
Leo Bicknell wrote:
 ...
 The last time I participated a working group chair told me operators
 don't know what they are talking about and went on to say they should
 be ignored.

So did you believe him and stop participating?  Seriously, the -ONLY- way
the IETF can be effective is for the ops community to provide active
feedback. If you don't provide input, don't be surprised when the output is
not what you want.

Tony






RE: IPv6 Confusion

2009-02-17 Thread Tony Hain
While people frequently claim that auto-config is optional, there are
implementations (including OS-X) that don't support anything else at this
point. The basic message is that you should not assume that the host
implementations will conform to what the network operator would prefer, and
you need to test.


One last comment (because I hear "just more bits" a lot in the *nog
community)... Approach IPv6 as a new and different protocol. If you approach
it as "IPv4 with more bits", you will trip over the differences and be
pissed off. If you approach it as a different protocol with a name that
starts with IP and runs alongside IPv4 (like we used to do with decnet,
sna, appletalk...), you will be comforted in all the similarities. You will
also hear lots of noise about 'lack of compatibility', which is just another
instance of refusing to recognize that this is really a different protocol.
At the end of the day, it is a packet based protocol that moves payloads
around. 

Tony


 -Original Message-
 From: Carl Rosevear [mailto:carl.rosev...@demandmedia.com]
 Sent: Tuesday, February 17, 2009 10:58 AM
 To: Owen DeLong
 Cc: nanog@nanog.org
 Subject: RE: IPv6 Confusion
 
 Thanks to all that responded on and off-list.  My confusion is mostly
 cleared-up.  The points that are unclear at this point are generally
 unclear to most people, it seems due to lack of operational experience
 with IPv6.  Feel free to keep responding to this topic as its all very
 interesting but I think my needs have been met.  Owen, this one from
 you tied it all together.  Thanks all!
 
 
 
 --Carl
 
 
 
 
 -Original Message-
 From: Owen DeLong [mailto:o...@delong.com]
 Sent: Tuesday, February 17, 2009 10:41 AM
 To: Carl Rosevear
 Cc: nanog@nanog.org
 Subject: Re: IPv6 Confusion
 
 
 On Feb 17, 2009, at 8:59 AM, Carl Rosevear wrote:
 
  So, I understand the main concepts behind IPv6.  Most of my peers
  understand.  We all have a detailed understanding of most things
  IPv4.  I have Googled and read RFCs about IPv6 for HOURS.  That
  said, to quickly try to minimize people thinking I am an idiot who
  asks before he reads, I need some answers.  First of all, several of
  my friends who feel they are rather authoritative on the subject of
  things network-related have given me conflicting answers.  So what's
  the question? ...
 
  How does IPv6 addressing work?
 
 There are a lot of different possible answers to that question, many
 of which are accurate.
 
 In general:
 
 It's a 128 bit address.  Routing is done on VLSM, but, generally for
 DNS purposes, these
 are expected to be at least on nibble boundaries.
 
 There is an intent to support what is known as EUI-64, which means
 every subnet should
 be a /64, however, there are people who number smaller subnets and
 that is supposed
 to work, but, it will break certain IPv6 things like stateless
 autoconfiguration (which is
 optional).
 
  I know it's been hashed and rehashed but several orgs I am
  associated with are about to ask for their allocations from ARIN and
  we are all realizing we don't really know how the network / subnet
  structure trickles down from the edge to the host.  We really don't
  have a firm grasp of all of this as there seems to be multiple
  options regarding how many addresses should be assigned to a host,
  if the MAC address should be included in the address or if that is
  just for auto-configuration purposes or what the heck the deal is.
  There are a lot of clear statements out there and a lot that are
  clear as mud.  Unfortunately, even when trying to analyze which RFC
  superseded another.  Can I just subnet it all like IPv4 but with
  room to grow or is each host really going to need its own /84 or
  something?  I can't see why hosts would need any more addresses than
  today but maybe I'm missing something because a lot of addressing
  models sure allow for a huge number of unique addresses per host.
 
  You can subnet it just like IPv4.  Each host does not need its own
 subnet (/64, not /84 for the most part).
 The theory behind /64 subnets was to support a way for a host to use
 what it already knows (MAC
 address) and possibly some additional clues (Router Announcement) from
 the wire to configure
 its own IPv6 address on an interface.  Whether or not this was a good
 idea is still controversial, but,
 whether or not it's how IPv6 is going to work is not.  IPv6 is
 designed to work with Stateless
 Autoconfiguration whether we like it or not.  DHCPv6 so far is
 prevented from providing
 default router information (or many of the other things you're used to
 having DHCP do)
 as it currently stands.
 
 
  My buddy and I are about to go to Barnes and Noble, not having and
  luck with standard internet media but then we realized...  how will
  we know if any of that is really what we are looking for either?
 
 It's a fair point.  There is a good FAQ/Wiki on the ARIN web site.
 That may be a good place to
 start.
 
  From what I can tell, 

RE: IPv6 Confusion

2009-02-17 Thread Tony Hain
Owen DeLong wrote:
 On Feb 17, 2009, at 11:28 AM, Tony Hain wrote:
 
  While people frequently claim that auto-config is optional, there are
  implementations (including OS-X) that don't support anything else at
  this
  point. The basic message is that you should not assume that the host
  implementations will conform to what the network operator would
  prefer, and
  you need to test.
 
 I can configure OS-X statically, so, that simply isn't true.
 
 What is true is that there are many implementations which do not (yet)
 support DHCPv6.  That is not the same as don't support anything
 else.

Fair enough about OS-X, but that is not the only non-dhcpv6 implementation
out there. How exactly do you configure a static address on a sensor with no
UI?

My point was 'don't assume', & test.

 
 
 
 
  One last comment (because I hear just more bits a lot in the *nog
  community)... Approach IPv6 as a new and different protocol. If you
  approach
  it as IPv4 with more bits, you will trip over the differences and
 be
  pissed off. If you approach it as a different protocol with a name
  that
  starts with IP and runs alongside IPv4 (like we used to do with
  decnet,
  sna, appletalk...), you will be comforted in all the similarities.
  You will
  also hear lots of noise about 'lack of compatibility', which is just
  another
  instance of refusing to recognize that this is really a different
  protocol.
  At the end of the day, it is a packet based protocol that moves
  payloads
  around.
 
 The problem here, IMHO, stems from the fact that unlike DECnet,
 Appletalk, SNA, et. al., IPv6 is intended as a replacement for
 IPv4. (None of the other protocols was ever intended to replace
 any of the others).
 
 As a replacement, the IETF realized that at the current scale of the
 internet when they began designing IPv6, a flag day conversion
 (like what happened when we went to IPv4) was not possible.
 Unfortunately, the migration plan set forth by the IETF made many
 assumptions (especially on vendor preparedness and rate of
 adoption prior to IPv4 runout) that have not proven out, so, the
 Everyone who has IPv4 starts running dual-stack before we
 need any IPv6 only connectivity plan is not going to prove out.
 
 More unfortunately, there is no real contingency plan for how
 migration happens absent that scenario and we are, therefore,
 in for some interesting times ahead.

Whine, whine, whine... we are where we are, and no amount of whining will
change the fact that people outside of Japan chose not to think ahead. The
primary point of dual-stack was to decouple the requirement for the end
systems to change all the apps at once. Most of the *nog community doesn't
get, or care to get, the costs of the end system operations staff because
they are focused on the costs related to moving the bits around. There are
tunneling functions defined, so you don't have to get 'dual-stack
everywhere' before you can take another step. Those are not as 'efficient'
as dual-stack when moving the bits around, and require operational
management, but that is a cost trade-off that can be made. People that
insist on delivering only one version will force unnatural acts at the edge,
while delivering both will allow people to move at their own pace. 

Like it or not, the end systems are moving without the *nog community. Fire
up uTorrent and see how many 6to4 & teredo connected peers you end up with
(I am generally seeing about 1/4-1/3 of the set). This is 'real dual-stack
at the edge', and works around the laggard ISP deployments. The Internet was
built by tunneling over the laggard telco's using the voice technology
available, and the next generation of it will be built the same way if the
*nog community buries its head in the same dark place that the telco's did.

Tony 







RE: IPv6 Confusion

2009-02-17 Thread Tony Hain
David Conrad wrote:
 On Feb 17, 2009, at 11:28 AM, Tony Hain wrote:
  Approach IPv6 as a new and different protocol.
 
 Unfortunately, I gather this isn't what end users or network operators
 want or expect.  I suspect if we want to make real inroads towards
 IPv6 deployment, we'll need to spend a bit more time making IPv6 look,
 taste, and feel like IPv4 and less time berating folks for IPv4-
 think (not that you do this, but others here do). 

I am not trying to berate anyone, just point out that your starting
perspective will impact how you see the differences. From what I have seen,
people are generally happy when they find similarities, and pissed off when
the find differences. Therefore, if you start by assuming it is different,
you will be much happier.

 For example,
 getting over the stateless autoconfig religion (which was never fully
 thought out -- how does a autoconfig'd device get a DNS name
 associated with their address in a DNSSEC-signed world again?) and
 letting network operators use DHCP with IPv6 the way they do with IPv4.

There are many religious positions here, and none are any more valid than
the others. At the end of the day, each approach needs to be complete and
stand-alone, but due to religious fighting, all approaches are required to
exist at once for anything to work. 

This being a list of network engineers, there is a strong bias toward tools
that allow explicit management of the network. This is a fine position, and
those tools need to exist. There are others that don't want, or need to know
about every bit on the wire, where 'as much automation as possible' is the
right set of tools. Infighting at the IETF kept the RA from informing the
end systems about DNS, and kept DHCPv6 from informing them about their
router. The result is that you have to do both DHCP & RA, when each should
be capable of working without the other. 

As far as dnssec, while the question is valid, blaming the IPv6 design for
not considering something that 10+ years later is still not
deployed/deployable, is a bit of a stretch. This all comes down to trust
anchors, and personally I question the wisdom of anyone that considers DHCP
to be a valid trust anchor. It gets that status because it is something
tangible that is reasonably well understood, but there is nothing in its
design, or the way that it is deployed in practice that makes it worthy of
anything related to trust. An out-of-band trust cert between an
auto-configured end system and a ddns service makes much more sense than a
DHCP service based on believing that the end system will not bother to spoof
its mac address. 

 
 Or, we simply continue down the path of more NATv4.

While this is the popular position, those that have thought about it realize
that what works for natv4 at the edge, does not work when that nat is moved
toward the core. If people really go down this path, the end applications
will do even more levels of tunneling over the few things that work, and the
network operators will lose all visibility into what is really going on
(IPv6 tunnel servers are just the modern modem pools, and tunneling over
http will become more common if that is the only path that works). 

Tony 





RE: IPv6 Confusion

2009-02-17 Thread Tony Hain
Joe Provo wrote:
 This is highly amusing, as for myself and many folks the experience
 of these 'other protocols', when trying to run in open, scalable,
 and commercially-viable deployments, was to encapsulate in IP(v4)
 at the LAN/WAN boundary.  It is no wonder that is the natural reaction
 to IPv6 by those who have survived and been successful with such
 operational simplicity.

There is nothing preventing you from doing the same thing again, ... except
oh yea, lack of addresses and the bloating routing table as ever smaller
address blocks are traded on eBay. 

Seriously, you could easily do the same thing by encapsulating IPv4 over
IPv6. One might even consider using one /64 for internal IPv4 routes
(embedding the IPv4 as the next 32 bits), then another /64 for each IPv4
peer, to reduce the number of IPv6 routes you need to carry everywhere. At
the edges where it matters there would be a /96 routing entry, but even if
all of the /96 prefixes were enumerated everywhere the table would be the
same size as the IPv4 one would have been. 
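
A minimal sketch of that mapping (Python; the /64 chosen for internal IPv4 routes is a placeholder):

import ipaddress

INTERNAL_V4 = ipaddress.ip_network("2001:db8:4::/64")   # /64 reserved for internal IPv4 routes

def v4_mapped(v4: str) -> ipaddress.IPv6Network:
    """Embed an IPv4 address as the 32 bits immediately after the /64,
    giving one /96 routing entry per IPv4 destination."""
    v4int = int(ipaddress.ip_address(v4))
    base = int(INTERNAL_V4.network_address) | (v4int << 32)
    return ipaddress.IPv6Network((base, 96))

print(v4_mapped("198.51.100.0"))   # 2001:db8:4:0:c633:6400::/96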

Tony





RE: IPv6 routing /48s

2008-11-25 Thread Tony Hain
Jack Bates wrote:

 .
 Yes and no. The test that was being run used 6to4 addresses, so every
 6to4 capable device did try to reach it via 6to4, since that is
 preferred over IPv4. If it had used non-6to4 addressing, then IPv4
 would
 had been preferred on those hosts that didn't have non-6to4 addresses.
 
 This is one reason why I believe using 6to4 addresses to be an issue
 for
 content providers. If they use non-6to4 addressing for the content,
 then
most people will prefer IPv4 except for those who have configured
 non-6to4 addresses or altered the labels to force 6to4 to work with
 non-6to4 addressing.
 
 Gads, is it appropriate to just say Native when referring to non-6to4?
 lol

Terminology is a mess, because 2001:: could be tunneled directly from the
end system, and 2002:: might be received in an RA by an end system so it
doesn't know the difference between that and any other prefix. 

In any case, content providers can avoid the confusion if they simply put up
a local 6to4 router alongside their 2001:: prefix, and populate DNS with
both. Longest match will cause 2001:: connected systems to chose that dst,
while 6to4 connected systems will chose 2002:: as the dst. There is no need
to offer transit service to the entire world, and the latency/path selection
for 6to4 to their demark will be exactly the same as the IPv4 service. 
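
A sketch of the selection this relies on -- the longest-matching-prefix rule from destination address selection (RFC 3484 era) -- with placeholder source and AAAA addresses:

import ipaddress

def common_prefix_len(a: str, b: str) -> int:
    """Length of the common leading bits between two IPv6 addresses,
    standing in for the longest-matching-prefix rule."""
    x = int(ipaddress.ip_address(a)) ^ int(ipaddress.ip_address(b))
    return 128 - x.bit_length()

source = "2002:c000:204::1"                                # a 6to4-connected client
candidates = ["2001:db8:100::80", "2002:cb00:7100::80"]    # native and 6to4 AAAA records
best = max(candidates, key=lambda d: common_prefix_len(source, d))
print(best)    # the 2002:: destination wins for a 6to4 source

A 6to4-sourced client matches the 2002:: destination on at least 16 bits, so it reaches the content provider's own 6to4 router; a natively numbered client typically matches the 2001:: destination instead.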

Access providers could offer a localized 6to4 relay by ignoring the comment
in the RFC that says 'must advertize 2002::/16', and send the appropriate
~32-40 into the IPv6 routing system that corresponds just to the customers
being served by that relay. Yes the IPv6 routing system gets bigger, but
there aren't enough people that would consider deploying a 6to4 relay to
create a --real-- problem. FUD mongers will scream, but be pragmatic and
only announce what is necessary to get some stable service to your
customers. Running this service also provides a way to document how many
people are using IPv6 despite the lack of availability on the distribution
media. Now we get constant reports of no-traffic, because it is bypassing
the local ISP monitoring system. On a recent torrent ~ 1/3 of the peers were
connecting via IPv6, and most of the content my client exchanged was via
those peers rather than the IPv4 ones. Clearly there is traffic, it is just
going over the top to work around the lack of the service that is required
...
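
To make the ~/32-/40 more-specific concrete, here is how a customer IPv4 aggregate maps into 2002::/16 (a sketch; the example block is from documentation space):

import ipaddress

def sixto4_more_specific(v4_block: str) -> ipaddress.IPv6Network:
    """Map a customer IPv4 aggregate into the matching slice of 2002::/16,
    i.e. the more-specific a localized 6to4 relay could announce instead
    of all of 2002::/16."""
    v4 = ipaddress.ip_network(v4_block)
    base = (0x2002 << 112) | (int(v4.network_address) << 80)
    return ipaddress.IPv6Network((base, 16 + v4.prefixlen))

print(sixto4_more_specific("198.51.100.0/24"))   # 2002:c633:6400::/40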

Tony




RE: IPv6 Wow

2008-10-23 Thread Tony Hain
Nathan Ward wrote:
...
 2) If Teredo relays are deployed close to the service (ie. content,
 etc.) then performance is almost equivalent to IPv4. 6to4 relies on
 relays being close to both the client and the server, which requires
 end users' ISPs to build at least *some* IPv6 infrastructure, maintain
 transit, etc. When you consider that this infrastructure and transit
 is quite likely to be over long tunnels to weird parts of the world,
 this is a bad thing. Putting relays close to the content helps for the
 reverse path (ie. content - client), however the forward path (client
 - content) is likely to perform poorly.


Not quite correct. 6to4 does not require transiting a relay if the target is
another 6to4 site. What this means is that a clueful content provider will
put up a 6to4 router alongside whatever native service they provide, then
populate the dns with both the native and 6to4 address. A properly
implemented client will do the longest prefix match against that set, so a
6to4 client will go directly to the content provider's 6to4 router, while a
native client will take the direct path. The only time an anycast relay
needs to be used is when the server is native-only and the client is
6to4-only. 

Tony