Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Jeroen Massar wrote: John Stracke wrote: Jeroen Massar wrote: Ad-hoc networks are another similar case, where two machines are connected via ad-hoc wireless, bluetooth, firewire, or similar. Otherwise, do you like remembering and typing out 128-bit addresses?? :) :: is your friend. If you're building an ad hoc, point-to-point network, you can pick convenient addresses. :: as in all 0's, which corresponds to 'not bound'? No, as in a string of 0s. If you set up your own isolated network, you can make one host be 1::1 and the other 1::2. Most OS's require a (unique) hostname to be entered/automatically generated on install False. And is there any reasoned argument instead of the simple 'false'? It seems pretty obvious: no OS can require a unique hostname at install time, because it has no way of checking uniqueness. The Unices I've installed (various versions of Solaris and Linux), even if they prompt for a hostname, will accept the default of localhost.localdomain. In addition, many, many machines (especially those bought preinstalled) are installed from standardized images, and have standardized hostnames. -- /\ |John Stracke |[EMAIL PROTECTED] | |Principal Engineer|http://www.centive.com | |Centive |My opinions are my own. | || |God does not play games with His loyal servants. Whoo-ee,| |where have you *been*? --_Good Omens_ | \/
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
The lack of IPv6 literal address support in the version of wininet.dll that shipped with Windows XP was for reasons of engineering expediency, in other words, MS deliberately shipped a broken product. Oh, look, release notes, known issue statements, bugtracker entries... Seems like everybody is deliberately shipping broken products... Yeah, I was chastised in private mail for shooting the messenger, and appropriately so. It is a bit difficult for me to understand how support for address literals could be considered such a low priority that it could be omitted from a shipping product. Recently when I wrote an app that used v6 and URLs, the first routine I wrote was one that would take either an address literal or a DNS name, plus a port number, and return a properly filled-in sockaddr structure (no, getaddrinfo by itself isn't sufficient), and the second one I wrote was one that would parse a URL and extract either a DNS name or address literal from that. Writing both routines, and the test cases, and testing the routines on several different platforms took about 2 hours. Admittedly I'm a bit biased. I've measured the percentage of email delivery failures due to various reasons and discovered that DNS misconfiguration was high on the list (MTA misconfiguration was also high). I've also attempted to measure the amount of delay caused by DNS lookups. So I understand better than most why it's important - for both diagnostic and performance reasons - to support address literals. That and I suspect there's a philosophical difference in writing APIs. I believe in thinking hard about what an API needs to do before writing it, so that the API, once implemented, can be used for a wide set of purposes. That and I try hard to only have to implement the API once, because the overhead in context-switching my brain back to the API when I need to add functionality to it is almost certainly larger than the effort required to completely implement the API the first time.
If it's too difficult to implement it completely, there's probably something wrong with the design. I do understand and use stubbing, but I regard that as a technique to be used for quick prototypes that are going to be discarded anyway. I also understand the notion of biasing the testing toward the most frequently-used features on the assumption that such testing will uncover the most common bugs. But there are important features that are not frequently used, and there are bugs that are important to fix even though they are in seldom-used code. Features used for diagnostics, and security bugs, are good examples of these. Then there's the problem that when an 800-pound gorilla ships code, that code largely defines expectations for what will and will not work in practice - often more so than the standards themselves. So if MS ships code that doesn't support address literals, then nobody will attempt to use address literals unless they can detect that the client isn't using MS code. For this and other reasons, I believe that 800-pound gorillas have a greater responsibility than their competitors to ship code that works properly. OTOH, servers that take the trouble to recognize that the client isn't a MS client can provide their clients with significantly better response time by giving those clients address literals in internal references (either in HTTP referrals or in HTML).
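The kind of routine described above - one that accepts either an address literal or a DNS name plus a port, and returns resolved socket addresses - can be sketched in a few lines of Python. The function name and structure here are my own assumptions, not the poster's actual code; the point is that a literal (including an RFC 2732 bracketed IPv6 literal) is handled without ever touching the DNS:

```python
# Hypothetical sketch, not the poster's code: resolve a target that may be
# either an address literal or a DNS name.
import socket

def resolve_target(host: str, port: int):
    """Return a list of (family, sockaddr) tuples for a literal or name."""
    # Strip RFC 2732 brackets from an IPv6 literal such as "[2001:db8::1]".
    if host.startswith("[") and host.endswith("]"):
        host = host[1:-1]
    try:
        # Try to interpret the string as a literal first; AI_NUMERICHOST
        # guarantees no DNS query is made.
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM,
                                   flags=socket.AI_NUMERICHOST)
    except socket.gaierror:
        # Not a literal: fall back to a normal (DNS) lookup.
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    return [(family, sockaddr) for family, _, _, _, sockaddr in infos]

print(resolve_target("127.0.0.1", 25))  # IPv4 literal, no DNS involved
print(resolve_target("[::1]", 80))      # bracketed IPv6 literal
```

The literal-first ordering matters for the diagnostic use case the poster mentions: it works even when DNS itself is the thing being debugged.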
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Are there apps for which IPv6 is enabled that -can not- use address literals? If so, then Steve is wrong and the DNS has become critical infrastructure to the working of the Internet. anyone who believes that the DNS is not critical infrastructure for just about every single purpose the Internet is used for is either living in a fantasy world or has redefined the Internet to be something that's strictly at layer 3 and below. agreed. but there's a difference between saying that DNS is critical infrastructure and that it's appropriate to use DNS every time an address is needed. DNS is necessary, not sufficient. Keith
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Jeroen Massar wrote: Ad-hoc networks are another similar case, where two machines are connected via ad-hoc wireless, bluetooth, firewire, or similar. In any other way do you like remembering and typing over 128bit addresses?? :) :: is your friend. If you're building an ad hoc, point-to-point network, you can pick convenient addresses. Most OS's require a (unique) hostname to be entered/automatically generated on install False.
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
The lack of IPv6 literal address support in the version of wininet.dll that shipped with Windows XP was for reasons of engineering expediency, in other words, MS deliberately shipped a broken product. I do, however, also remember a discussion on one of the IPv6 mailing lists about this, and it seemed that there were several members of the IPv6 community at large who thought it was great that we weren't currently supporting them. Apparently there are those who think hard-coding IP addresses (of any version) in URLs is a bad idea. yes, there are those who think that it's desirable to force all apps to suffer the delays and unreliability of DNS every time a connection is established. they are deluded. Keith
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Sounds like you both are arguing that the DNS has become embedded and the applications that use IP are unusable without a working DNS. as a practical matter, this was true even in IPv4. yes, you can often use address literals in either v4 or v6 apps, but this isn't practical for ordinary users on an ordinary basis. and in both v4 and v6, several essential apps (e.g. email, the web) have explicit dependencies on DNS. yes you can use address literals in email addresses and URLs but there is no assurance that an email address or URL with an address literal is equivalent to the same address or URL with a domain instead of the address. Both email and the web define their resources in relation to a DNS name, not relative to a host or address. of course it is possible to write apps that do not use DNS, but this is rarely done. Keith
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
There was some discussion about this deprecation as the Techpreviews (Win2k/NT4) did support literal url's. The XP version and up though won't support it to overcome one major 'problem': website 'designers' embedding IP's inside websites to 'speed things up' (go figure). perfectly reasonable thing to do. browsers that don't support it are broken.
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
There was some discussion about this deprecation as the Techpreviews (Win2k/NT4) did support literal url's. The XP version and up though won't support it to overcome one major 'problem': website 'designers' embedding IP's inside websites to 'speed things up' (go figure). perfectly reasonable thing to do. browsers that don't support it are broken. It's perfectly reasonable to not support RFC2732? or? it's perfectly reasonable for sites to use address literals in URLs referenced by their web pages.
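For reference, the RFC 2732 syntax under discussion puts an IPv6 address literal inside square brackets so its colons don't clash with the port separator. A quick illustration with Python's standard URL parser (the address is a documentation prefix, chosen arbitrarily):

```python
# RFC 2732 bracketed IPv6 literal in a URL, split into its components.
from urllib.parse import urlsplit

url = "http://[2001:db8::1]:8080/index.html"
parts = urlsplit(url)
print(parts.hostname)  # 2001:db8::1  (brackets stripped by the parser)
print(parts.port)      # 8080
```

This is exactly the form that the wininet.dll version being discussed failed to accept.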
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
(i) RFC 2821 can be read (and was intended to be read) to prohibit the use of an address literal in a HELO or EHLO command unless the relevant host has no DNS name. (sections 3.6, 4.1.1.1, 4.1.4) these days it's sort of odd to think that a host has a distinguished DNS name - hosts quite ordinarily have either an ephemeral DNS name, multiple DNS names, or no DNS name. (ii) The use of address literals is described as a mechanism to bypass a barrier, not one for normal use (RFC2821, section 4.1.3) right. about the only reasonable use of an address literal is for testing, or to reach the postmaster at a particular host associated with a particular address (since postmaster is the only address that is guaranteed to be valid when associated with an address literal - and even this is often not true in practice) (iii) On the other hand, the address literal should still be provided in the From clause of a Received field. Received field information is expected to not be picked up by other software and protocols, but the inclusion of address information there is very leak-friendly. this is different from using address literals in addresses. email addresses are defined relative to DNS names because you cannot properly send mail to an email address without an MX lookup. OTOH MTAs are still expected to be hosts with addresses. of course it is possible to write apps that do not use DNS, but this is rarely done. Yep. And as pointed out earlier, we have pushed back strongly against such protocol proposals and implementations. many apps that are used in practice are not standardized; we need to be careful about believing that what's good for standard apps is good for every app. I could certainly make a case for some apps to have their own naming systems and their own name-to-address lookup mechanisms independent of DNS, or more generally, for alternate means of mapping resource names (say URNs) to IP addresses that did not involve DNS names or DNS queries.
But it's difficult to believe that such apps would not employ DNS names at all - if nothing else, for initial configuration.
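For concreteness, the RFC 2821 address-literal forms being discussed wrap an IPv4 address in plain brackets and prefix an IPv6 address with an `IPv6:` tag. A small helper (my own sketch, not from any MTA) that produces both forms:

```python
# Sketch: form the RFC 2821 address-literal for an IP address, as it would
# appear in an EHLO argument or the From clause of a Received field.
import ipaddress

def smtp_address_literal(addr: str) -> str:
    """Return the RFC 2821 address-literal form of an IP address string."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        return f"[{ip}]"           # e.g. EHLO [192.0.2.1]
    return f"[IPv6:{ip}]"          # e.g. EHLO [IPv6:2001:db8::1]

print(smtp_address_literal("192.0.2.1"))    # [192.0.2.1]
print(smtp_address_literal("2001:db8::1"))  # [IPv6:2001:db8::1]
```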
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
of course it is possible to write apps that do not use DNS, but this is rarely done. why not just embed the ip addresses in the data payloads? death to nats! :-)
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Tony Hain wrote: Margaret Wasserman wrote: Of course, in the case of site-local addresses, you don't know for sure that you reached the _correct_ peer, unless you know for sure that the node you want to reach is in your site. Since the address block is ambiguous, routing will assure that if you reach a node it is the correct one. That's backwards: Since the address block is ambiguous, routing *cannot* assure that if you reach a node it is the correct one. Nobody can, because we equate addresses with identities. Consider a peer-to-peer conferencing session, with three participants A, B, and C. A and B are at the same site; C is at a separate site; both sites use the same range of site-local addresses. Each has two addresses, AG, BG, CG and AL, BL, CL (Global and Local). A initiates the session by connecting to B and C (assume for the moment that this is not a problem). B and C provide A with their addresses; to complete the mesh, A tells B to connect to C at CG or CL. Now, B isn't going to connect to *both*, so it'll have some heuristic to pick one. Suppose it picks CL (*). But, whoops, B's site has some host D, with DL==CL. So B winds up connecting to the wrong host, and doesn't realize it. (*) Not an unreasonable supposition. If the app is looking at the addresses, it might well notice that CL is on a locally attached subnet, and use that. Or the app might connect to both in parallel (non-blocking connect()), and use the address it reaches first, as a first cut at discovering the most efficient path (that's what I did when I implemented this some time back). Being on the same network, D will probably respond before C.
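The ambiguity in the scenario above can be made concrete with Python's ipaddress module. The specific addresses below are made up for illustration: site-local addresses (fec0::/10) drawn from two *different* sites can be numerically identical, so nothing at the addressing layer distinguishes C's local address from D's:

```python
# Sketch of the CL == DL collision: two hosts at two different sites,
# each numbered out of the same ambiguous site-local block.
import ipaddress

site_local = ipaddress.ip_network("fec0::/10")

CL = ipaddress.ip_address("fec0::1:2")  # host C's local address, site 1
DL = ipaddress.ip_address("fec0::1:2")  # host D's local address, site 2

assert CL in site_local and DL in site_local
print(CL == DL)  # True -- B cannot tell which site's host the address names
```

Globally unique addresses make this comparison meaningful again, which is the core of the argument against SL.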
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Stephen Sprunk wrote: I've dealt with many companies interconnecting where both use RFC1918 space -- NAT is the first thing discussed. You forget, these people are connecting for a _business reason_ and there is real money to be lost if they mess up. And how much real money do they lose by having to work around those NATs?
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
on 3/31/2003 11:01 AM Bill Manning wrote: It may be worth noting that RIRs have -NEVER- made presumptions on routability of the delegations they make. Probably more accurate to say that they have never guaranteed routability. They make all kinds of presumptions about routability. One of the reasons they claim to refuse (say) a private /24 is that it isn't going to be widely routable. By default, this implies that larger delegations are presumed to be routable, or else they wouldn't assign them either. -- Eric A. Hall http://www.ehsco.com/ Internet Core Protocols http://www.oreilly.com/catalog/coreprot/
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
--On Tuesday, April 01, 2003 11:33:46 -0800 Bill Manning [EMAIL PROTECTED] wrote: Are there apps for which IPv6 is enabled that -can not- use address literals? If so, then Steve is wrong and the DNS has become critical infrastructure to the working of the Internet. anyone who believes that the DNS is not critical infrastructure for just about every single purpose the Internet is used for is either living in a fantasy world or has redefined the Internet to be something that's strictly at layer 3 and below. there are advantages to being able to keep the layer 3 infrastructure running for a while (hours, I think) without referring to the DNS. But for end-user purposes, the DNS being down equates to the Internet being down. Infrastructure doesn't get much more critical than that. Harald
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
applications cannot be expected to deal with filters in any way other than to report that the communication is prohibited. the well known flag exists and is called ICMP. Well, that is emphatically *NOT* what application developers do. They do not just observe that it does not work, they try to work around, e.g. routing messages to a different address, at a different time, through a third party, or through a different protocol. that's because (a) customers demand that apps work even in the presence of NAT, no matter how unreasonable this is, and (b) the way most filters are implemented, there's no good way for an app to tell whether a communications failure is due to a network or host failure or to a prohibition. ICMP is the only mechanism we have to do this, and it's not widely used. Silently dropping packets is certainly not the right way to get an application to stop trying. ICMP messages won't achieve that either: since ICMP is insecure, it is routinely ignored. ICMP may be routinely ignored, but on false premises. ICMP is no less secure than anything else in TCP or IP that is routinely trusted. Which actually poses an interesting question: when should an application just give up? IMHO, there is only one clear-cut case, i.e. when the application actually contacted the peer and obtained an explicit statement that the planned exchange should not take place -- the equivalent of a 4XX or 5XX error in SMTP or HTTP. I'd claim that ICMP prohibited is another case for giving up. note that a 4xx error is an explicit it's okay to retry indication. Keith
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Well, that is emphatically *NOT* what application developers do. They do not just observe that it does not work, they try to work around, e.g. routing messages to a different address, at a different time, through a third party, or through a different protocol. Indeed, correctly coded applications will use a getaddrinfo() and then a connect() in a loop until successful. it's perfectly reasonable to connect to an address without first doing a DNS lookup. even when you need to do a DNS lookup, getaddrinfo() doesn't always do what you need. Keith
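The getaddrinfo()/connect() loop referred to above might look like the following Python sketch (the function name and parameters are mine, not from the thread): each address returned by the lookup is tried in order until one connect() succeeds.

```python
# Sketch of the canonical "try every address until one works" loop.
import socket

def connect_any(host: str, port: int, timeout: float = 5.0) -> socket.socket:
    """Connect to the first address of `host` that accepts a connection."""
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)
            return s          # first address that answers wins
        except OSError as e:
            last_err = e      # remember the failure, try the next address
            s.close()
    raise last_err or OSError("no addresses returned")
```

Note that nothing in this loop requires `host` to be a DNS name - an address literal works too, which is Keith's point: the loop and the lookup are separable.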
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Tony Hain wrote: Margaret Wasserman wrote: Of course, in the case of site-local addresses, you don't know for sure that you reached the _correct_ peer, unless you know for sure that the node you want to reach is in your site. Since the address block is ambiguous, routing will assure that if you reach a node it is the correct one. This FUD needs to stop! Right up till the point where two companies start communicating with one another directly with site-locals. Even if there is a router frob to keep the scopes scoped, you can bet it won't be used until someone realizes that the above problem occurred. Eliot
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
--On Monday, March 31, 2003 12:17:44 -0800 Eliot Lear [EMAIL PROTECTED] wrote: Since the address block is ambiguous, routing will assure that if you reach a node it is the correct one. This FUD needs to stop! Right up till the point where two companies start communicating with one another directly with site-locals. Even if there is a router frob to keep the scopes scoped, you can bet it won't be used until someone realizes that the above problem occurred. In every network (well, larger than a single subnet behind a firewall, that is) I've seen, where there were RFC1918 addresses routed on the inside, these things happened, although in v4-land. It is madness. It must stop. With v6, we can make it stop. So, SL must go away, for it is an invitation to madness. All things SL is claimed to solve are solveable with unique addresses too, as long as you've got enough of them. The rest is just simple (perhaps tedious) work that every operations-aware person I know of would prefer to madness. -- Måns Nilsson http://vvv.besserwisser.org
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Indeed, correctly coded applications will use a getaddrinfo() and then a connect() in a loop until successful. it's perfectly reasonable to connect to an address without first doing a DNS lookup. I think nobody can help you if you are using hardcoded IP's. The only case you have an IP without DNS is when you get it passed from another layer/entity (e.g. in an FTP from the server). uh, no. you can get IP addresses from any number of sources other than DNS, including from other processes that exist on other nodes. It's a perfectly reasonable thing to do. Can you identify those so that getaddrinfo() can be expanded to fix these cases? getaddrinfo() cannot be fixed. its major premise - that the host has the knowledge to make decisions about which of several addresses is best to use - is fundamentally flawed, except in a few corner cases.
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
On Mon, 31 Mar 2003 15:43:38 -0600 Matt Crawford [EMAIL PROTECTED] wrote: All things SL is claimed to solve are solveable with unique addresses too, as long as you've got enough of them. The rest is just simple (perhaps tedious) work that every operations-aware person I know of would prefer to madness. All right, how do you make internal site communications completely oblivious to a change in your externally-visible routing prefix? You declare that any app that keeps connections around for more than some time period T (say, 30 days) must have a mechanism for detecting and recovering from prefix changes. That solves the problem for all apps, not just for local apps.
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
On Mon, 31 Mar 2003 15:49:03 -0600 Matt Crawford [EMAIL PROTECTED] wrote: Let's assume that there is a FooBar server in SiteA. If another node in SiteA (NodeA) is communicating via a multi-party application to a node in SiteB (NodeB), and wants to refer NodeB to the FooBar server in SiteA, what does it do? I thought we agreed, completely outside of IPv6 concerns, that shipping addresses in application data was bad. what's this we stuff? some individuals may have thought this was bad, but there's never been widespread agreement. actually it's bad to force all apps to use DNS names - which are often less reliable, slower, less correct, and more ambiguous than IP addresses.
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
On Mon, 31 Mar 2003 16:12:51 -0600 Matt Crawford [EMAIL PROTECTED] wrote: All right, how do you make internal site communications completely oblivious to a change in your externally-visible routing prefix? You declare that any app that keeps connections around for more than some time period T (say, 30 days) must have a mechanism for detecting and recovering from prefix changes. That solves the problem for all apps, not just for local apps. Ah, well, if we're allowed to solve problems by fiat, let's just declare that everyone do the right thing about site-local addresses, automatically drop unauthorized packets, end hunger and violence, and brush their teeth. well, it's about like declaring by fiat that all apps should always use DNS names, that apps should never use IP addresses, and that DNS should be aware of network topology -- without bothering to consult with apps writers to see whether this will actually work. look, we've basically got three choices for address stability. either (a) sites never renumber, (b) they renumber occasionally, or (c) they NAT. we haven't figured out how to make (a) work and allow routing to scale, or to allow enterprise networks to split or merge, etc. we have tried very hard to work around the problems with (c) and failed miserably. (b) is the only remaining option. so it's not so much a matter of declaring 'by fiat' that apps need to be able to survive renumbering, as setting expectations for which apps need to be able to survive renumbering while they're running. and it appears feasible to set expectations in such a way that most apps need not worry about it. Keith
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
This has nothing to do with sitelocal but more with the fact that a host can have multiple paths from A to B: internet ;) multiple paths does not imply multiple addresses.
RE: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Did anybody consider just handing out a /48 (or a bit smaller) automagically with each DNS registration? --On Friday, March 28, 2003 10:36 AM -0800 Tony Hain [EMAIL PROTECTED] wrote: John C Klensin wrote: Tony, I've been trying to get my mind around the various issues here, and I keep getting back to the same place, so I think I need to embarrass myself by making a proposal that I find frightening. Let's assume, as I think you have suggested, that SL is all about local addresses and filtering, and not about some special prefix that applications need to recognize. I'm still not sure I believe that, but let's assume it is true and see where that takes us. Let's also remember the long path that got us to CIDR and 1918. Our original position was that anyone using TCP/IP (v4) should have unique address space. I remember many discussions in which people were told don't just grab an address on the theory that you would never connect. Our experience has been that, sooner or later, you might connect to the public network, or connect to someone else who has used 'private' (or 'squatter') space... unique addresses will save you, and everyone else, a lot of trouble. In that context, 1918 and its predecessors came out of two threads of developments: * we were running short of addresses and wanted to discourage unconnected (or hidden) networks from using up public space and * we hoped that, by encouraging such isolated networks to use some specific address ranges, those ranges could be easily and effectively filtered at the boundaries. We can debate how well either really worked, or what nasty side-effects they caused, but probably it makes little difference in the last analysis except to note that, no matter what we do, leaks happen. Now one of the problems IPv6 was supposed to solve was too little address space or, more specifically, our having to make architecturally bad decisions on the basis of address space exhaustion. I hope we have solved it. 
If we haven't, i.e., if the address space is still too small, then the time to deal with that issue is right now (or sooner), before IPv6 is more broadly deployed (and it better be variable-length the next time, because, if we are conceptually running short of space already, it would be, IMO, conclusive proof that we have no idea how to specify X in X addresses will be enough). But suppose we really do have enough address space (independent of routing issues). In that context, is site local just a shortcut to avoid dealing with a more general problem? Should we have an address allocation policy that updates the policies of the 70s but ignores the intermediate 'we are running out' steps? Should I be able to go to an RIR and ask for unique space for an isolated network, justify how much of it I need, and get it -- with no promises that the addresses can be routed (and, presumably, without pushing a wheelbarrow full of dollars/ euros/ yen/ won/ yuan/...)? The problems with this theory are that a registry costs money to run, and it requires an organization to expose their business plan (never mind figuring out who is really qualified to judge the validity of any given justification). Even when the big bad US Gov. was picking up the tab, there were cost control measures that required someone to validate the request (I was one such sanity checker). If we create a space that requires registration, it will become a simple -biggest wallet gets the most space- arrangement, because it is in the financial interest of the registry to accept all requests. The only push back to that is to set the price per prefix high enough that the registry doesn't need more cash to run, but that, and the recurring nature of those costs, will cause people to avoid the registry and use random numbers. The other point in this is that you can't force people to register until there is a technical reason for it, like making routing work.
Of course, this takes us fairly far onto the path of having to think about multihomed hosts, not just multihomed LANs, but, as others have pointed out, the notion of multiple addresses (or multiple prefixes) for a given host (or interface) takes us rather far down that path anyway. Figuring out which address to use is a problem we need to solve, with or without SL, or the whole idea of multiple addresses on hosts, especially dumb hosts, is going to turn out to be a non-starter. And, as Louis, Noel, and others have pointed out, it is hard. But, if we can find a solution, even one that is just mostly locally-optimal and that fails occasionally, then it seems to me that your position ultimately gives no advantages to a reserved site-local form over unique, but non-routable, addresses. The advantages of the latter appear obvious, starting with being able to identify the sources of address leaks and the notion that routability is a separate negotiation with providers (and their peers and other customers) and
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Did anybody consider just handing out a /48 (or a bit smaller) automagically with each DNS registration? Routing Table Bloat. If you can figure out how to do this in a CIDR aggregation context, or otherwise work around the table problem, the IETF and NANOG will quite certainly jointly nominate you for sainthood. ;) ...right after you get lynched for heresy. Keith
Re: Thinking differently about the site local problem (was: RE:site local addresses (was Re: Fw: Welcome to the InterNAT...))
Tony is right -- any registration process costs resources. agreed, though the cost of registering a domain name should serve as a useful upper bound. at least with address blocks you don't have to worry about I18N, trademark infringement, etc. But, if these addresses are assumed to be not routable, then there shouldn't be any routing table bloat. Put differently, one can conceive of three ways to get addresses: * From an RIR, as PI space * From an ISP, as PD CIDR space. * From some other process, as long-prefix, almost certainly unroutable, isolated space. actually it's highly desirable if such addresses *are* routable by private agreement, just not by default. I don't see why we shouldn't be able to choose from the above three options.