I think we're missing the point.

There is a perceived need by "many" operators for an address space that can
be used for 'local' purposes, where 'local' means 'not part of the global
internet'.

We need to dispel that perception.


Operators are free to filter traffic going to and from their networks as they see fit. What's not reasonable is for the architecture to tie policy, or any expectation of filtering, to the kind of address prefix in use.

- it over-constrains use of the address space for more legitimate purposes
- it conflicts with the desirability of being able to communicate between networks using non-global addresses
- the notion of 'local' is too vague and too subject to variation
- any attempt to encode policy in address bits is going to be too inflexible to be applied to all apps and/or all hosts on a network - this is true even if you restrict it to "appliance" hosts
- the idea that 'local' hosts are somehow trustworthy is a very dubious one
- it imposes an onerous burden on apps that are expected to work across "local prefix" boundaries


I agree that it's desirable to have a way for the network to communicate policy to hosts and apps, but this isn't a suitable mechanism for doing so.

(If we want to overload addresses, there are lots of other ways to overload them that would be useful. e.g. why not encode TOS in address bits? after all, existing routing policy mechanisms would make it easy to implement.)

> Fundamentally, there are 2 requirements:
>
> - Free (both financially and administratively)
> or within epsilon of free.
>
> - Approximately unique
>
> Operationally, we impose an additional requirement:
>
> - Using such space should not add to the size of the global routing tables.

No problem with any of the above - just with the idea that these are "local" as opposed to "non-routable in the public network". And the current proposal satisfies the above criteria quite nicely.


> allowing such addresses to be globally
> routeable has two drawbacks:
> - affects global routing tables, possibly badly
> - raises 'approximately unique' requirement to 'unique'
>
> Introducing compulsory filtering outside whatever administrative boundaries
> these addresses have removes these two drawbacks.

That's the wrong way to define it. People need to be able to route between private networks even if some or all of those networks are not connected to the public internet. So we need to be able to route between networks that use non-global addresses.


> Assuming the 'unique enough' mechanism is unique enough, the only difference
> between a 'local' address and a 'global' address which is administratively
> filtered is that EVERYBODY is filtering the local address,

only at the boundaries between their networks and the public internet. They're free to filter or not filter such addresses at private interconnections between their networks and other networks, and free to provide transit for non-global addresses as well.


> - To applications? None. A local address may be treated as a global
> address, and will function identically to any filtered global address. The
> only difference is that the local address prefix provides a strong hint that
> the address WILL be filtered, thus increasing the ability of an application
> to make address selection choices if it so wishes.

False. Apps can't expect networks to filter all traffic using non-global addresses at their boundaries, because there are too many reasons why networks using non-global addresses might need to exchange traffic over private links. Apps can't even tell whether a potential peer that uses a global address is reachable, since some networks using non-global addresses will still need to exchange traffic over private links with other networks that use global addresses.
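
Put another way: since the address bits tell an app nothing reliable about reachability, the only robust strategy is to attempt all candidate addresses rather than prune them by prefix. A minimal sketch (the function name `connect_any` is mine, not from any draft):

```python
import socket

def connect_any(host: str, port: int, timeout: float = 3.0) -> socket.socket:
    """Try every address resolved for `host`, in order, instead of
    discarding candidates based on their prefix.  Whether a given
    address is reachable depends on filtering policy along the path,
    which an app cannot infer from the address bits alone."""
    last_err = None
    for family, type_, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, type_, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)
            return s  # first address that actually works
        except OSError as err:
            last_err = err
            s.close()
    raise last_err or OSError("no addresses for " + host)
```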


> - To routers and infrastructure? Many of these devices SHOULD have filters
> configured to discard local addresses.

Except at the boundaries between their networks and the public internet, this is entirely a matter of local network policy.
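
To make the policy split concrete, here's a toy sketch of what a border filter decides, assuming an FC00::/7-style local-use prefix (the prefix choice and function name are illustrative assumptions, not part of any draft text):

```python
import ipaddress

# Assumed local-use prefix for illustration (FC00::/7-style).
LOCAL_PREFIX = ipaddress.ip_network("fc00::/7")

def drop_at_public_boundary(src: str, dst: str) -> bool:
    """Border-with-the-public-internet policy: discard packets whose
    source or destination falls in the local-use prefix.  On private
    interconnections, a network is free not to apply this filter at
    all, or even to provide transit for such addresses."""
    return (ipaddress.ip_address(src) in LOCAL_PREFIX
            or ipaddress.ip_address(dst) in LOCAL_PREFIX)
```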


> Nothing breaks that won't break already.

False. People can certainly configure their networks now in ways that will break things, and of course this won't change. But what you are proposing is to _encourage_ people to configure their networks in ways that will break things, while at the same time making their networks less flexible. (e.g. by requiring any network that wants to connect with another network to acquire global addresses whether or not it connects to the public network).


> The cost to those that don't want
> to use these addresses is minimal.

False. If we were to adopt what you propose, everyone would be burdened with apps that tried to cope with a hodgepodge of addresses with varying and uncertain reachability. This would increase the cost for everyone.


> And some people gain functionality they
> particularly want.  Where is the problem?

See above. Just because people want a certain kind of functionality doesn't mean we should implement it in the way that seems obvious to people who haven't considered all of the implications.


> I'll offer one comment on the Hinden/Haberman draft. Although I'm generally
> in favour of it, the draft completely partitions the space into that which
> is registered and that which is allocated using the provided random
> algorithm. No space is left for alternative mechanisms, such as MAC based
> allocation presented in draft-white-auto-subnet-alloc or
> draft-hinden-ipv6-global-site-local.

Good point. Offhand I'd think that a MAC could be used as an alternative way to generate a "random" number - run it through MD5 or some such.
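
A minimal sketch of that idea, assuming an FD00::/8 prefix with a 40-bit pseudo-random global ID and /48 length as in the Hinden/Haberman draft (the function name and the optional salt are mine; the draft specifies its own algorithm):

```python
import hashlib

def local_prefix_from_mac(mac: str, salt: bytes = b"") -> str:
    """Derive a pseudo-random 40-bit global ID by running a MAC
    address through MD5, as suggested above, and build an
    FD00::/8-style /48 prefix from it.  A salt (e.g. a timestamp)
    could be mixed in to decorrelate IDs from the same MAC."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    digest = hashlib.md5(raw + salt).digest()
    h = digest[:5].hex()  # low-order 40 bits of the hash
    return f"fd{h[0:2]}:{h[2:6]}:{h[6:10]}::/48"
```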


Keith


--------------------------------------------------------------------
IETF IPv6 working group mailing list
[EMAIL PROTECTED]
Administrative Requests: https://www1.ietf.org/mailman/listinfo/ipv6
--------------------------------------------------------------------
