Re: IP network address assignments/allocations information?

1999-12-08 Thread Harald Tveit Alvestrand

At 21:17 07.12.99 -0500, Daniel Senie wrote:

Sounds to me like at best I'd trade a NAT box with firewalling for a
serious firewall.

Right. Insecure devices require protection, always.

  I have ZERO interest in allowing the kinds of things
you describe to occur from outside. While you may not mind someone
hacking into the microphone on your PC and using it as a bug, I am a
little less trusting.

 
  OTOH, if you combine NAT with 6to4 for home networks, the
  picture starts to look a bit better.  Think of 6to4 as the
  generic ALG that rids you of the need to have separate ALGs
  for most of the applications that NAT happens to break.

So, will any of our ISP readers go on the record as saying they'll
allow users of dialup and DSL/Cable lines to have a large block of
addresses each, instead of just a single host address?

If you do the "native" IPv6 address assignment, it's impossible to route on 
anything smaller than a /64.
You then have 2^63 addresses for manual configuration within the subnet, in 
addition to the ability to connect anything with a MAC address without an 
address clash.

So the answer is "yes".
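
To make the MAC-address point concrete, here is a minimal sketch in Python of
the modified EUI-64 interface identifier that stateless autoconfiguration
derives and appends to the 64-bit prefix (the MAC address used is made up):

    # Derive a modified EUI-64 interface identifier from a 48-bit MAC.
    # Illustrative only; the MAC address below is invented.
    def eui64_from_mac(mac):
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02                             # flip the universal/local bit
        iid = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert FF:FE in the middle
        return ":".join("%02x%02x" % (iid[i], iid[i + 1]) for i in range(0, 8, 2))

    # 00:90:27:17:fc:0f -> 0290:27ff:fe17:fc0f, the low 64 bits of the address
    print(eui64_from_mac("00:90:27:17:fc:0f"))
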
--
Harald Tveit Alvestrand, EDB Maxware, Norway
[EMAIL PROTECTED]



Re: IP network address assignments/allocations information?

1999-12-08 Thread Kim Hubbard

At 06:05 PM 12/7/99 -0800, Rick H Wesson wrote:

randy,

just because routers melt down from leaks and mis-configurations is not a
reasonable justification for ARIN's tight policies on IPv4 allocations,
which kim stated earlier were meant to keep space aggregated for router memory
requirements; adding speed and processing power to that definition still
does not justify the strict policy decisions.

-rick

Hmm, so you don't believe we should bother with aggregation of address
space?  You should come to an ARIN public policy meeting and propose
this...should be interesting :-)

Kim

On Tue, 7 Dec 1999, Randy Bush wrote:

 it's not the memory.  it's the processing power required which is quite
 non-linear.
 
 it's not the memory for the /24s in old b space, it's the horrifyingly *large*
 and *long* meltdowns caused by inadvertent leakage of bogus announcements of
 /24s in old b space.
 
 randy
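
For anyone not following what aggregation buys in the table, a minimal sketch
using Python's ipaddress module (the prefixes are invented for illustration):
256 contiguous /24 announcements collapse into a single covering /16, which is
the memory and processing argument in a nutshell.

    # What aggregation buys in the routing table; prefixes are invented.
    import ipaddress

    more_specifics = [ipaddress.ip_network("192.168.%d.0/24" % i) for i in range(256)]
    aggregate = list(ipaddress.collapse_addresses(more_specifics))

    print(len(more_specifics), "announcements before aggregation")  # 256
    print(aggregate)                                                # [IPv4Network('192.168.0.0/16')]
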
 



Re: IP network address assignments/allocations information?

1999-12-08 Thread Yakov Rekhter

Noel,

  From: Ed Gerck [EMAIL PROTECTED]
 
  maybe this is what the market wants -- a multiple-protocol Internet,
  where tools for IPv4/IPv6 interoperation will be needed ... and valued.
 
 This relates to an approach that seems more fruitful, to me - let's try and
 figure out things that sidestep this incredibly divisive, upsetting and
 fundamentally unproductive argument, and try and find useful things we can do
 to make things work better.
 
  Which can, undoubtedly, be put in a sound theoretical framework for
  NATs, in network topology. NATs do not have to be a hack.
 
 Well, the fundamental architectural premise of NATs *as we know them today* -
 that there are no globally unique names at the internetwork level - is one
 which is inherently problematic (long architectural rant explaining why
 omitted). So I don't think that the classic NAT model is a good idea,
 long-term.

I would say that the fundamental architectural premise of NATs is that
globally unique names at the internetwork layer are not carried in the
network layer header. This is not to say that such names don't exist -
just that they aren't in the IP header.
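
To make the distinction concrete, here is a minimal sketch of the header
rewriting a port-translating NAT performs (Python; the addresses and ports are
invented). The globally unique name -- the public address/port pair -- still
exists, it just is not what appears in the IP header on the inside:

    # Sketch of the address/port rewriting done by a port-translating NAT.
    # PUBLIC_IP and every address below are invented for illustration.
    PUBLIC_IP = "192.0.2.1"

    class Napt:
        def __init__(self):
            self.next_port = 40000
            self.table = {}                    # (inside ip, inside port) -> public port

        def outbound(self, src_ip, src_port):
            # Rewrite the source fields of an outgoing packet header.
            key = (src_ip, src_port)
            if key not in self.table:
                self.table[key] = self.next_port
                self.next_port += 1
            return PUBLIC_IP, self.table[key]

    nat = Napt()
    print(nat.outbound("10.0.0.5", 1025))      # ('192.0.2.1', 40000)
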

Yakov.



Re: IP network address assignments/allocations information?

1999-12-08 Thread J. Noel Chiappa

 From: Yakov Rekhter [EMAIL PROTECTED]

 the fundamental architectural premise of NATs *as we know them today*
 - that there are no globally unique names at the internetwork level

 I would say that the fundamental architectural premise of NATs is that
 globally unique names at the internetwork layer are not carried in the
 network layer header. This is not to say that such names don't exist -
 just that they aren't in the IP header.

That may be true in some future variant of NAT - and if so, I'd be *much*
happier with it (I don't have any problem with limited use of names with local
scope) - but my take is that it's not the case today.

And no, DNS names are *not* what I was thinking of when I said "names at the
internetwork level"! :-) For one thing, they don't contain *location*
information.

Noel



Re: IP network address assignments/allocations information?

1999-12-08 Thread Ed Gerck



"J. Noel Chiappa" wrote:

  From: Ed Gerck [EMAIL PROTECTED]

  maybe this is what the market wants -- a multiple-protocol Internet,
  where tools for IPv4/IPv6 interoperation will be needed ... and valued.

 This relates to an approach that seems more fruitful, to me - let's try and
 figure out things that sidestep this incredibly divisive, upsetting and
 fundamentally unproductive argument, and try and find useful things we can do
 to make things work better.

I suggest we first revisit the concept of collaboration itself.  IMO, collaboration
can no longer be understood as similar agents doing similar things at the same
time, but as different agents doing different things at different times, for the same
objective.  Then, we can build protocols that will support this notion of
collaboration, where diversity is not ironed out by hypotheses but actually
*valued* and used in interoperation.

  Which can, undoubtedly, be put in a sound theoretical framework for
  NATs, in network topology. NATs do not have to be a hack.

 Well, the fundamental architectural premise of NATs *as we know them today* -
 that there are no globally unique names at the internetwork level - is one
 which is inherently problematic (long architectural rant explaining why
 omitted).

That fundamental premise is trivially true (so, no need for a rant ;-) ). However,
this is not what I was referring to, as I think we are talking about something
even more fundamental.  A topology is, simply put, a division of space.

In these terms, data is no longer an absolute quantity.  Indeed, when thinking
about data in communication processes (networks), it has so far seemed
possible and undisputed to regard data as “information in numerical form
that can be digitally transmitted or processed”, and whose total quantity is
preserved when a system is divided into sub-systems or when different data
from different sources are compared. Actually, this picture is wrong to a large
extent, and NATs are the living proof of it -- there are natural laws in
cyberspace, too.

The very concept of data thus needs to be revisited. Suppose we define data as the
*difference* D2 - D1 that can be measured between two states of data systems.
Then, it can be shown that this difference can be measured by means of a
communication process only if 1 and 2 are two states of the same closed system.
When they are not, NATs are a solution that creates a third-system, a common
reference between 1 and 2, which can be conceptual or physical or both, but is
needed. In this formalism, a numerical value for data can be defined even though
1 and 2 may belong to different systems, or even though the data systems may be
open -- the only restriction is to have a common reference.

This is the mind-picture we need to overcome, IMO -- that data is absolute. It is
not, and this implies that we need to find "data laws" in order to describe
exchanges of data, much in the same way as we needed to develop thermodynamic
laws in order to describe exchanges of energy (itself not an absolute concept
either).

 So I don't think that the classic NAT model is a good idea, long-term.

I suggest we don't yet have a "NAT model", in the engineering sense, where
a model fits into a larger model and so on. All we have is a "NAT hack".
And I agree that the NAT hack is not a good idea, even mid-term.

 However, I think it's a bit of a logical fault to think that the only options
 are i) IPv6 and ii) NAT's.

Yes, especially NATs as they are -- somewhat born out of need, not so
much design.

  NATs ... seem to have been discovered before being modeled, that is
  all.

 Umm, not quite, IIRC. Papers by Paul Tsuchiya and Van Jacobson discussed the
 concept a long time before any were commercially available.

Discussed the concept, as one may argue that telegraph systems also did when
they needed to define telegraph codes in each station, so that different
"John Smiths" could each get their proper messages even though
they all "shared" the same name.

What I meant is not this. What I meant is an ab initio model of data in
network systems, where NATs are one instance of a third-system that is
*needed* in order to provide a common but quite arbitrary reference for
"measuring" data between different systems, without requiring any
change to them.  In such a formalism, there are data levels NATs can handle
and others they cannot, try as one may -- which needs to be recognized and
provided for, in each case, by yet other objects.

Cheers,

Ed Gerck




Re: IP network address assignments/allocations information?

1999-12-08 Thread Perry E. Metzger


Harald Tveit Alvestrand [EMAIL PROTECTED] writes:
 A /48 leaves 16 bits for subnetting, before you hit the 64 bits of flatspace.

And remember, if we ever need to, we can start subnetting the bottom
64 bits, at the loss of one form of stateless autoconf (which I'm
starting to find, in deployment, is too unpleasant to use on my nets anyway).
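
As a quick check of the arithmetic in Harald's quoted line, a minimal sketch in
Python (2001:db8::/48 is only an example prefix, not a real allocation):

    # 16 bits of subnetting sit between a /48 and the /64 boundary.
    import ipaddress

    site = ipaddress.ip_network("2001:db8::/48")
    print(sum(1 for _ in site.subnets(new_prefix=64)))   # 65536 = 2**16 /64 subnets
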



Re: IP network address assignments/allocations information?

1999-12-08 Thread Ed Gerck



Lloyd Wood wrote:

 On Wed, 8 Dec 1999, Ed Gerck wrote:

  The very concept of data thus needs to be revisited. Suppose we define data as the
  *difference* D2 - D1 that can be measured between two states of data systems.
  Then, it can be shown that this difference can be measured by means of a
  communication process only if 1 and 2 are two states of the same closed system.

 Since not all system state is communicated and any communication is a
 near-minimum abstraction of system state, this idea is a non-starter.

I understand your doubts; this is a new approach.  But communication is
not a "near-minimum abstraction of system state" -- whatever you mean
to communicate by that ;-)  The very failure of your communication in that
phrase (and my own failure to communicate to you in mine) exemplifies
my point, however.

  When they are not, NATs are a solution to create a third-system, a common
  reference between 1 and 2.  Which can be conceptual or physical or both, but is
  needed. In this formalism, a numerical value for data can be defined even though
  1 and 2 may belong to different systems, or even though the data systems may be
  open --  the only restriction is to have a common reference.
 
  This is the mind-picture we need to overcome IMO -- that data is absolute. It is
  not and this answer implies that we need to find "data laws" in order to describe
  exchanges of data much in the same way as we needed to develop Thermodynamic
  laws in order to describe exchanges of energy (itself, not an absolute concept,
  either).

 Absolute zero always seemed pretty damn absolute to me.

There is no absolute value of energy associated with absolute zero
temperature -- if that is what you mean.  There are many quantities
which are not absolute; distance is another example (besides
energy and data), and phase is another.  But there are absolute
quantities, of course.

 Taking your energy analogy further and better, NATs (and firewalls)
 are the protocol equivalent of Maxwell's demons;

No. This analogy is not correct. Note also that Maxwell's demon has
been proved not to be possible, even theoretically.

  What I meant is not this. What I meant is an ab initio model of  data in
  network systems, where NATs are one instance of a third-system that is
  *needed* in order to provide a common but quite arbitrary reference for
  "measuring" data between different systems, without requiring any
  change to them.  In such a formalism, there are data levels NATs can handle
  and others it cannot, try as one may  -- which needs to be recognized and
  provided for each case, by yet other objects.

 For every type of molecule or energy level you might encounter, you
 have to add another demon.

There are no demons here. If you agree that data is not absolute then
my explanation follows.  If you do not agree, then please tell me if
"2=2" is true or false -- it is a simple expression, a simple data point
given by "2=2".  But your answer, whatever it is, will prove it is
not absolute.

What is the significance of this?  Not to make matters more complicated but
to recognize that NATs are not demons ;-)

In other words, either we have *one* closed data system (IPv4, IPv6, etc.) where
we can easily define data values by difference in data states (where an
arbitrary value of zero is assigned to a system-wide reference state), or
we have *many* systems where we need NATs to provide reference states
between different systems in order to communicate between them.

Since IPv6 defines a larger system, it can encompass a series of different
IPv4 systems linked by NATs. However, since we *do* expect to encounter
IPv4 systems even if IPv6 is extremely successful (say, it takes over 80% of
the universe), it follows that we will always need NATs to provide a
common reference between different systems.  Thus, it is worthwhile IMO
to model them and use them well, not demonize them :-)))

Cheers,

Ed Gerck