And yes, additional IP addresses were going to cost dramatically more. NAT
was a simple case of economics... but on the other hand, I don't experience
any "lack" because of it. I don't play UDP-based games or employ any of the
other relatively new protocols that are so sensitive to
The NAT problems only
start when the protocol carries IP address/port information (such
as the FTP 'PORT' command), and the NAT isn't aware of that protocol's
translation requirements
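The FTP case above can be sketched concretely. This is a minimal, hypothetical illustration (not real NAT code) of why a NAT needs an application-level gateway for FTP: the PORT command carries an address and port inside the payload, so header translation alone leaves the private address visible to the peer.

```python
def rewrite_port_command(line, ext_ip, ext_port):
    """Rewrite an FTP 'PORT h1,h2,h3,h4,p1,p2' command so the
    address/port it carries matches the NAT's external binding."""
    if not line.upper().startswith("PORT "):
        return line  # nothing address-bearing to translate
    octets = ext_ip.split(".")
    p1, p2 = divmod(ext_port, 256)  # port encoded as two decimal octets
    return "PORT %s,%d,%d" % (",".join(octets), p1, p2)

# A client behind a NAT advertises its private address in the payload;
# the NAT's ALG must rewrite the command, not just the IP/TCP headers.
inside = "PORT 192,168,1,10,4,1"   # private 192.168.1.10, port 1025
outside = rewrite_port_command(inside, "203.0.113.7", 50000)
print(outside)  # PORT 203,0,113,7,195,80
```

A NAT that lacks this protocol awareness passes the private address through untouched, and the data connection fails.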
this is a popular misconception; it's a bit like saying that y2k
only breaks programs that store years in
It seems to me that we may be able to recapture some aspects of end-to-end
transparency at the application level if addressing issues are focused on
host FQDNs, rather than IP addresses.
this works to some extent. it specifically doesn't work for applications
that need to rendezvous with
Is this something that you think is an inherent flaw in DNS?
Inherent flaw in the DNS: probably not. Inherent flaws in implementations of
DNS (including, of course, ISC's BIND) and things in front of the DNS:
probably. It is far too easy to do the wrong thing.
this is worth
Specific processes can be and almost always are identified by a port number.
uh, no. IP address + port number often works, but port number alone
never works. and FQDN + port number doesn't work nearly as well
as IP address + port number, because there are too many cases
where there are
I'm not advocating one technology over another. I am claiming that in the
IPV4/Private/Public/NAT world, a bigger pool of Private space would be a big
help to many organizations.
I think this is a fine idea. What we need is to reserve enough private
address space so that each organization
NAT can be used for a variety of things. Perhaps we can agree that it's
a good hammer when the nail is a home network, and concentrate on what
to do about the large corporation issue.
NAT is a good hammer for a home network if and only if the only
purpose of a home network is to allow
Is this really the "right" model for that sort of interaction? Personally,
my home network (in which every light bulb *will* be on the 'net within
the year) is not something I want end-to-end connectivity to.
why not?
seems like if you want your light bulbs to be independently addressable
or
I think it makes sense to consider a boundary (firewall+ALG) that defines
a "trusted zone" within the house, establishes ACLs for a given
"connection", be it a tunnel or otherwise, defined by an authentication
event, and mediates the activity over that connection as long as it's
active.
There is also a potential scaling issue with using multiple addresses
as a general-purpose multihoming mechanism. This is because, if that
is the case, most Internet hosts will end up with multiple
addresses.
I don't see why this is inherently a problem.
it's a problem because
Not just "discussed". Tin Tan Wee has been working on this publicly for a
couple of years and now has a well-supported, fully operational service.
which doesn't even begin to address a large number of problems.
anyone who believes they have running code for iDNS is deluding themselves.
James,
bottom line is, this is a W3C matter. you need to convince *them*.
Keith
IPR is taken into account
when it is time to advance a standard on the standards track
and in practice, IPR is also taken into account by individual
participants (if they know about it) when they decide whether
to contribute to a consensus for Proposed Standard.
Keith
However this goal could be accomplished by
delaying publication of the NECP document by several years, or at least,
until the publication of technically sound, standardized, solutions for
the problems that interception proxies attempt to address.
best regards,
Keith Moore
I am writing to request that the RFC Editor not publish
draft-cerpa-necp-02.txt as an RFC in its current form,
for the following reasons:
2. A primary purpose of the NECP protocol appears to be to
facilitate the operation of so-called interception proxies. Such
proxies violate
The use of "load balancing" technologies is growing rapidly
because these devices provide useful functionality. These
devices utilize many different techniques, only some of which
can be characterized as "interception proxies" or "reverse
network address translation." For example, using MAC
By now we all should know that it is a bad idea to rely on an
unauthenticated IP address as a basis for determining the source of a
packet. Similarly, the IP header checksum offers no security. We
have a variety of IETF standard protocols (e.g., IPsec and TLS) that
provide suitable
Applications can gain a lot of security by building on top of a lower
layer secure communication substrate, such as that provided by IPsec
or TLS. Such substrates allow the application developer to make
assumptions about the security of the basic communication path, and
have these
Stephen,
perhaps the reason that the tools are not used is that they are not
adequate for the task. but it certainly does not follow that "if
one doesn't use the tools, then one does not care very much".
Keith
If one cares
about knowing where the data originated, and that it has not been
Keith Moore wrote:
. . .
3. Aside from the technical implications of intercepting traffic,
redirecting it to unintended destinations, or forging traffic from
someone else's IP address - there are also legal, social, moral and
commercial implications of doing so.
You
perhaps the reason that the tools are not used is that they are not
adequate for the task. but it certainly does not follow that "if
one doesn't use the tools, then one does not care very much".
or perhaps, one does not care enough ...
or perhaps, that building tools that actually solve
Keith Moore wrote:
. . .
You seem to be saying that because we have a higher service layered
on top of IP that we can disregard the IP service model. I disagree.
No, I'm saying you purported to be offended by IP address
redirection when what you really objected to was unauthorized
Peter,
I think that by now I've made my points and defended them adequately and
that there is little more to be achieved by continuing a public,
and largely personal, point-by-point argument. If you want to continue
this in private mail I'll consider it.
The simple fact is that I believe
Keith - I argued to keep the term "transparent routing" in the
NAT terminology RFC (RFC 2663). The arguments I put forth were
solely mine and not influenced by my employer or anyone else.
didn't say that they were.
Clearly, your point of view is skewed against NATs. It is rather
Publication under Informational and Experimental has typically been
open to all wishing it.
uh, no. this is a common myth, but it's not true, and hasn't been
true for many years.
I hope (and believe) that the *potential* for publication is open
to all, and that the process isn't biased
One would be hard-pressed to inspect the author-list of
draft-cerpa-necp-02.txt, the work of the associated companies, and the
clear need for optimizations of application performance, and then deem this
document not relevant.
I'm not hard-pressed to do this at all. In fact I find it
the problem with a "NAT working group" is that it attracts NAT
developers far more than it does the people whose interests
are harmed by NATs - which is to say, Internet users in general.
so by its very nature a "focused" NAT working group will produce
misleading results.
This bias
Peter,
I don't think I would agree that NECP is out of scope for IETF.
I think it's perfectly valid for IETF to say things like "NECP
is intended to support interception proxies. Such proxies
violate the IP architecture in the following ways: ... and
therefore cause the following problems ..."
Let's remember that a major goal of these facilities is to get a user to a
server that is 'close' to the user. Having interception done only at
distant, localized server farm facilities will not achieve that goal.
granted, but...
an interception proxy that gets the user to a server that
and a technology that only works correctly on the server end seems
like a matter for the server's network rather than the public
Internet - and therefore not something which should be standardized by IETF.
Much the same logic can be applied to NAT (the way it's usually implemented).
The I-D in question has been referred to an existing IETF WG for review,
that assertion was made, but not confirmed by the ADs.
is it really true? it seems odd because it really isn't in scope for wrec.
Keith
Bottom line is that IP-layer interception - even when done "right" -
has fairly limited applicability for location of nearby content.
Though the technique is so widely mis-applied that it might still be
useful to define what "right" means.
And there you have the argument for
it's the 21st century:
if you don't use end2end crypto, then you gotta expect people to optimize
their resources to give you the best service money can buy for the least
they have to spend.
...
That's an interesting idea. People might finally start
using end2end crypto
all these oh so brilliant folk on the anti-caching crusade should be
sentenced to live in a significantly less privileged country for a year,
where dialup ppp is billed per megabyte of international traffic and an
engineer's salary is $100-200 per month.
and as long as we're talking about just
it's completely natural that people will try such approaches -
they are trying to address real problems and they want quick
solutions to those problems.
In particular, they will try such approaches if they are not
presented with better alternatives.
there's something odd to my ear
It's also bad that there is little or no integration of intermediate
system vendors with end system vendors (or vice versa), because that
results in insufficient sharing of information between those two
industry segments. The IETF should be facilitating information
exchange, but it isn't
wrec is supposed to be about *web* replication and caching.
which probably doesn't include email. so I can hardly
blame them for not talking about port 25. since other kinds
of interception proxies exist, perhaps they should clarify their
document slightly to say it's about web interception
This was a choice - in some larger sense, if sourcing other-owned IP
addresses or TCP connections is considered an architectural problem,
that judgment needs to come down from above, rather than up from WREC.
sounds like a convenient excuse to me...
where did the wrec folks get the idea that the IP
Call me a non-team playing scab, but I refuse to honor the old guild
work rule that limits the questions I can consider. If sourcing
other-owned etc. or anything else is an architectural or other problem,
then professional pride ought to force one to raise the issue instead of
waiting
I'm being a bit extreme but the point is that just because something is
architecturally bad doesn't mean you shouldn't do it, since these days it
takes us years to make any architectural enhancements.
perhaps architectural impurity alone shouldn't keep you from doing
something, but the
where did the wrec folks get the idea that the IP specification was
obsolete?
(speaking not for the entire WREC, but my impression of the meetings) I
did get the impression (mistakenly or not) that addressing the whole of
the IP spec was not particularly in scope (e.g., from our area
As to the known-probs doc, that focuses on problems of the sort that
TCPIMPL did - errors in the implementation, not deliberately changing
specs.
yes, but given that there are no specs for interception proxies,
how do you judge what is and is not an error in the implementation?
or given that
Hmm... Depends on one's perspective. Do not underestimate the
timeliness of a solution. Timeliness is operational reality.
I'm very much aware of this. timeliness is what gives you
(or denies you) the opportunity to deploy a new technology.
but just because something is timely (in the sense
I don't want to change it (as if I could!), my purpose was to point out
that our current network is the sum of our mistakes, not the network
equivalent of the Mount Sinai tablets.
no disagreement there. but the original mistakes in TCP/IP were more-or-less
designed to work together. when
I think various people have made a good case for eventually publishing
this particular document, once appropriate revisions have been made.
I'm very supportive of the notion of free exchange of ideas, even
through the RFC mechanism - with the understanding that:
- IETF and the RFC Editor have
In Kerberos 4, when the KDC receives a ticket request, it includes the
source IP address in the returned ticket. This works fine if the KDC
is across a NAT gateway, as long as all of the Kerberos services are
also across a NAT gateway.
doesn't this require the NAT to use the same inside-outside
address binding for the connection between the client and the KDC as
for the connection between the client and the application server?
e.g. it seems like the NAT could easily change address bindings
during the lifetime of a
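The failure mode being asked about can be shown with a toy model. This is a hypothetical sketch (all names invented, not real Kerberos or NAT code): if the NAT hands out a different external binding per connection, the address the KDC bakes into the ticket won't match the source address the application server later sees.

```python
import itertools

class ToyNat:
    """Toy NAT that allocates external addresses per connection,
    with no guarantee the binding is stable across connections."""
    def __init__(self, pool):
        self._pool = itertools.cycle(pool)
    def outside_addr(self, inside_addr):
        # fresh binding for each new connection
        return next(self._pool)

nat = ToyNat(["198.51.100.1", "198.51.100.2"])
client = "10.0.0.5"

# Kerberos 4 behavior: KDC records the source address it saw.
addr_seen_by_kdc = nat.outside_addr(client)
ticket_addr = addr_seen_by_kdc

# Later connection to the application server gets a new binding.
addr_seen_by_server = nat.outside_addr(client)

print("ticket valid?", ticket_addr == addr_seen_by_server)  # False
```

Under this assumption the address check in the ticket fails even though client, KDC, and server are all behaving correctly.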
Look, I have on my disk a file from June, 1992 (yes, that's not a typo -
*1992*) called "Problems with NAT".
However, as a close personal friend of the patron saint of Lost Causes (see
all the scars on my forehead? :-), let me tell you that I have (painfully :-)
learned to recognize a lost
Date: Sat, 22 Apr 2000 05:48:36 -0400
From: "J. Noel Chiappa" [EMAIL PROTECTED]
there are far too many problems with NAT, affecting far too many
applications ... and the list is constantly growing larger.
Perhaps if there was a document that explained how to design
Richard,
I'm not entirely pleased with the current policies for address assignment,
nor with the pricing policies of certain ISPs for access for more than
one IP address. However, I'm convinced that even with improvements to
these policies, IPv4 address exhaustion would still be a major
Most users are not
networking geeks. They like NAT because NAT boxes make what they want
to do so easy.
presumably they don't realize that the NATs are making it hard
to do other things that they might want to do.
I wonder...how many of these folks really want network address
translation,
[Keith Moore on a "KMart box"]
| take it home, plug it in to your phone line or whatever, and get
| instant internet for all of the computers in your home.
| (almost just like NATs today except that you get static IP addresses).
No, not "or whatever" but "AN
What I find interesting throughout discussions that mention IPv6 as a
solution for a shortage of addresses in IPv4 is that people see the
problems with IPv4, but they don't realize that IPv6 will run into the
same difficulties. _Any_ addressing scheme that uses addresses of
fixed length
Users shouldn't care or know about the network's internal addressing.
Some of the application issues with NATs spring directly from this issue
(e.g. user of X-terminal setting display based on IP address instead of
DNS name).
it's not the same issue. the point of using IP addresses in
in an earlier message, I wrote:
OTOH, I don't see why IPv6 will necessarily have significantly more
levels of assignment delegation. Even if it needs a few more levels,
6 or 7 bits out of 128 total is a lot worse than 4 or 5 bits out of 32.
the last sentence contains a thinko. it should
personally, I can't imagine peering with my neighbors.
but maybe that's just me ... or my neighborhood.
Keith
Ah ... famous last words. I feel confident that similar words were said
when the original 32-bit address scheme was developed:
"Four billion addresses ... that's more than one computer for every person
on Earth!"
"Only a few companies are ever going to have more than a few computers
even if you do this the end system identifier needs to be globally
scoped, and you need to be able to use the end system identifier
from anywhere in the net, as a means to reach that end system.
DNS is a bright and successful example of such a deal.
actually, DNS is slow, unreliable, and
So, of the 57,887 visible servers, 4,314 are improperly configured
in the visible in-addr.arpa. tree. That's 7.45% of the
servers being "not well maintained".
a 92.55% reliability rate is not exactly impressive, at least not in
a favorable sense.
it might be tolerable
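The quoted percentages are easy to check directly from the survey counts:

```python
servers = 57887          # visible servers surveyed
misconfigured = 4314     # broken in-addr.arpa entries
broken_pct = 100.0 * misconfigured / servers
ok_pct = 100.0 - broken_pct
print("%.2f%% broken, %.2f%% correct" % (broken_pct, ok_pct))
# 7.45% broken, 92.55% correct
```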
Doesn't this leave out a few pieces of data? Given the current IPv6
address format, which includes a globally unique 64 bit interface ID and 64
bits of globally unique routing goop. My calculation is that you only have
2^64 addresses to work with which leaves roughly 12 bits, maybe 14 to
Now, if you have a site which has more hosts than it can get external IPv4
addresses for, then as long as there are considerable numbers of IPv4 hosts a
site needs to interoperate with, *deploying IPv6 internally to the site does
the site basically no good at all*. Why?
this sounds like a
If people's livelihood depends on something, they're more likely to insure
it actually works.
that's a good point. but it's one thing to make sure that DNS mappings
for "major" services are correct, and quite another to make sure that
the DNS mappings are correct in both directions for
IPv6's claimed big advantage - a bigger address space - turns out not to be an
advantage at all - at least in any stage much short of complete deployment.
that's an exaggeration. if you have an app that needs IPv6, you don't
need complete deployment of IPv6 throughout the whole network to
I dunno. I don't think that adding two more digits in the 1960s to year
fields would have really made any problems too hard.
you've obviously never tried to write business applications for a
machine with only a few Kbytes of memory. memory was expensive
in the 1960s, and limited in size.
I don't see what you're getting at. the outside sites may be running v4
with a limited number of external addresses ... if they are running v6
they will have plenty of external addresses.
Not external *IPv4* addresses, they won't - which is what kind of addresses
they need
Which raises the interesting (to me anyway) question: Is there value in
considering a new protocol, layered on top of TCP, but beneath new
applications, that provides an "association" the life of which transcends
the TCP transports upon which it is constructed?
been there, done that. yes,
So what I am suggesting is that it seems that there is evidence that one
can do an "association" protocol that is relatively lightweight in terms
of machinery, packets, packet headers, and end-node state if one leaves
the heavy lifting of reliability to the underlying TCP protocol.
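The shape of such an "association" layer can be sketched in a few lines. This is a hypothetical toy (invented names, string stand-ins for real TCP connections), just to show the core idea: the association carries its own identifier and state, so it can be rebound to a new transport while reliability stays TCP's job.

```python
import uuid

class Association:
    """Toy association that outlives the transport it rides on.
    Reliability is left to the underlying TCP, so this layer only
    needs an ID plus a rebind step (all names hypothetical)."""
    def __init__(self):
        self.assoc_id = uuid.uuid4().hex  # survives reconnects
        self.transport = None
    def attach(self, tcp_conn):
        self.transport = tcp_conn         # bind initial TCP connection
    def resume(self, new_tcp_conn):
        # the association and its state persist; only the transport changes
        self.transport = new_tcp_conn
        return self.assoc_id              # sent so the peer can rebind

a = Association()
a.attach("tcp-conn-1")
before = a.assoc_id
a.resume("tcp-conn-2")                    # e.g. after an address change
print(a.assoc_id == before)  # True: same association, new transport
```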
the
even the DNS names for major services may not be well maintained.
at one time I did a survey of the reasons for mail bounces
for one of my larger mailing lists.
You appear to be saying that because historically people screwed up
configuring their DNS that it is impossible to rely on
Actually what happened, was I received this virus from a trusted friend
but of course you didn't receive the virus from a trusted friend;
you received it from an impostor.
now you know not to trust names that appear in a message header.
Keith
Clearly, you need to report to re-education camps
to learn why it's important to let the government let companies have the
freedom to innovate wonderful things like vbscript. :-)
not to mention gratuitous incompatibilities with Kerberos.
Keith
So if the users would save the virus to disk and then run it,
what's the savings?
the virus doesn't propagate as quickly, nor to as many people,
before it is detected and countermeasures are put in place.
yes, this does make a significant difference.
You could have senders
it might be useful to further examine the differences between UNIX-like
systems (including Linux) and Windows systems regarding their
susceptibility to viruses.
1. it should first be noted that UNIX-like systems are not immune to
worms or viruses. the Morris worm propagated itself via
Jacob,
in my mind the people most responsible for the viruses are those who
built systems that were so easily compromised.
we don't need protocol support to track them down.
Keith
Jacob,
Given a choice between reducing crime via more government surveillance
and reducing crime via software that doesn't do stupid things, I'd far
prefer the latter. I don't know of any good reason for a mail reader
to make it so easy to execute code that can have harmful side effects,
but
but sooner or later folks are going to be held liable for poor engineering
or poor implementation of networking software, just like folks today can be
held liable for poor engineering or implementation of bridges or buildings.
I don't see how, as long as the software manufacturers ship
I would hope that any software I use, that is able to put my digital
signature on some data, would ask me for my pass-phrase every time
my private key is used.
and I would hope that any software I used would not offer to execute
content that could have harmful side effects, without first
Encryption will be offloaded to the network interface. ASICs on the NICs
will greatly improve encryption and authentication performance.
all well and good, provided that this encryption and authentication
are actually compatible with that specified by higher level protocols
and the
Is there a way to turn off the NAT in the AirPort access points?
if not, seems like that would be a showstopper.
is vulnerability and threat analysis part of the
standardization process ??
yes.
I am particularly interested in hearing about whether such collections are
helpful or not. And if not, what would be more helpful.
not having seen the collections I can't comment about them specifically.
but in general this seems like a very good idea, provided that the books
make it clear
The use of the trailing dot (www.netscape.com.) remains
a useful way to force the resolver to avoid suffix extensions.
this works sometimes, but not in all cases.
for example, trailing dots are not legal syntax for email addresses.
Keith
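The trailing-dot behavior can be sketched as resolver logic. This is a simplified illustration of typical stub-resolver handling (real resolvers also weigh an `ndots` threshold and may order candidates differently): a trailing dot marks the name absolute and suppresses the search list, which is exactly the syntax an email address cannot carry.

```python
def candidate_names(name, search_domains):
    """Names a simple stub resolver would try, in order (a sketch).
    A trailing dot makes the name absolute: no suffixes are appended."""
    if name.endswith("."):
        return [name.rstrip(".")]  # absolute name, search list suppressed
    return [name] + ["%s.%s" % (name, d) for d in search_domains]

print(candidate_names("www.netscape.com.", ["corp.example.com"]))
# ['www.netscape.com']
print(candidate_names("www", ["corp.example.com"]))
# ['www', 'www.corp.example.com']
```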
yeah, the bill contains language like
"Unsolicited commercial electronic mail can be an important mechanism
through which businesses advertise and attract customers in the online
environment."
this bill isn't designed to limit spam; it's designed to make it legal.
criminals.
Keith
I've yet to read the whole bill (H.R. 3113), but I suspect (or, at
least, hope) that the politicians behind this legislation are intending to
draft a federal law that, unlike at least two state attempts, will survive
a constitutional challenge.
And I hope that the courts will finally
Furthermore, CAUCE, which is a widely respected anti-spam organization,
"vigorously supports" HR 3113 according to
http://www.cauce.org/pressreleases/pr-hr3113-1.shtml.
my understanding is that CAUCE has been in cahoots with the DMA for quite
a while. so whatever they respect that might once
Eric,
not that I disagree with anything you said, but I don't see the spam
issue as being about commercial use of the Internet - I see it as being
about my ability to communicate with correspondents of my choice
without interference from others...
the fact that I have to pay for such
A couple of people have pointed out in private email that I should
not have posted my understanding of CAUCE's relationship with DMA
without substantiation.
Personally I don't see how such things can ever be substantiated
anyway - it seems to be the nature of politics that you rarely
have
The bottom line is that although the bill has some unsavory aspects (hey,
this is politics, dontcha know), it is well intended, and restoral of the
banner notification could be a serious tool in the fight against junk
email.
sure, but there's no way the DMA is going to let that happen.
And I hope that the courts will finally realize that freedom of speech
includes the freedom not to have your communications disrupted by people
who want to sell you things.
I dunno, Keith. What you are asking for is content control - you are saying
that certain content shouldn't get to
WAP might evolve into something more useful, but I don't see
how it will replace IP in any sense.
One is an architecture for supporting applications on diverse wireless
systems, and the other is a network-layer packet transport mechanism. The two
aren't even comparable.
the two are comparable in
From: Mohsen BANAN-Public [EMAIL PROTECTED]
Existing SMTP/IMAP/TCP technology is not well suited for
mobile and wireless environments where bandwidth and
capacity are always limited and precious.
More efficient protocols are needed to address the new
reality of mobile and wireless
Not what I would have hoped for in an evolved Internet.
A lot has changed in the past 30 years.
The notion that 'anything is fair game' in the RFC series made a lot
more sense when the Internet was just an experimental network, and
when packet-switched networking was brand new. In such an
Does it need to be if the Web/Wap app can handle this format?
web/wap apps handle a very small number of protocols compared to the
protocols that are handled by IP and used in practice.
Keith
A clue-by-four is a large, heavy, blunt object used to forcibly inject clues
into those who have proven otherwise clue-resistant.
sometimes this is done by inducing unconsciousness, thereby raising the
clue level in a subject who was formerly negatively clued.
Keith
What I oppose strongly, is that people sell weird stuff and call it Internet.
I've never seen a marketing person that wouldn't lie and do exactly that.
If folks want to buy weird stuff, and they know it's weird stuff and
are aware of its limitations, I don't have much problem with that.
But
masataka was saying that he could classify providers given a rather fixed
model. i was saying that the world changes and that providers will find
new business models and bend masataka's rigid classification.
yes, but the desire to have a classification of providers is significantly
motivated
If IETF makes it clear that AOL is not an ISP, it will commercially
motivate AOL to be an ISP.
probably not. folks who subscribe to AOL aren't likely to be
reading IETF documents.
face it, it's not the superior quality of AOL's service that keeps
AOLers from moving - it's their
I'd like to publicly apologize to Randy Bush for the tone of my
response to his message from earlier this week.
Randy was quite correct to point out that any terms that IETF might
develop to describe ISPs might not be consistent with any particular
provider's offerings. And though I have
However, I've suddenly realized that the fault for some of the vacation
messages rests with the people running this list. Notice that they
are not including a "Precedence: bulk" line, which tells at least
some `vacation` programs to do the obvious.
nor should they.
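For what it's worth, adding such a header is trivial for list software; whether it should be added is the point under debate. A minimal sketch using Python's standard library (addresses invented):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "list@example.org"        # hypothetical list address
msg["To"] = "subscriber@example.net"    # hypothetical recipient
msg["Subject"] = "list traffic"
# Header that at least some `vacation` implementations honor
# by suppressing their auto-reply:
msg["Precedence"] = "bulk"
msg.set_content("body")

print(msg["Precedence"])  # bulk
```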