Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-17 Thread Fred Baker

At 12:44 AM 4/11/00 -0700, Derrell D. Piper wrote:
And there you have the argument for publishing this document.  I much prefer a
model where we allow for free exchange of ideas, even bad ones.

hear! hear!

  I tend to
believe that if someone took the time to write up a document that there's
probably some reason for it.  So let's call this an experimental RFC and get
on with life.  Isn't that what the experimental category denotes?

Well, that's one way to interpret it. I think I'd prefer the words of RFC 
2026, which imply that it is simply an idea, something being experimented 
with, and that the document represents a snapshot of the work at a 
particular point in time. We let time and experience tell us whether it is 
a good idea or a bad one, and if you want to participate in the experiment, 
you had best contact the experimenter, as things may have progressed since 
the document was published.




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-17 Thread Keith Moore

I think various people have made a good case for eventually publishing 
this particular document, once appropriate revisions have been made.

I'm very supportive of the notion of free exchange of ideas, even
through the RFC mechanism - with the understanding that:

- IETF and the RFC Editor have limited resources and cannot publish 
  everything that comes their way
- it's inappropriate to use Experimental or Informational publications in 
  the RFC series to promote or claim legitimacy for ideas that are embryonic 
  or experimental in nature, especially when those ideas are violations of 
  Internet standards.  there should be a clear distinction between
  "this is somebody's idea for an experiment" and "this is the sense
  of the Internet technical community"
- some experiments are better carried out under controlled conditions.

Keith




RE: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-14 Thread Steve Hultquist

Keith Moore wrote:
  . . .
 3. Aside from the technical implications of intercepting traffic,
 redirecting it to unintended destinations, or forging traffic from
 someone else's IP address - there are also legal, social, moral and
 commercial implications of doing so.
   
You will need to be far more specific here. I see absolutely nothing that
is not legal, is not social, or is not moral.
   
   Okay, I'll offer a few specific examples, by no means the only ones:
   
   1. an Internet service provider which deliberately intercepts traffic
   (say, an IP packet) which was intended for one address or service,
   and delivers it to another address or service (say that of an interception
   proxy) may be misrepresenting the service it provides (it's not really
   providing IP datagram delivery service because IP doesn't work this way).
  
  Okay, I think I see the mistake you're making. You're crossing
  abstraction layers and conflating two different things (the name of
  a service with the end point of the connection to that service). You
  are criticizing the moving of an endpoint when what you really
  object to is the misrepresentation of a service. Or do you also
  object to HTTP redirects, dynamic URL rewriting, CNAMEs, telephone
  Call Forwarding, or post office redirecting of mail after you move? 

 I don't object to redirects at all, as long as they are carefully 
 designed. I do object to misrepresenting the service. As I've 
 said elsewhere, if the service wants to set up an interception proxy 
 on its own network to help make its service more scalable, I have 
 no problem with that. I do have a problem with unauthorized third 
 parties setting up interception proxies. (which is, according to
 my understanding, the most common application of such devices)


I, too, have been watching this conversation from the sidelines, primarily to see the general opinions of the IETF on this topic. However, this is of vital interest to me as someone considering deploying such devices both topologically close to servers (so-called Web accelerators) and topologically close to the servers' clients, as an owner of those servers (so-called content distribution). In both cases we are considering, the devices are within the same administrative domain as the servers (effectively administered by the content owner). This is, as a number of people have mentioned, a key differentiator in this discussion. 

For Internet network-based applications such as streaming media and rich content, both of these techniques provide significant advantages for the administrators of the delivery; hence the intent of NECP is important.

And, as Bill Sommerfeld wrote:
 A quick read through draft-cerpa-necp-02.txt suggests that it's
 primarily targeted at forms of redirection which occur at the request
 of service operators. Such systems are best thought of as a funny
 kind of application-layer proxy, and are far less damaging to the
 end-to-end internet model than the transparent proxies cited above.

 I think it's important to carefully distinguish between these sorts of
 redirection. Some clarifying text in the draft to this effect would
 be helpful.


I agree that this is important, as well.


Patrik Fältström said
 I have no problem whatsoever with proxies being part of the 
 web-model, but I am strongly opposed to someone in the middle of the 
 communication path intercepting and redirecting IP packets, as the 
 client will not be communicating with whomever he wanted.


With which I also agree.


However, I do not see an appropriate documentation of NECP as incompatible with those two views.


ssh
--
Steve Hultquist
VP Ops
Accumedia, Boulder, CO USA





Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-13 Thread John Martin

At 04:45 PM 11/04/00 -0700, Eliot Lear wrote:
I wonder if any of the authors has explored the risks of modifying data in
flight.  How could this be abused by interlopers and evil doers?  If I
could hack into a router implementing NECP, what damage could I do?  Could
I start a nasty little JavaScript/Java/Shockwave/... applet in an
advertisement?
And who would know it was me?

Do you mean the authors of NECP? If so, then I'm confused because NECP is 
not about "modifying data in flight" - it is about load balancing multiple 
services which sit behind e.g. a single IP address. (i.e. DNS server farms, 
firewalls, proxies). As I have said repeatedly, "interception proxies" is 
only one of these applications and by no means the most widely used.

Are you confusing this with WCCP (which *only* works with "interception 
proxies")?
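
The SLB/back-end relationship John describes can be illustrated with a small sketch: several back-end servers sit behind one virtual IP, and each signals the balancer whether it can take traffic. This is a hypothetical illustration of health-signalled load balancing, not the NECP wire protocol; the class and method names are invented.

```python
# Illustrative sketch (not the NECP wire format): a load balancer that
# forwards connections for one virtual IP to whichever back-end servers
# currently report themselves healthy.

import random

class ServerPool:
    """Tracks back-end servers behind a single virtual IP."""

    def __init__(self):
        self.servers = {}  # address -> healthy flag

    def register(self, address):
        # A back-end announces itself to the balancer (cf. NECP's
        # server-to-network-element signalling).
        self.servers[address] = True

    def report_health(self, address, healthy):
        # Back-ends periodically signal whether they can take traffic.
        if address in self.servers:
            self.servers[address] = healthy

    def pick(self):
        # Choose a healthy back-end for the next connection, if any.
        healthy = [a for a, ok in self.servers.items() if ok]
        return random.choice(healthy) if healthy else None

pool = ServerPool()
pool.register("10.0.0.1")
pool.register("10.0.0.2")
pool.report_health("10.0.0.1", False)   # 10.0.0.1 drains itself
print(pool.pick())                      # only "10.0.0.2" remains eligible
```

Note that nothing here intercepts or rewrites traffic in flight; the balancer only chooses among servers that have explicitly registered with it.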

Quoth John Martin:
  [...]
  Let me be absolutely clear, NECP is about communication between server
  load-balancers (SLB) and the back-end servers they speak to.

Were this so clear in your document my mailbox wouldn't be full of this
stuff.

The very first sentence says:

"This document describes "NECP", a lightweight protocol for
signalling between servers and the network elements that forward
traffic to them.  It is intended for use in a wide variety of
server applications, including for origin servers, proxies, and
interception proxies."

Despite the fact that "interception proxies" are listed last, they are the 
only service people are talking about.

But, you are right in general: if this is how people read the document, 
we  need to fix the document.

If it looks like a duck and quacks like a duck, but it's not supposed to be
a duck, the IESG ought to point out that it's a turkey by so indicating at
the top of the document.  Also, I'd like to understand why you're not going
experimental, where it would be expected that you'll correct your mistakes.
Your choice of "informational" seems unfortunate at best and as
disingenuous marketing at worst.  I can't tell which.

We used "informational" because we saw that this is what was used for 
HTTP/0.9 with which there are parallels: NECP has existed for some time 
already in slightly differing implementations and this is a codification of 
existing practice. No magic or deceit was intended. If the IESG 
thinks we should instead go for experimental, I'd be more than happy to 
pursue that instead and bring this into WREC. However, development is not 
within the current WREC charter so we are stuck, I think?

The fact that you mention interception proxies in the introduction has led
to this flame war.  Having used the words, you've mentioned none of the
risks associated with such services both from the server side and the
client side.

OK - we can fix that. It is not the goal of NECP to describe "interception 
proxies" or their deficiencies. There is, however, a working group which 
has a document aimed at exactly that (amongst other things) - WREC.

John

---
Network Appliance   Direct / Voicemail: +31 23 567 9615
Kruisweg 799   Fax: +31 23 567 9699
NL-2132 NG Hoofddorp   Main Office: +31 23 567 9600
---




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-13 Thread Eliot Lear

Part of the problem here is that a knife may be used as a food utensil or a
weapon.  Safe handling, however, is always required, and should be
documented.

I would add two other comments.  I tried to locate the RFC for HTTP/0.9,
but the best I could find was a reference to a CERN ftp site for the
protocol.  In any case, by the time HTTP got to the IETF it was deployed
over a vast number of end stations, and comparisons to it are probably not
apt.

Finally, rechartering is precisely what you ought to have done, and should
do, IMHO.






Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-13 Thread John Martin

At 10:49 AM 13/04/00 -0700, Eliot Lear wrote:
Part of the problem here is that a knife may be used as a food utensil or a
weapon.  Safe handling, however, is always required, and should be
documented.

Granted.

I would add two other comments.  I tried to locate the RFC for HTTP/0.9,
but the best I could find was a reference to a CERN ftp site for the
protocol.

Ooops. s/0.9/1.0 - rfc1945.

   In any case, by the time HTTP got to the IETF it was deployed
over a vast number of end stations, and comparisons to it are probably not
apt.

NECP is a super-set of various load-balancing technologies already deployed 
at thousands of sites.

Finally, rechartering is precisely what you ought to have done, and should
do, IMHO.

For the record: this is exactly what we are doing. (We were waiting for the 
two starter documents to be published or at least start their path via the 
IESG).

Rgds,
John

---
Network Appliance   Direct / Voicemail: +31 23 567 9615
Kruisweg 799   Fax: +31 23 567 9699
NL-2132 NG Hoofddorp   Main Office: +31 23 567 9600
---




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Keith Moore

 Let's remember that a major goal of these facilities is to get a user to a 
 server that is 'close' to the user.  Having interception done only at 
 distant, localized server farm facilities will not achieve that goal.

granted, but...

an interception proxy that gets the user to a server that is 'close'
to that user (in the sense of network proximity), but 'distant' from the  
content provider (in the sense that it has a significant chance of
misrepresenting or damaging the content) is of dubious value.

and a technology that only works correctly on the server end seems
like a matter for the server's network rather than the public 
Internet - and therefore not something which should be standardized by IETF.

I do think there is potential for standardizing content replication
and the location of nearby servers which act on behalf of the content 
provider (with their explicit authorization, change-control, etc).

But IP-layer interception has some fairly significant limitations
for this application.  For one thing, different kinds of content on
the same server often have different consistency requirements, 
which become significant when your replicas are topologically distant
from one another.  If you treat an entire server as having a single 
IP address you probably don't get the granularity you need to implement 
efficient replication - you may spend more effort keeping your replicas
consistent (and propagating the necessary state from one to another)
than you save by replicating the content in the first place.  Obviously 
you can use multiple IP addresses, assigning different addresses to 
different kinds of content, but this also has limitations.  You can also 
get into problems when the network routing changes during a session or 
when the client itself is mobile. 
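Keith's break-even argument can be made concrete with a toy model (all numbers hypothetical): replication pays off only while the origin bandwidth saved by cache hits exceeds the traffic spent keeping the replicas consistent.

```python
# Toy model of the replication trade-off described above: replicas save
# origin bandwidth on cache hits, but cost synchronization traffic to
# stay consistent.  All numbers are hypothetical.

def replication_payoff(requests, object_size, hit_rate,
                       replicas, updates, sync_size):
    # Bytes the replicas keep off the long path to the origin.
    saved = requests * hit_rate * object_size
    # Bytes spent propagating each update to every replica.
    spent = updates * replicas * sync_size
    return saved - spent

# Static content, rarely updated: replication wins.
assert replication_payoff(10_000, 50_000, 0.8, 10, 5, 50_000) > 0

# Rapidly changing content: sync traffic swamps the savings,
# which is why per-content-class granularity matters.
assert replication_payoff(1_000, 50_000, 0.8, 10, 5_000, 50_000) < 0
```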

Bottom line is that IP-layer interception - even when done "right" - 
has fairly limited applicability for location of nearby content.
Though the technique is so widely mis-applied that it might still be 
useful to define what "right" means.

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Valdis Kletnieks

On Mon, 10 Apr 2000 07:00:56 EDT, Keith Moore said:
 and a technology that only works correctly on the server end seems
 like a matter for the server's network rather than the public 
 Internet - and therefore not something which should be standardized by IETF.

Much the same logic can be applied to NAT (the way it's usually implemented).

Both have issues, both have proponents, and both will be done even more brokenly
if there's no standard for them.

Personally, I'd rather have the IETF issue verbiage saying "Do it this way",
than have 50 million content providers all implement it in subtly different
and broken ways.

"You are trapped in a twisty little maze of proxies, all different..." ;)

-- 
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Keith Moore

  and a technology that only works correctly on the server end seems
  like a matter for the server's network rather than the public 
  Internet - and therefore not something which should be standardized by IETF.
 
 Much the same logic can be applied to NAT (the way it's usually implemented).

true.
 
 Both have issues, both have proponents, and both will be done even more 
 brokenly if there's no standard for them.

yes, this is the dilemma.  IETF has a hard time saying "if you're going to 
do this bad thing, please do it in this way".  for example, it's unlikely
that the vendors of products which do the bad thing would consent to
such a statement.  and if you take out the language that says "this is bad"
then the vendors will cite the RFC as if it were a standard.

and given that NATs are already in blatant violation of the standards,
it's not clear why NAT vendors would adhere to standards for NATs.
nor is it clear how reasonable standards for NATs could say anything 
other than "modification of IP addresses violates the IP standard;
you therefore MUST NOT do this".

 Personally, I'd rather have the IETF issue verbiage saying "Do it this way",
 than have 50 million content providers all implement it in subtly different
 and broken ways.

not sure what content providers have to do with this -
if content providers harm their own content, it's not clear why IETF 
should care - there are ample incentives for them to fix their own problems.

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Francis Dupont

 In your previous mail you wrote:

But IP-layer interception has some fairly significant limitations
for this application. ...
   
   There's a technical problem with IP intercepting that I've not seen
   mentioned, including in the draft.  Any intercepting based on TCP or UDP
   port numbers or that makes any assumptions about TCP or UDP port numbers
   will have problems, because of IPv4 fragmentation.  It seems plausible
   that intercepting done by/for the server(s) would want to redirect all
   traffic for a given IP address, and so not be affected by port numbers.
   (Thus, it may make sense for the draft to not mention the issue.)
   
= the first fragment has 8 bytes or more of payload, so it carries the
port numbers. The other fragments share the same IP ID, so it is possible
to apply the same action to all the fragments, provided they follow the
same path through the interception point.
 This can be hairy if fragments do not arrive in the usual order, for
instance if someone sends the last one first (which is not as stupid as it
seems, because the last fragment gives the whole length of the packet).
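A minimal sketch of the bookkeeping Francis describes, assuming all fragments of a datagram pass the same interception point: only the first fragment carries the TCP/UDP ports, so a port-based interceptor must remember its verdict keyed by (source, destination, IP ID) and reuse it for later fragments. The function and constant names are invented for illustration.

```python
# Only the first IPv4 fragment carries the transport ports, so a
# port-based interceptor remembers the verdict per (src, dst, IP ID)
# and reuses it for the remaining fragments of the same datagram.

INTERCEPT_PORT = 80   # hypothetical: redirect only web traffic

verdicts = {}         # (src, dst, ip_id) -> True (redirect) / False

def classify(src, dst, ip_id, frag_offset, dst_port=None):
    # frag_offset is in bytes here for simplicity (the real IPv4
    # header counts it in 8-byte units).
    key = (src, dst, ip_id)
    if frag_offset == 0:
        # First fragment: the ports are visible, decide and remember.
        verdicts[key] = (dst_port == INTERCEPT_PORT)
    # Later fragments reuse the remembered verdict; if the first
    # fragment hasn't been seen yet (out-of-order arrival), we cannot
    # decide -- the hairy case mentioned above.
    return verdicts.get(key)

assert classify("c", "s", 42, 0, dst_port=80) is True   # first fragment
assert classify("c", "s", 42, 1480) is True             # later fragment
assert classify("c", "s", 43, 1480) is None             # arrived early
```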

   However, "transparent" HTTP proxy and email filtering and rewriting schemes
   such as AOL's that need to intercept only traffic to a particular port
   cannot do the right thing if the client has a private FDDI or 802.5 network
   (e.g.  behind a NAT box) or has an ordinary 802.3 network but follows the
   widespread, bogus advice to use a small PPP MTU.
   
= but fragmentation is not the best way to fight against "transparent"
proxies (:-)...

   Yes, I realize IPv6 doesn't have fragmentation
   
= IPv6 does have fragmentation, but only from end to end (no fragmentation
en route). And IPv6 uses packet IDs only for fragmentation (they
appear in the fragmentation headers)...

   but most if not all of the distant-from-server IP interception
   schemes sound unlikely to work with IPv6 for other reasons.

= I'd like this to be true (another reason to switch to IPv6 :-),
but the only thing that is broken by interception is authentication
(IPsec is mandatory to implement, though not (yet) to use, with IPv6).
Encryption isn't very friendly to "transparent" proxies either (:-).

Regards

[EMAIL PROTECTED]




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Derrell D. Piper

 Bottom line is that IP-layer interception - even when done "right" - 
 has fairly limited applicability for location of nearby content.
 Though the technique is so widely mis-applied that it might still be 
 useful to define what "right" means.

And there you have the argument for publishing this document.  I much prefer a
model where we allow for free exchange of ideas, even bad ones.  I tend to
believe that if someone took the time to write up a document that there's
probably some reason for it.  So let's call this an experimental RFC and get
on with life.  Isn't that what the experimental category denotes?

Derrell




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Dick St.Peters

 Let's remember that a major goal of these facilities is to get a 
 user to a server that is 'close' to the user.  Having interception 
 done only at distant, localized server farm facilities will not 
 achieve that goal.
 ...
 client -- Internet - ISP - Intercept - Internet - Server1
                                      - Internet - Server2
                                      - Internet - Server3
 
 In the second case (which is what I am opposing) the server provider 
 does not have anything to do with the interception. He runs only 
 Server1, while Server2 and Server3 are caches to which the ISP chooses 
 to redirect the packets that are addressed to Server1.

That's an assumption that's not always valid.  There are cases in
existence now where a service provider *pays* the ISP to run a local
mirror, leading to

client -- Internet - ISP - Intercept - Internet - Server1
                                     -  subnet  - Server2

It would be entirely possible for the service provider, having paid
the ISP not to get traffic from the ISP's clients, to block that
traffic - or limit its bandwidth.

Consider the progression:

client -- Internet - ISP - Router    - 56k - Server1
                                     - T3  - Server1

client -- Internet - ISP - Intercept - 56k - Server1
                                     - T3  - Server2

client -- Internet - ISP - Intercept - Internet - Server1
                                     -  subnet  - Server2

What is the fundamental difference between choosing the best path and
choosing the best source?  Arguments that the latter breaks the IP
model are simply arguments that the IP model is broken for today's
Internet and will be even more broken for tomorrow's.  The IETF can
fix the model ... or leave that to someone else.

--
Dick St.Peters, [EMAIL PROTECTED] 
Gatekeeper, NetHeaven, Saratoga Springs, NY
Saratoga/Albany/Amsterdam/BoltonLanding/Cobleskill/Greenwich/
GlensFalls/LakePlacid/NorthCreek/Plattsburgh/...
Oldest Internet service based in the Adirondack-Albany region




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Jon Crowcroft


  Bottom line is that IP-layer interception - even when done "right" - 
  has fairly limited applicability for location of nearby content.
  Though the technique is so widely mis-applied that it might still be 
  useful to define what "right" means.
 
 That sounds overly optimistic.


user experience/expectation context is everything

TCP end2end-ness?
if you access a web page from our server, chances are it's fetched by one
of several httpds from one of a LOT of NFS or samba servers, which one
depending on local conditions.

if you send audio on the net, it's quite possible it goes through several
a2d and d2a conversions (e.g. thru a PSTN/SIP or H.323 gateway) - in fact,
if you speak on an apparently end2end PSTN
transatlantic phone call, chances are your voice 
is digitized and re-digitized several times by transcoders/compressors

it's the 21st century:
if you don't use end2end crypto, then you gotta expect people to optimize
their resources to give you the best service money can buy for the least
they have to spend.

hey, when you buy a book written by the author, it was usually typeset,
proofread, and re-edited by several other people

even this email may not be from me...

 cheers

   jon
"every decoding is an encoding"
maurice zapp from the Euphoric State University, in small world, by david lodge




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Vernon Schryver

 From: Jon Crowcroft [EMAIL PROTECTED]

 ...
 its the 21st century:
 f you dont use end2end crypto, then you  gotta expect people to optimize
 their resources to give you the best service money can buy for the least
 they have to spend.
 ...

That's an interesting idea.  People might eventually finally start
using end2end crypto not for privacy or authentication where they
really care about either, but for performance and correctness, to
defend against the ISPs who find it cheaper to give you the front
page of last week's newspaper instead of today's.


Vernon Schryver[EMAIL PROTECTED]




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Magnus Danielson

From: Vernon Schryver [EMAIL PROTECTED]
Subject: Re: recommendation against publication of draft-cerpa-necp-02.txt
Date: Mon, 10 Apr 2000 10:41:43 -0600 (MDT)

  From: Jon Crowcroft [EMAIL PROTECTED]
 
  ...
  its the 21st century:
  if you don't use end2end crypto, then you gotta expect people to optimize
  their resources to give you the best service money can buy for the least
  they have to spend.
  ...
 
 That's an interesting idea.  People might eventually finally start
 using end2end crypto not for privacy or authentication where they
 really care about either, but for performance and correctness, to
 defend against the ISPs who find it cheaper to give you the front
 page of last week's newspaper instead of today's.

Maybe this is a reason for these ISPs to filter such traffic out...

Cheers,
Magnus




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Patrik Fältström

At 11.50 -0400 2000-04-10, Dick St.Peters wrote:
What is the fundamental difference between choosing the best path and
choosing the best source?  Arguments that the latter breaks the IP
model are simply arguments that the IP model is broken for today's
Internet and will be even more broken for tomorrow's.  The IETF can
fix the model ... or leave that to someone else.

The difference between what you describe and a random transparent 
proxy is that in your case it is the service provider who is 
building a service with whatever technology he chooses. It is not a 
random ISP in the middle which intercepts and changes IP packets 
without either the client or the service provider knowing anything about 
it. If the service provider knows about the interception, he can choose 
software (or whatever) which can withstand it.

Yes, it is the same technology that is used, but it is not used in 
the same way in both cases.

I.e. for me it is a question of _who_ is managing the interception.

   paf




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Gary E. Miller

Yo Randy!

On Mon, 10 Apr 2000, Randy Bush wrote:

 all these oh so brilliant folk on the anti-caching crusade should be
 sentenced to live in a significantly less privileged country for a year,
 where dialup ppp costs per megabyte of international traffic and an
 engineer's salary is $100-200 per month.  we are spoiled brats.

Been there, done that, and the LEGALLY required cache did NOT help.  I 
bypassed it whenever possible.  Caching is NOT the answer.  

Reports from the recent Adelaide meeting confirm this.

RGDS
GARY
---
Gary E. Miller Rellim 20340 Empire Ave, Suite E-3, Bend, OR 97701
[EMAIL PROTECTED]  Tel:+1(541)382-8588 Fax: +1(541)382-8676




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Joe Touch


One other item:

Neither this, nor many NAT I-D's, address the particular
issue of sourcing IP addresses not assigned or owned by 
the host/gateway, e.g., as they affect the standards
of RFCs 1122, 1123, and 1812.

If a device creates (rewrites) IP source addresses with
addresses not its own, it would be useful to see a section
specifically addressing the resulting implications.

Joe




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Keith Moore

  Bottom line is that IP-layer interception - even when done "right" - 
  has fairly limited applicability for location of nearby content.
  Though the technique is so widely mis-applied that it might still be 
  useful to define what "right" means.
 
 And there you have the argument for publishing this document.  

no, this document doesn't try to do that - the protocol it proposes
is an attempt to work around one of the many problems associated
with interception proxies, but it's hardly a blueprint for how
to do them "right" (nor does it purport to be).

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Vernon Schryver

 From: Randy Bush [EMAIL PROTECTED]

 ...
  That's an interesting idea.  People might eventually finally start
  using end2end crypto not for privacy or authentication where they
  really care about either, but for performance and correctness, to
  defend against the ISPs who find it cheaper to give you the front
  page of last week's newspaper instead of today's.

 and, since we're into exaggeration and hyperbole, i imagine you won't
 complain about paying seven times as much for connectivity.

Most of the exaggeration and hyperbole comes from the caching sales
people.  They'd have you believe that caches never miss, or that
cache filling is free.

The news services I watch have front pages with significant (e.g. editorial
and not just DJI numbers) changes every hour or so.


 all these oh so brilliant folk on the anti-caching crusade should be
 sentenced to live in a significantly less privileged country for a year,
 where dialup ppp costs per megabyte of international traffic and an
 engineer's salary is $100-200 per month.  we are spoiled brats.

Caching won't increase those low salaries.

Many people think we should pay for the bandwidth we use, although not
all favor accounting for each bit.  That one now talks about paying per
MByte instead of Kbit of traffic is a radical change due in part to
using instead of conserving.  Undersea fiber isn't paid for by caching.

The primary waste (and perhaps use) of bandwidth is advertising that
almost no one sees, unless you think single-digit response rates amount
to more than almost no one.  Check the source of the next dozen web
pages you fetch.  Even if you use junk filters, chances are that more
of the bits are advertising than content.  Caching that drivel sounds
good, but its providers are already doing things that merely start with
caching to get it to you faster and cheaper.

Caching and proxying with the cooperation of the content provider
can help the costs of long pipes.  No one has said anything bad
about that kind of caching, when done competently.

"Transparent" caching and proxying without the permission of the content
provider will soon be used for political censorship, if not already, and
likely against your $100/month engineers.  How much "transparent proxy"
hardware and software has already been sold to authoritarian governments?

Yes, it's quixotic to worry about that last.  Everyone who feels
comfortable with the IETF's fine words about wiretapping should stop to
think about reality, and do their part in the real battle by putting
end2end encryption into everything they code, specify, or install.


Vernon Schryver[EMAIL PROTECTED]




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Keith Moore

   its the 21st century:
  if you don't use end2end crypto, then you gotta expect people to optimize
  their resources to give you the best service money can buy for the least
  they have to spend.
  ...
 
 That's an interesting idea.  People might eventually finally start
 using end2end crypto not for privacy or authentication where they
 really care about either, but for performance and correctness, to
 defend against the ISPs who find it cheaper to give you the front
 page of last week's newspaper instead of today's.

or ISPs might start penalizing encrypted packets.

I just don't buy the argument that we can solve these problems by
adding more complexity.  That's like saying that a country can 
get more security by building more planes, tanks, bombs, etc.
It might work, but then again, it might fuel an arms race.

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Keith Moore

 all these oh so brilliant folk on the anti-caching crusade should be
 sentenced to live in a significantly less privileged country for a year,
 where dialup ppp costs per megabyte of international traffic and an
 engineer's salary is $100-200 per month. 

and as long as we're talking about just deserts...all of those ISPs that
put an interception proxy between their dialup customers and the rest 
of the Internet should be required to put another interception proxy
on the other side of their international links, between those clients 
and the ISP's local server customers.  that way, they will do the same 
degree of harm to their own business customers that they are doing 
to other ISPs' business customers.

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-10 Thread Vernon Schryver

I tried to send this earlier, but got a response from
[EMAIL PROTECTED] complaining that every line is a bogus majordomo
command.  My logs say I sent to [EMAIL PROTECTED] and not [EMAIL PROTECTED]
or anything similar.  I did use the word "s-u-b-s-c-r-i-b-e-r-s" 3 times.
This time I've replaced all with "[users]".

I suspect a serious or at least irritating bug in a defense against stupid
"u-n-s-u-b-s-c-r-i-b-e" requests.  If I'm right, then someone needs to
stop and think a little.



 From: Keith Moore [EMAIL PROTECTED]


  That's an interesting idea.  People might eventually finally start
  using end2end crypto not for privacy or authentication where they

 or ISPs might start penalizing encrypted packets.

Why not?  ISPs that figure that last week's or even this morning's Wall
Street Journal front page is good enough might well charge more for traffic
that goes outside their networks to get the current WSJ, or the WSJ with
the Doubleclick ads that Dow Jones prefers.

I wonder how long before an ISP with a transparent proxy uses it to modify
the stream of ads, replacing some with more profitable bits.  It's not as
if "commercial insertion" is a new idea.  The local TV affiliate or cable
operator's computers replace a lot of dead air and other people's ads with
their own.  As I think about it, I realize I've got to be behind the
times.  I bet many of the so-called free ISPs and perhaps others must
already be optimizing the flow of information to their [users].  There's
only so much screen real estate and conscious attention behind those
eyeballs.  They'd not want to be blatant about it, unlike "framing", to
avoid moot excitement among lawyers and [users].  If you must pay for your
[users]' web surfing by posting ads, where better than on top of or instead
of other people's ads?


 I just don't buy the argument that we can solve these problems by
 adding more complexity.  That's like saying that a country can 
 get more security by building more planes, tanks, bombs, etc.
 It might work, but then again, it might fuel an arms race.

You've written today about the complications of simplistic solutions to
problems that are not as simple as they sound.  You're right, of course.
The reasons why no one uses real encryption now do not include it being
free or as easy as not using it.

For example, simply using HTTPS if you want to read the WSJ without local
improvements might not be good enough, depending on how much you can trust
that the public key you get from the nearby PKI servers really belongs to
Dow Jones and not the local ministry of information.  What?--you say the
public key infrastructure is invulnerable to bureaucrats in the middle
with very large purses and bigger sticks?--well, if you say so...

The problem with transparent proxies is that they are men in the middle,
and so are very good at wiretapping, censoring, and "improving" information,
and even harder to trust.  Stealth proxies are vastly more powerful than
remote controlled taps on everyone's routers and PBX's.
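Vernon's caveat about the PKI can be made concrete: HTTPS only helps if the public key you verify against was obtained over a path the man in the middle does not control. A toy sketch (all keys and names here are invented for illustration; this is not real certificate validation):

```python
# Toy illustration of why HTTPS only helps if you can trust the key you
# received. All names and key bytes are made up for the sketch.

import hashlib

def fingerprint(pubkey: bytes) -> str:
    """Short hash of a public key, as used for out-of-band pinning."""
    return hashlib.sha256(pubkey).hexdigest()[:16]

DOW_JONES_KEY = b"dow-jones-public-key"
MINISTRY_KEY = b"ministry-of-information-key"

# What the client *should* check against: a fingerprint obtained
# out of band, not from the same network path being intercepted.
PINNED = fingerprint(DOW_JONES_KEY)

def network_lookup(intercepted: bool) -> bytes:
    """The 'nearby PKI server': a man in the middle can answer instead."""
    return MINISTRY_KEY if intercepted else DOW_JONES_KEY

# A pinning client compares whatever key the network hands it against
# the out-of-band fingerprint, and so detects the substitution.
pinned_ok = fingerprint(network_lookup(intercepted=True)) == PINNED
print(pinned_ok)  # False: the substituted key is detected
```

A client that instead trusts whichever key the local infrastructure supplies has no way to notice the swap, which is Vernon's point.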


Vernon Schryver    [EMAIL PROTECTED]




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-09 Thread Dave Crocker

At 01:39 PM 4/9/00 -0800, [EMAIL PROTECTED] wrote:
   However, I am
  fully in agreement that interception proxies imposed anyplace other
  than either endpoint of the connection is a Bad Idea, because a third

Exactly. And after having read this specification, I also think these issues
are glossed over.

  I'd have to vote against progressing it without language making this
  distinction as clear as possible.

Agreed. I think the right thing to do at this point is to revise the
specification. One possible approach, and the one I'd prefer, is to simply call
for NECP to only be used on the server side. Alternately, the distinction of

Let's remember that a major goal of these facilities is to get a user to a 
server that is 'close' to the user.  Having interception done only at 
distant, localized server farm facilities will not achieve that goal.

Further, I'm unclear about the architectural difference between (and 
apologies if things don't quite line up):

client -- Internet - ISP - Intercept - subnet1 - Server1
                                     - subnet2 - Server2
                                     - subnet3 - Server3

versus

client -- Internet - ISP - Intercept - Internet - Server1
                                     - Internet - Server2
                                     - Internet - Server3

the location of the service could be made clearer and the perils of arbitrary
intermediate use spelled out.

Perhaps the issue is not location, but coherent administration?


I also see some technical issues in the protocol itself. For example, the
performance metric set seems inadequate, at least based on my past experience
with other load balancing systems. OTOH, the set is extensible, so this
could be corrected fairly easily.

This would seem to walk down the path of considering this spec as a BASIS 
for pursuing a standard?

(The usual caveats, proscriptions, etc. apply with respect to IETF change 
control.  What we see now is likely not what they will get later...)


However, I don't see any of these technical issues as impediments to
publication as informational or experimental.

ack.

d/

=-=-=-=-=
Dave Crocker  [EMAIL PROTECTED]
Brandenburg Consulting  www.brandenburg.com
Tel: +1.408.246.8253,  Fax: +1.408.273.6464
675 Spruce Drive,  Sunnyvale, CA 94086 USA




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-09 Thread ned . freed

 At 01:39 PM 4/9/00 -0800, [EMAIL PROTECTED] wrote:
However, I am
   fully in agreement that interception proxies imposed anyplace other
   than either endpoint of the connection is a Bad Idea, because a third
 
 Exactly. And after having read this specification, I also think these issues
 are glossed over.
 
   I'd have to vote against progressing it without language making this
   distinction as clear as possible.
 
 Agreed. I think the right thing to do at this point is to revise the
 specification. One possible approach, and the one I'd prefer, is to simply call
 for NECP to only be used on the server side. Alternately, the distinction of

 Let's remember that a major goal of these facilities is to get a user to a
 server that is 'close' to the user.  Having interception done only at
 distant, localized server farm facilities will not achieve that goal.

You are confusing topological locality with administrative locality. I was
talking about the latter, and so, I believe, was Valdis.

Indeed, the only reason I raised the security issues I did was to accommodate
the case where the proxies aren't topologically local to the servers. And
one of the things I see as missing from the performance metric set is a means
of factoring in network QOS.

 Further, I'm unclear about the architectural difference between (and
 apologies if things don't quite line up):

 client -- Internet - ISP - Intercept - subnet1 - Server1
                                      - subnet2 - Server2
                                      - subnet3 - Server3

 versus

 client -- Internet - ISP - Intercept - Internet - Server1
                                      - Internet - Server2
                                      - Internet - Server3

 the location of the service could be made clearer and the perils of arbitrary
 intermediate use spelled out.

 Perhaps the issue is not location, but coherent administration?

In the case of proxies being "close" to the server, absolutely.

 I also see some technical issues in the protocol itself. For example, the
 performance metric set seems inadequate, at least based on my past experience
 with other load balancing systems. OTOH, the set is extensible, so this
 could be corrected fairly easily.

 This would seem to walk down the path of considering this spec as a BASIS
 for pursuing a standard?

I would not have a problem with pursuing standards work on protocols for load
balancing within a single administrative area. (This is not to say that
defining a protocol that can span administrations would be useless. It would be
very useful indeed, but I see so many potential ratholes it isn't funny.)

I suspect a case could be made for working on client-administered proxies, but
it seems fairly clear to me that this isn't what the present protocol
is about.

Ned




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-09 Thread Dave Crocker

At 03:42 PM 4/9/00 -0800, [EMAIL PROTECTED] wrote:

You are confusing topological locality with administrative locality. I was
talking about the latter, and so, I believe, was Valdis.

As my later comment meant to convey, I too was clear about the distinction, 
but yes I was definitely confused about the discussion underway.


This would seem to walk down the path of considering this spec as a BASIS
for pursuing a standard?

I would not have a problem with pursuing standards work on protocols for load
balancing within a single administrative area. (This is not to say that
defining a protocol that can span administrations would be useless. It 
would be
very useful indeed, but I see so many potential ratholes it isn't funny.)

Sounds like a conveniently healthy constraint, then.


d/

=-=-=-=-=
Dave Crocker  [EMAIL PROTECTED]
Brandenburg Consulting  www.brandenburg.com
Tel: +1.408.246.8253,  Fax: +1.408.273.6464
675 Spruce Drive,  Sunnyvale, CA 94086 USA




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-09 Thread Patrik Fältström

At 14.35 -0700 2000-04-09, Dave Crocker wrote:
Let's remember that a major goal of these facilities is to get a 
user to a server that is 'close' to the user.  Having interception 
done only at distant, localized server farm facilities will not 
achieve that goal.

Further, I'm unclear about the architectural difference between (and 
apologies if things don't quite line up):

client -- Internet - ISP - Intercept - subnet1 - Server1
                                     - subnet2 - Server2
                                     - subnet3 - Server3

versus

client -- Internet - ISP - Intercept - Internet - Server1
                                     - Internet - Server2
                                     - Internet - Server3

In the first case, which Peter Deutsch brought up with the cisco Local 
Director, I understand your picture being that the entity which 
provides the service running on Server1, Server2 and Server3 
provides either a hostname and/or IP-address which goes to a virtual 
host which resides "inside" the box which is doing the intercept. 
That box rewrites the IP headers, including destination address etc., 
and ships the packet to one of Server1, Server2 or Server3.

I.e. the client asks to contact the virtual host, and the virtual host 
is contacted.

In the second case (which is what I am opposing) the server provider 
does not have anything to do with the interception. He runs only 
Server1, while Server2 and Server3 are caches to which the ISP chooses 
to redirect the packets that are addressed to Server1.

That is from my point of view a big difference.

In the first case, the packets sent from the client reach the 
destination (i.e. the interceptor, which really is not an interceptor 
at all, but some kind of NAT box like the cisco Local Director) while 
in the second case packets addressed to Server1 might not reach 
Server1 but Server2 or Server3.
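The difference between the two cases can be sketched in code. This is only a toy model of the behavior described above; the class names, addresses, and round-robin policy are all illustrative, not anything taken from NECP or the cisco Local Director:

```python
# Toy model of the two cases: rewriting toward a virtual host the box
# owns, versus intercepting packets addressed to someone else's server.

from dataclasses import dataclass
from itertools import cycle

@dataclass
class Packet:
    src: str
    dst: str

class VirtualHostBalancer:
    """Case 1: the box owns the virtual host the client addressed, so
    rewriting the destination to a real server honors the client's intent."""
    def __init__(self, vip, backends):
        self.vip = vip
        self._backends = cycle(backends)  # simple round-robin for the sketch
    def forward(self, pkt):
        if pkt.dst == self.vip:             # client asked for the virtual host
            pkt.dst = next(self._backends)  # rewrite headers, ship to a server
        return pkt

class InterceptionProxy:
    """Case 2: the box redirects packets addressed to someone else's
    server to a cache the client never asked for."""
    def __init__(self, cache):
        self.cache = cache
    def forward(self, pkt):
        pkt.dst = self.cache                # silently diverted
        return pkt

balancer = VirtualHostBalancer("vip.example", ["server1", "server2"])
p1 = balancer.forward(Packet("client", "vip.example"))
print(p1.dst)  # server1: a server behind the virtual host the client addressed

interceptor = InterceptionProxy("cache1")
p2 = interceptor.forward(Packet("client", "server1"))
print(p2.dst)  # cache1: the packet never reaches server1
```

In the first case the rewrite happens only because the client addressed the box's own virtual host; in the second, the client's stated destination is simply overridden.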

 paf




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Patrik Fältström

At 15.20 -0400 2000-04-07, Bill Sommerfeld wrote:
I think it's important to carefully distinguish between these sorts of
redirection.  Some clarifying text in the draft to this effect would
be helpful.

That is what I have asked the authors to do.

The problems with "intercepting proxies" are that:

(1) It breaks the model we use for IP transport. I.e. an IP packet 
with a specific destination address doesn't reach that destination. 
As Christian says, that means among other things that IPSEC will not 
work.

(2) On the application layer (as Peter Deutsch talks about) the user, 
through the browser, wants to contact the service according to a 
specific URL given. I.e. the user asks to communicate with that 
service. That is not what is happening -- and this with neither 
client nor server knowing about it or being informed.

As Ted said, if it is the case that an ISP or whatever wants to have 
a web proxy or proxy/caching mechanism for some reason, then that 
has to be communicated to the users so they understand why it is 
better for them (faster, cheaper, whatever) to use that proxy instead 
of talking with services directly.

I have no problem whatsoever with proxies being part of the 
web model, but I am strongly opposed to someone in the middle of the 
communication path intercepting and redirecting IP packets, as the 
client will not be communicating with whomever he intended.
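The IPSEC point in (1) can be made concrete: IPsec's Authentication Header computes an integrity check that covers the immutable IP header fields, including the destination address, so a middlebox that rewrites the destination invalidates the check at the receiver. A toy illustration only; the key, the header layout, and the HMAC construction are simplified stand-ins, not real AH processing:

```python
# Toy sketch of why rewriting the destination address breaks an
# IPsec-AH-style end-to-end integrity check. Not real AH processing.

import hashlib
import hmac

KEY = b"shared-secret"  # illustrative; real IPsec keys are negotiated (e.g. via IKE)

def icv(src: str, dst: str, payload: bytes) -> bytes:
    """Toy integrity check value; like AH, it covers the destination address."""
    return hmac.new(KEY, f"{src}|{dst}|".encode() + payload, hashlib.sha256).digest()

# Sender builds a packet for server1 and computes the check over it.
packet = {"src": "client", "dst": "server1", "payload": b"GET /",
          "icv": icv("client", "server1", b"GET /")}

# An interception proxy silently rewrites the destination...
packet["dst"] = "cache1"

# ...so the receiver's verification fails and the packet is discarded.
ok = hmac.compare_digest(packet["icv"],
                         icv(packet["src"], packet["dst"], packet["payload"]))
print(ok)  # False: interception defeats the end-to-end check
```

The same logic is why the first case above (a NAT-style box that *is* the addressed virtual host) does not raise this particular objection: the client's packets arrive at the destination they were addressed to.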

 Patrik




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Keith Moore

Peter,

I think that by now I've made my points and defended them adequately and 
that there is little more to be achieved by continuing a public,
and largely personal, point-by-point argument.  If you want to continue 
this in private mail I'll consider it.

The simple fact is that I believe that the idea of interception proxies 
does not have sufficient technical merit to be published by IETF, and 
that IETF's publication of a document that tends to promote the use 
of such devices would actually be harmful to Internet operation and 
its ability to support applications.  Reasonable people can disagree
about the utility of an idea and I certainly don't expect that my 
notion of the utility of interception proxies will be accepted by everyone.
(especially not folks who are making money by selling these things...)
But I thought it was valuable to try to raise awareness about the issue.

Keith

p.s. I think the term you're looking for is "nihil obstat".




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Pyda Srisuresh

--- Keith Moore [EMAIL PROTECTED] wrote:
... stuff deleted
  As we have done with the NAT WG, it is
  often useful to accurately document the drawbacks of a
  common practice as well as to encourage exploration of
  alternatives.
 
 From my point of view there were significant forces within the 
 NAT group attempting to keep the extent of these drawbacks from 
 being accurately documented and to mislead the readers of those
 documents into thinking that NATs worked better than they do -
 for instance, the repeated assertions that NATs are "transparent".

Keith - I argued to keep the term "transparent routing" in the 
NAT terminology RFC (RFC 2663). The arguments I put forth were
solely mine and not influenced by my employer or anyone else. 
I do not know who else you are referring to as the "significant 
forces in the NAT group attempting to mislead readers into 
thinking NATs are transparent".
  
Clearly, your point of view is skewed against NATs. It is rather 
hypocritical and unfair to say that those opposed to your 
viewpoint are misleading the readers, while implying that 
you yourself do not.

 So I'm not sure that this is a good model on which to base future work.
 
The NAT WG has made substantial progress in the form of dispelling 
the FUD surrounding NATs. We still have work to do and intend to stay 
focused and continue presenting a balanced viewpoint.

regards,
suresh

__
Do You Yahoo!?
Talk to your friends online with Yahoo! Messenger.
http://im.yahoo.com




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Peter Deutsch in Mountain View

g'day,

Keith Moore wrote:

 Peter,

 I think that by now I've made my points and defended them adequately and
 that there is little more to be achieved by continuing a public,
 and largely personal, point-by-point argument.  If you want to continue
 this in private mail I'll consider it.

Okay, but I'd like to make clear that I don't regard this as a "largely
personal...argument". On the contrary, I've drunk beer with you, I like you as
a person and would be happy to drink beer with you again. I am engaging here
*only* because I think the principles I'm defending are so important. It really
is nothing personal.


 The simple fact is that I believe that the idea of interception proxies
 does not have sufficient technical merit to be published by IETF, and
 that IETF's publication of a document that tends to promote the use
 of such devices would actually be harmful to Internet operation and
 its ability to support applications.

Fair enough, but my primary goal was not to justify this particular technique,
but to address the issue of whether we should be preventing the publication of
particular techniques, and under what ground rules. The industry and their
customers have already decided against you on this one. I'm wondering about the
future of an IETF that consistently takes itself out of play in this way. I'm
sure there are other techniques on their way that are going to allow us to find
out...


 p.s. I think the term you're looking for is "nihil obstat".

Yup, that's it. Thanks...

  -
peterd




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Valdis . Kletnieks

On Sat, 08 Apr 2000 15:28:12 EDT, Keith Moore said:
 The simple fact is that I believe that the idea of interception proxies 
 does not have sufficient technical merit to be published by IETF, and 
 that IETF's publication of a document that tends to promote the use 
 of such devices would actually be harmful to Internet operation and 
 its ability to support applications.  Reasonable people can disagree

Keith:  I think that there's been sufficient commentary here that
interception proxies *do* have a place, both at the "server" end (for
load-balancing servers, etc.), and at the "client" end.  However, I am
fully in agreement that interception proxies imposed anyplace other
than either endpoint of the connection is a Bad Idea, because a third
party can't be sure of the connection.  I'm willing to do something at
my end, because I know that I wanted to connect to foobar.sprocket.com,
and what semantics that involves.  foobar.sprocket.com can make
decisions, based on its knowledge that any packet on port 7952 is
either for their monkey-widget server, or invalid.  But my transit
providers don't have any basis for making such decisions.

I'd have to vote against progressing it without language making this
distinction as clear as possible.

Valdis Kletnieks
Operating Systems Analyst
Virginia Tech




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Doug Royer

Peter Deutsch in Mountain View wrote:

[in part you said]

 I still object to your notion that it's not censorship since people can
 always go elsewhere. Where does this lead? I see the day when people
 can't publish a new  directory service protocol because "The IETF has
 endorsed LDAP for directory services", or would ban the publication of
 an extension to DNS because "it must prevent the misuse of the protocol
 in creating inappropriate services". One by one, you'd be chasing
 innovation to other forums.
 
 "Danger, Will Robinson! Danger!"

The above information is nonsense.

You seem to be objecting to Keith's right to object to the draft
as it is written. So, using your logic (as I understand what you
are saying above), you are also guilty of censorship by not
wanting Keith to object.

I understand your frustration, as many of us in the IETF have
been frustrated from time to time. If you want to convince
me and others, then please ignore anything you feel is a non-technical
issue and address the technical issues.

Many in the IETF *are* swayed by technical content.

I am undecided on this issue and I am personally glad to see this
debate. I do find it an important discussion when it remains
technical. 

Questions I have:

Does this solve a problem that is not already solved by another method?
Not that it has to be unique, as you point out above, but if you could
compare it against other known solutions (if any) then perhaps its
advantages (that I have not seen yet) could help your cause?

If this were not done - what could you not do?
I have worked for large corporations and I have worked on large to huge
scalability problems. Why do I want this?

-Doug




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Keith Moore

 Keith - I argued to keep the term "transparent routing" in the 
 NAT terminology RFC (RFC 2663). The arguments I put forth were
 solely mine and not influenced by my employer or anyone else. 

didn't say that they were.

 Clearly, your point of view is skewed against NATs. It is rather 
 hypocritical and unfair to say that those opposed to your 
 viewpoint are misleading the readers, while implying that 
 you yourself do not.

I've tried to get an accurate assessment of the harm done by NATs.
Not surprisingly, NAT developers have tried to downplay these problems.

the problem with a "NAT working group" is that it attracts NAT
developers far more than it does the people whose interests
are harmed by NATs - which is to say, Internet users in general.
so by its very nature a "focused" NAT working group will produce
misleading results.

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Pyda Srisuresh


--- Keith Moore [EMAIL PROTECTED] wrote:
  Keith - I argued to keep the term "transparent routing" in the 
  NAT terminology RFC (RFC 2663). The arguments I put forth were
  solely mine and not influenced by my employer or anyone else. 
 
 didn't say that they were.
 
  Clearly, your point of view is skewed against NATs. It is rather 
  hypocritical and unfair to say that those opposed to your 
  viewpoint are misleading the readers, while implying that 
  you yourself do not.
 
 I've tried to get an accurate assessment of the harm done by NATs.
 Not surprisingly, NAT developers have tried to downplay these problems.
 
 the problem with a "NAT working group" is that it attracts NAT
 developers far more than it does the people whose interests
 are harmed by NATs - which is to say, Internet users in general.

That is just not true. The NAT WG attracts NAT users just as much as,
and often more than, NAT developers. It is perhaps your opinion that
NAT harms more people than it benefits that is tainted.

 so by its very nature a "focused" NAT working group will produce
 misleading results.
 

Sorry.. Your conclusion is based on a wrong premise. 

 Keith
 

regards,
suresh

=






Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Keith Moore

 Publication under Informational and Experimental has typically been 
 open to all wishing it.  

uh, no.   this is a common myth, but it's not true, and hasn't been
true for many years.

I hope (and believe) that the *potential* for publication is open 
to all, and that the process isn't biased according to who is asking,
but my understanding is that a great many drafts which are submitted 
for publication are rejected.  Like any other publication series
which has value as a compilation, the RFC series requires editorial 
oversight and filtering.  

For those that want an unfiltered publication series, there's always the web.
Or Usenet.

Keith  




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Pyda Srisuresh



--- Keith Moore [EMAIL PROTECTED] wrote:
  Sorry.. Your conclusion is based on a wrong premise. 
 
 The NAT group's draft documents speak for themselves.
 
My point exactly.

regards,
suresh





Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Peter Deutsch in Mountain View



Keith Moore wrote:

  The industry and their customers have already decided against you on
  this one.

 Industry people love to make such claims.  They're just marketing BS.
 The Internet isn't in final form yet and I don't expect it to stabilize
 for at least another decade.  There's still lots of time for people to
 correct brain damage.

Well, I don't share the view of a monotonic march towards the "correct" Internet.
Just as the first single-celled organisms giving off massive amounts of waste oxygen
created an environment which led eventually to the furry mammals, the Internet responds
and evolves from instantiation to instantiation. I hear talk about products which people
expect to only have a lifetime of a few years, or even a period of months, until
evolution moves us all on. Some of the things that you find so offensive may not
even be relevant in a couple of years.

But (you knew there'd be a but, didn't you?) there is a substantial market for
products based upon interception or redirection technologies today. I don't offer
this as a technical argument for their adoption. I was merely pointing out that the
market has voted on this technique and judged it useful despite what the IETF might
or might not decree. Short of punishing those poor misguided users, I don't know
what else you can accomplish on this one...


  I'm wondering about the future of an IETF that consistently takes itself
  out of play in this way.

 IETF's job is to promote technical sanity, not to support unsound vendor
 practices.

Well, there you go. You think the IETF's Seal of Approval and promotion of technical
sanity can prevent our unsound vendor practices from perpetrating Marketing BS on
poor users. You're right - the positions are fairly clear at this point. I'll try to
quieten down now...


  - peterd




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Keith Moore

  the problem with a "NAT working group" is that it attracts NAT
  developers far more than it does the people whose interests
  are harmed by NATs - which is to say, Internet users in general.
  so by its very nature a "focused" NAT working group will produce
  misleading results.
 
 This bias holds for any working group, be it IPSEC, header
 compression, or anything else. Why pick on NAT?

good question.

Most IETF working groups are working on things that are 
more-or-less harmless, in that their effects are likely
to be isolated to those who choose to use them.  Their greatest 
potential for harm is that they will displace something better 
that might crop up in the same space.   The fact that such a WG
is biased toward its own problem space isn't of much consequence 
because their solution isn't likely to affect people trying to
solve different problems.

NATs (and interception proxies) have much higher risk - they attempt
to address certain real problems but they do so at the expense of 
flexibility, generality, predictability, and reliability of the
network.  They also violate long-established conventions about
the separation of functions between network layers, and in doing so,
break higher level applications that (quite reasonably) assume
that the lower layers of the network are working within their
design constraints.  

Within the current IP architecture, the notion of a "technically
sound NAT" is an oxymoron - NATs inherently violate fundamental
design constraints of the architecture.  The technically sound 
way to solve the problems that NATs attempt to address is not to 
alter the behavior of NATs but to provide alternatives outside
of the NAT space.  But a group that's NAT-centric is inherently
focused inside that space, and thus has a very limited ability to
promote technical soundness.

(and there are those who think that the Internet architecture
should be changed to incorporate NATs and that all of those applications
which don't work in the presence of NATs should be deemed obsolete.
but the effect of such a change is so widespread that it is far beyond the 
ability of the NAT working group - or any single working group - to evaluate)

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Peter Deutsch in Mountain View

g'day,

Lloyd Wood wrote:

  Well, look at the list of signatories to the Draft in question.

 technical merits, please.

I was not arguing for the merits of the technology in question based upon who
signed it. In fact, I haven't tried to address the technical merits of the
specific document at all. I was addressing the issue that Keith was declaring
a technique (IP address interception) as out of scope for the IETF and was,
in doing so, cutting the IETF off from an area of work that finds widespread
application on the net. This seemed inappropriate considering the claims for
the organization in such places as:

   http://www.ietf.org/tao.html#What_Is_IETF

Note in particular the claims " It is the principal body engaged in the
development of new Internet standard specifications" and "Its mission
includes: ... Providing a forum for the exchange of information within the
Internet community between  vendors, users, researchers, agency contractors
and network managers."

IP address interception is a widely used technique that would not be
developed or documented through the IETF if Keith had his way. Since it is a
widely used technique for vendors and network managers, we'd have to ask Gary
Malkin to revise his document if we agree to Keith's request.

To be explicit, this is a metadiscussion about the IETF. I specifically
disclaim any interest or standing in a debate about NECP itself.


  Frankly, I hope we are going to be able to arrest this dangerous
  trend. So, I engaged last year when I saw the broadcast industry
  being run out of town over the TV: URL draft

 whose technical content I recall as being a few lines of incorrect
 EBNF notation that didn't parse. But hey, they were generating a _lot_
 of signatories with their write-in campaign.

Again, I did not engage in a debate about the technical merits of their
claims, I was (and am) pointing out that the IETF claims to be a gathering
place to educate and exchange ideas and is not as good at that as it used to
be (IMHO).


  Yes, and those of us who object to this degradation of the original
  concept of the IETF

 I'd like a reference for the original concept of the IETF, please; I
 worry about history being rewritten. What was the original concept of
 'RFC Editor', exactly?

I'm away from my archives, and my history only goes back about 10 years, so
I'll leave that for others who were there, but I refer you again to the Tao
of the IETF.


  To me the appropriate reponse when someone sees danger in a technology
  is another document, making your case.

 ...which is why Keith began this thread in the first place. Or don't
 mailing list posts count as documents?

Well, that's not the way I read his initial messages. He basically said "I
tried to stop them at the drafting stage, but couldn't, so I ask that we not
let this go out the door with our name on it".

Originally the response to an RFC you disagreed with was an RFC explaining
the problems you perceive. It is true that things have become more formal
over time, but nowhere is it carved in stone tablets that we can't identify
problems with current trends and make some adjustments to our processes.
That's not a call for less peer review. It's a call for less of a single
world view, and more tolerance for multiple world views. In effect, I'd also
rather we worry more about what technical people do with our documents and
less with what marketing people do with our documents.


  Note, nobody talks in terms of getting something "blessed by the
  IETF", but in terms of how the IETF would slow the work down and get
  in the way so shouldn't be a part of the process.

 Peer review slows things down. This is unavoidable in exposing the
 work to a larger audience, but has long-term societal benefits
 unfortunately not quantifiable on a balance sheet.

I guess I didn't explain that very well. The question was not whether the
work would be submitted to peer review, but whether the IETF was an
appropriate forum for that peer review. The view I've heard expressed several
times has been that the IETF has ossified and would not be receptive to new
ideas so just slowed things down. The implication being that the IETF was no
longer relevant to the engineering process. You are not going to hear this
opinion expressed here from people who have already moved on, so I thought
I'd pass it on.

Note, I am *not* suggesting Cisco has abandoned the IETF. Heck, such a
decision would be so way out of my pay grade (and not the way I see this
company working  at all).  I'm just suggesting that at least some individuals
I know (and not just at Cisco) are starting to feel that the IETF is less
relevant to their needs than it used to be. Some people are going to say
"great, that'll cut down on the marketing BS".. I happen to say "Houston, we
have a problem here..."

-
peterd



Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Keith Moore

Peter,

I don't think I would agree that NECP is out of scope for IETF.
I think it's perfectly valid for IETF to say things like "NECP
is intended to support interception proxies.  Such proxies 
violate the IP architecture in the following ways: ... and 
therefore cause the following problems... the only acceptable 
things for an interception proxy to do within the current 
architecture are those things which are already allowed by the
IP layer (route packets without change to their intended 
destination, delay packets, duplicate packets, or drop them)
and the use (if at all) of devices that do more than this should 
be limited to the following situations..."  But it's difficult to 
say things like this if you start with a document that assumes 
that interception proxies are fundamentally a good thing.  

IETF tries to make the Internet work well.  If you propose something
that fundamentally violates the design of the Internet Protocol,
and which harms the ability of Internet applications to work well,
is that really something that the IETF should take on?  Granted
that the Internet can and should evolve over time, but proposals that
fundamentally change the architecture need to be examined from a 
big-picture point of view and not in a piecemeal fashion.

 Note, I am *not* suggesting Cisco has abandoned the IETF. Heck, such a
 decision would be so way out of my pay grade (and not the way I see this
 company working  at all).  I'm just suggesting that at least some 
 individuals I know (and not just at Cisco) are starting to feel that the 
 IETF is less relevant to their needs than it used to be. 

then perhaps they misunderstand the IETF.  IETF doesn't exist to
meet the needs of cisco or any other single group.  IETF exists 
to help the Internet work well for the benefit of all Internet users.  
When vendors produce products that harm the ability of the Internet
to work well, they're quite naturally putting themselves in conflict
with the IETF.  This shouldn't surprise anyone.

Keith 




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Martin J.G. Williams

I've come into this discussion rather late, however there is at least one
salient point which I believe Keith Moore has argued rather well...

In my understanding the role of the IETF is to promote the logical growth
and evolution of the Internet Protocols.  Whilst 'vendors' have massive
technical resources at their disposal, one thing that they are not is
impartial... and impartiality is what I have always believed the IETF's
role to provide!

There is no denying the benefits that companies like Cisco have brought to
the Internet, but their motive is, quite simply, profit based... no more,
no less.

On another note - to the guy who said 'this is way out of my pay grade' - I
would have to say... since when did pay reflect IQ?  Knowledge and insight
are a valuable and rare commodity anyway... if we start discriminating on
the basis of income then all of these discussions will become irrelevant
within ten years, if not less!  (i'm not gonna name any names here! :-)  )

Kind regards
Martin






RE: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Bernard Aboba

The use of "load balancing" technologies is growing rapidly
because these devices provide useful functionality. These
devices utilize many different techniques, only some of which
can be characterized as "interception proxies" or "reverse
network address translation." For example, using MAC address
translation (MAT) it is possible to provide load balancing
and failover without breaking IPSEC or violating other
basic principles.
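As a purely conceptual illustration of the MAT approach described above (the class and field names here are invented for this sketch, not taken from any product or draft), the dispatcher rewrites only the layer-2 destination of each inbound frame. The IP packet inside, and therefore any IPsec protection or end-to-end checksum, passes through byte-for-byte intact:

```python
import itertools

class MatDispatcher:
    """Round-robin MAC address translation: all real servers answer
    for the virtual IP; only the frame's destination MAC changes."""

    def __init__(self, server_macs):
        self._rotation = itertools.cycle(server_macs)

    def dispatch(self, frame):
        # frame is modeled here as a dict with 'dst_mac' and 'ip_packet'.
        out = dict(frame)
        out["dst_mac"] = next(self._rotation)   # pick the next real server
        return out                              # 'ip_packet' is untouched

d = MatDispatcher(["aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"])
frame = {"dst_mac": "vip-mac", "ip_packet": b"\x45\x00..."}
first = d.dispatch(frame)
second = d.dispatch(frame)
```

Because the IP header is never rewritten, this technique sidesteps the checksum-regeneration and source-address objections raised elsewhere in this thread; it typically requires the real servers to share a layer-2 segment with the dispatcher, which is consistent with its use for front-ending one's own servers.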

Thus it strikes me that this is a legitimate topic for
inquiry, and one that cannot be so easily dismissed as "morally"
unacceptable. As we have done with the NAT WG, it is
often useful to accurately document the drawbacks of a
common practice as well as to encourage exploration of
alternatives.

If seen within this context, it is conceivable that we
might well want to publish draft-cerpa-necp-0x.txt at
some future date as a documentation of existing practice
with the correct caveats and references. While there are
clearly elements of the document which are misleading,
overall it does not seem unredeemable to me.

My recommendation would be to explore formation of a WG to
deal with the issues in this area, and to remand
draft-cerpa-necp-02.txt to that WG if and when it is formed.

-Original Message-
From: Keith Moore [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 06, 2000 9:42 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: recommendation against publication of draft-cerpa-necp-02.txt


I am writing to request that the RFC Editor not publish
draft-cerpa-necp-02.txt as an RFC in its current form,
for the following reasons:

1. The document repeatedly, and misleadingly, refers to NECP as a
standard.  I do not believe this is appropriate for a document
which is not on the IETF standards track.  It also refers to
some features as "mandatory" even though it's not clear what
it means for a non-standard to have mandatory features.


2. A primary purpose of the NECP protocol appears to be to
facilitate the operation of so-called interception proxies.  Such
proxies violate the Internet Protocol in several ways:

(1) they redirect traffic to a destination other than the one
specified in the IP header,

(2) they impersonate other IP hosts by using those hosts' IP addresses
as source addresses in traffic they generate,

(3) for some interception proxies, traffic which is passed on to the
destination host, is modified in transit, and any packet-level
checksums are regenerated.

IP allows for the network to delay, drop, or duplicate IP packets,
as part of a best effort to route them to their intended destination.
But it does not allow the above practices.
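To make point (3) concrete: the IP header checksum defined in RFC 791 is a plain one's-complement sum with no cryptographic properties, so any middlebox that alters a packet can simply recompute it, and the alteration is undetectable at the IP layer. A minimal sketch (the function name is mine, not from the draft):

```python
# One's-complement checksum over 16-bit words, per RFC 791.  An
# intermediary that modifies a packet can recompute this value, so
# the checksum offers no protection against in-transit modification.

def ip_checksum(header: bytes) -> int:
    """Return the 16-bit IP header checksum.  The checksum field in
    the input must be zeroed before calling."""
    if len(header) % 2:
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    # Fold any carries back into the low 16 bits.
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

Note that running the same sum over a header whose checksum field is already filled in yields zero; that is the entire integrity check the network layer provides.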

This document implicitly treats such behavior as legitimate even
though it violates the primary standard on which all Internet
interoperability depends.


3. Aside from the technical implications of intercepting traffic,
redirecting it to unintended destinations, or forging traffic from
someone else's IP address - there are also legal, social, moral
and commercial implications of doing so.

In my opinion IETF should not be lending support to such dubious
practices by publishing an RFC which implicitly endorses them,
even though the authors are employed by major research institutions
and hardware vendors.


4. Furthermore, while any of the above practice might be deemed "morally"
acceptable in limited circumstances (such as when the interception proxy
is being operated by the same party as the one which operates the host being
impersonated) in general these are very dangerous.  There have been numerous
cases where network elements employing practices similar to the above have
been demonstrated to harm interoperability.  (e.g. there is a widely-used
SMTP firewall product which breaks SMTP extension negotiation, and a
traffic shaping product was recently found to corrupt data in TCP streams
generated by certain kinds of hosts)

This document contains language touting the benefits of NECP but very
little language describing the danger of using the above techniques which
NECP was designed to support.   Where the document does mention the
problems, it is misleading or incomplete.  For example, the Introduction
says

   However, it [an interception proxy] can cause problems: users
   have no way to go directly to origin servers, as may be required in
   some cases (e.g., servers that authenticate using a client's source
   IP address).  The proxy has a high-level understanding of the
   application protocol; it can detect these cases and decide which
   flows should be cut through to origin servers.

The latter sentence is a false assertion - even though the proxy has
a high level understanding of the protocol, the proxy is not generally
able to determine when cut-through is required.   For example, the
service being impersonated by the interception proxy may have uses for
the client's source address which are outside of the protocol bein

Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Keith Moore

  I am writing to request that the RFC Editor not publish 
  draft-cerpa-necp-02.txt as an RFC in its current form,
  for the following reasons:
  
  2. A primary purpose of the NECP protocol appears to be to 
  facilitate the operation of so-called interception proxies.  Such 
  proxies violate the Internet Protocol in several ways: 
  
  3. Aside from the technical implications of intercepting traffic,
  redirecting it to unintended destinations, or forging traffic from
  someone else's IP address - there are also legal, social, moral and
  commercial implications of doing so.
 
 You will need to be far more specific here.  I see absolutely nothing that
 is not legal, is not social, or is not moral.  

Okay, I'll offer a few specific examples, by no means the only ones:

1. an Internet service provider which deliberately intercepts traffic 
(say, an IP packet) which was intended for one address or service, 
and delivers it to another address or service (say that of an interception 
proxy) may be misrepresenting the service it provides (it's not really
providing IP datagram delivery service because IP doesn't work this way).

2. an internet service provider which deliberately forges IP datagrams
using the source address of a content provider, to make it appear
that the traffic was originated by that content provider
(interception proxies do this), may be misrepresenting that content
provider by implicitly claiming that the service conveyed to the user
by the ISP is the one provided by the content provider.

3. an internet service provider which deliberately alters IP traffic 
in transit, and deliberately makes it appear that the traffic was not 
altered (by recomputing the checksum) may be misrepresenting the 
service it provides (because IP doesn't work that way) and it may
also be violating the traffic originator's right to make derivative 
works of its copyrighted material.  (in particular an interception 
proxy that modified the content in transit - say to convert one
data format to another - might be held to violate that right because
content conversions almost always degrade the content and thereby
degrade the expression)

now whether any of these is actually illegal would be up to a court
to decide, and different courts in different jurisdictions might rule 
differently (especially depending on the particulars of a test case)
but each of these is similar to behavior that in other communications
domains would be illegal.  and regardless of whether the grounds are
technical, legal, or moral, none of these behaviors seems like 
something that IETF should support.

 I do see commercial
 implications, but whether those are is "good" or "bad" is not a technical
 judgement.

I agree that we are safest when we can rely purely on technical arguments,
and I believe that all of the above practices are in general technically 
unsound.  Still, I don't think there is a definite boundary between 
technical judgement and moral judgement.  Technical judgements are
often based on moral sensibilities.

  In my opinion IETF should not be lending support to such dubious
  practices by publishing an RFC which implicitly endorses them, even
  though the authors are employed by major research institutions and
  hardware vendors.
 
 I take the contrary position.  The IETF ought to be encouraging the
 documentation of *all* practices on the net.  It is far better that they
 are documented where people can find useful information when they see this
 kind of packet activity rather than have them known only to a few
 cognescenti.

Actually I share the belief that it is a good thing to document any 
widespread practice, or any idea which is believed to be useful.   
However it's also clear that publication of all such practices in RFCs
is beyond IETF's meager resources, and would dilute the value of 
the RFC document collection. And the reality is that a protocol
documented in an RFC is often treated as if it were a standard, or 
at least, as if the practice were endorsed by IETF.  When a poor
protocol is apparently endorsed in this way it does harm to IETF's 
reputation and to its work of promoting interoperability.

 May I suggest that one treat this in its classical sense - as a Request
 for Comments and that those who have technical objections or technical
 enhancements publish those comments in an additional document rather than
 try to suppress the original one.

RFCs have not been treated in this sense for many years.  And while
such treatment may have made sense in the early days of the ARPAnet
with a community of a few hundred users, it does not make sense in 
an Internet with tens of millions of users.

The reality is that today, many documents submitted for RFCs are rejected.
I'm simply arguing that this document should be added to that set,
or at least, that it needs substantial revision before it is found
acceptable.

 Having a document trail that shows what paths and ideas have been found
 wanting is nearly as 

Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Keith Moore

 The use of "load balancing" technologies is growing rapidly
 because these devices provide useful functionality. These
 devices utilize many different techniques, only some of which
 can be characterized as "interception proxies" or "reverse
 network address translation." For example, using MAC address
 translation (MAT) it is possible to provide load balancing
 and failover without breaking IPSEC or violating other
 basic principles.
 
 Thus it strikes me that this is a legitimate topic for
 inquiry and that cannot be so easily dismissed as "morally"
 unacceptable. 

I agree that this is a legitimate topic for inquiry.  the thing
I want to avoid is having IETF encourage deployment of
interception proxies by publishing this or similar RFCs that
(implicitly or explicitly) support the concept - at least,
until we have community consensus on when such practices are
harmful to the Internet and when they are mostly harmless.

load balancing technologies, at least as I've seen them used, 
don't bother me.  if you front-end your own servers with a 
load-balancing box, you are presumably in a position to determine
whether the box actually suits your needs for the service you
are providing from those servers.  if it doesn't suit your needs,
switch to something else.  and the fact that they "forge" source
IP addresses doesn't bother me as long as the only addresses
that they are forging belong to the same people as those who
are operating the load-balancing box.  to the extent that such
boxes are doing harm they're probably only hurting those who
operate them.  

again, this doesn't mean that IETF should standardize such
practices (since they're a local matter).  but I don't see this
as a moral issue.  my real concern is about boxes that are intended
to affect third-party traffic.

 As we have done with the NAT WG, it is
 often useful to accurately document the drawbacks of a
 common practice as well as to encourage exploration of
 alternatives.

From my point of view there were significant forces within the 
NAT group attempting to keep the extent of these drawbacks from 
being accurately documented and to mislead the readers of those
documents into thinking that NATs worked better than they do -
for instance, the repeated assertions that NATs are "transparent".
So I'm not sure that this is a good model on which to base future work.

 If seen within this context, it is conceivable that we
 might well want to publish draft-cerpa-necp-0x.txt at
 some future date as a documentation of existing practice
 with the correct caveats and references. While there are
 clearly elements of the document which are misleading,
 overall it does not seem unredeemable to me.

I also suspect that the document is redeemable, but that it 
needs significant modification.  As currently written it
does not seem intended to document current practice, but rather,
to encourage deployment of certain kinds of products, some
of which are arguably harmful.

 My recommendation would be to explore formation of a WG to
 deal with the issues in this area, and to remand
 draft-cerpa-necp-02.txt to that WG if and when it is formed.

wrec already exists, and the intention was that it would eventually
define technically sound mechanisms for web replication and caching.

while the technology in necp may have other uses besides web replication,
I really wonder if IETF should put its energies into defining
how to violate the Internet Protocol - especially when it seems to
me that legitimate violations of IP (i.e. when it only affects your
own services) are likely to be "local matters" and therefore not
really candidates for internet standardization.  

And as a practical matter I think it would be really difficult to 
attract a balanced constituency to the working group.

But I agree that a dialog of some form is needed, which is why I
cc'ed the IETF list on my note to the RFC Editor in the first place.

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Stephen Kent

Keith,

Without comments on other aspects of the technology in question, I 
would like to make some observations about the security aspects of 
the processing you cite as violating IP.

By now we all should know that it is a bad idea to rely on an 
unauthenticated IP address as a basis for determining the source of a 
packet.  Similarly, the IP header checksum offers no security.  We 
have a variety of IETF standard protocols (e.g., IPsec and TLS) that 
provide suitable assurance of data origin authentication and 
integrity for application data sent via IP.  Thus, if anyone is 
really concerned about knowing with whom they are communicating, and 
whether a packet was modified in transit, they should be using these 
standard security technologies.  Many web sites for which these 
security concerns are significant already make use of SSL/TLS anyway.

Steve




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Leslie Daigle

Howdy,

Stephen Kent wrote:
 Thus, if anyone is
 really concerned about knowing with whom they are communicating, and
 whether a packet was modified in transit, they should be using these
 standard security technologies.  Many web sites for which these
 security concerns are significant already make use of SSL/TLS anyway.

I think the point was that this will impact many more casual 
interactions, where one wouldn't necessarily think to have to employ
authentication technologies.  

There are times when I and my ISP, or the ISPs it peers with, have 
different opinions about what is sufficiently recent/authentic
(of a copy of a resource, or even of a final destination address).
If unrelated entities in the chain each get to "assert an opinion"
about what's "good enough", for their own purposes, it is not at
all clear that I get the end-result that I deserve, or am even aware
of the fact that things have been changed midstream.


Leslie.

-- 

---
"My body obeys Aristotelian laws of physics."
   -- ThinkingCat

Leslie Daigle
[EMAIL PROTECTED]
---




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Keith Moore

 By now we all should know that it is a bad idea to rely on an 
 unauthenticated IP address as a basis for determining the source of a 
 packet.  Similarly, the IP header checksum offers no security.  We 
 have a variety of IETF standard protocols (e.g., IPsec and TLS) that 
 provide suitable assurance of data origin authentication and 
 integrity for application data sent via IP.  Thus, if anyone is 
 really concerned about knowing with whom they are communicating, and 
 whether a packet was modified in transit, they should be using these 
 standard security technologies.  Many web sites for which these 
 security concerns are significant already make use of SSL/TLS anyway.

While I naturally agree that one should not use unauthenticated
IP addresses to determine the source of a packet, I think it's a 
big stretch to say that the existence of IPsec and TLS means that 
it's okay for third parties to forge source addresses.

and for different reasons, both IPsec and TLS are of fairly limited 
applicability for application-level security - we are still missing
lots of pieces.

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Stephen Kent

Leslie,

I understand your point, but we leave ourselves open to many forms of 
attacks, or errors, by assuming that "what you receive is what was 
sent" in this era of the Internet.  Security is not black and white, 
but the gray area we're discussing does bother me.  If one cares 
about knowing where the data originated, and that it has not been 
altered, then one needs to make use of the tools provided to address 
that concern.  if one doesn't use the tools, then one does not care 
very much, and the results may be surprising :-).

Steve




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Valdis . Kletnieks

On Fri, 07 Apr 2000 13:07:29 EDT, Stephen Kent said:
 but the gray area we're discussing does bother me.  If one cares 
 about knowing where the data originated, and that it has not been 
 altered, then one needs to make use of the tools provided to address 
 that concern.  if one doesn't use the tools, then one does not care 
 very much, and the results may be surprising :-).

The sad part is that in this day and age, we had to publish the SANS
DDOS Roadmap, which suggested that things would be a lot better if sites
installed the patches and did ingress/egress filtering.

I suspect that there is a *very large* portion of the Internet community that
does "care very much" (or at least enough to worry a little bit), but is
too new/clueless/whatever to properly find/install/configure the tools.

I encounter a lot of sites that install spam filters and firewalls because
they ARE concerned about spam, hackers, etc.  Unfortunately, a lot of them
Get It Very Wrong, and do stuff like bounce SMTP 'MAIL FROM:', or Do The
Wrong Thing with NTP traffic, etc etc.

I have to conclude that there's a lot of sites that *do* care very much, but
are lacking the technical expertise to use the tools.

Remember: There's 4 million .coms.  There's not 4 million experienced sysadmins.

-- 
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Leslie Daigle


A fine argument in the abstract, but reality bites. 

Stephen Kent wrote:
 sent" in this era of the Internet.  Security is not black and white,
 but the gray area we're discussing does bother me.  If one cares
 about knowing where the data originated, and that it has not been
 altered, then one needs to make use of the tools provided to address
 that concern.  if one doesn't use the tools, then one does not care
 very much, and the results may be surprising :-).

Who is "one", in your mind?  Mail, web, WAP client application
writers?  Or the poor end-user who gets the surprise without having
a clue what hit him?

As an end-user, I can be as aware as I like about the security issues,
but if client software doesn't support security, and/or my ISP's services
don't support it, there's nothing I can do.  

I am not saying that security isn't the answer -- but I do think you're
looking at your chalkboard, not deployed reality, when you suggest
it isn't a problem because there are technologies for authenticating
packets.

Leslie.

-- 

---
"My body obeys Aristotelian laws of physics."
   -- ThinkingCat

Leslie Daigle
[EMAIL PROTECTED]
---




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Peter Deutsch



Keith Moore wrote:
.  .  .
   3. Aside from the technical implications of intercepting traffic,
   redirecting it to unintended destinations, or forging traffic from
   someone else's IP address - there are also legal, social, moral and
   commercial implications of doing so.
 
  You will need to be far more specific here.  I see absolutely nothing that
  is not legal, is not social, or is not moral.
 
 Okay, I'll offer a few specific examples, by no means the only ones:
 
 1. an Internet service provider which deliberately intercepts traffic
 (say, an IP packet) which was intended for one address or service,
 and delivers it to another address or service (say that of an interception
 proxy) may be misrepresenting the service it provides (it's not really
 providing IP datagram delivery service because IP doesn't work this way).

Okay, I think I see the mistake you're making. You're crossing
abstraction layers and conflating two different things (the name of
a service with the end point of the connection to that service). You
are criticizing the moving of an endpoint when what you really
object to is the misrepresentation of a service. Or do you also
object to HTTP redirects, dynamic URL rewriting, CNAMEs, telephone
Call Forwarding, or post office redirecting of mail after you move? 

A while ago Fedex tried an advertising campaign in which they
explained to people that when you send a packet from New York to
Chicago, the package is actually routed down to Tennessee (or wherever
it is) and then sent back up to the destination. They wanted
people to see how clever they were, in sending all their packets on
this round-about trip, thus getting it there *much* faster. 

The campaign apparently confused people, and made them nervous, so
they dropped it, but the point is still valid. If what you want is a
particular service (fast information delivery), don't confuse that
with the lower-level transport layer issues (packet delivery). It
may well be desirable to reroute things to get improved service at
a higher abstraction layer. I see nothing "illegal" about Fedex
sending my packet to Tennessee and I see nothing immoral about
Earthlink, MCI, Cisco and CNN all getting together to route my
packets to whichever one of Akamai's caches is the most appropriate
one for me to go to today. After all, I didn't ask CNN to send me
packets, I asked CNN for today's news.

Now, misrepresenting myself as someone else may well be fraud, a
well defined crime, so someone else offering me news and pretending
it's from CNN is wrong, but that's nothing to do with IP packet
delivery. You're thinking at the wrong abstraction layer. Changing
IP addresses may *result* in fraud, depending upon why you do it,
but it doesn't constitute fraud in and of itself ("routers don't
mislead people, people mislead people..." ;-) 

Bottom line is, you seem pretty confused here. Sadly, you take this
in really strange directions (see below).


 2. an internet service provider which deliberately forges IP datagrams
 using the source address of a content provider, to make it appear
 that the traffic was originated by that content provider
 (interception proxies do this), may be misrepresenting that content
 provider by implicitly claiming that the service conveyed to the user
 by the ISP is the one provided by the content provider.

Keith, this is a legal issue. We don't do legal issues here. If
someone is misrepresenting themselves, and causing harm, there are
very clearly defined legal mechanisms to address that. This is *so*
far outside the purview of the IETF that I can't figure out what
you're even trying to accomplish. Out of curiosity, do you even have
any legal training??


.  .  .
 now whether any of these is actually illegal would be up to a court
 to decide, and different courts in different jurisdictions might rule
 differently (especially depending on the particulars of a test case)
 but each of these is similar to behavior that in other communications
 domains would be illegal.  and regardless of whether the grounds is
 technical, legal, or moral, none of these behaviors seems like
 something that IETF should support.

So because someone can pick up a router and beat someone to death
with it, we shouldn't build routers? Or do you honestly think it
appropriate that we add a "legal" section to RFCs?


.  .  .
  May I suggest that one treat this in its classical sense - as a Request
  for Comments and that those who have technical objections or technical
  enhancements publish those comments in an additional document rather than
  try to suppress the original one.
 
 RFCs have not been treated in this sense for many years.  And while
 such treatment may have made sense in the early days of the ARPAnet
 with a community of a few hundred users, it does not make sense in
 an Internet with tens of millions of users.
 
 The reality is that today, many documents submitted for RFCs are rejected.
 I'm simply arguing that this document should be 

Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Vernon Schryver

 From: [EMAIL PROTECTED]

 ...
 they ARE concerned about spam, hackers, etc.  Unfortunately, a lot of them
 Get It Very Wrong, and do stuff like bounce SMTP 'MAIL FROM:', or Do The
 Wrong Thing with NTP traffic, etc etc.

 I have to conclude that there's a lot of sites that *do* care very much, but
 are lacking the technical expertise to use the tools.

 Remember: There's 4 million .coms.  There's not 4 million experienced sysadmins.


It's worse than that, as AOL is demonstrating with their port 25 redirecting.
If your skin doesn't crawl at the thought of a third party adding headers
to your SMTP messages, you need to take some time out to think about things.
There need be no significant implementation difference between adding
headers and making improvements to the body of an SMTP message.

Then there is the collateral damage.  I don't know if the AOL redirectors
store and forward, but if they do, think about what that does to SMTP AUTH
challenges and responses.   (yes, assuming that SMTP AUTH runs on port 25
or that AOL expands their redirecting to other ports).

If you haven't seen weird effects from HTTP redirecting, including clients
getting the wrong (i.e. old) pages, then you need to look around.

I think a bigger worry than a shortage of experienced administrators is an
abundance of people who prefer the easiest, cheapest short-term
solution to taking a long view, such as redirecting SMTP instead of
enforcing serious anti-spam terms of service.


Vernon Schryver[EMAIL PROTECTED]




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Dennis Glatting

Leslie Daigle wrote:
 

 As an end-user, I can be as aware as I like about the security issues,
 but if client software doesn't support security, and/or my ISP, services
 don't support it, there's nothing I can do.
 

Huh? You have a choice: (a) obtain a client that does support
security; and (b) get a new ISP. Both are plentiful.


Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Valdis . Kletnieks

On Fri, 07 Apr 2000 12:16:05 MDT, Vernon Schryver [EMAIL PROTECTED]  said:
 It's worse than that, as AOL is demonstrating with their port 25 redirecting.

Hmm.. I don't correspond with that many AOL people.  What are they doing NOW?

 If your skin doesn't crawl at the thought of a third party adding headers
 to your SMTP messages, you need to take some time out to think about things.

You mean *other* than the required RFC822 Received: headers, and/or the
RFC2476-approved re-writing?  Gaak if so.

-- 
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Keith Moore

 Applications can gain a lot of security by building on top of a lower 
 layer secure communication substrate, such as that provided by IPsec 
 or TLS.  Such substrates allow the application developer to make 
 assumptions about the security of the basic communication path, and 
 have these assumptions be valid.  Precisely the sorts of things you 
 are citing as "bad" can be addressed in this way.  Fancier 
 application security requires some level of customization, perhaps in 
 an application-specific fashion, as you noted.

I beg to differ.  Few applications can use IPsec or TLS authentication 
as-is.   A few more can get away with using username/password schemes
on top of IPsec or TLS privacy.  But neither IPsec nor TLS is anything
resembling a generally applicable authentication solution.  

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Keith Moore

Stephen,

perhaps the reason that the tools are not used is that they are not
adequate for the task.  but it certainly does not follow that "if 
one doesn't use the tools, then one does not care very much".

Keith

 If one cares 
 about knowing where the data originated, and that it has not been 
 altered, then one needs to make use of the tools provided to address 
 that concern.  if one doesn't use the tools, then one does not care 
 very much, and the results may be surprising :-).




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Valdis . Kletnieks

On Fri, 07 Apr 2000 11:25:40 PDT, Dennis Glatting said:
 Huh? You have a choice: (a) obtain a client that does support
 security; and (b) get a new ISP. Both are plentiful.

Only if a client supporting security is available for your software
(which might be something other than Netscape/IE; the Web and the
Internet are not the same thing).  There are platforms that (for instance)
have a telnet client, but no ssh client.

Only if a client supporting security is available in your country.  Some
countries still have issues regarding cryptography.

Only if a new ISP is available in *your area*.  In many parts of the country,
if you require ISDN, DSL, cable modem, or anything else that's faster than
a 56K-over-POTS, your choices are severely limited, and may in fact be zero
for some technologies.

-- 
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Daniel Senie

Dennis Glatting wrote:
 
 Leslie Daigle wrote:
 
 
  As an end-user, I can be as aware as I like about the security issues,
  but if client software doesn't support security, and/or my ISP's services
  don't support it, there's nothing I can do.
 
 
 Huh? You have a choice: (a) obtain a client that does support
 security; and (b) get a new ISP. Both are plentiful.

Ah, no. In the real world of the Internet today, we have LOTS of folks
who get their Internet connectivity via cable modems and DSL. Many
vendors of such services, in order to help preserve IP address space,
give out only a single IP address to each customer. Since this is
incompatible with the way people use the Internet in many cases (e.g.
MANY homes have more than one computer), Network Address Translation is
used.

NAT is the reality of the Internet today. IPSec was developed for an
Internet that existed some years back, before address allocation
policies forced NAT to become commonplace. We now are in need of
security solutions which can survive such an environment. SSL is one
such example.

NAT presents a lot of problems to the Internet architecture. It's ugly
architecturally. We all know that. We can't make it go away by
complaining about it. We could fix IPSec to survive in the current
environment, or find ways to get more people interested in IPv6, do
both, or find alternate forms of security.

Getting a new ISP, however, is NOT necessarily an option. You'd argue I
give up a cable modem for a dialup ISP? I don't think so. Application
level security (SSL, TLS, SSH) work fine for my needs and transit the
equipment I must use to exist on a cable modem.

-- 
-
Daniel Senie[EMAIL PROTECTED]
Amaranth Networks Inc.http://www.amaranth.com




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Stephen Kent

Keith,

Stephen,

perhaps the reason that the tools are not used is that they are not
adequate for the task.  but it certainly does not follow that "if
one doesn't use the tools, then one does not care very much".

or perhaps, one does not care enough ...

Steve




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Paul Francis

  
  In my 20+ years of security experience in the Internet community, it 
  has often been the arguments for the need to make do with existing 
  features or to adopt quick fix solutions that have retarded the 
  deployment of better security technology.  In retrospect, this 
  approach has not served us well.
  

I have a time machine.

I just went back 20 years in time, convinced everybody that it
was always more important to implement proper security than to
make do with existing features and quick fix solutions.  Having
thus changed the future, I went back forward in time.
Guess what---there was no internet!

PF




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Dennis Glatting

 
 Getting a new ISP, however, is NOT necessarily an option. You'd argue I
 give up a cable modem for a dialup ISP? I don't think so. Application
 level security (SSL, TLS, SSH) work fine for my needs and transit the
 equipment I must use to exist on a cable modem.
 

You have made the choice to have no choice, but you do have a choice.


Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Keith Moore

 Keith Moore wrote:
 .  .  .
3. Aside from the technical implications of intercepting traffic,
redirecting it to unintended destinations, or forging traffic from
someone else's IP address - there are also legal, social, moral and
commercial implications of doing so.
  
   You will need to be far more specific here.  I see absolutely nothing that
   is not legal, is not social, or is not moral.
  
  Okay, I'll offer a few specific examples, by no means the only ones:
  
  1. an Internet service provider which deliberately intercepts traffic
  (say, an IP packet) which was intended for one address or service,
  and delivers it to another address or service (say that of an interception
  proxy) may be misrepresenting the service it provides (it's not really
  providing IP datagram delivery service because IP doesn't work this way).
 
 Okay, I think I see the mistake you're making. You're crossing
 abstraction layers and conflating two different things (the name of
 a service with the end point of the connection to that service). You
 are criticizing the moving of an endpoint when what you really
 object to is the misrepresentation of a service. Or do you also
 object to HTTP redirects, dynamic URL rewriting, CNAMEs, telephone
 Call Forwarding, or post office redirecting of mail after you move? 

I don't object to redirects at all, as long as they are carefully 
designed.   I do object to misrepresenting the service.As I've 
said elsewhere, if the service wants to set up an interception proxy 
on its own network to help make its service more scalable, I have 
no problem with that.  I do have a problem with unauthorized third 
parties setting up interception proxies (which, as I understand it, 
is the most common application of such devices).

 It may well be desirable to reroute things to get improved service at
 a higher abstraction layer. 

the problem is that one person's idea of improved service may be
another person's idea of degraded service.  getting stale data
to me faster may not be much help.  I would argue that it
is up to the producer and consumer, not the ISP, to decide what
level of service is appropriate.

 I see nothing "illegal" about Fedex
 sending my packet to Tennessee and I see nothing immoral about
 Earthlink, MCI, Cisco and CNN all getting together to route my
 packets to whichever one of Akamai's caches is the most appropriate
 one for me to go to today. After all, I didn't ask CNN to send me
 packets, I asked CNN for today's news.

If CNN is okay with this, I have no problem with it.  They get to
decide what content delivery mechanisms are appropriate for their
content.  Other content providers might make different decisions.
Where I have a problem is when J. Random ISP unilaterally decides 
that some content delivery mechanism other than standard IP routing
is appropriate for CNN's data (or my data).  

And on some level, yes, you did ask CNN to send you packets.  Or you
sent packets to CNN and the network sent you some packets back purporting
to be from CNN.  You and your web client presumably knew what you were
asking for, and CNN's web server (if it was even in the loop) presumably
knew what kind of response to give.  But the network in the middle does
not know for sure how to interpret your request and CNN's response.
Just because you are sending port 80 does not even mean that you are 
using HTTP, and it certainly doesn't mean that you're using the same
version of HTTP that the interception proxy just happens to support,
and it certainly doesn't mean that you're willing to tolerate whatever
data corruption the interception proxy (whether by design or by accident)
happens to introduce.


 Now, misrepresenting myself as someone else may well be fraud, a
 well defined crime, so someone else offering me news and pretending
 it's from CNN is wrong, but that's nothing to do with IP packet
 delivery. You're thinking at the wrong abstraction layer. Changing
 IP addresses may *result* in fraud, depending upon why you do it,
 but it doesn't constitute fraud in and of itself ("routers don't
 mislead people, people mislead people..." ;-) 

You seem to be saying that because we have a higher service layered 
on top of IP that we can disregard the IP service model.  I disagree.
There are two separate problems here:

1. An interception proxy, unless it is acting with authorization 
of the content provider, is misrepresenting itself as the content 
provider.  IP address spoofing is just one particular mechanism 
by which this can be done, but regardless of the mechanism, it's 
wrong to misrepresent yourself as someone else.

2. At a different level, IP networks that don't behave like IP 
networks are supposed to behave violate the assumptions on which 
higher level protocols are based.  This degrades interoperability
and increases the complexity of higher level protocols as they
try to work around the damage done when clean layering is destroyed.
(for example of increased complexity, consider the suggestions to
solve the problem by having everyone use IPsec or TLS.)

Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Vernon Schryver

 From: [EMAIL PROTECTED]

 ...
  If your skin doesn't crawl at the thought of a third party adding headers
  to your SMTP messages, you need to take some time out to think about things.

 You mean *other* than the required RFC822 Received: headers, and/or the
 RFC2476-approved re-writing?  Gaak if so.

Consider the following:

] From: [EMAIL PROTECTED] (Jay Levitt)
] Newsgroups: news.admin.net-abuse.email
] Subject: Re: AOL Spammer online now, now what?
] Date: 30 Mar 2000 05:18:23 GMT
] Message-ID: [EMAIL PROTECTED]

] ...
] rly-ip* are the new hosts that will catch all port 25 connections from
] *.ipt.aol.com and attempt to filter spam.  They will also add the
] X-Apparently-From: header with the real AOL/CS/whatever screen name. ...


That all sounds fine, if you worry only about reducing spam in the
cheapest way possible.  I think their modifications would be compliant
if they were done by a host that legitimately answers at the IP address
to which the SMTP sender thinks it is connecting.  As it stands, how
can these redirectors comply with the postmaster mailbox requirement?
Assuming you figure out what's been done to your SMTP stream, how would
you contact postmaster at the stealthy redirector/filter?
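
As an illustration only (the header name X-Apparently-From and the sample
message below are assumptions taken from the quoted post, not any standard),
a receiving site could scan a message for headers that suggest an in-path
redirector modified it:

```python
# Hypothetical sketch: look for a header that, per the post quoted above,
# an in-path SMTP redirector might have inserted.  The header name and
# the sample message are assumptions for illustration, not a standard.
from email.parser import Parser

raw = """Received: from rly-ip01.mx.aol.com (rly-ip01.mx.aol.com)
X-Apparently-From: someuser
From: friend@example.com
Subject: hello

body
"""

msg = Parser().parsestr(raw)
# Collect any headers matching the redirector's assumed telltale name.
suspicious = [name for name in msg.keys()
              if name.lower() == "x-apparently-from"]
if suspicious:
    print("possible in-path redirector added:", suspicious)
```

This is a heuristic at best; a redirector that adds no headers leaves no
such trace.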

I think it's certifiably crazy to assume that all TCP connections to a
distant port 25 involve SMTP.  Assigned numbers doesn't say you can run
only SMTP on port 25.

It's even crazier to not consider the inevitable next step, stealth SMTP
and HTTP redirector/filters to deal with dirty words or taboo subjects,
and not just sex but politics.

And that's based on the best possible, content-neutral interpretation of
Mr. Levitt's words "filter spam."   I hope AOL would not look for telltale
spam keywords and only do connection rate limiting, if only because I hope
AOL knows that reports of spam would trigger content filters.  I'm even
less confident about other outfits.  

Think about port 25 redirecting used for other kinds of filtering
at certain national borders.

On the other hand, if this doesn't get IPSEC as well as application
layer encryption going, nothing will.


Vernon Schryver[EMAIL PROTECTED]




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Keith Moore

 perhaps the reason that the tools are not used is that they are not
 adequate for the task.  but it certainly does not follow that "if
 one doesn't use the tools, then one does not care very much".
 
 or perhaps, one does not care enough ...

or perhaps, that building tools that actually solve these problems
as opposed to chipping away at the edges is (a) fundamentally difficult
(b) requires many kinds of expertise, most of them scarce, (c) has 
been frustrated by governments and patent holders who were bent 
on trying to control things, and (d) has not kept pace with the
development of the 'net.

Keith




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Stephen Kent

Paul,


I have a time machine.

I just went back 20 years in time, convinced everybody that it
was always more important to implement proper security than to
make do with existing features and quick fix solutions.  Having
thus changed the future, I went back forward in time.
Guess what---there was no internet!

You need a better time machine, or you need to stop complaining. 
either will work for me.

Steve




RE: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Stephen Kent

  Christian,

Suppose, rhetorically, that we were to encrypt every IP packet using IPSEC.
What happens if a box takes your packet and deliver it to the "wrong"
address, for example to an ISP controlled cache? Well, the cache cannot do
anything with it, except drop it to the floor. We are thus faced with a
dilemma: not use IPSEC because it breaks the ISP provided "enhancement," or
tell the ISP to stop this denial of service attack.

If it is delivered to the "wrong" address then the security technology 
will have done its job, the user will become aware of the problem, 
and the ISP will have been prevented from doing, in an undetectable 
way, what folks were complaining about.  Sounds like success to me.

Steve




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Theodore Y. Ts'o

   Date: Fri, 07 Apr 2000 15:00:22 -0400
   From: Daniel Senie [EMAIL PROTECTED]

   Ah, no. In the real world of the Internet today, we have LOTS of folks
   who get their Internet connectivity via cable modems and DSL. Many
   vendors of such services, in order to help preserve IP address space,
   give out only a single IP address to each customer. Since this is
   incompatible with the way people use the Internet in many cases (e.g.
   MANY homes have more than one computer), Network Address Translation is
   used.

   NAT is the reality of the Internet today. IPSec was developed for an
   Internet that existed some years back, before address allocation
   policies forced NAT to become commonplace. We now are in need of
   security solutions which can survive such an environment. SSL is one
   such example.

   NAT presents a lot of problems to the Internet architecture. It's ugly
   architecturally. We all know that. We can't make it go away by
   complaining about it. We could fix IPSec to survive in the current
   environment, or find ways to get more people interested in IPv6, do
   both, or find alternate forms of security.

Actually, there are other solutions to this problem --- in fact, one in
which IPSEC plays a starring role.  I've been hearing more and more
people who are using IPSEC to tunnel from their cable modem to some site
which has (a) plenty of addresses, and (b) is well connected to the
internet.  They can thus get a /28, /27, or sometimes even a /24 block
of addresses, even though their cable modem or DSL provider either won't
provide that service, or would force the customer to pay through the
nose for the block of the addresses.  One advantage of using IPSEC to
solve this problem is that the ISP can't peer inside the packets to
figure out this is what's going on, so won't know that the customer is
using multiple computers through what they thought was the single
computer rate.

The downside is that your packets may take a longer-than-normal route
to get to their destination, but that's happening already even without
this hack.  For example, until I changed my DSL provider out of sheer
disgust and appallingly bad service, my packets from my home in Medford,
Massachusetts, to MIT in Cambridge, Massachusetts were going by way of
Washington, D.C. and MAE-East.

- Ted




RE: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Christian Huitema

Steve,

Suppose, rhetorically, that we were to encrypt every IP packet using IPSEC.
What happens if a box takes your packet and deliver it to the "wrong"
address, for example to an ISP controlled cache? Well, the cache cannot do
anything with it, except drop it to the floor. We are thus faced with a
dilemma: not use IPSEC because it breaks the ISP provided "enhancement," or
tell the ISP to stop this denial of service attack.

 -Original Message-
 From: Stephen Kent [mailto:[EMAIL PROTECTED]]
 Sent: Friday, April 07, 2000 10:07 AM
 To: Leslie Daigle
 Cc: Keith Moore; [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: recommendation against publication of
 draft-cerpa-necp-02.txt
 
 
 Leslie,
 
 I understand your point, but we leave ourselves open to many forms of 
 attacks, or errors, by assuming that "what you receive is what was 
 sent" in this era of the Internet.  Security is not black and white, 
 but the gray area we're discussing does bother me.  If one cares 
 about knowing where the data originated, and that it has not been 
 altered, then one needs to make use of the tools provided to address 
 that concern.  if one doesn't use the tools, then one does not care 
 very much, and the results may be surprising :-).
 
 Steve
 




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Theodore Y. Ts'o

   From: Keith Moore [EMAIL PROTECTED]
   Date: Fri, 07 Apr 2000 15:15:57 -0400

   the problem is that one person's idea of improved service may be
   another person's idea of degraded service.  getting stale data
   to me faster may not be much help.  I would argue that it
   is up to the producer and consumer, not the ISP, to decide what
   level of service is appropriate.

Speaking of which, in Adelaide, at the IETF terminal room, there were
http "transparent" proxies running.  It turns out that if you bypassed
them, you got faster service than if you used the transparent proxy
servers provided expressly for the IETF terminal room.

I'm told it saved 15% of the overall bandwidth consumed by the IETF, and
given the limited bandwidth in and out of Australia, I'm somewhat
sympathetic to why it was installed.  However, it's still much more
polite to ask people to use a proxy server than to just try to sneak in a
"transparent" proxy.

If they had asked nicely, I might have decided to use it.  As it was, I
was annoyed enough to simply bypass the proxy server, and gain for
myself faster web access.

- Ted




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Peter Deutsch



Keith Moore wrote:
.  .  .
 You seem to be saying that because we have a higher service layered
 on top of IP that we can disregard the IP service model.  I disagree.

No, I'm saying you purported to be offended by IP address
redirection when what you really objected to was unauthorized
spoofing of services and the delivery of something other than what
the user and/or information provider would have expected. That in
turn resulted in your calling for a ban on publication of a
technical document describing a technique which you admit has quite
legitimate applications (e.g. when CNN knows that such IP
interception is going on) because it *could* be used in a manner you
judge to be immoral (i.e. in a case when neither client nor server
knew).


 There are two separate problems here:
 
 1. An interception proxy, unless it is acting with authorization
 of the content provider, is misrepresenting itself as the content
 provider.  IP address spoofing as just one particular mechanism
 by which this can be done, but regardless of the mechanism, it's
 wrong to misrepresent yourself as someone else.

So write an RFC Draft and call it "IP Address Spoofing Considered
Harmful". Argue eloquently. Convince everyone and you will be famous
to generations of students to come as the person who saved us from
this pernicious practice, right up there with Dijkstra and GOTOs.
Fight ideas with ideas. But banning mention of the technique because
it can be misused? Puuleeze.



 2. At a different level, IP networks that don't behave like IP
 networks are supposed to behave violate the assumptions on which
 higher level protocols are based.  This degrades interoperability
 and increases the complexity of higher level protocols as they
 try to work around the damage done when clean layering is destroyed.
 (for example of increased complexity consider the suggestions to
 solve the problem by having everyone use IPsec or TLS)

You know, I've been pretty uncomfortable over the past few years at
what I perceive as a growing hostility in some quarters towards
innovation in the name of purity and stability. I agree the Internet
is "important", and we must consider the consequences of our
actions, but personally I think you've gone way over a line here...


 (as a friend of mine said many years ago, the problem with intelligent
 networks is that the network has to be smarter than the applications.)
 
 now it happens that both of these problems are caused by interception
 proxies, which is why I choose to mention both of them in the same
 discussion.

Actually, you mistyped "both problems are caused by the *misuse* of
interception proxies". And you advocate that the IETF prevent
discussion of the very technique because it can be misused. The bad
guys have proved pretty adept at misusing whatever technologies we
create, but the fact that search engines *can* be misused to leak
information wouldn't have been a reason to ban discussion of Archie
10 years ago, and the fact that the Web can carry porn wasn't a
reason to ban the publishing of an RFC on HTTP five years ago. The
final line of the argument is left as an exercise for the reader...

We need to build publishing and distribution services that can scale
to millions, if not billions, of users, and we need them now.
Address interception is a perfectly legitimate technique in our
arsenal of ideas for this task, with some dangers. So document the
dangers, but if you seek to ban the ideas themselves, I will tap my
head, stick out my tongue and speak in a terrible French accent in
your general direction.



  Bottom line is, you seem pretty confused here.
 
 only if you think that discussing several related topics in a single
 mail message is a sign of confusion.

Sorry, you're not convincing me you understand my point. You
acknowledge that it's okay to intercept if CNN knows you're doing
it. So why don't we document how to do that? Oh, you say - that's
because the idea can be misused. "Let these dangerous kooks publish
their innovations elsewhere, so we don't sully the IETF brand".
Fine, if we do that, I guarantee that new ideas will simply migrate
out of this forum. Be careful what you ask for, as you're liable to
get it...


   2. an internet service provider which deliberately forges IP datagrams
   using the source address of a content provider, to make it appear
   that the traffic was originated by that content provider
   (interception proxies do this), may be misrepresenting that content
   provider by implicitly claiming that the service conveyed to the user
   by the ISP is the one provided by the content provider.
 
  Keith, this is a legal issue. We don't do legal issues here.
 
 that's BS.  IETF has every reason to be concerned about publishing
 documents that promote illegal or clearly immoral behavior.  While it
 is true that it is not for us to judge fine points of law, it's also
 true that promoting illegal or clearly immoral behavior reflects poorly
 on IETF as an 

RE: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Ian King

I have a hammer.  

It's been driving nails just fine for twenty years.  It's a first rate
hammer, for which I paid top dollar.  It's a really useful tool.  But when I
try to open beer bottles with it, I end up with glass splinters in my beer.
What gives?  

As has been pointed out many times in many ways, the Internet was not
originally designed as a secure network, nor for many of the other tasks we
now wish it to perform.  Should we have implemented something in another
way?  Moot question, we have what we have.  Should we learn from our
mistakes, and when we can see something that appears to be yet another
mistake (no matter how appealing it is as a "quick fix"), avoid making that
mistake?  

We clever, clever engineers have come up with a number of interesting
"solutions" (workarounds?) for the limitations of the network we have
created.  Some of them are, in the long run, not good ideas, although they
are useful as interim solutions.  Some of them are just too violent to the
rules of the game as they are defined (by us!), and/or establish technical
or process precedents that are too dangerous to be allowed.  

-- Ian King

-Original Message-
From: Paul Francis [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 07, 2000 12:13 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: recommendation against publication of
draft-cerpa-necp-02.txt


  
  In my 20+ years of security experience in the Internet community, it 
  has often been the arguments for the need to make do with existing 
  features or to adopt quick fix solutions that have retarded the 
  deployment of better security technology.  In retrospect, this 
  approach has not served us well.
  

I have a time machine.

I just went back 20 years in time, convinced everybody that it
was always more important to implement proper security than to
make do with existing features and quick fix solutions.  Having
thus changed the future, I went back forward in time.
Guess what---there was no internet!

PF




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Keith Moore

 Keith Moore wrote:
 .  .  .
  You seem to be saying that because we have a higher service layered
  on top of IP that we can disregard the IP service model.  I disagree.
 
 No, I'm saying you purported to be offended by IP address
 redirection when what you really objected to was unauthorized
 spoofing of services and the delivery of something other than what
 the user and/or information provider would have expected. 

Actually I have objections to both - though the objections
to the former are purely technical and mostly in response
to folks who claim that such redirection is deserving of
standardization, or in general is anything more than a crude
short-term hack.  The objections to the latter are both moral
and technical.

 That in
 turn resulted in your calling for a ban on publication of a
 technical document describing a technique which you admit has quite
 legitimate applications (e.g. when CNN knows that such IP
 interception is going on) because it *could* be used in a manner you
 judge to be immoral (i.e. in a case when neither client nor server
 knew).

I did not call for a ban on publication of any document.  I suggested
that the RFC Editor consider not devoting its energies to publishing
the document - and I only suggested this after I suggested several
things that could be done to "fix" the document.  Clearly the document 
can be published by other means, nor would I try to prevent such publication.

What you may not realize is that fixing the bugs in documents such
as this one - which at best are on the margin of IETF's mission -
tends to consume inordinate amounts of effort on the part of IESG 
and/or the RFC Editor, who already have lots of work on their plates.  
Their effort, I believe, is better spent on getting more deserving
documents out the door.  

(Such waste of resources is especially annoying when the motivation for 
having the document published appears to be lend IETF's imprimatur 
to an approach by having it published as an RFC - and therefore, 
can be cited as if it were a standard - language in the RFC preamble
to the contrary notwithstanding.)  

 So write an RFC Draft and call it "IP Address Spoofing Considered
 Harmful". Argue eloquently. Convince everyone and you will be famous
 to generations of students to come as the person who saved us from
 this pernicious practice, right up there with Djkstra and GOTOs.
 Fight ideas with ideas. But banning mention of the technique because
 it can be misused? Puuleeze.

again, you're using "ban" incorrectly.

 You know, I've been pretty uncomfortable over the past few years at
 what I perceive as a growing hostility in some quarters towards
 innovation in the name of purity and stability. I agree the Internet
 is "important", and we must consider the consequences of our
 actions, but personally I think you've gone way over a line here...

I do take a hostile attitude toward so-called innovations which impair
the flexibility and reliability of the Internet and Internet applications,
and I make no apology for it.

  now it happens that both of these problems are caused by interception
  proxies, which is why I choose to mention both of them in the same
  discussion.
 
 Actually, you mistyped "both problems are caused by the *misuse* of
 interception proxies". 

tell that to the marketing departments of companies who are selling
interception proxies to ISPs and as local web caches.  such applications 
of interception proxies *do* cause harm, and yet most of the companies
selling such products would claim that these are legitimate uses.

 And you advocate that the IETF prevent
 discussion of the very technique because it can be misused. 

nope, not prevent discussion - clearly we are discussing it here -
I'm advocating that IETF not spend resources publishing a biased
description of this technique.

 We need to build publishing and distribution services that can scale
 to millions, if not billions, of users, and we need them now.
 Address interception is a perfectly legitimate technique in our
 arsenal of ideas for this task, with some dangers. 

I will agree that legitimate uses of the technique exist, but given 
the widespread misuse of this technique (there seems to be a great
deal more misuse than appropriate use) "perfectly legitimate" 
seems like an oversimplification.

 
   Bottom line is, you seem pretty confused here.
  
  only if you think that discussing several related topics in a single
  mail message is a sign of confusion.
 
 Sorry, you're not convincing me you understand my point. You
 acknowledge that it's okay to intercept if CNN knows you're doing
 it. 

not quite. I said "if it's okay with CNN".  Knowledge != explicit consent.

 So why don't we document how to do that? Oh, you say - that's
 because the idea can be misused. "Let these dangerous kooks publish
 their innovations elsewhere, so we don't sully the IETF brand".
 Fine, if we do that, I guarantee that new ideas will simply migrate
 out of this forum. Be 

recommendation against publication of draft-cerpa-necp-02.txt

2000-04-06 Thread Keith Moore

I am writing to request that the RFC Editor not publish 
draft-cerpa-necp-02.txt as an RFC in its current form,
for the following reasons:

1. The document repeatedly, and misleadingly, refers to NECP as a 
standard.  I do not believe this is appropriate for a document
which is not on the IETF standards track.  It also refers to
some features as "mandatory" even though it's not clear what
it means for a non-standard to have mandatory features.


2. A primary purpose of the NECP protocol appears to be to 
facilitate the operation of so-called interception proxies.  Such 
proxies violate the Internet Protocol in several ways: 

(1) they redirect traffic to a destination other than the one 
specified in the IP header, 

(2) they impersonate other IP hosts by using those hosts' IP addresses 
as source addresses in traffic they generate,

(3) for some interception proxies, traffic which is passed on to the 
destination host is modified in transit, and any packet-level
checksums are regenerated.

IP allows for the network to delay, drop, or duplicate IP packets,
as part of a best effort to route them to their intended destination.
But it does not allow the above practices.

This document implicitly treats such behavior as legitimate even
though it violates the primary standard on which all Internet
interoperability depends.
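To make point (3) concrete: a proxy that alters payload bytes in transit must recompute the standard one's-complement Internet checksum (RFC 1071) over the modified data, or the receiving host will discard the packets as corrupt. A minimal sketch of that computation in Python (the function name and sample header bytes are illustrative, not taken from NECP):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement 16-bit Internet checksum (RFC 1071).

    A network element that modifies a packet in transit must
    recompute this value over the altered bytes, or the receiver
    will reject the packet as corrupt.
    """
    if len(data) % 2:                # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back into low 16 bits
    return (~total) & 0xFFFF

# Example: a 20-byte IPv4 header with its checksum field zeroed.
header = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
checksum = internet_checksum(header)               # 0xB1E6 for this header
```

Because the one's-complement sum is order-independent, a packet verifies when the sum over the data plus the checksum word folds to zero; this is the check a receiver performs, and the reason a modifying proxy cannot simply forward the original checksum.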


3. Aside from the technical implications of intercepting traffic, 
redirecting it to unintended destinations, or forging traffic from
someone else's IP address - there are also legal, social, moral
and commercial implications of doing so.

In my opinion IETF should not be lending support to such dubious
practices by publishing an RFC which implicitly endorses them,
even though the authors are employed by major research institutions 
and hardware vendors.


4. Furthermore, while any of the above practices might be deemed "morally"
acceptable in limited circumstances (such as when the interception proxy 
is being operated by the same party as the one which operates the host being 
impersonated) in general these are very dangerous.  There have been numerous 
cases where network elements employing practices similar to the above have 
been demonstrated to harm interoperability.  (e.g. there is a widely-used
SMTP firewall product which breaks SMTP extension negotiation, and a 
traffic shaping product was recently found to corrupt data in TCP streams
generated by certain kinds of hosts) 

This document contains language touting the benefits of NECP but very 
little language describing the danger of using the above techniques which 
NECP was designed to support.   Where the document does mention the 
problems, it is misleading or incomplete.  For example, the Introduction says 

   However, it [an interception proxy] can cause problems: users
   have no way to go directly to origin servers, as may be required in
   some cases (e.g., servers that authenticate using a client's source
   IP address).  The proxy has a high-level understanding of the
   application protocol; it can detect these cases and decide which
   flows should be cut through to origin servers.  

The latter sentence is a false assertion - even though the proxy has
a high level understanding of the protocol, the proxy is not generally
able to determine when cut-through is required.   For example, the
service being impersonated by the interception proxy may have uses for
the client's source address which are outside of the protocol being
intercepted and of which the proxy cannot be aware.
Such uses may be active (in that they involve attempts to establish
other traffic between the origin server and the client, or between the
client and other hosts on the network), passive (in which the origin 
server uses the client's IP address without attempting to communicate
with it), or even deferred (in which an attempt is made to communicate
with the client's IP address at a later time).  In addition, the *user* 
may have a requirement for his client to talk directly to an origin server, 
or the content provider may have a requirement for the origin server to 
talk directly to a client, simply because they expect communications 
integrity.  By its very nature an interception proxy ignores the 
requirements of the user and/or the content provider.

The document refers to two other documents which it says further
describe the dangers of interception proxies: "Internet Web Replication 
and Caching Taxonomy" [reference 3], and "Known HTTP Proxy/Caching Problems".
Both of these appear to be works in progress, and the latter document does 
not even have a reference.  Until such documents are published, or 
at least until they are deemed ready for publication by their creators, 
it is impossible to evaluate whether they contain sufficient and
accurate information to inform readers of the NECP document about
the dangers of interception proxies.


5. While in one sense NECP is an attempt to alleviate some of the harm done by 

Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-06 Thread Karl Auerbach


 I am writing to request that the RFC Editor not publish 
 draft-cerpa-necp-02.txt as an RFC in its current form,
 for the following reasons:
 
 2. A primary purpose of the NECP protocol appears to be to 
 facilitate the operation of so-called interception proxies.  Such 
 proxies violate the Internet Protocol in several ways: 
 
 3. Aside from the technical implications of intercepting traffic,
 redirecting it to unintended destinations, or forging traffic from
 someone else's IP address - there are also legal, social, moral and
 commercial implications of doing so.

You will need to be far more specific here.  I see absolutely nothing that
is not legal, is not social, or is not moral.  I do see commercial
implications, but whether those are "good" or "bad" is not a technical
judgement.
 
 In my opinion IETF should not be lending support to such dubious
 practices by publishing an RFC which implicitly endorses them, even
 though the authors are employed by major research institutions and
 hardware vendors.

I take the contrary position.  The IETF ought to be encouraging the
documentation of *all* practices on the net.  It is far better that they
are documented where people can find useful information when they see this
kind of packet activity rather than have them known only to a few
cognoscenti.

May I suggest that one treat this in its classical sense - as a Request
for Comments and that those who have technical objections or technical
enhancements publish those comments in an additional document rather than
try to suppress the original one.

Having a document trail that shows what paths and ideas have been found
wanting is nearly as important as having a trail that shows what paths
have been found useful.

--karl--