Re: Warning - risk of duty free stuff being confiscated on the way to Prague

2007-03-11 Thread Eric A. Hall

On 3/11/2007 11:55 AM, Marshall Eubanks wrote:
 I know for a fact (because it happened to me Friday) that
 liquids are confiscated on the security check required to transit at
 London Heathrow. 100 milliliters is the limit, and this includes
 duty-free purchased elsewhere en route.
 
 Now, of course, you can _buy_ duty free at Heathrow and carry it on  
 the plane. How convenient.

clearly a certificate-trust problem

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: draft-santesson-tls-ume Last Call comment

2006-03-07 Thread Eric A. Hall

On 3/7/2006 8:16 PM, Mark Andrews wrote:

   * Hostnames that are 254 and 255 characters long cannot be
   expressed in the DNS.

Actually hostnames are technically defined with a maximum of 63 characters
in total [RFC1123], and there have been some implementations of /etc/hosts
that could not even do that (hence the rule).

But even ignoring that rule (which you shouldn't, if the idea is to have a
meaningful data-type), there is also a maximum length limit inherent in
SMTP's commands which makes the maximum practical mail-domain somewhat
smaller than the DNS limit. For example, SMTP only requires a maximum
mailbox of 254 octets, but that includes the localpart and @ separator. The
relationship between these different limits is undefined within the SMTP
specs, but it's there if you know about the inheritance.

When it is all said and done, the maximum practical mailbox address is 63
chars for the localpart, the @ separator, and 63 chars for the domain-part.
Anything beyond that runs afoul of one or more standards.
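
To put numbers on that, here is a minimal sketch of the length check (the
63- and 254-octet figures are the ones discussed above, not a fresh
reading of the RFCs):

    MAX_LABEL = 63      # single-label hostname limit discussed above
    MAX_MAILBOX = 254   # SMTP mailbox limit, localpart and '@' included

    def mailbox_problems(mailbox):
        """Return a list of limit violations for an ASCII mailbox."""
        problems = []
        if len(mailbox) > MAX_MAILBOX:
            problems.append(f"mailbox exceeds {MAX_MAILBOX} octets")
        localpart, sep, domain = mailbox.rpartition("@")
        if not sep:
            return problems + ["no '@' separator"]
        if len(localpart) > MAX_LABEL:
            problems.append(f"localpart exceeds {MAX_LABEL} octets")
        if any(len(label) > MAX_LABEL for label in domain.split(".")):
            problems.append(f"a domain label exceeds {MAX_LABEL} octets")
        return problems

    print(mailbox_problems("a" * 63 + "@" + "b" * 63))    # [] -- within every limit
    print(mailbox_problems("a" * 200 + "@" + "b" * 200))  # trips all three checks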

/pedantry

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: draft-santesson-tls-ume Last Call comment

2006-03-07 Thread Eric A. Hall

On 3/7/2006 10:23 PM, Mark Andrews wrote:
On 3/7/2006 8:16 PM, Mark Andrews wrote:

   63 is not a maximum.  It is a minimum that must be supported.

Right, sorry. Same results.


-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: What's an experiment?

2006-02-16 Thread Eric A. Hall

On 2/15/2006 12:19 PM, Joe Touch wrote:
 There are two different potential intentions to 'Experimental':
 
 1. to conduct an experiment, as Eliot notes below, i.e.,
to gain experience that a protocol 'does good' 'in the wild'
 
 2. to gain experience that a protocol does no harm 'in the wild'

There is a third option, which is "we are not sure how this will work, or
if it will even work at all, really, but we are confident enough in its
design stability to release it in limited form for further study."

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Last Call: 'Linklocal Multicast Name Resolution (LLMNR)' to Proposed Standard

2005-08-31 Thread Eric A. Hall

On 8/30/2005 2:18 PM, Stuart Cheshire wrote:

 Well, in case 1 (mDNS), no, because it won't return a useful result, so 
 why keep doing it?
 
 In case 3 (conventional DNS), no, because it won't return a useful 
 result, so why keep doing it?
 
 In case 2 (LLMNR) the answer is yes, all the time, if you chose to call 
 your printer isoc.frog, which LLMNR allows and encourages.

What part of the specification requires LLMNR names to be processed
through Internet DNS?

There are lots of similar-looking naming services out there (DNS, NIS,
NetBIOS, AppleTalk, ...), and there is a significant amount of experience
in keeping the names and resolution paths separate. Just because an LLMNR
name looks like a DNS name doesn't make it one (just as an AppleTalk
name that looks like a DNS name doesn't make it one).

People who mix the resolution paths (and/or the caches) deserve what they
get. Unless you can point out where this is mandatory, I'd say the correct
response is "don't do that".
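
As a minimal sketch of what keeping the paths separate means in practice
(the classifier below is a hypothetical local policy, not anything the
specification mandates):

    def pick_resolver(name, on_adhoc_link):
        # Dispatch each lookup to exactly one naming service, rather than
        # trying every service for every name.
        if "." not in name:              # single-label: never Internet DNS
            return "llmnr" if on_adhoc_link else "hosts-file"
        if name.endswith(".local"):      # ad-hoc suffix by local convention
            return "llmnr"
        return "dns"                     # everything else is DNS, and only DNS

    for n in ("printer", "isoc.frog", "www.ietf.org"):
        print(n, "->", pick_resolver(n, on_adhoc_link=True))

The details of the policy are arguable; the point is that one name maps to
one service, never to a fallback chain.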

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Last Call: 'Linklocal Multicast Name Resolution (LLMNR) ' to Proposed Standard

2005-08-31 Thread Eric A. Hall

On 8/31/2005 12:36 PM, Ned Freed wrote:

 Section 2 states:

that's unfortunate

LLMNR clients need to make a distinction between the different kinds of
networks and then process the names accordingly.

The whole argument behind the original distinction between LLMNR and DNS
is that ad-hoc names aren't self-authoritative, the namespaces are
therefore different, and so forth. Having clients try both namespaces is
really missing the point.

 Another way of fixing the overlapping namespace problem would be to  require
 LLMNR to be truly disjoint from the DNS: Remove all this stuff about using the
 DNS as the primary service when available. However, my guess is that such a
 service would not be terribly interesting, and that much of the supposed value
 of LLMNR comes from its lashup to the DNS.

I was under the impression that LLMNR's value comes from ad-hoc networks
and naming. Surely that's the right place to flag the distinction, too.

NB: I consider .local to be a crutch that's needed to make a hack work,
and the right place to deal with these problems is at the interface map,
not inside the namespace. Never mind the fact that .local is already
overloaded with massive historical and experimental context, too.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: IETF servers aren't for testing

2005-08-05 Thread Eric A. Hall

On 8/5/2005 5:19 AM, Bill Sommerfeld wrote:

 While the availability and stability of the IETF infrastructure is
 important, an engineering organization which declines to eat its own
 dog food sends a message about its suitability for widespread
 deployment.

The phrase is an anecdotal reference, where the executives of a dog food
company said they knew the product tasted good because they tried it
themselves. It's a good analogy.

Getting hung up on the dog food part of the analogy misses the whole
point of the anecdote.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: SRV continued

2005-07-20 Thread Eric A. Hall

On 7/20/2005 9:34 AM, Hallam-Baker, Phillip wrote:

 If I have [EMAIL PROTECTED] there is only going to be one set of
 servers for incoming mail for that address, the place for POP3 and
 IMAP4 services is obvious.
 
 Traditionally it was possible to choose the outgoing mail service. That
 option is effectively closing due to spam control measures. But even if
 the email service has a choice of outbound email relay it is a limited
 choice and this does not affect the means used to advertise each relay.

Finding your retrieval servers is harder still. You may get lucky with
somebody's local submission servers, or you may have different internal vs
external submission servers on your home network that you access based on
where you are currently located, but your retrieval servers don't ever
change based on location information. This is partly due to the way that
repositories are generally bound to the disks they are stored on, but also
partly due to the way that retrieval clients preserve state across session
lines (such as remembering which messages have already been seen).

Worse is that your retrieval server's location is specific to your account
within a domain, and not tied to the domain itself. I mean, [EMAIL PROTECTED]
could have their mailstore in Nashville or any other city, so the user@
part is really the lookup key that matters.

http://www.ehsco.com/misc/I-Ds/draft-hall-email-srv-02.txt tries to deal
with this by pushing some of the algorithm to the server administrator,
although that's not something I'm really happy with, since it allows
for different responses in different cases, which isn't what protocols are
supposed to do. An artifact of this vagueness is that the service is
declared as only being usable for initial configuration and not really
useful for ongoing configuration.

Making it work would require generating queries for the servers associated
with localpart.domain.dom (i.e., your email address), which is fraught with
problems, large and small. One of the bigger problems is that I'm opposed
to storing user data in the DNS on principle--DNS is a miserable directory
and trying to make it one would make it a miserable naming service. But
the IETF has not produced a workable distributed directory service. So
we've got a real dilemma here.

About the best we can do at the moment is to make it vague and limit the
lookup role accordingly.
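
For illustration, a hedged sketch of the domain-level half of such a
lookup, using the generic RFC 2782 owner-name convention and the
third-party dnspython package (the draft's actual labels and algorithm
may differ):

    import dns.resolver  # third-party dnspython package

    def retrieval_servers(domain):
        # Ask for SRV records advertising a retrieval service under the
        # mail domain, sorted by priority and then descending weight.
        try:
            answers = dns.resolver.resolve("_imap._tcp." + domain, "SRV")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []  # nothing advertised; fall back to manual config
        return sorted((r.priority, -r.weight, str(r.target).rstrip("."), r.port)
                      for r in answers)

    print(retrieval_servers("example.com"))

Note this only answers the per-domain question; the per-user question --
which server holds *this* mailbox -- is exactly the part that has to stay
vague.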

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Why people buy NATs

2004-11-22 Thread Eric A. Hall

On 11/22/2004 11:33 AM, Fred Baker wrote:

 So I will argue that the value of (2) is ephemeral. It is not an objective, 
 it is an implementation, and in an IPv6 world you would implement in a 
 slightly different fashion. 

That's right--the device would get a range (or block) of addresses and
then either do a 6-to-4 gateway conversion on those addresses (still using
192.168.*.*) or assign v6 directly (if that option had been enabled) but
would still use DHCP for those assignments. Server-specific holes in the
incoming connection table would still have to be managed, with a default
deny policy. Very similar but still different.

One potential technical hurdle here is the way that the device discovers
that a range/block of addresses is available to it. Some kind of DHCP
sub-lease, or maybe a collection of options (is it a range of addresses or
an actual subnet? how big is it, and does that include net/bcast
addresses?), is going to be required. So it would obviously be useful for
Linksys et al to make sure that the specs are there to help them continue
providing the same kind of high-value low-management experience. This is
the kind of cross-industry participation I'm talking about needing.
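
The arithmetic half of this is at least easy to sketch; assuming a
hypothetical /56 delegation, the device could carve out LAN subnets like
so:

    import ipaddress

    delegated = ipaddress.ip_network("2001:db8:ab00::/56")  # example prefix
    lans = list(delegated.subnets(new_prefix=64))

    print(len(lans), "usable /64 LAN subnets")   # 256
    print("first:", lans[0], " last:", lans[-1])

The hard part is everything around this: learning the prefix, its size,
and its lifetime from the provider in the first place.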

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Why people buy NATs

2004-11-22 Thread Eric A. Hall

On 11/22/2004 4:04 PM, Ralph Droms wrote:

 DHCPv6 PD (prefix delegation; RFC 3633) to obtain a prefix

Yeah, that's what I was thinking about. So now we just need implementors
to provide it and service providers to offer it before declaring the
problem solved.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: How the IPnG effort was started

2004-11-21 Thread Eric A. Hall

On 11/21/2004 2:48 PM, Christian de Larrinaga wrote:

 cdel This is difficult to confirm (or deny) as current research into
 why users buy NAT's is not clear

When you say "buy" you are adding another layer here. Most small devices
come with NAT technology built in, so there are lots of reasons why people
use NATs that are separate from any kind of purchasing decision. On the
other hand, if folks did have to shell out money for NAT versus real
addresses, then adoption rates would probably be somewhat different than
they are now.

For one thing, it is free for SOHO users (NAT is bundled with all their
devices already) and they would have to pay extra for multiple addresses
from their low-end service provider. This won't change until IPv6 is
equally supported by the devices and the providers, and until the providers
are willing to hand out small blocks of v6 addresses at no extra cost.
That probably won't happen without a coordinated push between industry,
governments, and investors.

Small businesses can't get portable IPv6 or IPv4 addresses, so there is no
practical difference between using NAT and not. If they want numbering
independence at the local level, the low cost of SOHO NAT is the clear
winner over the higher cost of specialty gear and management for IPv6.
This won't change until the routing table can handle a magnitude more
routes (at a minimum).

Large orgs have multiple tiers of management and expense sources. While
they may be using the high-end gear and have the propeller-heads on staff
already for doing IPv6, the departmental manager doesn't want to hear that
he can't use that $30 print server on his network because of some kind of
technical double-talk. This is basically the SOHO argument multiplied.

There are doubtless a million variations on this (markets are elections,
and people vote according to their priorities and values, which are
somewhat unique to each), but in general it boils down to a lack of
availability. IPv6 has to reach the same kind of unavoidable prevalence as
free NAT technology has today in order for people to start using it in the
same volume, and this becomes increasingly true as you move towards the
network edge.

My feeling is that there has to be a group effort to change this, and it
needs across-the-board cooperation. VCs need to be shown that
bidirectional reachability is in their ultimate interest, in that it opens
the door for new technologies and products. Small carriers need to be
convinced that providing lots of addresses to each user won't bankrupt
them or make them non-competitive (probably the place where government has
the most to contribute in this whole thing is underwriting loans against
IPv6 equipment in SOHO ISPs). Edge gear that provides NAT technology also
needs to support v6 technology, and so does edge gear that doesn't. It
needs to be a lot easier to get private routable space so that small orgs
aren't implicitly forced to use NATs. Etcetera.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: How the IPnG effort was started

2004-11-18 Thread Eric A. Hall

On 11/17/2004 9:02 PM, Paul Vixie wrote:

 therefore after a middle state of perhaps five more years

How long have folks been predicting ~5yr windows?

Not to diminish your table or anything, but markets don't work in binary,
and the problem has been with access more than anything else. Usually we
see adoption go through big-org to small-org to consumer [according to
price and availability], but as of right now there are still very few
big-orgs interested in it, and almost no early adopters in the small-org
and individual sectors [even though IPv4 has been unavailable to them
for many years now]. Demand is not driving adoption, because there is no
demand. From all evidence, it actually looks like people would prefer to
reinvent IPv4 connectivity as the first choice, no matter how much
friction you put on it with allocation rules and whatnot.

This is not primarily a technology problem; it is more about figuring out
ways to convince ~Linksys that their SOHO products need to support IPv6, and
also convincing ~Comcast to provide the addresses. Why do they not
already? Too much support cost probably, given the limited pool of
early-adopter technologists. Okay then, how do we help get the costs
lower? Rinse and repeat.

Five years would be reasonable if we were actually starting on these kinds
of efforts today. Not starting means that it will always be five years out
of reach.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: How the IPnG effort was started

2004-11-18 Thread Eric A. Hall

On 11/18/2004 12:38 PM, Paul Vixie wrote:

 i am directly aware of latent address space needs that are 50X larger
 than all of ipv4.

Me too, but the sum total of these (both now and immediately foreseeable)
is very small. I mean, I can cite the corner cases too, but what does that
have to do with edge deployment?

 so we can argue as to whether it's 5 years or 3 years or 10 years, and
 we can argue about whether ipv6 is the best possible replacement for
 ipv4, and we can argue about whether ipv6's warts can be fixed or
 whether we'll have to live with them or throw it away and start over.
 but ipv4 is in what the product managers call end of life, and i hope
 we're not arguing about that.

IPv6 is certainly inevitable in some form or another (at a minimum, its
current deployment levels are inevitable), but it's not inevitable
everywhere within a sub-decade window. It's kind of fun to think about
scenarios here (reinventing bang-path routing comes to mind) but I'm
trying to focus on what we ought to be working on to reduce deployment
friction. Granted, road-building isn't what the I* collective is good at
(or at least not since Postel stopped issuing executive fiats) but it
would ultimately be far more productive, I think. I mean, we can try to
fix the problems that folks are having with it (especially including the
non-technical hurdles) or we can argue over whether 3% is better enough
than 2% to qualify as success, the latter of which seems to be the
preference around here.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: A modest proposal for Harald

2004-11-08 Thread Eric A. Hall

On 11/6/2004 3:53 AM, Harald Tveit Alvestrand wrote:

 In IPv6, I see our job as standardizers to make sure the thing we have
 defined is well-defined enough to let it work, and then get the hell
 out of the way.

Pardon me for saying so, but I think that represents the canonical problem
with v6 deployment and some other large-scale efforts; there hasn't been
enough solid leadership promoting the agenda. By way of analogy, the I*
collection of bodies is really good at things like facilitating discussion
about the kinds of laws that we all want to live under, but it is lousy
at doing things like building the interstate highway that actually lifts
the system upwards.

Usually this is good, because usually there are vendors out there that
will do the build-up work, and we can let the market take its own path.
But in the case of IPv6, there is a real dearth of products to choose from
(especially in the SOHO space), and there is a real lack of availability
from ISPs. It's not just impractical for SOHO networks to deploy IPv6, it
is almost foolish to even try given the lack of availability. The right
fix here is to make IPv6 deployment easier. I think the RIRs are falling
down here -- every time a request for a /24 IPv4 block is rejected, the
letter should be accompanied by an offer for a /24 IPv6 block, for
example. Similarly, vendors need to be met with and encouraged to "take
one for the team", so to speak, and sink R&D money into products. Both of
those are areas where the I* has not done well, and where comments like
Harald's above make success less likely. Like I said, such a position is
usually okay, but in this case it's not the appropriate tack.

We're going to see another multicast here if we don't do this kind of
work. Then we'll see another spam problem, where the shortcomings
eventually do bite us in the collective butt because we showed no
leadership until it was too late.

There is still some technical work that needs doing, too. NATs exist
because (1) address space is difficult to get, but also because of (2)
routing table limitations. If BGP can't handle millions of IPv4 /24
blocks, why will millions of IPv6 blocks work better? What do you mean it
won't?
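
Back-of-the-envelope, the scaling question looks like this (the route
counts are illustrative, and the figure for today's global table is
approximate):

    possible_v4_slash24 = 2 ** 24        # ~16.8M potential IPv4 /24 routes
    possible_v6_slash48 = 2 ** (48 - 3)  # /48 sites in 2000::/3: ~3.5e13

    print("{:,} possible IPv4 /24s".format(possible_v4_slash24))
    print("{:,} possible IPv6 /48 site prefixes".format(possible_v6_slash48))
    # Either way, a flat table of per-site routes dwarfs the roughly 150K
    # routes in today's global table; wider addresses don't fix routing.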

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Last Call: 'The APPLICATION/MBOX Media-Type' to Proposed Standard

2004-08-17 Thread Eric A. Hall

On 8/17/2004 2:09 PM, John C Klensin wrote:

 To be clear about this, I think there are three choices which we
 might prefer in descending order:
 
   (1) There is a single canonical wire format in which
   these things are transmitted.

Such a specification would surely dictate a series of message/rfc822
objects. But if we were to require that end-points perform conversion
into a neutral form, we might as well go the whole nickel and just say
"use multipart/digest", because that's where we'd end up after months of
beating on each other.

We'd still be in the same place we're at today, of course, because HTTP
and filesystem transfers aren't going to do mbox-digest conversion any
more than they are going to do EOL conversion or Content-Length calcs.
We'd still have opaque messaging databases that people wanted to transfer,
search, open and otherwise automate, but couldn't.

   (2) The content-type specifies a conceptual form
   (application/mbox) but has _required_ parameters that
   specify the specific form being transmitted.

Global parameters are useless if the parser is intelligent enough to
figure out the message structure independently. Given that such
intelligence is a prerequisite to having even a half-baked parser, the
global parameters are always unnecessary.

Actually, global parameters are more than useless. What if we have a mixed
mbox file, where some messages are untagged BIG5 and others are untagged
8859-1, or some messages have VMS::Mail addresses and others have
MS/Mail addresses, and so forth? The global nature of global parameters
ignores the per-message reality of the mbox structure.

Global parameters can also be harmful if they conflict with reality.

   (3) These might as well be sent with
   application/octet-stream, with optional file name, etc.,
   information.

That's where we are now, and we already know that's not working.




Re: Last Call: 'The APPLICATION/MBOX Media-Type' to Proposed Standard

2004-08-17 Thread Eric A. Hall

On 8/17/2004 4:06 PM, John C Klensin wrote:

 In that context, unless I completely misunderstand what is going
 on here, the ...prerequisite to having a half-baked parser...
 assertion borders on the silly.   Take the example to which Tony
 has been pointing.  Apparently the Solaris version of an mbox
 format is well-documented and based on content length
 information rather than key strings.

... it is well-defined for writing, but AFAIK the number of reliable
clients that depend on the presence of such data is zero.

This is the point, really: given that the database structure is
undefined, and that any number of clients may have accessed the database
via ~NFS or via different mailers on the same local client (Pine and KMail
and ...), a half-baked mailer that wants to reliably read mbox data of any
kind will know better than to rely on Content-Length in isolation, and
will have to validate assumptions and check the clues before doing
anything substantive. This includes checking for EOL syntaxes, but also
includes things like default charsets for untagged data, default domains,
and so forth.
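
Here is a minimal sketch of that validate-before-trusting approach,
reduced to two of the clues mentioned above (real mbox variants are
messier, and the file name is hypothetical):

    def sniff_mbox(raw):
        # Detect the EOL convention first, then count message boundaries
        # by the classic "From " separator line -- never by trusting any
        # single writer's Content-Length metadata.
        eol = b"\r\n" if b"\r\n" in raw[:4096] else b"\n"
        lines = raw.split(eol)
        boundaries = [i for i, ln in enumerate(lines)
                      if ln.startswith(b"From ")]
        return {"eol": eol, "messages": len(boundaries)}

    with open("archive.mbox", "rb") as fh:
        print(sniff_mbox(fh.read()))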

 Given that model, the key to an mbox format isn't the content of
 the blobs, it is the system used to decompose an mbox into a
 blob collection.

That's a function of the content parser and whatever task it is trying to
perform. Importing an mbox database into an IMAP store (as is common with
things like downloadable archives) is going to have several different
considerations and requirements from simple searching and printing (as
with archives generated locally by ~Mozilla), but both of them can be
satisfied through basic sniff tests and local defaults.

Most of the spec work is appropriate and useful and arguably necessary for
*writing* to mbox files, especially if you are appending to an existing
file, but even then it takes few smarts to figure out that you should
write to match the existing content instead of following druidic rules.
Having said that, I'd love to see a canonical authoritative format which
could be referenced for such purposes (not for transfers, but for parsers
to use when they go about generating content).

 Instead, you are making a case that this registration should be
 a family of, e.g.,
   application/mbox-solaris-v5-and-later
   application/mbox-sendmail-v2-v4

I hope not; unrecognized types fall back to application/octet-stream, so
many types that nobody implements won't get us very far either.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Last Call: 'The APPLICATION/MBOX Media-Type' to Proposed Standard

2004-08-13 Thread Eric A. Hall

Keith Moore [EMAIL PROTECTED] wrote on Thu, 12 Aug 2004 08:45:24 -0400:

 this is an application for a media-type. it's not trying to define the
 mbox format, it's just trying to define a label for the mbox format
 that can be used in contexts where MIME media-type labels are required.

That's correct; this is a tag definition, not a format specification.

Others should note that RFC2048 is designed to facilitate registrations --
more definitions for common data-types are widely preferred over a
proliferation of x-foo media-types that result from high registration
barriers (read the intro to 2048 if you don't believe me) -- and that's
the limited objective of this proposal.

There are a few places where this tag is necessary or useful. Online
downloadable archives of mailing lists would be easier to import, search
and otherwise manipulate if a media-type were defined which allowed a
transfer agent to bridge the data to a local content agent (most of the
IETF archives are in mbox format, and I'm aware that certain cellular
email systems allow for downloadable SMS/email logs via mbox, not to
mention the proliferation of web/local mail systems that use mbox files
which are often transferred back and forth). Local actions such as
importing or opening a local mbox file would also be easier to perform if
there was a media-type that the OS could use (I have many mbox files of
project-specific archives and would appreciate better OS/messaging
integration with these files [such as opening and searching folders the
same as I can with message/rfc822 file objects], but need an OS-level
media-type to get there). And so forth.

A definitive authoritative specification for all variations of the mbox
database format is explicitly not the objective, for several reasons. For
one thing, such a definition is outside the IETF's purview, the same as a
definition for Outlook or Eudora or other vendor/platform-centric database
formats would be. Second, if the IETF *did* try to write such a
definition, it would quickly settle on the fact that multipart/digest
already exists for such purposes, and that defining an mbox format would
be redundant. If development proceeded beyond that (unlikely at best),
then the spec would necessarily conform with other messaging definitions,
and would end up declaring that mbox files SHOULD/MUST be compliant with
RFC2822 messages at a minimum, which ignores the reality of eight-bit
content, long lines, untagged charsets, EOL variations, etc. Other folks
can start a bof/wg and go through this process if they want to, but
frankly I'm happy just registering the media-type (the purpose of the
process, and the objective of this I-D).

On top of all that, mbox parsers need to be aware of the limitations and
variations on the content, and this cannot be adequately handled with a
media-type definition.

Suggestions about parameters have come up before (John Klensin suggested it
to me a couple of months ago). Unfortunately, these kinds of helper tags
attempt to define content rules rather than transfer rules, and therefore
represent a non-trivial layer violation. They are analogous to using a
version= tag for app/postscript and relying on that meta-information
instead of embedded clue data. Obviously, content agents should be aware
of the content formatting rules. [what happens when the helper meta-tags
are lost? should the content agent NOT look at the content to make a
determination? that's where the logic belongs in the first place, so
putting it into the transfer layer is not only irrelevant it is possibly
harmful.] This data should not be overflowed to the transfer agent except
where it affects the transfer process proper, which is not the case with
any of the suggested tags.

 however I don't think this should be up for PS, as the standards track
 is intended for protocols that can be subject to interoperability tests,
 and this is just a label. Informational seems more appropriate to me.

I'll go along with that. 2048 only requires IESG approval, but does not
require standards-track, and dropping to informational gets us there.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Last Call: 'The APPLICATION/MBOX Media-Type' to Proposed Standard

2004-08-12 Thread Eric A. Hall

On 8/12/2004 5:18 PM, Tony Hansen wrote:

 The information about the mbox format being anecdotally defined is 
 incorrect. The mbox format has traditionally been documented in the 
 binmail(1) or mail.local(8) man pages (BSD UNIX derivatives) or mail(1)
  man page (UNIX System 3/5/III/V derivatives).

I checked each of those and none of them seem to adequately describe the
message or database format.

 The most complete description of an mbox format can be seen in the man
 page from any UNIX System Vr4 derived system, such as Solaris.

 The description found in http://qmail.org./man/man5/mbox.html, referred
 to at the end of Appendix A, does cover these variations fairly 
 succinctly. However Appendix A doesn't cover the variations properly.

Do you have a specific URL to a specific man page that you think would be
appropriate and authoritative?

I spent a while looking (see also http://makeashorterlink.com/?X61D16909
for example), and couldn't find anything that seemed to be authoritative
enough to reference.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Last Call: 'The APPLICATION/MBOX Media-Type' to Proposed Standard

2004-08-11 Thread Eric A. Hall

On 8/10/2004 10:59 PM, Philip Guenther wrote:

 If there are no defined semantics for the content of an application/mbox
 part, how does the type differ from application/octect-stream?

It provides an identifier for the content, so that transfer agents can
perform specific tasks against the data (such as importing or searching a
remote mailstore, or handing the data to an agent that knows what to do
with it). The agent still needs to deal with content-specific issues like
determining the EOL markers, applying default domains to relative
addresses, and so forth. That's a pretty common separation of powers;
application/postscript doesn't relieve the system from needing a
postscript interpreter, and we leave things like ~version tags for the
content agent to worry about instead of the transfer agent.

 [regarding creating a spec for a mailbox file format]
 
I'd like to see one, and I'd like to see whatever *NIX consortium is
responsible for such things get together and define one.
 
 At that point, would application/mbox be updated to refer to said spec,
 rendering non-compliant some chunk of the previous uses, or would a new
 content-type be specified?

Given that the current proposal specifies minimal formatting (essentially
being limited to the likely presence of some kind of From_ line), I'd
think that a reasonably authoritative spec could be referenced in an
update to this proposal. It would depend in large part on the depth and
comprehensiveness of the specification, I'd imagine.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Last Call: 'The APPLICATION/MBOX Media-Type' to Proposed Standard

2004-08-10 Thread Eric A. Hall

  * Since mbox files are text files (assuming that any binary messages
in the mailbox are themselves encoded) and can be read sensibly
with the naked eye, the content type should be text/* not
application/*.  This will also remove ambiguity surrounding line
endings.

Automatic EOL conversion destroys message objects. There is no assumption
that messages are conformant with 2822, nor can there be any such assumption.

  * Since an mbox is actually an aggregate type - a way of encoding a
set of RFC822 messages - transfer encodings other than 7bit and
8bit should be discouraged.  The spec should probably deprecate
them in most cases.

I don't see why. It may be efficient to append an app/mbox database to a
[2822-compliant] email message and to encode it as QP, or it may be
necessary to encode it as base64 if it contains long lines of raw 8bit
data.
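
For instance, a sketch using Python's standard email package, which picks
the transfer encoding itself (base64, for binary application parts) when
the mbox data is attached; the file name is hypothetical:

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "list archive"
    msg.set_content("mbox archive attached")
    with open("archive.mbox", "rb") as fh:
        msg.add_attachment(fh.read(), maintype="application",
                           subtype="mbox", filename="archive.mbox")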

  * The Proposed Standard should either include or refer to a specific
mbox format.  The fact that there are variant implementations
doesn't mean that the Proposed Standard should hesitate to declare
those broken (at least, broken when a file is sent as text/mbox).
Those variant implementations are not wholly interoperable anyway,
and in order to write software which deals correctly with text/mbox
it will be necessary for the spec to say what the format is
supposed to be !

Well, this would require that every local application be changed to suit
the needs of the transfer format, which is not reasonable. Instead, the
goal here is to define a transfer identifier which says "the data
probably looks like XYZ".

  * The format specified should be that described in Rahul Dhesi's
posting to comp.mail.misc in 1996, [EMAIL PROTECTED].

The message with that message-ID does not define a format.

  * If an mbox file contains messages with unencoded binary data, the
file is difficult to sensibly process on a machine with non-UN*X
line-endings, because of the bare CRs in the binary data.  (Bare
LFs are fine and look just like line endings, with From_-escaping
and all.)  As far as I can tell there is then no non-lossy
representation of the file which allows sensible local processing
by non-mbox-specific tools.  This issue should be resolved (or at
least acknowledged).


-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



ftp license

2004-05-16 Thread Eric A. Hall

ftp://www1.ietf.org/

*****************************************************************

  NOTICE:  The archive maintainer is running an expired
   evaluation copy of the FTP server software.
   Please encourage the maintainer to license the
   software now at http://www.ncftp.com/order/ !

*****************************************************************

Please wait 23 seconds.
Wasn't that annoying?  Tell your site administrators that the delay
goes away when they license the software.





Re: Not sure if this is the right place for this

2004-05-11 Thread Eric A. Hall

On 5/10/2004 10:31 AM, Paul Hoffman / VPNC wrote:

 At 9:38 AM -0500 5/10/04, Eric A. Hall wrote:

 Using an encrypted port just means an attack can only produce 
 failure, rather than inducing fallback.
 
 Unless that's wrong for some reason, I'd say that a secure ports 
 policy actually is more secure.
 
 In many cases, a client for a secure ports policy protocol will fall 
 back to the insecure port instead of telling the user you can't 
 communicate. That's not true for HTTPS, but it is true for secure POP,
  secure SMTP, and so on.

This is a fair enough point, and I should have accounted for it, but I
don't think its absence weakens the merits of the first point any.

I'm not even sure they are similar arguments. I mean, the argument against
SSL is that *if* an SSL connection is blocked, and *if* an alternative
clear channel exists, and *if* that channel accepts clear-text logins, and
*if* the client falls back through all of that, then the session will be
exposed. I'll grant that it's easy enough to interfere with somebody
establishing a session, and that there's a measurable percentage of
services that maintain clear channel alternatives, and that PLAIN/LOGIN
are still the only things that work everywhere, and that there are
probably clients which fall back invisibly. I can also argue some of those
points to some degree -- that people who have set up SSL versions are
likely to have eliminated access to the clear channel, for example -- so
instead of any of those things being certainties, we should agree that
this is a question of overlapping probabilities.

On the other hand, STARTTLS *requires* a clear channel that the client
MUST *already* be using. So whereas the attack on SSL *might* succeed in
putting the client in touch with an unencrypted service, TLS is
*guaranteed* to be using such a service already anyway. The only question
is whether or not a decipherable login can be used, which is a probability
mask that also applies to the SSL scenario.

Given the collection of probabilities, therefore, starting with an SSL
channel and refusing connections to a backup clear channel eliminates most
of the probability from the MitM attack model. Conversely, those
attributes are prerequisites to the very existence of TLS. Ergo, TLS is
weaker against this particular vector, by design.

I don't think it really matters given the other problems (see below).

 A man-in-the-middle can more easily block the secure port than he/she 
 can elide the STARTTLS message in the client's start-up.

That's true.

On the other hand, channeling clients through a proxy is easy, especially
if they are on a foreign network. The client can be tricked into using a
different host via DNS attacks, or can simply be routed through proxies,
NATs, firewalls, or any number of 'transparent' proxies that can be easily
deployed. STARTTLS can be disabled by any of these, using a two-bit attack
(literally: borrow one bit from the T and hand it to the S, causing the
service to be advertised as TSARTTLS -- I've seen worse than that from
unintentional bugs and intentional features alike).
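
To make the attack concrete, a toy sketch of the rewrite (strings only,
no real sockets):

    def mitm_filter(ehlo_response):
        # Corrupt the capability token instead of deleting it, so the
        # response length never changes -- the "two-bit attack" above.
        return ehlo_response.replace("250-STARTTLS", "250-TSARTTLS")

    server = ("250-mail.example.com\r\n"
              "250-AUTH PLAIN LOGIN\r\n"
              "250-STARTTLS\r\n"
              "250 OK\r\n")
    if "STARTTLS" not in mitm_filter(server):
        print("client: TLS not offered; a lax client continues in cleartext")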

 Any client that is willing to back down to non-secure mode is 
 susceptible to a MITM attack, regardless of the protocol.

agreed on that


-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Not sure if this is the right place for this

2004-05-10 Thread Eric A. Hall

On 5/10/2004 3:02 AM, RL 'Bob' Morgan wrote:

 So a secure ports only policy has very little to do with security and
 very much to do with organizational power relationships, and making
 your computing environment dysfunctional.

Somebody check my math on this please, but it seems to me that the whole
STARTTLS approach is susceptible to a specific attack which the secure
socket model is not.

Specifically, a man-in-the-middle can blank out the STARTTLS feature
advertisement, and thus make the client believe that TLS is not available.
For example:

  server-A                 MitM                  client-C
     |  250-DSN              |  250-DSN             |
     +--- 250-AUTH --------->+--- 250-AUTH -------->|
     |    250-STARTTLS       |    250 ok [...pad...]|
     |    250 ok             |                      |

The client, seeing that TLS is not available, dumbs down to cleartext.
Most clients would probably do that invisibly without even barking at the
user, or at least not in a way that most users would notice.

Using an encrypted port just means an attack can only produce failure,
rather than inducing fallback.

Unless that's wrong for some reason, I'd say that a secure ports policy
actually is more secure.
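
As a minimal sketch of the fail-closed policy this implies on the client
side (smtplib and the conventional submission port are assumed; the point
is the refusal to fall back, not the specific protocol):

    import smtplib

    def submit_session(host):
        # Fail closed: an attacker can block the session, but a stripped
        # STARTTLS advertisement cannot silently downgrade it.
        s = smtplib.SMTP(host, 587, timeout=10)
        s.ehlo()
        if not s.has_extn("starttls"):
            s.quit()
            raise RuntimeError("no STARTTLS offered; refusing cleartext")
        s.starttls()
        s.ehlo()  # re-issue EHLO over the now-encrypted channel
        return s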




Re: Principles of Spam-abatement

2004-03-17 Thread Eric A. Hall

On 3/17/2004 9:33 AM, Paul Vixie wrote:

 identities without history will be a dime a dozen, or cheaper.
 spammers with no history could trample your privacy all day long if you
 allowed it.
 
 accepting incoming communication from someone the world has no hooks
 into is off the table.

Not applicable to sales@ or emergency@ type mailboxes.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Principles of Spam-abatement

2004-03-17 Thread Eric A. Hall

On 3/17/2004 10:47 AM, Ed Gerck wrote:

 Eric A. Hall wrote:

Not applicable to sales@ or emergency@ type mailboxes.
 
 Why? Should someone arrive at your Sales or Emergency window
 in your office, naked, what would you do? 

uh, public nudity is (mostly) criminal, so not a good analogy, although
comparisons to a "no shirt, no shoes, no service" policy statement would
be getting there.

A better analogy is with new checking accounts. Many places won't accept
checks numbered below 1000, since they indicate that the account has not
established a track record. Other places will accept the checks after
verification, other places will accept them with thumbprint or some other
identifier. In all these cases, the organization is able to determine its
risk limits and act accordingly.

I'm not going to get sucked into this endless debate, but it is equally
tyrannical to require everyone to use some kind of hard trust as it is to
require everyone to use no trust (what we are moving away from, in blobbish
fashion and pace). There must be consideration for exceptions; the
swimming pool snack bar probably cannot enforce a "no shirt..." policy.

Property rights (of which I am a big advocate) work because they can be
selected and enforced at the owner's scale; people can put up "no
trespassing" or "no solicitation" or "no hunting" or "no shirt..." signs
to advertise their chosen policies. The same kind of mechanism is needed
for property protection to work in networking as well.

 Note that this is nothing new. We already do this with IP numbers.
 If you send me a packet with a non-verifiable source IP there 
 will be no communication possible. Why should it be different 
 with email addresses?

Verification is different from trust. My position is that you need to be
able to validate and verify before you can successfully apply any kind of
trust (otherwise the trust is meaningless). Paul's message that I replied
to was specifically describing a minimum threshold of trust (it was akin
to the "no checks below #1000" position).

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: move to second stage, Re: Principles of Spam-abatement

2004-03-16 Thread Eric A. Hall

On 3/16/2004 3:41 PM, Yakov Shafranovich wrote:

 How would introducing trust help with the spam problem? Would the cost 
 of doing so perhaps would be so prohibitive that we will not be able to 
 do so? Is it really possible to introduce trust that will actually work?

Trust is a continuum, like everything else related to security.

Different people will have different levels of trust; having a marketplace
of trust brokers -- each of whom provides different levels and strengths
based on different factors -- is appropriate. Some people and/or services
will require notarization-based trust; others will be happy knowing that
blacklist-dujour.org doesn't think the sender is scum.

I don't see what cost has to do with it. The IETF only needs to provide
standardized mechanisms for negotiating trust between end-points. Leave
the brokerage functions (and the implementation costs) to the service
providers who want to enter the market.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: SUMMARY: Processing of expired Internet-Drafts

2004-01-29 Thread Eric A. Hall

On 1/28/2004 8:15 PM, Harald Tveit Alvestrand wrote:

 Conclusions, all mine:
 
 - Documenting current procedures is good.
 - We won't expire tombstones. They're not a big enough problem yet.
 - We'll think about naming tombstones something else than the exact
   draft name (for instance draft-whatever-version-nn-expired.txt???)
 - We'll note the issue of referencing names without the version number
   as input for thinking about overhauling the whole I-D system. But
   that won't happen very quickly - it mostly works.
 
 Seems to make sense?

How about using the draft name without a version number as a placeholder?
That placeholder file can either reference the current version, or it can
contain the tombstone text. For example, draft-whatever.txt can either
contain a pointer to draft-whatever-nn.txt or can contain text that the
last version of that I-D was -nn but has since expired.

That eliminates naming collisions, allows for mirroring based on the
filedate alone, and provides a running reference to the latest version
which can either be fetched directly (if active) or can be retrieved from
an archival system (if expired). Other useful information could also be
provided in the placeholder, such as referencing the I-D's current
progress through the channels, referencing any RFC which may have been
published, and so forth.
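
A minimal sketch of the placeholder behavior described above (the paths
and wording are invented for illustration):

    from pathlib import Path

    def write_placeholder(name, latest, expired, root=Path(".")):
        # The unversioned file either points at the live version or
        # carries the tombstone text; either way it never collides with
        # a real versioned draft file.
        if expired:
            body = "The last version of %s was -%02d; it has expired.\n" % (name, latest)
        else:
            body = "The current version is %s-%02d.txt\n" % (name, latest)
        (root / (name + ".txt")).write_text(body)

    write_placeholder("draft-whatever", 3, expired=True)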

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: visa requirements (US citizens)

2004-01-28 Thread Eric A. Hall

On 1/28/2004 12:46 PM, Kevin C. Almeroth wrote:

 Seems to me pretty clear that a visa is not needed.

These are the future possibilities:

 1) You got the visa, the guard on duty that day deems it unnecessary,
and you curse the effort you spent to get it.

 2) You don't get the visa, the trainee on duty that day deems it
necessary, and you curse the ~30 hour round-trip flight, the
money, and the effort you spent avoiding the visa fetch.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: The IETF Mission

2004-01-19 Thread Eric A. Hall

On 1/19/2004 1:01 PM, Bob Braden wrote:

 Not all important ideas enter the working group process and emerge
 as standards, and the fact that some working group chooses not to
 capture an document does not make it necessarily unworthy of
 preservation.  After all, the technical problems evolve, and our
 solutions need to evolve too; ideas that did not make it at one
 stage may turn out to be important in the future.

Another approach here is to allow for the creation of ad-hoc WGs. That
would provide a cleaner path for tangential documents that don't fit
within existing charters, and would facilitate broader group review of
independent submissions. Speaking for myself, I've written some drafts
that I would have liked to see progress, but they didn't fit in existing
WGs, and the requirements for new WGs were too stringent to pursue. This
latter issue is also one of the reasons behind the relatively low
involvement
from the developing-world community, and exacerbates the feelings of
powerlessness and resentment that end up costing us recovery time fighting
off the well-meaning fools who would make this problem worse by handing
control to organizations with even higher barriers of entry.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: The IETF Mission

2004-01-19 Thread Eric A. Hall

On 1/19/2004 3:47 PM, Vernon Schryver wrote:

 Not all important ideas enter the working group process and emerge 
 as standards, and the fact that some working group chooses not to 
 capture an document does not make it necessarily unworthy of 
 preservation.  ...
 
 Another approach here is to allow for the creation of ad-hoc WGs.
 That would provide a cleaner path for tangential documents that don't
 fit ...
 
 Let's see if I've got all of this straight:

No (although I'm not really sure you tried). Ad-hoc isn't supposed to
mean 'anything goes'; it's about providing an alternative channel to BOFs.

I'm sure there was a time when discouraging potentially frivolous efforts
by imposing a high cost of entry was considered a positive benefit of the
design. It still might be. If the objective is to broaden the scope and
quantity of documents to be published, however, lowering that bar would be
useful. Speaking for myself and others, getting volunteer part-timers from
multiple different continents to the same meeting is a pain, and the
payoff is so far into the future and marginal that it usually isn't worth
the cost involved.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/



Re: Death of the Internet - details at 11

2004-01-13 Thread Eric A. Hall

On 1/12/2004 9:03 PM, Noel Chiappa wrote:

 IPv6's only hope of some modest level of deployment is, as the latter
 part of your message points out, as the substrate for some hot
 application(s). Somehow I doubt anything the IETF does or does not do
 is going to have any affect on whether or not that happens.

Yup, it needs a killer app or feature. Bigger address space was that
feature, but one made moot by NATs.

There are other features (security, etc), but they are end-user oriented
and don't really hold promise for ISPs or the equipment manufacturers (the
simple cost-of-goods factor means that the vendor community has negative
motivation to offer IPv6 in low-end gear). There has to be some kind of
effort to get past these hurdles -- development of a routing service that
makes multi-homing simpler for everybody at a magnitude higher scale, or
convincing vendors that IPv6 in the cheapest gear is in their best
interests, and so forth.

Since the engineers in the IETF tend to hold these kind of marketing
efforts in relatively low regard, the likelihood of any of this changing
is close to nil.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Death of the Internet - details at 11

2004-01-13 Thread Eric A. Hall

On 1/13/2004 1:06 PM, Dan Kolis wrote:

Yup, it needs a killer app or feature. Bigger address space was that
feature, but one made moot by NATs.
 
 VoIP and multimedia via SIP without having a resident network engineer in
 your attic. 
 Enough said?

"In your attic" implies end-user benefit. As I said, I think the hurdles
are in the carrier and equipment space, not the end-user benefits.

Keep in mind that larger address space was beneficial to ISPs, and
not just to end-users.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Death of the Internet - details at 11

2004-01-13 Thread Eric A. Hall

On 1/13/2004 1:24 PM, Joe Touch wrote:

 Eric A. Hall wrote:

 Other than conserving addresses, NAT features are basically poison 
 resold as bread.

Heck, I don't even like the conservation feature.

Misguided allocation policies created a false demand. We would have been
better off to run out of addresses than to let gateways 'rescue' us from
our false shortage.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: ITU takes over?

2003-12-08 Thread Eric A. Hall

On 12/8/2003 5:36 PM, vinton g. cerf wrote:

 The subject of Internet Governance has been a large focus of
 attention, as has been a proposal for creating an international fund to
 promote the creation of information infrastructure in the developing
 world. Internet Governance is a very broad topic including law
 enforcement, intellectual property protection, consumer protection, tax
 policies, and so on.

Yay, another proposal to give control of $resource to $tyrant while having
the west pay for it.

What's the track record on those? Not gonna happen. Move along.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: i18n name badges

2003-11-20 Thread Eric A. Hall

Dave Crocker wrote:

 What I would suggest, if we do this, is writing the person's name
 *twice*: once in their native character set, and once in a form that
 an english-reader can read. The latter is an established interchange
 architecture
 
 I believe that was the intention in the proposal. List names in the 
 same way we always have, AND list them in their native form.
 
 Whether it would helpful to provide a third form -- the ascii encoding 
 of the native form, as it would be seen in an email address header -- 
 is a separate question.

It seems to me that this is really the heart of the matter. There are too
many encodings.

There are going to be *at least* three desirable encodings of a person's
identity -- the 'natural' encoding in the preferred/native charset of the
person's name, some kind of phonetic-ASCII encoding that tells non-natives
how to pronounce the name, and the email/idna encoding[s] that folks would
use to exchange mail.

Of the two 'name' forms, it might be possible for the protocol to choose
just one encoding based on the meeting context. For example, if the
meeting attendees are entirely (or mostly) native speakers of the same
language, then use the natural encoding for the names and just ignore the
phonetic ASCII entirely. If the meeting is heavily internationalized, then
the phonetic form is needed and the natural forms are less useful and can
be dropped. So, I would think that part of the protocol should look at
trying to determine the meeting context, choose the appropriate encoding,
and drop the other (contextually inappropriate) name representation. I
think that the default case for IETF meetings would probably be the
phonetic ASCII representation given the constituency, but the protocol
should still attempt to deliver on the right context even when we know
what the context will be for these particular meetings.

I would also suggest that if the meeting context is determined to be
mostly local, and the name is determined to use a natural encoding, then
any contact (email) data should also be provided in that encoding, since
it will be memorable to people who can understand the name context. If the
meeting context is mixed-international, then an ASCII representation
should be used. Whether or not the ASCII representation is IDNA or a
phonetic (pre-IDNA) address is up to the person who provides that
data. Also, the designers should remember that there will be some cases
where meetings that prefer natural encodings will still have phonetic
contact addresses (pre-IDNA, again).

If done correctly, there would only need to be the two identifiers, and
they would only need to be provided in a single encoding each, determined
by the context of the specific meeting itself.
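
A toy sketch of that selection rule (the threshold and example values are
invented for illustration):

    def badge_fields(native_name, phonetic_name, contact_native,
                     contact_ascii, fraction_same_language):
        # Mostly-local meeting: natural encodings; otherwise phonetic/ASCII.
        if fraction_same_language > 0.9:
            return native_name, contact_native
        return phonetic_name, contact_ascii

    print(badge_fields("高橋 太郎", "Taro Takahashi",
                       "taro@例え.jp", "taro@example.jp", 0.3))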


-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Pretty clear ... SIP

2003-08-24 Thread Eric A. Hall

on 8/24/2003 1:53 AM Rob Austein wrote:

 I've used ASN.1 compiler technology for a project that included an
 H.323-related frob, and ended up wishing I hadn't.  Can you say more
 than 2MB just for the ASN.1 PER encoder/decoder on a box with an 8MB
 flash chip? (For comparison, the embedded Linux kernel on this same
 box was quite a bit less than 1MB.)  I knew you could.  No doubt we
 could have gotten a smaller implementation if we'd been willing to
 throw serious money at it, but H.323 wasn't even the main point of the

quick-n-dirty = bloat is a universal. I've seen rfc822 parsers that were
nearly as large, for example. As far as that goes, moving bloat around
within a layer doesn't seem to matter much in the end, and ASN.1 parser
bloat is just as bad as XML or 822 or [...] parser bloat.

The question of where compression should occur is a good one. The usual
suspects are the application or the data-link layers, but I'm not sure
there are overwhelmingly compelling arguments for either, or that equally
compelling arguments couldn't be made for transport or network layer
compression for that matter.

-- 
Eric A. Hall  http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/






Re: myth of the great transition (was US Defense Department formally adopts IPv6)

2003-06-19 Thread Eric A. Hall

on 6/18/2003 10:44 PM [EMAIL PROTECTED] wrote:

 Melinda Shore [EMAIL PROTECTED] writes:

 None of these things worked real well through firewalls either, which
 is sort of my point.

 If it doesn't work through a firewall, it's because the firewall is
 doing what you ASKED it to do - block certain classes of connections.

 If it doesn't work through a NAT, it's because the NAT is FAILING to do
 what you asked it to do - allow transparent connections from boxes
 behind the NAT.

Exactly. I can tell a firewall to get out of the way (stupid as that may
be in some cases) and the application protocols will function as designed
and expected. I cannot tell a NAT to do that, but instead must first
educate the vendor about the protocol that's being blocked, wait for them
to do their market research and/or prioritize the application among their
Great List of Applications They Have Broken, and then maybe one day get a
patch that actually spoofs the protocol well enough for it to work with a
middlebox in the way. There are some (very few) exceptions to the latter
routine, but that's the usual dance.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: myth of the great transition (was US Defense Department formally adopts IPv6)

2003-06-19 Thread Eric A. Hall

on 6/19/2003 12:59 PM Keith Moore wrote:

Yeah, that there's a subset who cares. They got it. The market is
working.
 
 the market is dysfunctional.  it doesn't always fail to deliver what is
 needed, but it often does. 

I wouldn't say that this market is dysfunctional, more that markets aren't
always self-deterministic (regulations and laws are another form of this),
and that the market is as functional as can be expected considering the
restrictions that folks usually work under. Folks can't always get a range
of addresses from their provider, or can't get a routable block of private
addresses, or don't want to depend on a single provider, or whatever. I
mean, I certainly wouldn't be using a NAT on my home network if I had kept
the multiple Class C blocks I used to have, so really the problem is that
the market isn't offering the full range of appropriate choices. People
have chosen something that mostly works (or have been lied to), rather
than choosing the best option (which is not available).

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: myth of the great transition (was US Defense Department formally adopts IPv6)

2003-06-18 Thread Eric A. Hall

on 6/18/2003 1:31 PM Eric Rescorla wrote:

 What applications that people want to run--and the IT managers would 
 want to enable--are actually inhibited by NAT? It seems to me that most
 of the applications inconvenienced by NAT are ones that IT managers
 would want to screen off anyway.

Oracle and H.323 have both been historically problematic, and are both
reasonable applications. Home users also suffer a lot with perfectly
innocent services like games. Ignoring the rifle-shot arguments, requiring
an application to be rebuilt or that a middlebox learn to ~emulate an
active end-point are both miserable choices, and generally unnecessary.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: myth of the great transition (was US Defense Department formally adopts IPv6)

2003-06-18 Thread Eric A. Hall

on 6/18/2003 5:37 PM Keith Moore wrote:

 you're simply wrong about that, at least for anything resembling
 today's NATs.  except for a shortage of IPv4 addresses, NATs would not
 be needed at all.

...and a routing grid that could handle a squared table size. No use in
opening allocations to everybody that can justify ~/29 if filters are
still going to be dropping everything after /20.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Engineering to deal with the social problem of spam

2003-06-08 Thread Eric A. Hall

on 6/8/2003 12:47 PM Theodore Ts'o wrote:

 In order for this to work, the request for the Hashcalc calculation
 has to be done automatically.  If it requires manual intervention
 where the user sees the reject notice and then has to manually take
 action --- of course, it's doomed to fail.  So this is something which
 would require modification to the MTA's in order for this to work.

 The easiest way to automate such a scheme would be in the context of
 your replace SMTP proposal; it's just a matter of using bare keys +
 hashcash-style solution, instead of requiring a global PKI.

If a key was linked to a sender address, wouldn't that give somebody
everything they'd need to send me mail (just send it as me, since I'm
going to read my own mail even if nobody else does)? What's to prevent a
group of blackhats in the ~Ukraine from having their farm of 3Ghz PCs
generate error responses all day, just to gather keys to sell along with
the associated addresses? How would I replace the key/address pair in all
of the good repositories -- but not the bad ones -- after my key were
compromised, especially if my own delivery server is responding to
requests on my behalf?

Isn't the only way out of this to require senders use certificates that
can be validated, proving that the sender has access to a private key
which roughly corresponds to that address?

I'm in agreement that we don't need a global PKI to do any of this. We
only (ha!) need a relatively simple and lightweight way to locate and
retrieve public keys associated with specific resources (additional
vouching systems would be useful but aren't immediately necessary). That
particular problem isn't too tough to solve when limited to a scope that
is somewhat smaller than global PKI, and I think we'll actually get
something like that relatively quickly. I also think this is mostly a
chicken-and-egg problem at that level, and that we might find both the
chicken and the egg at the same time if we're willing to look a bit.
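
As a rough illustration of that locate-and-retrieve step, here is a
sketch using the dnspython library. The _mailkey label and the bare-text
TXT payload are inventions for the example, not any deployed convention:

  import dns.resolver   # dnspython

  def fetch_sender_key(address):
      # Look up a public key published for the sender's mail domain;
      # a DNS-scoped lookup like this is far lighter than a global PKI.
      domain = address.rsplit("@", 1)[1]
      answer = dns.resolver.resolve("_mailkey." + domain, "TXT")
      return b"".join(answer[0].strings).decode("ascii")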

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Engineering to deal with the social problem of spam

2003-06-07 Thread Eric A. Hall

on 6/7/2003 1:40 PM Paul Vixie wrote:

 and in this bof, i suggest that gateways to the current system be shat
 upon and never again considered.  when we move, we'll MOVE.

That's not globally-applicable. Probably better to specify the gateway
tagging, and then ~Paul can reject mail that has the markers, while ~Sales
can devalue mail with those markers in their post-transfer filters.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Engineering to deal with the social problem of spam

2003-06-07 Thread Eric A. Hall

on 6/7/2003 3:57 PM Spencer Dawkins wrote:

 Why wouldn't we have mail sending applications that spoke (I'm
 making this up) SMTP and MT2, with different URL schemes
 (mailto: for SMTP, mailtoauth: for MT2) associated with our
 correspondents, let correspondents advertise both ways of being
 reached on Vcards, etc., and not worry about gateways?

Let's separate those concepts.

First, regarding the need for gateways, people will use them no matter
what we say, since there will always be people with mixed installations,
people who need mail from both networks (eg, sales and support), and so
forth. If we don't specify the gateway behavior, the only predictable
outcome is that people will build them without guidance. If we specify
that they cannot be made, people will still make them, and without
guidance. Clearly, the only workable strategy is to specify them, and to
do so in such a way that folks like Paul can reject mail that ever
travelled across a legacy network (that's going to be tough in toto,
considering that MUAs will probably be built to use SMTP as the first-hop
service for a very long time to come).

As for the use of an alternate URI, those are used to tell the viewer
which protocol to use. In the case of outbound mail, it would effectively
be a way for the message recipient to tell the message sender that they
have to use MT2 for the first-hop of the message, which doesn't make a lot
of sense outside closed environments. Furthermore, what happened to the
message after the first-hop would be a result of the mail-routing
information in between the first and last hops, and would not necessarily
be determined by the protocol that the sender used for the first-hop. So,
URIs can't really be used to control the delivery path.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Engineering to deal with the social problem of spam

2003-06-07 Thread Eric A. Hall

on 6/7/2003 6:01 PM Paul Vixie wrote:

 Probably better to specify the gateway tagging, ...
 
 and we're going to convey trust and credence through a nontrusted
 system How?

We can discover without question who the first MT2 system in the path was,
and (assuming that identity information is required, which I do) that
gateway will also have had to present identity information about the
sender. All rules, recommendations, and supportive integrity mechanisms
aside, those are going to be your primary actionable knobs.

Assume that somebody like AOL embraces this system for private transfers
with some other large-scale provider. They probably won't update all of
their submission services beforehand, but instead will just map their
existing authenticated submission services to this system. EG, they'll
see who a particular mail message is from, locate the appropriate user
certificate in their private directory, and feed that into the system.
This same model can hold true for private Exchange, GroupWise, or SMTP
AUTH submission services. All of these are examples of gateways that can
leverage authentication services to map a sender certificate, even if
those networks aren't running MT2 as the native service.

So the problem isn't with gateways; it's with unauthenticated senders.
Simply put, messages won't make it to the next-hop inside the MT2 transfer
network UNLESS the gateway provides a user cert for the sender identity;
the next-hop would otherwise just reject the message.
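
A minimal sketch of that admission rule at a next-hop; since MT2 exists
only as an outline, the field names and the toy certificate check below
are all assumptions:

  def verify_cert(cert, claimed):
      # Stand-in for real chain validation; here we only check that the
      # certificate's subject matches the claimed sender identity.
      return cert is not None and cert.get("subject") == claimed

  def accept_from_gateway(envelope):
      # No verifiable sender certificate from the gateway, no admission.
      if not verify_cert(envelope.get("sender_cert"), envelope.get("sender")):
          return (550, "no verifiable sender identity; message refused")
      return (250, "accepted")

  print(accept_from_gateway({"sender": "batman@example.net"}))   # -> 550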

Gateway rules (which weren't discussed in any of the above) can give you
more information to act on. For example, you can set your defenses higher
if you see remnants of more than one legacy Received header, or if there
are other characteristics you don't like. Obviously gateways are going to
be necessary, so it's really going to be a question of being able to apply
the right kind of heuristics.

 if smtp fallback is desired, it must be done in the sending user agent,
 who upon not finding the SRV RR, could ask try smtp instead?.

Conversion in either direction could theoretically occur at any point.
What cannot easily happen is for any message to get past the first hop of
the MT2 network without having entered at a system which did not have
access to user credentials.

[not to Paul, who already gets it: On the subject of identity-tracking,
this subject is a non-starter. Folks can gather and use all of the
identities they want from any number of ISPs and mail services (you can
call yourself [EMAIL PROTECTED] and nobody will care as long as it
validates). This is, in the end, the same level of anonymity that is
available with SMTP today]

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




discussion redirect

2003-06-05 Thread Eric A. Hall

I've setup a temporary mailing list to handle discussion of an alternate
mail transfer system, which I expect will get replaced by a real list
eventually, assuming that interest continues to progress. To subscribe to
that list, send an email message to [EMAIL PROTECTED]

I've also put together a basic outline proposal that shows what I'd like
to see in such a service. http://www.ehsco.com/misc/mt2/ has that proposal
(if it doesn't display right, you can download the original powerpoint
slides from http://www.ehsco.com/misc/mt2/MT2-00.ppt). I know that Keith
was scratching some ideas together for reusing SMTP and I'll post that if
he wants. Anybody that has any other proposals, let me know and I'll post
those too. We can discuss any or all of those on the mailing list. We
should probably move out of here anyway, before we get evicted.

Thanks

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/






Re: The utility of IP is at stake here

2003-06-03 Thread Eric A. Hall

on 6/2/2003 4:15 PM Tony Hain wrote:

 I agree with the idea of a BOF, but 'anti-spam' is the wrong focus. Spam
 is a social problem, not an engineering one. I contend that is why we
 already have a research group dealing with it (social problems are
 inherently difficult for engineers, thus requiring research to figure
 out). Focus the group on a tangible engineering problem, deployable
 authenticated email. Or as Vixie labeled the more generic, interpersonal
 batch communication system. 

I agree that this is the right approach to pursue. In fact, I think we
could probably skip the BoF and start talking about that particular topic
immediately (secure mail, not anti-spam technologies, which is an
interesting research topic but is a dead-end outside negotiated usage
domains). All we probably need to get started is a mailing list, if
anybody feels like setting one up.

I would also _still_ like to see the I*** leadership fill the necessary
jurisdictional roles of liaison, so that the current and future technical
mechanisms can be presented as hooks for the anti-spam legislation that is
needed to actually solve the social problem of spam.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: The utility of IP is at stake here

2003-06-03 Thread Eric A. Hall

on 6/2/2003 5:28 PM J. Noel Chiappa wrote:
 From: Tony Hain [EMAIL PROTECTED]
 
 'anti-spam' is the wrong focus. Spam is a social problem, not an 
 engineering one.
 
 Sorry, I don't agree with this logic: if it's valid, then why try to
 design better locks, since theft is a social problem?

All human action (and crime especially) is a result of opportunity and
desire. Locks and other defenses are designed to impede opportunity, while
the law is supposed to restrain desire.

Most of us have the equivalent of locks already, either in the form of
blacklists, filters, whatever. Coming up with more of these is fine, but
those will only address the question of utility within certain scopes. By
the same measure, a lock on a barn door in an Iowa cornfield has different
utility than a lock on the door of a Porsche in a crowded ghetto (on the
Internet, of course, everybody is your roommate). Nobody makes a lock that
is deployable everywhere, and which cannot be easily circumvented by a
willing expert. There's no reason to believe that any particular anti-spam
solution is going to work any better.

Security cameras are another form of deterrent, one which works towards
reducing both the opportunity and the desire. Think of an authenticated
mail system as being analogous to security cameras. You will still want to
have the locks that are appropriate for your needs, but making the
presence of security cameras known is going to keep some people from
succumbing to the desire, while having the criminals on tape will prove
useful when they decide they want to get past the locks.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Spam

2003-06-01 Thread Eric A. Hall
, converted, and bit-stuffed into
the proverbial round hole. All of this would make implementation much more
difficult, and would in turn make SMTP/821 more fragile.

My feeling is that the whole effort would be much more likely to succeed
if it were a from-scratch design. In fact, trying to do this inside SMTP
seems like a pretty good strategy for making sure that it either doesn't
happen at all, or that whatever gets through would be watered down and not
worth the effort.

 (I'd also rather avoid the how do we design a new
 mail protocol pandora's box - the amount of second-system effect would
 be huge, and it would take years to sort it out)

I competely agree with this sentiment, but that can be dealt with by
turning feature requests into architecture requests, and keeping the
features themselves out of the core specs. As an example, I want a
recall feature that lets me rescind an unread message from a recipient's
mailstore. That kind of feature has certain requirements (end-to-end
option encapsulation and verifiable identities), but that particular
feature should definitly not be in the spec. I can think of about 10-20
core architectural elements, but after that everything else probably
should be part of another spec.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-31 Thread Eric A. Hall

on 5/30/2003 11:43 AM Vernon Schryver wrote:

 That is an intersting point, but only if you think that ISPs that
 don't care enough about spam today to yank accounts (even of
 resellers) will bother to provide any of the things that might be
 called additional validation of identities.  Otherwise, you may
 as well permanently blacklist those ISPs by their IP addresses with
 supplemental whitelists of your correspondents that patronize them,
 and declare victory in the spam war.

I do that already. Trace information would let me do it sooner, and with
greater precision. Also, I can be more selective in my blocks, rather than
having to block entire countries (not all of whom are in Greater Asia)
because some ISPs have multiple netblocks or move around frequently. In
this regard, being able to identify an idiot ISP gives me finer-grained
control over that idiot's access to my network.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: The utility of IP is at stake here

2003-05-31 Thread Eric A. Hall

on 5/30/2003 11:45 AM Paul Hoffman / IMC wrote:

 So far on this thread, we have heard from none of the large-scale 
 mail carriers, although we have heard that the spam problem is 
 costing them millions of dollars a year. That should be a clue to the 
 IETF list. If there is a problem that is affecting a company to the 
 tune of millions of dollars a year, and that company thinks that the 
 problem could be solved, they would spend that much money to solve 
 it. Please note that they aren't.

http://www.nytimes.com/2003/05/25/business/yourmoney/25SPAM.html?pagewanted=3
leads with the question Microsoft has spent more than a half-billion
dollars trying to build software to filter out spam. Why isn't that good
enough? Note that the responder doesn't challenge the amount.

 I have spoken to some of these heavily-affected companies (instead of 
 just hypothesizing about them). Their answers were all the same: they 
 don't believe the problem is solvable for the amount of money that 
 they are losing. They would love to solve the spam problem: not only 
 would doing so save them money, it would get them new income. Some 
 estimate this potential income to be hundreds of millions of dollars 
 a year, much more than they are losing on spam. But they believe that 
 the overhead of the needed trust system, and the cost of losing mail 
 that didn't go through the trust system, is simply too high.

http://www.infoworld.com/article/03/05/30/HNmsspam_1.html demonstrates
that at least some of the large mailers still believe there is a problem
worth addressing, and that they are continuing to spend towards solving
the problem.

There's no need to hypothesize about much of this. There is a problem,
obviously. Some of the specific approaches have proven to be untenable
(many of the approaches which were discussed here have been tried and
apparently rejected as part of the Black Penny Project for example). I
think it's equally obvious that companies are spending massive amounts of
money to try and solve this problem (assuming we agree that $500 million
from one company qualifies as massive), and that they are continuing to
pursue those solutions which are probably the most viable for their
peculiar situations.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-31 Thread Eric A. Hall
 cost less than the
settlement costs, corporate personnel and legal fees would have been well
worth the expense to them.

http://www.wired.com/news/technology/0,1282,9641,00.html describes three
different instances of fraudulent misrepresentation of Yahoo, any of which
would have been ameliorated with half-decent identity information which
clearly indicated the user was a customer and not an employee of Yahoo.
I've no idea what the dollar cost to Yahoo was, but the legal time alone
couldn't have been cheap, not to mention the economic impact from loss of
credibility.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Spam

2003-05-31 Thread Eric A. Hall

on 5/30/2003 2:17 PM John Stracke wrote:
 Paul Vixie wrote:
 
if we could make spamganging so illegal that they were eventually
not replaced, then their traffic would be replaced by bulk e-mail from every
customer of every CRM (customer relationship management) company in the world.
 
 I'm not sure why this follows.  Wiping out the fraudulent spammers would 
 take a War On Spam, with massive levels of propaganda to stir up public 
 opinion enough to justify the money it would cost.  In that kind of 
 environment, even in the aftermath, no white hat would dare spam; their 
 customers would immediately turn against them.

The way to avoid that kind of blow-back against legitimate use is through
documented innocence (saving the explicit opt-in confirmation).

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-30 Thread Eric A. Hall

on 5/29/2003 12:18 PM Bill Cunningham wrote:
 Personally I think the best idea I've seen yet is the idea of a prefix,
  such as ADV in the subject line.

Using the current transfer and message-format models, that requires
post-transfer processing. At a minimum, you would be legitimizing
artificially increased bandwidth and processing demands (assuming that
everybody complied with the law).

 It maybe possible to put something like ADV in a protocol header. Or
 maybe that is too extreme.

A special header would be feasible if the transfer headers and message
headers were separate, since you could reject the message before the
transfer. The same results would also be possible with ESMTP using
something like an ;ADV extension to the MAIL FROM command. Both of those
require wholesale upgrades to have any impact, so in the meantime you'd
still have to rely on post-transfer processing.
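
For illustration, here is a server-side sketch of that hypothetical
extension, treating ADV as a MAIL FROM parameter so the refusal happens
before any message data moves (the parameter syntax is made up):

  def handle_mail_from(line, accept_adv=False):
      # 'line' is the raw command, e.g. "MAIL FROM:<seller@example.com> ADV"
      params = line.split(">", 1)[-1].split()
      if "ADV" in (p.upper() for p in params) and not accept_adv:
          return "550 unsolicited advertising refused before transfer"
      return "250 OK"

  print(handle_mail_from("MAIL FROM:<seller@example.com> ADV"))   # -> 550 ...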

There is another significant problem with using an ADV tag with all
commercial mail, which is that it doesn't adequately distinguish between
spam and legitimate commercial mail. Would upgrade notification messages
for stuff like software need to be marked? Would domain renewal notices
from your registrar need to be marked? Would you need to explicitly opt-in
to get those messages without them being marked?

Seems to me we should be defining laws that put the onus on the spammers
rather than on the recipients and legitimate business communications.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: The utility of IP is at stake here

2003-05-30 Thread Eric A. Hall

on 5/29/2003 3:39 PM Peter Deutsch wrote:

 I personally want a next generation system that would *increase* my
 privacy, not attempt to make a virtue out of *removing* the few shreds
 of anonymity I have left. I would specifically refuse to use such a
 system. And yes, I also want it to make unsolicited, bulk email harder
 to send to me, but not at the cost of my privacy.

Everybody wants to see caller-ID but nobody wants to send it.

Actually, the use of an identification system doesn't necessarily have to
go directly against privacy or anonymity. It leaves the door open for some
kinds of abuses in that area, but those aren't a whole lot worse.

A ~certificate would validate the identity you are using for that piece of
email. That identity doesn't have to be your name or anything else that
identifies you personally. Hell, use 20 certificates, call yourself Batman
in one group and Wonder Woman in the other, nobody will care. As long as
they all verify -- and as long as I can track you down with a court order
that exposes what I need to know when I have a demonstrable reason to know
it -- nobody should care about the identifiers you choose to use.

The real risk here is that the delegator will know who you really are and
might tell somebody. I don't see much difference between that and the risk
we already have from upstreams being able to sniff and delegate, though.

Besides, if everybody feels that strongly about it, a mail system like the
one I laid out doesn't *require* user identification, only host and domain
identification. If folks want the user part to be optional, that's fine
with me.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: The utility of IP is at stake here

2003-05-30 Thread Eric A. Hall

on 5/29/2003 3:29 PM Dave Crocker wrote:

 Please indicate some historical basis for moving an installed base of
 users on this kind of scale and for this kind of reason.

Notwithstanding the overly-specific nature of the request, I can think of
two off the top of my head, which are FTP/Gopher-HTTP and POP-IMAP.

The features define the benefits, and the benefits are the motivators (I
already gave a list of the features I'd like, and which I think would be
motivational). Large-scale mail carriers would probably switch quickly if
the accountability feature proved useful, even in the absence of laws. The
same is probably true for corporates and financial services firms who rely
heavily on accountability. That's just one benefit.

There are external motivators as well, such as flagdays for the government
and all of its contractors.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: The utility of IP is at stake here

2003-05-30 Thread Eric A. Hall

on 5/29/2003 5:59 PM David Morris wrote:

 The slower process will be the millions of smaller mail infrastructures,

Yes, small businesses are the biggest hurdle in the deployment cycle.

Fortunately, I think that most of them probably use their ISP's mail
services, so it's not quite like we have to convince every office in every
stripmall to upgrade.

 As long as the new protocols provide a migration plan and support,
 upgrade over a year or two is a reasonable expectation.

Yes. And it's also reasonable that after ~80% switch, sites can start to
disable the legacy compatibility mode. Note that many of them will still
need it for things like printservers and other devices, but for general
Internet communications it should be a little easier since most of the
changeover can happen just by getting most of the ISPs to switch.

The really hard question isn't the upgrade, it's how to limit pollution
from legacy MTAs during the upgrade. If spam is still running high during
the transition, then people will wonder why they bothered.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: The utility of IP is at stake here

2003-05-30 Thread Eric A. Hall

on 5/29/2003 6:27 PM Dean Anderson wrote:

 Anyway, with Type 1 and Type 2 spam, this is unnecessary, since they
 tell you how to contact them in the message.

There is still a reason to have verifiable identities for commercial spam,
which is protection against joe-jobs. You want to have proof that the
beneficiary is really the spammer and not just a victim, or that the
spammer is really the spammer regardless of who he is spamming for. While
there are ways of doing this after the fact as you said, having a
verifiable sender identity makes it a lot simpler.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: The utility of IP is at stake here

2003-05-30 Thread Eric A. Hall

on 5/30/2003 1:36 AM Dave Crocker wrote:

 HTTP can reasonably be considered a replacement for Anonymous FTP,
 during an academic discussion.  The massive difference in the service
 experience makes this a less-than-practical comparison, when discussing
 an email transition.  So does the massive difference in scaling issues
 for the 1989 timeframe, versus now.

 The POP-IMAP example is excellent, since it really demonstrates my
 point. IMAP is rather popular in some local area network environments.
 However its long history has failed utterly to seriously displace POP
 on a global scale.

I would not disagree with your assessments other than to say that the
comparisons aren't exactly applicable.

Specifically, you don't have to upgrade every client in the world for the
transition to work. As a matter of deployment, you only have to upgrade
the MTAs. The submission service can still be SMTP or whatever you want;
as long as the server which first puts the message into the ng stream is
ng-compliant *AND* that server is capable of providing the identity
information, then the first-hop(s) don't really have to be ng-compliant
for the scheme to work.

Asking for examples of upgrades involving hundreds of millions of clients
isn't really an applicable exercise. The examples I gave are useful to the
extent that they demonstrate a willingness to move critical technology at
varying scales.

 Seriously folks, if discussion about changes is going to be productive,
 it needs to pay much more realistic attention to history and pragmatics
 of ISP operations and average-user preferences.

Let's not overdo it either.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-29 Thread Eric A. Hall
 be necessary in order to get beyond what SMTP can
provide today. It is much more justifiable if we are going to reinvent
mail for other purposes at the same time.

I also want to reiterate that we still need laws regardless of whether we
go to a new SMTPng or stay with SMTP. My feeling is that we (collectively)
should go ahead and start pursuing legal enforcement methods for use with
SMTP now, regardless of what we end up doing over the next five years on
the technology front. As such, I think the legal-interaction issue needs
to be addressed first.

I am also in favor of reinventing mail. It's a standing joke against the
IETF that people can readily commit flagrant forgeries in one of the most
important technologies of our time.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: The utility of IP is at stake here

2003-05-29 Thread Eric A. Hall

on 5/28/2003 8:30 PM Richard Shockey wrote:

 It's not that hard to write a letter, sign it with a return address and
 put a postage stamp on it or make a phone call to a local
 representatives office..
 
 The US Congress is not very good a dealing with email ..trust me.
 
 they like snail mail...

As a follow-up to anyone considering it, one office has said that the
anthrax scare has made snail-mail processing even more snail-like on the
hill. Sending postal mail to the home-state office is supposedly much
faster. If it has an in-state return address then it is that much more
likely to get read.

Also, at least some of the senators supposedly get several thousand emails
each week. Interestingly, about half the senators and reps no longer
publish email addresses, and many of them bounce email with a message
saying to resubmit via a web form. Whether this is due to the volume of
political spamming or UCE spamming is left to the reader to ponder.

All of that aside, the one response I've received which doesn't appear to
be completely automated (or at least somebody had to manually choose the
correct automated response) was to email.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-28 Thread Eric A. Hall

on 5/27/2003 8:04 PM Dean Anderson wrote:

more waffling snipped

 In both of these examples, the verbiage present in the TCPA is
 equally applicable to the problem of spam. Your continued waffling on
 minor, irrelevant, non-contributory details and detours does not
 change that.
 
 No, it isn't, despite your continued assertions.  You have failed to 
 present a case that spam costs any money, or interferes with any
 reasonable person's email.

You still don't seem to understand the nature of proof, arguing instead
that the existence of alternatives somehow disproves a matter of fact.
Again, whether or not you think that the proof is significant is a matter
of opinion, not a matter of proof.

Look, there were plenty of alternatives to the TCPA as well. Users could
have avoided the cost issues by keeping the fax powered off, and could
have avoided the availability issues by ensuring they had plenty of
supplies or by using a second machine. But rather than saying lump it and
use a workaround, congress passed a law which recognized and addressed
the violative nature of the canonical problem itself. This is perfectly
demonstrated by the fact that the law affords protection *even to those
people who never had any problems with junk faxes at all*. And this law
has been upheld as constitutionally valid in circuit courts, despite your
equivocating and factually erroneous claims to the contrary.

For those same reasons -- and using the same kind of proof -- a legal ban
on email spam is a demonstrably reasonable pursuit. There is proof of
cost-reversal with email spam at the same cost levels as fax spam
(measured-rate), and there is proof of interference with usefulness at
the same levels as fax spam (full mailboxes), even without getting into
the testimony presented to congress in various hearings, and certainly
without exposing ourselves to the common sense that the first amendment
test allows for. Your alternatives such as ~don't use PacBell ISDN and
~don't use Hotmail are not only facile waffling, they are also irrelevant
in the face of the proof. Unless you can disprove the proof -- showing
that *NO* communication services charges by the byte or minute, and that
*NO* free mailbox service is filled with spam -- then the proof stands.
Until then, the only part of this that can be debated is whether or not
the proof contributes to the testimony and the common sense towards the
purpose of extending the TCPA. Feel free to claim that it doesn't, nobody
will be surprised.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-27 Thread Eric A. Hall

on 5/27/2003 12:21 PM Anthony Atkielski wrote:

 I have an idea that is kind of odd, and I don't know if it would work.
 I really wouldn't mind signing up for a service that sends me filtered 
 advertising, in domains that I find _specifically_ interesting.  I'd be
 happy to read about products and services that directly address my
 needs and interests, and if such a service existed, I'd be tempted to
 sign up.  So what if such services did exist on a widespread basis, and
 everyone signed up for them?  Would there still be a need for spam?

People would still want to tell everybody about their products. They would
even believe that if you aren't monitoring their specific domain, it's
only because you don't fully appreciate the benefits of their products.

To quote a bulk-mailer:

http://www.nytimes.com/2003/05/25/business/yourmoney/25SPAM.html?pagewanted=3

| MICHAEL P. SHERMAN

| Q. What's wrong with the idea that people should have to opt in to
| get commercial e-mail?
|
| A. Do you ask for advertising? Advertising introduces someone to a new
| idea. How are you going to do that? People aren't going to say, I
| want something new today, so I want an e-mail from you.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-27 Thread Eric A. Hall

on 5/27/2003 1:07 PM Iljitsch van Beijnum wrote:

 They will continue to do it as long as:
 
 1. they get the return they're looking for
 2. it's relatively easy to do
 3. they get away with it

 Number two is an area where the IETF could actually do something 
 useful. The way things are today, everyone can contact any mailserver 
 and expect the message to be delivered. Now this is a nice way to build
 a distributed mail system, until such time that spammers pop up, 
 bombard mail servers around the world with their enlargement ads, and 
 when they are shut down they simply move to another IP address and 
 resume their abuse. If we mandate an extension to SMTP to signal an 
 unknown mail server that it should either

 a. find a known server to forward the message, or
 b. go through some kind of (off-line) procedure to become accredited
 
 people who send small amounts of mail can simply be instructed to use 
 their ISP's mail server while those who send lots of legitimate mail 
 can be whitelisted. Spammers are presumably stopped when they flood 
 their ISP's mail server or they lose their white list status.

I'm all for that, but there are some serious difficulties with this. I
personally don't think it will work until the transfer protocols are
reinvented (specifically so that the exchange supports a separation of the
transfer and message headers, the current commingling of which is a
layering violation, IMHO), but rearchitecting gets a lot of push-back.
It's definitely an area that can stand some work.
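
A toy sketch of what that separation buys -- a receiver refusing on
envelope data alone, before the message (with its own headers and body)
is ever transferred. Every field here is invented:

  def decide(envelope):
      # Accept or refuse using only transfer-layer data; the message
      # itself has not crossed the wire yet.
      if envelope.get("bulk") and not envelope.get("accredited"):
          return "refuse"
      return "accept"

  print(decide({"sender": "seller@example.com", "bulk": True,
                "accredited": False}))   # -> refuse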

 For number three we need the law.

Yep.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-27 Thread Eric A. Hall

on 5/27/2003 12:28 PM Zefram wrote:

 Whether the receiving-end filtering approach is viable in the long
 term is debateable.  When 99.9% of email is spam, the filtering world
 will look very different.  Perhaps at that point it'll be infeasible to
 continue allowing unsolicited email at all.

Some of us are already at that point and are wanting to have the debate
right now.

Every system can be fooled (corollary: every fool has a system). This
includes pre-emptive measures like laws, pre-transfer black/whitelists,
and post-transfer filtering techniques. What laws provide over the other
mechanisms is a discouragement hammer. The other mechanisms are purely
defensive, and as I said earlier, only have value if you redefine some
measure of defeat as victory.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-27 Thread Eric A. Hall

on 5/27/2003 3:39 PM Dean Anderson wrote:

 I read the ruling, too.  The district court said that the government 
 failed to satisfy the tests for restriction of commercial speech, as 
 outlined in Central Hudson. The appeals court reversed.

This is more waffling, and is useless. The constitutionality of the law was
upheld. Everything else is irrelevant.

 However, this is an interesting case, since American Blast Fax was not 
 represented in the Appeal, and had apparently gone out of business.

More waffling. Even if this mattered, the other defendant (fax.com) was
still in business. Not that it matters, since the court specifically
upheld the validity of the law.

 In other words, cost plays a big part in the decision.  But as has been
 so roundly demonstrated, the cost associated with email is practically 
 non-existent, and does not ever cost any user more than $1 or $2 per
 month, which they pay for email services.

The cost of a fax to a typical organization with bulk purchasing power is
probably on the order of $.02 per page, including the paper and ink used.
Using thermal paper from a retailer averages out to about $.06 per page.
It is very easy to demonstrate per-message costs in that same ballpark for
spam, especially once we get measured-rate circuits involved. That you do
not suffer from these burdens does not mean that nobody should be
protected from them.

As has been stated, the junk fax laws are not limited to cost protection,
and also address the usability arguments. It has been successfully proven
in this discussion that a sufficiently high volume of spam makes email
less useful for its owner.

You continue to exhibit an extraordinary willingness to argue your
opinions against the facts.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-27 Thread Eric A. Hall

on 5/27/2003 6:24 PM Dean Anderson wrote:

waffling snipped

The cost of a fax to a typical organization with bulk purchasing power is
probably on the order of $.02 per page, including the paper and ink used.
Using thermal paper from a retailer averages out to about $.06 per page.
It is very easy to demonstrate per-message costs in that same ballpark for
spam, especially once we get measured-rate circuits involved. That you do
not suffer from these burdens does not mean that nobody should be
protected from them.
 
 No, this isn't true. If you pay $1 per month for email

You have no idea what I pay. As far as you know, I pay $.05 per minute for
ISDN and cellular-data hookups to pull my mail down from my colo server.
Everytime I pull a piece of spam I pay the nickel, which is in the same
ballpark as fax spam costs. Your response to this point was, and I quote
here: Don't get email on measured rate services, then. which is a limp
way of saying that spam costs people with these links too much money for
them to use email. You have admitted that spam has a cost, and are now
trying to waffle out of that position by claiming that there is no cost.

As has been stated, the junk fax laws are not limited to cost protection,
and also address the usability arguments.
  
 No, the Appeals court didn't find that at all.

I thought you read the opinion:

| We conclude that the Government has demonstrated a substantial
| interest in restricting unsolicited fax advertisements in order to
| prevent the cost shifting and interference such unwanted advertising
| places on the recipient.  

In this particular case, the court cited previous evidence on the issue of
usefulness, in that junk faxes prevented the machine from being used for
its intended purposes. Spam also prevents email from being used for its
intended purposes when (1) their hotmail/yahoo mailbox fills up, (2) a
false-positive in a filter or a blacklist kills a message, or (3) somebody
deletes all of their email because they can't scan it all. All of these
are examples of interference. I don't really care about how you waffle
around this fact.

In both of these examples, the verbiage present in the TCPA is equally
applicable to the problem of spam. Your continued waffling on minor,
irrelevant, non-contributory details and detours does not change that.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: spam

2003-05-27 Thread Eric A. Hall

on 5/27/2003 6:28 PM Vernon Schryver wrote:

 Again, if spam costs mail providers much more than $1 or $2/month/user,
 then how can free providers offer mailboxes and how can you buy full
 Internet service including the use of modem pools or whatever for
 $10-$15/month?

In the case of Yahoo and Hotmail, they have free and premium accounts.
They underwrite both services with advertising income. The cost for the
free service is the value they have assigned to the ads they display when
you read your mail. The cost for the premium service is that value plus a
little extra.

http://help.msn.com/!data/en_us/data/HMFAQv7.its51/$content$/HM_FAQ_HowMuchEmailStorageSpaceDoIGetWithHotmailUnAuth.htm

| With free MSN Hotmail accounts you receive 2 megabytes (MB) of
| storage space on Hotmail servers for messages and attachments.

| Subscribers to MSN Extra Storage and MSN Internet Services monthly
| subscribers receive 10 MB of Hotmail storage space as well as 30 MB
| of storage space in MSN Groups on which to store and share photos,
| music, and other files.

Yahoo's free option is a six-megabyte mailbox. The upgrade options are
at http://billing.mail.yahoo.com/bm/Upgrades and below. Note that there is
 a Plus option which is an extra-premium to get rid of their ads.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: The utility of IP is at stake here

2003-05-27 Thread Eric A. Hall

on 5/27/2003 6:59 PM Tony Hain wrote:

 protocol design. We as the IETF need to step up and provide an alternate
 design if we want the system to change. Some components of a new design
 need to be a viable trust model, and irrefutable traceability. 

Yes.

 The IETF can't do social engineering, but can provide the framework for
 others to do so. In that light, we simply need to get on with the task
 at hand and create a replacement protocol.

Actually, and this was point of my early posts, the IETF or somebody who
is authorized to represent this unruly mob needs to interact with the
legislative bodies to tell them what we *CAN* do. The law has to be built
around the protocol work just as much as the protocol has to support the
machinations of the law.

As I also tried to demonstrate in the earlier messages, the coordination
can be very minor to start with, being little more than "250 means don't
send me illegal crap" while "2XX means this other thing..."

Somebody has to perform this coordination. They don't have to embrace
policy or anything else.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Thinking differently about the site local problem (was: RE: site local addresses (was Re: Fw: Welcome to the InterNAT...))

2003-04-01 Thread Eric A. Hall

on 3/31/2003 11:01 AM Bill Manning wrote:
   Is may be worth noting that RIRs have -NEVER- made presumptions
   on routability of the delegations they make.

Probably more accurate to say that they have never guaranteed routability.

They make all kinds of presumptions about routability. One of the reasons
they claim to refuse (say) a private /24 is that it isn't going to be widely
routable. By default, this implies that larger delegations are presumed to
be routable, or else they wouldn't assign them either.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-03 Thread Eric A. Hall

on 12/2/2002 11:13 AM Stephen Sprunk wrote:

 Okay, so when every foo.com. applies to become a foo., how will you
 control the growth?

1/ no trademarks allowed

2/ competitive rebidding every two years

3/ mandatory open downstream registrations (no exclusions)

4/ high entry fees

 IMHO, the only solution to this problem is the elimination of gTLDs
 entirely.

There isn't enough demand to support more than a few dozen popular TLDs.
Generic TLDs are user-driven, with the market deciding which ones they
want to use. Geographic TLDs are completely arbitrary and favor the
functionary instead of the user.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-03 Thread Eric A. Hall

on 12/2/2002 11:53 AM Måns Nilsson wrote:

 I hope it would shut the nutcases arguing about new TLDs up, because they
 have been given what they so hotly desire (why escapes me, but I suppose
 they believe they'll make a big bag of money selling domain names. Good
 luck.) 
 
 Technically, it is no problem to keep 500 delegations in sync -- even with
 higher demands on correctness than are made today, both for the root and
 most TLDs. 
 
 However, there can only be one root. That is not up for discussion. (in
 case somebody thought I think so.)

This is also my position entirely.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-03 Thread Eric A. Hall

on 12/3/2002 1:49 PM Stephen Sprunk wrote:
 Thus spake Eric A. Hall [EMAIL PROTECTED]

 1/ no trademarks allowed
 
 Every combination of characters is trademarked for some purpose in some
  jurisdiction.  If you find some exceptions, I'll find some VC money
 and take care of that; problem solved.

Let's not get carried away. Trademark didn't stop .info and it won't stop
.car or .auto either.

 2/ competitive rebidding every two years
 
 IBM is not going to like potentially losing IBM.

see item 1.

 3/ mandatory open downstream registrations (no exclusions)
 
 A hierarchy without any kind of classification?

Nobody has been able to make any kind of classification work in the
generalized sense. Every classification scheme eventually proves to be
derived and arbitrary. Markets are chaotic, but the ordering that makes
sense to the customers does eventually emerge.

 COM. vs NET. today, most SLDs from one exist in the other, and VeriSign
 even offers a package where they'll register your SLD in every single
 TLD that exists for one price.

This is completely irrelevant.

 4/ high entry fees
 
 Well, that'll certainly be needed, since the root registrar will need a
 few hundred DNS servers to handle the volume of new queries in the root
 now that you've made a flat namespace.

I don't see anybody arguing for a flat root. That may be the argument you
want to have but I haven't seen it suggested.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Does anyone use message/external-body?

2002-11-16 Thread Eric A. Hall

on 11/15/2002 4:14 PM Keith Moore wrote:
 However, this raises a question: does *anyone* use external-body in
  association with I-D announcements?

 Several MUAs support message/external-body, but they don't all work
 with the I-D announcements. Specifically, some MUAs only render the
 entities if a Content-Transfer-Encoding MIME header field is defined,



 How bizarre.  You mean those MUAs can default to 7bit for other
 bodyparts but not for message/external-body?

That's right. They aren't applying the default CTE to the entity headers
inside the message/external-body entity.

 And that multiple implementors have made this same error?

Netscape has had this problem since 3.x, and Mozilla still has it
(presumably they are all the same code tree). I seem to remember others
having the same problem but I can't find my testing notes.

 Even though fixing the MUAs would be the best fix in the long-term,
 adding the CTE MIME header field to these entities would at least
 allow more MUAs to render the entities appropriately. Since this is a
 mandatory header field for some entities, there is some argument that
 the missing CTE is the problem anyway.

 The RFC is quite clear that in the absence of a
 content-transfer-encoding field it is interpreted as 7bit.

Well, yeah, I don't disagree with the default interpretation. Some
argument specifically refers to those times when the docs aren't 7bit
(plenty of examples of I-Ds with 8bit text, despite the rules there too),
which eventually nests into ~it should always be declared.

 But I do think it would be an interesting experiment to add the line

 content-transfer-encoding: 7bit

 to the external body parts of those notices, just to see how many more
 MUAs worked with them.

It makes NS/Moz usable anyway.
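
For concreteness, here is a sketch using Python's stock email library to
build such an entity with the CTE declared explicitly; the site and
draft name are placeholders:

  from email.message import Message

  ext = Message()
  ext.set_type("message/external-body")
  ext.set_param("access-type", "anon-ftp")
  ext.set_param("site", "ftp.example.org")          # placeholder site
  ext.set_param("directory", "internet-drafts")
  ext.set_param("name", "draft-example-00.txt")     # placeholder draft
  # The experiment: declare the default explicitly rather than relying
  # on MUAs to apply the implicit 7bit rule.
  ext["Content-Transfer-Encoding"] = "7bit"
  # The entity body carries the headers of the external document itself.
  ext.set_payload("Content-Type: text/plain\r\n\r\n")
  print(ext.as_string())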

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Does anyone use message/external-body?

2002-11-15 Thread Eric A. Hall

on 11/14/2002 11:46 PM Dan Kohn wrote:

 However, this raises a question: does *anyone* use external-body in
 association with I-D announcements?

Several MUAs support message/external-body, but they don't all work with
the I-D announcements. Specifically, some MUAs only render the entities if
a Content-Transfer-Encoding MIME header field is defined, and the I-Ds
don't define that MIME header field with those entities. The result is
that Netscape, Mozilla and some others don't render those links, meaning
that the message/external-body entities cannot be used by users with the
affected MUAs. This obviously limits usability.

Even though fixing the MUAs would be the best fix in the long-term, adding
the CTE MIME header field to these entities would at least allow more MUAs
to render the entities appropriately. Since this is a mandatory header
field for some entities, there is some argument that the missing CTE is
the problem anyway.

As to the larger question, I'm opposed to replacing the external links
with URLs. There are just as many known problems with rendering long URLs
as there are with message/external-body entities (eg, your example folded
and became unusable in Mozilla). Besides, the IETF should eat its own dog
food, and message/external-body is an important type. Now if we could just
make it work with some of the popular MUAs...

While we're on the topic of troublesome messages from the IETF, it's also
interesting to note that the I-D submission response messages are also
malformed, containing two different Subject header fields:

  Subject: Re: draft-ietf-crisp-lw-user-00.txt
  Subject: Autoreply from Internet Draft Submission Manager


-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: APEX

2002-09-24 Thread Eric A. Hall


on 9/24/2002 11:45 AM Dave Crocker wrote:

 However the problem is not with a lack of documentation for the terms.
 The problem is with community USE of the terms.  The community is not
 precise.  The terms do not have universal, rigorous usage, the way
 meter or kilogram do.

This is problematic all across the IETF. Mailbox has different
syntactical meanings across the different messaging RFCs, for example,
sometimes implying a localpart while at other times implying a folder
within a mailstore, and implying some other unit at other times. And let's
not even get started on domains.

Some RFCs invent their own words. One of the DNS RFCs uses a term which
does not exist in common dictionaries.

I've been wondering for a while if it wouldn't serve the community well
for one of the I* bodies to develop a dictionary for developing and
discussing Internet technologies. This discussion seems to reinforce the
necessity of such.

-- 
Eric A. Hallhttp://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Global PKI on DNS?

2002-06-14 Thread Eric A. Hall


on 6/14/2002 11:29 AM Ed Gerck wrote:

 since Verisign de facto controls the DNS name space

Having any [registry-X] control [TLD-Y] at any given moment is a whole
'nother set of issues. EG, if the registry operator for a TLD changes,
would the private key linked to the TLD also need to be changed?

I'd say that the only way to meld these technologies is to use DNS as a
locator for a self-signed root certificate associated with a domain.
Caveat being that even then you are at the mercy of the signer. You don't
know if you are talking with [EMAIL PROTECTED] or if you are really talking
to [EMAIL PROTECTED] who has access to the repository. Likewise, you don't
know if you are talking with mailserver.example.com or if you are talking
to bofh-pc.example.com. And there's still the problem of ensuring the
integrity of the link between the DNS referral and how you get to the CA
authority; maybe [EMAIL PROTECTED] has hijacked the target machine,
or maybe he is running a different copy of the zone and is happy to
redirect only half of the traffic.

About the only thing you could use this for is to prove that you aren't
talking to something -- they haven't been able to obtain a copy of the
private key, legitimately or otherwise -- meaning that it would
essentially be PTR/A validation on steroids. It could also be useful for
negotiating transport security services and a limited amount of identity
information (similar to what you get from PTR/A validation) but would not
be useful for any explicit identity information. Some sort of external
notary service is still necessary for that.
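
For reference, here is the plain PTR/A validation this would be
extending, sketched with the standard library:

  import socket

  def ptr_a_validate(ip):
      # Forward-confirmed reverse DNS: the PTR name must resolve back
      # to the original address.
      try:
          name = socket.gethostbyaddr(ip)[0]
          return ip in socket.gethostbyname_ex(name)[2]
      except OSError:
          return False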

Of course the universe is at the mercy of network operator integrity
already so maybe this is ok.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: Global PKI on DNS?

2002-06-08 Thread Eric A. Hall


on 6/8/2002 8:22 AM Franck Martin said the following:

 I was wondering if the best system to build a global PKI wouldn't be the
 DNS system already in place?

This is an ongoing argument. Essentially there are two camps:

  Pro--there's a global database out there, let's put useful stuff
   into it. Certs is a no-brainer, but people have also argued for
   baseball scores, usernames, and everything else short of kitchen
   sink inventories.

  Con--the more crap you put into DNS, the less usable it becomes for
   its primary purpose of providing fast and lightweight lookups
   for Internet resources. While certs can be argued to be in that
   camp, they cannot be handled with fast and lightweight lookups.

As other people have already pointed out, the use of large objects
requires that clients and servers use TCP for lookups. TCP imposes a large
burden on servers (especially busy servers) in comparison to UDP. Add to
that the fact that many DNS systems do not support the use of TCP for
queries whatsoever, meaning that it just won't work with a large number of
systems in the first place. And even if it did work, it would result in
other simple lookups failing, essentially punishing everybody for the
benefit of a single application.
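
To make the TCP cost concrete: a client that asks for a large RRset over
UDP gets back a truncated response and has to start over with a TCP
connection. A minimal sketch, assuming the third-party dnspython library
and a placeholder server address:

    import dns.flags
    import dns.message
    import dns.query

    query = dns.message.make_query("example.com", "CERT")
    response = dns.query.udp(query, "192.0.2.53", timeout=5)
    if response.flags & dns.flags.TC:
        # The certs didn't fit in the UDP payload; redo the whole
        # exchange -- connection setup, query, answer, teardown --
        # over TCP.
        response = dns.query.tcp(query, "192.0.2.53", timeout=5)

Every truncated answer costs the server a TCP connection on top of the
query itself, which is exactly the burden that busy servers cannot
afford.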

 It would be the easiest way as apparently nobody is trying to build a
 global PKI infrastructure and LDAP people can't agree on a global
 standard to link each ldap server to each other, which DNS has...

There is some work underway to develop an LDAP infrastructure for the
Internet community, with DNS being used as a stub to kickstart the
process. That will get you the same thing as what you want, but without
crushing DNS as a result.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: [idn] Re: CDNC Final Comments on Last call of IDN drafts

2002-06-08 Thread Eric A. Hall


on 6/8/2002 4:00 PM Dave Crocker said the following:

 If you review the amount of time spent on developing this work, you will
 discover that it is a long way from premature.  Quite the contrary.  The
 work has gone on approximately two years longer than it needed to.

 the process has continued beyond achieving adequate rough consensus, 
 because the IETF has been heeding the concerns of a vocal minority of the 
 community, where most of the concerns of that vocal minority concern 
 technical issues that are outside of the scope of this working group.

If that were true we wouldn't be up to rev 9 of the core draft.

Gee, maybe some of the concerns are real, in spite of two years of harping
that we are finished.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: RFC3271 and independance of cyberspace

2002-05-01 Thread Eric A. Hall


Keith Moore wrote:

 And the downside of information capitalism is that it facilitates
 control over the many by those few who possess crucial pieces of
 information - the information produced by everyone else is nearly
 useless in comparison.  Ironically, what you call information
 capitalism encourages centralized control.

I think we are looking at this in two different ways.

Which of the following two options is more likely to feed starving
children in Africa:

  1) the Africans produce millions of pieces of valuable IPR

  2) we take Steamboat Willie away from Disney, making it valueless
 to everybody

Clearly, allowing people to generate their own value is going to improve
the global lot more so than mugging the IPR barons.

Where you cite centralized control, that control only exists because
people are not improving their own portfolios. If everybody who was
capable of doing so built up their own IPR collections, then the
centralized wealth would not be so disproportionate.

On the other hand, if everybody has none, then the benefits go to the
distributors and those who can leverage the material for other purposes,
while the creators are at a disadvantage. That direction absolutely favors
an imbalance, and it rewards the exploiters vs. the creators.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: RFC3271 and independance of cyberspace

2002-05-01 Thread Eric A. Hall


Keith Moore wrote:

 you falsely assume that millions of pieces of valuable IPR can be
 created out of thin air.

I make no such assumptions. It would certainly help things, for example,
if you were to donate your IPR to them. Which is better, that I donate my
IPR for them to sell, or that you take my IPR away so that it has zero
value to everybody?

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: RFC3271 and independance of cyberspace

2002-05-01 Thread Eric A. Hall


John Stracke wrote:

 As John Gilmore has pointed out, we are approaching an age when
 nanotech will mean that any material object can be copied as
 easily as we can currently copy digital information.

This discussion is leaving the realm of "modifications to RFCs". However,
there are two comments on this:

 1) The benefits of nanotech are promising but speculative, while the
issues with IPR are current and real.

 2) Assuming that the promise of nanotech is realized, issues with IPR
will be the least of our worries. Issues with physical property and
cash currency will drive any subsequent IPR issues. For example,
do we even need to have real estate in a world where production is
not bounded? Do we need companies to provide physical goods? Do we
need cash currency? How do we pay people to re-run fiber when a
train derails if we don't have cash, or if cash has no value, or
if there is no physical property to acquire? Would anybody do it
without pay? Would we want the quality of no-pay work that we
got? No, IPR will be the least of our problems.

As stated, this is OT.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: [idn] Re: 7 bits forever!

2002-04-06 Thread Eric A. Hall


Robert Elz wrote:
 
 Date:Sat, 6 Apr 2002 11:55:50 +0200 (MET DST)
 From:Gunnar Lindberg [EMAIL PROTECTED]
 Message-ID:  [EMAIL PROTECTED]
 
   | To me the real issue, however, seems to be in the applications, i.e.
   | in the resolvers
 
 If applications have problems with this, they almost certainly need to
 be fixed anyway - regardless of what the standards say should appear,
 the application has to deal with what actually exists, which need not
 have a lot in common with what is specified.

There are a couple of intertwined problems here. First off, the octet
values are not specified with an interpretation, so what is the
application supposed to use when it has to render the values? Conversely,
if the user specifies some non-ASCII characters, will these be the same
characters that are displayed by the remote end-point? The problem isn't
transferring the octets, it is converting the octets from display to
transfer, or vice versa. In that regard, the applications which mutilate
eight-bit names are somewhat broken (hyper-technical perspective), but to
be fair, it's not solely their fault. This is the exact same problem as
IDNA transliteration -- there's no telling what happens to the data after
it has been mutated for rendering, and vice versa.
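
The interpretation gap fits in a few lines of Python; the octet value
below is arbitrary:

    raw = b'\xeb'                     # one octet on the wire
    print(raw.decode('iso-8859-1'))   # 'ë' on a Latin-1 system
    print(raw.decode('cp437'))        # 'δ' on a DOS code page
    print(raw.decode('koi8-r'))       # 'К' on a KOI8-R system

Same octet, three renderings, and nothing in the DNS message says which
one was intended.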

This is where isolation becomes important. Keep the old apps out of the
new *interpreted* eight-bit namespace, using new APIs, new messages,
whatever it takes. If they screw up due to an interaction problem with
charsets, give them an error instead of giving them the wrong information
or allowing the wrong information to pollute other apps. I agree that it
would be nice if the old apps could be fixed (then we could go with
'just-use-UTF8'), but the truth of the matter is that there isn't any way
to tell if an old app *HAS* been fixed, other than to use some new
mechanism, as in the above.

So, long story short, there's no way to just lay this on the apps.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: [idn] Re: 7 bits forever!

2002-04-03 Thread Eric A. Hall


Erik Nordmark wrote:

 Instead of a brand new proposal I'd be more interested in finding out
 how you can address the DNSSEC issues I pointed out in
 draft-hall-dm-idns-00.txt

The DNSSEC problem is hard, for multiple reasons. Some zones can sign
dynamically, some can't. The state of the delegation parent also affects
the negotiation. The solution to this problem will be multi-faceted. The
OPT-IN work can probably help solve part of it. Small changes to DNSSEC
may be required to solve other parts of it. One of the changes I am
planning to make (and would encourage substitutes to use) is EDNS RCODE
responses in OPT RRs, such that each transaction in the query operation
has a distinct code for fallback purposes, and this will help with part of
the problem too.

Essentially, one RCODE value could be used whenever fallback processing
has occurred (the cache/server/whoever tells the client: "I am falling
back to legacy mode"), and this will reset the query timer on the client.
This means that queries can fall back as needed multiple times, without
triggering the client's query timer. This would also mean that DNSSEC can
be preserved if the final data has been signed in UTF-8 form, even if the
immediate parent is only ACE.

Another RCODE value could be used for "cannot fallback" and could occur
for any (valid) reason. Implementations would have to be prohibited from
refusing to fall back because "I don't want to," but they could refuse to
do it if they are under heavy load, if a configuration error prevents it,
or if DNSSEC prevents it, such as a delegation NS RR only being available
in ACE, and it is signed. In such an event, the resolver would report
"cannot fallback" and the original client would have to reissue the query
in ACE in order for the query to succeed.
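
For what it's worth, EDNS already has room for such codes: the OPT RR's
TTL field carries the high eight bits of a 12-bit extended RCODE. A
minimal sketch with the third-party dnspython library -- the two fallback
code points below are pure invention for illustration, nothing like them
is registered:

    import dns.message

    RCODE_FELL_BACK = 3841         # hypothetical "fell back to legacy"
    RCODE_CANNOT_FALL_BACK = 3842  # hypothetical "cannot fallback"

    query = dns.message.make_query("example.com", "A", use_edns=0)
    response = dns.message.make_response(query)

    # Values above 15 overflow the 4-bit header RCODE, so the library
    # spills the high bits into the OPT RR automatically:
    response.set_rcode(RCODE_CANNOT_FALL_BACK)
    assert response.rcode() == RCODE_CANNOT_FALL_BACK

A client that understands the code would reissue the query in ACE; one
that doesn't would presumably just see an error and fail as it does
today.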

Anyway, there are several angles to pursue here. I'm not really sure that
any of this can be done until DNSSEC stabilizes some. Frankly, I'd rather
somebody else try to solve this problem.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: 7 bits forever!

2002-03-22 Thread Eric A. Hall


Keith Moore wrote:

 if you can somehow solve the PKI problem, you can make SMTP immune to
 forgeries and have it provide end-to-end integrity.  so far, nobody
 has managed to do this.

I am of the opinion that a new messaging framework -- including the
protocol and the data-format -- is going to be pretty much required for
all of the interrelated problems to be solved. Once the mental commitment
to a new design model is made, most of the problems can be redefined as
solvable. In this regard, many of the current problems are due to the
current model. So change the model.

 if you can somehow figure out a way for anybody to type in a mailbox
 in any language on any keyboard, you can solve the i18n mailbox problem.

Certainly backwards-compatible access methods should be defined for the
mailbox names, just as they are necessary for the domain name.

 until then, there's very marginal value in replacing SMTP.  even then,
 it would probably be easier to upgrade SMTP than to replace it.

I think that depends on the approach. If we are only allowed to think of
ways to extend the current model into new territory while preserving 100%
backwards compatibility, we can abort right now. If instead we try to
build a new mail system that provides backwards compatibility ONLY when
communicating with a legacy system, it is much more feasible.

For example, let's say that a new message-transfer service is defined that
uses a new message structure, so that the e2e issues can really be dealt
with properly. In the new environment, perhaps the protocol only
exchanges multipart/container entities, and these have subordinate parts
of message/trace, message/headers and message/body, while "From" and "To"
and other 822-like headers are stored in the message/headers entity.
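
As a strawman, such a container is easy to mock up with Python's standard
email package. Every media type, address, and trace value here is
hypothetical -- none of these subtypes are registered:

    from email.mime.base import MIMEBase
    from email.mime.multipart import MIMEMultipart

    container = MIMEMultipart("container")

    trace = MIMEBase("message", "trace")
    trace.set_payload("relay-a.example.net; 2002-03-22T10:15:00Z\n")
    container.attach(trace)

    headers = MIMEBase("message", "headers")
    headers.set_payload("From: alice@example.com\nTo: bob@example.org\n")
    container.attach(headers)

    body = MIMEBase("message", "body")
    body.set_payload("Hello, world.\n")
    container.attach(body)

    print(container.as_string())

The point is only that the pieces an agent needs for downgrading live in
separately addressable entities.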

Mapping this to a legacy system is straightforward in principle: if the
new transport is not available on the destination, have the agent combine
portions of the message/headers entity with portions of the message/trace
entity, perform whatever conversions are needed, and then send the
message/body part over SMTP (possibly performing additional conversions
such as line-folding or base64).

So, yes, we still have to coexist with legacy systems, but 100%
compatibility at all times is no longer the root design objective. By
redefining the design criteria, we are liberated from the design
constraints that are imposed by SMTP.

[I know there are plenty of flaws in the above. This is not a proposal,
but rather an argument to redefine the design criteria.]

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: [idn] WG last call summary

2002-03-16 Thread Eric A. Hall


Keith Moore wrote:

 I doubt we'd reach consensus on that either, since many of us suspect
 that the following statement is closer to the truth:
 
 The on-the-wire encoding of IDNs is irrelevant; what matters is the
 behavior experienced by users.

Obviously. It's a question of getting there, however.

The final, ultimate argument in this logic-chain is that native
representation of the data in the protocol message is the ultimate design
solution, since it means less implementation work, fewer errors introduced
by wayward codecs, highest reusability by other services, etc. Managed
facades are not long-term solutions to anything, and in fact, tend to
introduce as many problems as they try to fix.

Obviously, direct encapsulation is harder with some services than others.
That doesn't mean that direct encapsulation should not be the ultimate
design target, it only means that some protocols will have to use a patch
(like IDNA) until they can either be extended or replaced.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: [idn] WG last call summary

2002-03-16 Thread Eric A. Hall


Dave Crocker wrote:

 The reference to searching nicely underscores that this entire thread
 pertains to local implementation issues,

Searching in the context of RFC2047 doesn't just apply to local issues.
Consider that IMAP SEARCH and NNTP XPAT both provide protocol mechanisms
to search server-based data. However, I don't know of any clients that
convert ë to =EB for those searches.

That's partly a local issue, but it is also a protocol issue, and in the
end it is also a service-wide issue. At some point, somebody has to say
"okay, we either mandate back-end conversion to allow for unencoded
searches, or we mandate encoded search behavior." Globally, it is a
user-interaction issue, where the facade failed, due to the lack of direct
integration.

This same kind of issue will apply equally to searches for an ë that is
embedded into a domain name component of an email address, and which has
been ACE-encoded. There are multiple secondary considerations here as
well. Searching for a single ë that has been encoded with ACE is not as
simple as searching for an =EB sequence with 2047, for example. I dare
say that substring searches are likely to prove impossible with
ACE-encoded strings.
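
Both failure modes fit in a few lines of Python, using only the standard
library (the iso-8859-1 charset choice is arbitrary):

    from email.header import Header

    rfc2047 = Header('ë', 'iso-8859-1').encode()
    ace = 'ë'.encode('idna').decode('ascii')

    print(rfc2047)           # =?iso-8859-1?q?=EB?=
    print(ace)               # xn--cda
    print('ë' in rfc2047)    # False
    print('ë' in ace)        # False

A server grepping its stored data for the raw character finds neither
form, and no substring of the original string survives into the ACE form
at all.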

Anyway, clearly these aren't just local implementation issues.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: [idn] WG last call summary

2002-03-16 Thread Eric A. Hall


Keith Moore wrote:

 First, it's naive to assume that UTF-8 will be the native
 representation on everybody's platform

Clarification. I did not say anything about UTF-8 on platforms, but
instead cited native representation in protocol messages. However,
if UTF-8 is the encoding of choice for a particular protocol's data or
message formats, then we know that UTF-8 is also going to be incorporated
into the necessary supporting functions at the participating end-points.
This doesn't mean that the whole box has to use UTF-8. Frankly, I don't
even think that's relevant. Instead, it is a question of whether or not
the related components (searching, as in the previous example) will be
likely to deal with UTF-8, rather than having to selectively graft an
extraneous encoding into select portions of that service in order to
provide simple functionality (as with 2047 and searching, again).

But this is also entirely irrelevant. By your argument that the transfer
encoding is irrelevant, I would like to hear your arguments as to how,
say, using EBCDIC to pass ASCII data around could possibly be seen as
reasonable design. Of course the native encodings are always best. The
fact that most of the apps are heading towards UTF-8 should tell us that
we should be designing for a long-term support infrastructure that
provides the data in the format it is going to be used in. Furthermore,
whenever the remaining services get upgraded or replaced, they should be
able to use something a little better than the best technology that 1968
money can buy.

 Second, the portion of IDNA that does ASCII encoding is such a trivial
 bit of code that the number of failures introduced by that code will
 pale in comparison to those introduced by the other code needed to
 handle 10646 (normalization, etc) which would be needed no matter what
 encoding were used.

Getting new problems in addition to shared problems is hardly an argument
in your favor. You've already conceded that 2047 has some problems with
transliteration goofiness, and that restricting it to unstructured data
limits the real damage that is caused. Are we to believe that extending
structured data with mandatory transliteration will not cause the problems
you thankfully avoided?

 Numerous examples demonstrate that transition issues are often
 paramount in determining whether a new technology can succeed.

I agree that transitional services are important. I also think that the
evidence shows that end-station layering works well when existing formats
are used as carriers for *subsets*, and when it is targeted to a specific
usage. That isn't what's being done here, though. Instead, well-known and
commonly-used data-types will get *extended* into broader and
incompatible forms by default, and it will happen purposefully and
accidentally. This is not transitional probing, it is knowing that stuff
will break and doing it anyway.

Cripes, why do we have to do it all in a big-bang? Can't we start with the
transfer encoding (no required upgrades for anything), incrementally add
transliteration where we know it will be safe and robust (some upgrades),
and then add UTF-8 for those newer services that can make use of it (some
more upgrades)? What is the problem with this?

 Simplicity is often a virtue, but IDN is inherently complex - it
 reflects the tremendous variety in the world's languages and
 writing systems.  And blind faith in some vague notion of
 cleanliness is a poor substitute for engineering analysis.

That's almost a fair shot. I do put a bunch of faith into transparent
data-types and structures. Dunno about blind. ASCII is always best when
it's encoded as ASCII, after all.

 reliability.  But the need to allow incremental upgrade of
 legacy application components strongly compels IDNA, and the
 incremental benefit of a native UTF-8 query interface beyond
 that of IDNA does not appear to justify the additional complexity.

The complexity required for a direct UTF-8 name-resolution service in
conjunction with simple passthru-everywhere is minor in comparison to the
complexity of transliterate-everywhere.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: [idn] WG last call summary

2002-03-16 Thread Eric A. Hall


Keith Moore wrote:

  I would like to hear your arguments as to how, say, using EBCDIC
  to pass ASCII data around could possibly be seen as reasonable
  design.
 
 it would be quite reasonable for applications that already had a
 deeply-wired assumption that character strings were in EBCDIC.

Mapped to the current topic, you are saying that "if the current apps were
already deeply-wired for IDNA, then it would be reasonable to use". Of
course, I would agree with that, if it were anything resembling the
current situation. Too bad nothing in the known universe makes any use
whatsoever of IDNA or (more importantly) its conversion routines.

  Of course the native encodings are always best.
 
 utter hogwash.  first, there is no single native encoding of UCS,

You already know that UTF-8 is the implicit default encoding for it here.

 second; there's no encoding of IDNs that is native to the vast majority
 of deployed applications.

Keith, there's no IDNA encodings of IDNs that are native to *any*
applications, either current or scheduled.

[big snip]

 if and when the rest of the protocol uses UTF-8, then the worst that
 happens is that there's an extra layer of encoding that happens before
 a DNS query is sent.

No, the worst that happens is that well-known and widely-used data-types
get extended by a third-party process, and then get reused.

  and having the client do encoding
 prior to lookup is less complex than the extra cruft that's required
 to take advantage of having two kinds of DNS queries without introducing
 a significant new source of failures or delays.

There is definitely some transitional pain in getting to direct UTF-8, but
it is non-fatal here, there, and in-between. It doesn't rewrite data, it
doesn't require a forklift upgrade of the entire Internet for rendering
purposes, and the entire Industry would be better off when it was done.
None of that can be said for IDNA transliteration. Well, the pain part
still applies.

 you're ignoring the set of new problems that comes with either
 a) providing multiple DNS query interfaces or

Also necessary for IDNA sooner or later, mark my words.

 b) failing to provide an ascii-compatible encoding of IDNs that can be
tolerating by existing apps

To repeat myself for the record, I think that the IDNA transfer-encoding
portion is necessary to provide legacy applications with access to
resources in the IDN namespace. What I am specifically arguing against is
transliteration. It is just foolish to expect that well-known and
widely-used data-types can survive being extended beyond their spec and
still function as expected. I have even provided you with evidence
to the contrary.

Look, if a private venture were to come out pitching an untested add-on
that rewrote every domain name which appeared on screen, the greybeards
would trot out to make measured statements cautioning against mangling of
well-known datatypes. What's the difference here?

Oh, shit, we *ARE* the greybeards!

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: [idn] WG last call summary

2002-03-15 Thread Eric A. Hall


Dave Crocker wrote:

 The transition issues for IDNA that actually pertain to Internet
 protocol behavior have been described repeatedly and are identical
 with the kind of changes that were required to adopt MIME.

I can see why you'd want to say that, but it's a poor analogy.

MIME imposed structure on unstructured data by using a subset of values,
and specifically limiting itself to particular technologies. I mean, even
RFC2047 only allowed some particular kinds of data (mostly unstructured)
to be extended, and marked structured data as off-limits. Conversely, IDNA
modifies structured data-elements, and applies to all technologies.
Resistance is futile, you will be transliterated.

A much better analogy for IDNs in general is IPv6, where every
participating protocol, data-element and application that uses the
previous data-format has to be modified if it wants to use the new
data-format. But since IDNA seeks to avoid these costs, in the end, the
protocols, data-elements and applications don't actually obtain any real
benefits from internationalization; it's all just a facade, taped onto the
surface of the Internet, sometimes in harmful ways given the artifacts of
mandatory transliteration. In this regard, the best analogy would be what
IPv6 *DIDN'T* do: pass IPv6 inside of IPv4 options and call the work
finished. That's pretty much what IDNA does.

As for breakage, it will undoubtedly occur, including in those places that
Dan mentions, but it will also happen inside protocols and namespaces.
Message-IDs will absolutely get corrupted and reinserted into the
messaging system, Kerberos and NetBIOS will get broken and then get panic
fixes, etc.

 The example given had nothing to do with Internet protocols, but rather
 with how to transition the software on a computer system to support a
 richer set of characters.  That is, of course, an interesting problem,
 but it is not part of IETF scope.

Dan's argument is actually the easiest to deal with, since it can be
addressed with the judicious and cautious use of MAY along with the
appropriate warnings.

Extending all known data-structures which happen to make use of DNS domain
names as key elements is the larger problem, and it is certainly within
the scope of the IETF.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: [idn] WG last call summary

2002-03-15 Thread Eric A. Hall


Keith Moore wrote:

 because at the time we hadn't figured out how to solve the various
 problems associated with having multiple representations of
 machine-readable text.
 
 today we have a much better idea for how to do that.

At least IDNA only has one conversion form. Still, the point is valid that
something as precise and scope-limited as 2047 only worked well within the
scope it was intended for, namely presentation of unstructured header
fields within a specific service. I mean, I don't know of any email
clients that perform 2047 encoding on search string input, do you?

 I agree with Dave that IDNA is very similar to things we did with MIME
 in both the header and the message body

The notion of transliterate-for-display-so-we-don't-upgrade is similar,
but the scopes are completely different. If you are using a 2047-capable
email client, copy-n-paste between From: and To: is practically guaranteed
to work out. But all IDN approaches introduce copy-n-paste between web
browsers, email clients, WHOIS clients, ad nauseam. That also introduces
all kinds of other issues, like, is that i18n URL I copied actually a
legal URL? [not today, no] That's why this is closer to IPv6 than MIME.

Just to be clear, my position on this subject remains that IDNA is a
necessary and useful transfer encoding for legacy systems that need access
to i18n domain name data, but that the mandatory transliteration of all
domain names as any kind of solution to the problem is just reckless.
These problems won't go away until they are accepted and designed for.

-- 
Eric A. Hall             http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/