Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-29 Thread Phillip Hallam-Baker
The reason I chose RFC2821 is that there is really no basis on which the
earlier document is better.

RFC821 was accepted when the process was far less stringent. The working
group that produced 2821 was chartered to improve the documentation of SMTP
rather than redesign the protocol.

What people are asking for is that the transition from one standards state
to another actually mean something. If we think of the world as a huge
collection of loosely coupled, probabilistic communicating finite state
machines (similar to CSP but without rendezvous), I cannot think of any
system in that world that would undergo a state transition on receiving the
message "RFC2821 has gone from DRAFT to STANDARD". Nor can I think of any
system where the probability of a state transition would be affected by
whether RFC2821 was in the one state or the other.


What I and others conclude from this fact is that the current system as
documented is broken. There is no point in fixing the practice to match the
theory because the theory has been rejected FOR GOOD REASON.

The current mismatch between theory and practice is hurting the IETF. I
have been involved in the decision of where to take quite a few standards
proposals now. And it must be said that the fact that nothing now becomes an
IETF standard until that fact is irrelevant causes resistance.


On Mon, Jun 28, 2010 at 3:35 PM, Martin Rex m...@sap.com wrote:

 Phillip Hallam-Baker wrote:
 
  The fact remains that RFC 821 has the STANDARD imprimatur and the better
  specification that was intended to replace it does not.
 
  It seems pretty basic to me that when you declare a document Obsolete it
  should lose its STANDARD status. But under the current system that does
 not
  happen.
 
  This situation has gone on now for 15 years. Why would anyone bother to
 put
  time and effort into progressing documents along the three-step track when
  most of the documents at the highest rank are actually obsolete?
 
  What does STANDARD actually mean if the document it refers to is quite
  likely obsolete?


 To me it looks like "Obsolete:" has been used with quite different
 meanings across RFCs, and some current uses might be inappropriate.

 Although it's been more than two decades since I read rfc821 (and
 none of the successors), I assume that all those RFCs describe _the_same_
 protocol (SMTP) and not backwards-incompatible revisions of a protocol
 family (SMTPv1,v2,v3).  I also would assume that you could implement an
 MTA with rfc2821 alone (i.e. without ever reading rfc821) that is still
 fully interoperable with an implementation of rfc821.  So for a large
 part we are looking at a revised specification of the same single protocol,
 and the term "obsoletes" should indicate that you can create an
 implementation of the protocol based solely on a newer version of the
 specification describing it and remain fully interoperable with an
 implementation of the old spec (at least when using the mandatory
 to implement plus non-controversial recommended protocol features).


 For RFCs that create backwards-incompatible protocol revisions, and
 in particular when you still need the old specification to implement
 the older protocol revision, there is *NO* obsoletion of the old
 protocol by publication of the new protocol.  Examples where this
  was done correctly:  IPv4->IPv6, LDAPv2->LDAPv3, HTTPv1.0->HTTPv1.1.

 A sensible approach to obsolete a previous protocol version is to
 reclassify it as historic when the actual usage in the real world
  drops to insignificant levels and describe/publish that move in an
 informational RFC (I assume that is the intent of rfc-3494).


  Examples of clearly inappropriate "Obsoletes:" are the
 TLS protocol revisions (v1.1:rfc-4346 and v1.2:rfc-5246) which describe
 backward-incompatible protocol revisions of TLS and where the new RFCs
 specify only the behaviour of the new protocol version and even
 fail to clearly identify the backwards-incompatible changes.


 And if you look at the actual use of TLS protocol versions in the
 wild, the vast majority is using TLSv1.0, there is a limited use
 of TLSv1.1 and very close to no support for TLSv1.2.

 (examples https://www.ssllabs.com/ssldb/index.html
  http://www.ietf.org/mail-archive/web/tls/current/msg06432.html


 What irritates me slightly is that I see this announcement

  https://www.ietf.org/ibin/c5i?mid=6&rid=49&gid=0&k1=935&k2=34176&tid=1277751536

 which is more of a bashing of existing and widely used versions
 of SSLv3 and TLS, instead of an effort to improve _one_ of the
 existing TLS protocol revisions and to advance it on the standards
 maturity level and make it more easily acceptable to the marketplace.

 Adding explicit indicators for backwards-incompatible protocol changes
 in rfc-5246 might considerably facilitate the assessment of just how many
 changes are necessary to an implementation of a predecessor version
 of TLSv1.2.  Btw., the "Signature Algorithms" extension (Section 7.4.1.4.1)
 appears to be a big mess and fixing it wouldn't hurt.

Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-28 Thread Phillip Hallam-Baker
The fact remains that RFC 821 has the STANDARD imprimatur and the better
specification that was intended to replace it does not.

It seems pretty basic to me that when you declare a document Obsolete it
should lose its STANDARD status. But under the current system that does not
happen.

This situation has gone on now for 15 years. Why would anyone bother to put
time and effort into progressing documents along the three-step track when
most of the documents at the highest rank are actually obsolete?


What does STANDARD actually mean if the document it refers to is quite
likely obsolete?


On Fri, Jun 25, 2010 at 4:35 PM, Yoav Nir y...@checkpoint.com wrote:

 On Thursday, June 24, 2010 22:01 Phillip Hallam-Baker wrote:

 snip/
  We currently have the idiotic position where RFC821 is a full standard
 and RFC2821 which obsoletes it is not.

 Why is this idiotic? RFC 821 needed to be obsoleted. It had some features
 that needed to be removed, and some things that may have been appropriate in
 1982, but no longer so in 2001. "Proposed", "Draft" and "Full" refer to the
 maturity of a standard, not to how well it fits the current Internet. One
 could argue that 821 was very mature, because it needed a revision only
 after 19 years.

 Just because the old standard needs replacing, does not automatically mean
 that the new standard is just as mature as the old one.

 It does, however, mean that the distinction is meaningless to implementers.
 In 2001 or 2002 we would expect someone implementing SMTP to implement 2821,
 a proposed standard, rather than 821, a full standard. While implementing a
 full standard gives you more assurance about the quality of the spec, it
 doesn't mean that they are not going to obsolete it ever.




-- 
Website: http://hallambaker.com/


Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-28 Thread Martin Rex
Phillip Hallam-Baker wrote:
 
 The fact remains that RFC 821 has the STANDARD imprimatur and the better
 specification that was intended to replace it does not.
 
 It seems pretty basic to me that when you declare a document Obsolete it
 should lose its STANDARD status. But under the current system that does not
 happen.
 
 This situation has gone on now for 15 years. Why would anyone bother to put
 time and effort into progressing documents along the three-step track when
 most of the documents at the highest rank are actually obsolete?
 
 What does STANDARD actually mean if the document it refers to is quite
 likely obsolete?


To me it looks like "Obsolete:" has been used with quite different
meanings across RFCs, and some current uses might be inappropriate.

Although it's been more than two decades since I read rfc821 (and
none of the successors), I assume that all those RFCs describe _the_same_
protocol (SMTP) and not backwards-incompatible revisions of a protocol
family (SMTPv1,v2,v3).  I also would assume that you could implement an
MTA with rfc2821 alone (i.e. without ever reading rfc821) that is still
fully interoperable with an implementation of rfc821.  So for a large
part we are looking at a revised specification of the same single protocol,
and the term "obsoletes" should indicate that you can create an
implementation of the protocol based solely on a newer version of the
specification describing it and remain fully interoperable with an
implementation of the old spec (at least when using the mandatory
to implement plus non-controversial recommended protocol features).


For RFCs that create backwards-incompatible protocol revisions, and
in particular when you still need the old specification to implement
the older protocol revision, there is *NO* obsoletion of the old
protocol by publication of the new protocol.  Examples where this
was done correctly:  IPv4->IPv6, LDAPv2->LDAPv3, HTTPv1.0->HTTPv1.1.

A sensible approach to obsolete a previous protocol version is to
reclassify it as historic when the actual usage in the real world
drops to insignificant levels and describe/publish that move in an
informational RFC (I assume that is the intent of rfc-3494).


Examples of clearly inappropriate "Obsoletes:" are the
TLS protocol revisions (v1.1:rfc-4346 and v1.2:rfc-5246) which describe
backward-incompatible protocol revisions of TLS and where the new RFCs
specify only the behaviour of the new protocol version and even
fail to clearly identify the backwards-incompatible changes.


And if you look at the actual use of TLS protocol versions in the
wild, the vast majority is using TLSv1.0, there is a limited use
of TLSv1.1 and very close to no support for TLSv1.2.

(examples https://www.ssllabs.com/ssldb/index.html
 http://www.ietf.org/mail-archive/web/tls/current/msg06432.html
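
One way to see this concretely is to check which protocol version a given
server actually negotiates.  A minimal Python sketch (the host name is a
placeholder, and certificate verification is disabled only because this is
a survey probe, not real traffic):

    import socket, ssl

    HOST = "www.example.org"             # placeholder; substitute the server to probe

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # survey only; never disable checks for real use
    with socket.create_connection((HOST, 443), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            print(tls.version())         # e.g. "TLSv1" or "TLSv1.2"

Run against a list of servers, this gives a rough census of which protocol
revisions are actually deployed.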


What irritates me slightly is that I see this announcement
https://www.ietf.org/ibin/c5i?mid=6&rid=49&gid=0&k1=935&k2=34176&tid=1277751536

which is more of a bashing of existing and widely used versions
of SSLv3 and TLS, instead of an effort to improve _one_ of the
existing TLS protocol revisions and to advance it on the standards
maturity level and make it more easily acceptable to the marketplace.

Adding explicit indicators for backwards-incompatible protocol changes
in rfc-5246 might considerably facilitate the assessment of just how many
changes are necessary to an implementation of a predecessor version
of TLSv1.2.  Btw., the "Signature Algorithms" extension (Section 7.4.1.4.1) appears
to be a big mess and fixing it wouldn't hurt.

MUST requirements in a spec ought to be strictly limited to features
that are absolutely necessary for interoperability _and_ for the
existing market, not just nice to have at some time in the future.
The only TLS extension that deserves a MUST is described
in rfc-5746 (TLS extension RI).


One of the reasons why some working groups keep recycling protocol
revisions at Proposed rather than advancing a widely deployed protocol
to Draft is that "the better is the enemy of the good".


-Martin


Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-28 Thread ned+ietf
 Phillip Hallam-Baker wrote:
 
  The fact remains that RFC 821 has the STANDARD imprimatur and the better
  specification that was intended to replace it does not.
 
  It seems pretty basic to me that when you declare a document Obsolete it
  should lose its STANDARD status. But under the current system that does not
  happen.
 
  This situation has gone on now for 15 years. Why would anyone bother to put
  time and effort into progressing documents along the three-step track when
  most of the documents at the highest rank are actually obsolete?
 
  What does STANDARD actually mean if the document it refers to is quite
  likely obsolete?

Simple: It means we're letting technical correctness get in the way of clarity.

 To me it looks like "Obsolete:" has been used with quite different
 meanings across RFCs, and some current uses might be inappropriate.

 Although it's been more than two decades since I read rfc821 (and
 none of the successors), I assume that all those RFCs describe _the_same_
 protocol (SMTP) and not backwards-incompatible revisions of a protocol
 family (SMTPv1,v2,v3).

That assumption is incorrect. The differences are minor, but there are
differences - a couple of things, like EHLO instead of HELO or periods in
unquoted phrases, are allowed now, whereas lots of stuff that used to be
allowed has been removed.

The protocols don't even have the same names in common usage. The term ESMTP
is often used to refer to the SMTP variant described in RFC 5321 (RFC 2821 is
obsolete, BTW) that uses EHLO, and SMTP refers to the original RFC821 
protocol.
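
To make the EHLO/HELO split concrete: an ESMTP client opens with EHLO and
drops back to the original RFC 821 HELO only if the peer rejects it.  A
minimal Python sketch (the host name is a placeholder):

    import smtplib

    HOST = "mail.example.org"            # placeholder MTA

    with smtplib.SMTP(HOST, 25, timeout=10) as s:
        code, _ = s.ehlo()               # ESMTP greeting (RFC 2821/5321)
        if not (200 <= code < 300):
            code, _ = s.helo()           # pre-ESMTP peer: original RFC 821 greeting
        print("peer speaks ESMTP:", bool(s.does_esmtp))

That fallback is the compatibility seam between the two protocols: an
RFC 821-only server simply rejects EHLO as an unknown command.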

 I also would assume that you could implement an
 MTA with rfc2821 alone (i.e. without ever reading rfc821), that is still
 fully interoperable with an implementation of rfc821.

Fully interoperable? Not even close. A lot of stuff has been removed from RFC
5321. If someone attempts to use those RFC 821 features, things aren't going to
interoperate.

Now, the consensus is that those features are useless, dangerous, rarely if
ever implemented, or sometimes all three, and we're better off for their
absence, but that doesn't make your assertion valid.

 So for a large
 part we are looking at a revised specification of the same single protocol,

It is a revised specification and the *service* it provides remains the same.
But the protocol has changed. Some things have been removed; other things have
been added. There's fallback where it is necessary, but there are also cases
where functionality has simply been removed.

 and the term obsoletes should indicate that you can create an
 implementation of the protocol based solely on a newer version of the
 specification describing it and remain fully interoperable with an
 implementation of the old spec (at least when using the mandatory
 to implement plus non-controversial recommended protocol features).

That would be an absolutely absurd requirement to impose. Full interoperability
is far too high a bar.

 For RFCs that create backwards-incompatible protocol revisions, and
 in particular when you still need the old specification to implement
 the older protocol revision, there is *NO* obsoletion of the old
 protocol by publication of the new protocol.  Examples where this
  was done correctly:  IPv4->IPv6, LDAPv2->LDAPv3, HTTPv1.0->HTTPv1.1.

That's also absurd and overly constraining. These choices aren't amenable to
being codified as a fixed set of rules. Context has to be considered.

 A sensible approach to obsolete a previous protocol version is to
 reclassify it as historic when the actual usage in the real world
  drops to insignificant levels and describe/publish that move in an
 informational RFC (I assume that is the intent of rfc-3494).

This approach may indeed be appropriate in some cases. There are bound
to be cases where it is inappropriate, though.

  Examples of clearly inappropriate "Obsoletes:" are the
 TLS protocol revisions (v1.1:rfc-4346 and v1.2:rfc-5246) which describe
 backward-incompatible protocol revisions of TLS and where the new RFCs
 specify only the behaviour of the new protocol version and even
 fail to clearly identify the backwards-incompatible changes.

 And if you look at the actual use of TLS protocol versions in the
 wild, the vast majority is using TLSv1.0, there is a limited use
 of TLSv1.1 and very close to no support for TLSv1.2.

 (examples https://www.ssllabs.com/ssldb/index.html
  http://www.ietf.org/mail-archive/web/tls/current/msg06432.html

Whereas, in the case of email, the vast majority of MTAs now support ESMTP and
the *overwhelming* majority of MUAs support MIME.

 What irritates me slightly is that I see this announcement
  https://www.ietf.org/ibin/c5i?mid=6&rid=49&gid=0&k1=935&k2=34176&tid=1277751536

 which is more of a bashing of existing and widely used versions
 of SSLv3 and TLS, instead of an effort to improve _one_ of the
 existing TLS protocol revisions and to advance it on the standards
  maturity level and make it more easily acceptable to the marketplace.

Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-28 Thread Douglas Otis

On 6/28/10 12:35 PM, Martin Rex wrote:

To me it looks like "Obsolete:" has been used with quite different
meanings across RFCs, and some current uses might be inappropriate.

Although it's been more than two decades since I read rfc821 (and
none of the successors), I assume that all those RFCs describe _the_same_
protocol (SMTP) and not backwards-incompatible revisions of a protocol
family (SMTPv1,v2,v3).  I also would assume that you could implement an
MTA with rfc2821 alone (i.e. without ever reading rfc821) that is still
fully interoperable with an implementation of rfc821.  So for a large
part we are looking at a revised specification of the same single protocol,
and the term "obsoletes" should indicate that you can create an
implementation of the protocol based solely on a newer version of the
specification describing it and remain fully interoperable with an
implementation of the old spec (at least when using the mandatory
to implement plus non-controversial recommended protocol features).


For RFCs that create backwards-incompatible protocol revisions, and
in particular when you still need the old specification to implement
the older protocol revision, there is *NO* obsoletion of the old
protocol by publication of the new protocol.  Examples where this
was done correctly:  IPv4->IPv6, LDAPv2->LDAPv3, HTTPv1.0->HTTPv1.1.

A sensible approach to obsolete a previous protocol version is to
reclassify it as historic when the actual usage in the real world
drops to insignificant levels and describe/publish that move in an
informational RFC (I assume that is the intent of rfc-3494).


Examples of clearly inappropriate "Obsoletes:" are the
TLS protocol revisions (v1.1:rfc-4346 and v1.2:rfc-5246) which describe
backward-incompatible protocol revisions of TLS and where the new RFCs
specify only the behaviour of the new protocol version and even
fail to clearly identify the backwards-incompatible changes.


And if you look at the actual use of TLS protocol versions in the
wild, the vast majority is using TLSv1.0, there is a limited use
of TLSv1.1 and very close to no support for TLSv1.2.

(examples https://www.ssllabs.com/ssldb/index.html
  http://www.ietf.org/mail-archive/web/tls/current/msg06432.html


What irritates me slightly is that I see this announcement
https://www.ietf.org/ibin/c5i?mid=6&rid=49&gid=0&k1=935&k2=34176&tid=1277751536

which is more of a bashing of existing and widely used versions
of SSLv3 and TLS, instead of an effort to improve _one_ of the
existing TLS protocol revisions and to advance it on the standards
maturity level and make it more easily acceptable to the marketplace.

Adding explicit indicators for backwards-incompatible protocol changes
in rfc-5246 might considerably facilitate the assessment of just how many
changes are necessary to an implementation of a predecessor version
of TLSv1.2.  Btw., the "Signature Algorithms" extension (Section 7.4.1.4.1) appears
to be a big mess and fixing it wouldn't hurt.

MUST requirements in a spec ought to be strictly limited to features
that are absolutely necessary for interoperability _and_ for the
existing market, not just nice to have at some time in the future.
The only TLS extension that deserves a MUST is described
in rfc-5746 (TLS extension RI).


One of the reasons why some working groups keep recycling protocol
revisions at Proposed rather than advancing a widely deployed protocol
to Draft is that "the better is the enemy of the good".


"Make everything as simple as possible, but not simpler." -- Albert Einstein

The current scheme is already too simple: too simple because the resulting
utility does not justify the promotion effort.  Reducing the number of status
categories will not greatly reduce the time spent advancing related RFCs to
the same level, where often the originating WG will have closed.  Changes
that impact a large number of interrelated protocols will cause as much
disruption as they provide utility.  Rather than providing stability, efforts
at simplification are likely to inject as many errors as they correct.


Four years ago, an effort to create a "cover sheet" for standard
protocols was attempted.  After gaining majority support within the WG,
the WG was subsequently closed without a clear explanation for the IESG
push-back.  Often no single RFC or STD encapsulates a standard.  In
addition, references to the wider set depend upon an evolving collection of
numbered RFCs, where tracking these relationships often requires complex
graphs.


With a cover-sheet approach, core elements are described separately
from "extension", "guidance", "replaces", "experimental", and
"companion" elements.  Many overlapping protocols can be defined as
different associations of RFCs.  This scheme lessens dependence on a
concise relationship being described in each RFC, or on attempting to
resolve relationships based upon roughly maintained categories that
frequently offer little insight or fail to reflect actual use.


Cover sheets that 

Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-27 Thread Yoav Nir

On Jun 26, 2010, at 12:56 AM, Phillip Hallam-Baker wrote:

 The fact remains that RFC 821 has the STANDARD imprimatur and the better 
 specification that was intended to replace it does not.

Yes, but most of the RFC repositories, including 
http://tools.ietf.org/html/rfc821 show "Obsoleted by: 2821" right there at the 
top next to the word STANDARD. Anyone looking at this RFC now (as opposed to 
10 years ago) would immediately know that while this *was* a standard, it is 
now obsolete.

This raises another question. What does "obsolete" mean?  RFC 821 and RFC 2821 
describe the same standard. Upgrading implementations to comply with RFC 2821 
was not supposed to break any connectivity. They describe the same protocol, so 
unless you are interoperating with a peer that implemented some deprecated 
features, you're good. OTOH, looking at RFC 2409, it says that RFC 4306 
obsoletes it. But RFC 2409 is IKEv1, while RFC 4306 is IKEv2.  If you had 
upgraded an implementation to comply with RFC 4306 *instead of* RFC 2409 in 
2005, you would not be able to finish an IKE exchange at all. If you need to 
implement IKEv1 (that is still much more widely used than IKEv2), the RFC to 
look at is 2409, not 4306.  IMO this is a totally different meaning of 
"obsolete".

 It seems pretty basic to me that when you declare a document Obsolete it 
 should lose its STANDARD status. But under the current system that does not 
 happen.

It's true that under the current system RFCs never change. Even advancing them 
to a higher level gives them a different number.

 This situation has gone on now for 15 years. Why would anyone bother to put 
 time an effort into progressing documents along the three step track when 
 most of the documents at the highest rank are actually obsolete?

I don't think there's any incentive to do so.  RFC 4478 has been at 
Experimental for 4 years, with at least 3 independent implementations. But 
when I thought it was time to advance it to PS, I was told (by an AD) "why 
bother?". It certainly didn't stop implementers from implementing it.

Also, it seems that in the last 4 years, the IETF has published only 3 full 
standards, 18 draft standards, and 740 proposed standards. I think this tells 
us that there is very little incentive for advancing a standard.
http://www.rfc-editor.org/std-index.html
http://www.rfc-editor.org/fyi-index.html

 What does STANDARD actually mean if the document it refers to is quite likely 
 obsolete?



Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-27 Thread SM

Hi Yoav,
At 00:36 27-06-10, Yoav Nir wrote:
Yes, but most of the RFC repositories, including 
http://tools.ietf.org/html/rfc821 show Obsoleted by: 2821 right 
there at the top next to the word STANDARD. Anyone looking


Yes.

at this RFC now (as opposed to 10 years ago) would immediately know 
that while this *was* a standard, it is now obsolete.


If you then go to RFC 2821, you will see that it is a Proposed 
Standard which has been obsoleted by RFC 5321 (Draft 
Standard).  There are implementations out there that are STD 10 
compliant.  There are still a lot of implementations out there that 
are RFC 2821 compliant.  I'll ignore the differences between these 
specifications.  Most people mention RFC 2821 when it comes to 
SMTP.  Do I implement RFC 2821 or RFC 5321?


Let's try another example.  RFC 4871 was published in May 2007 as 
Proposed Standard.  It was updated by RFC 5672 in August 2009.  Do I 
want to implement a specification that could be changed in two years 
or do I stick to the Draft or Internet Standard for maturity?


This raises another question. What does obsolete mean?  RFC 821 
and RFC 2821 describe


It means that there is a newer version of the specification (RFC).

 the same standard. Upgrading implementations to comply with RFC 
2821 was not supposed to
 break any connectivity. They describe the same protocol, so unless 
you are interoperating with a peer that implemented some deprecated 
features, you're good. OTOH,


In Section 3.3 of RFC 821:

 The VRFY and EXPN commands are not included in the minimum
  implementation (Section 4.5.1), and are not required to work
  across relays when they are implemented.

In Section 3.5.2 of RFC 2821:

  Server implementations SHOULD support both VRFY and EXPN.

In Section 3.5.2 of RFC 5321:

  Server implementations SHOULD support both VRFY and EXPN.

If I know which specification is widespread, I can decide whether I 
should be able to rely on VRFY being available or not.  It is also 
unlikely that VRFY is removed as the Standard is updated.  If the 
IETF decides to do that anyway, the specification is recycled at a 
lower maturity level.  In practice, it does not work like that for 
reasons I won't get into.
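
To make that concrete, probing a server for VRFY support looks roughly like
the following Python sketch (the host name is a placeholder); in practice
many servers answer 252 or disable the command outright, the SHOULD
notwithstanding:

    import smtplib

    HOST = "mail.example.org"                 # placeholder MTA

    with smtplib.SMTP(HOST, 25, timeout=10) as s:
        s.ehlo()
        code, reply = s.verify("postmaster")  # issues VRFY postmaster
        if code == 250:
            print("verified:", reply.decode(errors="replace"))
        elif code == 252:
            print("server accepts mail but will not verify addresses")
        else:                                 # e.g. 500/502: unsupported or disabled
            print("VRFY unavailable, code", code)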


It's true that under the current system RFCs never change. Even 
advancing them to a higher level gives them a different number.


Actually no, the RFC only gets a different number if the text is changed.

I don't think there's any incentive to do so.  RFC 4478 has been at 
Experimental for 4 years, with at least 3 independent 
implementations. But when I thought it was time to advance it to PS, 
I was told (by an AD) why bother?. It certainly didn't stop 
implementers from implementing it.


If the specification has mind share, it will be implemented even if 
the RFC is Experimental.  If RFC 4478 fulfills the requirements for 
PS, it could be advanced unless it can be shown that it is 
technically unsound.  The PS could mean that someone bothered to look 
up the three independent implementations to see whether they are 
interoperable.  It might also help the author determine whether the 
text is clearly understood.


Also, it seems that in the last 4 years, the IETF has published only 
3 full standards, 18 draft standards, and 740 proposed standards. I 
think this tells us that there is very little incentive for 
advancing a standard.


Or it might mean that the ADs are not interested in seeing a 
specification advanced through the standards track.


If there isn't any motivation to advance a standard, we might as 
well publish these RFCs as Informational.


Regards,
-sm 




Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-27 Thread Bill McQuillan
It seems to me that this discussion is conflating two related but distinct
things: protocols and specifications.

The IETF is concerned with producing and refining *protocols*; however the
work products are specifications (RFCs).

A *protocol* such as SMTP is very mature and thus can be used by many
different parties to enable e-mail exchange with high confidence in its
interoperability. For example, SMTP has matured over the last several
decades by adopting DNS and MX routing, creating a mechanism for allowing
enhancements (EHLO), dropping unhelpful features such as SEND and source
routing, separating Submission from forwarding (SUBMIT), among others.

The *specifications* for SMTP (RFC821, etc.) have been of varying quality
measured by their accuracy in describing the *protocol*. The goal of a
specification should be its capability for allowing someone to implement
the protocol accurately, not whether the protocol itself is well designed.

Therefore I would suggest that the SMTP protocol remains a Full Standard
even while successor specifications to RFC821, which are trying to describe
it, are cycling through levels of wordsmithing. Although the words
"Proposed" and "Draft" seem reasonable to describe these editing cycles, I
am not sure that "Full" quite captures the goal of this process.

For what it's worth.

-- 
Bill McQuillan mcqui...@pobox.com



Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-25 Thread Phillip Hallam-Baker
I can't remember offhand if DNS got to full standard or not; let's say for
the sake of argument that it did.

If we want to make a significant change to DNS, such as yanking out some
features that were never used, we have a minimum of about six years before
the change can be made.

First we would have to get an ID written and progress it through the working
group; that would be two years. Then we would have to get the proposed
standard to draft, a minimum of two years. Then we would have to go from
draft to standard, which has not happened to a DNS spec since the fall of
the Soviet Union.


We currently have the idiotic position where RFC821 is a full standard and
RFC2821, which obsoletes it, is not.



On Tue, Jun 22, 2010 at 11:16 AM, Andrew Sullivan a...@shinkuro.com wrote:

 On Tue, Jun 22, 2010 at 10:12:13AM +0200, Eliot Lear wrote:

  Question #1: Is such a signal needed today?  If we look at the 1694
  Proposed Standards, are we seeing a lack of implementation due to lack
  of stability?  I would claim that there are quite a number of examples
  to the contrary (but see below).

 In connection with that question, I'll observe that a very large
 number of the DNS protocol documents have not advanced along the
 standards track, and efforts to do something about that state of
 affairs have not been very successful.  In addition, any time there is
 an effort to make a change to anything already deployed is met by
 arguments that we shouldn't change the protocol in even the slightest
 detail, because of all the deployed code.  (I've been known to make
 that argument myself.)

 I don't know whether the DNS is special in this regard, though I have
 doubts.

 A

 --
 Andrew Sullivan
 a...@shinkuro.com
 Shinkuro, Inc.




-- 
Website: http://hallambaker.com/


RE: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-25 Thread Yoav Nir
On Thursday, June 24, 2010 22:01 Phillip Hallam-Baker wrote:

snip/
 We currently have the idiotic position where RFC821 is a full standard and 
 RFC2821 which obsoletes it is not.

Why is this idiotic? RFC 821 needed to be obsoleted. It had some features that 
needed to be removed, and some things that may have been appropriate in 1982, 
but no longer so in 2001. "Proposed", "Draft" and "Full" refer to the maturity 
of a standard, not to how well it fits the current Internet. One could argue 
that 821 was very mature, because it needed a revision only after 19 years.

Just because the old standard needs replacing, does not automatically mean that 
the new standard is just as mature as the old one.

It does, however, mean that the distinction is meaningless to implementers. In 
2001 or 2002 we would expect someone implementing SMTP to implement 2821, a 
proposed standard, rather than 821, a full standard. While implementing a full 
standard gives you more assurance about the quality of the spec, it doesn't 
mean that they are not going to obsolete it ever.



Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-23 Thread Eliot Lear
 Hi Ran, and thanks for your reply.

There are two separate issues that we need to distill.  First, what to
do about draft-housley-two-maturity-levels-00, and second, how do we take
input to improve the overall process?

I have not really come down on one side or the other on this draft
(yet).  To be sure, two maturity levels seem better than three, and as
you know, I've proposed a single maturity level in the past, so to me,
the draft goes in the right direction.  However, I do not know how many
times we get to change this sort of procedure, and I believe the
community and IESG choice could be better informed than it is today. 
Having been involved in NEWTRK, and having produced what I think was the
only output from that group, in terms of RFCs or processes, I think I
know about which I write, when I say this community can become a bit of
an echo chamber, and could use a bit of formal academic input. 
Conveniently, there are researchers in this area.  This is an even
stronger reason for me not to state an opinion about whether to
advance the draft.

As to the questions I asked, you and I obviously hold very different
views.  In the end, it is of course for researchers (who are the real
target of my questions) to ask what questions that they think might be
telling about our process.  I hope this discussion informs them, if and
when they review it.

You claim I have a vendor bias.  Guilty, but I am also concerned that we
do the right things for the right reasons, and that motivations are
reasonably aligned so that we have some reason to believe that what is
proposed will work and not have perverse impact.  Absent some serious
analysis, we also are making the assumption that the logic of decisions
of over twenty years ago holds today, when in fact we don't really even
know if it held then.

And now as to your specifics, you have placed a lot of weight on one
example, seemingly extrapolating from it, the Joint Interoperability
Test Command.  I have no experience working with them, and defer to
yours.  However, when you say,

   As examples, the JITC and TIC requirements pay a great
   deal of attention to whether some technology is past PS.
   Various IPv6 Profile documents around the world also pay
   much attention to whether a particular specification is
   past PS.

It leads to the following questions:

* Would the vendors have implemented the functionality ANYWAY? 
  Specifically, would other RFPs have already driven vendors in this
  direction?  Can you cite a counter example, where that was not the
  case?
* Is the defense industry at all representative of the broader
  market?  My own experience leads to an answer of, “barely at all”,
  and this has been assuredly the case with the Internet where a
  huge portion has run on PS, Internet-Drafts, and proprietary
  standards, and not waited for advancement.  Examples have included
  BGP, MPLS-VPNs, HTTP, SSL, and Netflow, just to name a few. 

But again, I would like to see a rigorous analysis, rather than simply
rely on either of our personal experiences.

 The IETF already has a tendency to be very vendor-focused 
 vendor-driven.  It is best, however, if the IETF keeps the 
 interests of both communities balanced (rather than tilting 
 towards commercial vendors).
While this is a perhaps laudable idea, someone has to do the work to get
specifications to the next standards level.  The whole point of my
questions is to determine what motivations that someone might have for
actually performing that work.

 If we look at the 1694
 Proposed Standards, are we seeing a lack of implementation due to lack
 of stability?  I would claim that there are quite a number of examples
 to the contrary (but see below).
 Wrong question.  How clever to knock down the wrong strawman.

There's no need to be rude or snarky with me, even if you disagree.  You
are looking at this from the angle of the customers, and that's
perfectly reasonable.  I'm looking at it from the developers' point of
view, and from the supply side of your equation.  Both seem reasonably
valid, and so I have no qualms with the question part of your (A),
although as I mentioned above, I question your answer.

   B) whether that signal has a feedback loop to implementers/
  vendors that still works.
   The answer to this is also clearly YES.  Technologies that
   appear in RFPs or Tender Requirements have a stronger
   business case for vendors/implementers, hence are more
   likely to be widely implemented.

Certainly so, but I don't understand how you made the leap of logic from
your question to your answer.  Do we have situations, for instance,
where a proposed standard is compared to a draft standard, or a draft
standard is compared to a full standard, and one is chosen over the
other?  If so, are they the norm, and are they likely to drive
implementation?  Also, if all this gets you is interoperability, but

Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-23 Thread RJ Atkinson

On 23  Jun 2010, at 08:45 , Eliot Lear wrote:
 And now as to your specifics, you have placed a lot of weight on one
 example, seemingly extrapolating from it, the Joint Interoperability
 Test Command.  I have no experience working with them, and defer to
 yours.  However, when you say,

Again, you set up an incorrect strawman, and then knock it down.

In the quote (below), I also mentioned the various IPv6
Profile documents around the world, which you ignore,
apparently in order to incorrectly characterise my note 
as using a single example.  There are a number of cases
where a large customer's requirements (in RFPs or Tender
opportunities) have driven feature priorities.  The TIC
and JITC are merely examples.  Numerous other examples
exist, including a large bank in central Europe and 
several ISPs.

  As examples, the JITC and TIC requirements pay a great
  deal of attention to whether some technology is past PS.
  Various IPv6 Profile documents around the world also pay
  much attention to whether a particular specification is
  past PS.
 
 It leads to the following questions:
 
* Would the vendors have implemented the functionality ANYWAY? 
  Specifically, would other RFPs have already driven vendors in this
  direction?  Can you cite a counter example, where that was not the
  case?

Yes.

There are certainly numerous cases where vendor implementation 
timing and new feature prioritisation were directly impacted 
by a profile document cited in some RFP, and where that profile
document's contents were directly impacted by whether a
particular technology was at Proposed Standard or some more
advanced stage in the IETF processes.

The most obvious examples come from the various IPv6 Profiles
around the world.  There are some number of these in Japan,
in Europe, in the USA, and in other countries.

Various examples also exist outside the IPv6 Profile universe,
including but not limited to large customers (e.g. the JITC and TIC).

* Is the defense industry at all representative of the broader
  market?  My own experience leads to an answer of, “barely at all”,
  and this has been assuredly the case with the Internet where a
  huge portion has run on PS, Internet-Drafts, and proprietary
  standards, and not waited for advancement.  Examples have included
  BGP, MPLS-VPNs, HTTP, SSL, and Netflow, just to name a few. 

I provided non-defense examples in both my original note
(which examples you have ignored for some reason) and also
in my response above.

 The IETF already has a tendency to be very vendor-focused 
 vendor-driven.  It is best, however, if the IETF keeps the 
 interests of both communities balanced (rather than tilting 
 towards commercial vendors).
 While this is a perhaps laudable idea, someone has to do the work to get
 specifications to the next standards level.  The whole point of my
 questions is to determine what motivations that someone might have for
 actually performing that work.

I was quite detailed on that front, although you seem to have
selectively ignored that part of my note.

 There's no need to be rude or snarky with me, even if you disagree.  

I wasn't rude, and can't find snarky in the OED.

 You are looking at this from the angle of the customers, and that's
 perfectly reasonable.  I'm looking at it from the developers' point of
 view, and from the supply side of your equation. 

I've been both customer/user/operator and vendor/implementer
at various points in time.  So I look at it from both points
of view, and my earlier note included discussion of both
vendor advantages and user/operator/customer advantages.

It seems quite odd that you seem to have ignored my note 
so selectively.

  B) whether that signal has a feedback loop to implementers/
 vendors that still works.
  The answer to this is also clearly YES.  Technologies that
  appear in RFPs or Tender Requirements have a stronger
  business case for vendors/implementers, hence are more
  likely to be widely implemented.
 
 Certainly so, but I don't understand how you made the leap of logic from
 your question to your answer.  Do we have situations, for instance,
 where a proposed standard is compared to a draft standard, or a draft
 standard is compared to a full standard, and one is chosen over the
 other?  

Yes, we do.

 If so, are they the norm, and are they likely to drive
 implementation?  

Such decisions in various IPv6 Profiles around the world,
in large customer requirements documents around the world
(e.g. JITC, TIC) regularly have driven implementation priorities 
and new feature timetables in the past.  

Folks at many vendors have experienced this.  I witnessed
it at every vendor I've ever worked for.  It isn't a surprise 
that a business case would drive these things NOR is it 
a surprise that standards status would drive an RFP 
(and hence drive the business case).

 Also, if all this gets you is 

motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-22 Thread Eliot Lear
 Russ,

Thank you for bringing this topic full circle.  Having considered this
topic for a long time, and having led a cleanup around old standards in
newtrk, I share the following thoughts for your consideration.

In concurring in part with what Bernard and Mike wrote, the basic
question I ask, in several parts, is whether standards maturity levels
have been overtaken by events or time.  For each level above PS, a
substantial amount of work must go into advancement, perhaps without a
single line of code actually being written or changed by implementers. 
This then leads to a question of motivations.  What are the motivations
for the IESG, the IETF, and for individual implementers?  Traditionally
for the IETF and IESG, the motivation was meant to be a signal to the
market that a standard won't change out from underneath the developer.

Question #1: Is such a signal needed today?  If we look at the 1694
Proposed Standards, are we seeing a lack of implementation due to lack
of stability?  I would claim that there are quite a number of examples
to the contrary (but see below).

Question #2: Is the signal actually accurate?  Is there any reason for a
developer to believe that the day after a mature standard is
announced, a new Internet Draft won't in some way obsolete that work? 
What does history say about this effort? 

Question #3: What does such a signal say to the IETF?  I know of at
least one case where work was not permitted in the IETF precisely
because a FULL STANDARD was said to need soak time.  It was SNMP, and
the work that was not permitted at the time was what would later become
ISMS.

Question #4:  Is there a market advantage gained by an implementer
working to advance a specification's maturity?  If there is none, then
why would an implementer participate?  If there *is* a market advantage,
is that something a standards organization wants?  Might ossification of
a standard retard innovation by discouraging extensions or changes?

Question #5:  Are these the correct questions, and are there others that
should be asked?

I do not mean to answer these research questions here, but I claim
that they should be answered, perhaps with some academic rigor, and may
be a worthy subject for the economics and policy research group that
Aaron is considering.

Referring to SM's and Dave's messages, judging maturity based on
industry-wide acceptance requires a similar analysis, but with an added
twist: the grey areas of industry acceptance make these sorts of
decisions for the IESG rather difficult.  Having gone through the
effort, it was clear to me that there have been some losers, and I think
we can all spot some winners that remain at Proposed.

Eliot



Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-22 Thread Andrew Sullivan
On Tue, Jun 22, 2010 at 10:12:13AM +0200, Eliot Lear wrote:

 Question #1: Is such a signal needed today?  If we look at the 1694
 Proposed Standards, are we seeing a lack of implementation due to lack
 of stability?  I would claim that there are quite a number of examples
 to the contrary (but see below).

In connection with that question, I'll observe that a very large
number of the DNS protocol documents have not advanced along the
standards track, and efforts to do something about that state of
affairs have not been very successful.  In addition, any time there is
an effort to make a change to anything already deployed is met by
arguments that we shouldn't change the protocol in even the slightest
detail, because of all the deployed code.  (I've been known to make
that argument myself.) 

I don't know whether the DNS is special in this regard, though I have
doubts.

A

-- 
Andrew Sullivan
a...@shinkuro.com
Shinkuro, Inc.


motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-22 Thread RJ Atkinson
On 22nd June 2010, at 10:12:13 CET, Eliot Lear wrote:
 This then leads to a question of motivations.  What are the motivations
 for the IESG, the IETF, and for individual implementers?  Traditionally
 for the IETF and IESG, the motivation was meant to be a signal to the
 market that a standard won't change out from underneath the developer.

The above seems fairly muddled as written.

Traditionally, "the market" refers to consumers, users, 
and operators, rather than implementers or developers 
of products.

Indeed, moving beyond Proposed Standard has long been a signal 
to users, consumers, and operators that a technology now has
demonstrated multi-vendor interoperability.  

Further, by moving technology items that lacked multi-vendor 
interoperability into optional Appendices, or downgrading
them to MAY implement items, that process also makes clear
which parts of the technology really were readily available, 
as different from (for example) an essentially proprietary 
feature unique to one implementation.

In turn, that tends (even now) to increase the frequency that 
a particular IETF-standardised technology appears in RFPs 
(or Tender Announcements).  In turn, that enhanced the business 
case for vendors to implement the interoperable standards.

Standards are useful both for vendors/implementers and also
for consumers/users/operators.  However, standards are useful
to those 2 different communities in different ways.  

The IETF already has a tendency to be very vendor-focused 
vendor-driven.  It is best, however, if the IETF keeps the 
interests of both communities balanced (rather than tilting 
towards commercial vendors).

 Question #1: Is such a signal needed today?  

Yes.  Users/operators/consumers actively want and need
independent validation that a standard is both interoperable
and reasonably stable.

 If we look at the 1694
 Proposed Standards, are we seeing a lack of implementation due to lack
 of stability?  I would claim that there are quite a number of examples
 to the contrary (but see below).

Wrong question.  How clever to knock down the wrong strawman.

The right questions are:
A) whether that signal is useful to consumers/users/operators 

The answer to this is clearly YES, as technologies that
have advanced beyond Proposed Standard (PS) have a higher
probability of showing up in RFPs and Tender Requirements.

As examples, the JITC and TIC requirements pay a great
deal of attention to whether some technology is past PS.
Various IPv6 Profile documents around the world also pay
much attention to whether a particular specification is
past PS.

B) whether that signal has a feedback loop to implementers/
   vendors that still works.

The answer to this is also clearly YES.  Technologies that
appear in RFPs or Tender Requirements have a stronger
business case for vendors/implementers, hence are more
likely to be widely implemented.

Items that appear in the TIC or JITC requirements are very
very likely to be broadly implemented by many network
equipment vendors.  The same is true for technologies 
in various IPv6 Profiles around the world.

 Question #2: Is the signal actually accurate?  

Yes.

 Is there any reason for a developer to believe that the day after
 a mature standard is announced, a new Internet Draft won't
 in some way obsolete that work? 

Again, the wrong question, and an absurdly short measurement
time of 1 day.  Reductio ad absurdum is an often used technique
to divert attention when one lacks a persuasive substantial
argument for one's position.

By definition, Internet-Drafts cannot obsolete any 
standards-track document while they remain Internet-Drafts.  

Only an IESG Standards Action can obsolete some mature standard, 
and that kind of change happens slowly, relatively infrequently, 
and with long highly-visible lead times.

 What does history say about this effort? 

History says that 2-track has NOT happened several times already
because people (e.g. Eliot Lear) quibble over the details,
rather than understand that moving to 2-track is an improvement
and that "optimum is the enemy of better" in this situation.

 Question #3: What does such a signal say to the IETF?  

It is a positive feedback loop, indicating that work is
stable and interoperable.  It also says that gratuitous
changes are very unlikely to happen.  By contrast, 
technologies at Proposed Standard very frequently have 
substantial changes, often re-cycling back to PS with
those major changes.

Further, the new approach will have the effect of making
it easier to publish technologies at Proposed Standard,
which would be good all around.

 I know of at least one case where work was not permitted
 in the IETF precisely because a FULL STANDARD was said
 to need soak time.  It was SNMP, and the work that was
 not permitted at the time was what would later