Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Abdussalam Baryun
my preference would be just to pick one, and
 provide a stick for hitting those who do it the other way.

I think that the IESG is already using that stick :)

AB

On 1/9/13, Dean Willis dean.wil...@softarmor.com wrote:

 On Jan 8, 2013, at 12:57 PM, Abdussalam Baryun abdussalambar...@gmail.com
 wrote:

but the question about an error in the process is: does the RFC lack a
 communication requirement with the community?


 Sorry if I was not clear. I mean that, since some participants are requesting
 a scientific approach to struggling with 2119 (i.e. the thread subject), does
 the fact that some RFCs use it and some do not (i.e. we see that participants
 follow 2119 in different ways) add communication confusion for part of the
 community?


 I'm absolutely certain that some of our community is confused about
 something related to this thread. Given the absence of information that
 would help in a decision, my preference would be just to pick one, and
 provide a stick for hitting those who do it the other way.

 --
 Dean





Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Martin Rex
John Day wrote:

For any formal proof system worth its dime, this can be 100% ruled out,
since the proof system will emit a 100% bug-free implementation of the
spec in the programming language of your choice as a result/byproduct of the
formal proof process.
 
 C'mon. You don't really believe that, do you?

Which one in particular?

The KIV tool, which a student showed and explained to me in 1996 while
doing his master's thesis with it on a module of our software, can output
source code at the end of its processing: code that is a guaranteed
bug-free implementation of the spec.  (Which MUST NOT be confused
with the given spec providing the desired behaviour and being free
of unintended (and unthought-of) behaviour.)

(While the tool's name appears to originate from the University of
 Karlsruhe (Karlsruhe Interactive Verifier), and that student back
 then was from Karlsruhe, the current homepage of the tool appears
 to be at the University of Augsburg:

  http://www.informatik.uni-augsburg.de/lehrstuehle/swt/se/kiv/
)


 Ever see a compiler with bugs?

Compiler bugs are comparatively rare; many of them are problems in
the optimizer (and therefore avoidable), and many happen only in
obscure situations that can be avoided by a defensive programming
style.  Code generators often produce defensive-style code.
Using a robust and mature programming language, such as C89,
may also help to avoid the compiler problems of fancier or more
complex languages and features.

Look at the long list of defects of modern CPUs.  Application programmers
who use a compiler rarely have to know about (let alone take into account)
those CPU defects, because most compiler code generators use only
a subset of CPU features anyway.



 Who verified the verifier?  How do you know the verifier is bug free?

Now you must be kidding.

Are you suggesting that anyone using the programming language Prolog
would be gambling and could never expect deterministic results?


With a decent amount of expertise and the right mindset,
even mere mortal humans can produce code with a very low error rate.
IIRC, Donald E. Knuth didn't need a formal specification and a
formal proof tool to achieve a low error rate for TeX and Metafont.

Having started assembler programming at the age of 12 myself, I could
easily understand why DEK used Assembler for his algorithm code examples
in TAOCP when I first encountered these books at age 21.


 
 As I indicated before, I have been working on this problem since we 
 discovered Telnet wasn't as good as we thought it was.  For data 
 transfer protocols, it is relatively straightforward and can be 
 considered solved for anyone who wants to bother.  The problem is 
 most don't.

There is a perverse conditioning in our education system and
business life toward rewarding mediocrity (cheating, improvising, being
superficial), which is IMO one of the underlying causes of
  http://en.wikipedia.org/wiki/Dunning_Kruger
and which impairs many implementors' motivation to watch out for and
compensate for defects in a spec and to recognize and respect the design
limits of a spec, i.e. to avoid underspecified protocol features, or to
require official clarification and adjust the code accordingly before
shipping implementations of underspecified protocol features.


 
 The real problem is the so-called application protocols, where dealing
 with the different semantics of objects in different systems makes the
 problem very hard and very subtle.  The representation of the
 semantics is always of what you think its seminal properties are.
 This is not always obvious.


I believe that the real difficulty is about designing and discussing
at the abstract level, then performing formally correct transitions
from the abstract spec level to the concrete specification proposal level
and comparing implementation options at the spec level ... and getting
everyone who wants to participate to understand the abstraction
and how to perform transitions from and to concrete spec proposals.


I'm experiencing that difficulty myself every once in a while.
In 1995/1996 I had adopted a notion/bias about the uselessness of
traditional GSS-API channel bindings (as defined in GSS-APIv1 RFC 1508/1509
and GSS-APIv2 RFC 2743/2744), and it took me a few weeks of discussion
to really understand what Nico was trying to accomplish with
cryptographic channel bindings such as tls-unique, their prerequisites,
real properties and benefits.


 
But maybe you're not really thinking of a defect in the implementation
(commonly called bug) but rather a gap in the specification that
leads to unintended or undesired behaviour of a 100% bug-free
implementation of your spec.
 
 One person's gap is another person's bug.  What may be obvious to one 
 as something that must occur may not be so to the other.

There is a significant difference between a bug in an implementation
and an unintended consequence due to a gap in the specification.

The SSLv3 protocol and its IETF successor TLS contain a somewhat
underspecified 

Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Martin Rex
John Day wrote:

 It would be interesting to see you apply that.
 
 This is what I have been talking about.  The human mind's ability to 
 believe that the whole world sees everything the same way they do. 
 It really is quite amazing.
 
 These so-called gaps often arise because they were unstated 
 assumptions or things that the author believed were patently obvious 
 and didn't need to be stated. Actually didn't know needed to be 
 stated.  From his point of view, no one would do it differently. 
 Nothing had been left out and he didn't make the mistake.   What 
 the other guys did was a bug.
 
 There is a much greater range of interpreting these things than it 
 appears that you imagine.

With "bug", I mean behaviour that is non-compliant with the spec,
where the difference is not there to accommodate a defect of the spec,
and where this behaviour is detrimental to the operation of that
implementation (vulnerability or robustness issue) or detrimental
to the interop with other implementations.

With "conflict" in a spec, I refer to distinct statements in a
spec that contradict each other (such as the same symbolic protocol
element being assigned two different values in two separate locations
of the spec).

With "ambiguity" in a spec, I refer to an omission that precludes
a certain feature from being implemented at all, such as a symbolic
protocol element that represents an integer value in a PDU for which
the spec lacks the definition of the numeric value of the
on-the-wire representation.

With "gap" in a spec, I refer to omissions that do not preclude
the implementation, but can lead to unexpected behaviour of an
implementation.  This result is vaguely similar to what happens when
a switch() statement lacks a default: case and is unexpectedly faced
with a value that is not covered by the case statements.  The result
can be undefined / unpredictable, yet remain formally provably
correct with respect to the given specification.
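
To make the switch() analogy concrete, here is a minimal C sketch
(hypothetical message codes, not taken from any particular spec) of how an
implementation can be provably correct against every case the spec
enumerates and still behave in a way the spec never defined:

    #include <stdio.h>

    /* Hypothetical message codes; the spec enumerates exactly these three. */
    enum msg_type { MSG_HELLO = 1, MSG_DATA = 2, MSG_CLOSE = 3 };

    /* Because there is no default: case, a value outside the enumeration
     * (say, one added by a future extension) matches nothing, and the
     * function returns whatever fallback the implementer happened to
     * choose -- behaviour the spec never defines. */
    static int handle_msg(int type)
    {
        int status = -1;              /* implementer's arbitrary fallback */

        switch (type) {
        case MSG_HELLO: status = 0; break;
        case MSG_DATA:  status = 1; break;
        case MSG_CLOSE: status = 2; break;
        /* no default: -- this is the "gap" */
        }
        return status;
    }

    int main(void)
    {
        printf("%d\n", handle_msg(MSG_DATA)); /* 1, as specified       */
        printf("%d\n", handle_msg(42));       /* -1, nowhere specified */
        return 0;
    }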


Finally, there are omissions in a spec, i.e. properties that a
protocol does not and cannot provide, yet consumers of the protocol may
believe in magic security pixie dust and develop a flawed assumption
about a non-existing protocol property, as happened with
TLS renegotiation.  TLS renegotiation being secure/protected against
MitM was a property that was retrofitted into SSLv3/TLS with RFC 5746,
and could not possibly have existed by accident in any of the existing
implementations.  (Although servers that did not support renegotiation
at all never were vulnerable to the interesting attacks.)



-Martin


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Hector Santos
Maybe the survey to be done is a review of all the RFCs and STDs to see
which ones


- had a great abstract and introduction,
- had the better writing styles,
- had the least endorsement resistance,
- progressed faster than most,
- had the most implementers,
- needed the least support / fewest questions asked,

etc., etc.

Are the RFCs that followed 2119 the more successful ones, the ones
written with clarity, that knew how to separate functional from technical
descriptions and had the right target audience(s) in mind?


Maybe we can also review which I-Ds and RFCs have a high failure rate or have
trouble getting industry endorsement, and even where they got it, why they
failed in the marketplace.


What is the overall problem anyway?   Is this part of the reason why
there are efforts to fast-track documents: because there is so much
resistance from such a broad range of disciplines with different
mindsets and philosophies?


--
HLS




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Joe Touch

On 1/7/2013 6:01 PM, John Day wrote:

All standards groups that I am aware of have had the same view.  This is
not uncommon.

Although I would point out that neither the TCP specification nor most
protocol specifications of this type follow this rule.  State
transitions are not visible on the wire.  The rules for the sliding window
are not described entirely in terms of the behavior seen on the line, etc.

I have seen specifications that attempted this, and the implementations
built from them were very different and did not come close to
interoperating, or in some cases even to doing the same thing.

In fact, I remember that we thought the new Telnet spec (1973) was a
paragon of clarity until a new site joined the Net that had not been
part of the community and came up with an implementation that bore no
relation to what anyone else had done.

This problem is a lot more subtle than you imagine.


+1

A protocol *is*:

- the states at the endpoints
- the messages on the wire
- a description of input events (message arrival, upper-layer
  interface requests, timer expiration) that indicates
  the subsequent change of state and output event
  (message departure, upper-layer indication,
  or timers to set)

(i.e., a Mealy machine, attaching events to arcs)

The wire is the second of these, and entirely insufficient as a 
protocol spec.
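
As a rough illustration of those three parts (a toy two-state protocol,
table-driven in C; all names are made up for this sketch, not taken from
any RFC):

    #include <stdio.h>

    /* Toy connection protocol: states, input events, and a transition
     * table that attaches the output event to each arc (a Mealy machine). */

    enum state  { ST_CLOSED, ST_OPEN };
    enum event  { EV_OPEN_REQ, EV_SYN_RCVD, EV_CLOSE_REQ, EV_TIMER };
    enum output { OUT_SEND_SYN, OUT_INDICATE_OPEN, OUT_SEND_FIN, OUT_NONE };

    struct transition { enum state next; enum output out; };

    /* transitions[state][event]: the whole protocol in one table. */
    static const struct transition transitions[2][4] = {
        /* ST_CLOSED */ { {ST_OPEN,   OUT_SEND_SYN}, {ST_OPEN,   OUT_INDICATE_OPEN},
                          {ST_CLOSED, OUT_NONE},     {ST_CLOSED, OUT_NONE} },
        /* ST_OPEN   */ { {ST_OPEN,   OUT_NONE},     {ST_OPEN,   OUT_NONE},
                          {ST_CLOSED, OUT_SEND_FIN}, {ST_CLOSED, OUT_SEND_FIN} },
    };

    /* Apply one input event: update the state, return the output event. */
    static enum output step(enum state *s, enum event e)
    {
        struct transition t = transitions[*s][e];
        *s = t.next;
        return t.out;
    }

    int main(void)
    {
        enum state s = ST_CLOSED;
        printf("%d\n", step(&s, EV_OPEN_REQ));  /* OUT_SEND_SYN (0), s -> ST_OPEN   */
        printf("%d\n", step(&s, EV_CLOSE_REQ)); /* OUT_SEND_FIN (2), s -> ST_CLOSED */
        return 0;
    }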


Yes, there are two ways to try to write a protocol spec:

- procedural: defining the above explicitly
- behavioral: defining a protocol only from its external behavior

The difference between these is easy to see for sort algorithms:

- procedural: quicksort, heapsort, etc.
- behavioral: sort

AFAICT, "on the wire" often implies behavioral equivalence, but it's a
lot more complicated than just the on-wire messages. A behavioral
description of a protocol would treat the protocol as a black box and
explain its behavior under every possible input.
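
A minimal C sketch of the sort contrast above (illustrative only): the
behavioral spec is just the property any implementation must satisfy,
while the procedural spec pins down one particular algorithm.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Behavioral spec of sorting: only the property the result must have. */
    static bool is_sorted(const int *a, size_t n)
    {
        for (size_t i = 1; i < n; i++)
            if (a[i - 1] > a[i])
                return false;
        return true;
    }

    /* Procedural spec: one concrete way to get there (insertion sort);
     * quicksort or heapsort would satisfy the behavioral spec equally well. */
    static void sort(int *a, size_t n)
    {
        for (size_t i = 1; i < n; i++) {
            int key = a[i];
            size_t j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];
                j--;
            }
            a[j] = key;
        }
    }

    int main(void)
    {
        int a[] = { 3, 1, 2 };
        sort(a, 3);
        printf("%s\n", is_sorted(a, 3) ? "sorted" : "not sorted"); /* sorted */
        return 0;
    }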


I'll take procedural descriptions of protocols over behavioral any day.

Joe


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Abdussalam Baryun
Hi John,

Thank you for your reply (I learned a lot). So I understand that a
standards-track RFC may have more than one implementation, but with
behavior similar enough not to produce an error. Regarding following 2119, I
understand most texts follow it only where there are normative actions.
Regarding an implementer claiming to follow an RFC, the question about an
error in the process is: does the RFC lack a communication requirement with
the community?

AB

On 1/7/13, John Day jeanj...@comcast.net wrote:
 Strictly speaking, the language of 2119 should be followed wherever
 necessary in order for the text to be normative and make it mandatory
 that a conforming implementation meets some requirement.  Otherwise,
 someone could build an implementation and claim it was correct and
 possibly cause legal problems. However, in the IETF there is also a
 requirement that there be two independent but communicating
 implementations for an RFC to be standards-track. Correct?

 For all practical purposes, this requirement makes being able to
 communicate with one of the existing implementations the formal and
 normative definition of the RFC.  Any debate over the content of the
 RFC text is resolved by what the implementations do.  It would seem
 to be at the discretion of the authors of the implementations to
 determine whether or not any problems that are raised are bugs or not.

 Then it would seem that regardless of whether 2119 is followed, the
 RFCs are merely informative guides.

 So while the comments are valid that RFC 2119 should be followed,
 they are also irrelevant.

 Given that any natural language description is going to be ambiguous,
 this is probably for the best.

 Take care,
 John Day

 At 9:41 AM +0100 1/6/13, Abdussalam Baryun wrote:
Hi Marc Petit-Huguenin ,

I read the responses so far, and what can be said today is that there are
two philosophies, with supporters in both camps.  The goal of the IETF is to
make the Internet work better, and I do believe that RFC 2119 is one of the
fundamental tools to reach this goal, but having two ways to use it does
not help this goal.

I like the approach, and agree with you that we need a solution in the
IETF to something that is still not solved, or is ignored by participants
for some reason. I also agree with a survey or an experiment, whatever we
call it, that makes the IETF reflect on how RFC 2119 performs for the
end users of product implementations of RFC protocols. I think many
long-time participants already have enough experience to inform us of some
realities of how IETF standards fare with their end users.

AB




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Dick Franks
On 5 January 2013 19:14, Marc Petit-Huguenin petit...@acm.org wrote:
[snip]

 Another way to look at it would be to run the following experiment:

 1. Someone designs a new protocol, something simple but not obvious, and writes
  it in a formal language and keeps it secret.

Which raises the obvious question:  Why do we not write protocol specs
in a formal specification language instead of struggling with the
ambiguities of natural language?

Theorem provers and automated verification tools could then be brought
to bear on both specifications and implementations.



Dick
--


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread John Day
The reasons have been discussed or at least alluded to previously in 
this thread.  The short answer is we have been there and done that: 
30 years ago.


All those tools were developed and used successfully in the 80s.   I 
know of cases where doing the formal specification alongside the 
design phase caught lots of problems.   However, there are two 
central problems:  First, in general, programmers are lazy and just 
want to code. ;-)  Using a formal method is a lot more work.  Second, 
the complexity of the formal statements that must be written down is 
greater than the code.  So there is a higher probability of mistakes 
in the formal description than in the code.  Admittedly, if those 
statements are made, one has a far greater understanding of what you 
are doing.


Once you have both, there is still the problem that if a bug or 
ambiguity shows up, neither the code nor the formal spec nor a prose 
spec can be taken to be the right one.  What is right is still in the 
heads of the authors.  All of these are merely approximations.  One 
has to go back and look at all of them and determine what the right 
answer is.  Of course, the more things one has to look at, the better 
(for a bit more, see Chapter 1 of PNA).


John


Which raises the obvious question:  Why do we not write protocol specs
in a formal specification language instead of struggling with the
ambiguities of natural language?

Theorem provers and automated verification tools could then be brought
to bear on both specifications and implementations.



Dick
--




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Donald Eastlake
Another problem is maintenance. Protocols change. Having to maintain a
formal specification is commonly at least an order of magnitude more
effort than maintaining a prose description. So it doesn't happen and
they very rapidly get out of synch in any living protocol. As an
example, the IEEE 802.11 (Wi-Fi) standard used to have a normative
formal description of the MAC operation (see Annex C of 802.11-1999).
By 802.11-2007 this was out of synch but was still included as
informational material on the theory it might be of some use, and the
section began with the words "This clause is no longer maintained".
Although still present as informational in 802.11-2012, the first
words of that section are now "This annex is obsolete".

And, as has been mentioned before, I'd like to emphasize that the IETF
experience and principle is that, if you want interoperation,
compliance testing is useless. The way to interoperation is
interoperation testing between implementations and, to a first
approximation, the more and the earlier you do interoperation testing,
the better.

Thanks,
Donald
=
 Donald E. Eastlake 3rd   +1-508-333-2270 (cell)
 155 Beaver Street, Milford, MA 01757 USA
 d3e...@gmail.com


On Tue, Jan 8, 2013 at 9:45 AM, John Day jeanj...@comcast.net wrote:
 The reasons have been discussed or at least alluded to previously in this
 thread.  The short answer is we have been there and done that: 30 years ago.

 All those tools were developed and used successfully in the 80s.   I know of
 cases where doing the formal specification alongside the design phase caught
 lots of problems.   However, there are two central problems:  First, in
 general, programmers are lazy and just want to code. ;-)  Using a formal
 method is a lot more work.  Second, the complexity of the formal statements
 that must be written down is greater than the code.  So there is a higher
 probability of mistakes in the formal description than in the code.
 Admittedly, if those statements are made, one has a far greater
 understanding of what you are doing.

 Once you have both, there is still the problem that if a bug or ambiguity
  shows up, neither the code nor the formal spec nor a prose spec can be taken
  to be the right one.  What is right is still in the heads of the authors.  All
 of these are merely approximations.  One has to go back and look at all of
 them and determine what the right answer is.  Of course, the more things one
 has to look at the better. (for a bit more, see Chapter 1 of PNA).

 John


 Which raises the obvious question:  Why do we not write protocol specs
 in a formal specification language instead of struggling with the
 ambiguities of natural language?

 Theorem provers and automated verification tools could then be brought
  to bear on both specifications and implementations.



 Dick
 --




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread John Day

Hear. Hear.  Ditto!  Absolutely.  etc.

At 10:12 AM -0500 1/8/13, Donald Eastlake wrote:

Another problem is maintenance. Protocols change. Having to maintain a
formal specification is commonly at least an order of magnitude more
effort than maintaining a prose description. So it doesn't happen and
they very rapidly get out of synch in any living protocol. As an
example, the IEEE 802.11 (Wi-Fi) standard used to have a normative
formal description of the MAC operation (see Annex C of 802.11-1999).
By 802.11-2007 this was out of synch but was still included as
informational material on the theory it might be of some use, and the
section began with the words "This clause is no longer maintained".
Although still present as informational in 802.11-2012, the first
words of that section are now "This annex is obsolete".

And, as has been mentioned before, I'd like to emphasize that the IETF
experience and principle is that, if you want interoperation,
compliance testing is useless. The way to interoperation is
interoperation testing between implementations and, to a first
approximation, the more and the earlier you do interoperation testing,
the better.

Thanks,
Donald
=
 Donald E. Eastlake 3rd   +1-508-333-2270 (cell)
 155 Beaver Street, Milford, MA 01757 USA
 d3e...@gmail.com


On Tue, Jan 8, 2013 at 9:45 AM, John Day jeanj...@comcast.net wrote:

 The reasons have been discussed or at least alluded to previously in this
 thread.  The short answer is we have been there and done that: 30 years ago.

 All those tools were developed and used successfully in the 80s.   I know of
 cases where doing the formal specification alongside the design phase caught
 lots of problems.   However, there are two central problems:  First, in
 general, programmers are lazy and just want to code. ;-)  Using a formal
 method is a lot more work.  Second, the complexity of the formal statements
 that must be written down is greater than the code.  So there is a higher
 probability of mistakes in the formal description than in the code.
 Admittedly, if those statements are made, one has a far greater
 understanding of what you are doing.

 Once you have both, there is still the problem that if a bug or ambiguity
  shows up, neither the code nor the formal spec nor a prose spec can be taken
  to be the right one.  What is right is still in the heads of the authors.  All
 of these are merely approximations.  One has to go back and look at all of
 them and determine what the right answer is.  Of course, the more things one
 has to look at the better. (for a bit more, see Chapter 1 of PNA).

 John



 Which raises the obvious question:  Why do we not write protocol specs
 in a formal specification language instead of struggling with the
 ambiguities of natural language?

 Theorem provers and automated verification tools could then be brought
  to bear on both specifications and implementations.



 Dick
 --







Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Abdussalam Baryun
but the question about an error in the process is: does the RFC lack a
 communication requirement with the community?


Sorry if I was not clear. I mean that, since some participants are requesting
a scientific approach to struggling with 2119 (i.e. the thread subject), does
the fact that some RFCs use it and some do not (i.e. we see that participants
follow 2119 in different ways) add communication confusion for part of the
community?

AB


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Abdussalam Baryun
Why don't participants follow one approach to using 2119 in I-Ds and be done
with it, and if not, or if another approach is used, then please specify that
in the language section.

AB


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Dean Willis

On Jan 7, 2013, at 4:53 AM, Stewart Bryant stbry...@cisco.com wrote:

 Speaking as both a reviewer and an author, I would like
  to ground this thread in some form of reality.
  
  Can anyone point to specific cases where the absence or overuse
  of an RFC 2119 keyword caused an interoperability failure,
  or excessive development time?


I'm anecdotally familiar with some early pre-RFC 2543 SIP implementations where 
the implementors ignored everything that didn't say MUST and got something that 
didn't work. At all. But it was apparently really easy to develop, as the spec 
only had a few dozen MUST clauses, and the developers didn't 
include-by-reference any of the cited specs, such as SDP.

When we were trying to decide whether to make RFC 3261 (the replacement of RFC 
2543) a Draft Standard instead of a Proposed Standard, I recall Robert 
Sparks and some others attempting to define a fully interoperable 
implementation test that tabulated all of the RFC 2119 invocations that had 
sprouted in RFC 3261. They then immediately gave up the idea as impractical, we 
recycled at Proposed, and gave up on ever making Full. The testing 
methodology has greatly improved since then, and it makes little use of RFC 2119 
language for test definition or construction.

--
Dean




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Dean Willis

On Jan 8, 2013, at 12:57 PM, Abdussalam Baryun abdussalambar...@gmail.com 
wrote:

 but the question about an error in the process is: does the RFC lack a
 communication requirement with the community?
 
 
 Sorry if I was not clear. I mean that, since some participants are requesting a
 scientific approach to struggling with 2119 (i.e. the thread subject), does
 the fact that some RFCs use it and some do not (i.e. we see that participants
 follow 2119 in different ways) add communication confusion for part of the
 community?
 

I'm absolutely certain that some of our community is confused about something 
related to this thread. Given the absence of information that would help in a 
decision, my preference would be just to pick one, and provide a stick for 
hitting those who do it the other way.

--
Dean




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Martin Rex
 
John Day jeanj...@comcast.net wrote:

 The reasons have been discussed or at least alluded to previously in this
 thread.  The short answer is we have been there and done that: 30 years ago.

 All those tools were developed and used successfully in the 80s.
 I know of cases where doing the formal specification alongside the
 design phase caught lots of problems.   However, there are two
 central problems:  First, in general, programmers are lazy and just
 want to code. ;-)  Using a formal method is a lot more work.
 Second, the complexity of the formal statements that must be written
 down is greater than the code.  So there is a higher probability
 of mistakes in the formal description than in the code.
 Admittedly, if those statements are made, one has a far greater
 understanding of what you are doing.

I believe that the problems with the formal logic are that it is difficult
both to write and to read/understand, and to verify that the chosen
axioms actually reflect (and lead to) the desired behaviour/outcome;
the relative scarcity of suitable tools; the complexity of the
underlying theory and tools; and the tools' resulting lack of
intuitive usability.



 Once you have both, there is still the problem that if a bug or ambiguity
 shows up,

For any formal proof system worth its dime, this can be 100% ruled out,
since the proof system will emit a 100% bug-free implementation of the
spec in the programming language of your choice as a result/byproduct of the
formal proof process.



  neither the code nor the formal spec nor a prose spec can be taken
 to be the right one.  What is right is still in the heads of the authors.

But maybe you're not really thinking of a defect in the implementation
(commonly called bug) but rather a gap in the specification that
leads to unintended or undesired behaviour of a 100% bug-free
implementation of your spec.

(see Section 4 of this paper for an example:
  http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1827
)


Donald Eastlake wrote:

 Another problem is maintenance. Protocols change. Having to maintain a
 formal specification is commonly at least an order of magnitude more
 effort than maintaining a prose description.

Definitely.  Discussing protocols at the level of a formal specification
language would be quite challenging (typically impossible in an open
forum; I believe it would even be difficult in a small and well-trained
design team).


 
  And, as has been mentioned before, I'd like to emphasize that the IETF
  experience and principle is that, if you want interoperation,
  compliance testing is useless.

Ouch.  I believe this message is misleading or wrong.

Compliance testing is VERY important, rather than useless.

Compliance testing would actually be perfectly sufficient iff the spec
were formally proven to be free of conflicts and ambiguities among the
specified protocol elements -- plus a significant effort spent on
ensuring there were no gaps in the specification.

As it turns out, however, a significant number of implementations will
be created by humans interpreting natural-language specifications,
rather than implementations created as 100% bug-free by-products of a
formal proof tool, and often interop with such buggy, and often
spec-non-compliant, implementations is desirable and necessary, since
they make up a significant part of the installed base.



 The way to interoperation is interoperation testing between
 implementations and, to a first approximation, the more and the
 earlier you do interoperation testing, the better.


The #1 reason for interop problems, and road-block to the evolution of
protocols, is the wide-spread lack of compliance testing (on the flawed
assumption that it is useless) and the focus on black-box interop testing
of the intersection of two subsets of protocol features.

The problem with interop testing is that it really doesn't provide
much of a clue, and the results can _not_ be extrapolated to features
and areas that were not interop tested (typically the other 90% of the
specification, many optional features and most of the extensibility).


The near-complete breakage of TLS protocol version negotiation is a
result of the narrow-minded interop testing in combination with a complete
lack of compliance testing.
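
As a sketch of the kind of rule a compliance test catches and pairwise
interop testing between two current implementations never exercises (toy
code, not real TLS; negotiate() is a hypothetical stand-in for a server's
handshake logic): the SSLv3/TLS rule is that a server offered a
client_version higher than it supports should answer with the highest
version it does support, rather than abort.

    #include <stdio.h>

    /* TLS versions as on-the-wire 16-bit values; 0x0305 stands in for a
     * version that did not exist when the server shipped. */
    #define VERSION_TLS12  0x0303
    #define VERSION_FUTURE 0x0305

    struct hs_result { int aborted; int chosen_version; };

    /* Hypothetical stand-in for a server's version-negotiation logic.
     * A spec-compliant server picks min(offered, own highest); a
     * version-intolerant one would set aborted instead. */
    static struct hs_result negotiate(int offered, int server_highest)
    {
        struct hs_result r;
        r.aborted = 0;
        r.chosen_version = (offered < server_highest) ? offered : server_highest;
        return r;
    }

    /* Compliance probe: offer a deliberately unknown higher version and
     * check that the server falls back to its own highest version. */
    static int version_tolerance_ok(int server_highest)
    {
        struct hs_result r = negotiate(VERSION_FUTURE, server_highest);
        return !r.aborted && r.chosen_version == server_highest;
    }

    int main(void)
    {
        printf("version tolerance: %s\n",
               version_tolerance_ok(VERSION_TLS12) ? "PASS" : "FAIL");
        return 0;
    }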


If done right, pure compliance testing can go a long way to providing
interop.  The only areas where real interop testing is necessary are
those protocol areas where the spec contains conflicts/inconsistencies
that implementors didn't notice or ignored, or where there are
widespread inconsistencies of implementations with the specification,
created by careless premature shipping of a defective implementation
and the de-facto creation of an installed base that is too large to ignore.


-Martin


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Marc Petit-Huguenin

On 01/08/2013 05:41 AM, Dick Franks wrote:
 On 5 January 2013 19:14, Marc Petit-Huguenin petit...@acm.org wrote: 
 [snip]
 
 Another way to look at it would be to run the following experiment:
 
 1. Someone designs a new protocol, something simple but not obvious, and
  writes it in a formal language and keeps it secret.
 
 Which raises the obvious question:  Why do we not write protocol specs in a
 formal specification language instead of struggling with the ambiguities of
 natural language?
 
 Theorem provers and automated verification tools could then be brought to
  bear on both specifications and implementations.
 

IMO, the main problem with using a formal specification language is that it
would make RFCs a little bit too close to mathematics for some people, as math
is not patentable.  So do not expect any change in this direction.

-- 
Marc Petit-Huguenin
Email: m...@petit-huguenin.org
Blog: http://blog.marc.petit-huguenin.org
Profile: http://www.linkedin.com/in/petithug


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread John Day

At 1:36 AM +0100 1/9/13, Martin Rex wrote:



John Day jeanj...@comcast.net wrote:


 The reasons have been discussed or at least alluded to previously in this
 thread.  The short answer is we have been there and done that: 30 
years ago.


 All those tools were developed and used successfully in the 80s.
 I know of cases where doing the formal specification alongside the
 design phase caught lots of problems.   However, there are two
 central problems:  First, in general, programmers are lazy and just
 want to code. ;-)  Using a formal method is a lot more work.
 Second, the complexity of the formal statements that must be written
 down is greater than the code.  So there is a higher probability
 of mistakes in the formal description than in the code.
 Admittedly, if those statements are made, one has a far greater
 understanding of what you are doing.


I believe that the problems with the formal logic are that it is difficult
both to write and to read/understand, and to verify that the chosen
axioms actually reflect (and lead to) the desired behaviour/outcome;
the relative scarcity of suitable tools; the complexity of the
underlying theory and tools; and the tools' resulting lack of
intuitive usability.



The tools have been available for quite some time.  It is still very difficult.





 Once you have both, there is still the problem that if a bug or ambiguity
 shows up,


For any formal proof system worth its dime, this can be 100% ruled out,
since the proof system will emit a 100% bug-free implementation of the
spec in the programming language of your choice as a result/byproduct of the
formal proof process.


C'mon. You don't really believe that, do you? The statement is either 
a tautology or naive.  Ever see a compiler with bugs?  Who verified 
the verifier?  How do you know the verifier is bug-free?


As I indicated before, I have been working on this problem since we 
discovered Telnet wasn't as good as we thought it was.  For data 
transfer protocols, it is relatively straightforward and can be 
considered solved for anyone who wants to bother.  The problem is 
most don't.


The real problem is the so-called application protocols, where dealing 
with the different semantics of objects in different systems makes the 
problem very hard and very subtle.  The representation of the 
semantics is always of what you think its seminal properties are. 
This is not always obvious.






  neither the code nor the formal spec nor a prose spec can be taken
 to be the right one.  What is right is still in the heads of the authors.


But maybe you're not really thinking of a defect in the implementation
(commonly called bug) but rather a gap in the specification that
leads to unintended or undesired behaviour of a 100% bug-free
implementation of your spec.


One person's gap is another person's bug.  What may be obvious to one 
as something that must occur may not be so to the other.  Then there 
is that fine line between what part of the specification is required 
for the specification and what part is the environment of the 
implementation.


The human mind has an amazing ability to convince us that how we see 
the world is the way others do.  Having been on the Net when there 
were 15-20 very different machine architectures, I can assure you 
that implementation strategies can differ wildly.


I had had great hopes for the temporal-ordering approaches to 
specification, since they say the least about the implementation 
(the specification is entirely in terms of the ordering of events).  
However, I never met anyone who could design in them.




(see Section 4 of this paper for an example:
  http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1827
)


Donald Eastlake wrote:


 Another problem is maintenance. Protocols change. Having to maintain a

  formal specification is commonly at least an order of magnitude more

 effort than maintaining a prose description.


Definitely.  Discussing protocols at the level of a formal specification
language would be quite challenging (typically impossible in an open
forum; I believe it would even be difficult in a small and well-trained
design team).


Which means there will always be a danger of "lost in translation".





 And, as has been mentioned before, I'd like to emphasize that the IETF
 experience and principle is that, if you want interoperation,
 compliance testing is useless.


Ouch.  I believe this message is misleading or wrong.

Compliance testing is VERY important, rather than useless.

Compliance testing would actually be perfectly sufficient iff the spec
were formally proven to be free of conflicts and ambiguities among the
specified protocol elements -- plus a significant effort spent on
ensuring there were no gaps in the specification.


Do you realize the cost of this amount of testing? (Not saying it 
isn't a good idea, just the probability of convincing a development 
manager to do it is pretty slim.)  ;-)




As it turns out, however, a significant 

Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Dick Franks
On 9 January 2013 01:19, John Day jeanj...@comcast.net wrote:
[snip]

 One person's gap is another person's bug.  What may be obvious to one as
 something that must occur may not be so to the other.  Then there is that
 fine line between what part of the specification is required for the
 specification and what part is the environment of the implementation.

Disagree

A gap in the specification will result in all implementations having
the same unintended behaviour, because the developers understood and
followed the spec 100%.

Bugs are distinguishable from gaps because they occur in some
implementations but not others and arise from misinterpretation of
some aspect of the specification.  In this context, over-engineering
is a bug, as distinct from competitive advantage.


Dick
--


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread John Day

It would be interesting to see you apply that.

This is what I have been talking about.  The human mind's ability to 
believe that the whole world sees everything the same way they do. 
It really is quite amazing.


These so-called gaps often arise because they were unstated 
assumptions or things that the author believed were patently obvious 
and didn't need to be stated. Actually didn't know needed to be 
stated.  From his point of view, no one would do it differently. 
Nothing had been left out and he didn't make the mistake.   What 
the other guys did was a bug.


There is a much greater range of interpreting these things than it 
appears that you imagine.


At 2:46 AM + 1/9/13, Dick Franks wrote:

On 9 January 2013 01:19, John Day jeanj...@comcast.net wrote:
[snip]


 One person's gap is another person's bug.  What may be obvious to one as
 something that must occur may not be so to the other.  Then there is that
 fine line between what part of the specification is required for the
 specification and what part is the environment of the implementation.


Disagree

A gap in the specification will result in all implementations having
the same unintended behaviour, because the developers understood and
followed the spec 100%.

Bugs are distinguishable from gaps because they occur in some
implementations but not others and arise from misinterpretation of
some aspect of the specification.  In this context, over-engineering
is a bug, as distinct from competitive advantage.


Dick
--




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Abdussalam Baryun
This is what I have been talking about. The human mind's ability to believe 
that the whole world sees everything the same way they do. It really is quite 
amazing.

These so-called gaps often arise because they were unstated assumptions or 
things that the author believed were patently obvious and didn't need to be 
stated. Actually didn't know needed to be stated. From his point of view, no 
one would do it differently. Nothing had been left out and he didn't make the 
mistake. What the other guys did was a bug.

That is why I think the IETF calls its standards and specifications
*Requests For Comments*: if there is a gap, or the human mind has failed,
then one should communicate that to the world and report such a gap/bug.

AB


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Stewart Bryant

Speaking as both a reviewer and an author, I would like
to ground this thread in some form of reality.

Can anyone point to specific cases where the absence or overuse
of an RFC 2119 keyword caused an interoperability failure,
or excessive development time?

- Stewart






Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day
As you are guessing, that is unlikely; however, the more pertinent 
question is whether it has prevented some innovative approach to 
implementation.  This would be the more interesting question.


We tend to think of these as state machines and describe them 
accordingly.  There are other approaches which might be precluded by 
using a MUST when it wasn't needed.


At 10:53 AM + 1/7/13, Stewart Bryant wrote:

Speaking as both a reviewer and an author, I would like
to ground this thread in some form of reality.

Can anyone point to specific cases where the absence or overuse
of an RFC 2119 keyword caused an interoperability failure,
or excessive development time?

- Stewart




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Stewart Bryant

Indeed an interesting additional question.

My view is that you MUST NOT use RFC2119 language, unless you MUST use 
it, for exactly that reason. What is important is "on the wire" (a term 
that from experience is very difficult to define) inter-operation, and 
implementers need to be free to achieve that through any means that suits 
them.


- Stewart

On 07/01/2013 12:22, John Day wrote:
As you are guessing, that is unlikely; however, the more pertinent 
question is whether it has prevented some innovative approach to 
implementation.  This would be the more interesting question.


We tend to think of these as state machines and describe them 
accordingly.  There are other approaches which might be precluded by 
using a MUST when it wasn't needed.


At 10:53 AM + 1/7/13, Stewart Bryant wrote:

Speaking as both a reviewer and an author, I would like
to ground this thread in some form of reality.

Can anyone point to specific cases where the absence or overuse
of an RFC 2119 keyword caused an interoperability failure,
or excessive development time?

- Stewart


.







Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Brian E Carpenter
On 07/01/2013 12:42, Stewart Bryant wrote:
 Indeed an interesting additional question.
 
 My view is that you MUST NOT use RFC2119 language, unless you MUST use
  it, for exactly that reason. What is important is "on the wire" (a term
  that from experience is very difficult to define) inter-operation, and
  implementers need to be free to achieve that through any means that suits
  them.

Agreed. Imagine the effect if the TCP standard had said that a particular
congestion control algorithm was mandatory. Oh, wait...

... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
congestion control algorithm is non-conformant. Oh, wait...

... RFC 2001 is a proposed standard defining congestion control algorithms,
but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

RFC 2001 is obsoleted by RFC 2581, which is obsoleted by RFC 5681. These both
use RFC 2119 keywords, but they still don't update RFC 1122.

This is such a rat's nest that it has a guidebook (RFC 5783, "Congestion
Control in the RFC Series") and of course it's still an open research topic.

Attempting to validate TCP implementations on the basis of conformance
with RFC 2119 keywords would be, well, missing the point.

I know this is an extreme case, but I believe it shows the futility of
trying to be either legalistic or mathematical in this area.

   Brian


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day
Let me get this straight, Brian.  It would seem you are pointing out 
that the IETF does not have a clear idea of what it is doing?  ;-)  I 
could believe that.


No, your example is not an example of what I suggested at all.

Yours is an example of not specifying the conditions that a 
congestion control algorithm must have rather than the congestion 
control algorithm itself.


What I was suggesting (and it is a very easy trap to fall into) was 
defining a spec with one implementation environment in mind and not 
realizing you are constraining things unnecessarily. Consider the 
difference between defining TCP as a state machine with that sort of 
implementation in mind and building an implementation in LISP. (I 
know someone who did it.)  It would be very easy to make assumptions 
about how something was described that made a LISP implementation 
unduly messy, or missed an opportunity for a major simplification.


It is quite easy to do something mathematical in this area (not 
necessarily alluding to formal specification), but you do have to 
have a clear concept of the levels of abstraction.  Of course, once 
you do, you still have the question of whether there is a higher 
probability of errors in the math or in the program.


Yes, programming is just math of a different kind, which of course is 
the point.


Take care,
John

At 1:31 PM + 1/7/13, Brian E Carpenter wrote:

On 07/01/2013 12:42, Stewart Bryant wrote:

 Indeed an interesting additional question.

 My view is that you MUST NOT use RFC2119 language, unless you MUST use
  it, for exactly that reason. What is important is "on the wire" (a term
  that from experience is very difficult to define) inter-operation, and
  implementers need to be free to achieve that through any means that suits
  them.


Agreed. Imagine the effect if the TCP standard had said that a particular
congestion control algorithm was mandatory. Oh, wait...

... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
congestion control algorithm is non-conformant. Oh, wait...

... RFC 2001 is a proposed standard defining congestion control algorithms,
but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

RFC 2001 is obsoleted by RFC 2581, which is obsoleted by RFC 5681. These both
use RFC 2119 keywords, but they still don't update RFC 1122.

This is such a rat's nest that it has a guidebook (RFC 5783, "Congestion
Control in the RFC Series") and of course it's still an open research topic.

Attempting to validate TCP implementations on the basis of conformance
with RFC 2119 keywords would be, well, missing the point.

I know this is an extreme case, but I believe it shows the futility of
trying to be either legalistic or mathematical in this area.

   Brian




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Scudder
On Jan 6, 2013, at 11:50 PM, John Day jeanj...@comcast.net wrote:

 However, in the IETF there is also a requirement that there be two 
  independent but communicating implementations for an RFC to be standards-track. 
  Correct?

Alas, no. 

--John


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread ned+ietf
 On 07/01/2013 12:42, Stewart Bryant wrote:
  Indeed an interesting additional question.
 
  My view is that you MUST NOT use RFC2119 language, unless you MUST use
   it, for exactly that reason. What is important is "on the wire" (a term
   that from experience is very difficult to define) inter-operation, and
   implementers need to be free to achieve that through any means that suits
   them.

 Agreed. Imagine the effect if the TCP standard had said that a particular
 congestion control algorithm was mandatory. Oh, wait...

 ... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
 which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
 congestion control algorithm is non-conformant. Oh, wait...

 ... RFC 2001 is a proposed standard defining congestion control algorithms,
 but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

 RFC 2001 is obsoleted by RFC 2581, which is obsoleted by RFC 5681. These both
 use RFC 2119 keywords, but they still don't update RFC 1122.

 This is such a rat's nest that it has a guidebook (RFC 5783, "Congestion
 Control in the RFC Series") and of course it's still an open research topic.

 Attempting to validate TCP implementations on the basis of conformance
 with RFC 2119 keywords would be, well, missing the point.

 I know this is an extreme case, but I believe it shows the futility of
 trying to be either legalistic or mathematical in this area.

Exactly. Looking for cases where the use/non-use of capitalized terms caused an
interoperability failure is a bit silly, because the use/non-use of such terms
doesn't carry that sort of weight.

What does happen is that implementation and therefore interoperability quality
can suffer when standards emphasize the wrong points of compliance. Things
work, but not as well as they should or could.

A fairly common case of this in application protocols is an emphasis on
low-level limits and restrictions while ignoring higher-level requirements. For
example, our email standards talk a fair bit about so-called "minimum maximums"
that in practice are rarely an issue, all the while failing to specify a
mandatory minimum set of semantics all agents must support. This has led to a
lack of interoperable functionality in the long term.

Capitalized terms are both a blessing and a curse in this regard. They make it
easy to point out the really important stuff. But in doing so, they also
make it easy to put the emphasis in the wrong places.

tl;dr: Capitalized terms are a tool, and like any tool they can be misused.

Ned


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day

Alas, indeed.  ;-)


At 3:50 PM + 1/7/13, John Scudder wrote:

On Jan 6, 2013, at 11:50 PM, John Day jeanj...@comcast.net wrote:

 However, in the IETF there is also a requirement that there be two 
independent but communicating implementations for an RFC to be 
standards-track. Correct?


Alas, no.

--John




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Thomas Narten
Stewart Bryant stbry...@cisco.com writes:

 Indeed an interesting additional question.

 My view is that you MUST NOT use RFC2119 language, unless you MUST use 
 it, for exactly that reason. What is important is "on the wire" (a term 
 that from experience is very difficult to define) inter-operation, and 
 implementers need to be free to achieve that through any means that suits 
 them.

The latter goes without saying. It's one of the obvious assumptions
that underlie all IETF protocols. It may not be written down, but it's
always been an underlying principle.

E.g., from RFC 1971:

   A host maintains a number of data structures and flags related to
   autoconfiguration. In the following, we present conceptual variables
   and show how they are used to perform autoconfiguration. The specific
   variables are used for demonstration purposes only, and an
   implementation is not required to have them, so long as its external
   behavior is consistent with that described in this document.

Other documents (that I've long forgotten) say similar things.

That sort of language was put into specific documents specifically
because some individuals sometimes would raise the concern that a spec
was trying to restrict an implementation.
   
IETF specs have always been about describing external behavior
(i.e. what you can see on the wire), and how someone implements
internally to produce the required external behavior is none of the
IETF's business (and never has been).

(Someone earlier on this thread seemed to maybe think the above is not
a given, but it really always has been.)

Thomas



Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day
All standards groups that I am aware of have had the same view.  This 
is not uncommon.


Although I would point out that neither the TCP specification nor most 
protocol specifications of this type follow this rule.  State 
transitions are not visible on the wire.  The rules for the sliding 
window are not described entirely in terms of the behavior seen on 
the line, etc.


I have seen specifications that attempted this, and the 
implementations built from them were very different and did not come 
close to interoperating, or in some cases even to doing the same thing.


In fact, I remember that we thought the new Telnet spec (1973) was a 
paragon of clarity until a new site joined the Net that had not been 
part of the community and came up with an implementation that bore no 
relation to what anyone else had done.


This problem is a lot more subtle than you imagine.

Take care,
John Day

At 4:46 PM -0500 1/7/13, Thomas Narten wrote:

Stewart Bryant stbry...@cisco.com writes:


 Indeed an interesting additional question.



 My view is that you MUST NOT use RFC2119 language, unless you MUST use
 it, for exactly that reason. What is important is "on the wire" (a term
 that from experience is very difficult to define) inter-operation, and
 implementers need to be free to achieve that through any means that suits
 them.


The latter goes without saying. It's one of the obvious assumptions
that underlie all IETF protocols. It may not be written down, but it's
always been an underlying principle.

E.g., from RFC 1971:

   A host maintains a number of data structures and flags related to
   autoconfiguration. In the following, we present conceptual variables
   and show how they are used to perform autoconfiguration. The specific
   variables are used for demonstration purposes only, and an
   implementation is not required to have them, so long as its external
   behavior is consistent with that described in this document.

Other documents (that I've long forgotten) say similar things.


That sort of language was put into specific documents specifically
because some individuals sometimes would raise the concern that a spec
was trying to restrict an implementation.
  
IETF specs have always been about describing external behavior

(i.e. what you can see on the wire), and how someone implements
internally to produce the required external behavior is none of the
IETF's business (and never has been).

(Someone earlier on this thread seemed to maybe think the above is not
a given, but it really always has been.)

Thomas




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-06 Thread Abdussalam Baryun
Hi Marc Petit-Huguenin ,

I read the responses so far, and what can be said today is that there are two
philosophies, with supporters in both camps.  The goal of the IETF is to make
the Internet work better, and I do believe that RFC 2119 is one of the
fundamental tools to reach this goal, but having two ways to use it does not
help this goal.

I like the approach, and agree with you that we need a solution in the
IETF to something that is still not solved, or is ignored by participants
for some reason. I also agree with a survey or an experiment, whatever we
call it, that makes the IETF reflect on how RFC 2119 performs for the
end users of product implementations of RFC protocols. I think many
long-time participants already have enough experience to inform us of some
realities of how IETF standards fare with their end users.

AB


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-06 Thread John Day
Strictly speaking, the language of 2119 should be followed wherever 
necessary in order for the text to be normative and make it mandatory 
that a conforming implementation meets some requirement.  Otherwise, 
someone could build an implementation and claim it was correct and 
possibly cause legal problems. However, in the IETF there is also a 
requirement that there be two independent but communicating 
implementations for an RFC to be standards-track. Correct?


For all practical purposes, this requirement makes being able to 
communicate with one of the existing implementations the formal and 
normative definition of the RFC.  Any debate over the content of the 
RFC text is resolved by what the implementations do.  It would seem 
to be at the discretion of the authors of the implementations to 
determine whether or not any problems that are raised are bugs or not.


Then it would seem that regardless of whether 2119 is followed, the 
RFCs are merely informative guides.


So while the comments are valid that RFC 2119 should be followed, 
they are also irrelevant.


Given that any natural language description is going to be ambiguous, 
this is probably for the best.


Take care,
John Day

At 9:41 AM +0100 1/6/13, Abdussalam Baryun wrote:

Hi Marc Petit-Huguenin ,


I read the responses so far, and what can be said today is that there are two
philosophies, with supporters in both camps.  The goal of the IETF is to make
the Internet work better, and I do believe that RFC 2119 is one of the
fundamental tools to reach this goal, but having two ways to use it does not
help this goal.

I like the approach, and agree with you that we need a solution in the
IETF to something that is still not solved, or is ignored by participants
for some reason. I also agree with a survey or an experiment, whatever we
call it, that makes the IETF reflect on how RFC 2119 performs for the
end users of product implementations of RFC protocols. I think many
long-time participants already have enough experience to inform us of some
realities of how IETF standards fare with their end users.

AB