Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-07 Thread ned+ietf
 On Fri, Sep 6, 2013 at 6:02 PM, Tim Bray tb...@textuality.com wrote:

  How about a BCP saying conforming implementations of a wide variety of
  security-area RFCs MUST be open-source?
 
  *ducks*
 

 And the user MUST compile them themselves from the sources?

 Nobody runs open source (unless it's an interpreted language). They run the
 compiled version and there is no infrastructure to check up on the
 compilation.

And don't forget:

   http://cm.bell-labs.com/who/ken/trust.html

Ned


Re: stability of iana.org URLs

2013-08-01 Thread ned+ietf
 Hi,

 The link in RFC3315 is actually incorrect -- it should have been 
 http://www.iana.org/assignments/enterprise-numbers, without the file 
 extension, and there's an erratum about this. HTML was generally (if not 
 exclusively) reserved for files that needed to include links to registration 
 forms.

 As Barry said, we do intend to keep that same short 
 http://www.iana.org/assignments/example format working for every current 
 page, even the newly-created ones. We also prefer to see that format used in 
 documents, since we can't guarantee that the file extension used for the long 
 version won't change. (This information will be appearing on the website in 
 some form.)


 Also, if you find that a formerly-valid URL (like one that used to have an 
 .html exception) isn't redirecting to the current page, please report it to 
 i...@iana.org. A redirect should have been set up.

Excellent, thanks.

Ned


Re: stability of iana.org URLs

2013-07-31 Thread ned+ietf
 On 7/31/13 4:06 PM, Barry Leiba wrote:
  I just followed http://www.iana.org/assignments/enterprise-numbers.html
  from RFC3315 (DHCPv6)'s reference section.  Ten years later, the URL
  doesn't work.
 
  I know that things were reworked when we went to XML based storage, but
  I thought that the old URLs would at least have a 301 redirect on them.
 
  I discovered that dropping the .html gets me the right data at:
http://www.iana.org/assignments/enterprise-numbers
 
  Yes: that's the form that IANA would like you to use.  They changed
  their registries from HTML to XML, and the URLs changed.

 That's true, but cool URIs don't change:

 http://www.w3.org/Provider/Style/URI.html

 IMHO it'd be easy enough to put general redirects in place.

+1. mod_rewrite is nobody's friend, but it can be tamed and put to good use.
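
For example, something along these lines (a sketch only - the actual rule set
and registry paths are of course IANA's call) would 301 the old .html forms
onto the extensionless URLs:

    RewriteEngine On
    # Hypothetical rule: permanently redirect /assignments/<name>.html
    # (or .xhtml, .txt) to the stable /assignments/<name> form.
    RewriteRule ^/assignments/([^/.]+)\.(html|xhtml|txt)$ /assignments/$1 [R=301,L]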

Ned


Re: Content-free Last Call comments

2013-06-12 Thread ned+ietf
Dave Cridland wrote:

 I strongly feel that positive statements have value, as they allow the
 community to gauge the level of review and consensus, and I suspect that
 human nature means that we get more reviews if people get to brag about it.

Agreed 100%.

But also consider the likely effect of calling certain comments useless.

Discussions like this don't exactly fire me up with enthusiasm to expend 
additional time reviewing and commenting on documents not directly related to
what I do. And I rather doubt I'm alone in this.

Ned


Re: Language editing

2013-05-07 Thread ned+ietf
 Maybe things have changed, but, if one actually believes the
 robustness principle, then, in the case Geoff cites, Exchange is
 simply non-conforming -- not because the spec prohibits
 rejecting on the basis of a fine distinction about IPv6 formats,
 but because doing so is unnecessary, inconsistent with the
 robustness principle, and, arguably, plain silly.

I'm afraid I'm going to have to disagree here. If you look at RFC 5321 and
are unaware of the history of how the text came about, it gives the definite
appearance of going out of its way to ban the use of :: to replace a single
0. A reasonable interpretation is therefore that such forms are disallowed
for a reason.

It's fine to be tolerant of stuff the relevant standard doesn't
allow but doesn't call out explicitly. But that's not the case here.

In any case, if you want to fix this, we could change RFC 5321 to accept this
form. But as Mark Andrews points out, you can't make it legal to send such
forms without breaking interoperability. I suppose we could make the change and
recycle at proposed, but that seems rather extreme to fix what is in fact a
nonissue.

I'll also point out that this has diddley-squat to do with formal verification
processes. Again as Mark Andrews points out, we deployed something with a
restriction that subsequently turned out to be unnecessary, and now we're stuck
with it. Indeed, had formal verification processes been properly used, they
would have flagged any attempt to change this as breaking interoperability.

Ned


Re: Language editing

2013-05-07 Thread ned+ietf
 On 08/05/2013 03:28, John C Klensin wrote:
 ...
  I'll also point out that this has diddley-squat to do with
  formal verification processes. Again as Mark Andrews points
  out, we deployed something with a restriction that
  subsequently turned out to be unnecessary, and now we're stuck
  with it. Indeed, had formal verification processes been
  properly used, they would have flagged any attempt to change
  this as breaking interoperability.
 
  Also agreed.

 To be clear, I'm no fan of formal verification either, but this
 *is* a case where the IETF's lapse in formality has come back to
 bite, and the Postel principle would have helped. Also, given the
 original subject of the thread, I don't see how language editing
 could have made any difference.

Reread the notes about the history behind this in this thread. You haven't even
come close to making a case that formal verification of the standards would
have prevented this from happening. (Formal verification of implementation
compliance to the standards would of course have prevented Apple's client bug,
but that's a very different thing.)

You are, however, correct that this has nothing to do with specification
editing.

Ned


Re: Language editing

2013-05-07 Thread ned+ietf
 On 08/05/2013 08:33, Ned Freed wrote:
  On 08/05/2013 03:28, John C Klensin wrote:
  ...
  I'll also point out that this has diddley-squat to do with
  formal verification processes. Again as Mark Andrews points
  out, we deployed something with a restriction that
  subsequently turned out to be unnecessary, and now we're stuck
  with it. Indeed, had formal verification processes been
  properly used, they would have flagged any attempt to change
  this as breaking interoperability.
  Also agreed.
 
  To be clear, I'm no fan of formal verification either, but this
  *is* a case where the IETF's lapse in formality has come back to
  bite, and the Postel principle would have helped. Also, given the
  original subject of the thread, I don't see how language editing
  could have made any difference.
 
  Reread the notes about the history behind this in this thread. You haven't even
  come close to making a case that formal verification of the standards would
  have prevented this from happening.

 You are correct if only considering the mail standards. I suspect
 that a serious attempt at formal verification would have thrown
 up an inconsistency between the set of mail-related standards and
 the URI standard.

Which is relevant to the present situation... how exactly? And in any case, the
relevant URI standard incorporates the ABNF from RFC 2373, but doesn't
state whether or not it also inherits restrictions specified in prose
in that specification, which is where the restriction in RFC 2821
originated.

 However, I think the underlying problem here is
 that we ended up defining the text representation of IPv6 addresses
 in three different places, rather than having a single normative
 source. (ABNF in the mail standards, ABNF in the URI standard,
 and English in ipv6/6man standards.)

Except that wasn't the problem. The ABNF in the email standards is
consistent with what the other standards said when RFC 2821 was published. And
once that happened the die was cast as far as email usage was concerned. The
fact that the other standards later decided to loosen the rules in this
regard is what caused the inconsistency.

If you want to blame something, it has to be either the initial decision to
limit use of :: or the subsequent decision to remove that limit. And for
increased formalism to have helped it would have to have prevented one of those
from happening. I suppose that's possible, but I certainly don't see it as
inevitable.

Ned


Re: Language editing

2013-05-06 Thread ned+ietf
 Mark Andrews ma...@isc.org wrote:
 
  Apple's mail client is broken [IPv6:2001:df9::4015:1430:8367:2073:5d0]
  is not legal according to both RFC 5321 and RFC 2821 which is all
  that applies here.

I was until today unaware of how strong the feelings are on this
 one-or-more vs. two-or-more issue. I do not expect to change
 anybody's mind. :^(

But I do object to calling that EHLO string not legal.

It's syntactically illegal according to the ABNF in RFC 5321 itself, which
specifically states:

 IPv6-comp      = [IPv6-hex *5(":" IPv6-hex)] "::"
                  [IPv6-hex *5(":" IPv6-hex)]
                  ; The "::" represents at least 2 16-bit groups of
                  ; zeros.  No more than 6 groups in addition to the
                  ; "::" may be present.

The 5321 reference names RFC 4291 as the source of address syntax
 (even if it gives BNF which says two or more if you delve deeply
 enough).

It may be the source, but the formal syntax specified in the document at hand
has to be the definitive one.

RFC 4291 is clear about saying one or more. The Errata posted
 against it claiming it should say two or more have been rejected.
 It is silly to argue under these conditions that Apple's EHLO string
 is not legal.

No, what's silly is your argument that the ABNF in the actual specification of
how mail clients and servers are supposed to behave isn't the definitive
definition.

BTW, RFC 5321 still contains the language about
  if the verification fails, the server MUST NOT refuse to accept a
  message on that basis.

And that language appears in the context of checking that the IP literal
in the HELO/EHLO actually corresponds to the IP address of the client. It
has nothing to do with syntactic validity.

 so IMHO enforcing any particular interpretation of what an IPv6
 address literal should look like is double-plus-ungood.

Then you should be arguing for a change in RFC 5321, because it is *very*
clear that this usage is not allowed.
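
To make the difference concrete, here's a minimal sketch in Python (the helper
name is mine, and the IPv6v4 dotted-quad forms are ignored for brevity). The
stdlib parser implements the RFC 4291 one-or-more reading; the extra check
captures the stricter RFC 5321 rule:

    import ipaddress

    def smtp_ipv6_literal_ok(lit):
        # lit is the text after the "IPv6:" tag, e.g.
        # "2001:df9::4015:1430:8367:2073:5d0" from the EHLO in question.
        try:
            ipaddress.IPv6Address(lit)   # RFC 4291: "::" may cover one group
        except ValueError:
            return False
        if "::" in lit:
            # RFC 5321: no more than 6 groups in addition to the "::",
            # i.e. the "::" must stand for at least two 16-bit groups.
            groups = [g for g in lit.split(":") if g]
            return len(groups) <= 6
        return True

The literal in question has seven explicit groups plus a ::, so it passes the
RFC 4291 test and fails the RFC 5321 one - which is precisely the disagreement
here.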

Ned


Re: Internet Draft Final Submission Cut-Off Today

2013-02-27 Thread ned+ietf

 On Feb 26, 2013, at 5:38 PM, Pete Resnick presn...@qti.qualcomm.com wrote:

  But more seriously: I agree with you both. The deadline is silly.

 +1

+1

 The deadline originated because the secretariat needed time to post all of 
 those drafts (by hand) before the meeting.  The notion of an automated tool 
 that blocks submissions for two weeks before the meeting is just silly.

All the more so since it just leads people to use informal distribution
methods. I don't recall a case where a chair forbade the discussion of a draft
distributed this way.

I recall hearing something once about routing around obstacles... Pity we don't
internalize such principles fully.

Ned


Re: Internet Draft Final Submission Cut-Off Today

2013-02-27 Thread ned+ietf

On 02/27/2013 01:49 PM, Carsten Bormann wrote:
 On Feb 27, 2013, at 19:18, ned+i...@mauve.mrochek.com wrote:

 routing around obstacles
 It turns out for most people the easiest route around is submitting in time.

 That is actually what counts here: how does the rule influence the behavior 
of people.

 Chair hat: WORKSFORME.  (And, if I could decide it, WONTFIX.)
+1.



As far as I can tell, the deadline actually serves the purpose of
getting people to focus on IETF and update their documents sufficiently
prior to the meeting, that it's reasonable to expect meeting
participants to read the drafts that they intend to discuss.   And I say
this as someone who, as an author, has often found the deadline to be
very inconvenient.


And your evidence for this is .. what exactly? Yes, the deadline makes the
drafts show up a bit sooner, but I rather suspect that the overwhelming
majority of people don't bother to do much reading in the interval. I certainly
don't.

And given the readily available tools to tell the reader what's changed I don't
need to. In almost all cases for -nn where nn > 00 I can check what's changed
in a few minutes, and I can do it in the context of the actual work being done.

I don't really have any objection to a -00 cutoff, but the second cutoff
is nothing short of asinine.

In any case, I personally have basically stopped caring about the deadline and
I encourage others to do the same. If I make the deadline fine, if not I post
the update somewhere else, done.

Ned


Re: The RFC Acknowledgement

2013-02-08 Thread ned+ietf
 I try to include in the Acknowledgements section of any Internet
 Drafts I edit the names of anyone who comments on the draft if (1) the
 comment results in a change in the draft and (2) the commenter does
 not request that they be left out. If you comment on some draft and
 the draft is changed as a result and you want to be acknowledged and
 you are not added to the acknowledgements list, you should complain to
 the editor / author.

That's exactly the policy Nathaniel Borenstein and I agreed to use for MIME.
I've used it ever since for all the documents I have edited, and it seems to
have worked well. (And apologies to anyone whose name I have omitted under that
policy - if I did that it was entirely inadvertent.)

The only time I've ever had trouble with an acknowledgments section has been
when an author or contributor is deceased. This very unfortunate situation is
quite delicate and merits handling on a case-by-case basis; IMO no specific
policy could possibly be written to accommodate it.

Ned


Re: Vestigial Features (was Re: CRLF (was: Re: A modest proposal))

2013-01-24 Thread ned+ietf
 On Jan 24, 2013, at 04:41, wor...@ariadne.com (Dale R. Worley) wrote:

  From: Carsten Bormann c...@tzi.org
 
  I think in protocol evolution (as well as computer system evolution
  in general) we are missing triggers to get rid of vestigial
  features.
 
  That's quite true.  Let us start by rationalizing the spelling and
  punctuation of written English (which is the coding system for *this
  entire discussion*).  Once we've cleaned up that idiocy, we can start
  in on SIP.

 I see I didn't make myself clear.
 I'm not suggesting we clean up vestigials in existing spec[ie]s, such as HTTP 
 or SIP.
 (We might even do that, see HTTPbis, but only very carefully.)

 My point was about the case when we clone new stuff off existing protocols.
 (SIP was cloned off HTTP which was cloned off SMTP which was cloned off FTP
 which at least has had strong kinship with Telnet, hence all these use NVT.)

Actually, in regards to NVT, there's something of a break in HTTP: text
material is *not* required to use CRLF as a line terminator. Any of LF, CR, or 
CRLF is permissible.

I really don't want to get into the history of this choice or for that matter
the results it has produced. Suffice it to say it has had some advantages and
some disadvantages.
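
(For what it's worth, a tolerant consumer just treats all three as
terminators - a one-line Python sketch:

    import re

    def split_lines(text):
        # Accept CRLF, bare CR, or bare LF; CRLF must be tried first so
        # it isn't counted as a CR followed by an LF.
        return re.split(r"\r\n|\r|\n", text)

cheap to write, but it pushes a small cost onto every consumer.)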

 Staying in your analogy: when you design a new language based on English, 
 please do fix some of this stuff.
 (Maybe the analogy isn't that useful after all.)

 Actually, we haven't been that bad about this.

 E.g., we have been pretty good about getting rid of the madness of historic
 character coding by focusing on UTF-8 for new designs.

But a vital part of that choice is that it is backwards compatible with
US-ASCII.
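
That compatibility is easy to check - any pure US-ASCII string encodes to
identical octets either way. A trivial sketch in Python:

    s = "MIME-Version: 1.0"
    # UTF-8 is a strict superset of US-ASCII for the first 128 code points,
    # so legacy ASCII-only agents see exactly the same bytes.
    assert s.encode("ascii") == s.encode("utf-8")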

 What I'm asking for: Apart from these big reformation projects, we also
 should occasionally fix little things.

Or not. How about instead of viewing these sorts of past choices as things to be
fixed, we instead view them as what they were: Choices. And then decide, based
on the actual requirements of whatever we're doing, whether or not it makes
sense to make a change.

Sure, we can design some new format with LFs or CRs or maybe even line or page
separators as line breaks instead of CRLFs. And maybe that makes sense in some
cases. Or not: Such choices can have serious consequences if any degree of
backwards compatibility with existing transfer services is desired. And constant
transcoding of stuff to make it work in various places is not necessarily a
winning approach.

 When inventing new stuff by drawing analogies off of old specs.

I really don't think unthinking copying from past specifications is the issue
here, your assertions to the contrary notwithstanding.

Ned


Re: Last Call: draft-farrell-ft-03.txt (A Fast-Track way to RFC with Running Code) to Experimental RFC

2013-01-17 Thread ned+ietf

 Hi Ned,

 On 01/16/2013 03:40 AM, Ned Freed wrote:
  Actually I think you make a couple of great points that ought be
  mentioned in the draft about implementability. (No chance you'd
   have time to craft a paragraph? If not, I'll try to pinch text from
  above:-) Now that you point it out like that, I'm irritated at
  myself for not having included it already! (It rings bells for me.)
 
  OK, I think the place for a new paragraph is just before the last
  paragraph of section 2. How about something along the lines of:
 
 A complete and correct specification is not in and of itself a guarantee of
 high quality implementations. What may seem like minor details can
 increase implementation difficulty substantially, leading to implementations
 that are fragile, contain unnecessary restrictions, or do not scale well.
 Implementation experience has the potential to catch these problems before
 the specification is finalized and becomes difficult to change.
 
  You might also want to change the final paragraph of the section a bit in
  light of the addition; I'll leave that to you to work out.

 Did that, working copy at [1]. Lemme know if there're any changes
 that are needed.

It looks good to me. 

I also note that there's a reference to interoperability in the fourth
paragraph of section 1. Perhaps changing

   For example, a framework draft will not be a good candidate because
   implementations of such documents are not, of themselves,
   interoperable.

to something like

   For example, a framework draft will not be a good candidate because
   implementations of such documents are incomplete and therefore do
   not demonstrate either implementability or interoperability of an
   entire protocol.

would be in order.

Ned


Re: Last Call: draft-farrell-ft-03.txt (A Fast-Track way to RFC with Running Code) to Experimental RFC

2013-01-15 Thread ned+ietf
Martin Rex wrote:

 John Leslie wrote:
 
 I'm pretty darn uncomfortable _ever_ picking a fight with any
  sitting AD, But I feel obligated to say this seems like a terrible
  idea to me.
 
 As a background, I'm a long-time believer in rough consensus for
  Proposed Standard and running code for advancement along the
  standards track. I do not believe the two mix well.

 I don't have the resources to participate in the discussion, but
 these statements capture my opinion pretty well.

I'm reluctantly going to have to join John and Martin, with one very important
difference: I think there is huge value in implementing specifications during
the process leading up to proposed standard. (As many people know, I do this
myself quite often for drafts I'm interested in.) I would rather phrase it as
having demonstrable interoperability for advancement, and ideally having
implementations done much earlier.

More specifically, where I part ways with this draft is in believing that very
early implementation helps find interoperability issues. I have not found that
to be the case. What it helps find are issues affecting implementability. In
the applications area at least, it's surprisingly easy to specify things that
look good on paper but turn out to suck when you try to code them. (My own sins
in this area are well known - RFC 2231 - and one of the reasons for that is
that it is one of the few RFCs I've written without implementing it first.)

Now, it's quite true that implementability problems can lead to
interoperability problems, like when different people take different shortcuts
in coding an unnecessarily problematic specification. But other outcomes are
possible, including fragility, unnecessary restrictions, and most especially
scalability problems.

Another problem with focusing on interoperability specifically as opposed to
overall implementability is that it makes it hard to argue that one or two
implementations provide that much benefit. In my experience having just one
implementation, even one done by a draft author, is surprisingly helpful in
cleaning up drafts.

I guess what I would like to see is for the draft to talk a little more about
finding implementation issues in general and a lot less about finding
interoperability issues specifically. I also think the draft goes a bit far in
the carrots it provides, but that may have more to do with my own experiences
in the applications area, where important comments have a way of only showing
up at the last second and where therefore the abbreviated process might be a
little dangerous to use.

Finally, I'm going to apologize for the tardiness of this comment, which really
should have been made sooner. I'm also going to apologize in advance for
probably not being able to fully participate in this discussion due to severe
time constraints.

Ned


Re: Last Call: draft-farrell-ft-03.txt (A Fast-Track way to RFC with Running Code) to Experimental RFC

2013-01-15 Thread ned+ietf
 Actually I think you make a couple of great points that ought be
 mentioned in the draft about implementability. (No chance you'd
 have time to craft a paragraph? If not, I'll try to pinch text from
 above:-) Now that you point it out like that, I'm irritated at
 myself for not having included it already! (It rings bells for me.)

OK, I think the place for a new paragraph is just before the last
paragraph of section 2. How about something along the lines of:

   A complete and correct specification is not in and of itself a guarantee of
   high quality implementations. What may seem like minor details can
   increase implementation difficulty substantially, leading to implementations
   that are fragile, contain unnecessary restrictions, or do not scale well.
   Implementation experience has the potential to catch these problems before
   the specification is finalized and becomes difficult to change.

You might also want to change the final paragraph of the section a bit in
light of the addition; I'll leave that to you to work out.

Hope this helps.

Ned



Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread ned+ietf
 On 07/01/2013 12:42, Stewart Bryant wrote:
  Indeed an interesting additional question.
 
  My view is that you MUST NOT use RFC2119 language, unless you MUST use
  it, for exactly that reason. What is important is on the wire (a term
  that from experience is very difficult to define) inter-operation, and
  implementers need to be free to achieve that though any means that suits
  them.

 Agreed. Imagine the effect if the TCP standard had said that a particular
 congestion control algorithm was mandatory. Oh, wait...

 ... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
 which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
 congestion control algorithm is non-conformant. Oh, wait...

 ... RFC 2001 is a proposed standard defining congestion control algorithms,
 but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

 RFC 2001 is obsoleted by RFC 2581 which is obsoleted by RFC 5681. These both
 use RFC 2119 keywords, but they still don't update RFC 1122.

 This is such a rat's nest that it has a guidebook (RFC 5783, Congestion
 Control in the RFC Series) and of course it's still an open research topic.

 Attempting to validate TCP implementations on the basis of conformance
 with RFC 2119 keywords would be, well, missing the point.

 I know this is an extreme case, but I believe it shows the futility of
 trying to be either legalistic or mathematical in this area.

Exactly. Looking for cases where the use/non-use of capitalized terms caused an
interoperability failure is a bit silly, because the use/non-use of such terms
doesn't carry that sort of weight.

What does happen is that implementation and therefore interoperability quality
can suffer when standards emphasize the wrong points of compliance. Things
work, but not as well as they should or could.

A fairly common case of this in application protocols is an emphasis on
low-level limits and restrictions while ignoring higher-level requirements. For
example, our email standards talk a fair bit about so-called minimum maximums
that in practice are rarely an issue, all the while failing to specify a
mandatory minimum set of semantics all agents must support. This has led to
lack of interoperable functionality in the long term.

Capitalized terms are both a blessing and a curse in this regard. They make it
easy to point out the really important stuff. But in doing so, they also
make it easy to put the emphasis in the wrong places.

tl;dr: Capitalized terms are a tool, and like any tool they can be misused.

Ned


Re: I'm struggling with 2219 language again

2013-01-07 Thread ned+ietf

Dean, I am struggling constantly with 2119 as an AD, because if I take
the letter (and the spirit) of 2119 at face value, a lot of people are
doing this wrong. And 2119 is a BCP; it's one of our process documents.
So I'd like this to be cleared up as much as you. I think there is
active harm in the misuse we are seeing.



To Ned's points:



On 1/4/13 7:05 PM, ned+i...@mauve.mrochek.com wrote:
 +1 to Brian and others saying upper case should be used sparingly, and
 only where it really matters. If even then.

 That's the entire point: The terms provide additional information as to
 what the authors consider the important points of compliance to be.




We will likely end up in violent agreement, but I think the above
statement is incorrect. Nowhere in 2119 will you find the words
conform or conformance or comply or compliance, and I think
there's a reason for that: We long ago found that we did not really care
about conformance or compliance in the IETF. What we cared about was
interoperability of independently developed implementations, because
independently developing implementations that interoperate with other
folks is what makes the Internet robust. Importantly, we specifically
did not want to dictate how you write your code or tell you specific
algorithms to follow; that makes for everyone implementing the same
brittle code.


Meh. I know the IETF has a thing about these terms, and insofar as they  can
lead to the use of and/or overreliance on compliance testing rather than
interoperability testing, I agree with that sentiment.

OTOH, when it comes to actually, you know, writing code, this entire attitude
is IMNSHO more than a little precious. Maybe I've missed them, but in my
experience our avoidance of these terms has not resulted in the magical
creation of a widely available perfect reference implementation that allows me
to check interoperability. In fact in a lot of cases when I write code I have
absolutely nothing to test against - and this is often true even when I'm
implementing a standard that's been around for many years.

In such cases the use of compliance language - and yes, it is compliance
language, the avoidance of that term in RFC 2119 notwithstanding - is
essential. And for that matter it's still compliance language even if RFC 2119
terms are not used.

I'll also note that RFC 1123 most certainly does use the term compliant in
regards to capitalized terms it defines, and if nitpicking on this point
becomes an issue I have zero problem replacing references to RFC 2119 with
references to RFC 1123 in the future.

All that said, I'll again point out that these terms are a double-edged sword,
and can be used to put the emphasis in the wrong place or even to specify
downright silly requirements. But that's an argument for better review of our
specifications, because saying MUST do this stupid and counterproductive thing
isn't fixed in any real sense by removing the capitalization.


The useful function of 2119 is that it allows us to document the
important *behavioral* requirements that I have to be aware of when I am
implementing (e.g., even though it's not obvious, my implementation MUST
send such-and-so or the other side is going to crash and burn; e.g.,
even though it's not obvious, the other side MAY send this-and-that, and
therefore my implementation needs to be able to handle it). And those
even though it's not obvious statements are important. It wastes my
time as an implementer to try to figure out what interoperability
requirement is meant by, You MUST implement a variable to keep track of
such-and-so-state (and yes, we see these in specs lately), and it makes
for everyone potentially implementing the same broken code.


Good point. Pointing out the nonobvious bits where things have to be done in a
certain way is probably the most important use-case for these terms.

Ned


Re: I'm struggling with 2219 language again

2013-01-04 Thread ned+ietf
 +1 to Brian and others saying upper case should be used sparingly, and
 only where it really matters. If even then.

That's the entire point: The terms provide additional information as to 
what the authors consider the important points of compliance to be.

 The notion (that some have) that MUST means you have to do something
 to be compliant and that a must (lower case) is optional is just
 nuts.

In some ways I find the use of SHOULD and SHOULD NOT to be more useful
than MUST and MUST NOT. MUST and MUST NOT are usually obvious. SHOULD and
SHOULD NOT are things on the boundary, and how boundary cases are handled
is often what separates a good implementation from a mediocre or even poor
one.

 If the ARP spec were to say, upon receipt of an ARP request, the
 recipient sends back an ARP response, does the lack of a MUST there
 mean the response is optional? Surely not. And if we make it only a
 SHOULD (e.g., to allow rate limiting of responses - a very reasonable
 thing to do), does lack of MUST now make the feature optional from a
 compliance/interoperability perspective?

 The idea that upper case language can be used to identify all the
  required parts of a specification from a
 compliance/conformance/interoperability perspective is just
  wrong. This has never been the case (and would be exceedingly painful to
 do), though (again) some people seem to think this would be useful and
 thus like lots of upper case language.

At most it provides the basis for a compliance checklist. But such checklists
never cover all the points involved in compliance. Heck, most specifications in
toto don't do that. Some amount of common sense is always required.

 Where you want to use MUST is where an implementation might be tempted
 to take a short cut -- to the detriment of the Internet -- but could
 do so without actually breaking interoperability. A good example is
 with retransmissions and exponential backoff. You can implement those
 incorrectly (or not at all), and still get interoperability. I.e.,
 two machines can talk to each other. Maybe you don't get good
  interoperability and maybe not great performance under some
  conditions, but you can still build an interoperable implementation.

 IMO, too many specs seriously overuse/misuse 2119 language, to the
 detriment of readability, common sense, and reserving the terms to
 bring attention to those cases where it really is important to
 highlight an important point that may not be obvious to a casual
 reader/implementor.

Sadly true.

Ned


Re: WCIT outcome?

2013-01-02 Thread ned+ietf
  From: John Day jeanj...@comcast.net

  I remember when a modem came with an 'acoustic coupler' because
  connecting it directly to the phone line was illegal.
  No, there was nothing illegal about it. The reason for acoustic
  couplers was that the RJ-11 hadn't been invented yet and it was a pain to
  unscrew the box on the wall and re-wire every time you wanted to
  connect.
  ...
  It may have been illegal in some countries but certainly not in the US.

 Huh? Remember the Carterphone decision?

Absolutely. Too bad the FCC didn't see fit to extend it to wireless.

 The one that overturned FCC Tariff Number 132: No equipment, apparatus,
 circuit or device not furnished by the telephone company shall be attached to
 or connected with the facilities furnished by the telephone company, whether
 physically, by induction or otherwise.

 Now, your point about rewiring the jack may in fact be the reason for
 _post-Carterphone_ acoustic couplers, but it was indeed at one time illegal
  to connect directly (other than AT&T/WE supplied equipment).

I'm skeptical about this last part. Prior to the advent of RJ-11 Bell System
line cords used a large polarized four pin jack. After Carterphone all sorts of
stuff started to appear to accommodate these, including extension cords,
plug-jack passthroughs, and even cube taps.

At one point there was something that said one phone in each home had to be
directly wired without a plug. I don't know if this was a regulation, a phone
company rule, or just a suggestion, but it also fell by the wayside after
Carterphone.

I certainly saw acoustic coupled equipment in use long after Carterphone, but
in my experience it was because of general inertia/unwillingness to do the
necessary engineering, not because of the lack of connectors.

Ned


Acoustic couplers (was: Re: WCIT outcome?)

2013-01-02 Thread ned+ietf



On 1/2/2013 1:34 PM, ned+i...@mauve.mrochek.com wrote:
 Now, your point about rewiring the jack may in fact be the reason for
 _post-Carterphone_ acoustic couplers, but it was indeed at one time illegal
  to connect directly (other than AT&T/WE supplied equipment).

 I'm skeptical about this last part. Prior to the advent of RJ-11 Bell System
 line cords used a large polarized four pin jack. After Carterphone all sorts of
 stuff started to appear to accommodate these, including extension cords,
 plug-jack passthroughs, and even cube taps.



Acoustic couplers date back farther than the 4-pin plugs.


Of course. However, we're talking about post-Carterphone here. Carterphone was
1968, and I'm sure four pin plugs were in use by then.

Also keep in mind that AT&T fought the Carterphone decision for many years.
They got some state regulators to issue their own restrictions, but the FCC
nixed them all. Then they said a special protection device had to be used. The
FCC shot that down too. They also tried fees, but for that to work people had
to tell AT&T to charge them, which of course didn't happen.


...



 At one point there was something that said one phone in each home had to be
 directly wired without a plug. I don't know if this was a regulation, a phone
 company rule, or just a suggestion, but it also fell by the wayside after
 Carterphone.



It was usually enforced rigorously.  A given field tech might choose to
overlook a local mod, but they were authorized to remove such things.



So in my apartment, I installed a shutoff switch to the line, to be able
to sleep through attempts by my boss to call me in to work an additional
shift as a computer operator, at UCLA, around 1970 -- if I answered, I
was required to come in.  Remember there was no caller ID in those days.



The tech who needed to work on my phone service was very clear that he
was supposed to remove it.  After checking that I had handled the wiring
acceptably, he looked at me and said so if I remove this, you'll
probably just reinstall it, right?  He then left it in place.


A line mod was probably against the rules irrespective of Carterphone in those
days. But had you bought your own phone with a ringer switch and hooked that
up, that absolutely would have been covered by Carterphone. Of course you would
then have had to convince AT&T of that - see above.

Ned


Re: IETF work is done on the mailing lists

2012-11-27 Thread ned+ietf
 So here's my question:
 Does the community want us to push back on those situations?  Does the
 community believe that the real IETF work is done on the mailing
 lists, and not in the face-to-face meetings, to the extent that the
 community would want the IESG to refuse to publish documents whose
 process went as I've described above, on the basis that IETF process
 was not properly followed?

The issue isn't the lack of comments but any potential lack of opportunity to
comment. If the document was announced on the list, preferably including
ancillary information about changes that have been made, and people chose not
to comment there, then that's fine. But if information about the document
wasn't made available - as is sometimes the case if the document isn't named
under the WG - then that's a problem.

Ned


Re: Gen-ART LC review of draft-leiba-5322upd-from-group-06

2012-10-17 Thread ned+ietf
 Minor issues:

 1. It is not clear from the draft what the use case for using the group
 construct is.  Section 3 talks about the issues with using the group
 construct and recommends limited use, but this is the only information.

The main driver for this work is to add support for EAI downgrade mechanisms,
although a good case can also be made that the present restrictions on group
usage are silly, limiting and unnecessary. (The silliness arises from the
observation that these restrictions aren't enforced by most agents in the field.)

It is far from clear that it makes sense to clutter up this very clean and
crisp specification with a bunch of EAI use case scenarios, especially since
this is an update to RFC 5322 that could in theory be merged into the main
document in the future. A decision taken long ago for both RFC 5321 and 5322
was to keep them as free of entanglements with technology that's layered on top
of them (e.g., MIME, various SMTP extensions) as possible.

In any case, the decision of the group was not to do this.

 2. Section 2.1 says If the sender field uses group syntax, the group
 MUST NOT contain more than one mailbox.  Why use a group name for a single
 mailbox?

Why not? I'm sorry, but this question makes no sense. A group is a
characteristic attached to a collection of zero or more addresses. A group
containing a single member is every bit as valid a construct as a group
containing 20 members. Or zero, for that matter.
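
If seeing one helps, here's how single-member and empty groups render - a quick
sketch using Python's email.headerregistry (the names and addresses are made up):

    from email.headerregistry import Address, Group

    solo  = Group("Postmaster", (Address("Ned Freed", "ned", "example.com"),))
    empty = Group("undisclosed-recipients", ())
    print(solo)    # Postmaster: Ned Freed <ned@example.com>;
    print(empty)   # undisclosed-recipients:;

Both are equally well-formed per the grammar; the only question has ever been
which header fields may carry them.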

Ned


Re: Gen-ART LC review of draft-leiba-5322upd-from-group-06

2012-10-17 Thread ned+ietf




On 10/17/2012 10:49 AM, ned+i...@mauve.mrochek.com wrote:
 Minor issues:

 1. It is not clear from the draft what the use case for using the group
 construct is.  Section 3 talks about the issues with using the group
 construct and recommend limited use, but this is the only information.

 The main driver for this work is to add support for EAI downgrade mechanisms,




Although the Intro text cites this, it doesn't explain how the change
will help.  Nor is this explained elsewhere in the document.



A single sentence summarizing what benefit is achieved with the change,
along with a couple of usage examples, would go a long way towards
showing how this update helps in practical ways.


I could live with a single sentence, but I strongly object to the inclusion
of examples, for the reasons I gave in my original response.

Ned


Re: Gen-ART LC review of draft-leiba-5322upd-from-group-06

2012-10-17 Thread ned+ietf


 --On Wednesday, October 17, 2012 12:00 -0700
 ned+i...@mauve.mrochek.com wrote:

  A single sentence summarizing what benefit is achieved with
  the change, along with a couple of usage examples, would go a
  long way towards showing how this update helps in practical
  ways.
 
  I could live with a single sentence, but I strongly object to
  the inclusion
  of examples, for the reasons I gave in my original response.

 Would a possible middle ground be to include a single
 well-crafted sentence with an informative citation of
 draft-ietf-eai-popimap-downgrade?  That document does contain
 examples and an explanation of that particular use case.

 I'd prefer to avoid that entirely for the same reasons Ned
 cites, but it would, IMO, be good to get on with this rather
 than quibbling for days or weeks.

Channeling my inner Maslow, I see the present text as best, an additional
sentence or two as next best, a sentence and a cite to the downgrade doc next
in line, and including actual EAI examples in this doc as the worst choice.

FWIW.

Ned



Re: Gen-ART LC review of draft-leiba-5322upd-from-group-06

2012-10-17 Thread ned+ietf




On 10/17/2012 2:32 PM, Ned Freed wrote:
 Channeling my inner Maslow, I see the present text as best, an additional
 sentence or two as next best, a sentence and a cite to the downgrade doc next
 in line, and including actual EAI examples in this doc as the worst choice.




The problem I have with the current text is that it says 'what'
motivated the change, but not how it is useful for the intended class of
uses.  The reader is left entirely to guess.



Self-actualization among the inadequately-informed invites fantasy more
than it invites utility.


That depends on both the utility and the desirability of repeating the existing
use case, doesn't it? The EAI use case of downgraded messages is one that only
exists because of a nasty confluence of restrictions and which we absolutely do
not want to see repeated in any other context.

If you really think this is important to explain why we're making this change
against the overall context of RFC 5322 - and I most certainly do not agree
that it is important to do so - then the best use case to add is the negative
one: The elimination of an unnecessary restriction that isn't followed in
practice.

I see no way to explain the narrow EAI use case in this context without either
dragging in a whole bunch of EAI detail that has no business being here or leaving
various things dangling. Either way the average reader looking at general
message usage isn't going to get it, and the more you try and waltz around this
the more likely you are to get exactly the outcome you fear: Someone is going
to see some sort of parallel with some problem they are trying to solve (some
sort of deliberate form of address obfuscation scheme immediately comes to
mind) and proceed to make a mess of things.

Ned

P.S. It really would be best if this could wait until there was an update
to RFC 5322. But that's not how the timing has worked out, so this is the
best alternative available.


Re: Gen-ART LC review of draft-leiba-5322upd-from-group-06

2012-10-17 Thread ned+ietf
  Channeling my inner Maslow, I see the present text as best, an additional
  sentence or two as next best, a sentence and a cite to the downgrade doc
  next in line, and including actual EAI examples in this doc as the worst
  choice.
 
  The problem I have with the current text is that it says 'what' motivated
  the change, but not how it is useful for the intended class of uses.  The
  reader is left entirely to guess.

 So, is it better to put in a sentence about representing non-ASCII
 text in the group name without including a replyable address?

 Or is it better to remove the notation about the EAI use case, and
 just say that it's stupid to have the restriction, so we're removing
 it?

If the alternative is to dig into EAI in any depth at all, the latter is far
preferable.

Ned


Re: Obsoletes/Updates in the abstract (Was: Gen-ART LC Review of draft-ietf-eai-rfc5721bis-07)

2012-09-21 Thread ned+ietf


 On Sep 21, 2012, at 11:14 AM, Pete Resnick presn...@qualcomm.com wrote:

  [Changing the subject and removing GenArt and the document authors/chairs]
 
  On 9/21/12 10:52 AM, Glen Zorn wrote:
 
  -- The abstract should mention that this obsoletes 5721
 
  Why?  There is a statement in the header, 10 lines above the abstract, 
  that says Obsoletes: 5721 (if approved).
 
  The IESG put this into the nits check before my time. The Last Call and 
  publication announcements normally contain only the abstract, not the 
  metadata above, and I believe the thinking was that if you are a person who 
  scans through those announcements, you probably would (and would want to) 
  take notice of documents that purport to obsolete or update document that 
  you recognize. We could probably change the tool to add the metadata to the 
  announcements, but apparently quite a few people read abstracting 
  services that grab the abstracts of newly published documents. Not much we 
  can do for them.
 
  It's certainly useful to some folks. Necessary? (*Shrug*) Not enough wasted 
  bits for me to care one way or the other.
 

 As a Gen-ART reviewer, I called it out for exactly the reasons Pete mentions,
 and care about the same amount :-) But putting it there seems to hurt nothing,
 and maybe help just a little bit in some cases.

That's your opinion. Others, myself included, strongly disagree.

Ned


Re: Gen-ART LC Review of draft-ietf-eai-simpledowngrade-07

2012-09-20 Thread ned+ietf

On 09/19/2012 04:24 AM, Ben Campbell wrote:
 I am the assigned Gen-ART reviewer for this draft. For background on
 Gen-ART, please see the FAQ at

 http://wiki.tools.ietf.org/area/gen/trac/wiki/GenArtfaq  .

 Please resolve these comments along with any other Last Call comments
 you may receive.

 Document:  draft-ietf-eai-simpledowngrade-07
 Reviewer: Ben Campbell
 Review Date: 2012-09-18
 IETF LC End Date: 2012-09-20

 Summary: This draft is mostly on the right track, but has open issues

 Major issues:

 -- I'm concerned about the security considerations related to having a
 mail drop modify a potentially signed message.
...



Hm, sounds like a misunderstanding. Did you understand that the
modification happens in RAM, and that the message is stored unmodified and
has the valid signature? If not I suppose extra verbiage is needed.


I think this is already pretty clear: The text says things like present
and serve. It takes a pretty deliberate misreading to miss those words.

Moreover, stating that this happens only in RAM may not be correct. A server
could choose to store a downgraded version so it doesn't have to be rerendered,
for example. The point is that version isn't going to be presented to a client
that supports EAI, not that it only happens in RAM.


The signature issue has been discussed. The answer is more or less: The
WG expects EAI users to use EAI-capable software, and to accept partial
failure when using software that cannot be updated.



This entire draft is about damage limitation when an EAI user
uses EAI-ignorant software (e.g. your phone, if you do your main mail
handling on a computer but occasionally look using the phone). That
there will be damage is expected and accepted. IMO it's unavoidable.


I think that's more a matter of fact, not opinion. If your software is
incapable of presenting something, it's incapable of presenting something.
The only question is whether you get a downgraded version the software is
capable of handling or nothing. EAI allows that choice to be made by
the implementation or operationally.


 Minor Issues:

 -- It's not clear to me why this is standards track rather than informational.



I don't remember. Perhaps because it needs to update 3501.



 -- section 3, 2nd paragraph:

 Are there any limits on how much the size can differ from the actual 
delivered message? Can it be larger? Smaller? It's worth commenting on whether 
this could cause errors in the client. (e.g. Improper memory allocation)



An input message can be constructed to make the difference arbitrarily
large. For instance, just add an attachment with a suggested filename of
a million unicode snowmen, and the surrogate message will be several
megabyte smaller than the original. Or if you know that the target
server uses a long surrogate address format, add a million short Cc
addresses and the surrogate will be blown up by a million long CC addresses.



I doubt that this is exploitable. You can confuse or irritate the user
by making the client say downloading 1.2MB when the size before
download was reported as 42kb, that's all. I wish all my problems were
as small.


If it's exploitable it almost certainly is what I refer to as the tiny twig on
the attack tree. That is, if there's a size-related issue there are going to
be much easier ways to exploit it than this.

I suppose some folks may believe that describing these sorts of twigs is
valuable. I disagree and believe it to be unnecessary clutter that is likely to
end up detracting from overall security.


I'll add a comment and a reminder that the actual size is supplied along
with the literal during download.



 -- Open Issues section: Should Kazunori Fujiwara’s downgrade document also 
mention DOWNGRADED?

 Good question. It seems like they should be consistent on things like this. 
(This is really more a comment on that draft than this one.)



I think I've made up my mind that in this case it doesn't matter.
Kazunori's task is complex reversible downgrade and has the Downgraded-*
header fields, why then bother with the DOWNGRADED response code? But
it's not my decision.


I agree it doesn't matter.


 -- Abstract should mention that this updates 3501



Really? A detail of this document updates a minor detail of that
document, that's hardly what I would expect to see in a single-paragraph
summary.


Exactly right. This is a silly thing to do.


I know someone who likes to repeat the Subject in the first line of the
email body text. Just in case I didn't see it the first time, I suppose.


The policy for EAI documents, which was agreed to by the working group and
which is now reflected in a number of RFCs, is not to put this sort of nonsense
in the abstract.

Ned


Re: Draft IESG Statement on Removal of an Internet-Draft from the IETF Web Site

2012-09-10 Thread ned+ietf
 On 9/9/12 8:43 PM, John Levine wrote:
  Let's say I write to the IESG and say this:
 
Due to a late night editing error, draft-foo-bar-42 which I
submitted yesterday contains several paragraphs of company
confidential information which you can easily see are irrelevant to
the draft.  My boss wants it taken down pronto, even though he
realizes that third parties may have made copies of it in the
meantime.  I will probably lose my job if it stays up for more than a
few days.  Thanks for your consideration.
 
  Is this the response?
 
You didn't make any legal threats, and now that we know the
situation, we wouldn't believe any legal threats you might make in the
future, so you better check out those burger flipping opportunities.

 No, the response is that we refer you to our policy.  As an open
 organization we do not remove information once posted, except under
 extraordinary circumstances.

Exactly. This sort of thing is why a policy is needed, although I note in
passing that the folks in this hypothetical might want to read up on the
Streisand Effect.

 
  What was wrong with the original version which gave the IESG the
  latitude to remove an I-D if they feel, for whatever reason, that it
  would be a good idea to do so?

 What original?  The draft policy states:

  An I-D will only be removed from the public I-D archive in compliance
  with a duly authorized court order.


  If the IESG were so screwed up that
  they started deleting I-Ds for bad reasons, no amount of process
  verbiage would help.

 Certainly, but let's not start from the wrong place to begin with.
 Let's also not set expectations that the IESG may be used to clean up after
 other peoples' messes.  They have enough to do.

That is if anything an understatement. 

 And again, this is best developed with counsel.

A very emphatic +1 to this.

Ned


Re: Draft IESG Statement on Removal of an Internet-Draft from the IETF Web Site

2012-09-10 Thread ned+ietf

 Let's say I write to the IESG and say this:

   Due to a late night editing error, draft-foo-bar-42 which I
   submitted yesterday contains several paragraphs of company
   confidential information which you can easily see are irrelevant to
   the draft.  My boss wants it taken down pronto, even though he
   realizes that third parties may have made copies of it in the
   meantime.  I will probably lose my job if it stays up for more than a
   few days.  Thanks for your consideration.



 Exactly. This sort of thing is why a policy is needed, although I note in
 passing that the folks in this hypothetical might want to read up on the
 Streisand Effect.



Note that I phrased it as a polite request, not a threat.


I don't see that as especially relevant: There have been plenty of cases where
a polite request called attention to something that would otherwise have been
ignored, although of course the ones that get reported at
http://www.thestreisandeffect.com/ tend to be the ones where bad behavior
was involved.


 And again, this is best developed with counsel.
 A very emphatic +1 to this.



Sure, but keep in mind that minimizing legal risk is not the same as
minimizing cost or complexity, or doing what's best for the IETF and
the community.


The narrow goal of minimizing the immediate legal risk is hardly the only, or
even an especially good, reason to seek advice of counsel. Counsel is there to
assist you in understanding the legal implications of your possible choices and
then in implementing the choice you make. They are not there to make your
decisions for you.

Ned


Re: [EAI] Last Call: draft-ietf-eai-popimap-downgrade-07.txt (Post-delivery Message Downgrading for Internationalized Email Messages) to Proposed Standard

2012-09-09 Thread ned+ietf
  The IESG has received a request from the Email Address
  Internationalization WG (eai) to consider the following document:
  - 'Post-delivery Message Downgrading for Internationalized Email
  Messages'
draft-ietf-eai-popimap-downgrade-07.txt as Proposed Standard

 Rats; I missed this in earlier review:

 -- Section 3.2.1 --

This procedure may generate empty group elements in From:,
Sender: and Reply-To: header fields.
[I-D.leiba-5322upd-from-group] updates [RFC5322] to allow (empty)
group elements in From:, Sender: and Reply-To: header fields.

 It does not.  It adds group syntax to From only.  Group syntax is
 already allowed in Reply-To, and the draft does not change the rules
 for Sender.

 If this spec also needs Sender to change, I can update the
 5322upd-from-group for that, but it's not there now.

I think the answer to that is yes, this needs to be allowed for Sender.

Ned


Re: [EAI] Last Call: draft-ietf-eai-popimap-downgrade-07.txt (Post-delivery Message Downgrading for Internationalized Email Messages) to Proposed Standard

2012-09-09 Thread ned+ietf
  I think 5322upd-from-group needs to apply to both
  backward-pointing address types to be effective for what
  popimap-downgrade needs.
 
  I will make the change.

 But in any case, the downgrade doc needs to remove reply-to from the
 list of header fields that 5322upd-from-group changes.  Maybe this:

 OLD
This procedure may generate empty group elements in
From:, Sender: and Reply-To: header fields.
[I-D.leiba-5322upd-from-group] updates [RFC5322] to allow
(empty) group elements in From:, Sender: and
Reply-To: header fields.

 NEW
This procedure may generate empty group elements in
From:, Sender: and Reply-To: header fields.
[I-D.leiba-5322upd-from-group] updates [RFC5322] to allow
(empty) group elements in From: and Sender:.

Looks like the correct change to me.

Ned


Re: Draft IESG Statement on Removal of an Internet-Draft from the IETF Web Site

2012-09-07 Thread ned+ietf
  In the case of DMCA

 i am not competent to speak to circumstances surrounding a dmca.  i am
 glad you and all the other engineers here are.  sure saves the ietf
 lawyer a lot of work.

Bingo. And even if we were competent to assess this stuff - which we most
assuredly are not - any notion that a specific policy can be drafted that would
cover all the eventualities that could arise in the context of a DMCA notice is
preposterous on its face.

Anyone who has looked at even a few DMCA situations knows they range from
loonies with no grounds and no resources to entirely legitimate cases of gross
infringement to what amounts to a SLAPP suit.

If and when this happens the IETF will have to deal with the situation as it
presents, hopefully assisted by competent legal counsel.

The only question that need concern us at present is whether or not the
stated policy gives the IESG the necessary flexibility. It seems to me
that it does.

Ned


Re: Basic ietf process question ...

2012-08-04 Thread ned+ietf

 On 03/08/2012, at 8:09 PM, ned+i...@mauve.mrochek.com wrote:

  Very much; when it becomes a document (e.g., mixed markup), XML is a much
  better choice.
 
  The other interesting case is where large amounts of data arrive in a
  stream. SAX and SAX-like libraries make this easy to implement with XML.
  I hope there's an equivalent for Json; if not there needs to be.

 Funny you mention that, I was just looking into that yesterday.

 This seems to be in the front running:
   http://lloyd.github.com/yajl/

This looks promising. Thanks for the pointer.
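
For the record, the event-based interface looks roughly like this from Python,
using the ijson library (which offers yajl-backed parsers); the file name is
just a placeholder:

  import ijson

  # Events are emitted incrementally, so arbitrarily large inputs can be
  # processed without ever holding the whole document in memory.
  with open("large.json", "rb") as f:
      for prefix, event, value in ijson.parse(f):
          if event == "number":
              print(prefix, value)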

Ned





Re: Basic ietf process question ...

2012-08-03 Thread ned+ietf
 XML Schema is grossly over-engineered for its purpose; being the subject of
 requirements from not only the document markup field, but also databases and
 object models, it's a twisted monstrosity that tries to do many things, and
 fails at most.

Agreed, and I would add that it is seriously lacking in flexibility. (I would
like to have words with whoever thought the unique particle attribution rule
was a good idea.)

That said, there's a timing issue in play here. A variety of tools supporting
other validation systems, e.g., RelaxNG or Schematron are available now, but a
few years back alternatives to XML Schema were much more limited. Despite our
dislike we've used XML Schema in a couple of places for precisely this reason.
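
To be concrete about what's available now, here's a minimal sketch of RelaxNG
validation via Python's lxml bindings; the file names are placeholders:

  from lxml import etree

  # Compile the RelaxNG schema once, then validate instance documents.
  schema = etree.RelaxNG(etree.parse("schema.rng"))
  doc = etree.parse("instance.xml")
  if not schema.validate(doc):
      print(schema.error_log)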

 Specifically, it's very common for people to try to use schema to inform
 binding tools into specific languages. However, the underlying metamodel of
 XML, the Infoset, is both complex and a poor fit for most languages, so
 bindings take shortcuts and expose a profile of XML's range of expression,
 encouraging some patterns of use, while discouraging (or disallowing) others.
 Since the bindings often make different decisions (based upon the language of
 use), interoperability is difficult (sometimes, impossible).

It very much depends on what you're doing and how you're doing it. If what
you want is for your data to manifest directly as a data structure, XML is
a lousy fit for that for a bunch of different reasons. Json is the clear choice
in such cases. But there are other uses where the more complex Infoset of
XML can be an asset.

Really, it's all about how you use the available tools.

 Furthermore, designing a schema that is extensible is incredibly convoluted
 in XML Schema 1.0. Schema 1.1 was designed to address this failure, but it
 hasn't been broadly adopted; most people I know in the field consider it a
 failure.

Yes, XML Schema makes this a lot harder to do than it should be, but in a lot
of designs I've seen it also has to do with how XML is actually used. A bad
design is a bad design, regardless of what schema language you use.

 What surprises me and many others is that people are still using it and
 promoting it, when almost EVERYONE who was involved in using XML for
 protocols in the past ten years agrees that it's a mistake.

See above. I certainly wouldn't use XML Schema for anything new, but there's
a lot of legacy stuff out there.

Ned


Re: Basic ietf process question ...

2012-08-03 Thread ned+ietf

 On 03/08/2012, at 5:59 PM, Ned Freed ned.fr...@mrochek.com wrote:

  Specifically, it's very common for people to try to use schema to inform
  binding tools into specific languages. However, the underlying metamodel of
  XML, the Infoset, is both complex and a poor fit for most languages, so
  bindings take shortcuts and expose a profile of XML's range of expression,
  encouraging some patterns of use, while discouraging (or disallowing) others.
  Since the bindings often make different decisions (based upon the language of
  use), interoperability is difficult (sometimes, impossible).
 
  It very much depends on what you're doing and how you're doing it. If what
  you want is for your data to manifest directly as a data structure, XML is
  a lousy fit for that for a bunch of different reasons. Json is the clear
  choice in such cases. But there are other uses where the more complex
  Infoset of XML can be an asset.

 Very much; when it becomes a document (e.g., mixed markup), XML is a much
 better choice.

The other interesting case is where large amounts of data arrive in a stream.
SAX and SAX-like libraries make this easy to implement with XML. I hope
there's an equivalent for Json; if not there needs to be.

 
  Really, it's all about how you use the available tools.
 
  Furthermore, designing a schema that is extensible is incredibly convoluted
  in XML Schema 1.0. Schema 1.1 was designed to address this failure, but it
  hasn't been broadly adopted; most people I know in the field consider it a
  failure.
 
  Yes, XML Schema makes this a lot harder to do than it should be, but in a lot
  of designs I've seen it also has to do with how XML is actually used. A bad
  design is a bad design, regardless of what schema language you use.
 
  What surprises me and many others is that people are still using it and
  promoting it, when almost EVERYONE who was involved in using XML for
  protocols in the past ten years agrees that it's a mistake.
 
  See above. I certainly wouldn't use XML Schema for anything new, but there's
  a lot of legacy stuff out there.

 That's the rub, isn't it?

Yeah, and it sure is rubbing me the wrong way every time I look at our usage.
I know it was the right choice at the time, and now that it's done it's not
cost effective to change unless we need additional capabilities, but that's
all so ... unsatisfying.

Ned


Re: New Version Notification for draft-leiba-3777upd-eligibility-00.txt

2012-07-31 Thread ned+ietf
  I'd probably also recommend excluding paid employees of ISOC. I cannot
  really think of rationale that applies to the secretariat staff but
  not ISOC.

 perhaps we should take the leap of assuming folk are adults here (i
 realize it is a stretch), and not start a black-list with no proof of
 termination.

+1 on all points, including the stretch part.

Ned

P.S. I'm not a big fan of for appearance's sake. All too often it proves to
be a path to madness.


Re: registries and designated experts

2012-06-13 Thread ned+ietf
  It seems to me that if an expert reviewer thinks that something will do
  notable harm, they should decline to make a decision and defer it to the
  IETF at large

 so they are not an expert, they are a rubber stamp?  bs.

+1

More generally, the notion of appealing to the IETF at large, whatever that
is supposed to consist of, isn't part of the written policy of any registry I'm
aware of. There's supposed to be an appeals process (usually to the IESG), and
that's the obvious process to invoke should an issue come up that a reviewer
feels uncomfortable resolving directly.

Additionally, given the widespread use of IANA registries by people and
organizations not connected to the IETF in any way - and this usage is
something we want to encourage - appealing registrations to the IETF at large
is a really bad idea.

Ned


Re: RFC 2119 terms, ALL CAPS vs lower case

2012-05-18 Thread ned+ietf
 I find this morning a message on the URN WG list
 by Alfred Hines on RFC 6329, which has a new (AFAIK) convention on
 normative language

 3.  Conventions Used in This Document

The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT,
SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this
document are to be interpreted as described in [RFC2119].

The lowercase forms with an initial capital Must, Must Not,
Shall, Shall Not, Should, Should Not, May, and Optional
in this document are to be interpreted in the sense defined in
[RFC2119], but are used where the normative behavior is defined in
documents published by SDOs other than the IETF.


 I am not sure this is in the direction of greater clarity. Should there be
 a need to overlay different degrees of normativeness onto a text, XML would
 probably be a better bet. Whether the previous sentence is normative or not
 is left as an exercise for the reader.

By my count, there are also two lower case musts and six lower case shoulds
in there. In a document with compliance language this complex, those SHOULD
have been eliminated.

Be that as it may, this is asking more of the convention than it realistically
can be expected to deliver. I don't know the circumstances behind this
document - maybe there was no alternative - but the right thing IMO is to
try and avoid having to do anything like this.

Ned


Re: RFC 2119 terms, ALL CAPS vs lower case

2012-05-18 Thread ned+ietf
  I recommend an errata to RFC 2119: These words MUST NOT appear in a
  document in lower case.

 first, that is not an erratum, it is a non-trivial semantic change.

And therefore explicitly disallowed by the erratum process.

 second, do we not already have enough problems being clear and concise
 without removing common words from our language?

A very emphatic +1.

Ned


Re: RFC 2119 terms, ALL CAPS vs lower case

2012-05-16 Thread ned+ietf
 The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT,
 SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this
 document are to be interpreted as described in [RFC2119] when they
 appear in ALL CAPS.  These words may also appear in this document in
 lower case as plain English words, absent their normative meanings.

 i like this a lot

I agree. In fact I just incorporated it into the media types registration
update.

Ned


Re: RFC 2119 terms, ALL CAPS vs lower case

2012-05-16 Thread ned+ietf

 On May 16, 2012, at 5:22 PM 5/16/12, ned+i...@mauve.mrochek.com wrote:

The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT,
SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this
document are to be interpreted as described in [RFC2119] when they
appear in ALL CAPS.  These words may also appear in this document in
lower case as plain English words, absent their normative meanings.
 
  i like this a lot
 
  I agree. In fact I just incorporated it into the media types registration
  update.

 To be sure of meaning and help confusion avoidance, I would prefer that the
 key words not appear in the document in lower case and that authors use the
 suggested replacement words (or break out the thesaurus?).

Preferring it is one thing; I'm OK with that. Making it some sort of
hard-and-fast rule is another matter entirely. We have too many of those
as it is.

Ned


Re: Leverage Patent Search API to reduce BCP79 related issues [was:

2012-05-10 Thread ned+ietf
 Russ Housley wrote:
 
  BCP 79 says:
 
Reasonably and personally known: means something an individual
knows personally or, because of the job the individual holds,
would reasonably be expected to know.  This wording is used to
indicate that an organization cannot purposely keep an individual
in the dark about patents or patent applications just to avoid the
disclosure requirement.  But this requirement should not be
interpreted as requiring the IETF Contributor or participant (or
his or her represented organization, if any) to perform a patent
search to find applicable IPR.
 
  Your suggestion seems to be in direct conflict with BCP 79.


 IMHO your quote from BCP79 (page 4, bullet l.) is a very important
 point in BCP79.  I can not speak for others, but the Internet Proxy
 of our company blocks urls to all well known online patent search
 and patent publication sites (and these are the only blocked sites
 I've ever encountered) for the simple reason of the insane US patent
 laws with this 3-fold punitive damages for willful infringement
 of a patent, i.e. a patent that was known to exist when shipping.
 As engineers, we MUST NOT read any patents because of this.
 All of the patent research stuff is done by patent lawyers exclusively.

I don't know if a similar block exists, but the policy is the same for us.
Such a policy, if implemented, could easily lead to various people being
unable to participate in the process.

 btw. I personally know only about one patent where I refused to be
 listed as inventor because I considered my contribution to it to be
 obvious to someone skilled in the technology.  When I read the
 description after it had been processed by patent lawyers, I had
 serious difficulties understanding what that text meant...

A pretty common occurrence, unfortunately.

Ned


Re: Last Call: draft-levine-application-gzip-02.txt (The application/zlib and application/gzip media types) to Informational RFC

2012-05-04 Thread ned+ietf
 I do believe that, someday, someone should try to write up an
 up-to-date description of the difference that recognizes the
 fact that compressed files are in use as media types with
 application/zip (in assorted spellings) and application/gzip
 (from this spec and in assorted spellings) as examples.  But I
 now believe it is a separate task that should not block this
 document or registration.

That pretty much sums up my view as well.

 I'll be happy to do that if I can ever find enough spare time to write
 it.

 You're right, it would be nice if there were some way to distinguish
 containers from content in MIME types.  But given the existing
 historical mess, and that some kinds of compression are just a
 different way to encode a bunch of bits (zlib) whereas others are more
 like a small filesystem (zip and tgz), even if we could start with a
 clean sheet it's not obvious to me what would be the best thing to do.

This is further complicated by the fact that there are now a number of types
defined that are actually zip with specific semantics attached to the content.
There are also types defined for use only within such containers.

Ned


RE: Last Call: draft-levine-application-gzip-02.txt (The application/zlib and application/gzip media types) to Informational RFC

2012-05-03 Thread ned+ietf
  -Original Message-
  From: ietf-announce-boun...@ietf.org 
  [mailto:ietf-announce-boun...@ietf.org] On Behalf Of The IESG
  Sent: Thursday, May 03, 2012 2:21 PM
  To: IETF-Announce
  Subject: Last Call: draft-levine-application-gzip-02.txt (The 
  application/zlib and application/gzip media types) to Informational RFC
 
  The IESG has received a request from an individual submitter to
  consider the following document:
  - 'The application/zlib and application/gzip media types'
draft-levine-application-gzip-02.txt as Informational RFC
 
  The IESG plans to make a decision in the next few weeks, and solicits
  final comments on this action. Please send substantive comments to the
  ietf@ietf.org mailing lists by 2012-05-31. Exceptionally, comments may
  be sent to i...@ietf.org instead. In either case, please retain the
  beginning of the Subject line to allow automated sorting.

 Only two things:

 1) Shouldn't this be Standards Track?

Based on past practices, the answer to that seems to be no. See, for example,
RFC 6208, RFC 6207, RFC 6129, RFC 5967, and so on.

The rule seems to that if the specification defines the format as well as
registering one or more media types, then it needs to be on the standards
track. But if the specification simply registers a bunch of types, then
it's informational.

And I think this makes sense. After all, how would one assess interoperability,
or pretty much anything else, of a mere registration?

I suppose you could argue that in cases where the registration points at an
external specification (which is not the case here) it could be a sort of
hook to allow such assessment, but I think we have enough problems performing
evaluations of our own work that we don't need the added fun of evaluating
externally defined formats.

 2) Should it mention the xdash draft, since it talks about 
 application/x-* types?

That specification is really focused on not using x- in newly defined
registries, not on how it's used in existing registries. And since x-gzip and
friends are only mentioned to say they shouldn't be used, I'd say that while
such a reference would not bother me, I don't think it's especially helpful.

Ned


RE: Last Call: draft-levine-application-gzip-02.txt (The application/zlib and application/gzip media types) to Informational RFC

2012-05-03 Thread ned+ietf
  -Original Message-
  From: Ned Freed [mailto:ned.fr...@mrochek.com]
  Sent: Thursday, May 03, 2012 2:54 PM
  To: Murray S. Kucherawy
  Cc: ietf@ietf.org
  Subject: RE: Last Call: draft-levine-application-gzip-02.txt (The 
  application/zlib and application/gzip media types) to Informational RFC
 
   1) Shouldn't this be Standards Track?
 
  Based on past practices, the answer to that seems to be no. See, for
  example, RFC 6208, RFC 6207, RFC 6129, RFC 5967, and so on.
 
  [...]
 
   2) Should it mention the xdash draft, since it talks about
  application/x-* types?
 
  That specification is really focused on not using x- in newly defined
  registries, not on how it's used in existing registries. And since x-
  gzip and friends are only mentioned to say they shouldn't be used, I'd
  say that while such a reference would not bother me, I don't think it's
  especially helpful.

 Good enough, just thought I'd ask.

 I support publication as-is.

As do I.

Ned


RE: 'Geek' image scares women away from tech industry ? The Register

2012-04-30 Thread ned+ietf
  From: Mary Barnes [mary.ietf.bar...@gmail.com]
 
  Here is an article that does a far better job of explaining the
  situation than I did:
  http://www.todaysengineer.org/2011/May/women-in-engineering.asp
 
  The largest reason women leave engineering is due to the work
  environment and perceived lack of support from colleagues.

 Even the summary of the study that is given in the article is a
 fascinating read, with a lot of information on the subject that I've
 never seen before.

 Who would have expected:
 The study also found that 15 percent of women who earn undergraduate
 degrees in engineering never entered the profession at all.  Many of
 them went on to enter the legal or medical professions or other fields
 where their engineering education served them well.

 The full study is at:
 http://www.studyofwork.com/wp-content/uploads/2011/03/NSF_Women-Full-Report-0314.pdf

That link appears to be broken. The link that worked for me is:

  http://studyofwork.com/files/2011/03/NSF_Women-Full-Report-0314.pdf

Ned


Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-26 Thread ned+ietf
 Ned,

 On Apr 25, 2012, at 7:31 PM, Ned Freed wrote:
  I see no value in deallocating code point spaces

  It depends on the size of the space.
  Why?
  Because if you deallocate and reallocate it, there can be conflicts. Perhaps
  you haven't noticed, but a lot of times people continue to use stuff that
  the IETF considers to be bad ideas, including but not limited to things we
  called experiments at some point.
 
 Perhaps you haven't noticed, but no one was suggesting deallocating and
 reallocating anything that was in use.  Or do you have a different
 interpretation of if appropriate?

How can you possibly determine with any degree of reliability if something you
know was deployed to some extent is still in use or not? The Internet is a big
place.

Again, the *only* case where it makes sense to deallocate is if the space is
small. In such cases the rewards outweigh the risks.

  And getting rid of information that people may need to get things to
  interoperate seems to, you know, kinda go against some of our core 
  principles.

 Sorry, where did anyone suggest getting rid of any information that people
 may need to get things to interoperate again?  Or do you interpret moving a
 XML page from a web server into an informational RFC to be getting rid of
 information?

Yes, I most certainly did, because that's what it amounts to. The instant you move
the information to a new place and break the old pointers to it, you have
effectively gotten rid of it.

 I'll admit I find this thread bordering on the surreal with some fascinating
 kneejerk reactions.

What's surreal is your belief that the sorts of actions you're proposing have
no consequences.

 As far as I can tell, the only thing that was proposed was something to
 encourage documentation of the conclusion of experiments and if 
 appropriate,
 deprecate any IANA code points allocated for the experiment.

Yes, the original statement said deprecate, and I had no problem with it. But
this quickly changed to people saying that code points need to be deallocated,
which is what I was responding to. Here's a direct quote from an early message
on this thread:

  From my experience at IANA, trying to figure out who to contact to remove a
  code point gets harder the longer the code points are not being used.  Unless
  the code space is unlimited, I'd argue that you want to deallocate as soon as
  an experiment is over.

remove and deallocate. Not deprecate. And no code space is unlimited. Oh,
and you're the one who wrote this.

  Both of these seem like good things to me.  This has somehow been translated 
 into variously:

 a) a declaration about how research is done
 b) deletion and/or reallocation of code point spaces that people are using
 c) killing off successful protocols because they're documented in 
 experimental not standards track rfcs
 d) violating our core principles
 e) process for the sake of process
 f) IANA being a control point for the Internet
 g) etc.

 Did I miss a follow-up message from the Inherently Evil Steering Group that
 proposed these sorts of things?

Did I ever say that I was responding to the original IESG statement? No, I
don't think I ever said that.

Anyway, I've made my point, and as PHB said, you've now devolved to stupid
tricks to bolster your argument. I'm done.

Ned


Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread ned+ietf
 I see no value in deallocating code point spaces and a huge amount of
 potential harm.

It depends on the size of the space. I completely agree that if the space is
large - and that's almost always the case - then deallocating is going to be
somewhere between silly and very damaging.

Deprecating the use of a code point, OTOH, may make sense even if the space is
large.

The takeaway here, I think, is that if you're going to conclude experiments,
you need to examine these allocations and do something sensible with them,
where sensible is rarely going to mean deallocate.

 Except at the very lowest levels of the protocol stack (IP and BGP)
 there is really no technical need for a namespace that is limited. We
 do have some protocols that will come to a crisis some day but there
 are plenty of ways that the code space for DNS, TLS etc can be
 expanded if we ever need to.

There may not be any technical need, but there are a number of legacy designs
that were done ... poorly.

 The Internet is driven by innovation and experiment. There are
 certainly people who think that the role of this organization is to
 protect the Internet from the meddling of people who might not get it
 but they are wrong.

+1

 Even more wrong is the idea that IANA can actually act as suggested.
 IANA only exists by mutual consent. I am happy to register a code
 point if there is a reasonable procedure to do so. But if the idea is
 to force me to kiss someone's ring then I'll pick a number and ship
 the code and let other folk work out the problems later.

 This already happens on a significant scale in the wild. SRV code
 points being a prime example. There are far more unofficial code
 points than recognized ones. Some of them are in standards I wrote at
 W3C and OASIS. It would be best if IANA tracked these but I think it
 rather unlikely anyone is going to accidentally overwrite the SAML or
 the XKMS SRV code points.

Media types are an excellent example of this. The original registration
procedures were too restrictive so people simply picked names to use. We fixed
that for vendor assignments (fill in a web form) and the registrations
started rolling in. (We're now trying to do the same for standard
assignments.) But of course we now have a legacy of unassigned material
to deal with.

 It has happened with other registries too. Back in the day there was
 an attempt to stop the storage manufacturers using ethernet IDs on
 their drives. So the drive manufacturers simply declared a block of
 space (Cxx) as theirs and the IEEE has since been forced to accept
 this as a fait accompli. It has happened in UPC codes as well, when
 the Europeans decided that the US authority was charging them
 ridiculous fees they just extended the code by an octet and set
 themselves up as a meta-registry.

 The only role of IANA is to help people avoid treading on each other
 by accident. If it starts issuing code points that have been 'lightly
 used', the value of IANA is degraded. I certainly don't want my
 protocol to have to deal with issues that are caused by someone's
 'experiment' that has not completely shut down. The only value of
 going to IANA rather than choosing one myself is to avoid
 contamination with earlier efforts.

No, not quite. The other important role is (hopefully) to keep people from
having multiple allocations for the same thing.

 The only area where I can see the benefits of re-allocation
 outweighing the risks is in IP port assignments but even there I think
 the real solution has to be to simply ditch the whole notion of port
 numbers and use SRV type approaches for new protocols.

+1

 IANA is not a control point for the Internet. A fact that people need
 to keep in mind when the ITU attempts to grab control in Dubai later
 on this year.

 Weakness is strength: The registries are not the control points people
 imagine because they only have authority as long as people consent.

Good point.

Ned


Re: Proposed IESG Statement on the Conclusion of Experiments

2012-04-25 Thread ned+ietf
 Ned,

 On Apr 25, 2012, at 10:46 AM, Ned Freed wrote:
  I see no value in deallocating code point spaces and a huge amount of 
  potential harm.
  It depends on the size of the space.

 Why? 

Because if you deallocate and reallocate it, there can be conflicts. Perhaps
you haven't noticed, but a lot of times people continue to use stuff that the
IETF considers to be bad ideas, including but not limited to things we called
experiments at some point.

 We're talking about completed experiments.

It doesn't matter if we're talking about pink prancing ponies. The issue is
whether or not the code point is valuable. If it isn't there's no reason to
deallocate it and every reason not to, although you may want to deprecate its
use if the experiment proved to be a really bad idea.

 I'm unclear I see any particular value in having IANA staff continue to
 maintain registries (which is what I've translated code point _spaces_ to)
 for the protocol defined within the RFC(s) of that experiment.

Ah, I think I see the conflict. You're talking about experiments that define
namespaces themselves, rather than using code points out of some other more
general space, which is what I was talking about.

That said, I have to say I reach pretty much the same conclusion for
experimental code point spaces that I do for experimental use of code points
in other spaces: They should not be deallocated/deleted. Again, just because
the IETF deems an experiment to be over, or even if the IETF concludes that
the experiement was a failure, doesn't mean people won't continue to use
it. And getting rid of information that people may need to get things to
interoperate seems to, you know, kinda go against some of our core principles.

 I could, perhaps, see memorializing the final state of the registries for the
 experiment in an informational RFC, but don't really see the point in
 cluttering up http://www.iana.org/protocols with more junk than is already in
 there. Trying to find things is already annoying enough.

That's a problem with the organization of the web site, not an argument for
getting rid of information. There are abundant examples of web sites that
contain thousands of times as much stuff as IANA does where finding what you
want is easy if not outright trivial.

Ned


Re: Issues relating to managing a mailing list...

2012-03-15 Thread ned+ietf
   Is this really a big enough problem to be worth solving? I can't
   recall a single instance where I received IETF list mail with a problematic
   attachment.

  i travel to places with very poor bandwidth.  it is a problem.

  and the vast majority of users just do not get it.  we send 20mb
  documents around the office all the time.  suck it up.


 for whoever has problems downloading attachments, the solution should be
 at the delivery Message Store level, where the stripping of attachments
 could be done according to a user-configurable mailbox parameter
 (as we do on our server, where we call it Easy Delivery).

Or better still, don't strip anything. Use a reasonably capable client that
doesn't fetch attachments unless you tell it to.

I'll again note that implementation of this policy will cause people to send
single part messages in order to avoid attachment stripping, which will make it
impossible for clients to avoid downloading material they do not want. In other
words, it will exacerbate, rather than solve, the problem.
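
To make the "capable client" point concrete: this is stock IMAP (RFC 3501) -
fetch the structure first, then pull only the parts you want. A sketch using
Python's imaplib, with server name and credentials as obvious placeholders:

  import imaplib

  imap = imaplib.IMAP4_SSL("imap.example.com")
  imap.login("user", "password")
  imap.select("INBOX", readonly=True)

  # Get the MIME structure without downloading any content...
  typ, data = imap.fetch("1", "(BODYSTRUCTURE)")
  print(data)
  # ...then fetch only the part you actually want (here, part 1).
  typ, data = imap.fetch("1", "(BODY.PEEK[1])")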

Ned


RE: Issues relating to managing a mailing list...

2012-03-15 Thread ned+ietf


 --On Thursday, March 15, 2012 00:00 -0400 Ross Callon
 rcal...@juniper.net wrote:

  I don't like this proposal for two reasons: I frequently read
  email while not connected; When connected, bandwidths have
  gotten high enough that attachments on the most part are not
  slowing things down in an uncomfortable way.
 
  It might be okay for really large attachments, as long as only
  a few messages are affected.

 Borrowing a bit from Randy, the solution to really large
 attachments is to ban them.  Personally, I'd find it perfectly
 reasonable to have any message in the megabyte range or above
 (or probably even an order of magnitude smaller) rejected with a
 message that amounted to if you have that much to say, write an
 I-D, post it, and point to it.  That is much more plausible
 today, when the mean time between I-D submission and posting is
 measured in minutes (except during blackout periods) than when
 it was in days.  During blackout periods, the last thing the
 community needs is people adding to already-overloaded lists by
 posting long documents in email.

 If people want to use up part of their maximum size quota by
 posting html in addition to text, or appending long disclaimers
 or autobiographies, that shouldn't the community's problem.

You begin by talking about banning large attachments. You then segue
into a discussion where you talk about a maximum size that includes the
primary message content, not attachments, then you throw in disclaimers,
which may or may not be attachments.

Others have followed up by supporting the limit on attachment size, others
still have talked about banning attachments regardless of size.

Do you see the problem here? The minute you start focusing on specifics
of message content, you're in a rathole. What counts as an attachment?
(And yes, we have a precise definition for what constitutes an attachment,
but following that definition gives people the ability to route around
it.)
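
(The precise definition in question is Content-Disposition [RFC 2183]. A
sketch of applying it with Python's email library - and note how trivially a
sender defeats it by marking parts inline or sending one big single-part
message:)

  from email import message_from_binary_file, policy

  with open("msg.eml", "rb") as f:   # placeholder file name
      msg = message_from_binary_file(f, policy=policy.default)

  for part in msg.walk():
      # Only parts explicitly labeled "attachment" count under the
      # strict definition; everything else routes around the filter.
      if part.get_content_disposition() == "attachment":
          print(part.get_filename(), part.get_content_type())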

It follows that any limit needs to be on overall message size. (Even
this is a little perilous because message sizes can change due to
MIME downgrading or upgrading, both of which happen regularly.) I would
not be opposed to imposing such a limit, although it's going to need to
be higher than some people would probably like - it's surprising how
easily you can approach 1Mb with a single part, plain text message
containing nothing remotely resembling an attachment.

Ned


Re: Issues relating to managing a mailing list...

2012-03-15 Thread ned+ietf
 Is this really a big enough problem to be worth solving? I can't
 recall a single instance where I received IETF list mail with a problematic
 attachment.

i travel to places with very poor bandwidth.  it is a problem.

and the vast majority of users just do not get it.  we send 20mb
documents around the office all the time.  suck it up.


   for whoever has problems downloading attachments, the solution should be
   at the delivery Message Store level, where the stripping of attachments
   could be done according to a user-configurable mailbox parameter
   (as we do on our server, where we call it Easy Delivery).

  Or better still, don't strip anything. Use a reasonably capable client that
  doesn't fetch attachments unless you tell it to.

 Yes, I agree, but we can leave the end user to choose their own more or less
 capable clients, perhaps according to temporary conditions.

You can as long as attachments remain as attachments, which will no longer be
true if this proposal is implemented. If they don't, the strip-attachments
approach no longer works. You either (a) only strip attachments, which leaves
you with large messages to deal with, (b) strip by part size, which will leave
you with empty messages, or (c) cut parts down to a given size, which is
guaranteed to work poorly.

The two-list approach, if the secondary list proves popular, may lead to the
same result, BTW. As I commented previously in a private email, it never
ceases to amaze me how people in this community only apply the "route around
problems" mantra when it suits their purpose to do so.

Ned


RE: Issues relating to managing a mailing list...

2012-03-15 Thread ned+ietf
John, I agree completely with everything you say here.

Ned

 --On Thursday, March 15, 2012 08:16 -0700 Ned Freed
 ned.fr...@mrochek.com wrote:

 ...
   It might be okay for really large attachments, as long as
   only a few messages are affected.
 
  Borrowing a bit from Randy, the solution to really large
  attachments is to ban them.  Personally, I'd find it perfectly
  reasonable to have any message in the megabyte range or above
  (or probably even an order of magnitude smaller) rejected
  with a message that amounted to if you have that much to
  say, write an I-D, post it, and point to it.  That is much
  more plausible today, when the mean time between I-D
  submission and posting is measured in minutes (except during
  blackout periods) than when it was in days.  During blackout
  periods, the last thing the community needs is people adding
  to already-overloaded lists by posting long documents in
  email.
 ...
  You begin by talking about banning large attachments. You
  then segue into a discussion where you talk about a maximum
  size that includes the primary message content, not
  attachments, then you throw in disclaimers, which may or may
  not be attachments.
 
  Other have followed up by supporting the limit on attachment
  size, others still have talked about banning attachments
  regardless of size.
 
  Do you see the problem here? The minute you start focusing on
  specifics of message content, you're in a rathole. What counts
  as an attachment? (And yes, we have a precise definition for
  what constitutes an attachement, but following that definition
  gives people the ability to route around it.)

 Ok, that is fair.  At best, I skipped being explicit about
 several steps in my thinking. I personally see the attachment
 issue as almost irrelevant, in part because of the what
 constitutes an attachment issue you identify above.  In partial
 defense, I was distracted a bit by comments from others about
 the first N bytes of a message and text and attachment
 models (which, as you know better than almost anyone, is a user
 construct about how messages might be assembled but one that
 doesn't necessarily map directly and usefully back from MIME
 body parts).

 I think the issue is almost entirely about

   (i) size, independent of how a message is structured, and

   (ii) what, in IMAP terms, is automatic synchronization
   into offline or disconnected mode.  More generally, it
   is ability to read and work with IETF mailing lists at a
   time when one has little or no connectivity.

 From that point of view, Russ's original question is about
 something that is not only a bad idea but an ill-formed
 proposition... at least unless we could agree on and enforce a
 particular model of IETF participants assemble messages and send
 them -- a goal that I believe is so hopeless as to not be worth
 discussing.

  It follows that any limit needs to be on overall message size.
  (Even this is a little perilous because message sizes can
  change due to MIME downgrading or upgrading, both of which
  happen regularly.)

 Exactly.   And that also identifies the other distraction I was
 trying to get at.  In addition to MIME downgrading and
 upgrading, the size of a message can be distorted by such things
 as addition (possibly by submission servers or list exploders
 and out of user control) of lengthy explanatory headers
 (including some called for by IETF standards like DKIM), body
 text (including the notorious unless you agree to this
 conditions, please un-read the message announcements but all
 list-added footers), or duplicate content (e.g., sending both
 text/plain and text/html with multipart/alternative).  If we
 were to start trying to split hairs on how to count message
 sizes, I think we would rapidly go down a slightly different rat
 hole.

   I would not be opposed to imposing such a
  limit, although it's going to need to be higher than some
  people would probably like - it's surprising how easily you
  can approach 1Mb with a single part, plain text message
  containing nothing remotely resembling an attachment.

 Yes.  But an entirely anecdotal survey of recent postings to the
 IETF list and comparison with Thomas Narten's statistics
 indicates that a posting to that list of 50K in aggregate size
 is well out on the upper tail.  Most messages of twice that size
 and larger are either duplicate content or inclusion of a great
 deal of content from multiple previous messages. I'd be happy
 discouraging both of those patterns for other reasons so
 wouldn't be upset if a size rule had the accidental side-effect
 of pushing back on them.  I can well imagine the averages for
 some WG lists being much larger, especially if editors are
 including blocks of existing and proposed text for comment.
 But, as a notorious author of long messages and someone who
 often prefers consolidated responses to a lot of small messages,
 I'd 

Re: Issues relating to managing a mailing list...

2012-03-14 Thread ned+ietf
 Some suggestions have been made about the IETF mail lists.
 There is a way for mailman to strip attachments and put them
 in a place for downloading with a web browser.  This would be
 a significant change to current practice, so the community
 needs to consider this potential policy change.

 What do you think?

Is this really a big enough problem to be worth solving? I can't recall a
single instance where I received IETF list mail with a problematic attachment.
OTOH, I routinely get IETF messages with useful attachments - typically a
critical revision to a draft which for whatever reason can't be posted as an
I-D - that I really need to be able to see without having to bother with some
indirection.
There are also times when I have access to email but no web access. What am I
supposed to do then?

This also creates significant problems for list archiving. Right now I can
easily capture the entire content of the list and preserve it for however long
I want. Do this and that becomes significantly more difficult - I have to
detect these indirections, fetch and stuff the attachments back in, because
regardless of stated policy I cannot assume that web link will be valid for as
long as I need it to be.

I also worry about the ability of mailman to actually know what an attachment
is versus a message constructed in multiple parts. And there's also the issue
of what sort of indirection it uses. Past experience with advanced list
manager features like this doesn't give me a whole lot of confidence in their
MIME chops.

I suppose I could live with this - but not actively support it - if the
stripping was limited to abusively large attachments - say ones over 5Mb or
thereabouts. But otherwise it's a TERRIBLE idea, and will simply result in 
everyone including the draft or whatever in the primary message text in order
to avoid this nonsense, which results in a degradation of list quality for all
concerned.

Ned


Re: provisioning software, was DNS RRTYPEs, the difficulty with

2012-03-07 Thread ned+ietf
  There are some false equivalences floating around here. I don't
  think anyone is suggesting that having provisioning systems or even
  DNS servers themselves check for syntax errors in the contents of
  complex records like DKIM, SPF, DMARC, or whatever is necessarily a
  bad idea. (Whether or not it will actually happen is another
  matter; I'm dubious.)
 
  Rather, the issue is with requiring it to happen in order to deploy
  a new RRTYPE of this sort, which is the result you get if the DNS
  server returns some series of tokens instead of the original text
  string. That's the sort of thing that forces people to upgrade, or
  search around for a script to do the conversion (which won't even
  occur to some), and that's an extra burden we don't need to
  impose.

 It would still be possible to work around the need for a plugin, e.g.
 by depending on some wizard web site, as in John's thought experiment.

 For the rest of us, the possibility to install a plugin that takes
 care of all the nitty-gritty details, instead of having to wait for
 the release and distribution of the next version of BIND, can make the
 difference between deploying a new RR type right away and
 procrastinating endlessly.

You're still not separating the two cases. Again, an *optional* plugin to check
the syntax of a record but not produce any sort of tokenized result is fine; a
plugin that's *mandatory* to deploy is going to be almost as much of an
impediment to deployment as requiring an upgrade. Code is code, and people
don't install new code willy-nilly.

 The issue is to upgrade once rather than on each new RR type.

Exactly. That's why mandatory plugins are a bad idea.

 Correct, but when you publish a complex record you are calling forth
 that complexity.  I don't see much difference if the bug is at mine
 or at the remote site, since their effects are comparable.

They most certainly are not. A bug in my client only affects me; a bug
in the server can easily kill the entire zone. And even if separation
techniques are employed, if the plugin fails the best you're going to be
able to do is serve out a domain with missing entries.

Ned


Re: provisioning software, was DNS RRTYPEs, the difficulty with

2012-03-07 Thread ned+ietf
 On 07/Mar/12 09:42, ned+i...@mauve.mrochek.com wrote:
 
  It would still be possible to work around the need for a plugin, e.g.
  by depending on some wizard web site, as in John's thought experiment.
 
  For the rest of us, the possibility to install a plugin that takes
  care of all the nitty-gritty details, instead of having to wait for
  the release and distribution of the next version of BIND, can make the
  difference between deploying a new RR type right away and
  procrastinating endlessly.
 
  You're still not separating the two cases.

 I think I did, however badly.  Let me try by example, assuming my
 server doesn't recognize SPF.  As it's unrecognized, I have to write
 it as such, e.g.

   TYPE99 \# 12 0B 763D73706631 20 2D613131

OK, now I'm completely confused. What does lack of SPF record support in your
server input format have to do with syntax checking of the *content* of the SPF
record? I can write the record as text or in hex or base64 or whatever format I
want; the issue is looking *inside* the data and either (a) just checking its
syntax versus (b) checking it and turning it into some kind of tokenized stuff
the DNS server actually serves out.
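
Just to pin terms down: the hex form above is RFC 3597's generic encoding of
the very same octets, and producing it is a purely mechanical transform. A
sketch in Python - the helper name is made up:

  def generic_rdata(text, rrtype=99):
      # Encode a single <character-string> (length octet + data) and
      # emit it in RFC 3597 generic form: TYPEnn \# rdlength hex...
      data = text.encode("ascii")
      if len(data) > 255:
          raise ValueError("character-string limited to 255 octets")
      rdata = bytes([len(data)]) + data
      return "TYPE%d \\# %d %s" % (rrtype, len(rdata), rdata.hex().upper())

  print(generic_rdata("v=spf1 -a11"))
  # TYPE99 \# 12 0B763D73706631202D613131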

  Again, an *optional* plugin to check syntax of a record but not
  produce any sort of tokenized result is fine,

 Now the plugin can check the syntax and spot the error.  I may correct
 it or not.  If I don't, I'll just serve bad data.  In any case, my
 zone source still contains the line above.  Thus the plugin is optional.

  a plugin that's *mandatory* to deploy is going to be almost as much
  of an impediment to deployment as requiring an upgrade. Code is
  code, and people don't install new code willy-nilly.

Any plugin that's necessary to transform a nontrivial input format into a
tokenized result is effectively mandatory. Sure, you can substitute a
preprocessing script that does the transform and spits out hex or whatever, but
no matter how you do it there's an essential translation component involved in
provisioning those records. You may be able to avoid having that component 
cause the entire server fail, but it's still in the critical path for
setting up those records.

 Possibly, I can also run, say:

   echo 'SPF v=spf1 -a11' | plugin --dump-hex --silent

 and have it dump the TYPE99 line above (without signalling the error,
 since I said --silent).  Then I copy its output and paste it into the
 zone source.

Let's please stop talking about what you can do manually. This isn't about
people whose main provisioning tool is emacs or bind, and who operate their own
servers with full autonomy and report to nobody. This audience doesn't have
issues with any of this stuff.

 Finally, I can enable automatic invocation of the plugin.  That way, I
 can write the SPF record directly in the zone source and have my
 DNS-fictional server do the copy and paste on the fly for me.

 I wouldn't call such thing mandatory.

Again, it depends on whether or not it's in the critical provisioning
path. A syntax checker isn't; a parser and tokenizer is.

Ned


Re: provisioning software, was DNS RRTYPEs, the difficulty with

2012-03-06 Thread ned+ietf

 Input -P- DNS server -D- DNS stub -Q- Output

 P is the provisioning

 I think you want to break that into the provisioning interface and the data
 format it produces that the DNS server consumes. (My reason for that is we
 have a specification for at least one such format, with all that implies.)



I was also going to mention that.  There's a lot of different formats for
zone file data, with BIND-ish master files being only one of them.



 We seem to believe that the D part is deployed so that adding new unknown
 RRTypes is not an issue.

 I think this is correct, but OTOH someone recently asked about possible issues
 in this area, and if I remember correctly, received no response.



Last month I ran into a guy on the dmarc list who complained that
his server returns NOTIMP in response to queries for SPF records (because
it doesn't implement them) and clients were doing odd things.  But it's
been a long time since I've run into anyone else like that, so I agree,
it's not an issue.



 Problem is then in P and Q.

 I personally don't see a big problem with Q, but others clearly do so
 we need to leave it in.



I'm not aware of problems, but I don't use Windows, which is where the
problems are supposed to be.


Exactly - neither do I. (Well, OK, I have a Windows laptop at work to handle
the company HR stuff that won't work in anything other than IE, but I try
really, really hard never to use it.)

Ned


Re: Last Call: draft-melnikov-smtp-priority-07.txt (Simple Mail Transfer Protocol extension for Message Transfer Priorities) to Proposed Standard

2012-03-05 Thread ned+ietf

 That said, I think an important omission in this document is that it
 only allows MSAs to change message priorities to conform to site policy.
 MTAs should also be allowed to do this.



Can you elaborate a bit more on possible use cases?


Nobody is going to simply allow priorities to be set willy-nilly on mail coming
from random sources outside their administrative domain. That's absurd.
However, they may be willing to make bilateral arrangements with selected
partners, identified by IP address or whatever, that would allow such a
setting, perhaps within a restricted range of allowed values.


Would such an MTA
only lower the priority or do you think it might also raise it?


I don't see any reason to limit it to lowering it.


 Another issue is the silly prohibition against using Priority: and other
 header fields to set priority levels. What if existing site policy is in fact
 to use those fields to determine message priority?



(Ignoring the question of whether use of MT-Priority header field is a
good thing or not for a moment.)



I actually don't have a strong feeling against usage of other existing
header fields.  Some of the existing header fields don't have the exact
semantics desired here.


Well, sure. You most definitely don't want to mix in Importance or other
MUA level priority fields.


Others (like the MIXER's Priority) have the right semantics but don't support
a sufficient number of priorities for MMHS (6 levels).


I think you're going to have to accept the fact that the overwhelming majority
of people out there running email systems have never even heard of MMHS and
even if they have, don't give a damn about faithfully retaining its
semantics. But they do care that any new mechanism be made compatible with
whatever ad-hoc scheme they are currently using, even if said scheme
doesn't have the full range of values.

For example, I can easily see a site wanting to map this to and from the field
used by Microsoft Exchange (sorry, I forget the exact name) even though if
memory serves that field only accepts three values. And either this is going to
happen no matter what the specification says, or the specification simply won't
deploy in any meaningful way.
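
Purely by way of illustration - none of these target values are mandated
anywhere - an MSA-side mapping from the common X-Priority scale (1 = highest,
3 = normal, 5 = lowest) onto this extension's numeric priorities might look
like:

  # Hypothetical site policy: map X-Priority onto MT-PRIORITY values.
  X_PRIORITY_TO_MT = {1: 4, 2: 2, 3: 0, 4: -2, 5: -4}

  def mt_priority(x_priority_field, default=0):
      # X-Priority often arrives as "2 (High)"; take the leading token.
      try:
          return X_PRIORITY_TO_MT[int(x_priority_field.split()[0])]
      except (AttributeError, IndexError, ValueError, KeyError):
          return default

  print(mt_priority("2 (High)"))   # 2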


That is why a new header field was introduced.



But anyway, I am happy for this restriction to be removed/relaxed. Can you
recommend some specific text?


I'll try to do so later this week.

Ned


Re: Last Call: draft-melnikov-smtp-priority-07.txt (Simple Mail Transfer Protocol extension for Message Transfer Priorities) to Proposed Standard

2012-03-05 Thread ned+ietf
 Ned:
  Another issue is the silly prohibition against using Priority: and other
  header fields to set priority levels. What if existing site policy is in
  fact to use those fields to determine message priority?

 Alexey:
  I actually don't have a strong feeling against usage of other existing
  header fields.
  Some of the existing header fields don't have the exact semantics desired
  here. Others (like the MIXER's Priority) have the right semantics but don't
  support sufficient number of priorities required by MMHS (6 levels). That is
  why a new header field was introduced.

 Right, this is the issue I have with Ned's desire to allow use of
 other fields: those fields have inconsistent semantics, and
 inconsistent and non-standard usage.  They're often used to indicate
 visual importance of the message to the MUA, rather than anything
 related to transmission priority.

I'm sorry, but that's simply a strawman. AFAICT nobody pushing back on this is
suggesting mapping MTA priority to or from a field with totally different
semantics. But there are plenty of fields in common use that are used to
control MTA priority. Those are the fields I'm talking about.

And yes, people will make mistakes. They always do. But as I commented
in a previous discussion, if we let that stop us we'd never standarize
anything.

 That said, I'd have no problem with some sort of MAY-level permission
 to *MSAs* to use these fields.  I'd feel very uncomfortable allowing
 *MTAs* to do it.  Ned, would it be adequate to your needs to handle it
 that way?

No. Like it or not, if you expect this to gain any traction at all, you're
going to have to accept the fact that people are going to want to tunnel it
through fields with the same general semantics but restricted as to possible
values.

  Something like this:

 OLD
Other message header fields, such as Importance [RFC2156], Priority
[RFC2156] and X-Priority, MUST NOT be used for determining the
priority under this Priority Message Handling SMTP extension.

I hadn't noticed Importance listed as a possible source of MTA priority
information. That's actually committing the error of mapping a field with
entirely different semantics into MTA priority and at a minimum needs to be
removed - we should NOT be encouraging that.

 NEW
Other message header fields, such as Importance [RFC2156], Priority
[RFC2156] and X-Priority, are used inconsistently and often with
different semantics from MT-Priority.  Message Submission Agents
[RFC6409] MAY use such fields to assign an initial priority in the
absence of an SMTP PRIORITY parameter.  Otherwise, such fields
MUST NOT be used for determining the priority under this Priority
Message Handling SMTP extension.

It seems you're complaining about other people doing something they never
wanted to do while actually making that error yourself.

Ned


Re: Last Call: draft-melnikov-smtp-priority-07.txt (Simple Mail Transfer Protocol extension for Message Transfer Priorities) to Proposed Standard

2012-03-05 Thread ned+ietf
  2. I, too, noticed all the lower-case should and may words.  I
  suggest that the best way to handle this is to make the following
  change to the RFC 2119 citation text at the beginning of section 2:
 
  NEW
     The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT,
     SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this
     document are to be interpreted as described in [RFC2119] when they appear
     in ALL CAPS.  These words also appear in this document in lower case as
     plain English words, absent their normative meanings.
 
  I don't mind changing that, but ID-nits gives a warning when the text about
  keywords is changed and *everybody* likes to complain about that. Please
  talk to IESG about whether using a variant of the standard text is
  acceptable.

 I've used text like this before, and the IESG has never objected to
 it.  One advantage with this formulation, which uses the standard 2119
 text and *appends* to it, is that idnits likes it and doesn't generate
 a warning.

More generally, ID-nits is supposed to be helpful. Straightjackets are not
helpful.

Ned


Re: Last Call: draft-melnikov-smtp-priority-07.txt (Simple Mail Transfer Protocol extension for Message Transfer Priorities) to Proposed Standard

2012-03-05 Thread ned+ietf

 I do prefer the latter as well (and yes, happy to remove the restriction),
 but I don't feel very comfortable pretending that tunneling wouldn't happen.



Of course people will tunnel stuff.  But will they all tunnel it the same
way, in which case a standard could be useful, or will they each have
their own hacks?  Given the vagueness about how you can tell that
something should come out of the tunnel back into the envelope, it sounds
more like the latter.


If the only issue was tunneling, I'm not sure it would matter as much as it
does. After all, if we have a spec that says use MT-Priority and someone
uses something else within their ADMD, who will even know?

But this isn't just about tunneling. There are a bunch of ad-hoc priority
mechanisms out there, some in fairly widespread use. If we cannot align
ourselves with those nobody is going to be interested in deploying this
extension. Instead they will be asking us to retain our present capability of
interoperating with those existing systems. And doing both is ... messy to say
the least, not to mention a violation of the current language in the
specification.

Ned


Re: Last Call: draft-melnikov-smtp-priority-07.txt (Simple Mail Transfer Protocol extension for Message Transfer Priorities) to Proposed Standard

2012-03-05 Thread ned+ietf
     Other message header fields, such as Importance [RFC2156], Priority
     [RFC2156] and X-Priority, are used inconsistently and often with
     different semantics from MT-Priority.  Message Submission Agents
     [RFC6409] MAY use such fields to assign an initial priority in the
     absence of an SMTP PRIORITY parameter.  Otherwise, such fields
     MUST NOT be used for determining the priority under this Priority
     Message Handling SMTP extension.
 
  It seems you're complaining about other people doing something they never
  wanted to do while actually making that error yourself.

 I have no idea what you mean, so perhaps you can be more specific
 about what it is that you want to do.

I'm talking about the inclusion of Importance: in both the original and revised
text as a field we're suggesting might make sense to map. It should not be
there - the semantics are not compatible with MTA priority. And even if you
think it makes sense that messages marked important in the MUA should receive
an MTA priority boost, allow me to point out that by GOSIP rules (which are
still in effect in some places) elevated MTA priority is used to shorten the
time messages are retained before returning them as undeliverable. That's
really not a good idea to do for all messages marked important, and some
people with systems that do this are probably unaware of the behavior.

Mind you, I have no problem with someone doing it because it makes sense in
their case. But we should not be suggesting doing it in the text, and getting
into a discussion of the pitfalls of doing it is unnecessarily
distracting.

The semantics of X-Priority are less clear. It's sometimes used for MTA
priority setting, other times it's been an MUA field. AFAIK it has never had
the problematic semantics I just discussed. Mapping it is therefore a judgement
call implementations should be allowed to make, and that makes it a good
example to include in this sort of text.
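
For what it's worth, the mapping itself is trivial to implement. Here's a
minimal sketch of the sort of submission-time logic I have in mind - the
mapping table is purely hypothetical site policy, not anything the
specification mandates:

   # Hypothetical MSA-side policy: derive an initial MT-Priority value
   # (the extension uses the integer range -9..9) from a legacy
   # X-Priority header (1 = highest ... 5 = lowest).
   X_PRIORITY_TO_MT = {"1": 4, "2": 2, "3": 0, "4": -2, "5": -4}

   def initial_mt_priority(headers):
       # X-Priority values often look like "1 (Highest)"; take the digit.
       xp = headers.get("X-Priority", "3").split()[0]
       return X_PRIORITY_TO_MT.get(xp, 0)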

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: provisioning software, was DNS RRTYPEs, the difficulty with

2012-03-05 Thread ned+ietf
 On 05/Mar/12 18:09, John Levine wrote:
  Sometimes an ASCII text record will be fine, in other cases, it probably 
  won't.
 
 My point is as we move again towards multiple text representations of the 
 digit five for example,
 both encoding and parsing is easier and more secure if that digit is really 
 for example eight bits
 and not text that someone has to parse.
 
  Unless you provision your DNS zones with a hex debugger, the digit
  will always start out as text that someone has to parse.  The question
  is who does the parsing, the DNS server or the application.  As I said
  in a previous message, I can see plausible reasons to put the parser into
  the application.
 
  Would you really want to build an SPF or DKIM parser into every DNS
  server?  That's a lot of code that the DNS manager doesn't care about,
  but the mail manager does.

 But it would be the same code, most likely by the same author(s).  It
 may be generic for a kind of syntax or specific for a RR type,
 according to its author's convenience.  On a system that allows new RR
 types without recompiling, the code would come as some sort of plugin
 in both cases.

There are some false equivalences floating around here. I don't think anyone is
suggesting that having provisioning systems or even DNS servers themselves
check for syntax errors in the contents of complex records like DKIM, SPF,
DMARC, or whatever is necessarily a bad idea. (Whether or not it will actually
happen is another matter; I'm dubious.)

Rather, the issue is with requiring it to happen in order to deploy a new
RRTYPE of this sort, which is the result you get if the DNS server returns some
series of tokens instead of the original text string. That's the sort of thing
that forces people to upgrade, or search around for a script to do the
conversion (which won't even occur to some), and that's an extra burden we
don't need to impose.

 Why is it important what the DNS manager cares about?

Speaking as a DNS manager myself, I care a lot about being forced to upgrade.
Upgrades bring unwanted bugs in other areas.

In fact I'm not entirely thrilled with the idea of plugins to do some extra
syntax. More code means more possibilities of bugs. I'd actually prefer to see
more cross-checking of existing stuff - less code and greater benefit.

 Parsers,
 including null parsers, would come with the same sub-package that
 enables the new RR type definition.  Their complexity would only
 matter to the people who read/maintain their sources.

I'm sorry, but you're being naive about this. Complexity does matter to the
people who just use software because added complexity translates to more bugs.

  PS: For anyone who didn't read my previous message, I am NOT saying
  that it's fine to overload everything into TXT.  I am saying that new
  RRTYPEs that are text blobs interpreted by client software wouldn't
  necessarily be bad.

 Agreed.  That doesn't preclude syntax checking on loading the zone,
 though.

As long as we stick with syntax checking I'm (mostly) OK with it.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: provisioning software, was DNS RRTYPEs, the difficulty with

2012-03-05 Thread ned+ietf
 I think we should look a bit on the flow of data. If I simplify we have a
 flow like this:

Looking at data flows is usually a good idea.

 Input -P- DNS server -D- DNS stub -Q- Output

 P is the provisioning

I think you want to break that into the provisioning interface and the data
format it produces that the DNS server consumes. (My reason for that is we have
a specification for at least one such format, with all that implies.)

 D is the DNS protocol
 Q is the query/parsing in the consumer of the data

 What we want is (as described in the IAB RFC on extensions of DNS) the
 ability to query (in D) for as precise data as possible in the triple {owner,
 type, class}. Some RR types like NAPTR and TXT have flaws where the selection
 between records in an RRSet is not in this triple. In some applications that
 is resolved by having a prefix to the owner. In some other applications that
 is resolved by parsing the RRSet.

 We all do believe that IF it was easier to add a new RRType for each
 application that would be an architecturally better solution (as adding prefix
 to the owner has its drawbacks). Now, the question is what blocks the ability
 to add new RRTypes.

Yes, I think we have agreement on all of that.

 We seem to believe that the D part is deployed so that adding new unknown
 RRTypes is not an issue.

I think this is correct, but OTOH someone recently asked about possible issues
in this area, and if I remember correctly, received no response.

 Problem is then in P and Q.

I personally don't see a big problem with Q, but others clearly do so
we need to leave it in.

 And when we talk about parsing, we talk about what the mapping between
 provisioning and DNS packet format is.

I think we need to be a little finer grained than that, per the above.

 Are we aligned so far?

Yes, pretty much.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: provisioning software, was DNS RRTYPEs, the difficulty with

2012-03-03 Thread ned+ietf
 On 03/Mar/12 00:13, John R. Levine wrote:
 
  Until provisioning systems accept UNKNOWN record types they will
  always be a bottle neck.  Without that you will have to wait for
  the change request to be processed.  Given the history just getting
  AAAA records added to most of these systems it will be forever.
 
  AAAA was unusually painful, since it requires adding a parser for IPv6
  addresses.  (Having hacked it into my provisioning system, I speak
  from experience.)  Most new RR types are just strings, numbers, names,
  and the occasional bit field.

 Yeah, and if ISPs already had troubles with ho-de-ho-de-ho-de-ho, how
 will they join in on skee-bop-de-google-eet-skee-bop-de-goat?

 Given that, designers of new RR types will want to stick to string
 formats just to spare ISPs some parsing, at the cost of losing a half
 of the advantages that RFC 5507 talks about, along with syntactic
 validations aimed at preventing some permerror/permfail cases.

Doubtful. If a record needs to have, say, a priority field or a port number,
given the existence of MX, SRV, and various other RRs it's going to be very
difficult for the designers of said fields to argue that they should be carried
as ASCII text that has to be parsed before use.
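
To make the contrast concrete, compare consuming a structured record with
consuming the same data crammed into a text blob - a sketch, using the SRV
wire layout from RFC 2782 and a made-up whitespace-separated text encoding:

   import struct

   def parse_srv_rdata(rdata):
       # SRV RDATA (RFC 2782): three 16-bit fields, then the target name.
       priority, weight, port = struct.unpack("!HHH", rdata[:6])
       return priority, weight, port, rdata[6:]   # target, in wire form

   def parse_txt_blob(text):
       # The same data as text: every consumer has to tokenize, convert,
       # validate, and error-handle the string itself.
       priority, weight, port, target = text.split()
       return int(priority), int(weight), int(port), target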

More generally, this is trying to have things both ways. We can't
simultaneously assert that deploying simple new RRs is a breeze, making this
unnecessary, and that it's so difficult that everything should be crammed into
TXT format no matter what the actual structure is.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: provisioning software, was DNS RRTYPEs, the difficulty with

2012-03-03 Thread ned+ietf

 On 3 mar 2012, at 16:56, ned+i...@mauve.mrochek.com wrote:

  Doubtful. If a record needs to have, say, a priority field or a port number,
  given the existence of MX, SRV, and various other RRs it's going to be very
  difficult for the designers of said fields to argue that they should be carried
  as ASCII text that has to be parsed before use.

 Agree with you, but too many people today just program in perl or python 
 where the parsing is just a cast or similar, and they do not understand this 
 argument of yours -- which I once again completely stand behind myself.

Regardless of the language you program in, you still have to get your proposal
approved. For that to happen the folks who review these things would in effect
be conceding that there is in fact a major deployment problem out there.

That's the point I was trying to make, not that people wouldn't argue for this
approach.

I am unaware of any counterexamples that have actually made it through the
process, but if they exist feel free to point them out.


Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-melnikov-smtp-priority-07.txt (Simple Mail Transfer Protocol extension for Message Transfer Priorities) to Proposed Standard

2012-03-03 Thread ned+ietf
  This draft also defines the MT-Priority header field.  It is quite unusual
  for an SMTP extension specification to define a mail header field. ...

 This is my major concern about this protocol as well, as I note in the
 PROTO writeup (which, unfortunately, can't be seen publicly because of
 a limitation in the datatracker; perhaps I should post it here).  I'm
 interested in hearing whether others share this concern, and what the
 community consensus is about it.

 I have no problem logging stuff from the SMTP session into a message
 header, we've been doing that since forever.  But I have a lot of
 problem turning a header back into SMTP for a subsequent relay, for
 two reasons.

 One is just that it's ugly.  There were good reasons to separate the
 envelope from the body back in the 1970s, and I think those reasons
 are just as good now.  Over in the EAI group, there was a valiant
 effort to tunnel EAI messages through non-EAI SMTP servers via
 downgrades, and we eventually gave up.  Now, if you want to send an
 EAI message, there has to be an EAI path all the way from the sender
 to the recipient.  This isn't exactly analogous, but it's instructive.

 The other reason is that I think it's too ill-specified to
 interoperate.  The theory seems to be that the relay server notices an
 MT-Priority: header, and (vigorous waving of hands) somehow decides if
 the message has sufficient virtue to promote the header back into the
 SMTP session.  The hand-waving is not persuasive; the suggestion of
 S/MIME, which obviously won't work, is not encouraging.

Both good points. While I personally am indifferent to header modification
being an issue - I believe we've long since crossed that bridge - I will also
note that simply changing this proposal to always insert the MT-Priority: field
at submission time and leave it unchanged throughout message transit would
leave most of the functionality intact and eliminate the header modification
issue entirely.

But the upleveling of header to envelope would remain. John is quite correct in
his assertion that these are mostly uncharted waters: The only remotely
comparable case I can think of is the mapping of some X.400 fields to the
NOTARY extension, but in that case the mapping was from X.400 envelope material
to SMTP envelope material. Additionally, the mapping was precisely specified
with no wiggle room for site policy. 

I think this will likely lead to interoperability problems. Now, in the case
of message processing priority the chance of such problems causing significant
trouble seems pretty slim, but "it doesn't work well, but it also doesn't
matter much" is not a justification I feel comfortable embracing.

 If you want to brave the ugliness and standardize this, the spec needs
 to explain how the server decides whether to believe an MT-Priority
 header.  If it doesn't, it's too vague to interoperate.

 So I have two suggestions.  One is to leave it as is, and make it
 experimental. If it turns out the tunnels all work the same way, you
 can come back and add the spec about how they work and rev it as
 standards track.  The other is to take out the tunnel bit and make it
 standards track now, so a compliant implementation needs
 priority-aware MTAs from end to end.  Even if you take out the tunnel
 stuff in the spec, you can still tunnel in your implementations, using
 whatever non-standard ad hoc kludges you want.  Since the current spec
 has gaps that would need to be filled by non-standard ad hoc kludges
 anyway, you haven't lost anything.

I have to say I strongly prefer the latter, especially if it also includes
eliminating the prohibition on using other sources of priority information to
set the envelope priority level on submission. In fact given that prohibition
I see little point in implementing this, at least not in a fashion
that conforms to the specification.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: provisioning software, was DNS RRTYPEs, the difficulty with

2012-03-02 Thread ned+ietf

I've rarely read a message on this list that I agreed with more.

+1000 to everything said below.

Ned


 By the way, what's your opinion of draft-levine-dnsextlang-02?

 It isn't clear to me what problem you're trying to solve. For resolving
 name servers 3597 should be enough. For authoritative name servers your
 idea is sort of interesting, but generally upgrading the software to
 pick up support for new RRtypes is a good idea, since you'll also pick
 up new security fixes, etc.



Oh, wow.  Now I see the problem: people here are totally unaware of what
using DNS software is like if you're not a DNS weenie.



If you think that it's reasonable to require a new version of your DNS
software every time there's a new RR, you've conceded total defeat.  On
most servers I know, software upgrades are risky and infrequent.  It could
easily take a year for a new version of BIND to percolate out of the lab,
into the various distribution channels, and to be installed on people's
servers, by which time everyone has built their applications with TXT
records and it's too late.



Moreover, the servers aren't the hard part, it's the provisioning systems.
You and I may edit our zone files with emacs, but civilians use web based
things with pulldown menus and checkboxes.  If they can't enter an RR into
one of them it doesn't matter whether the name server supports it or not.
This latest round of teeth gnashing was provoked by discussions in the
spfbis WG, where we're wondering whether to deprecate the type 99 SPF
record.  Among the reasons it's unused in practice is that nobody's
provisioning system lets you create SPF records.



The point of my draft is that it's an extension language that both name
servers and provisioning systems can read on the fly.  Add a new stanza to
/etc/rrtypes and you're done, no software upgrades needed.  The risk is
much lower since the worst that will happen if the new stanza is broken is
that the provisioning software and name servers will ignore it, not
that the whole thing will crash.



Sure, there are some RRs with complex semantics that can't be done without
new code (DNSSEC being the major example), but most RRs are syntactically
and semantically trivial, just a bunch of fields that the server doesn't
even interpret once it's parsed the master records or their equivalent.



Until the DNS crowd admits that provisioning systems are a major
bottleneck, and telling people to hand-code more types into them isn't
going to happen, there's no chance of getting more RRs deployed.



Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of The Internet for Dummies,
Please consider the environment before reading this e-mail. http://jl.ly
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf



Re: Last Call: draft-melnikov-smtp-priority-07.txt (Simple Mail Transfer Protocol extension for Message Transfer Priorities) to Proposed Standard

2012-03-01 Thread ned+ietf
 The most significant item that needs to be called out is the issue of
 tunneling the PRIORITY value through non-conforming MTAs by turning it
 into a message header field (MT-Priority) and then back again.  This
 is a problematic technique, but is an important capability for those
 who need and intend to implement this extension.  It creates a trust
 issue, wherein a message containing MT-Priority can be originated with
 a Message Submission Agent that does not know about this extension,
 and when the message hits a Message Transfer Agent that does support
 this, the header field will be turned back into a valid PRIORITY
 value, on the unwarranted assumption that it was authorized.
 Intermediate MTAs have no way to distinguish this situation from one
 where the field was tunneled legitimately.

There may not be substantial experience with doing this sort of thing in SMTP
relays, but there's plenty of experience doing it in gateways to other mail
environments, e.g., X.400 and many of the old LAN email systems. In fact one of
the more common fields that have been mapped this way is message transfer
priority, so there is considerable experience with fields having more or less
the same semantics as what is being proposed here.

I am unaware of any cases where this was abused, probably because increased
transfer priority doesn't buy you all that much in most cases. Related
but more user-visible features, e.g., importance (in the X.400 sense)
have been known to be abused, however.

That said, I think an important omission in this document is that it 
only allows MSAs to change message priorities to conform to site policy.
MTAs should also be allowed to do this.

Another issue is the silly prohibition against using Priority: and other header
fields to set priority levels. What if existing site policy is in fact to
use those fields to determine message priority?

 The counter-argument is that the use case for this specification
 involves out-of-band trust relationships, and that such situations
 will be known and dealt with.  I believe that limits the usability of
 those features on the open Internet, with other use cases.

Not if MTAs aren't also allowed to enforce site policy.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: DNS RRTYPEs, the difficulty with

2012-02-28 Thread ned+ietf
 In the previous message it was suggested that people who use a hosted DNS
 service should switch if their service doesn't support Type SPF.  The problem
 is that, as far as I'm aware, none of them do.

Perhaps in some cases it's because the person or persons who want to publish
SPF records are not the ones who make the decision about which hosted DNS
service provider to use, and are unable to make a sufficient case for a change
to those who do have the authority to make it.

It's also the case that even when the ability to publish such records exists,
the person or persons who want to do so may not have the necessary access or
authority and have no way to get it.

Only today we were dealing with a customer who was asking about what options
our product provides for overriding entries in his own DNS domain with local
information. Yes, you read that right: The companies DNS has bogus stuff in it
that cannot or will not be corrected. Such requests are unfortunately not
uncommon.

All that said, a suggestion: Patrik published a pretty good list of most of the 
issues that arise when attempting to deploy new RRTYPEs. Some of those, like
the lack of GUI pulldowns in some hosting provider web interfaces, are clearly
beyond the IETF's purview to address. But others, like the formatting issues in
zone master files that arise when a new RRTYPE is defined, are things we can
address. (And no, external scripts to do the format conversion are not a
sufficient solution to this problem.)

So might I suggest that instead of arguing about the operational realities of
deploying new RRTYPEs - which appears to be turning into yet another retelling
of the parable about the blind men and the elephant - we instead turn our
attention to activities that would actually address parts of the problem.

In particular, John Levine has already written a draft that attempts to address
the formatting issues by defining a simple format extension language. That
might be a good place to start. (I've already sent him my review comments in
private email.)
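
To give a flavor of what such an extension language makes possible, here's a
stanza I just made up - the field mnemonics and layout are my own invention
for illustration, not the draft's actual syntax:

   # Hypothetical stanza describing an MX-like record: a 16-bit
   # integer followed by a domain name.
   MX:15
      I2  preference    # 16-bit unsigned integer
      N   exchange      # domain name of the mail server

A provisioning front end that understands the stanza format can then generate
input fields and syntax checks for any type described this way, with no code
change and no software upgrade.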

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Issues with prefer IPv6 [Re: Variable length internet addresses in TCP/IP: history]

2012-02-23 Thread ned+ietf
 Yes, the issues with an unconditional prefer IPv6 approach
 have been noted, and operating systems of the vintages you
 mention certainly deserved criticism. In fact this has been a
 major focus of IPv6 operational discussions, and lies behind
 things like the DNS whitelisting method, the happy-eyeballs
 work, and my own RFC 6343.

 Old news; unfortunately it means you need new o/s versions.
 Disabling 6to4 and Teredo unless they are known to be working
 well is a good start, however.

Old news perhaps, but an unavoidable consequence of this is that the
oft-repeated assertions that various systems have been IPv6 ready for over 10
years don't involve a useful definition of the term ready.

Ned


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Issues with prefer IPv6 [Re: Variable length internet addresses in TCP/IP: history]

2012-02-23 Thread ned+ietf
 On 02/23/2012 13:51, ned+i...@mauve.mrochek.com wrote:
  Old news perhaps, but an unavoidable consequence of this is that the
  oft-repeated assertions that various systems have been IPv6 ready for over 10
  years don't involve a useful definition of the term ready.

 The OP specified IPv4 only network. I suspect that if he had IPv6
 connectivity his experience would have been quite different. I happily
 use Windows XP on a dual-stack network, for example.

And systems running these old OS versions never under any circumstances move
from one network to another where connectivity conditions change. Riiight.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Issues with prefer IPv6 [Re: Variable length internet addresses in TCP/IP: history]

2012-02-23 Thread ned+ietf
 On 02/23/2012 14:48, Ned Freed wrote:
  On 02/23/2012 13:51, ned+i...@mauve.mrochek.com wrote:
  Old news perhaps, but an unavoidable consequence of this is that the
   oft-repeated assertions that various systems have been IPv6 ready for over 10
   years don't involve a useful definition of the term ready.
 
  The OP specified IPv4 only network. I suspect that if he had IPv6
  connectivity his experience would have been quite different. I happily
  use Windows XP on a dual-stack network, for example.
 
  And systems running these old OS versions never under any circumstances move
  from one network to another where connectivity conditions change. Riiight.

 Brian already covered unconditional prefer-IPv6 was a painful lesson
 learned, and I'm not saying that those older systems did it right. What
 I am saying is that for most values of IPv6 Ready which included
 putting the system on an actual IPv6 network, they worked as advertised.

Which brings us right back to my original point: This definition of ready is
operationally meaningless in many cases.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Issues with prefer IPv6 [Re: Variable length internet addresses in TCP/IP: history]

2012-02-23 Thread ned+ietf

 In message 01occ10b11tc00z...@mauve.mrochek.com, ned+i...@mauve.mrochek.com writes:
   On 02/23/2012 14:48, Ned Freed wrote:
    On 02/23/2012 13:51, ned+i...@mauve.mrochek.com wrote:
    Old news perhaps, but an unavoidable consequence of this is that the
    oft-repeated assertions that various systems have been IPv6 ready for over 10
    years don't involve a useful definition of the term ready.
   
    The OP specified IPv4 only network. I suspect that if he had IPv6
    connectivity his experience would have been quite different. I happily
    use Windows XP on a dual-stack network, for example.
   
    And systems running these old OS versions never under any circumstances move
    from one network to another where connectivity conditions change. Riiight.
  
   Brian already covered unconditional prefer-IPv6 was a painful lesson
   learned, and I'm not saying that those older systems did it right. What
   I am saying is that for most values of IPv6 Ready which included
   putting the system on an actual IPv6 network, they worked as advertised.
  
   Which brings us right back to my original point: This definition of ready is
   operationally meaningless in many cases.
  
 I contend that OS are IPv6 ready to exactly the same extent as they
 are IPv4 ready.  This isn't a IPv6 readiness issue.  It is a
 *application* multi-homing readiness issue.  The applications do
 not handle unreachable addresses, irrespective of their type, well.
 The address selection rules just made this blinding obvious when
 you are on a badly configured network.

That's because the choice has recently been made that the place to deal with
this problem is in applications themselves. I happen to think this is an
exceptionally poor choice - the right way to do it is to provide a proper
network API that hides network selection details, rather than demanding every
application out there solve the problem separately.
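
By way of illustration, here's a minimal sketch of the kind of name-based
connect call I mean. It does serial fallback only - a real version would want
concurrent attempts along happy-eyeballs lines - but the point is that address
families and selection details never leak into the application:

   import socket

   def connect_by_name(host, port, timeout=5.0):
       # Try each address getaddrinfo returns, regardless of family,
       # and hand back the first connection that succeeds.
       err = None
       for family, socktype, proto, _, addr in socket.getaddrinfo(
               host, port, type=socket.SOCK_STREAM):
           try:
               s = socket.socket(family, socktype, proto)
               s.settimeout(timeout)
               s.connect(addr)
               return s
           except OSError as e:
               err = e
       raise err or OSError("no addresses for %s" % host)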

And yes, I'm familiar with the line of reasoning that says applications are too
varied in their needs, or their internal environments conflict with the
necessary use of threads, or whatever. I don't buy any of it. Yes, such
applications exist, but like most things there's a sweet spot that solves a
significant fraction of the problem.

 No one expects a disconnected IPv4 network to work well when the
 applications are getting unreachable addresses.  Why do they expect
 an IPv6 network to work well under those conditions?

First, you're comparing apples and oranges. Losing all network connectivity is
a very different thing from losing partial network connectivity. Nobody expects
an application that's totally dependent on the network to work without
connectivity. That's just silly.

But with other sorts of applications, I *do* have that expectation. I often
work from places without any network connectivity using applications that
employ networking as part of their function but also do non-networked stuff,
and pretty much without exception they handle the transition fine, and have
done so for years.

Then again, I use a Mac. I have no idea what the state of play of Windows or
Linux apps is in this regard.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt (IANA Reserved IPv4 Prefix for Shared Address Space) to BCP

2012-02-14 Thread ned+ietf
 I support this updated draft, and I am keen for this to be published as a
 BCP.

+1

 I believe the amendments in this revision clarify the usage and intended
 purpose of the shared transition space.

+1

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt (IANA Reserved IPv4 Prefix for Shared Address Space) to BCP

2012-02-14 Thread ned+ietf

To the addressed folks whose messages appear below:



I'm not sure I understand what you're saying. There was some objection
at the beginning of this thread by Wes George, Noel Chiappa, and Brian
Carpenter. I agreed that the document could be misunderstood as
encouraging the use of the space as 1918 space and proposed some
replacement text. There seemed to be some agreement around that text.
Are you now objecting to that replacement text and want -14 published as
is? Do you think the document should say that the new allocation can be
used as 1918 space? If so, please explain.


Not sure how a +1 to a statement saying "I support this updated draft, and I
am keen for this to be published as a BCP" can be interpreted in any but one
way, or for that matter how it can be stated much differently.

Anyway, to use different words, I would like to see the current draft 
approved and published as a BCP. Clear enough?


Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt (IANA Reserved IPv4 Prefix for Shared Address Space) to BCP

2012-02-14 Thread ned+ietf

On 2/14/12 1:50 PM, ned+i...@mauve.mrochek.com wrote:



 Are you now objecting to that replacement text and want -14 published as
 is? Do you think the document should say that the new allocation can be
 used as 1918 space? If so, please explain.

 Not sure how a +1 to a statement saying "I support this updated draft, and I
 am keen for this to be published as a BCP" can be interpreted in any but one
 way, or for that matter how it can be stated much differently.

 Anyway, to use different words, I would like to see the current draft
 approved and published as a BCP. Clear enough?



Nope. Perhaps my question was unclear. I'll try and ask my question
again with different words:



Do you, or do you not, object to the proposed change that changes the
text from saying, This space may be used just as 1918 space to This
space has limitations and cannot be used as 1918 space? Nobody on the
list objected to that new text. That new text significantly changes -14.
You have stated your support for -14. Do you object to changing the text?


I assumed the proposed change was in the current draft - isn't that how it's
supposed to be done? Anyway, I think the change is fine, but I also think
the draft is acceptable without it.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: the success of MIME types was Re: discouraged by .docx was Re: Plagued by PPTX again

2011-11-29 Thread ned+ietf
 - Original Message -
 From: Julian Reschke julian.resc...@gmx.de
 To: Yaakov Stein yaako...@rad.com
 Cc: John C Klensin john-i...@jck.com; ietf ietf@ietf.org
 Sent: Monday, November 28, 2011 6:07 PM
  On 2011-11-26 21:52, Yaakov Stein wrote:
   That leaves ASCII, a few forms of PDF, and RFC 5198-conforming UTF-8.
   That wouldn't bother me much, but be careful what you wish for.
  
   What we have been told is that the rationale behind the use of ASCII and
 several other formats
   is that they will remain readable on devices that will be used X years
 hence.
  
   ASCII is already unreadable on many popular devices
   and in a few years will be no better than old versions of word.
   ...
  Can we *please* distinguish between the character encoding we use
  (US-ASCII) and the file format (text/plain)?
 
  If *we* don't get this right, how can we expect anybody else to get it
  right?

 You will be aware of the recent threads on apps-discuss about MIME types (of
 which the text/plain you mention is one)  which concluded, AFAICS, that there 
 is
 no rationale why a (top level) type should or should not exist, there are no
 criteria for creating new ones, that it is impossible to draw up a taxonomy of
 types because there is no underlying logic in any dimension.

And I would have to characterize all of that as 100% grade A hogwash. I'm not
going to bother refuting each point since, even if this nonsense were true,
top-level type rules, or the lack thereof, are completely irrelevant to the
matter at hand. But feel free to read the thread if you want the real story.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: An Antitrust Policy for the IETF - why?

2011-11-28 Thread ned+ietf
+1 to all of John's points here. Especially about the essential nature
of lawyers - I've worked with plenty of them as well.

Ned

  The IETF legal counsel and insurance agent suggest that the IETF
  ought to have an antitrust policy.

 I would be interested in a brief explanation of why we need one now,
 since we have gotten along without one for multiple decades.

 Having worked with a lot of lawyers, my experience is that few lawyers
 understand cost-benefit tradeoffs, and often recommend spending
 unreasonably large amounts of money to defend against very remote
 risks.  Similarly, insurance agents will usually tell you to insure
 against anything.  (This is why NDAs are 12 pages long, and the
 standard deductible on policies is usually an order of magnitude too
 small.)

 I don't know the particular lawyer and agent involved, and it's
 possible they're exceptions to the rule, but before spending much
 money, I would want to understand better what problem we are trying to
 solve and what the realistic risk is.  Also keep in mind that the main
 effect of such a policy would be to shift whatever the risk is from
 the IETF onto participants.  It might also be educational, here's
 things that might lead to personal legal risk if you talk about them,
 but we don't need a formal policy for that.

 I understand that some other SDOs have antitrust policies, but they
 generally have organizational members, and other differences from the
 IETF that make them only weakly analogous.

 R's,
 John
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The death John McCarthy

2011-10-28 Thread ned+ietf

First, as someone who chartered the working group, who has implemented Lisp
(the programming language) at least four times, and who views Dr. McCarthy as a
hero, I disagree that the name is problematic or disrespectful. And I almost take
offense at the claim that this is a generational thing.


I didn't charter the group and I've only done two Lisp implementations, but
aside from that, +1. (Or should I say t instead?)

And frankly, if there's disrespect to be found here, IMO it lies in using this
sad event as a proxy to criticize some IETF work some people apparently don't
like.

Ned



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Gen-ART review of draft-ietf-eai-rfc5335bis-12

2011-10-23 Thread ned+ietf



On 10/20/2011 3:37 PM, Pete Resnick wrote:
 So, if the limit is still 998, then there is no change with respect to the
 former limit.

 See the next sentence:

 (Note that in
 ASCII octets and characters are effectively the same but this is not
 true in UTF-8.)

 Remember, in UTF-8, characters can be multiple octets. So 998 UTF-8 encoded
 *characters* are likely to be many more than 998 octets long. So the change is
 to say that the limit is in octets, not in characters.




The switch in vocabulary is clearly subtle for readers.  (I missed it too.)



I suggest adding some language that highlights the point, possibly the same
language as you just used to explain it.


The document already contains text for this:

 Section 2.1.1 of [RFC5322] limits lines to 998 characters and recommends that
 the lines be restricted to only 78 characters. This specification changes
 the former limit to 998 octets. (Note that in ASCII octets and characters
 are effectively the same but this is not true in UTF-8.) The 78 character
 limit remains defined in terms of characters, not octets, since it is
 intended to address display width issues, not line length issues.

Note the parenthetical comment in the middle of the paragraph.
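
The distinction the parenthetical is making is easy to demonstrate - a
two-line check in Python, using a string I picked arbitrarily:

   s = "Résumé"                   # six characters
   print(len(s))                  # 6
   print(len(s.encode("utf-8")))  # 8 - each é encodes as two octets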

I really don't see any point in further elaboration of this issue in this
context.

Ned

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: how to make Apple Mail comply to email standards: or vv?

2011-10-09 Thread ned+ietf
 I've just received an important email from an important person (a former
 IETF chair even) on an IETF mailing list.

 The content-type was:
 Content-Type: text/plain; charset=us-ascii

 yet the first line of the mail was 491 characters wide.

... which is legal according to the relevant standards. There are legitimate
uses for fixed format messages with long lines, which is why the 78 character
wrap limit is a SHOULD, not a MUST.

Of course in the case you present the line should have been wrapped. But that's
a user agent quality issue, albeit a fairly common one.

 There is an
 expectation that format=flowed; is now implied.

That particular wrongminded expectation has been with us in one form or another
for almost 20 years - it became a real problem shortly after MIME was
published, as a matter of fact. There have been many attempts to address it,
including the relatively recent definition of the format=flowed label.
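
(For anyone unfamiliar with it, that label is just a parameter on the existing
text/plain media type, along the lines of:

   Content-Type: text/plain; charset=UTF-8; format=flowed; delsp=yes

Nothing about transport changes; it's purely a hint about how trailing spaces
and line breaks should be interpreted on display.)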

 Has the YAM WG taken this into consideration?

Of course it hasn't. Read the charter. YAM has a single purpose:

The Yet Another Mail (YAM) WG will revise existing Internet Mail
specifications currently at Draft Standard to advance them to Full
Standard. 

Even if there was a consensus to make format=flowed the default (and I've seen
no sign of any such thing), YAM couldn't do it. You know, that whole pesky no
significant changes when moving from draft to full thing?

 Will a new standard now
 make format=flowed; standard?

Seems unlikely.

 I'm really really really tired of this.  I never expected redmond to do
 the right thing, but at least everyone with a clue knew that they did
 the wrong thing, and some versions of that system there was even
 settings that mostly did the right thing.

 Meanwhile a colleague of mine in the non-IT space is complaining that he
 can't read email that I send because his Apple Mail program seems to
  forcibly reformat all text, assuming it is format=flowed;
 RFC2646 has been around for 11 years now.

 So my request is simple:

1) can the IETF secretariat please document appropriate settings for
   commodity mail user agents that comply with RFC2646, and configure
   our mailing list software to filter anything that does not comply?
   I'm serious about this. It's embarassing.

The first part seems reasonable, the second seems like a massive overreaction
and IMNSHO has zero chance of being adopted.

2) alternatively, will the YAM (or another WG) please update the
   standard to say that format=flowed is now the default, and
   ideally also give me some idea if format=fixed will do anything
   at all with MUAs out there?

Again, making such a major change to the default handling of the most common
type of email seems unlikely to be feasible at this stage.

And besides, we've tried standards-making in this space quite a few times
(simple text proposal, text/enriched, format=flowed, etc.) and nothing
has worked. I see no reason to believe that further standards-making
will do anything but further fragment the behavior of user agents.

So perhaps it's time to try a different approach. Instead of writing yet
another technical specification, write a best practices document that
explains the problem and gives specific advice to user agent developers.
This sort of thing has been conspicuous by its absence in previous
documents.

Just a thought.

Ned

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-melnikov-mmhs-header-fields-04.txt (Registration of Military Message Handling System (MMHS) header fields for use in Internet Mail) to Informational RFC

2011-09-16 Thread ned+ietf

On 2011-09-15 18:46, Mykyta Yevstifeyev wrote:
 ...
 9) I'd like further registration of header fields beginning with MMHS
 to be disallowed, likewise the RFC 5504 Downgraded prefix
 (http://tools.ietf.org/html/rfc5504#page-18).
 ...



No. It's a bad idea in RFC 5504, and it would be a bad idea here.


Agreed. Bad idea.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Conclusion of the last call on draft-housley-two-maturity-levels

2011-09-07 Thread ned+ietf
 to actually perform such analysis without
actually trying things at some sort of scale. I'm sorry, but I've seen no 
evidence that the necessary skills for this actually exist.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Conclusion of the last call on draft-housley-two-maturity-levels

2011-09-06 Thread ned+ietf
  I find it impossible to believe that this will not result in even more
  hard-line positions on the part of some IESG members when something
  with which they disagree is a candidate for PS.  I see no way in which
  the draft solves this problem, which remains one of its implicit
  goals.  I said before, I don't care if it is published, because I
  think it will have little effect.  But I think we'd better be prepared
  for some IESG members to insist on the same high bar for PS that we
  have under RFC 2026, regardless of what the RFC says.

 +1

 Best statement of the problem with this document that I've seen so far.

Except for one small problem: It's nonsensical.

Why is it nonsensical? Because you're comparing the status quo with a possible
future course of action. The one thing that we can be certain of is that things
won't remain the same. They never do. So in order to make a reasonable
comparison you have to project what's likely to happen if this document isn't
approved, then compare that with what might happen if it is.

And the future where this isn't approved is fairly easy to predict: As more and
more documents become proposed standards and then fail to progress along the
standards track - and the trend lines for this could not be clearer - we
effectively move more and more to a one-step process. The IESG has only one
rational response to that: Continue to raise the bar on the initial step to
proposed.

Will the imposition of a two step process change this? It certainly won't do so
immediately, so the likely outcome is that yes indeed, the bar will continue to
go up, at least initially, irrespective of whether or not this document is
approved. But if more documents start advancing - and yes, that's an if - that
will lessen the pressure on the initial step, and perhaps break the cycle we're
currently stuck in.

And please don't try trotting out a bunch of additional what ifs about how if
this proposal fails we can then get past this distraction (or however you would
characterize it) and address whatever it is you think the actual problems are.
Given the time that has gone into trying to make this one simple and fairly
obvious change, if it fails, the odds of someone attempting even more
fundamental changes are pretty darned low.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: Conclusion of the last call on draft-housley-two-maturity-levels

2011-09-02 Thread ned+ietf
 In looking through this discussion, I see:

  - People saying that moving from 3 steps to 2 steps is a small step in the
 right direction, lets do it. Many people who have said this (including I) have
 been silent for a while quite possibly because they have gotten frustrated with
 the endless discussion.

Ross, I'm right there with you. I fully support this document: at worst it's a small
incremental step that clears away some brush (at best it may actually turn
out to be quite valuable), and I'm completely frustrated that this discussion is
continuing.

This really needs to stop now. And yes, some people aren't happy with the
outcome. Them's the breaks.

  - People saying that there are other more important problems that we should
 be focusing on. Therefore, rather than either making this simple change or
 discussing other possible improvements in the process, instead let's debate
 this simple step forever and never get anything done.

At least part of the problem is lack of agreement on what the issues are. Even
if this step is a waste of time - I think that's unlikely but let's suppose - 
at least it will make it clear where the problems *aren't*. 

  - People saying that this step won't do anything.

 Two things that I don't seem to have picked up on: (i) Any consensus that a 3
 step process is better than a 2 step process; (ii) Any hint of moving towards
 an agreement on other things that we might do to improve the process.

Well, that's the real problem, isn't it? Even if you believe this is a
distraction and even actively harmful, it's not like we've been able to move
past it either. The running code result here seems pretty clear, and it does
not argue in favor of another round of discussion.

 I think that we should go to a two maturity level process, be done with
 this little step, and also enthusiastically encourage people to write drafts
 that propose *other* changes to the process. Then at least we can be debating
 something different 6 months from now than we were debating last year.

+100

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Conclusion of the last call on draft-housley-two-maturity-levels

2011-09-02 Thread ned+ietf

 On Sep 2, 2011, at 5:36 PM, ned+i...@mauve.mrochek.com wrote:

  In looking through this discussion, I see:
 
  - People saying that moving from 3 steps to 2 steps is a small step in the
  right direction, lets do it. Many people who have said this (including I) have
  been silent for a while quite possibly because they have gotten frustrated with
  the endless discussion.
 
  Ross, I'm right there with you. I fully support this document: at worst it's a small
  incremental step that clears away some brush (at best it may actually turn
  out to be quite valuable), and I'm completely frustrated that this discussion is
  continuing.
 
  This really needs to stop now. And yes, some people aren't happy with the
  outcome. Them's the breaks.

 As far as our process is concerned, the question is, do we have rough
 consensus to accept it?  I think it's dubious that we have such consensus, and
 apparently so do others.

Simply put, I've watched the responses to this fairly closely, and I completely
disagree with your assessment.

 Personally I think this proposal is Mostly Harmless, so I'm willing to hold
 my nose about it.   But I'm very concerned about the argument that the default
 assumption should be that we change our process even in the absence of
 consensus to do so.

 Regarding the proposal, I get the impression that people are mostly in three
 camps:

Well, none of these describe my own position, which is that eliminating the
three step process will at a minimum act as an incentive to move more documents
along. (You, and most others engaging in this debate, routinely neglect the
psychological factors involved.)

I can easily name a dozen RFCs, all currently at proposed, that I for one will
be strongly incented to work to advance if this step is taken. And there isn't
a chance in hell that I'll bother with any of them if this step doesn't happen,
especially after the recent debacle surrounding the attempt to move 4409bis to
full standard, and more generally given how the entire YAM experiment played
out. I'm sorry, but running the advancement gauntlet is plenty hard enough
to do once. Requiring it be done twice? Been there, done that, not remotely
interested in doing it again.

Additionally, by simplifying the process, we will gain essential insight into
where other problems lie. Without such simplification I see no chance at all at
making progress on any of these issues.

 1) Even if this is a baby step, it's a step in the right direction.  Or even
 if it's not a step in the right direction, taking some step will at least
 make it possible to make some changes in our process.  Maybe we'll not like
 the results of taking this step, but at least then we'll have learned
 something, and if the result is clearly worse we'll be motivated to change it.
 (I call this change for the sake of change)

That last substantially and obviously mischaracterizes this position. In fact
I strongly recommend that you stop trying to summarize complex positions with
cute - and utterly wrong - phrases like this. This is annoying and
quite unhelpful.

 2) Fixing the wrong problem doesn't do anything useful, and will/may serve
 as a distraction from doing anything useful.
 (I call this rearranging the deck chairs)

 3) People should stop arguing about this and just hold their noses about it,
 because the arguing will make it harder to do anything else in this space.
 (I call this Oceana has always been at war with Eurasia.  Ok, that's
 probably too harsh, but it's what immediately comes to mind.)

Actually, I think there are a substantial number of people who believe exactly
the opposite of this.

 All of these are defensible theories.  As it happens, I don't believe #1
 applies in this space, I do believe #2, and I have to admit that #3 does
 happen.

 The arguments that people are giving in favor of approving this bother me
 more than the proposal itself does.  (I'm a firm believer that good decisions
 are highly unlikely to result from flawed assumptions, and flawed assumptions
 often affect many decisions.  So challenging a widely-held flawed assumption is
 often more important than challenging any single decision.)

Well, the main argument I'm giving is based on my own perception of the effect
this will have on myself and similarly minded people as a contributor. If you
think that assessment is incorrect, then I'm sorry, but I think you're being
extraordinarily foolish.

ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Conclusion of the last call on draft-housley-two-maturity-levels

2011-09-02 Thread ned+ietf

First, I'm in full agreement with Ross.



Second, for the record and as a response to Keith, my read of the discussion
on the last call was that the biggest group of responses said that we should move
forward with the draft. There were two smaller groups, those with a clear
objection and those with roughly a no-objection or it does not cause harm
opinion (and a group who seemed to discuss orthogonal issues and not respond to
the question). I could of course have made mistakes in this determination, but
I thought it was rough (perhaps very rough) consensus.


FWIW, this matches my own assessment almost exactly.


Of course, it gets more interesting if you start thinking about the reasons
why people wanted to move forward. Keith's latest e-mail has interesting
theories about those. I don't think anyone thinks this is the priority #1
process fix for the IETF.


Agreed.


For me, cleaning cruft from the IETF process RFCs is a big reason for
supporting this work. And I must admit that we seem to be in a place where it's
very, very hard to make _any_ process RFC changes. Getting one done, even if
it's a small change, would by itself be useful, IMO. Finally, I think two levels
are enough.


Cruft elimination is also a Good Thing.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Conclusion of the last call on draft-housley-two-maturity-levels

2011-09-02 Thread ned+ietf
 no chance at all at making progress on any of these issues.

 Okay, I can see that as a possibility.  Sometimes when undertaking a great
 task, it doesn't matter what subtask you pick to do next, as long as you do
 something.   Momentum is often more important than doing things in order of
 importance.   My question is then, how many people think that we need to
 undertake a great task where our process is concerned, and how many of those
 think that given current political conditions, if we undertake such a task,
 we're likely to end up with something substantially better than we have now? 
 (I'm open to the idea but skeptical)

My answer is I have no idea. Our current process is so complex - and
unnecessarily so - that every time we even try and discuss substantial changes
the discussion goes off in about fifty different directions. When it comes to 
the really big stuff we can't even agree on where to start, let alone get
consensus on what to try and fix.

So let's please take the small step of simplifying the process a little first,
so perhaps we can get some perspective on the big stuff. Or not - it may  well
be that this small step isn't sufficient to gain any sort of perspective, but
I've already given my reasons for believing it's useful in its own right even
if that does not happen.

 
  1) Even if this is a baby step, it's a step in the right direction.  Or even
  if it's not a step in the right direction, taking some step will at least
  make it possible to make some changes in our process.  Maybe we'll not like
  the results of taking this step, but at least then we'll have learned
  something, and if the result is clearly worse we'll be motivated to change it.
  (I call this change for the sake of change)
 
  That last substantially and obviously mischaracterizes this position. In fact
  I strongly recommend that you stop trying to summarize complex positions with
  cute - and utterly wrong - phrases like this. This is annoying and
  quite unhelpful.

 There are definitely cases where a journey of a thousand miles begins with a
 single step, I'm just skeptical that that argument applies in this specific
 case.

  2) Fixing the wrong problem doesn't do anything useful, and will/may serve
  as a distraction from doing anything useful.
  (I call this rearranging the deck chairs)
 
  3) People should stop arguing about this and just hold their noses about it,
  because the arguing will make it harder to do anything else in this space.
  (I call this Oceana has always been at war with Eurasia.  Ok, that's
  probably too harsh, but it's what immediately comes to mind.)
 
  Actually, I think there are a substantial number of people who believe exactly
  the opposite of this.

 I'm not sure I understand what you mean here.   Are you saying that there are
 a substantial number of people who wish to make it harder to do anything at 
 all
 in this space, so they keep arguing about it?  Or something else?

I think a substantial number of people believe there is non-negligible benefit
in doing this. No nose-holding required.

  The arguments that people are giving in favor of approving this bother me
  more than the proposal itself does.  (I'm a firm believer that good decisions
  are highly unlikely to result from flawed assumptions, and flawed assumptions
  often affect many decisions.  So challenging a widely-held flawed assumption is
  often more important than challenging any single decision.)
 
  Well, the main argument I'm giving is based on my own perception of the effect
  this will have on myself and similarly minded people as a contributor. If you
  think that assessment is incorrect, then I'm sorry, but I think you're being
  extraordinarily foolish.

 I think you're in an excellent position to understand how approval or
 disapproval of this document will affect your interest in doing work on the
 documents you mentioned, and I'm sure you're not the only one who would be
 encouraged by such a change to our process.

And IMO that's a good thing.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: Minimum Implementation Requirements (Was: 2119bis)

2011-09-01 Thread ned+ietf
  -Original Message-
  From: ietf-boun...@ietf.org [mailto:ietf-boun...@ietf.org] On Behalf Of 
  Melinda Shore
  Sent: Thursday, September 01, 2011 12:45 PM
  To: ietf@ietf.org
  Subject: Re: Minimum Implementation Requirements (Was: 2119bis)
 
  Can anybody point to an incident in which lack of clarity around
  2119 language caused problems, and it was determined that 2119
  itself was the problem and not authors or editors being careless?

 +1.

 As we've defined SHOULD and MUST in RFC2119, they lay out conformance 
 requirements.  I still don't see what's broken.

 If the Why is this a SHOULD and not a MUST? question that Spencer pointed
 out is a common one, then guidance to authors might be an appropriate
 addition. 

The fact that reviewers sometimes question the choice between a SHOULD and MUST
(in either direction) demonstrates absolutely nothing in and of itself. In fact
I see such questions as entirely healthy and appropriate, indeed, if they
didn't come up fairly regularly I'd strongly suspect we have a much bigger
problem with our specifications not taking operational realities into account.

Now, if when such questions arise the eventual outcome is often that the SHOULD
gets changed to a MUST or vice versa, then we may have an issue with using the
terms that needs to be addressed. Or we may not - there are plenty of other
possible explanations.

But we can save the root cause analysis until there is at least some evidence
that compliance terms are regularly being *changed* as a result of review
feedback. AFAIK such evidence has not been introduced.

Ned

P.S. And if anyone seeks to provide such evidence, please remember that
data is not the plural of anecdote.


re: Pachyderm

2011-09-01 Thread ned+ietf

can you please explain *why* publishing conformance statements would be
such a bad idea? I am not being cynical, I really want to understand the
reasoning.


(I don't know Pete's reasons, but I suspect they're not dissimilar from my 
own. Which are ...)


The main problem with conformance language is that conformance has a nasty way
of becoming an end unto itself, and the *reasons* why conformance is desired
get lost along the way. The result is technical compliance to a bunch of words
on paper that doesn't provide an actual, useful result the way that, say,
insisting on interoperability does.

For example, the X.400 email standards are all about conformance. Incredibly
elaborate and picky conformance test suites can be, and have been, written for
this stuff. So how is it that, after passing a truly massive test suite that
checked every last little conformance detail in the specifications (and paying
lots of $$$ for the privilege), our software then failed to interoperate in at
least half a dozen different ways with another piece of software that as it
happened had also just passed the exact same test suite?

Heck, we couldn't even get the thing to properly negotiate a session to
transfer messages. And once we got that working (in the process we ended up
having to violate a couple of those test suite requirements) we were
immediately lost in a thicket of differing interpretations of not just protocol
fields but even the basic elements that comprise X.400 addresses.

And this is supposed to be useful? As a moneymaker for software developers,
maybe - you may rest assured the costs of all of this nonsense were passed on to
our customers, many of whom were bound by various regulations and had no choice
but to buy this crap - but as a way to get things to work, most assuredly not.

And this trap is a lot easier to fall into than you might think. I've fallen
into it myself - I once spent entirely too much time dithering about getting
severe error handling right in a particular programming language
implementation, completely losing sight of the fact that once an error this
severe occurs the options are limited and the outcomes are so poor it just
isn't that important that the rules are followed to the letter. It made a lot
more sense to focus on getting rid of the conditions that could cause an error.


And, for extra credit, what do you make of
http://tools.ietf.org/html/rfc5996#section-4 (in my own backyard)?


Well, the section may be titled "Conformance Requirements", but it is all
about interoperability, not conformance. So that's fine in my book.

Ned


Re: 2119bis

2011-08-30 Thread ned+ietf
 On Aug 30, 2011, at 9:24 AM, Marshall Eubanks wrote:

  I support adding the SHOULD ... UNLESS formalism (although maybe it should 
  be MUST... UNLESS). It would be useful as there will be times where the 
  UNLESS can be specified and has been given due consideration at the time of 
  writing. That, however, will not always be the case. (More inline).

 How would SHOULD...UNLESS or MUST...UNLESS differ from using the current 2119 
 definitions and just writing SHOULD...unless or MUST ... unless?

 Personally I think 2119 is just fine and doesn't need to be updated.

+1. I'm still not seeing sufficient justification to open this particular can
of worms at this juncture.

Ned


Re: 2119bis

2011-08-29 Thread ned+ietf
 Hi -

  From: Eric Burger eburge...@standardstrack.com
  To: Narten Thomas nar...@us.ibm.com; Saint-Andre Peter 
  stpe...@stpeter.im
  Cc: IETF discussion list ietf@ietf.org
  Sent: Monday, August 29, 2011 3:08 PM
  Subject: Re: 2119bis
 
  I would assume in the text of the document.  This paragraph is simply an 
  enumeration of Burger's Axiom:
  For every SHOULD, there must be an UNLESS, otherwise the SHOULD is a MAY.

 I disagree.

I concur with your disagreement. SHOULD should *not* be used when the
list of exceptions is known and practically enumerable.

 If the UNLESS cases can be fully enumerated, then
 SHOULD x UNLESS y is equivalent to WHEN NOT y MUST X.
 (Both beg the question of whether we would need to spell out that
 WHEN y MUST NOT X is not necessarily an appropriate inference.)

 RFC 2119 SHOULD is appropriate when the UNLESS cases are
 known (or suspected) to exist, but it is not practical to exhaustively
 identify them all.

 Let's not gild this lily.

+1

Ned


Re: https

2011-08-28 Thread ned+ietf
 On 8/27/2011 4:12 PM, t.petch wrote:

  Glen
 
  Me again.
 
  Just after I posted my last message, I received a post on the ietf-ssh list,
  hosted by netbsd.org, and looking through the 'Received: from' headers, as
  one does on a wet Saturday morning of a Bank Holiday weekend, there was TLS,
  used to submit the message to the server, so even spammers have caught on
  that TLS should be used everywhere.  End to end?

As a side note, the reason spammers use TLS to submit mail is obvious: It's
required by many submission servers so they really don't have a choice. (The
reason it's required is to protect the authentication exchange, not because
there's any real expectation that it provides useful privacy protection for the
submitted email itself.)

 Apparently, from the POV of the spammer & his SMTP server.  Email is a
 store & forward system.  In any case, my original question was not about
 the definition of end-to-end, it was about Ned's usage of the term "hop".

I used the term "hop" in a very generic sense to refer to moving the
data around.

  Upon further analysis, however, it seems clear that he was referring to
 the email archives as if they are something other than simple files (as
 betrayed by his statement that "Don't pretend a transfer protection
 mechanism covering exactly one hop provides real object security,
 because it doesn't."); thus, the retrieval of the archived data would be
 the last hop in the email system.

And that's incorrect. For one thing, I often retrieve material from web sites
and save it rather than looking at it right there on the screen. So the
transfer of the material from the web server is in no way, shape, or form the
final hop the information takes before it is consumed. As as Keith points out,
I and many others am often forced to do such access through corporate-mandated
proxies of various sorts - another hop.

 There seem to be two problems with
 this statement: one is taking the file transfer mechanism as if it was
 part of the email system itself,

Nobody is making any such claim.

 which it obviously is not (downloading
 an archived message is no different than downloading an RFC from a Web
 site); the other being that someone, somewhere was pretending that TLS
 does something that it was never designed to do (a straw man of, AFAICT,
 Ned's invention -- I don't recall anybody making such a claim on this
 thread,

I was responding to the justification given for the use of https in this
context. The exact words used were:

  The mail archives (and the minutes of the physical meetings)
  are the official record of the Working Groups, IETF, etc.
  Those archives should be available with a reasonably high
  level of integrity and authenticity.

Nor was I the only, or even the first, to suggest that object security
is needed for this sort of protection.

 nor for that matter saying they _wanted_ real object security
 applied to the archives, merely that it's not really a bad idea for a
 person retrieving them to have some assurance that they came from the
 IETF server and that they weren't modified in transit).

And once again, nobody is saying that TLS doesn't give some very limited
assurance along these lines - the notion that there are claims to the contrary 
is your own strawman. What we are saying is that there are also significant
costs, those costs appear to be greater than the benefits in this case, and if
there is real concern about archive integrity there are better ways to secure
them.

Anyway, this discussion is now well past its sell-by date, so this will be my
final response on the topic.

Ned


Re: https

2011-08-27 Thread ned+ietf
 I'm increasingly coming to think that *everything* should be done with
 TLS unless you can prove it's harmful.  Call me paranoid, but given
 the general state of the world, secure-by-default seems like the way
 to go.  -Tim

This sentiment always sounds nice, but the devil is in the details.

Back when STARTTLS was added to SMTP, a bunch of us thought this provided the
means to do opportunistic encryption of email transport, so there was a push to
deploy that. It was a total failure - it seems a certain large vendor with a
very popular implementation borked their server so it advertised STARTTLS even
though no certificate was installed and any attempt to negotiate TLS protection
would end in failure. And since the negotiation failure left the connection in
an unusable state, a new connection had to be tried, along with all the logic
to prevent the new one from ending up in the same state.
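
The shape of that workaround logic is roughly this - a minimal sketch in
Python's standard smtplib, purely my own illustration and not taken from any
of the implementations involved:

   import smtplib
   import ssl

   def connect_opportunistic(host, port=25):
       # Opportunistic TLS: upgrade if the server advertises STARTTLS, but
       # be prepared for the advertisement to be a lie.
       conn = smtplib.SMTP(host, port)
       conn.ehlo()
       if conn.has_extn("starttls"):
           try:
               conn.starttls(context=ssl.create_default_context())
               conn.ehlo()  # capabilities must be re-fetched after TLS
               return conn
           except (smtplib.SMTPException, ssl.SSLError, OSError):
               # The failed negotiation leaves the session unusable, so
               # start over in the clear - and remember the failure, or
               # the retry lands right back in the same state.
               try:
                   conn.close()
               except OSError:
                   pass
               conn = smtplib.SMTP(host, port)
               conn.ehlo()
       return conn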

It took years before enough of the broken servers were fixed to try again, and
by then service defaults and deployment guidelines were well established and
the opportunity was gone as well.

Of course this set of issues was unique to the situation. But with TLS it's
always something - if it isn't broken default configurations, it's incompatible
cipher suites, or problems with certificate formats, or expired certificates,
or dubious CAs, or major security holes in popular implementations, etc. etc.

The bottom line is this stuff is just too complicated for mere mortals to get
right, and the fact that they then proceed to get it wrong more often than not
causes an enormous amount of trouble.

In the present case, for example, you may think that an expired certificate is
not a big deal. Not so. For one thing, the more people have to click on the
"this really isn't secure, proceed anyway" button to get things done, the more
likely they are to do it when it's a real attack and not a maintenance problem.
And for another, there are some browser/certificate error combinations where
there's no workaround - no amount of clicking "I agree" buttons will get you
the data you're after.
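
The galling part is that staying ahead of expiration is trivially
automatable. A minimal monitoring sketch in Python - my own illustration,
with a placeholder hostname; note it has to run before expiry, since an
already-expired certificate fails the verified handshake outright:

   import socket
   import ssl
   import time

   def days_until_expiry(host, port=443):
       # Pull the server's certificate over a verified connection and
       # compute the days remaining until its notAfter date.
       ctx = ssl.create_default_context()
       with socket.create_connection((host, port), timeout=10) as sock:
           with ctx.wrap_socket(sock, server_hostname=host) as tls:
               cert = tls.getpeercert()
       remaining = ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()
       return int(remaining // 86400)

   for host in ("www.example.org",):  # whatever is on the to-do list
       days = days_until_expiry(host)
       if days < 30:
           print("renew %s: certificate expires in %d days" % (host, days))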

I don't have an answer here, but the one thing I'm fairly sure of is that
blindly pushing TLS everywhere is not the solution a lot of folks believe
it is.

Ned


Re: https

2011-08-27 Thread ned+ietf

On 8/27/11 7:25 AM, ned+i...@mauve.mrochek.com wrote:
 I don't have an answer here, but the one thing I'm fairly sure of is that
 blindly pushing TLS everywhere is not the solution a lot of folks believe
 it is.



I tend to think that the problem here (and I agree that it's a big one)
isn't TLS, but that PKI as defined by pkix is very difficult to deploy
correctly.


Agreed.


I've seen similar sorts of problems with digital signatures
on email, but in those cases as often as not someone simply got
the certificate contents wrong (or the user doesn't understand how to
configure his mail client correctly and is using a name that doesn't
appear in the certificate) rather than that the cert has expired (although
there's a lot of that, too).  There's a substantial usability problem.


Absolutely, and it's both architectural and operational - PKI is
full of complex and subtle concepts that implementations don't exactly
help you with.

Ned


Re: https

2011-08-26 Thread ned+ietf


 --On Friday, August 26, 2011 09:43 -0400 Donald Eastlake
 d3e...@gmail.com wrote:

  Yup, but why are we using https at all?  Who decided, and
  please would they undecide?  Expired certificates can be
  circumvented, but all too often, the https parts of the web
  site just do not work and, more importantly, I think it wrong
  to use industrial grade security where none is called for.
 
  The mail archives (and the minutes of the physical meetings)
  are the official record of the Working Groups, IETF, etc.
  Those archives should be available with a reasonably high
  level of integrity and authenticity.

 Don,

 If that is the goal, wouldn't we be lots better off just
 digitally signing those things, just as we are gradually
 starting to create signatures for I-Ds, etc.?  Verifying that
 one is talking to the right server and that the content is not
 tampered with in transit is all well and good, but it doesn't
 protect against compromised documents or a compromised server at
 all.

+1. If you want signatures, do them properly. Don't pretend a transfer
protection mechanism covering exactly one hop provides real object security,
because it doesn't.

And as for the "encrypt so the really secret stuff doesn't stand out" argument,
that's fine as long as it doesn't cause inconvenience to anyone. That's clearly
not the case here. And I'm sorry, the "mistakes were made" notion doesn't
really fly: Certificates aren't a "set it and forget it" thing, so if you
haven't noted expiration dates on someone's to-do list so they can be updated
before expiration, you're not doing it right.

Ned


Re: https

2011-08-26 Thread ned+ietf
 On Fri, Aug 26, 2011 at 11:14 AM,  ned+i...@mauve.mrochek.com wrote:
 
  +1. If you want signatures, do them properly. Don't pretend a transfer
  protection mechanism covering exactly one hop provides real object security,
  because it doesn't.
 

 It ensures that what you're getting is the same as what the IETF has
 on file,

No, it really doesn't do that. There can be, and usually are, a lot of steps
involved besides the one https hop.

 and (assuming you trust the IETF's archive integrity, which
 is a separate problem)

On the contrary, it's the same problem. You're just pretending that solving an
insignificant subset of that problem is useful. It isn't.

 what everyone else on the list received.

This, OTOH, *is* a separate problem, one that isn't addressed in any way by
https on archive access. Nor does a signed archive address this issue.

Nonrepudiation of delivery is in fact a very difficult problem to solve - the
algorithms I'm aware of for it either involve a TTP or are exceptionally ugly -
and the fact that the architecture of our email services isn't designed for it
doesn't exactly help. (In case you care, it's the nontransactional nature of
POP and IMAP that gets in the way of doing nonrepudiation of delivery at the
email protocol level.)

We're very lucky that we don't operate in a regime where this is actually a
requirement. (And such regimes do exist, although the solutions they actually
use tend to be hacks.)

 It
 seems to me it's more important to know that than to know that stuff
 sent to the list actually came from who it claims to come from. Does
 it really matter if a proposed standard wasn't really designed by who
 the archive says it was designed by?

And now you're talking about yet a third problem - nonrepudiation of
submission. It's completely doable, but only by significantly increasing the
barriers to participation by requiring signed messages. I don't believe the
benefits come even close to the costs.

  And as for the "encrypt so the really secret stuff doesn't stand out"
  argument, that's fine as long as it doesn't cause inconvenience to anyone.
  That's clearly not the case here. And I'm sorry, the "mistakes were made"
  notion doesn't really fly: Certificates aren't a "set it and forget it"
  thing, so if you haven't noted expiration dates on someone's to-do list so
  they can be updated before expiration, you're not doing it right.

 Yeah, it seems like the IETF would be on top of certificate
 expiration, having invented it. But I think having the encryption is a
 very good thing. Some government interests would be happy to keep
 information about some aspects of IETF business away from their
 citizenry, and have the resources to launch a MITM attack. (Of course,
 those governments may also have the resources to sign a fake
 certificate, but that is, again, a separate problem.)

I'm sorry, but the notion that the present use of https provides any
sort of protection against this sort of thing is just silly. The simple
facts that the archives are also available via regular http and that
there is no stated policy about the availability of archives via https
mean the present setup is completely vulnerable to downgrade attacks.
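
(The check is mechanical enough, if anyone cared to run it. A rough sketch
in Python's urllib, again purely my own illustration: a host only resists
downgrade if plain http is redirected to https and the response carries an
https-only policy along the lines of the Strict-Transport-Security drafts.)

   import urllib.request

   def https_enforced(host):
       # Downgrade resistance needs both: plain http must land on https,
       # and clients must be told never to try http again.
       plain = urllib.request.urlopen("http://%s/" % host, timeout=10)
       redirected = plain.geturl().startswith("https://")
       secure = urllib.request.urlopen("https://%s/" % host, timeout=10)
       policy = secure.headers.get("Strict-Transport-Security")
       return redirected and policy is not None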

Again, if you really care about this sort of stuff, the archives need to be
timestamped and signed. 
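
To make that concrete: the thing to timestamp and sign would be the archive
contents themselves, e.g. a manifest of per-file digests. A toy sketch, mine
and not a design, with the actual signing step - S/MIME, OpenPGP, whatever -
elided:

   import hashlib
   import pathlib

   def archive_manifest(archive_dir):
       # One "digest  path" line per file in the archive tree; sign and
       # timestamp this, and the transport stops mattering.
       lines = []
       for path in sorted(pathlib.Path(archive_dir).rglob("*")):
           if path.is_file():
               digest = hashlib.sha256(path.read_bytes()).hexdigest()
               lines.append("%s  %s" % (digest, path))
       return "\n".join(lines)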

 Once the certificate expiration business is fixed, it should be fairly
 simple to make sure it's kept up to date so that this sort of thing
 doesn't happen again.

10+ years of experience with multiple certificates having multiple failures
says this is, at best, a crapshoot.

Ned


Re: Last Call: draft-ietf-intarea-ipv6-required-01.txt (IPv6 Support Required for all IP-capable nodes) to Proposed Standard

2011-08-22 Thread ned+ietf
 I find this document utterly bizarre and think it would seriously damage the
 Internet to publish it.

This seems a little ... extreme. The document appears to me to be Mostly
Harmless, with all that implies.

 The idea that ipv6 should be regarded as normal, as of equal standing to
 ipv4 is fine, the sort of statement that the IAB should make, or have made,
 as an RFC or in some other form.

 But this I-D claims
  Updates [RFC1122] to clarify that this document, especially in
section 3, primarily discusses IPv4 where it uses the more generic
term IP and is no longer a complete definition of IP or the
Internet Protocol suite by itself.  

 IPv4 is a phenomenal success, and RFC1122 is a key part of that.  IPv4 was a
 confused jumble, as IPv6 is now, and RFC1122, with another two or so I-Ds, cut
 through the cruft and rendered it usable.  IPv6 desperately needs an
 equivalent to RFC1122,

Complete agreement on this point. Such a document, informed by actual IPv6
deployment experience at some sort of scale, is urgently needed. And this most
certainly is NOT that document. But unless publishing this is seen as meeting
the need for a 1122v6 - and I've seen no indication that's the case - I fail
to see the harm.

OTOH, if this really is seen as being a 1122v6, then I join you in opposing
its publication.

 as a trawl of the v6ops list archives shows, and clearly this I-D is
 never going to be it, but claiming that this I-D provides an update to
 RFC1122, coupled with its title, gives the message that there is not going
 to be such an I-D;
 IPv6 will remain a confused jumble (and so is unlikely ever to emulate the
 success of IPv4).

Maybe I'm being clueless about this, but I don't see how "IPv6 Support
Required for all IP-capable nodes" gives this impression.

Ned


  1   2   3   >