Re: Montevideo statement
--On Wednesday, October 09, 2013 02:44 -0400 Andrew Sullivan a...@anvilwalrusden.com wrote: ... That does not say that the IAB has issued a statement. On the contrary, the IAB did not issue a statement. I think the difference between some individuals issuing a statement in their capacity as chairs and CEOs and so on, and the body for which they are chair or CEO or so on issuing a similar statement, is an important one. We ought to attend to it. Please note that this message is not in any way a comment on such leadership meetings. In addition, for the purposes of this discussion I refuse either to affirm or deny concurrence in the IAB chair's statement. I merely request that we, all of us, attend to the difference between "the IAB Chair says" and "the IAB says".

Andrew,

While I agree that the difference is important for us to note, this is a press release. It would be naive at best to assume that its intended audience would look at it and say "Ah. A bunch of people with leadership roles in important Internet organizations happened to be in the same place and decided to make a statement in their individual capacities." Not only does it not read that way, but there are conventions for delivering the individual-capacity message, including prominent use of phrases like "for identification only".

Independent of how I feel about the content of this particular statement, if the community either doesn't like the message or doesn't like this style of doing things, I think that needs to be discussed and made clear. That includes not only preferences about community consultation but also whether, if in the judgment of the relevant people there is insufficient time to consult the community, no statement should be made at all. Especially from the perspective of having been in the sometimes-uncomfortable position of IAB Chair, I don't think IAB members can disclaim responsibility in a situation like this.
Unlike the Nomcom-appointed IETF Chair, the IAB Chair serves at the pleasure and convenience of the IAB. If you and your colleagues are not prepared to share responsibility for statements (or other actions) the IAB Chair makes that involve that affiliation, then you are responsible for taking whatever actions are required to be sure that only those actions are taken for which you are willing to share responsibility. Just as you have done, I want to stress that I'm not recommending any action here, only that IAB members don't get to disclaim responsibility for statements made by people whose relationship with the IAB is the reason they are, e.g., part of a particular letter or statement. john
Re: Last Call: Change the status of ADSP (RFC 5617) to Historic
--On Thursday, October 03, 2013 16:51 +0200 Alessandro Vesely ves...@tana.it wrote: [quoting the request] ADSP was basically an experiment that failed. It has no significant deployment, and the problem it was supposed to solve is now being addressed in other ways. [Alessandro] I oppose the change as proposed, and support the explanation called for by John Klensin instead. Two arguments: 1) The harm Barry exemplifies in the request --incompatibility with mailing list posting-- is going to be a feature of at least one of the other ways addressing that problem. Indeed, those who don't know history are destined to repeat it, and the explanation is needed to make history known. 2) A possible fix for ADSP is explained by John Levine himself: http://www.mail-archive.com/ietf-dkim@mipassoc.org/msg16969.html I'm not proposing to mention it along with the explanation, but fixing is not the same as moving to historic. It seems that it is just a part of RFC 5617, DNS records, that we want to move.

Ale,

Just to be clear about what I proposed, because I'm not sure that you actually agree: If the situation is as described in the write-up (and/or as described by John Levine, Murray, and some other documents), then I'm in favor of deprecating ADSP. The _only_ issue I'm raising in this case is that I believe that deprecating a feature or protocol element by moving things to Historic by IESG action and a note in the tracker is appropriate only for things that have been completely ignored after an extended period or that have long ago passed out of public consciousness. When something has been implemented and deployed sufficiently that the reason for deprecating it includes assertions that it has not worked out in practice, I believe that should be documented in an RFC, both to make the historical record clear and to help persuade anyone who is still trying to use it to cease doing so.
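For readers unfamiliar with the mechanism under discussion: ADSP (RFC 5617) is published as a DNS TXT record at _adsp._domainkey.&lt;domain&gt;, carrying a "dkim=" tag. The sketch below is a simplified illustration of how such a record's tag might be parsed, not a conforming RFC 5617 implementation; the domain name in the comment is illustrative only.

```python
# Illustrative sketch of ADSP (RFC 5617) record parsing.
# The policy is a TXT record at _adsp._domainkey.<domain>, e.g.:
#   _adsp._domainkey.example.com. IN TXT "dkim=discardable"
# Valid practices: "unknown", "all", "discardable".

VALID_PRACTICES = {"unknown", "all", "discardable"}

def parse_adsp(txt: str) -> str:
    """Return the declared signing practice, falling back to
    'unknown' when the record is absent or malformed (a
    simplification of RFC 5617's rules)."""
    for tag in txt.split(";"):
        name, _, value = tag.strip().partition("=")
        if name == "dkim":
            value = value.strip()
            return value if value in VALID_PRACTICES else "unknown"
    return "unknown"
```

The misconfiguration harm the write-up describes is exactly the case where a domain publishes "dkim=discardable" while legitimate mail (e.g. mailing-list traffic) arrives with broken signatures.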
There may well be arguments for not deprecating the feature, for improving it in various ways, or for contexts in which its use would be appropriate, but someone else will have to make them or propose other remedies. I have not done so nor am I likely to do so. best, john
Re: Last Call: Change the status of ADSP (RFC 5617) to Internet Standard
--On Wednesday, October 02, 2013 07:41 -0700 The IESG iesg-secret...@ietf.org wrote: The IESG has received a request from an individual participant to make the following status changes: - RFC5617 from Proposed Standard to Historic The supporting document for this request can be found here: http://datatracker.ietf.org/doc/status-change-adsp-rfc5617-to-historic/

Hi. Just to be sure that everyone has the same understanding of what is being proposed here, the above says "to Historic" but the writeup at http://datatracker.ietf.org/doc/status-change-adsp-rfc5617-to-historic/ says "to Internet Standard". Can one or the other be corrected?

After reading the description at the link cited above and assuming that Historic is actually intended, I wonder, procedurally, whether a move to Historic without documentation other than in the tracker is an appropriate substitute for the publication of an Applicability Statement that says "not recommended" and that explains, at least at the level of detail of the tracker entry, why using ADSP is a bad idea. If there were no implementations and no evidence that anyone cared about this, my inclination would be to just dispose of RFC 5617 as efficiently and with as little effort as possible. But, since the tracker entry says that there are implementations and that misconfiguration has caused harm (strongly implying that there has even been deployment), it seems to me that a clear and affirmative "not recommended" applicability statement is in order. thanks, john
Re: Last Call: Change the status of ADSP (RFC 5617) to Internet Standard
I assume we will need to agree to disagree about this, but...

--On Wednesday, October 02, 2013 10:44 -0700 Dave Crocker d...@dcrocker.net wrote: If a spec is Historic, it is redundant to say not recommended. As in, duh...

Duh notwithstanding, we move documents to Historic for many reasons. RFC 2026 lists Historic as one of the reasons a document may be "not recommended" (Section 3.3(e)) but says only "superseded... or is for any other reason considered to be obsolete" about Historic (Section 4.2.4). That is entirely consistent with Maturity Levels and Requirement Levels being basically orthogonal to each other, even if Not Recommended and Internet Standard are presumably mutually exclusive.

[Dave] Even better is that an applicability statement is merely another place for the potential implementer to fail to look and understand.

Interesting. If a potential implementer or other potential user of this capability fails to look for the status of the document or protocol, then the reclassification to Historic won't be found and this effort is a waste of the community's time. If, by contrast, that potential user checks far enough to determine that the document has been reclassified to Historic, why is it not desirable to point that user to a superseding document that explains the problem and assigns a requirement status of "not recommended"? The situation would be different if a huge amount of additional work were involved, but it seems to me that almost all of the required explanation is already in the write-up and that the amount of effort required to approve an action consisting of a document and a status change is the same as that required to approve the status change only. If creating an I-D from the write-up is considered too burdensome and it would help, I'd be happy to do that rather than continuing to complain.

[Dave] ADSP is only worthy of a small effort, to correct its status, to reflect its current role in Internet Mail. Namely, its universal non-use within email filtering.
If the specification had been universally ignored, I'd think that a simple status change without further documentation was completely reasonable. However, the write-up discusses harm caused by incorrect configuration and by inappropriate use, real cases, and effects from posts from users. That strongly suggests that this is a [mis]feature that has been sufficiently deployed to cause problems, not something that is universally unused. And that, IMO, calls for an explanation --at least to the extent of the explanation in the write-up-- as to why ADSP was a bad idea, should be retired where it is used, and should not be further deployed. best, john
RE: LC comments on draft-cotton-rfc4020bis-01.txt
--On Sunday, 29 September, 2013 09:19 +0100 Adrian Farrel adr...@olddog.co.uk wrote: Hi John, Thanks for the additions. [quoting John] Everything you say seems fine to me for the cases you are focusing on, but I hope that any changes to 4020bis keep two things in mind lest we find ourselves tangled in rules and prohibiting some reasonable behavior (a subset of which is used now). [Adrian] 4020bis is certainly not intended to prohibit actions we do now. AFAICS it actually opens up more scope for early allocation that was not there before. ... I am pretty sure that nothing in this document impacts on 5226 at all except to define how early allocation works for certain allocation policies defined by 5226. You are correct that 5226 is not a closed list. However we may observe that it is used in the significant majority of cases. I think that means that if some non-5226 policy is agreed by a WG and the relevant approving bodies together with IANA, then that policy needs to define its own early allocation procedure if one is wanted. ... This document, therefore, requires the WG chairs and the AD to be involved in the decision to do early allocation.

That was how I read the current version, so we are on the same page. Eric's note was considerably wider-ranging so, as you and Michelle work on revisions, I just wanted to caution against accidentally going too far in a direction that could be construed as changing other things. From my point of view, it would be a good thing to further emphasize that a code point, once allocated through any process, is, in a lot of cases, unlikely to be usable in the future for anything else. That is largely independent of whether the allocation is identified as early, preliminary, provisional, easy, or final. The current version is, I think, good enough about that, but better would be, well, better. thanks, john
RE: LC comments on draft-cotton-rfc4020bis-01.txt
--On Saturday, 28 September, 2013 23:44 +0100 Adrian Farrel adr...@olddog.co.uk wrote: Hi, I am working with Michelle on responses and updates after IETF last call. Most of the issues give rise to relatively easy changes, and Michelle can handle them in a new revision with a note saying what has changed and why. But Eric's email gives rise to a wider and more difficult point on which I want to comment. ... This is, indeed, the fundamental issue. And it is sometimes called squatting. ... The way we have handled this in the past is by partitioning our code spaces and assigning each part an allocation policy. There is a good spectrum of allocation policies available in RFC 5226, but it is up to working group consensus (backed by IETF consensus) to assign the allocation policy when the registry is created, and to vary that policy at any time. ... The early allocation thing was created to blur the edges. That is, to cover the case where exactly the case that Eric describes arises, but where the registries require publication of a document (usually an RFC). The procedures were documented in RFC 4020, but that was almost in the nature of an experiment. The document in hand is attempting to tighten up the rules so that IANA knows how to handle early allocations. ... Adrian, Everything you say seems fine to me for the cases you are focusing on, but I hope that any changes to 4020bis keep two things in mind lest we find ourselves tangled in rules and prohibiting some reasonable behavior (a subset of which is used now). (1) RFC 5226 provides, as you put it, a good spectrum of allocation policies but a WG (or other entity creating a registry) can specify variations on them and, if necessary, completely different strategies and methods. As long as they are acceptable to IANA and whatever approving bodies are relevant, 5226 is not a closed list. 
In particular, more than one registry definition process has discovered that the 5226 language describing the role of a Designated Expert and the various publication models are not quite right for their needs. (2) We've discovered in several WGs and registries that early allocation is just the wrong thing to do, often for reasons that overlap some of Eric's concerns. I don't see that as a problem as long as 4020bis remains clear that early allocation is an option that one can choose and, if one does, this is how it works rather than appearing to recommend its broad use. thanks, john
Re: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]
--On Sunday, 22 September, 2013 07:02 -0400 Noel Chiappa j...@mercury.lcs.mit.edu wrote: ... Yes. $$$. Nobody makes much/any money off email because it is so de-centralized. People who build wonderful new applications build them in a centralized way so that they can control them. And they want to control them so that they can monetize them.

That is even true of the large email providers, who are happy to provide free email in return for being able to leverage their other products and/or sell the users and user base to advertisers. And people, including, I've noticed, a lot of IETF participants, go along. Email is, in practice, a lot more centralized than it was ten or 15 years ago and is at risk of getting more so, not only as more users migrate but as those providers decide it is easier to trust only each other. With DKIM, increasing use of blacklists, and other things, the latter may be better (from a distributed-environment standpoint) than it was a half-dozen years ago, but I'm concerned that the pattern may be cyclic, with new domains providing new challenges and incentives for "trust those you know already" models. john
RE: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]
--On Sunday, 22 September, 2013 17:37 + Christian Huitema huit...@microsoft.com wrote: ... It is very true that innovation can only be sustained with a revenue stream. But we could argue that several services have now become pretty much standardized, with very little additional innovation going on. Those services are prime candidates for an open and distributed implementation. I mean, could a WG design a service that provides a stream of personal updates and a store of pictures and is only accessible to my friends? And could providers make some business by selling personal servers, or maybe personal virtual servers? Maybe I am a dreamer, but hey, nothing ever happens if you don't dream of it!

I agree completely. However, one could equally well say that operations can only be sustained with a revenue stream, and trust models among parties that don't already have first-hand relationships can get a tad complicated. Setting up a distributed email environment that supports secure communication among a small circle of friends (especially technically-competent ones) is pretty easy, even easier than the service you posit above. Things become difficult and start to encourage centralized behavior when, e.g., (i) the community allows basic Internet service providers to either prohibit running servers or make it unreasonably expensive, (ii) one wants the communications to be persistent enough that storage, backup, and operations become a big deal, and/or (iii) one wants on-net or in-band ways to introduce new parties to the group when there are Bad Guys out there (which more or less reinvents the PGP problem).
Architecturally, one can make a case that the Internet is much better designed for peer to peer arrangements than for client to Big Centrally-Controlled Server ones, even though trends in recent years run in the latter direction (and I still have trouble telling the fundamental structural differences between a centralized operation with extensive web services and users on dumb machines on the one hand and the central computer services operations of my youth on the other). So, a good idea and one that should be, IMO, pursued. But there are a lot of interesting and complex non-technical barriers. best, john
Re: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]
--On Sunday, 22 September, 2013 12:59 -0400 Paul Wouters p...@cypherpunks.ca wrote: Except that essentially all services other than email have gained popularity in centralized form, including IM. Note that decentralising makes you less anonymous. If everyone runs their own jabber service with TLS and OTR, you are less anonymous than today. So decentralising is not a solution on its own for meta-data tracking.

Perhaps more generally, there may be tradeoffs between content privacy and tracking who is talking with whom. For the former, decentralization is valuable because compromising the endpoints, and the messages stored on them, without leaving tracks is harder. In particular, if I run some node in a highly distributed environment, a court order demanding content or logs (or a call asking that I cooperate in disclosing data, keys, etc.) would be very difficult to keep secret from me (even if it prevented me from telling my friends/peers). And a lot more of those court orders or notes would be required than in a centralized environment. On the other hand, as you point out, traffic monitoring is lots easier if IP addresses identify people or even small clusters of people.

The other interesting aspect of the problem is that, if we want to get serious about distributing applications down to very small scale, part of that effort is, I believe necessarily, getting serious about IPv6 and avoidance of highly centralized conversion and address translation functions. john
Re: Transparency in Specifications and PRISM-class attacks
--On Friday, September 20, 2013 10:15 -0400 Ted Lemon ted.le...@nominum.com wrote: On Sep 20, 2013, at 9:12 AM, Harald Alvestrand har...@alvestrand.no wrote: From the stack I'm currently working on, I find the ICE spec to be convoluted, but the SDP spec is worse, because it's spread across so many documents, and there are pieces where people seem to have agreed to ship documents rather than agree on what they meant. I have not found security implications of these issues. [Ted] This sort of thing is a serious problem; people do make efforts to address it by writing online guides to protocol suites, but this isn't always successful, and for that matter isn't always done. We could certainly do better here.

Ted,

Based in part on experience with the specs of, and discussions in, other standards bodies, the problem with guides (online or not) is: (1) They may contain errors and almost always have omissions. The latter are often caused by the perfectly good intention of simplifying things and making them understandable by covering only the important cases. (2) If they are comprehensible and the standard is not, people tend to refer to them and not the standard. That ultimately turns them into the real standard as far as the marketplace is concerned. FWIW, the same problem can, and has, happened with good reference implementations.

I don't know of any general solution to those problems, but I think the community and the IESG have got to be a lot more willing to push back on a spec because it is incomprehensible or contains too many options than has been the case in recent years. john
Re: PS Characterization Clarified
--On Wednesday, September 18, 2013 10:59 +0200 Olaf Kolkman o...@nlnetlabs.nl wrote: [quoting John] However, because the document will be read externally, I prefer that it be IETF in all of the places you identify. If we have to hold our noses and claim that the community authorized the IESG actions by failing to appeal or to recall the entire IESG, that would be true if unfortunate. I would not like to see anything in this document that appears to authorize IESG actions or process changes in the future that are not clearly authorized by community consensus, regardless of how we interpret what happened in the past. ... [Olaf] But one of the things that we should try to maintain in making that change is the notion that the IESG does have an almost key role in doing technical review. You made the point that that is an important distinction between 'us' and formal SDOs.

It doesn't affect the document, but can we adjust our vocabulary and thinking to use, e.g., "more traditional" rather than "formal"? There is, IMO, too little that we do that is informal any more, but that isn't the point.

[Olaf] Therefore I propose that that last occurrence reads: ... I think that this language doesn't set a precedent and doesn't prescribe how the review is done, only that the IESG does do review. ... In full context: In fact, the IETF review is more extensive than that done in other SDOs owing to the cross-area technical review performed by the IETF, exemplified by technical review by the full IESG at the last stage of specification development. That position is further strengthened by the common presence of interoperable running code and implementation before publication as a Proposed Standard. Does that work?

The new sentence does work and is, IMO, excellent.
I may be partially responsible for the first sentence but, given other comments, suggest that you at least insert "some" so that it ends up being "...more extensive than that done in some other SDOs owing...". That makes it a tad less combative and avoids a potentially-contentious argument about counterexamples. The last sentence is probably ok although, if we were to do an actual count, I'd guess that the fraction of Proposed Standards for which implemented and interoperability-tested conforming running code exists at the time of approval is somewhat less than "common". john
Re: ORCID - unique identifiers for contributors
--On Wednesday, September 18, 2013 14:30 +0100 Andy Mabbett a...@pigsonthewing.org.uk wrote: On 18 September 2013 14:04, Tony Hansen t...@att.com wrote: I just re-read your original message to ietf@ietf.org. What I had originally taken as a complaint about getting a way to have a unique id (in this case, an ORCID) for the authors was instead a complaint about getting a unique id for the people listed in the acknowledgements. I can't say I have a solution for that one. [Andy] It wasn't a complaint, but a suggested solution, for both authors and other named contributors.

Andy, we just don't have a tradition of identifying people who contributed to RFCs with either contact or identification information. It is explicitly possible when Contributors sections are created and people are listed there, but contact or identification information is not required in that section, rarely provided, and, IIRC, not supported by the existing tools. That doesn't necessarily mean that doing so is a bad idea (although I contend that getting it down to listings in Acknowledgments would be) but that making enough changes to both incorporate the information and make it available as metadata would be a rather significant amount of work and would probably reopen policy issues about who is entitled to be listed.

For those who want to use ORCIDs, the suggestion made by Tony and others to just use the author URI field is the path of least resistance and is usable immediately. A URN embedding has several things to recommend it over that (mostly technical issues that would be clutter on this list). You would need to have a discussion with the RFC Editor as to whether, e.g., ORCIDs inserted as parenthetical notes after names in Contributors sections or even acknowledgments would be tolerated or, given a collection of rules about URIs in RFCs, removed, but you could at least do that in I-Ds without getting community approval.
If you want and can justify more formal recognition for ORCIDs as special and/or required, you haven't, IMO, made that case yet. Perhaps more important from your point of view, if you were, improbably, to get that consensus tomorrow, it would probably be years [1] before you'd see complete implementation. best, john

[1] Slightly-informed guess, but I no longer have visibility into ongoing scheduling and priority decisions.
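To make the path-of-least-resistance option concrete, here is a sketch of what an author block carrying an ORCID in the existing uri element might look like in xml2rfc source. The name, email address, and identifier are all illustrative, not taken from any actual RFC:

```xml
<!-- Illustrative only: an ORCID carried in the existing uri
     element of an xml2rfc author block, with no new tooling. -->
<author initials="J." surname="Doe" fullname="Jane Doe">
  <address>
    <email>jdoe@example.com</email>
    <uri>http://orcid.org/0000-0002-1825-0097</uri>
  </address>
</author>
```

This works today precisely because the uri element is free-form; whether such URIs survive the RFC Editor's conventions is the separate question raised above.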
Re: IPR Disclosures for draft-ietf-xrblock-rtcp-xr-qoe
--On Thursday, September 19, 2013 07:57 +1200 Brian E Carpenter brian.e.carpen...@gmail.com wrote: On 17/09/2013 05:34, Alan Clark wrote: ... It should be noted that the duty to disclose IPR is NOT ONLY for the authors of a draft, and the IETF reminder system seems to be focused solely on authors. The duty to disclose IPR lies with any individual or company that participates in the IETF not just authors. [Brian] Companies don't participate in the IETF; the duty of disclosure is specifically placed on individual contributors and applies to patents reasonably and personally known to them. IANAL but I did read the BCP.

Brian,

That isn't how I interpreted Alan's point. My version would be that, if the shepherd template writeup says "make sure that the authors are up-to-date" (or anything equivalent), it should also say "ask/remind the WG participants too". IMO, that is a perfectly reasonable and orderly suggestion (and no lawyer is required to figure it out). One inference from Glen's point -- that authors have already certified that they have provided anything they need to provide by the time an I-D is posted with the full compliance language -- is that it may actually be more important to remind general participants in the WG than to ask the authors. john
Re: IPR Disclosures for draft-ietf-xrblock-rtcp-xr-qoe
--On Wednesday, September 18, 2013 17:22 -0400 Alan Clark alan.d.cl...@telchemy.com wrote: John, Brian Most standards organizations require that participants who have, or whose company has, IPR relevant to a potential standard, disclose this at an early stage and at least prior to publication. The participants in the IETF are individuals however RFC3979 addresses this by stating that any individual participating in an IETF discussion must make a disclosure if they are aware of IPR from themselves, their employer or sponsor, that could be asserted against an implementation of a contribution. The question this raises is - what does participation in a ...

Alan,

Variations on these themes and options have been discussed multiple times. Of course, circumstances change and it might be worth reviewing them again, especially if you have new information. However, may I strongly suggest that you take the question to the ipr-wg mailing list. Most or all of the people who are significantly interested in this topic, including those who are most responsible for the current rules and conventions, are on that list. Your raising it there would permit a more focused and educated discussion than you are likely to find on the main IETF list. Subscription and other information is at https://www.ietf.org/mailman/listinfo/ipr-wg best, john
Re: PS Characterization Clarified
--On Tuesday, September 17, 2013 11:47 +0200 Olaf Kolkman o...@nlnetlabs.nl wrote: Based on the conversation below I converged to: t While less mature specifications will usually be published as Informational or Experimental RFCs, the IETF may, in exceptional cases, publish a specification that still contains areas for improvement or certain uncertainties about whether the best engineering choices are made. In those cases that fact will be clearly and prominently communicated in the document e.g. in the abstract, the introduction, or a separate section or statement. /t

I suggest that "communicated in the document e.g. in..." now essentially amounts to "...communicated in the document, e.g. in the document", since the examples span the entire set of possibilities. Consequently, for editorial reasons and in the interest of brevity, I recommend just stopping after "prominently communicated in the document". But, since the added words are not harmful, I have no problem with your leaving them if you prefer. john
Re: PS Characterization Clarified
--On Tuesday, September 17, 2013 11:32 +0100 Dave Cridland d...@cridland.net wrote: I read John's message as being against the use of the phrase "in exceptional cases". I would also like to avoid that; it suggests that some exceptional argument may have to be made, and has the implication that it essentially operates outside the process.

Exactly. I would prefer the less formidable-sounding "on occasion", which still implies relative rarity. And "on occasion" is at least as good as, or better than, my suggestions of "usually", "commonly", or "normally", although I think any of the four would be satisfactory.

--On Tuesday, September 17, 2013 07:06 -0400 Scott Brim scott.b...@gmail.com wrote: ... Exceptions and arguments for and against are part of the process. Having a process with no consideration for exceptions would be exceptional.

Scott, in an IETF technical context, I'd completely agree, although some words like "consideration for edge cases" would be much more precise if that is actually what you are alluding to. But part of the intent of this revision to 2026 is to make what we are doing more clear to outsiders who are making a good-faith effort to understand us and our standards. In that context, what you say above, when combined with Olaf's text, is likely to be read as: "We regularly, and as a matter of course, consider waiving our requirements for Proposed Standard entirely and adopt specifications using entirely different (and undocumented) criteria." That is misleading at best. In the interest of clarity, I don't think we should open the door to that sort of interpretation if we can avoid it.

I don't think it belongs in this document (it is adequately covered by Olaf's new text about other sections), but it is worth remembering that we do have a procedure for making precisely the type of exceptions my interpretation above implies: the Variance Procedure of Section 9.1 of 2026.
I cannot remember that provision being invoked since 2026 was published -- it really is exceptional in that sense. Its existence may be another reason for removing "exceptional" from the proposed new text, because it could be read as implying that we have to use the Section 9.1 procedure for precisely the cases of a well-documented, but slightly incomplete, specification that most of us consider normal. In particular, it would make the approval of the specs that Barry cited in his examples invalid without invoking the rather complex procedure of Section 9.1. I'd certainly not like to have text in this update that encourages that interpretation and the corresponding appeals -- it would create a different path to the restriction Barry is concerned about. john
Re: ORCID - unique identifiers for contributors
Hi. I agree completely with Joel, but let me add a bit more detail and a possible alternative...

--On Tuesday, September 17, 2013 08:56 -0400 Joel M. Halpern j...@joelhalpern.com wrote: If you are asking that she arrange for the tools to include provision for using ORCIDs, that is a reasonable request. Such a request would presumably be prioritized along with the other tooling improvements that are under consideration.

And either explicit provision for ORCID or more general provisions for other identifying characteristics might easily be added as part of the still-unspecified conversions to support non-ASCII characters. That said, you could get ORCID IDs into RFCs on your own initiative by defining and registering a URN type that embedded the ORCID and then, in xml2rfc terms, using the uri element of author/address to capture it. If you want to pursue that course, RFCs 3044 and 3187 (and others) provide examples of how it is done, although I would suggest that you also consult with the URNBIS WG before proceeding because some of the procedures are proposed to be changed. The RFC Editor (at least) would presumably need to decide that ORCID-based URNs were sufficiently stable, but no extra tooling would be required.

On the other hand, if you are asking that the IETF endorse or encourage such uses, there are two problems. First, the RFC Editor does not speak for the IETF. You need to actually get a determination of IETF rough consensus on the ietf email list. That consensus would need to be based on a more specific question than "do we want to allow ORCIDs", and then would be judged on that question by the IETF chair. And, if you asked that the ORCID be used _instead_ of other contact information, the issues that several people have raised would apply in that discussion and, at minimum, would make getting consensus harder. john
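As a sketch of what such a URN embedding might look like: the urn:orcid namespace below is hypothetical (not registered), but the checksum check follows ORCID's published ISO 7064 mod 11-2 scheme, and the sample identifier is the one ORCID uses in its own documentation.

```python
# Sketch of a hypothetical "urn:orcid:" embedding.  The namespace is
# an assumption for illustration; the check-character algorithm is
# ORCID's documented ISO 7064 mod 11-2 scheme.

def orcid_check_char(base_digits: str) -> str:
    """Compute the check character for the first 15 digits."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def orcid_to_urn(orcid: str) -> str:
    """Validate an ORCID (xxxx-xxxx-xxxx-xxxx) and wrap it in the
    hypothetical URN namespace."""
    chars = orcid.replace("-", "")
    if len(chars) != 16 or orcid_check_char(chars[:15]) != chars[15]:
        raise ValueError("not a well-formed ORCID: %s" % orcid)
    return "urn:orcid:" + orcid
```

A registration along these lines would give the RFC Editor a syntactic stability argument (the checksum catches most transcription errors) without any per-document tooling.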
RE: ORCID - unique identifiers for bibliographers
--On Monday, September 16, 2013 22:28 -0400 John R Levine jo...@taugh.com wrote: I do have an identical twin brother, and hashing the DNA sequence collides more regularly than either random or MAC-based interface-identifiers in IPv6. Also, he doesn't have the same opinions. Clearly, one of you needs to get to know some retroviruses. Or you aren't identical enough. Clearly the hash should be computed over both your DNA sequence and a canonical summary of your opinions. Are we far enough down this rathole? john
Re: ORCID - unique identifiers for contributors
--On Tuesday, September 17, 2013 11:20 -0400 Michael Richardson m...@sandelman.ca wrote: I did not know about ORCID before this thread. I think it is brilliant, and from what I've read about the mandate of orcid.org, and how it is managed, I am enthusiastic. I agree with what Joel wrote: Asking for ORCID support in the tool set and asking for IETF endorsement are two very different things. Having tool support for it is a necessary first step to permitting IETF contributors to gain experience with it. We need that experience before we can talk about consensus. So, permit ORCID, but not enforce. The more I think about it, the more I think that Andy or someone else who understands ORCIDs and the relevant organizations, etc., should be working on a URN embedding of the things. Since we already have provisions for URIs in contact information, an ORCID namespace would permit the above without additional tooling or special RFC Editor decision making. It would also avoid entanglement with and controversies about the rather long RFC Editor [re]tooling queue. Doing the write-up would require a bit of effort but, in principle, URN:ORCID: is pretty close to trivially obvious. Comments about dogfood-eating and not inventing new mechanisms when we have existing ones might be inserted by reference here. An interesting second (or third) conversation might be about how I could insert ORCIDs into the meta-data for already published documents. With a URN embedding, that question would turn into the much more general one about how URIs in contact metadata could be retroactively inserted and updated. In some ways, that is actually an easier question. best, john
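[Editor's sketch, in the spirit of the URN-embedding proposal above. ORCID identifiers carry an ISO 7064 MOD 11-2 check digit, so a registered URN namespace could validate an identifier before embedding it in contact metadata. The urn_for() helper and the "urn:orcid:" prefix are hypothetical -- exactly the sort of namespace the thread suggests someone would have to define and register; nothing of the kind existed at the time.]

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Validate the ISO 7064 MOD 11-2 check digit of an ORCID
    written in the usual 0000-0002-1825-0097 form."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:          # first 15 characters are plain digits
        total = (total + int(ch)) * 2
    remainder = total % 11
    expected = (12 - remainder) % 11
    check = "X" if expected == 10 else str(expected)
    return digits[-1] == check

def urn_for(orcid: str) -> str:
    """Hypothetical URN form, as proposed in the thread."""
    if not orcid_checksum_ok(orcid):
        raise ValueError("bad ORCID check digit: " + orcid)
    return "urn:orcid:" + orcid

print(urn_for("0000-0002-1825-0097"))   # prints urn:orcid:0000-0002-1825-0097
```

Such a URN could then be dropped into the existing uri element of an xml2rfc author address block, which is the "no extra tooling" point being made above.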
Re: PS Characterization Clarified
Pete, I generally agree with your changes and consider them important -- the IESG should be seen in our procedural documents as evaluating and reflecting the consensus of the IETF, not acting independently of it. Of the various places in the document in which IESG now appears, only one of them should, IMO, even be controversial. It is tied up with what I think is going on in your exchange with Scott: --On Tuesday, September 17, 2013 18:10 -0500 Pete Resnick presn...@qti.qualcomm.com wrote: Section 2: ... the IESG strengthened its review ... The IETF as a whole, through directorate reviews, area reviews, doctor reviews, *and* IESG reviews, has evolved, strengthened, ensured, etc., its reviews. I believe that change would be factually incorrect Which part of the above do you think is factually incorrect? The issue here --about which I mostly agree with Scott but still believe your fix is worth making-- is that the impetus for the increased and more intense review, including imposing a number of requirements that go well beyond those of 2026, did not originate in the community but entirely within the IESG. It didn't necessarily originate with explicit decisions. In many cases, it started with an AD taking the position that, unless certain changes were made or things explained to his (or occasionally her) satisfaction, the document would rot in the approval process. Later IESG moves to enable overrides and clarify conditions for discuss positions can be seen as attempts to remedy those abuses but, by then, it was too late for Proposed Standard. And, fwiw, those changes originated within the IESG and were not really subject to a community consensus process either. However, because the document will be read externally, I prefer that it be IETF in all of the places you identify. If we have to hold our noses and claim that the community authorized the IESG actions by failing to appeal or to recall the entire IESG, that would be true if unfortunate. 
I would not like to see anything in this document that appears to authorize IESG actions or process changes in the future that are not clearly authorized by community consensus regardless of how we interpret what happened in the past. john
Re: ORCID - unique identifiers for contributors
--On Monday, September 16, 2013 18:34 +0100 Andy Mabbett a...@pigsonthewing.org.uk wrote: If the goal is to include contact info for the authors in the document and in fact you can't be contacted using the info is it contact info? While I didn't say that the goal was to provide contact info[*], an individual can do so through their ORCID profile, which they manage and can update at any time. The goal of the author's address section of the RFCs is _precisely_ contact information. See, e.g., draft-flanagan-style-02 and its predecessors. I can see some advantages in including ORCID or some similar identifier along with the other contact information. I've been particularly concerned about a related issue in which we permit non-ASCII author names and then have even more trouble keeping track of equivalences than your J. Smith example implies and for which such an identifier would help. But, unless we were to figure out how to require, not only that people have ORCIDs but that they have and maintain contact information there (not just can do so), I'd consider it useful supplemental information, not a replacement for the contact information that is now supposed to be present. Treating an ORCID (or equivalent) as supplemental would also avoid requiring the RSE to inquire about guarantees about the permanence and availability of the relevant database. It may be fine; I'd just like to avoid having to go there. best, john
Re: IPR Disclosures for draft-ietf-xrblock-rtcp-xr-qoe
--On Monday, September 16, 2013 07:14 -1000 Randy Bush ra...@psg.com wrote: can we try to keep life simple? it is prudent to check what (new) ipr exists for a draft at the point where the iesg is gonna start the sausage machine to get it to rfc. if the iesg did not do this, we would rightly worry that we were open to a submarine job. this has happened, which is why this formality is in place. Agreed. I hope there are only two issues in this discussion: (1) Whether the IESG requires that the question be asked in some particular form, especially a form that would apply to other-than-new IPR. I think the answer to that question is clearly no. (2) Whether the submitted in full conformance... statement in I-Ds is sufficient to cover IPR up to the point of posting of the I-D. If the answer is no, then there is a question of why we are wasting the bits. If it is yes, as I assume it is, then any pre-sausage questions can and should be limited to IPR that might be new to one or more of the authors. if some subset of the authors prefer to play cute, my alarms go off. stuff the draft until they can give a simple direct answer. Agreed. While I wouldn't make as big an issue of it as he has (personal taste), I agree with him that asking an author to affirm that he or she really, really meant it and told the truth when posting a draft submitted in full conformance... is inappropriate and demeaning. While I think there might have been other, more desirable, ways to pursue it, I don't think that raising the issue falls entirely into the cute range. john
Re: PS Characterization Clarified
--On Monday, September 16, 2013 15:58 +0200 Olaf Kolkman o...@nlnetlabs.nl wrote: [Barry added explicitly to the CC as this speaks to 'his' issue] On 13 sep. 2013, at 20:57, John C Klensin klen...@jck.com wrote: [… skip …] * Added the Further Consideration section based on discussion on the mailing list. Unfortunately, IMO, it is misleading to the extent that you are trying to capture existing practice rather than taking us off in new directions. Yeah it is a thin line. But the language was introduced to keep a current practice possible (as argued by Barry I believe). Understood. Barry and I are on the same page wrt not wanting to accidentally restrict established existing practices. You wrote: While commonly less mature specifications will be published as Informational or Experimental RFCs, the IETF may, in ... I see where you are going. draft Proposed rewrite While commonly less mature specifications will be published as Informational or Experimental RFCs, the IETF may, in exceptional cases, publish a specification that still contains areas for improvement or certain uncertainties about whether the best engineering choices are made. In those cases that fact will be clearly communicated in the document prefereably on the front page of the RFC e.g. in the introduction or a separate statement. /draft I hope that removing the example of the IESG statement makes clear that this is normally part of the development process. Yes. Editorial nits: * While commonly less mature specifications will be published... has commonly qualifying less mature. It is amusing to think about what that might mean, but it isn't what you intended. Try While less mature specifications will usually be published Replace usually with commonly or normally if you like, but I think usually is closest to what you are getting at.
* prefereably - preferably Additional observations based on mostly-unrelated recent discussions: If you are really trying to clean 2026 up and turn the present document into something that can be circulated to other groups without 2026 itself, then the change control requirement/ ... Along the same lines but more broadly, both the sections of 2026 you are replacing and your new text, if read in isolation, strongly imply that these are several decisions, including those to approve standardization, that the IESG makes on its own judgment and discretion. I think it is ... More important --and related to some of my comments that you deferred to a different discussion-- the IESG as final _technical_ review and interpreter of consensus model is very different from that in some other SDOs in which the final approval step is strictly a procedural and/or legal review that is a consensus review only in the sense of verifying ... So noted. As actionable for this draft I take that I explicitly mention that Section 4.1 of 2026 is exclusively updated. While I understand your desire to keep this short, the pragmatic reality is that your non-IETF audience is likely to read this document (especially after you hand it to them) and conclude that it is the whole story. The natural question that immediately follows "why should we accept your standards at all?" is "why can't you hand them off to, e.g., ISO, the way that many national bodies and organizations like IEEE do with many of their documents?" Suggestion in the interest of brevity: in addition to mentioning the above, mention explicitly that there are requirements in other sections of 2026 that affect what is standardized and how. By the way, while I understand all of the reasons why we don't want to actually replace 2026 (and agree with most of them), things are getting to the point that it takes far too much energy to actually figure out what the rules are.
Perhaps it is time for someone to create an unofficial redlined version of 2026 that incorporates all of the changes and put it up on the web somewhere. I think we would want a clear introduction and disclaimer that it might not be exactly correct and that only the RFCs are normative, but the accumulation of changes may otherwise be taking us too far into the obscure. If we need a place to put it, it might be a good appendix to the Tao. And constructing it might be a good job for a relative newcomer who is trying to understand the ins and outs of our formal procedures. best, john
Re: PS Characterization Clarified
--On Monday, September 16, 2013 10:43 -0400 Barry Leiba barryle...@computer.org wrote: ... I agree that we're normally requiring much more of PS documents than we used to, and that it's good that we document that and let external organizations know. At the same time, we are sometimes proposing things that we know not to be fully baked (some of these came out of the sieve, imapext, and morg working groups, for example), but we *do* want to propose them as standards, not make them Experimental. I want to be sure we have a way to continue to do that. The text Olaf proposes is, I think, acceptable for that. In case it wasn't clear, I have no problems with that at all. I was objecting to three things that Olaf's newer text has fixed: (1) It is a very strong assertion to say that the above is exceptional. In particular, exceptional would normally imply a different or supplemental approval process to make the exception. If all that is intended is to say that we don't do it very often, then commonly (Olaf's term), usually, or perhaps even normally are better terms. (2) While it actually may be the common practice, I have difficulty with anything that reinforces the notion that the IESG makes standardization decisions separate from IETF consensus. While it isn't current practice either, I believe that, were the IESG to actually do that in an area of significance, it would call for appeals and/or recalls. Olaf's older text implied that the decision to publish a not-fully-mature or incomplete specification was entirely an IESG one. While the text in 2026, especially taken out of context, is no better (and Olaf just copied the relevant bits), I have a problem with any action that appears to reinforce that view or to grant the IESG authority to act independently of the community. 
(3) As a matter of policy and RFCs of editorially high quality, I think it is better to have explanations of loose ends and not-fully-baked characteristics of standards integrated into the document rather than using IESG Statements. I don't think Olaf's new front page requirement is correct (although I can live with it) -- I'd rather just say clearly and prominently communicated in the document and leave the is it clear and prominent enough question for Last Call -- but don't want to see it _forced_ into an IESG statement. I do note that front page and Introduction are typically inconsistent requirements (header + abstract + status and copyright boilerplate + TOC usually force the Introduction to the second or third page). More important, a real explanation of half-baked features (and why they aren't fully baked) may require a section, or more than one, on its own. One would normally like a cross reference to those sections in the Introduction and possibly even mention in the Abstract, but forcing the text into the Introduction (even with preferably given experience with how easily that turns into a nearly-firm requirement) is just a bad idea in a procedures document. We should say clearly, prominently, or both and then leave specifics about what that means to conversations between the authors, the IESG and community, and the RFC Editor. best, john
Re: IPR Disclosures for draft-ietf-xrblock-rtcp-xr-qoe
--On Monday, September 16, 2013 19:35 +0700 Glen Zorn g...@net-zen.net wrote: ... The wording of this question is not a choice. As WG chairs we are required to answer the following question which is part of the Shepherd write-up as per the instructions from the IESG http://www.ietf.org/iesg/template/doc-writeup.txt: (7) Has each author confirmed that any and all appropriate IPR disclosures required for full conformance with the provisions of BCP 78 and BCP 79 have already been filed. If not, explain why. We have no choice but to relay the question to the authors. I see, just following orders. For whatever it is worth, I think there is a rather different problem here. I also believe it is easily solved and that, if it is not, we have a far deeper problem. I believe the document writeup that the IESG posts at a given time is simply a way of identifying the information the IESG wants (or wants to be reassured about) and a template for a convenient way to supply that information. If that were not the case: (i) We would expect RFC 4858 to be a BCP, not an Informational document. (ii) The writeup template would need to represent community consensus after IETF LC, not be something the IESG put together and revises from time to time. (iii) The various experiments in alternative template formats and shepherding theories would be improper or invalid without community consensus, probably expressed through formal process experiment authorizations of the RFC 3933 species. The first sentence of the writeup template, As required by RFC 4858, this is the current template... is technically invalid because RFC 4858, as an Informational document, cannot _require_ anything of the standards process. Fortunately, it does not say you are required to supply this information in this form or you are required to ask precisely these questions, which would be far worse. 
From my point of view, an entirely reasonable response to the comments above that start As WG chairs we are required to answer the following question... and We have no choice but to relay... is that you are required to do no such thing. The writeup template is guidance to the shepherd about information and assurances the IESG wants to have readily available during the review process, nothing more. I also believe that any AD who has become sufficiently impressed by his [1] power and the authority of IETF-created procedures to insist on a WG chair's asking a question and getting an answer in some particular form has been on the IESG, or otherwise in the leadership, much too long [2]. In fairness to the IESG, Has each author confirmed... doesn't require that the document shepherd or WG Chair ask the question in any particular way. Especially if I knew that some authors might be uncomfortable being, in Glen's words, treated as 8-year-old children, I think I would ask the question in a form similar to since the I-Ds in which you were involved were posted, have you had any thoughts or encountered any information that would require filing of additional IPR disclosures?. That question is a reminder that might be (and occasionally has been) useful. A negative answer to it would be fully as much confirming that any and all appropriate IPR disclosures... have been filed as one whose implications are closer to were you telling the truth when you posted that I-D. I think Glen's objections to the latter are entirely reasonable, but there is no need to go there. Finally, I think a pre-LC reminder is entirely appropriate, especially for revised documents or older ones for which some of the listed authors may no longer be active. I assume, or at least hope, that that concern is where this item in the writeup template came from.
Especially for authors who fall into those categories, asking whether they have been paying attention and have kept IPR disclosures up to date with the evolving document is, IMO, both reasonable and appropriate. Personally, I'm inclined to ask for an affirmative commitment about willingness to participate actively in the AUTH48 signoff process at the same time -- non-response to that one, IMO, justifies trimming the author count and creating a Contributors section. It seems to me that, in this particular case, too many people are assuming a far more rigid process than actually exists or can be justified by any IETF consensus procedure. Let's just stop that. best, john [1] pronoun chosen to reflect current IESG composition and with the understanding that it might be part of the problem. [2] Any WG with strong consensus about these issues and at least 20 active, nomcom-eligible participants knows what to do about such a problem should it ever occur. Right?
Re: PS Characterization Clarified
--On Friday, September 13, 2013 16:56 +0200 Olaf Kolkman o...@nlnetlabs.nl wrote: ... Based on the discussion so far I've made a few modifications to the draft. I am trying to consciously keep this document to the minimum that is needed to achieve 'less is more' and my feeling is that where we are now is close to the sweetspot of consensus. Olaf, I'm afraid I need to keep playing loyal opposition here. * Added the Further Consideration section based on discussion on the mailing list. Unfortunately, IMO, it is misleading to the extent that you are trying to capture existing practice rather than taking us off in new directions. You wrote: While commonly less mature specifications will be published as Informational or Experimental RFCs, the IETF may, in exceptional cases, publish a specification that does not match the characterizations above as a Proposed Standard. In those cases that fact will be clearly communicated on the front page of the RFC e.g. by means of an IESG statement. On the one hand, I can't remember when the IESG has published something as a Proposed Standard with community consensus and with an attached IESG statement that says that they and the community had to hold our collective noses, but decided to approve as PS anyway. Because, at least in theory, a PS represents community consensus, not just IESG consensus (see below), I would expect (or at least hope for) an immediate appeal of an approval containing such a statement unless it (the statement itself, not just the opinion) matched community consensus developed during Last Call. Conversely, the existing rules clearly allow a document to be considered as a Proposed Standard that contains a paragraph describing loose ends and points of fragility, that expresses the hope that the cases won't arise very often and that a future version will clarify how the issues should be handled based on experience. That is "no known technical omissions" since the issues are identified, and therefore known and not omissions.
In the current climate, I'd expect such a document to have a very hard time on Last Call as people argued for Experimental or even keeping it as an I-D until all of the loose ends were tied up. But, if there were rough consensus for approving it, I'd expect it to be approved without any prefatory, in-document, IESG notes (snarky or otherwise). The above may or may not be tied up with the generally stable terminology. I could see a spec with explicit this is still uncertain and, if we are wrong, might change language in it on the same basis as the loose end description above. Such language would be consistent with generally stable but, since it suggests a known point of potential instability, it is not consistent with stable. Additional observations based on mostly-unrelated recent discussions: If you are really trying to clean 2026 up and turn the present document into something that can be circulated to other groups without 2026 itself, then the change control requirement/ assumption of RFC 2026 Section 7.1.3 needs to be incorporated into your new Section 3. It is not only about internal debates; it is also the rule that explains why we can't just endorse a standard developed elsewhere as an IETF standards track specification. Along the same lines but more broadly, both the sections of 2026 you are replacing and your new text, if read in isolation, strongly imply that these are several decisions, including those to approve standardization, that the IESG makes on its own judgment and discretion. I think it is fairly clear from the rest of 2026 (and 2028 and friends and IETF oral tradition) that the IESG is a collector and interpreter of community consensus, not a body that is somehow delegated to use its own judgment.
I believe that, if an IESG were ever to say something that amounted to the community consensus is X, but they are wrong, so we are selecting or approving not-X, we would either see a revolution of the same character that brought us to 2026 or the end of the IETF's effectiveness as a broadly-based standards body. More important --and related to some of my comments that you deferred to a different discussion-- the IESG as final _technical_ review and interpreter of consensus model is very different from that in some other SDOs in which the final approval step is strictly a procedural and/or legal review that is a consensus review only in the sense of verifying that the process in earlier stages followed the consensus rules and is not technical review at all. I don't think you need to spend time on that, but you need to avoid things that would make your document misleading to people who start with that model of how standards are made as an initial assumption. best, john
Re: What real users think [was: Re: pgp signing in van]
--On Tuesday, September 10, 2013 08:09 +1200 Brian E Carpenter brian.e.carpen...@gmail.com wrote: ... True story: Last Saturday evening I was sitting waiting for a piano recital to start, when I overheard the person sitting behind me (who I happen to know is a retired chemistry professor) say to his companion Email is funny, you know - I've just discovered that when you forward or reply to a message, you can just change the other person's text by typing over it! You'd have thought they would make that impossible. There is another interesting detail about this in addition to the part of it addressed by the brothers Crocker. When MIME was designed, there were a number of implicit assumptions to the effect that, if an original message was included in a reply or a message was forwarded, the original would be a separate body part from the reply or forwarding introduction. Structurally, that arrangement not only would have preserved per-body-part signatures but would have largely avoided a number of annoyances that have caught up with us such as an incoming message that uses different charset values than the replying or forwarding user is set up to support. Obviously, that would not help with replies interleaved with the original text, but that is a somewhat different problem (although it might take a bit of effort to explain the reasons to your chemistry professor). When things are interleaved, preventing charset conflicts, modification of quoted text, and other problems is pretty much impossible, at least, as Dave more or less points out, if the composing MUA is under the control of the user rather than being part of a centrally-controlled environment that can determine what gets typed where. It didn't work out that way. Indeed, more than 20 years later, forwarded messages and reply with original included ones are the primary vestiges of the popular pre-MIME techniques for marking out parts of a message. Perhaps we should have predicted that better, perhaps not. 
But the reasons why "make that impossible" is hard are not just security/signature or legacy/installed base issues. best, john
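[Editor's sketch of the structural arrangement the message above describes: the forwarding comments and the original message live in separate MIME body parts, so the original text (and any per-body-part signature over it) stays byte-for-byte intact rather than being quoted inline where it can be retyped. The addresses and subjects are invented for illustration; the mechanism shown is just Python's standard email library, not anything specified in the thread.]

```python
from email.message import EmailMessage

# The original message, kept as its own object.
original = EmailMessage()
original["From"] = "professor@example.edu"
original["Subject"] = "A question about email"
original.set_content("You'd have thought they would make that impossible.")

# The forward: comments in one part, the original wrapped whole in another.
forward = EmailMessage()
forward["From"] = "me@example.org"
forward["Subject"] = "Fwd: A question about email"
forward.set_content("My comments go in their own text/plain part.")
forward.add_attachment(original)   # becomes a message/rfc822 body part

print(forward.get_content_type())                   # multipart/mixed
print(forward.get_payload()[1].get_content_type())  # message/rfc822
```

Interleaved reply quoting, by contrast, copies the original text into the replying part, which is exactly where the typing-over (and the charset clashes mentioned above) becomes possible.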
Re: pgp signing in van
--On Friday, September 06, 2013 19:50 -0800 Melinda Shore melinda.sh...@gmail.com wrote: On 9/6/13 7:45 PM, Scott Kitterman wrote: They have different problems, but are inherently less reliable than web of trust GPG signing. It doesn't scale well, but when done in a defined context for defined purposes it works quite well. With external CAs you never know what you get. Vast numbers of bits can be and have been spent on the problems with PKI and on vulnerabilities around CAs (and the trust model). I am not arguing that PKI is awesome. What I *am* arguing is that the semantics of the trust assertions are pretty well-understood and agreed-upon, which is not the case with pgp. When someone signs someone else's pgp key you really don't know why, what the relationship is, what they thought they were attesting to, etc. I think you are both making more of a distinction than exists, modulo the scaling problem with web of trust and something the community has done to itself with CAs. The web of trust scaling issue is well-known and has been discussed repetitively. But the assumption about CAs has always been, more or less, that they can all be trusted equally and that one that couldn't be trusted would and could be held accountable. Things just haven't worked out that way with the net result that, as with PGP, it is hard to deduce why, what the relationship is, what they thought they were attesting to, and so on. While those statements are in the certs or pointed to from them in many cases, there is the immediate second-level problem of whether those assertions can be trusted and what they mean. For example, if what a cert means is passed some test for owning a domain name, it and DANE are, as far as I can tell, identical except for the details of the test ... and some are going to be a lot better for some domains and registrars than others. 
Assorted vendors have certainly made the situation worse by incorporating CA root certificates in systems based on business relationships (or worse) rather than on well-founded beliefs about trust. On the CA side, one of the things I think is needed is a rating system (or collection of them on a pick the rating service you trust basis) for CAs, with an obvious extension to PGP-ish key signers. In itself, that isn't a problem with which the IETF can help. Where I think the IETF and implementer communities have fallen down is in not providing a framework that would both encourage rating systems and tools and make them accessible to users. In our current environment, everything is binary in a world in which issues like trust in a certifier is scaled and multidimensional. As Joe pointed out, we don't use even what information is available in PGP levels of confidence and X.509 assertions about strength. In the real world, we trust people and institutions in different ways for different purposes -- I'll trust someone to work on my car, even the safety systems, whom I wouldn't trust to do my banking... and I wouldn't want my banker anywhere near my brakes. In both cases, I'm probably more interested in institutional roles and experience than I am in whether a key (or signature on paper) binds to a hard identity. In some cases, binding a key to persistence is more important than binding it to actual identity; in others, not. I trust my sister in most things, but wouldn't want her as a certifier because I know she doesn't have sufficient clues about managing keys. And the amount of authentication of identity I think I need differs with circumstances and uses too. We haven't designed the data structures and interfaces to make it feasible for a casual user to incorporate judgments --her own or those of someone she trusts-- to edit the CA lists that are handed to her, or a PGP keyring she has constructed, and assign conditions to them.
Nor have we specified the interface support that would make it easy for a user to set up and get, e.g., warnings about low-quality certification (or keys linked to domains or registrars that are known to be sloppy or worse) when one is about to use them for some high-value purpose. We have web of trust and rating models (including PICS, which illustrates some of the difficulties with these sorts of things) for web pages and the like, but can't manage them for the keys and certs that are arguably more important. So, anyone ready to step up rather than just lamenting the state of the world? best, john
Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA
--On Friday, September 06, 2013 17:11 +0100 Tony Finch d...@dotat.at wrote: John C Klensin j...@jck.com wrote: Please correct me if I'm wrong, but it seems to me that DANE-like approaches are significantly better than traditional PKI ones only to the extent to which: ... Yes, but there are some compensating pluses: Please note that I didn't say worse, only not significantly better. You can get a meaningful improvement to your security by good choice of registrar (and registry if you have flexibility in your choice of name). Other weak registries and registrars don't reduce your DNSSEC security, whereas PKIX is only as secure as the weakest CA. Yes and no. Certainly I can improve my security as you note. I can also improve the security of a traditional certificate by selecting from only those CAs who require a high degree of assurance that I am who I say I am. But, from the standpoint of a casual user using readily-available and understandable tools (see my recent note) and encountering a key or signature from someone she doesn't know already, there is little or no way to tell whether the owner of that key used a reliable registrar or a sleazy one or, for the PKI case, a high-assurance and reliable CA or one whose certification criterion is the applicant's ability to pay. There are still differences and I don't mean to dismiss them. I just don't think we should exaggerate their significance. And, yes, part of what I'm concerned about is the very ugly problem of whether, if I encounter an email address and key for tonyfi...@email-expert.pro or, (slightly) worse, in one of the thousand new TLDs that ICANN assures us will improve the quality of our lives, how I determine whether that is you, some other Tony Finch who claims expertise in email, or Betty Attacker Bloggs pretending to be one of you.
As Pete has suggested, one way to do that is to set up an encrypted connection without worrying much about authentication and then quiz each other about things that Tony(2), Betty, or John(2) are unlikely to know until we are confident enough for the purposes. But, otherwise... By contrast, if I know a priori that the Tony Finch I'm concerned about is the person who controls dotat.at and you know that the John Klensin you are concerned about is the person who controls jck.com, and both of us are using addresses in those domains with which we have been familiar for years, then the task is much easier with either a PKI or DANE -- and certainly more convenient and reliable with the latter because we know each other well enough, even if mostly virtually, to be confident that the other is unlikely to be dealing with registrars or registries who would deliberately enable domain or key impersonation. Nor would either of us be likely to be quiet about such practices if they were discovered. An attacker can use a compromise of your DNS infrastructure to get a certificate from a conventional CA, just as much as they could compromise DNSSEC-based service authentication. Exactly. Again, my point in this note and the one I sent to the list earlier today about the PGP-PKI relationship is that we should understand and take advantage of the differences among systems if and when we can, but that it is a bad idea to exaggerate those advantages or differences. john
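The DANE mechanism being compared above can be made concrete. A TLSA record (RFC 6698) binds a certificate or key to a domain name under DNSSEC, and verification reduces to a hash comparison. The sketch below assumes a "matching type 1" (SHA-256) association and uses placeholder bytes rather than a real SubjectPublicKeyInfo; it is an illustration of the comparison step, not a full DANE validator:

```python
import hashlib

def tlsa_matches(spki_der: bytes, tlsa_assoc_data: bytes,
                 matching_type: int = 1) -> bool:
    """Check a DANE association per RFC 6698: matching type 0 is an
    exact copy, type 1 is SHA-256, type 2 is SHA-512."""
    if matching_type == 0:
        return spki_der == tlsa_assoc_data
    digest = {1: hashlib.sha256, 2: hashlib.sha512}[matching_type]
    return digest(spki_der).digest() == tlsa_assoc_data

# Placeholder "public key" bytes stand in for real DER-encoded SPKI data.
spki = b"example-subject-public-key-info"
record = hashlib.sha256(spki).digest()   # what the TLSA RR would carry
assert tlsa_matches(spki, record)
assert not tlsa_matches(b"attacker-key", record)
```

Note that this comparison is only as trustworthy as the DNSSEC chain that delivered the TLSA record, which is exactly the registrar/registry dependence discussed above.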
Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA
--On Friday, September 06, 2013 06:20 -0700 Pete Resnick presn...@qti.qualcomm.com wrote: Actually, I disagree that this fallacy is at play here. I think we need to separate the concept of end-to-end encryption from authentication when it comes to UI transparency. We design UIs now where we get in the user's face about doing encryption if we cannot authenticate the other side and we need to get over that. In email, we insist that you authenticate the recipient's certificate before we allow you to install it and to start encrypting, and prefer to send things in the clear until that is done. That's silly and is based on the assumption that encryption isn't worth doing *until* we know it's going to be done completely safely. We need to separate the trust and guarantees of safeness (which require *later* out-of-band verification) from the whole endeavor of getting encryption used in the first place. Pete, At one level, I completely agree. At another, it depends on the threat model. If the presumed attacker is skilled and has access to packets in transit then it is necessary to assume that safeguards against MITM attacks are well within that attacker's resource set. If those conditions are met, then encrypting on the basis of a key or certificate that can't be authenticated is delusional protection against that threat. It may still be good protection against more casual attacks, but we do the users the same disservice by telling them that their transmissions are secure under those circumstances that we do by telling them that their data are secure when they see a little lock in their web browsers. Certainly encrypt first, authenticate later is reasonable if one doesn't send anything sensitive until authentication has been established, but it seems to me that would require a rather significant redesign of how people do things, not just how protocols work. best, john
Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA
--On Friday, September 06, 2013 08:41 -0700 Pete Resnick presn...@qti.qualcomm.com wrote: ... Absolutely. There is clearly a good motivation: A particular UI choice should not *constrain* a protocol, so it is essential that we make sure that the protocol is not *dependent* on the UI. But that doesn't mean that UI issues should not *inform* protocol design. If we design a protocol such that it makes assumptions about what the UI will be able to provide without verifying those assumptions are realistic, we're in serious trouble. I think we've done that quite a bit in the security/application protocol space. Yes. It also has another implication that goes to Dave's point about how the IETF should interact with UI designers. In my youth I worked with some very good early generation HCI/UI design folks. Their main and most consistent message was that, from a UI functionality standpoint, the single most important consideration for a protocol, API, or similar interface was to be sure that one had done a thorough analysis of the possible error and failure conditions and that sufficient information about those conditions could get to the outside to permit the UI to report things and take action in an appropriate way. From that point of view, any flavor of a you lose - ok message, including blue screens and I got irritated and disconnected you is a symptom of bad design and much more commonly bad design in the protocols and interfaces than in the UI. Leaving the UI designs to the UI designers is fine but, if we don't give them the tools and information they need, most of the inevitable problems are ours. OK, one last nostalgic anecdote about Eudora before I go back to finishing my spfbis Last Call writeup: ... Working for Steve was a hoot. I can only imagine, but the story is not a great surprise. john
Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA
--On Friday, September 06, 2013 07:38 -0700 Pete Resnick presn...@qti.qualcomm.com wrote: Actually, I think the latter is really what I'm suggesting. We've got to do the encryption (for both the minimal protection from passive attacks as well as setting things up for doing good security later), but we've also got to design UIs that not only make it easier for users to deal with encryption, but change the way people think about it. (Back when we were working on Eudora, we got user support complaints that people can read my email without typing my password. What they in fact meant was that if you started the application, it would normally ask for your POP password in ... Indeed. And I think that one of the more important things we can do is to rethink UIs to give casual users more information about what is going on and to enable them to take intelligent action on decisions that should be under their control. There are good reasons why the IETF has generally stayed out of the UI area but, for the security and privacy areas discussed in this thread, there may be no practical way to design protocols that solve real problems without starting from what information a UI needs to inform the user and what actions the user should be able to take and then working backwards. As I think you know, one of my personal peeves is the range of unsatisfactory conditions -- from an older version of certificate format or minor error to a verified revoked certificate -- that can produce a message that essentially says continuing may cause unspeakable evil to happen to you with an ok button (and only an ok button).
Similarly, even if users can figure out which CAs to trust and which ones not (another issue, and one where protocol work to standardize distribution of CA reputation information might be appropriate), editing CA lists whose main admission qualification today seems to be cosy relationships with vendors (and maybe the US Govt) to remove untrusted ones and add trusted ones requires rocket scientist-level skills. If we were serious, it wouldn't be that way. And the fact that those are 75% or more UI issues is probably no longer an excuse. john
Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA
--On Friday, September 06, 2013 10:43 -0400 Joe Abley jab...@hopcount.ca wrote: Can someone please tell me that BIND isn't being this stupid? This thread has mainly been about privacy and confidentiality. There is nothing in DNSSEC that offers either of those, directly (although it's an enabler through approaches like DANE to provide a framework for secure distribution of certificates). If every zone was signed and if every response was validated, it would still be possible to tap queries and tell who was asking for what name, and what response was returned. Please correct me if I'm wrong, but it seems to me that DANE-like approaches are significantly better than traditional PKI ones only to the extent to which: - The entities needing or generating the certificates are significantly more in control of the associated DNS infrastructure than entities using conventional CAs are in control of those CAs. - For domains that are managed by registrars or other third parties (I gather a very large fraction of them at the second level), whether one believes those registrars or other operators have significantly more integrity and are harder to compromise than traditional third party CA operators. best, john
Re: Last Call: draft-resnick-retire-std1-00.txt (Retirement of the Internet Official Protocol Standards Summary Document) to Best Current Practice
--On Thursday, September 05, 2013 15:20 -0700 Pete Resnick presn...@qti.qualcomm.com wrote: IESG minutes as the publication of record The only reason I went with the IESG minutes is because they do state the pending actions too, as well as the completed ones, which the IETF Announce list does not. For instance, the IESG minutes say things like: ... The minutes also of course reflect all of the approvals. So they do seem to more completely replace what that paragraph was talking about. And we have archives of IESG minutes back to 1991; we've only got IETF Announce back to 2004. I'm not personally committed to going one way or the other. The minutes just seemed to me the more complete record. Pete, Scott, The purpose of the Official Protocol Status list was, at least IMO, much more to provide a status snapshot and index than to announce what had been done. I think the key question today is not where is it announced? but how do I find it?. In that regard, the minutes are a little worse than the announcement list today, not because the announcement list contains as much information, but because the S/N ratio is worse. With the understanding that the Official Protocol Standards list has not been issued/updated in _many_ years, wouldn't it make sense to include a serious plan about information locations, navigation, and access in this? For example, if we are going to rely on IESG minutes, shouldn't the Datatracker be able to thread references to particular specifications through them? The tracker entries that it can access appear to be only a tiny fraction of the information to which Pete's note refers. john
Re: PS Characterization Clarified
--On Monday, 02 September, 2013 14:09 -0400 Scott O Bradner s...@sobco.com wrote: There is at least one ongoing effort right now that has the potential to reclassify a large set of Proposed Standard RFCs that form the basis of widely used technology. These types of efforts can have a relatively big effect on the standards status of the most commonly used RFCs. Do we want to do more? Can we do more? seems like a quite bad idea (as Randy points out) take extra effort and get some interoperability data More than that. Unless we want to deserve the credibility problems we sometimes accuse others of having, nothing should be a full standard, no matter how popular, unless it reflects good engineering practice. I think there is more flexibility for Proposed Standards, especially if they come with commentary or applicability statements, but I believe that, in general, the community should consider bad design or bad engineering practice to fall into the known defect category of RFC 2026. If RFC 6410 requires, or even allows, that we promote things merely because they are popular, then I suggest there is something seriously wrong with it. john
Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)
--On Saturday, August 31, 2013 23:50 +0900 Masataka Ohta mo...@necom830.hpcl.titech.ac.jp wrote: The draft does not assure that existing usages are compatible with each other. It absolutely does not. I actually expect it to help identify some usages that are at least confusing and possibly incompatible. Still, the draft may assure new usages compatible with each other. That is the hope. However, people who want to have new (sub)types for the new usages should better simply request new RRTYPEs. I agree completely. But that has nothing to do with this draft: the registry is simply addressed to uses that overload TXT, not to arguing why they shouldn't (or why the use of label prefixes or suffixes is sufficient to make protocol use of TXT reasonable). If we need subtypes because 16bit RRTYPE space is not enough (I don't think so), the issue should be addressed by itself by introducing a new RRTYPE (some considerations on subtype dependent caching may be helpful), not TXT, which can assure compatibilities between subtypes. Again, I completely agree. But it isn't an issue for this proposed registry. For the existing usages, some informational RFC, describing compatibilities (or lack of them) between the existing usages, might help. Yes, I think so. thanks, john
Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)
--On Saturday, August 31, 2013 02:52 -0700 manning bill bmann...@isi.edu wrote: given the nature of the TXT RR, in particular the RDATA field, I presume it is the path of prudence to set the barrier to registration in this new IANA registry to be -VERY- low. That is indeed the intent. If the document isn't clear enough about that, text would be welcome. I'm still searching for the right words (and hoping that the discussion will interact in both directions with the 5226bis effort, draft-leiba-cotton-iana-5226bis), but our thought is that the expert reviewer will provide advice and education about the desirability of good quality registrations backed up by good quality and stable documents. But, if the best we can get is registrant contact info, name of a protocol, and a clue about distinguishing information, then that is the best we can get. Or is the intent to create a two class system, registered and unregistered types? In one sense, that result is inevitable because some of the locally-developed and used stuff that lives in TXT records will probably not be registered no matter what we do. That is still better than the current situation of a one-class system in which nothing is registered. But the intent is to get as much registered as possible. Again, if the I-D isn't clear, text would be welcome. john
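The registration problem being discussed exists because many protocols overload TXT RDATA with structured data distinguished only by a leading tag. A hypothetical consumer of such a registry might classify TXT records the way the sketch below does; the tag table is illustrative (a few well-known conventions), not the contents of any actual registry:

```python
# Hypothetical classifier for protocol data overloaded onto TXT records.
# The prefixes below are illustrative examples of known conventions,
# not entries from the proposed IANA registry.
KNOWN_TAGS = {
    "v=spf1": "SPF",
    "v=DKIM1": "DKIM key record",
    "v=DMARC1": "DMARC policy",
}

def classify_txt(rdata: str) -> str:
    """Return the protocol name a TXT RDATA string appears to belong to,
    or flag it as unregistered free text."""
    for prefix, proto in KNOWN_TAGS.items():
        if rdata.startswith(prefix):
            return proto
    return "unregistered/free text"

assert classify_txt("v=spf1 mx -all") == "SPF"
assert classify_txt("hello world") == "unregistered/free text"
```

The point of such a registry is precisely that this table could be looked up rather than hard-coded from folklore, which also aids the debugging and analysis uses mentioned above.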
An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)
Hi. Inspired by part of the SPF discussion but separate from it, Patrik, Andrew, and I discovered a shortage of registries for assorted DNS RDATA elements. We have posted a draft to establish one for TXT RDATA. If this requires significant discussion, we seek guidance from relevant ADs as to where they would like that discussion to occur. Three notes: * As the draft indicates, while RFC 5507 and other documents explain why subtypes are usually a bad idea, the registry definition tries to be fairly neutral on the subject -- the idea is to identify and register what is being done, not to pass judgment. * While the use of special labels (in the language of 5507, prefixes and suffixes) mitigates many of the issues with specialized use of RDATA fields, they do not eliminate the desirability of a registry (especially for debugging and analysis purposes). * While examining the DNS-related registries that exist today, we discovered that some other registries seemed to be missing and that the organization of the registries seemed to be sub-optimal. We considered attempting a fix everything I-D, but concluded that the TXT RDATA registry was the most important need and that it would be unwise to get its establishment bogged down with other issues. The I-D now contains a temporary appendix that outlines the other issues we identified. IMO, thinking through the issues in that appendix, generating the relevant I-D(s), and moving them through the system would be a good exercise for someone who has little experience in the IETF and who is interested in IANA registries and/or DNS details. I am unlikely to find time to do the work myself but would be happy to work with a volunteer on pulling things together. best, john -- Forwarded Message -- Date: Friday, August 30, 2013 05:52 -0700 From: internet-dra...@ietf.org To: i-d-annou...@ietf.org Subject: I-D Action: draft-klensin-iana-txt-rr-registry-00.txt A New Internet-Draft is available from the on-line Internet-Drafts directories.
Title: An IANA Registry for Protocol Uses of Data with the DNS TXT RRTYPE Author(s): John C Klensin, Andrew Sullivan, Patrik Faltstrom Filename: draft-klensin-iana-txt-rr-registry-00.txt Pages: 8 Date: 2013-08-30 Abstract: Some protocols use the RDATA field of the DNS TXT RRTYPE for holding data to be parsed, rather than for unstructured free text. This document specifies the creation of an IANA registry for protocol-specific structured data to minimize the risk of conflicting or inconsistent uses of that RRTYPE and data field. The IETF datatracker status page for this draft is: https://datatracker.ietf.org/doc/draft-klensin-iana-txt-rr-registry [...]
Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)
--On Friday, August 30, 2013 11:48 -0400 Phillip Hallam-Baker hal...@gmail.com wrote: I believe that draft was superseded by RFC 6335 and all service names (SRV prefix labels) are now recorded at http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml - indeed several of those come from RFCs I have written that add new SRV names. Ah, it's there but not in the DNS area where I was looking. And that is exactly the reason why that temporary appendix calls for some rethinking and reorganization of how the registries are organized so as to make that, and similar registries, easier to find. While I continue to believe that doing the work would be a good exercise for a relative newcomer, if one of you wants to go at it, please do so with my blessings. john
Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)
Hi. I'm going to comment very sparsely on responses to this draft, especially those that slide off into issues that seem basically irrelevant to the registry and the motivation for its creation. My primary reason is that I don't want to burden the IETF list with a back-and-forth exchange, particularly about religious matters. The comments below are as much to clarify that plan as to respond to the particular comments in Phillip's note. --On Friday, August 30, 2013 10:16 -0400 Phillip Hallam-Baker hal...@gmail.com wrote: RFC 5507 does indeed say that but it is an IAB document, not an IETF consensus document and it is wrong. Yes, it is an IAB document. And, yes, one of the coauthors of this draft is a coauthor of 5507 and presumably doesn't share your opinion of its wrongness. But that is irrelevant to this particular document. 5507 is used to set context for discussing the need for the registry and to establish some terminology. If the Hallam-Baker Theory of DNS Extensions had been published in the RFC Series, we might have drawn on it for context and terminology instead. Or not, but we didn't have the choice. Your description of the motives of the authors of 5507 is an even better example. Perhaps you are correct. Perhaps you aren't. But whether you are correct or not makes absolutely no difference to the actionable content of this draft. The more prefixes versus more RRTYPES versus subtypes versus pushing some of these ideas into a different CLASS versus whatever else one can think of are also very interesting... and have nothing to do with whether this registry should be created or what belongs in it. Since you obviously don't like 5507, I would suggest that you either prepare a constructive critique and see if you can get it published or, even better, prepare an alternate description of how things should be handled and see if you can get consensus for it.
This document is not a useful place to attack it because its main conclusions just don't make any difference to it. If you have contextual or introductory text that you prefer wrt justifying or setting up the registry, by all means post it to the list. If people prefer it to the 5507 text, there is no reason why it shouldn't go into a future version of the draft. The consequence of this is that we still don't seem to have a registry for DNS prefixes, or at least not in the place I expect it which is Domain Name System (DNS) Parameters ... Yes. It is too hard to find. That also has nothing to do with this particular registry. It is connected to why the I-D contains a temporary and informative appendix about DNS-related registries and where one might expect to find them. Our hope is that the appendix will motivate others (or IANA) to do some work to organize things differently or otherwise make them easier to find. But whether action is taken on the appendix or not has nothing to do with whether this registry is created. The IANA should be tracking SRV prefix allocations and DNSEXT seems to have discussed numerous proposals. I have written some myself. But I can't find evidence of one and we certainly have not updated SRV etc. to state that the registry should be used. The IANA is extremely constrained about what it can do without some direction from the IETF. The appendix is there to provide a preliminary pointer to some areas that might need work (at least as much or more by the IETF as by IANA). If you have specific things that belong on the list (the working version of what will become -01 already is corrected to point to the SRV registry), I'd be happy to add them, with one condition: if we end up in a discussion of the details of the appendix rather than the particular proposed registry, the appendix will disappear. It is a forward pointer to work that probably should be done, not the work itself.
Fixing TXT is optional, fixing the use of prefixes and having a proper registry that is first come first served is essential. Right now we have thousands of undocumented ad-hoc definitions. Let me restate that. Some of us believe that it is time to fix TXT or, more specifically, to create a registry that can accommodate identification of what is being done, whether one approves of it or not. If other things need fixing too --and I agree that at least some of them do-- please go to it. If the appendix is useful in that regard, great. If not, I'm not particularly attached to it. Neither the structure and organization of IANA registries generally nor the future of service discovery have anything to do with this draft. If you want to discuss them, please start another thread. thanks, john
Re: Last Call: draft-ietf-repute-query-http-09.txt (A Reputation Query Protocol) to Proposed Standard
--On Friday, August 30, 2013 09:56 -0700 Bob Braden bra...@isi.edu wrote: CR LF was first adopted for the Telnet NVT (Network Virtual Terminal). I think it was Jon Postel's choice, and no one disagreed. A tad more complicated, IIR. It turns out that, with some systems interpreting LF as same position, next line and some as first position, next line, and some interpreting CR as same position, this line and some as first position, next line, CR LF was the only safe universal choice. At least one of those four interpretations was a clear violation of the early versions of the ASCII standard but the relevant vendor didn't care or was too ignorant to notice. That particular bit of analysis was known pre-ARPANET; I wouldn't be surprised to find it in some earlier Teletype documentation. I have no idea who made the decision for Telnet and friends, but I wouldn't be at all surprised if it were Jon. The decision was, however, pretty constrained. Similarly, it was an important design constraint for FTP and later for SMTP, WHOIS, Finger, and a bunch of other things that they (the control connection for the former) be able to run over Telnet connections (on a different port). I don't know whether that was cause or effect wrt the CRLF choices for those protocols, but it probably figured in. I've wondered whether that was part of what drove the port mode in preference to some fancy service-selection handshaking at the beginning of the connection but I have no idea how that set of decisions was made. Then when FTP was defined, it seemed most economical to use the same. In fact, doesn't the FTP spec explicitly say that the conventions on the control connection should be those of Telnet? Yep. RFC 959, Page 34 (snicker) and RFC 1123 Section 4.1.2.10. There were even some discussions about the interactions between Telnet option negotiation and FTP (Section 4.1.2.12 of RFC 1123 was, I think, intended to definitively settle those).
Later, when Jon defined SMTP, I am sure that Jon would not have dreamed of using different end-of-line conventions in different protocols. I would hope that you would not dream of it, either. Indeed. john
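The compatibility problem described above is why Telnet-derived protocols (the FTP control connection, SMTP, and friends) all require CRLF on the wire regardless of the local convention. A minimal sketch of that normalization, simplified and not a full NVT implementation:

```python
def to_wire(text: str) -> bytes:
    """Normalize LF, CR, or CRLF local line endings to the CRLF that
    Telnet-derived protocols require on the wire."""
    # Collapse CRLF to LF first so lone-CR handling can't double-convert.
    normalized = text.replace("\r\n", "\n").replace("\r", "\n")
    return normalized.replace("\n", "\r\n").encode("ascii")

assert to_wire("a\nb") == b"a\r\nb"    # Unix-style input
assert to_wire("a\r\nb") == b"a\r\nb"  # already canonical
assert to_wire("a\rb") == b"a\r\nb"    # old-Mac-style input
```

The order of the replacements is the whole trick: converting lone LF to CRLF before collapsing would turn an existing CRLF into CR CRLF.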
Re: Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard
--On Wednesday, August 28, 2013 07:21 -0700 Dave Crocker d...@dcrocker.net wrote: RFC 5507 primarily raises three concerns about TXT records: RFC 5507 is irrelevant to consideration of the SPFbis draft. Really. RFC 5507 concerns approaches to design. However the SPFbis draft is not designing a new capability. It is documenting a mechanism that has existed for quite a long time, is very widely deployed, and has become an essential part of Internet Mail's operational infrastructure that works to counter abuse. ... Dave. I may be violating my promise to myself to stay out of SPF-specific issues, but this does not seem to me to be an SPF-specific issue. I suggest to you that the notions of IETF change control and consensus of the IETF community are very important to the integrity of how the IETF does things. The question of where and how the IETF adds value comes in there too. If some group --whether an IETF WG or some external committee or body-- comes to the IETF and says we have this protocol, it is well-tested and well-deployed, and we think the community would benefit from the IETF publishing its description, that is great: we publish it as an Informational RFC (or the ISE does), and everyone is happy. If that group can get IETF community consensus for the idea that the spec should get a gold star, someone writes an Applicability Statement that points to the other document and says Recommended, we push any quibbles about downrefs out of the way, and we move on. However, it seems to me that, for anything that is proposed to be a normal standards track document, the community necessarily ought to be able to take at least one whack at it on IETF LC.
That one whack principle suggests that one cannot say this was developed and deployed elsewhere and is being published as Experimental (which is, IIR, one thing that happened in the discussion of 4408) and then say now the design quality of SPF is not a relevant consideration because it has been considered elsewhere and widely deployed. If the IETF doesn't get a chance to evaluate design quality and even, if appropriate, to consider the tradeoffs between letting a possibly-odious specification be standardized and causing a fork between deployed-SPF and IETF-SPF, then the IETF's statements about what its standards mean become meaningless (at least in this particular type of case). Now I think it is perfectly reasonable to say, as you nearly did later in your note, that SPF-as-documented-in-4408bis is sufficiently deployed and would be sufficiently hard to change that the community should swallow its design preferences and standardize the thing. One can debate that position, but it is at least a reasonable position to take. Modulo some quibbles, it is probably the position I'd take at this point if I were willing to take a position, but it is different from saying we can't discuss the design choices. Things would also be very different if the present question involved updating or replacing an existing Proposed Standard. If design decisions were made in that earlier version (and that went through IETF LC and got consensus), I think it would be perfectly reasonable to say the IETF community looked at that before and it is now too late. You've done that before, I've done it before, and I don't think anyone who isn't prepared to explain why, substantively and in terms of deployment, it isn't too late should be able to object.
But, in the absence of demonstrated and documented IETF consensus --independent of WG consensus, implementation consensus, deployment consensus, silent majority consensus, or any other type of claim about broader community consensus-- I don't think one can exclude a discussion of a specification's relationship to various design considerations, if only because that may be deployed but the IETF should not endorse it in that form by standardizing it or even if the community that is advocating this won't allow design issues to be discussed, then there is no IETF value-added and the IETF should decline to standardize on that basis have got to be possible IETF community responses. To consider RFC 5507 with respect to SPFbis is to treat the current draft as a matter of new work, which it isn't. No, it is to treat the current draft as a matter of work that the IETF is being asked to standardize for the first time... which, as far as I can tell, it is. I think those distinctions about standardization (including the value-added and change control ones) and what can reasonably be raised on IETF LC are important to the IETF, even for those who agree with you (entirely or in part) about what should happen with this particular specification at this particular point. YMMD. best, john
Re: Last Call: draft-cotton-rfc4020bis-01.txt (Early IANA Allocation of Standards Track Code Points) to Best Current Practice
--On Thursday, August 29, 2013 12:43 -0400 Barry Leiba barryle...@computer.org wrote: In Section 2: 'a. The code points must be from a space designated as Specification Required (where an RFC will be used as the stable reference), RFC Required, IETF Review, or Standards Action.' I suggest not having the comment (where) and leaving it to RFC 5226 to define Specification Required. Yes, except that's not what this means. I tripped over the same text, and I suggest rephrasing it this way: NEW: The code points must be from a space designated as Specification Required (in cases where an RFC will be used as the stable reference), RFC Required, IETF Review, or Standards Action. Barry, that leaves me even more confused because it seems to essentially promote Specification Required into RFC Required by allowing only those specifications published as RFCs. Perhaps, given that this is about Standards Track code points, that is just what is wanted. If so, the intent would be a lot more clear if the text went a step further and said: NEWER: The code points must normally be from a space designated as RFC Required, IETF Review, or Standards Action. In addition, code points from Specification Required spaces are allowed if the specification will be published as an RFC. There is still a small procedural problem here, which is that IANA is asking that someone guarantee RFC publication of a document (or its successor) that may not be complete. There is no way to make that guarantee. In particular, the guarantee of Section 2 (c) cannot be given without constraining the actions that IETF LC can reasonably consider. As I have argued earlier today in another context, language that suggests really strong justification for the tradeoff may be acceptable, but a guarantee to IANA by the WG Chairs and relevant ADs, or even the full IESG, that constrains a Last Call is not. Section 3.2 begins to examine that issue, but probably doesn't go quite far enough, especially in the light of the four conditions of Section 2.
It would probably be appropriate to identify those conditions as part of good-faith beliefs. It might even be reasonable to require at least part of the what if things change analysis that Section 3.2 calls for after the decision is made to be included in the request for early allocation. Requiring that analysis would also provide a small additional safeguard against the scenarios discussed in the Security Considerations section. Incidentally, while I'm nit-picking about wording, the last sentence of Section 3.3 has an unfortunate dependency on a form of the verb to expire. Language more similar to that used for RFCs might be more appropriate, e.g., that the beginning of IESG Review (or issuance of an IETF Last Call) suspends the expiration date until either an RFC is published or the IESG or authors withdraw the document. best, john
Re: Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard
--On Thursday, August 29, 2013 12:28 -0700 Dave Crocker d...@dcrocker.net wrote: On 8/29/2013 9:31 AM, John C Klensin wrote: I may be violating my promise to myself to stay out of SPF-specific issues, Probably not, since your note has little to do with the realities of the SPFbis draft, which is a chartered working group product. You might want to review its charter: http://datatracker.ietf.org/wg/spfbis/charter/ Note the specified goal of standards track and the /very/ severe constraints on work to be done. Please remember that this is a charter that was approved by the IESG. The working group produced what it was chartered to produce, for the purpose that was chartered. I have reviewed the charter, Dave. The reasons I've wanted to stay out of this discussion made me afraid to make a posting without doing so. But the last I checked, WG charters are approved by the IESG after reviewing whatever comments they decide to solicit. They are not IETF Consensus documents. Even if this one was, the WG co-chair and document shepherd have made it quite clear that the WG carefully considered the design issue and alternatives at hand. I applaud that but, unless you are going to argue that the charter somehow allows the WG to consider some issues that cannot be reviewed on IETF Last Call, either the design issue is legitimate or the WG violated its charter. I, at least, can't read the charter that way. More broadly, you (and others) might want to review the actual criteria the IETF has specified for Proposed in RFC 2026. Most of us like to cite all manner of personal criteria we consider important. Though appealing, none of them is assigned formal status by the IETF with respect to the Proposed Standards label; I believe, in fact, that there is nothing we can point to that, for such other criteria, represents IETF consensus. The claim that we can't really document our criteria mostly means that we think it's ok to be subjective and whimsical.
The statement to which I objected was one in which you claimed (at least as I understood it) that it was inappropriate to raise a design consideration because the protocol was already widely deployed. Your paragraph above makes an entirely different argument. As I understand it, your argument above is that it is _never_ appropriate to object during IETF Last Call on the basis of design considerations (whether it is desirable to evaluate design considerations in a WG or not). I believe that design issues and architectural considerations can sometimes be legitimate examples of known technical defects. If they were not, then I don't know why the community is willing to spend time on such documents (or even on having an IAB). Again, I think it is perfectly reasonable to argue that a particular design or architectural consideration should not be applied to a particular specification. My problem arises only when it is claimed that such considerations or discussions are a priori inappropriate. Also for the broader topic, you might want to reevaluate much of what your note does say, in light of the realities of Individual Submission (on the IETF track), which essentially never conforms to the criteria and concerns you seem to be asserting. If that were the case, either you are massively misunderstanding what I am asserting or I don't see your point. I believe that my prior note, and this one, assert only one thing, which is that it is inappropriate to bar any discussion --especially architectural or design considerations-- from IETF Last Call unless it addresses a principle that has already been established for the particular protocol by IETF Consensus. I remain completely comfortable, modulo the various rude language topics, with a discussion of why some architectural principle is irrelevant to a particular specification or even that trying to apply that principle would be stupid.
But a discussion along those lines is still a discussion, not an attempt to prevent a discussion. And, yes, I believe that Individual Submissions should generally be subject to a much higher degree of scrutiny on IETF Last Call than WG documents. I also believe that, if there appears to be no community consensus one way or the other, the IESG should generally defer to the WG on WG documents but default to non-approval of Individual Submissions. But, unless I'm completely misunderstanding the point you are trying to make, I don't see what that has to do with this topic. Dave, we have had these sorts of discussions before. If there are common patterns to them, they are that neither of us is likely to convince the other and that both of us soon get to the point of either muttering he just doesn't get it (or worse) into our beards or getting really short-tempered. I suggest that we not subject the community to that. By all means respond to this note if you feel a need to do so (I'm not trying to get the last word and, if I
Overloaded TXT harmful (was Re: [spfbis] Last Call: draft-ietf-spfbis-4408bis-19.txt (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard)
--On Monday, August 26, 2013 10:49 -0400 John R Levine jo...@taugh.com wrote: Sorry if that last one came across as dismissive. Until such time, I'd personally prefer to see some explicit notion that the odd history of the SPF TXT record should not be seen as a precedent and best practice, rather than hope that this is implicit. I'd have thought that the debate here and elsewhere already documented that. Since it's not specific to SPF, perhaps we could do a draft on overloaded TXT considered harmful to get it into the RFC record. With the help of a few others, I've got an I-D in the pipe whose function is to create an IANA registry of structured protocol uses for TXT RR data and how to recognize them. I hope it will be posted later this week. Its purpose is to lower the odds of overloading sliding into different uses whose forms are not easily distinguished. Other than inspiration, its only relationship to the current SPF discussion is that some SPF-related information is a candidate for registration (whether as an active use or as a deprecated one). It already contains some text that warns that overloading TXT is a bad idea but that, because it happens and has happened, identifying those uses is appropriate. Once it is posted, I/we would appreciate any discussion that would lead to consensus about just how strong that warning should be and how it should be stated. best, john
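[Editor's illustration, not part of the original message or the I-D it mentions: the "how to recognize them" problem above comes down to whether a TXT string carries a distinguishing marker. SPF, DKIM, and DMARC each lead with a version tag, so they can be told apart; free-form TXT data cannot. A minimal sketch, with a hypothetical classifier function:]

```python
# Sketch of why overloaded TXT data is hard to distinguish: structured uses
# are recognizable only when they begin with a well-known marker (the version
# tags below are the ones SPF, DKIM, and DMARC actually use); anything else
# is indistinguishable free-form text.

KNOWN_PREFIXES = {
    "v=spf1": "SPF policy",
    "v=DKIM1": "DKIM key",
    "v=DMARC1": "DMARC policy",
}

def classify_txt(rdata: str) -> str:
    """Guess the structured use of one TXT string, if any."""
    for prefix, use in KNOWN_PREFIXES.items():
        if rdata.startswith(prefix):
            return use
    return "unrecognized free-form TXT"

for record in [
    "v=spf1 mx -all",
    "google-site-verification=abc123",
    "just a human-readable note",
]:
    print(classify_txt(record))
```

The last two records illustrate the registry's motivation: without a registered, recognizable form, a verifier cannot tell a structured protocol use from an ordinary comment.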
Re: Call for Review of draft-rfced-rfcxx00-retired, List of Internet Official Protocol Standards: Replaced by an Online Database
--On Tuesday, August 20, 2013 14:01 -0500 Pete Resnick presn...@qti.qualcomm.com wrote: On 8/15/13 2:06 PM, SM wrote: At 11:48 14-08-2013, IAB Chair wrote: This is a call for review of List of Internet Official Protocol Standards: Replaced by an Online Database prior to potential approval as an IAB stream RFC. My guess is that draft-rfced-rfcxx00-retired cannot update RFC 2026. Does the IAB have any objection if I do something about that? [...] The document argues that STD 1 is historic as there is an online list now. The IESG and the IAB had an email exchange about these two points. Moving a document from Standard to Historic is really an IETF thing to do. And it would be quite simple for the IETF to say, We are no longer asking for the 'Official Protocol Standards' RFC to be maintained by updating (well, effectively removing) the one paragraph in 2026 that asks for it, and requesting the move from Standard to Historic. So I prepared a *very* short document to do that: http://datatracker.ietf.org/doc/draft-resnick-retire-std1/ FWIW, I've reviewed your draft and have three comments: (1) You are to be complimented on its length and complexity. (2) I agree that the core issue is an IETF, and IETF Stream, issue, not one for the RFC Editor and/or IAB. (3) I far prefer this approach to the more complex and convoluted RFC Editor draft. If we really need to do something formally here (about which I still have some small doubts), then let's make it short, focused, and to the point. Your draft appears to accomplish those goals admirably. john
Re: Academic and open source rate (was: Charging remote participants)
--On Sunday, August 18, 2013 17:04 -0700 SM s...@resistor.net wrote: I'd love to get more developers in general to participate - whether they're open or closed source doesn't matter. But I don't know how to do that, beyond what we do now. The email lists are free and open. The physical meetings are remotely accessible for free and open. On reading the second paragraph of the above message I see that you and I might have a common objective. You mentioned that you don't know how to do that beyond what is done now. I suggested a rate for people with an open source affiliation. I did not define what open source means. I think that you will be acting in good faith and that you will be able to convince your employer that it will not make you look good if you are listed in a category which is intended to lessen the burden for open source developers who currently cannot attend meetings or who attend meetings on a very limited budget. I think this is bogus and takes us down an undesirable path. First, I note that, in some organizations (including some large ones), someone might be working on an open source project one month and a proprietary one the next, or maybe both concurrently. Would it be appropriate for such a person (or the company's CFO) to claim the lower rate, thereby expecting those who pay full rate to subsidize them? Or would their involvement in any proprietary-source activity contaminate them morally and require them to pay the full rate? Second, remember that open source is actually a controversial term with some history of source being made open and available, presumably for study, but with very restrictive licensing rules associated with its adaptation or use. Does it count if the open source software is basically irrelevant to the work of the IETF? Written in, e.g., HTML5? Do reference implementations of IETF protocols count more (if I'm going to be expected to subsidize someone else's attendance at the IETF, I think they should). 
Shouldn't we be tying this to the discussion about IPR preference hierarchies s.t. FOSS software with no license requirements gets more points (and bigger discounts) than BSD or GPL software, which gets more points than FRAND, and so on? Finally, there seems to be an assumption underlying all of this that people associated with open source projects intrinsically have more restrictive meeting or travel budgets and policies than those working on proprietary efforts in clearly-for-profit organizations (especially large ones). As anyone who has lived through a serious travel freeze or authorization escalation in a large company knows too well, that doesn't reflect reality. best, john
Re: Academic and open source rate (was: Charging remote participants)
--On Monday, August 19, 2013 12:49 -0700 SM s...@resistor.net wrote: ... First, I note that, in some organizations (including some large ones), someone might be working on an open source project one month and a proprietary one the next, or maybe both concurrently. Would it be appropriate for such a person (or the company's CFO) to claim the lower rate, thereby expecting those who pay full rate to subsidize them? Or would their ... The above reminds me of the Double Irish with a Dutch sandwich. If I was an employee of a company I would pay the regular fee. If I am sponsored by an open source project and my Internet-Draft will have that as my affiliation I would claim the lower rate. Without understanding your analogy (perhaps a diversity problem?), if you are trying to make a distinction between employee of a company and sponsored by an open source project, that distinction just does not hold up. In particular, some of the most important reference implementations of Internet protocols -- open source, freely available and usable, well-documented, openly tested, etc.-- have come out of companies, even for-profit companies. If the distinction you are really trying to draw has to do with poverty or the lack thereof, assuming that, if a large company imposes severe travel restrictions, its employees should pay full fare if they manage to get approval, then you are back to Hadriel's suggestion (which more or less requires that someone self-identify as poor) or mine (which involves individual self-assessment of ability to pay without having to identify the reasons or circumstances). ... Does it count if the open source software is basically irrelevant to the work of the IETF? Written in, e.g., HTML5? Do reference implementations of IETF protocols count more (if I'm going to be expected to subsidize someone else's attendance at the IETF, I think they should). This would require setting a demarcation line. That isn't always a clear line.
What I'm trying to suggest is that the line will almost always be unclear and will require case by case interpretation by someone other than the would-be participant. I continue to find any peer evaluation model troubling, especially as long as the people and bodies who are likely to make the evaluations are heavily slanted toward a narrow range of participants (and that will be the case as long as those leadership or evaluation roles require significant time over long periods). A subsidy is a grant or other financial assistance given by one party for the support or development of another. If the lower rate is above meeting costs it is not a subsidy. I note that you used that term in a later message. More important, I believe the IAOC has repeatedly assured us that, at least over a reasonable span of meetings, they never seek to make a profit on registration fees. Indeed, I suspect that, with reasonable accounting assumptions, meetings are always a net money-loser, although not by much and at some meetings more than others. Any decision that some people are going to pay less than others (including the reduced fee arrangements we already have) is a decision that some people and groups are going to bear a higher share of the costs than others. And that is a subsidy, even by your definition above. best, john
Re: Academic and open source rate (was: Charging remote participants)
--On Sunday, 18 August, 2013 08:33 -0400 Hadriel Kaplan hadriel.kap...@oracle.com wrote: ... And it does cost the IETF lots of money to host the physical meetings, and that cost is directly proportional to the number of physical attendees. More attendees = more cost. I had promised myself I was finished with this thread, but I can't let this one pass. (1) If the IETF pays separately for the number of meeting rooms, the cost is proportionate to the number of parallel sessions, not the number of attendees. (2) If the IETF gets the meeting rooms (small and/or large) for free, the costs are borne by the room rates of those who stay in the hotel and are not proportionate to much of anything (other than favoring meetings that will draw the negotiated minimum number of attendees who stay in that hotel). (3) Equipment costs are also proportional to the number of meetings we run in parallel. Since IASA owns some of the relevant equipment and has to ship it to meetings, there are some amortization issues with those costs, and shipping costs are dependent on distance and handling charges from wherever things are stored between meetings (I assume somewhere around Fremont, California, USA). If that location is correct and we wanted to minimize those charges, we would hold all meetings in the San Francisco area or at least in the western part of the USA. In any event, the costs are in no way proportionate to the number of attendees. (4) The costs of the Secretariat and RFC Editor contracts and other associated contracts and staff are relatively fixed. A smaller organization, with fewer working groups and less output, might permit reducing the size of those contracts somewhat, but that has only the most indirect and low-sensitivity relationship to the number of attendees, nothing near proportional. (5) If we have to pay people in addition to Secretariat staff to, e.g., sit at registration desks, that bears some monotonic relationship to the number of attendees.
But the step increments in that particular function are quite large, nothing like directly proportional. (6) The cost of cookies and other refreshments may indeed be proportional to the number of attendees but, in most facilities, that proportionality will come in large step functions. In addition, in some places, costs will rise with the number of unusual dietary requirements. The number of those requirements might increase with the number of attendees, but nowhere near proportionately. Unusual is entirely in the perception of the supplier/facility but, from a purely economic and cost of meetings standpoint, the IETF might be better off if people with those needs stayed home or kept their requirements to themselves. So, meeting cost directly proportional to the number of physical attendees? Nope. best, john p.s. You should be a little cautious about a charge the big companies more policy. I've seen people who make the financial decisions as to who comes say things like we pay more by virtue of sending more people; if they expect us to spend more per person, we will make a point by cutting back on those we send (or requiring much stronger justifications for each one who wants to go). I've also seen reactions that amount to We are already making a big voluntary donation that is much higher than the aggregate of the registration fees we are paying, one that small organizations don't make. If they want to charge us more because we are big, we will reduce or eliminate the size of that donation. Specific company examples on request (but not on-list), but be careful what you wish for.
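[Editor's illustration, not part of the original message: the step-function point in items (5) and (6) above can be made concrete with a toy cost model. All numbers and both function names are hypothetical, chosen only to show the shape of the argument.]

```python
# Toy model (hypothetical numbers): staffing and catering are bought in
# blocks, so per-meeting cost is a step function of attendance, not a line.
import math

def step_cost(attendees: int, batch: int = 250, cost_per_batch: int = 5000) -> int:
    """Cost when resources come in blocks of `batch` attendees."""
    return math.ceil(attendees / batch) * cost_per_batch

def proportional_cost(attendees: int, per_head: int = 20) -> int:
    """Cost under the 'directly proportional' assumption being rebutted."""
    return attendees * per_head

# Going from 1001 to 1249 attendees changes the step cost not at all,
# while a truly proportional cost keeps climbing with every attendee.
print(step_cost(1001), step_cost(1249))
print(proportional_cost(1001), proportional_cost(1249))
```

The point of the sketch: under block purchasing, attendee 1001 triggers a whole new increment and attendees 1002 through 1250 are effectively free, which is exactly the "large step functions" behavior described above.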
Re: Radical Solution for remote participants
--On Friday, August 16, 2013 04:59 -0400 Joel M. Halpern j...@joelhalpern.com wrote: Maybe I am missing something. The reason we have face-to-face meetings is because there is value in such meetings that can not reasonably be achieved in other ways. I would like remote participation to be as good as possible. But if we could achieve the same as being there then we should seriously consider not meeting face-to-face. Conversely, until the technology gets that good, we must not penalize the face-to-face meeting for failures of the technology. Joel, I certainly agree with your conclusion. While I hope the intent wasn't to penalize the face-to-face meeting, there have been several suggestions in this thread that I believe are impractical and a few that are probably undesirable even if they were practical. Others, such as improved automation, are practical if we want to make the effort, would probably help, and, fwiw, have been suggested by multiple people in multiple threads. I do believe it would be helpful for everyone involved in the discussion to be careful about their reactions and rhetoric. While it is certainly possible to go too far in any given direction, significant and effective remote participation will almost certainly require some adjustments by the people in the room. We've already made some of those adjustments: for example, while it is inefficient and sometimes doesn't work well, using Jabber as an inbound channel, with someone in the room reading Jabber input at the mic, does help remote participants at some cost to the efficient flow of the f2f discussions. Perhaps that penalizes the face to face participants. I believe it is worth it and that it would be worthwhile going somewhat further in that direction, e.g., by treating remote participants as a separate mic queue.
I also see it as very closely related to some other tradeoffs: for example, going to extra effort to be inclusive and diverse requires extra effort by existing f2f participants and very carefully balancing costs -- higher costs, and even costs at current levels, discourage broader participation, but many ways of increasing diversity also increase costs. Wrt not meeting face-to-face, I don't see it happening, even with technology improvements. On the other hand, the absolutely most effective thing we could do to significantly decrease costs for those who need the f2f meetings but are cost-sensitive would be to reverse the trends toward WGs substituting interim meetings for work on mailing lists, toward extending the IETF meeting week to include supplemental meetings, and even to move toward two, rather than three, meetings a year. Those changes, especially the latter two, would probably require that remote participation be much more efficient and effective than it is today, but would not require nearly the level of perfection required to eliminate f2f meetings entirely. And any of the three would penalize those who like going to extended f2f meetings and/or prefer working that way and who have effectively unlimited travel support and related resources. best, john
Re: Charging remote participants
--On Friday, August 16, 2013 13:07 -0300 Carlos M. Martinez carlosm3...@gmail.com wrote: ... And, before the IETF would commit to take steps in that direction, it would be interesting to see some numbers about how much money needs to be invested in deploying and operating remote participation tools that would actually make people feel they are getting value back for a $100 remote attendance fee. Please see Dave Crocker's note before my comment below -- I agree with most of it and don't want to repeat what he has already said well. As someone who favors charging remote participants, who has paid most or all of the travel and associated costs for every meeting I've attended in the last ten-plus years, and who doesn't share the view of if I can, everyone can, let me make a few observations. (1) As Dave points out, this activity has never been free. The question is only about who pays. If any participants have to pay (or convince their companies to pay) and others, as a matter of categories, do not, that ultimately weakens the process even if, most of the time, those who pay don't expect or get favored treatment. Having some participants get a free ride that really comes at the expense of other participants (and potentially competing organizations) is just not a healthy idea. (2) Trying to figure out exactly what remote participation (equipment, staffing, etc.) will cost the IETF and then trying to assess those costs to the remote participants would be madness for multiple reasons. Not least of those is the fact that, if new equipment or procedures are needed, there will be significant startup costs with the base of remote participants arriving only later. One could try to offset that effect with some accounting assumptions that would be either rather complex, rather naive, or both, but, as a community, we aren't good at those sorts of calculations nor at accepting them when the IAOC does them in a way that doesn't feel transparent.
(3) Trying to establish a more or less elaborate system of categories of participants with category-specific fees or to scale the current system of subsidies and waivers to accommodate the full range of potential in-person and remote participants is almost equally insane. While we might make such arrangements work and keeping categories and status off badges helps, it gets us entangled with requiring that the Secretariat and/or IAD and/or some IAOC or other leadership members be privy to information that is at least private and that might be formally confidential. We don't want to go there if we can help it. (4) The current registration fee covers both some proportion of meeting-specific expenses and some proportion of overhead expenses that are not specific to the meetings or to meeting attendance. Breaking those proportions down specifically also would require some accounting magic, especially given the differences between meetings with greater or lesser degrees of sponsorship. But I believe that, if we can trust the IAOC to set meeting registration fees for in-person attendees, we can trust them to set target (see below) meeting registration fees for remote participants. Note that such a fee involves some reasonable contribution to overhead expenses (including remote participation costs, secretariat site visits, and the like) just as the fee for in-person participants does -- it is not based on the costs of facilities for remote participation. So, to suggest this again in a different context: Remote participants then pay between 0 and 100% of that target fee, based on their consciences, resources, and whatever other considerations apply. No one asks how given remote participants or their organizations arrive at the numbers they pick. No one is asked to put themselves into a category or explain their personal finances. The IETF does not need to offer promises about the confidentiality of information that it doesn't collect. 
Any Euro we collect is one Euro more than we are collecting now and, if a Euro or two is what a participant from a developing area feels is equitable for him or her to pay, then that is fine. That voluntary fee model would be a terrible one except that I think we can actually trust the vast majority of the community to be reasonable. Certainly some people will not be, but they would probably figure out how to game a category system or any more complex system we came up with. Just as the price of running a truly open standards process includes tolerating a certain number of non-constructive participants (and other subspecies of trolls), it may require tolerating a certain number of people who won't want to pay their fair share (or whose judgments of fair might be at variance with what other people with the same information would conclude). Absent clear indications that a more complex process, or one that relied more on leadership judgments about individual requests, would produce more than enough additional revenue to compensate
Re: Call for Review of draft-rfced-rfcxx00-retired, List of Internet Official Protocol Standards: Replaced by an Online Database
--On Thursday, August 15, 2013 12:06 -0700 SM s...@resistor.net wrote: At 11:48 14-08-2013, IAB Chair wrote: This is a call for review of List of Internet Official Protocol Standards: Replaced by an Online Database prior to potential approval as an IAB stream RFC. The document is available for inspection here: https://datatracker.ietf.org/doc/draft-rfced-rfcxx00-retired/ From Section 2.1 of RFC 2026: 'The status of Internet protocol and service specifications is summarized periodically in an RFC entitled Internet Official Protocol Standards.' My guess is that draft-rfced-rfcxx00-retired cannot update RFC 2026. Does the IAB have any objection if I do something about that? SM, You have just identified another aspect of why I find this document troubling. I note that the requirement of RFC 2026 has not been satisfied for years unless one interprets periodically as consistent with whenever we get around to it, which, in today's age, is likely to be never. I note that the last version of STD 1 was RFC 5000, published in May 2008, and that its predecessor was RFC 3700 in July 2004, i.e., there was a four-year interval followed by at least a five-year one. That is well outside most normal interpretations of periodic. I don't personally think it is worth it (or, more specifically, think the resources could be better spent in other ways) but, if one believed the keep anything that might turn out to be historically important theme of the IETF 86 History BOF, then there is value in maintaining the sort of comprehensive status snapshot that STD 1 was supposed to provide (once its [other] original purpose of being part of a report to the sponsor became irrelevant) even if that snapshot is taken only once every few years. That aside, I think this document is almost completely unnecessary. RFC 5000 already points to the HTML version of the RFC index as the authority for contemporary information.
There has, as far as I know, never been a requirement that STD 1 be issued as RFCs numbered NN00, nor that all such numbers be reserved for that purpose, outside the internal conventions of the RFC Editor function. At the same time, if the IAB and RSE believe that assembling and publishing this statement formally and in the RFC Series is a good use of their time and that of the community, I think it is basically harmless, _unless_ it becomes an opportunity to nit-pick such questions as its relationship to requirements or statements in 2026 or elsewhere. From Section 3: This document formally retires STD 1. Identifier STD 1 will not be re-used unless there is a future need to publish periodic snapshots of the Standards Track documents (i.e., unless the documentation is resumed). The document argues that STD 1 is historic as there is an online list now. The above reserves an option to restart periodic snapshots if there is a future need. I suggest removing that option as I presume that the IAB has thought carefully about the long term evolution of the Series before taking the decision to retire STD 1. This is another form of the nit-picking (if there were protocols involved, the historical term would involve the phrase protocol lawyer) that concerns me. I don't remember where it is written down (if at all), but the RFC Editor has had a firm rule ever since I can remember that STD numbers are never reused for a different topic. Violating that prohibition against reuse would be a really stupid move on the part of the RFC Editor and/or the IAB. If they were to be that stupid, we have much more serious other problems. If they are going to continue to avoid that sort of stupidity, then that part of the statement above is completely unnecessary - but still harmless. As far as removing the option is concerned, I think doing so would be pointless if the rest of the statement remains. 
For better or worse, anything that is written into one RFC by the IAB (or, under different circumstances, the IETF) can be amended out of it by another RFC. While I think it unlikely, I can imagine at least one scenario, tied to the historical concern above, under which we would resume publishing a snapshot. Whether the IAB has considered it or not and whatever promises this document does or does not make are irrelevant to whether or not that would happen. Summary: I think the RFC Series Editor should just make whatever announcement she feels it is appropriate to make, recognizing that we stopped regularly updating STD 1 long ago and have no present intention of restarting. I think this document and the process and associated work it imposes on the IAB and the community are a waste of time that could be better used in other ways. However, if they feel some desire to publish it in some form, let's encourage them to just get it done and move on rather than consuming even more time on issues that will make no difference in the long term. best, john
Re: Charging remote participants
--On Friday, August 16, 2013 15:46 -0400 Hadriel Kaplan hadriel.kap...@oracle.com wrote: On Aug 16, 2013, at 1:53 PM, John C Klensin john-i...@jck.com wrote: (1) As Dave points out, this activity has never been free. The question is only about who pays. If any participants have to pay (or convince their companies to pay) and others, as a matter of categories, do not, that ultimately weakens the process even if, most of the time, those who pay don't expect or get favored treatment. Having some participants get a free ride that really comes at the expense of other participants (and potentially competing organizations) is just not a healthy idea. Baloney. People physically present still have an advantage over those remote, no matter how much technology we throw at this. That's why corporations are willing to pay their employees to travel to these meetings. And it's why people are willing to pay out-of-pocket for it too, ultimately. It's why people want a day-pass type thing for only attending one meeting, instead of sitting at home attending remote. Being there is important, and corporations and people know it. Sure. And it is an entirely separate issue, one which I don't know how to solve (if it can be solved at all). It is unsolvable in part because corporations --especially the larger and more successful ones-- make their decisions about what to participate in, at what levels, and with whatever choices of people, for whatever presumably-good business reasons they do so. I can, for example, remember one such corporation refusing to participate in a standards committee that was working on something that many of us thought was key to their primary product. None of us knew, then or now, why they made that decision although there was wide speculation at the time that they intended to deliberately violate the standard that emerged and wanted plausible deniability about participation. Lots of reasons; lots of circumstances.
An audio input model (i.e., conference call model) still provides plenty of advantage to physical attendees, while also providing remote participants a chance to have their say in a more emphatic and real-time format. We're not talking about building a telepresence system for all remote participants, or using robots as avatars. IIRC, we've tried audio input. It works really well for conference-sized meetings (e.g., a dozen or two dozen people around a table) with a few remote participants. It works really well for a larger group (50 or 100 or more) and one or two remote participants. I've even co-chaired IETF WG meetings remotely that way (with a lot of help and sympathy from the other co-chair or someone else taking an in-room leadership role). But, try it for several remote participants and a large room full of people, allow for audio delays in both directions, and about the last thing one needs is a bunch of disembodied voices coming out of the in-room audio system at times that are not really coordinated with what is going on in the room. Now it can all certainly be made to work: it takes a bit of coordination on a chat (or equivalent) channel, requests to get in or out of the queue that are monitored from within the room, and someone managing those queues along with the mic lines. But, by that point, many of the advantages of audio input relative to someone reading from Jabber have disappeared and the other potential problems with audio input -- noise, level setting, people who are hard to understand even if they are in the room, and so on -- start to dominate. Would I prefer audio input to typing into Jabber under the right conditions? Sure, in part because, while I type faster than average, it still isn't fast enough to compensate for the various delays. But it really isn't a panacea for any of the significant problems. (2) Trying to figure out exactly what remote participation (equipment, staffing, etc.)
will cost the IETF and then trying to assess those costs to the remote participants would be madness for multiple reasons. [...snip...] Yet you're proposing charging remote participants to bear the costs. I'm confused. I am proposing charging remote participants a portion of the overhead costs of operating the IETF, _not_ a fee based on the costs of supporting remote participation. And, again, I want them to have the option of deciding how much of it they can reasonably afford to pay. ... best, john
Re: Radical Solution for remote participants
--On Tuesday, August 13, 2013 06:24 -0400 John Leslie j...@jlc.net wrote: Dave Cridland d...@cridland.net wrote: On Tue, Aug 13, 2013 at 2:00 AM, Douglas Otis doug.mtv...@gmail.com wrote: 10) Establish a reasonable fee to facilitate remote participants who receive credit for their participation equal to that of being local. I understand the rationale here, but I'm nervous about any movement toward a kind of pay-to-play standardization. Alas, that is what we have now. :^( There are a certain number of Working Groups where it's standard operating practice to ignore any single voice who doesn't attend an IETF week to defend his/her postings. There is also a matter of equity even if one were to ignore the costs to the community of enabling remote participation. Some fraction of the registration fee goes to support IETF overhead activities that are not strictly associated with the costs of particular meetings. Although it would be a pity to turn us into a community of hair-splitting amateur accountants [1], it is inappropriate to expect those who participate in f2f meetings to fully subsidize those who participate remotely. ... I don't always understand what Doug is asking for; but I suspect he is proposing to define a remote-participation option where you get full opportunity to defend your ideas. This simply doesn't happen today. ... One option might be to give chairs some heavy influence on remote burserships. ... That seems premature at this point: the likely costs aren't neatly correlated to number of remote participants; so it's not clear there's any reason to support an individual, rather than support the tools. Worse, enabling WG Chairs to make de facto decisions about who participates or not would have the appearance of enabling the worst types of abuse. It would be worse than figuring out how to call on advocates of only one position to speak. Even if those abuses never occurred, the optics and risk would be bad news. [2] ...
Conceivably what we need is an automated tool to receive offers to (partially) subsidize the cost of a tool for a particular session. Seems to me to be the wrong way to go. I wouldn't want to discourage Cisco's generosity. And I think it is time to declare the Meetecho experiment to have been concluded successfully. It can use some improvements and I hope it continues to evolve (I could say the same thing about WebEx but I'm less optimistic about evolution). But it works well. If we want to use it, it is probably time to take it seriously. Certainly that means having it available for all relevant sessions, rather than constrained by the size of the current team and their resources. If that means training for operators other than the core team, having to put in-room operators on a non-volunteer basis, and/or IETF assumption of equipment expenses, that would, IMO, be completely appropriate (of course, that interacts with your comment about remote participation having costs). Similarly, "everyone pays but some pay less or zero and we set up a procedure and/or bureaucracy to figure out who the latter are" seems like a bad idea. Simpler suggestion (this interacts with the data collection thread): (1) Remote participants are required [3] to register (remote lurkers should continue to get a free ride for multiple reasons). (2) The IAOC sets and announces a remote registrant fee based on overhead expenses (those not associated with physical presence at meetings). Marginal costs of remote participation are treated as overhead, not direct meeting expenses, because they benefit the whole community. (3) Remote participants pay that fee, or part of it, on a good faith and conscience "what you can afford" basis with information about what any particular person pays kept confidential by the secretariat. Again, we depend on good faith. Financially, whatever we collect is better than what we collect today.
Collecting some fee is better than none and either too high a fee or the necessity to beg would discourage registration or, worse, participation. (4) If the IAOC or ISOC decide to conduct a diversity campaign to help keep those fees low, more power to them, but such a campaign (or its success) is not a requirement for the above model to work. best, john [1] Possibly an improvement on a community of amateur lawyers, perhaps not. [2] Incidentally, one of the advantages of the otherwise clumsy and inefficient mic lines is that they make the queue clear to everyone. [3] We should recognize that we have no realistic enforcement ability, at least unless non-registration is used to subvert IPR rules. Any mechanism we might devise would not stop the truly malicious. This has to be a good faith requirement.
Re: Last Call: draft-bormann-cbor-04.txt (Concise Binary Object Representation (CBOR)) to Proposed Standard
--On Saturday, August 10, 2013 11:14 -0400 Ted Lemon ted.le...@nominum.com wrote: On Aug 10, 2013, at 11:00 AM, Dave Crocker d...@dcrocker.net wrote: Most of this thread has ignored the IETF's own rules and criteria. As such, it's wasteful, at best, though I think it's actually destructive, since it provides fuel to the view that the IETF is a questionable venue for standards work. +1 +1 more. Not doing this sort of thing to ourselves is probably far more important, in the long run, than trying to slightly re-tune the language of 2026 in the hope that it will impress someone. The latter might still be worthwhile, but it is, IMO, far more important to stop shooting ourselves in the foot. john
Re: Community Feedback: IETF Trust Agreement Issues
--On Thursday, August 08, 2013 23:20 + John Levine jo...@taugh.com wrote: That sounds right. Someone might want to add commentary (even in English) to the Tao, such as to discuss local participants, diversity, and so on. Someone might, or they might rewrite it to say that IETF meetings have simultaneous translation, and while the IAB is all U.S. greybeards, the IESG members are chosen to represent the gender and ethnic balance of the whole world. Or they might rewrite it to say that the IETF has corporate members, you have to work for one to participate, and all RFCs are standards. ... It's extremely hard to let just one of the cat's paws out of the bag. In practice either we have change control or we don't, and I don't see much sentiment for giving it up to unknown CC users. I have to agree with John Levine about this. The decision to move the Tao to a web page didn't change the degree to which we refer to it as an authoritative document. As long as we are going to do that, we need to maintain change control over anything pretending to be that document. That view is, IMO, entirely consistent with the change approval process specified in RFC 6722. Also like John, I don't see a problem with CC BY-ND if someone wants that on some sort of principle. But I really don't think we should be changing the Trust agreement in this area unless someone can identify one or more cases in which there are specific benefits to the IETF from doing so. For the reasons above, I don't think the Tao and a CC license that permits modifications is it. Indeed, I can more easily see it for a subset of Independent Stream RFCs rather than something that the IETF points to as an authority (disclaimers or not), but we haven't heard a request from the ISE. john
Re: procedural question with remote participation
--On Tuesday, August 06, 2013 11:06 -0400 Andrew Feren andr...@plixer.com wrote: ... I think this sort of misses the point. At least for me as a remote participant. I'm not interested in arguing about whether slides are good or bad. I am interested in following (and being involved) in the WG meeting. When there are slides I want to be able to see them clearly from my remote location. Having them integrated with Meetecho works fine. Having slides and other materials ... Let me say part of this differently, with the understanding that I may be more fussy (or older and less tolerant) than Andrew is... If the IETF is going to claim that remote participation (rather than remote passive listening/ observation with mailing list follow up) is feasible, then it has to work. If, as a remote participant, I could be guaranteed zero-delay transmission and receipt of audio and visual materials (including high enough resolution of slides to be able to read all of them) and that speakers (in front of the room and at the mic) would identify themselves clearly and then speak clearly and at reasonable speed, enunciating every word, I wouldn't care whether slides were posted in advance or not. Realistically, that doesn't happen. In some cases (e.g., lag-free audio) it is beyond the state of the art or a serious technical challenge (e.g., video that is high enough resolution that I can read slides that have been prepared with 12 point type). In others, we haven't done nearly enough speaker training or it hasn't been effective (e.g., people mumbling, speaking very quickly, swallowing words, or wandering out of microphone or camera range). And sometimes there are just problems (e.g., intermittent audio or video, servers crashing, noisy audio cables or other audio or video problems in the room). In those cases, as a remote participant, I need all the help I can get. I'd rather that no one ever use a slide that has information on it in a type size that would be smaller than 20 pt on A4 paper.
But 14 pt and even 12 pt happen, especially if the slides were prepared with a tool that quietly shrinks things to fit in the image area. If I'm in the room and such a slide is projected, I can walk to the front to see it if I'm not already in front and can't deduce what I need from context. If I'm remote and have such a slide in advance, I can zoom in on it or otherwise get to the information I need (assuming high enough resolution). If I'm remote, reading the slide off video, especially low resolution video, is hopeless. More generally, being able to see an outline of what the speaker is talking about is of huge help when the audio isn't completely clear. Others have mentioned this, but, if I couldn't read and understand slides in English easily in real time, it would be of even more help if I had the slides far enough in advance to be able to read through them at my own pace before the WG session and even make notes about what they are about in my most-familiar language ... and that is true whether I'm remote or in the room. And, yes, for my purposes, 48 hours ahead of the WG meeting would be plenty. But I can read and understand English in real time. If the IETF cares about diversity as well as about remote participation and someone whose English is worse than mine is trying to follow several WGs, 48 hours may not be enough without requiring a lot of extra effort. That is not, however, the key reason I said a week. The more important part of the reason is that a one-week cutoff gives the WG Chair (or IETF or IAB Chairs for the plenaries) the time to make adjustments. If there is a nominal one week deadline, then the WG Chair has lots of warning when things don't show up. She can respond by getting on someone's case, by accepting a firm promise and a closer deadline, by finding someone else to take charge of the presentation or discussion-leading, or by rearranging the agenda. And exceptions can be explained to the WG on the mailing list.
With a 48 hour deadline, reasonable ways to compensate are much less likely, the Chair is likely to have only the choice that was presented this time (accepting late slides or hurting the WG's ability to consider important issues) and one needs to start talking about sanctions for bad behavior. I would never suggest a firm one week or no agenda time rule. I am suggesting something much more like a one week or the WG Chair needs to make an exception, explain it to the WG, and be accountable if the late slides cause too much of a problem. There is some similarity between this and the current I-D cutoff rule and its provision for AD-authorized exceptions. That similarity is intentional. --On Monday, August 05, 2013 13:36 -0500 James Polk jmp...@cisco.com wrote: At 12:38 PM 8/5/2013, John C Klensin wrote: Hi. I seem to have missed a lot of traffic since getting a few responses yesterday. I think the reasons why slides should be available well in advance
Re: Berlin was awesome, let's come again
--On Wednesday, August 07, 2013 00:52 +0200 Martin Rex m...@sap.com wrote: ... IETF 39 was in Munich (August 1997) ArabellaSheraton @ Arabella Park, and it was HOT pretty much the whole week. If I recall, another very successful meeting in a place we should go back to. Now, if only the IAOC could work out a better weather contract for both Europe and, perhaps, a required March thaw in Minneapolis. john
Re: Anonymity versus Pseudonymity (was Re: [87attendees] procedural question with remote participation)
--On Sunday, August 04, 2013 19:31 + Ted Lemon ted.le...@nominum.com wrote: If you came to the IETF and were working for company X, registered pseudonymously, and didn't disclose IPR belonging to you or company X, and then later company X sued someone for using their IPR, you and company X would get raked over the coals, jointly and severally; the deliberate attempt to deceive would make things worse for you. And that's the point: to provide you with a strong disincentive to doing such a thing. So whether the rules prevent you from being anonymous, or prevent you from suing, everybody's happy. If company X wanted to collaborate with Yoav in preserving his pseudonym (i.e., not disclosing the binding to his name), they could presumably file a disclosure without identifying the particular employee for whom the disclosure was made. Especially with the ambiguities created by anonymous and pseudonymous remote participation, I assume we would not decline to post an IPR disclosure from an organization on the grounds that we didn't know who was affiliated with it who participated in the IETF. (IANAL, so I'm just explaining my understanding of the situation.) ditto. best, john
Re: procedural question with remote participation
Hi. I seem to have missed a lot of traffic since getting a few responses yesterday. I think the reasons why slides should be available well in advance of the meeting have been covered well by others. And, as others have suggested, I'm willing to see updates to those slides if things change in the hours leading up to the meeting, but strongly prefer that those updates come as new slides with update-type numbers or other identification rather than new decks. In other words, if a deck is posted in advance with four slides numbered 1, 2, 3, and 4, and additional information is needed for 3, I'd prefer to see the updated deck consist of slides 1, 2, 3, 3a, 3b, 4 or 1, 2, 3a, 3b, 3c, 4, rather than 1, 2, 3, 4, 5, 6. I also prefer consolidated decks but, if WG chairs find that too difficult, I'm happy to do my own consolidating if everything is available enough in advance for me to do so. Almost independent of the above, the idea that one should just watch the slides on Meetecho implies that Meetecho is available in every session (it isn't) and that everything works. In addition, they either need the slides in advance or need to be able to broadcast real-time video at a resolution that makes the slides readable. The latter was not the case last week in some of the sessions in which Meetecho was transmitting the slides, sometimes due in part to interesting speaker-training issues. The reasons to discourage anonymity aren't just patent nonsense (although that should be sufficient and I rather like the pun). Despite all we say and believe about individual participation, the IETF has a legitimate need to understand the difference between comments on a specification from an audience with diverse perspectives and organized campaigns or a loud minority with a shared perspective.
That requires understanding whether speakers are largely independent of each other (versus what have sometimes been referred to as sock puppets for one individual) or whether they are part of an organization mounting a systematic campaign to get a particular position adopted (or not adopted). The latter can also raise some rather nasty antitrust / anti-competitiveness issues. Clear identification of speakers, whether in the room or remote, can be a big help in those regards, even though it can't prevent all problems. And the IETF having a policy that requires clear identification at least establishes that we, organizationally and procedurally, are opposed to nefarious, deceptive, and possibly illegal behavior. A rule about having slides well in advance helps in another way: slides that are bad news for some reason but posted several days in advance of the meeting provide opportunities for comments and adjustments (from WG Chairs and others). Ones that are posted five minutes before (or 10 minutes after) a session lose that potential advantage. Again, I don't think we should get rigid about it: if slides are posted in advance and then supplemented or revised after feedback is received, everyone benefits. I want to stress that, while I think registration of remote people who intend to participate is desirable for many reasons, I think trying to condition microphone use (either remote or in-room) with proof of registration and mapping of names would be looking for a lot of trouble with probably no significant benefits. best, john
Re: procedural question with remote participation
--On Tuesday, August 06, 2013 02:06 +0100 Stephen Farrell stephen.farr...@cs.tcd.ie wrote: ... On 08/05/2013 06:38 PM, John C Klensin wrote: The reasons to discourage anonymity aren't just patent nonsense (although that should be sufficient and I rather like the pun). Thanks. The pun was accidental as it happens, but I did leave it in after I spotted it :-) Puns aside, it's an important point. Most patents are nonsense (in terms of being really inventive) and we shouldn't base our processes anywhere near primarily on the existence of that nonsense. Agreed, modulo observations about how much time we seem to put into fine-tuning IPR policies and devising threats to make to those who don't seem inclined to comply. Despite all we say and believe about individual participation, the IETF has a legitimate need to understand the difference between comments on a specification from an audience with diverse perspectives and organized campaigns or a loud minority with a shared perspective. Good point. We have similar issues with folks who do lots of contract work I guess. But, IMO we should first make sure we can hear the good points that are to be made, and only then modulate our reactions to those in terms of who-pays-whom or whatever. Indeed. Put another way, regardless of patents or who's paying, if someone (even anonymously) comes up with a really good technical point, then we do have to pay attention. But I think we do do that. When, as you indirectly point out, we can hear them. In contrast, I think the real challenge remote participants face is being heard. And when/if we solve that problem, I suspect that remote participants with bad ideas will be a far worse problem than those who'd like to submarine a patent or further a subtle corporate agenda. Of course, that is also true of participants who show up at f2f meetings.
So again that leads me back to trying to encourage folks to just make the tools better for us all and to only then try figure out how we need to manage that. Perhaps Hadriel's anecdote above means that how we use jabber is, after about a decade, now mature enough that we ought think more about how we formalise its use. I'm ok with waiting another longish time before even thinking about how to do the same with successful inbound audio for example. I'm actually not a big fan of inbound audio, at least not yet. It is subject to the same technical and operational issues that make outbound audio fragile, including the difficulties of clear and standard pronunciation plus the same how to raise your hand, get in line, or otherwise ask for the floor issues that Jabber does. But, if we are going to rely on Jabber for input, we need to move toward treating it as a source of input with the same priority as those in the room (and relatively more real time), not something that is a nice-to-have when it happens to work. best, john
Re: procedural question with remote participation
--On Sunday, August 04, 2013 07:27 -0400 Michael Richardson m...@sandelman.ca wrote: ... * On several occasions this week, slides were uploaded on a just-in-time basis (or an hour or so after that). Agreed. I'd like to have this as a very clear IETF-wide policy. No slides 1 week beforehand, no time allocation. I had two different WG chairs (from two different WGs) tell me this week that their WGs really needed the presentations and discussion to move forward and they therefore couldn't do anything other than let things progress when they didn't get the slides and get them posted before the session started. This is part of what I mean by the community not [yet] taking remote participation seriously. If having the slides in advance is as important to remote participants as Michael and I believe, then the community has to decide that late slides are simply unacceptable behavior except in the most unusual circumstances, with unacceptable being viewed at a level that justifies finding replacements for document authors and even WG chairs. I also note that the 1 week cutoff that Michael suggests would, in most cases, eliminate "had no choice without impeding WG progress" as an excuse. A week in advance of the meeting, there should be time, if necessary, to find someone else to organize the presentation or discussion (and to prepare and post late slides that are still posted before the meeting if needed). If it is necessary to go ahead without the slides, it is time to get a warning to that effect and maybe an outline of the issues to be discussed into the agenda. If the WG's position is that slides 12 or 24 hours before the WG's session are acceptable, then the odds are high that one glitch or another will trigger a "well, there are no slides posted but they are available in the room and the discussion is important" decision.
Again, I think the real question is whether we, as a community, are serious about effective remote participation; serious enough to back a WG chair who calls off a presentation or replaces a document author, or an AD who replaces a WG chair, for not getting with the program. best, john
Re: The Friday Report (was Re: Weekly posting summary for ietf@ietf.org)
--On Sunday, August 04, 2013 19:53 + John Levine jo...@taugh.com wrote: If there is a serious drive to discontinue the weekly posting summary - I strongly object. As far as I can tell, one person objects, everyone else thinks it's fine. I do not want to be recorded as thinking it is fine. If nothing else, I think what is being reported is meaningless statistically (which doesn't mean people can't find value in it). However, I do not object to its being posted as long as it isn't used to justify personal attacks on individuals for their ranking. It seems to me that isn't quite what you said, rough consensus or not. best, john
Re: Berlin was awesome, let's come again
--On Friday, August 02, 2013 06:47 -0700 Randall Gellens rg+i...@qualcomm.com wrote: I can rattle off a very small number of hotels around the world where they do wash in-room items in the restaurant dishwasher. Or equivalent. Most of those seal the washed cups in paper or plastic to show that they have done so. Sadly from an environmental standpoint, there are fewer that do wash things that way than there were a decade or two ago: especially for a large hotel with many rooms it is simply cheaper to supply disposable paper and/or plastic cups in the rooms than it is to haul the glassware to the washing device, seal it, and haul it back to the rooms. It is a problem, but there is little point in blaming a single hotel -- it is a chain-wide and industry-wide economic and policy problem. Bring your own and/or wash your own. And, by the way, if you are sensitive to such things, washing cups or glassware from which you intend to eat or drink with hand or bath soap is often not really optimal because of moisturizing or fragrance ingredients. A 100 ml bottle of dish soap goes a long way. john
Re: procedural question with remote participation
--On Saturday, August 03, 2013 08:55 +0200 Olle E. Johansson o...@edvina.net wrote: ... Just a note for the future. I think we should allow anonymous listeners, but should they really be allowed to participate? We don't allow anonymous comments at the microphone in face-to-face meetings, requiring all people to clearly state their names and have those names recorded in the meeting minutes and in the Jabber log. I don't see why we would change this for remote participants. ... (moving to ietf mailing list) Absolutely. Now, should we add an automatic message when someone joins the chat rooms, or a message when meetings begin, that all comments made in the chat room are also participation under the Note Well? Olle, First, probably yes to the when meetings begin part, but noting that someone who gets onto the audio a few minutes late is in exactly the same situation as someone who walks into the meeting room a few minutes late -- announcements at the beginning of the session are ineffective. But, more generally... I've said some of this in other contexts but, as a periodic remote attendee, including being remote for IETF 87, I'd support a more radical proposal, for example: We regularize remote participation [1] a bit by doing the following. At some level, if remote participants expect to be treated as serious members of the community, they (we) can reasonably be expected to behave that way. * A mechanism for remote participants should be set up and remote participants should be expected to register. The registration procedure should include the Note Well and any other announcement the IETF Trust, IAOC, or IESG consider necessary (just like the registration procedure for f2f attendance). * In the hope of increased equity, lowered overall registration fees, and consequently more access to IETF participation by a broader and more diverse community, the IAOC should establish a target/ recommended registration fee for remote participants.
That fee should reflect the portion of the registration fee that is not specifically associated with meeting expenses (i.e., I don't believe that remote participants should be supporting anyone's cookies other than their own). * In the interest of maximum participation and inclusion of people who aren't attending f2f for economic reasons, I think we should treat the registration fee as voluntary, with people contributing all or part of it as they consider possible. No questions asked and no special waiver procedures. On the other hand, participation without registration should be considered as being in extremely bad taste or worse, on a par with violations of the IPR disclosure rules. * I don't see a practical and non-obtrusive way to enforce registration, i.e., preventing anyone unregistered from speaking, modulo the bad taste comment above. But we rarely inspect badges before letting people stand in a microphone line either. In return, the IETF generally (and particularly people in the room) needs to commit to a level of seriousness about remote participation that has not consistently been in evidence. In particular: * Remote participants should have as much access to mic lines and the ability to participate in discussions as those who are present in the room. That includes recognizing that, if there is an audio lag and it takes a few moments to type in a question or comment, some flexibility after "the comment queue is closed" may be in order. For some sessions, it might require doing what ICANN has started doing (at least sometimes), which is treating the remote participants as a separate mic queue rather than expecting the Jabber scribe (or remote participant messenger/ channeler) to get at the end of whatever line is most convenient. * It is really, really, important that those speaking, even if they happen to be sitting at the chair's table, clearly and carefully identify themselves.
Last week, there were a few rooms in which the audio was, to put it very politely, a little marginal. That happens. But, when it combines with people mumbling their names or saying them very quickly, the result is as little speaker identification as would have been the case if the name hadn't been used at all. In addition, some of us suffer from the disability of not being able to keep track of unfamiliar voices while juggling a few decks of slides, a Jabber session, audio, and so on. I identified myself 10 minutes ago is not generally adequate. * On several occasions this week, slides were
Re: The Trust Agreement
FWIW, I share Brian's concern and reasoning about these questions (and his allergy). I might have a lower threshold of necessity as a requirement for changing the agreement, but I'm not convinced -- from either the slide or what I could hear of the audio-- that it is necessary. john --On Friday, August 02, 2013 08:59 +1200 Brian E Carpenter brian.e.carpen...@gmail.com wrote: Hi, Re the Trust's plenary slides (I was not in Berlin): I have an allergy to modifying the Trust Agreement unless there's an overwhelming reason to do so. It was a very hard-won piece of text. ...
Re: PS to IS question from plenary
--On Tuesday, July 30, 2013 16:41 +0200 IETF Chair ch...@ietf.org wrote: Last night there was a question in the plenary about how many PS-IS transitions have occurred since RFC 6410 was published in October 2011. That RFC changed the three-step standards process to two steps. There was also a question of how this compared to previous times before that RFC got approved. Looking at the timeframe from October 2011 to today (22 months), there have been four such protocol actions. These results are given by searching the IETF Announce mail archive: ... Prior to the publication of RFC 6410, in the preceding 22 months there were these 20 actions raising standards to either Draft Standard or Full Standard: ... I should insert here the Standard disclaimer about possibly faulty search methodology or records, misunderstanding the question, or the hasty interpretation of results. In particular, the above search was not easy on ARO, involved manual steps, and I might have easily missed something. And I wish I had been able to do a database query instead. Feel free to repeat and verify my results... Jari, Thanks for this. Disclaimers and possible small classification errors aside, and being careful to avoid making causal assumptions, I believe that the implication of the above is that there is no evidence that the 3 -> 2 transition has increased the number of documents being moved or promoted out of Proposed Standard. If one were to assume a causal relationship and an absence of external confounding variates or processes, one might even conclude that the 3 -> 2 transition has made things quite a lot worse. Conversely, it seems to me that one could argue that the change has made things better only by demonstrating the existence of a process that would have led to considerably fewer than four documents being moved out of Proposed Standard in the last 22 months in the absence of the change.
Given that the apparently-significant reduction in documents moved out of Proposed Standard is far worse than we expected, is it time for Scott Bradner and me to review draft-bradner-restore-proposed-00, issue a new version, and start a serious discussion about that model of a solution? Would you be willing to sponsor such a draft or, if you prefer, organize a WG or equivalent to consider it? thanks, john
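Anyone wanting to repeat or verify Jari's count without the manual archive steps could do so against a saved copy of the announcement archive. A rough sketch, with the mbox path, the Subject-line heuristic, and the helper name all my own illustrative assumptions rather than any actual IETF tooling:

```python
# Hedged sketch: count "Protocol Action" announcements in a locally
# saved mbox copy of an announcement archive, within a date window.
# The subject-keyword heuristic is an assumption; real classification
# of PS->IS actions would need closer inspection of each message.
import mailbox
from datetime import timezone
from email.utils import parsedate_to_datetime

def count_protocol_actions(mbox_path, start, end, keyword="Protocol Action"):
    """Count messages whose Subject contains `keyword` and whose
    Date header falls in the half-open window [start, end)."""
    hits = 0
    for msg in mailbox.mbox(mbox_path):
        subject = msg.get("Subject", "")
        date_hdr = msg.get("Date")
        if keyword not in subject or not date_hdr:
            continue
        try:
            when = parsedate_to_datetime(date_hdr)
        except (TypeError, ValueError):
            continue  # skip unparseable Date headers
        if when.tzinfo is None:
            when = when.replace(tzinfo=timezone.utc)
        if start <= when < end:
            hits += 1
    return hits
```

Pointed at a download of the relevant archive with a window of October 2011 to the present, this would reproduce the kind of tally described above, modulo the same classification caveats Jari gives.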
Re: Remote participants, newcomers, and tutorials
--On Monday, July 29, 2013 01:37 -0400 Brian Haberman br...@innovationslab.net wrote: ... One of the things that I ask the Internet Area chairs to do is send in a summary of their WG after each IETF meeting. Those summaries generally give folks a good idea of the current state of each WG. I post those summaries on the Internet Area wiki. An alternative that would work as well is to have each WG post summaries to their own wikis. Each WG has a wiki available via their Tools page (e.g., http://trac.tools.ietf.org/wg/6man/trac/wiki). I like seeing the summaries from my chairs and I have gotten feedback from participants that they find them quite useful for keeping up with WGs that are tangential to their primary focus. I would encourage every WG chair to periodically summarize the state of their WG/drafts. Dave and a few other ancients will recall that there was a time when there was a requirement for ADs to put together per-meeting Area Reports, which went into the minutes. Unless ADs were masochists who wanted to do all the work themselves, that pretty much required the sort of status reports that he and Brian are talking about. It also ensured that ADs were aware of what was going on in all of the WGs for which they were responsible and that, if there were two ADs in an area, they were talking with each other. If those expectations were not met, someone observing that would presumably have something very concrete to tell the Nomcom. In the context of the current discussion, a set of well-written and frequently-updated area reports could also be a big help to a newcomer trying to navigate WG names and acronyms. I agree that it would probably help to be more descriptive about WG names rather than looking for things that will make cute acronyms. Whether we move in that direction or not, most newcomers and isolated/remote participants are going to find it easier to identify an Area of interest than a specific WG.
A well-written Area Report that includes brief descriptions of the main focus of each WG along with current status information would be, IMO, a huge help in matching people and specific interests. I think a Wiki or equivalent would be a fine way to maintain such pages but, given how well we do about keeping benchmarks and similar information up to date and the advantages deadlines seem to bring, I'd like to see at least snapshots or the equivalent in meeting minutes. john
Re: Remote participants access to Meeting Mailing Lists was Re: BOF posters in the welcome reception
--On Saturday, July 27, 2013 03:25 -0700 Alexa Morris amor...@amsl.com wrote: (3) While it is almost certainly too late to populate it before Berlin, I think the meeting page template could use a Remote Participants main section with pointers to hints and other relevant materials, including which mailing lists one should watch and that the web version of the meeting agenda should be refreshed at least daily. Wouldn't hurt to repeat the instructions about what to do if the feeds go bad there either. We have created a small section called Remote Participation on the lower right side of the 87 meeting page here: http://www.ietf.org/meeting/87/index.html. It can and will be improved over time, but it's a start. This is a wonderful step forward. Thanks. john
Remote participants, newcomers, and tutorials (was: IETF87 Audio Streaming Info)
Hi. For a newcomer or someone expecting to write I-Ds, some of the most important sessions at the IETF are the various Sunday afternoon tutorials and introductions. Many of them are (or should be) of as much interest to remote participants as to f2f attendees. Until and unless a newcomer's tutorial can be prepared that is focused on remote participants, even that session should be of interest. For this particular meeting, all of the following seem relevant to at least some remote participants: Newcomers' Orientation; Tools for Creating I-Ds and RFCs; IAOC Overview Session; Multipath TCP; and Applying IP Flow Information Export (IPFIX) to Network Measurement and Management. So... (1) The note below strongly implies that none of those sessions are being audiocast. Why not, and can that be fixed? (2) There is no hint on the agenda or tools agenda about availability of presentation and related materials (slides, etc.) for those sessions. Do those materials not exist? I know, but a newcomer or remote participant might not, that I can find some tutorials by going to the IETF main page and going to Tutorial under Resources, but I have no idea which of those links actually reflects what will be presented on Sunday. Assuming the presentation materials do exist for at least several of the sessions, finding them is much like the situation with subscribing to the 87all list. It should not involve a treasure hunt at which only very experienced IETF participants can be expected to succeed. Specific suggestions: (i) Let's get these open Sunday sessions audiocast and/or available over Meetecho or WebEx. If that is impossible for IETF 87, it should be a priority for IETF 88 and later. (ii) If there are presentation materials available, links from the tools agenda and an announcement to IETF-Announce as to where to find them would be desirable. (iii) If presentation materials are not available, why not? And, more important, can this be made a requirement for IETF 88 and beyond?
thanks, john --On Friday, July 26, 2013 12:00 +0200 Nick Kukich nkuk...@verilan.com wrote: Greetings, For those interested in monitoring sessions or participating remotely the following information may prove useful. ... All 8 parallel tracks at the IETF 87 meeting will be broadcast starting with the commencement of working group sessions on Monday, July 29, 2013 at 0900 CEST (UTC+2) and continue until the close of sessions on Friday, August 2nd. ...
dnssdext BOF (was: Re: Remote participants, newcomers, and tutorials (was: IETF87 Audio Streaming Info))
--On Friday, July 26, 2013 11:29 -0700 SM s...@resistor.net wrote: POSH has not published a session agenda. However, the BoF is listed on the meeting agenda. Is the BoF cancelled or will this be one of those willful violations of IETF Best Current Practices? On a similar note, according to its agenda, the core of the DNS-SD Extensions BOF (dnssdext) is apparently draft-lynn-sadnssd-requirements-01. The link from the agenda page [1] yields a 404 error, and attempts to look up either draft-lynn-sadnssd-requirements or the author name lynn in the I-D search engine yield nothing. If one thinks to go to the I-D search engine and enter just draft-lynn, draft-lynn-mdnsext-requirements-02 shows up, which I'm guessing is the relevant draft. FWIW, I also note that the posted agenda is heavily dependent on the Chairs and mentions an agreed charter. The Chairs are not identified, preventing interested participants from contacting them for information (and others from contacting them about errors like the one above), and there is no link or other pointer to the proposed agreed charter. So I am wondering why this BOF was approved, which AD is watching the BOF agenda, and why it is still on the meeting agenda. I am mentioning this on the IETF list only because it is another example of the point that I (and probably SM and others) are trying to make: If we are interested in newcomers, remote participants without years of IETF experience, and/or increased diversity, we should not allow these kinds of issues to become requirements for treasure hunts or other sorts of obstacles in people's paths. And, IMO, we should be especially careful about BOFs because they provide newcomers (present at the meeting or remote) good opportunities to get in at the beginning of new work items. john [1] http://tools.ietf.org/agenda/87/agenda-87-dnssdext.html
Re: Oh look! [Re: Remote participants, newcomers, and tutorials]
--On Saturday, July 27, 2013 08:38 +1200 Brian E Carpenter brian.e.carpen...@gmail.com wrote: And there is a Training section in the meeting materials page. It's empty... but thanks to somebody for putting it there. All we need to do is figure out how to pre-load it. And to remember that link appears on the main meeting page because it isn't on either of the agenda pages. I suggest again that these little treasure hunts work better for very experienced participants and regular participants who are very patient about searching for information, but much less well for newcomers, remote-only participants, and the diverse and curious potential participants we'd supposedly like to encourage. I still believe that the agenda pages should be one-stop shopping for these types of meeting program-specific bits of information, whether it be remote participation bits (or at least a pointer to where they can be found) or meeting material pages (ditto). It is slightly extraneous information, but I note that we have had a list of Areas and ADs on the agenda pages ever since I can remember. That information is much more easily located than most of the things I've been commenting on and, if one locates it (IETF Home Page -> IESG -> Members), one even gets contact information as a bonus. And the listing of AD names is pretty useless without contact info. More inline. ... (1) The note below strongly implies that none of those sessions are being audiocast. Why not, and can that be fixed? I think that would mean that the crew (partly volunteers) would need to mobilise 24 hours earlier. Not impossible, I suppose, but not free of costs either. Brian, there are two reasons I'm pushing on this set of issues. One is that there are folks like you and me (and, since he has dropped into a different part of the thread, SM) who are reasonably experienced participants but who are not likely to attend most or all f2f meetings in the future.
To the extent to which it is in the IETF's interest to keep us active --and I hope that it is-- a lot of this stuff ought to work (even though we will know about and, given enough patience, be able to find meeting materials lists, mechanisms to subscribe to slightly-hidden mailing lists, the actual names and locations of incorrect links to drafts, names of BOF Chairs and responsible ADs, etc.). If we are desperately concerned about hearing a particular tutorial, I imagine that, with a little planning, either of us could find someone to sit in the room and do a Skype call or equivalent if there were a functional network, or get someone to sit in the room with a voice recorder and make something that could be converted into an MP3 file for transmission after the network comes up. I assume that, were the question posed, there would be general IETF consensus that I run out of patience a lot faster than you do. I'm willing to concede that and agree that anything that doesn't irritate you too is my problem and I should live with it. I certainly would have a lot of difficulty arguing for folks going to a lot of extra trouble or expense on my account (or even on yours). However, the IETF has been having a lot of discussions about newcomers, diversity, and attracting new folks to participate and get work done. I think those populations will be better served if it is possible for people a lot less experienced than the two of us to participate actively and constructively without attending every meeting. I also think that, especially for many people from developing countries, universities, small companies, and far-away places, we will be far more successful in recruiting if we can encourage remote participation as a starting point, with the expectation of getting people physically to meetings only after the value to them and their organizations of doing so has been demonstrated.
I'd personally even favor making remote participation at a couple of meetings a prerequisite for most applications for ISOC's IETF Fellows program. But the above picture isn't going to happen unless we are serious and treat that seriousness as an integral part of our strategies about newcomers and diversity. Seriousness to me says that we get more careful about how experienced one has to be to find critical information, that we make sure remote participation works, and that we make any session that would be relevant to remote participants accessible to them (and with materials available as much as possible in advance and from easy-to-find places). Seriousness implies that, if there are extra costs, we figure out how to cover them (or how to cut somewhere else). Or, if we are not serious, it would probably be to the benefit of the community for us to face that and stop wasting energy and resources on outreach efforts that are expensive in one or the other (or both). best, john
Re: dnssdext BOF (was: Re: Remote participants, newcomers, and tutorials (was: IETF87 Audio Streaming Info))
--On Friday, July 26, 2013 22:48 +0100 Tim Chown t...@ecs.soton.ac.uk wrote: ... On a similar note, according to its agenda, the core of the DNS-SD Extensions BOF (dnssdext) is apparently draft-lynn-sadnssd-requirements-01. The link from the agenda page [1] yields a 404 error and attempts to look up either draft-lynn-sadnssd-requirements or the author name lynn in the I-D search engine yield nothing. Hi John, Apologies for this. The correct draft name, and the BoF chair contacts, are now in the agenda file at: https://datatracker.ietf.org/meeting/87/agenda/dnssdext/ Tim, Many thanks. Let me stress that I didn't set out to attack you or your BOF. You just lucked out and became the first example that came in handy. See below. FWIW, I also note that the posted agenda is heavily dependent on the Chairs and mentions an agreed charter. That means the charter agreed from the bashing of the draft charter in the previous 40 minutes, not that a charter is already agreed. If there is something to be bashed for those 40 minutes, I'd expect a link to at least a skeleton first draft. I note that the draft charter does have a link from the meeting materials page, just not from the agenda. But, modulo the comment below, that is a matter of taste to be worked out between you, Ralph, and the IESG. I am mentioning this on the IETF list only because it is another example of the point that I (and probably SM and others) are trying to make: If we are interested in newcomers, remote participants without years of IETF experience, and/or increased diversity, we should not allow these kinds of issues to become requirements for treasure hunts or other sorts of obstacles in people's paths. True. Though the chair names are on the posters linked in the materials page, which I assume is well-advertised to newcomers as access to slides is rather important.
As far as I know, the only advertisement is the link from the Agendas and Meeting Materials section of the main meeting page and the similar links from the Meetings entry on the IETF home page. Now, I'd personally like to see the New Attendees category on the main meeting page changed to New Attendees and Participants, with a link to a page that would give hints about where these things are and how to navigate around them. But that fairly clearly won't happen before Sunday, and YMMV. And also on the BoF wiki, which you should know about. Yep. I know about it and where to find it. But, as I explained in my note to Brian, I'm a lot more concerned about newcomers and remote participants without years of experience than I am about what I can find if I remember all of the reasonable places where I might look. ... Best wishes for a successful BOF. john
Re: dnssdext BOF (was: Re: Remote participants, newcomers, and tutorials (was: IETF87 Audio Streaming Info))
--On Saturday, July 27, 2013 00:37 +0100 Tim Chown t...@ecs.soton.ac.uk wrote: ... While we/you can try to guess what the problems are, it may be better to surveymonkey those who registered as newcomers in a couple of weeks and ask them about their experience, whether they were aware of certain things, and what could be done better. I hope that the mentoring program and assorted ask me dots will take care of any of these issues for anyone who is in Berlin. If those newcomers have problems finding these things, I hope they will bug someone. If they do and can't get answers, or don't bother asking, those are slightly different problems. So, right now, I'm personally more concerned about people who are trying to participate remotely (or understand the IETF remotely) for the first several times. Given that, in general, we have no idea who those people are, surveying them would be a little difficult. If a side-effect of these discussions is that we change things enough that we do know who they are (at least those who are willing to tell us in exchange for a bit more support, sympathy, and the ability to take remote-participant-specific surveys if those are offered), I'd personally consider that a good thing. best, john
Re: BOF posters in the welcome reception
--On Wednesday, July 24, 2013 09:22 +0300 IETF Chair ch...@ietf.org wrote: I wanted to let you know about an experiment we are trying out in Berlin. ... But we want as many people as possible to become involved in these efforts, or at least provide their feedback during the week. So we have given an opportunity for the BOFs to display a poster in the Welcome Reception (Sunday 5pm to 7pm). If you are attending the reception, take a look at the posters and look for topics that interest you. Someone running the BOF is also likely standing by, so you can also get directly involved in discussions, sign up to help, etc. We hope that this helps you all network with others even more :-) In the interest of encouraging remote participation and involvement in those BOFs, could these posters be made available online before the reception? Will they eventually be incorporated into the minutes? And, incidentally, is there a way for remote participants to sign up for one or both meeting-related mailing lists without registering (or using a remote participation registration mechanism, which would be my preference for other reasons)? thanks, john
Re: BOF posters in the welcome reception
--On Wednesday, July 24, 2013 11:17 +0300 Jari Arkko jari.ar...@piuha.net wrote: And, incidentally, is there a way for remote participants to sign up for one or both meeting-related mailing lists without registering (or using a remote participation registration mechanism, which would be my preference for other reasons)? I sent the mail to ietf-announce, so I would guess many non-attendees got it as well. Yes. I was thinking a bit more generally. For example, schedule changes during the meeting week, IIRC, go to NNall, and not ietf-announce. As a remote participant, one might prefer to avoid the usual (and interminable) discussions about coffee shops, weather, and the diameter of the cookies, but it seems to me that there is a good deal of material that goes to the two meeting lists that would be of use. Since I'm on those lists in spite of being remote (registered and then cancelled), I can try to keep track of whether anything significant to remote participants appears on the meeting discussion list this time if it would help. best, john
Remote participation and meeting mailing lists (was: Re: BOF posters in the welcome reception)
--On Wednesday, July 24, 2013 06:43 -0800 Melinda Shore melinda.sh...@gmail.com wrote: On 7/24/13 12:30 AM, John C Klensin wrote: Yes. I was thinking a bit more generally. For example, schedule changes during the meeting week, IIRC, go to NNall, and not ietf-announce. As a remote participant, one might prefer to avoid the usual (and interminable) discussions ... Yes, the meeting mailing lists are here: http://www.ietf.org/meeting/email-list.html Yes, but... (1) A hypothetical remote participating newcomer is expected to find that how? I note that the IETF home page links to mailing lists do not list either 87all or 87attendees in any of the obvious places. (2) That page provides a mechanism for subscribing to 87attendees (the cookies-and-coffee and, I assume, very soon bumped-from-hotel-attendees discussion list) but not a link to 87all, which is far more important for remote folks. If there is a way to subscribe to that without registering, I haven't found it. Note to remote participants: The cookies-and-coffee list is also where a lot of comments about the meeting network and related facilities appear. While most remote participants don't need to know how poorly the network works in some alternate hotel, information about Meetecho and WebEx status tends to appear there too. john
Re: Remote participants access to Meeting Mailing Lists was Re: BOF posters in the welcome reception
--On Wednesday, July 24, 2013 17:46 +0100 Tim Chown t...@ecs.soton.ac.uk wrote: I see no reason why the 87attend...@ietf.org list shouldn't be open to remote participants. Is that not the case already? We should be doing all we can to encourage participation. It is already. It is a bit hard to find, but it is there and open. ... Unfortunately 87...@ietf.org --the announce version of the list-- is where the really important things, like schedule changes, show up. And, at least as far as I can tell, there is no way for a non-registrant to get on that list. ... best, john
Re: Remote participants access to Meeting Mailing Lists was Re: BOF posters in the welcome reception
--On Wednesday, July 24, 2013 14:36 -0400 Barry Leiba barryle...@computer.org wrote: Unfortunately 87...@ietf.org --the announce version of the list-- is where the really important things, like schedule changes, show up. And, at least as far as I can tell, there is no way for a non-registrant to get on that list. Has anyone tried to subscribe on the listinfo page?: https://www.ietf.org/mailman/listinfo/87all I just did, using another email address, and that email address got the normal click here to confirm response. Barry, I'm sorry to be difficult about this, but the point I was trying to make was about access by remote relative newcomers. For them, at least, the question is not does the listinfo page work if one can find it or guess at its URL. Instead, suppose such a person goes looking with reasonable knowledge of the information on the IETF home page, the meeting main page, and perhaps even the Tao and previous Newcomer's Introduction slides. So, first she goes to the Meetings page for this meeting (http://www.ietf.org/meeting/87/index.html). Seems like Meeting Communication would be a reasonable place to look, and, lo, there is a Mailing Lists entry there. It points to 87attendees (plus runners, companions, and food links). Dead end. So, back to the main IETF page and the Mailing Lists entry and its links. 87all is an Announcement List, but that page (http://www.ietf.org/list/announcement.html) discusses only IETF-Announce, I-D-Announce, and IPR-Announce plus the IESG Agenda Distribution. No joy. If this lucky newcomer figures out that it is not a discussion list, she skips the Discussion List link (http://www.ietf.org/list/discussion.html), but all that is there is another link to http://www.ietf.org/meeting/email-list.html and hence information about 87attendees, etc. Not a hint so far that 87all even exists.
So her (most IETF-type men would have run out of patience by now) adventure then takes her to the Non-WG Lists page (http://www.ietf.org/list/nonwg.html). It starts out by assuring the reader that, if the list exists, it will be listed. Or at least that is how I would interpret attempts to list all the active, publicly visible lists that are considered to be related to the IETF, but are not the main list of any working group, in alphabetical order by list name -- a rather strong indication that 87all and 87announce ought to be listed there: they are certainly active and publicly visible, and it would be hard to claim that they are not related to the IETF. No joy... neither list appears there. If we are serious about remote participants, that list should be known, advertised, and accessible unless it really isn't used for anything but local logistical information (as Joel suggests). But it didn't take me long to find examples of announcements there (and apparently nowhere else) that would be of interest to remote participants. Examples: * At IETF 86, the Thursday Lunchtime Panel was announced there and apparently nowhere else (http://www.ietf.org/mail-archive/web/86all/current/msg00033.html). I don't know if it was made available to remote participants or not but, if it wasn't and I were a remote participant with significant interest in the subject matter, I'd want to know about the presentation and have ample time to ask that appropriate arrangements be made. * I also found an agenda change announcement (http://www.ietf.org/mail-archive/web/86all/current/msg00031.html) that was apparently posted only there. This one might not count if the change was made before the meeting started. * However, if I go back a few more meetings, I find http://www.ietf.org/mail-archive/web/84all/current/msg00010.html, which is a meeting room change. And meeting room changes affect audiocasts.
Wanda's note about the web agenda is relevant, but our hypothetical newcomer isn't given that warning on the meeting page or, since she has been unable to subscribe to NNall, the announcements there. Recommendations, with some comment about things that aren't rocket science included by reference: (1) Put 87all and 87attendees on the Non-WG mailing list page. Or, perhaps better yet, mention there (as well as on the main discussion list page) that there are meeting-specific lists and include a link to the Meeting Email Lists page. It couldn't hurt and might help. (2) Modify the meeting email lists page to discuss NNall as well as NNattendees and explain what it is for and that, while there will be some local logistics (maybe mostly local logistics), remote participants should probably keep an eye on it as well as ietf-announce. (3) While it is almost certainly too late to populate it before Berlin, I think the meeting page template could use a Remote Participants main section with pointers to hints and other relevant materials, including which mailing lists one should watch and that the web version of the meeting agenda should be refreshed at least daily. Wouldn't hurt
Re: Last call: draft-montemurro-gsma-imei-urn-16.txt
Hi. Borrowing from several other notes and comments, it seems to me that we have three interlocking issues that keep recurring and producing long discussions. They are by no means independent of this particular draft, but seem to be becoming generic. (1) Are we willing to publish (or even standardize) specs whose nature is to provide a vehicle for making privacy-sensitive information public? The arguments against doing so seem obvious. The arguments for doing so include those who claim they need this will do it anyway, so we are better off publishing a spec that will at least reduce interoperability side-effects and permit spelling out the issues as privacy considerations or security ones. (2) Is turning hardware identifiers (physical-layer objects) into application- or user-level identifiers an acceptable idea? Are DNS RRTYPEs that map application-level identifiers into other identifiers that can loop back through the DNS without guarantees that the process will terminate part of the same problem or a different one? (3) Do either of the above answers change if the proposal comes from another SDO or a major industry group? I don't know the answers, but I'm pretty sure that trying to address each of these issues separately every time a new protocol, RRTYPE, or URN (or URI) type comes along that interacts with one of them is not the way to go. It seems to me that we ought to have something along the lines of RFC 1984 in our future and that a plenary discussion might be a useful first step. I don't suppose the IAB or IESG would be willing to postpone or push something out of the announced agendas to allow for that discussion, which, given this Last Call and the recent one over RRTYPEs, would seem to be critical path. Any volunteers to get in front of the mic lines? john
Re: Last call: draft-montemurro-gsma-imei-urn-16.txt
--On Saturday, July 20, 2013 15:51 +0100 Stephen Farrell stephen.farr...@cs.tcd.ie wrote: ... But, even if the outcome wasn't a BCP along the lines I'd prefer, I think such a beast would still be worth having if it meant we could avoid a whole lot of these kinds of similar discussions on individual drafts. That was exactly what I was thinking. I think the security analogy is a combination of BCP 61 (RFC 3365) and RFC 1984. That is a quibble but relates to the question of whether draft-iab-privacy-considerations is sufficient. I think it is necessary, but not sufficient. The other piece would be a fairly clear and, ideally, consensus statement about what we do and do not intend to do and why. IMO, the only way to make progress on avoiding these similar discussions on individual drafts would be to develop such a consensus and focus the discussions on it. john
Re: Last call: draft-montemurro-gsma-imei-urn-16.txt
--On Saturday, July 20, 2013 15:19 -0700 Doug Barton do...@dougbarton.us wrote: On 07/20/2013 01:48 PM, Andrew Allen wrote: I think IANA registration of namespaces has a lot of value. I think backfilling registrations for poached identifiers sets a bad/dangerous example. Doug, This is another of those arguments that I wish we could avoid repeating for each individual document. One of our oldest precedents and principles is that it is better to have things registered than not, even when they are ugly. Having such registrations gives us a fighting chance of avoiding the interoperability problems associated with two different parties accidentally using the same name for different purposes. It also provides a centralized mechanism for mapping identifiers onto documentation. That is both a good thing in itself and, in some cases, provides a place to include warnings about improper uses, nasty side-effects, and security and/or privacy problems. The Internet is not an operating system (or even the PSTN) in which the arbiters of taste can keep something from being incorporated in a release or used. If someone is determined to use a particular capability under a particular name, we can try to talk them out of it but, if they are still determined, the only question is whether we are better off with registration and documentation than without them. And, generally, we are better off with the first. Discussions about poaching, squatting, and even stupidity or bad taste are interesting but not generally helpful. john
Re: Last call: draft-montemurro-gsma-imei-urn-16.txt
--On Saturday, July 20, 2013 11:36 -0700 Tim Bray tb...@textuality.com wrote: So if it's going to be used, exactly as specified, whatever we do, then what value is added by the IETF process? -T See my earlier note, but mostly to aid in getting the documentation right. For example, to the extent that the recent discussion results in a more complete treatment of privacy and/or security considerations in the documentation, that is a net improvement and added value. john
Re: Last call: draft-montemurro-gsma-imei-urn-16.txt
--On Saturday, July 20, 2013 19:17 -0700 Tim Bray tb...@textuality.com wrote: Fair enough. I think it would be reasonable to ask that: - the draft include the word privacy - the draft discuss the issues around relying on an identifier that persists across changes in device ownership There may be an issue concerning a SIP-related identifier which is unavailable on millions of mobile devices which do not have IMEIs, but it's quite possible that it's non-applicable in the context of the draft. -T Personally, I'd consider getting all three of those issues addressed in the document (including a discussion of the applicability of the latter and a serious discussion of the privacy issues) as adding value... and as requirements for RFC publication in the IETF Stream. john
RE: Internationalization and draft-ietf-abfab-eapapplicability
--On Wednesday, July 17, 2013 07:56 -0700 Bernard Aboba bernard_ab...@hotmail.com wrote: Sam said: We don't get to place requirements on applications except to say what they need to do to use EAP. The protocol requirement for that is that applications using EAP need to know what character set they have so that EAP can convert the identity to UTF-8 and so that EAP methods can do any needed conversions. [BA] That sounds right. I think it is right, with one qualification that might reasonably be reflected in the document (perhaps as a Security Consideration). In the complex world of multiple coded character sets and models for designing them, there is no guarantee of a unique, reversible mapping from any given one of them to and from Unicode. There is obviously no problem with ASCII (or, more precisely, ISO 646 IRV) or ISO 8859-1 because the Unicode ranges 0020..007F and 00A0..00FF are defined in terms of those specs. But, if the CCS used in an application is some odd code page, mappings onto Unicode may be a matter of conventions and judgment, not a standard or universally-accepted rule. That problem is further complicated by issues of normalization and precomposed characters versus combining sequences, but would exist even without it. As Sam more or less points out, this is an application problem, not an EAP one, but the conversion process may lead to surprising EAP results, particularly including false negatives. best, john
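The false-negative risk described above is easy to demonstrate with normalization alone. A minimal Python sketch (the identity string is invented for illustration):

```python
import unicodedata

# The same visible identity encoded two ways: a precomposed code point
# versus a combining sequence. A naive codepoint-for-codepoint
# comparison of the two identities fails.
precomposed = "Jos\u00e9"   # 'é' as U+00E9 LATIN SMALL LETTER E WITH ACUTE
combining = "Jose\u0301"    # 'e' followed by U+0301 COMBINING ACUTE ACCENT

print(precomposed == combining)   # False: a surprising mismatch
print(unicodedata.normalize("NFC", combining) == precomposed)  # True after NFC

# By contrast, ISO 8859-1 maps directly onto U+0000..U+00FF, so its
# round trip through Unicode is unambiguous.
assert precomposed.encode("latin-1").decode("latin-1") == precomposed
```

If one endpoint normalizes its UTF-8 identity and the other does not, the comparison fails even though both sides typed "the same" name, which is exactly the kind of surprising result mentioned above.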
Re: The case of dotless domains
--On Tuesday, July 16, 2013 11:07 -0400 Ofer Inbar c...@a.org wrote: ... What this brings to mind is that we had a DNS system that was vulnerable to the addition of something to the DNS that people had expected nobody would make the mistake of doing, but it happened and caused damage, and the net reacted by altering how DNS software works in order to protect against that damage. At the time, the obvious defensive change was don't do implicit domain search. If dotless domains cause damage as many people here predict, what I'm saying is that I think we'll react similarly, and that I guess the defensive change people will widely deploy is to reject A/MX records at the top level. Two additional observations about this. Ofer's explanation about the edu.com example and the resultant ban on heuristic search is consistent with RFC 1535 and what little I remember of the history. However... * The RFC 1535 defensive change was far more practical in 1993 than it is today, just because of the size of the Internet, the number of deployed DNS servers and resolvers, and the difficulties of deploying required upgrades. So the level of harm, or its likely duration, should dotless TLDs be deployed, is greater today, or at least less likely to yield quickly to a defensive change. * A different example of searching issues caused a good deal of disruption when it came up (several years earlier than the EDU.COM. case) and may be relevant to ICANN's current plans and their likely effects. In that case, a very large number of computer science departments in universities all over the world quite reasonably were delegated domains like CS.university-name.EDU, often resulting in host1.CS.university.EDU, host2.CS.university.EDU and email addresses like usern...@cs.university.edu.
Whether by relying on the heuristics that RFC 1535 banned or by explicit search, institutions set up name completion so that a reference from within that institution to username@CS (or username2@chem or username3@art -- you get the idea) worked as everyone expected, yielding usern...@cs.university.edu, etc. Usually, host1.CS did too -- this is not just about dotless domains. Then the TLD for Czechoslovakia was delegated under the then-assigned ISO 3166 code of CS and unpleasant things started happening. Users in some institutions couldn't find Czech sites at all; others had no problems. In some cases in which search rather than mere completion was used, some names would be found, some names wouldn't, and false positives became possible. If there is advice for ICANN in this it is that, when DNS searching scenarios are involved, it is not sufficient to worry about possible conflicts between proposed new TLDs and existing private-use pseudo-TLDs (such as local.). Instead, conflicts with existing second- or third-level labels (and perhaps ones even further down the tree) have to be considered if completion or search configurations involving those labels might give them the same appearance as a TLD or subdomains of that TLD. Whether the IAB should have included any of this in its Statement is, IMO, another matter. My personal bias is that, if the IAB starts making Statements about the implications of a technological choice, they should be comprehensive about the relevant subject. That principle would call for discussion, not only of the email issues that started this thread, but of the CS. case and many of the other issues that have been mentioned. On the other hand, there are arguments against that principle, especially when a body like ICANN is concerned and there is reason to believe that any statement more than a paragraph or two long will be unread by many people in the intended audience.
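The CS. completion scenario above can be sketched as a toy resolver (hypothetical zone data and addresses; real resolvers use /etc/resolv.conf search lists, but the qualification logic is the same in spirit):

```python
# A toy model of search-list name qualification: a not-fully-qualified
# name is tried against each search suffix in order before (or instead
# of) being tried as-is. Zone data and addresses are invented.
ZONE = {
    "cs.university.edu": "10.0.0.1",  # the local CS department
    "cs": "192.0.2.1",                # the CS (Czechoslovakia) TLD as a dotless name
}
SEARCH_LIST = ["university.edu", ""]  # local suffix first, then the name unchanged

def resolve(name):
    """Return the first qualified form of `name` that exists, or None."""
    for suffix in SEARCH_LIST:
        fqdn = f"{name}.{suffix}" if suffix else name
        if fqdn in ZONE:
            return fqdn
    return None

# Inside the university, a bare "cs" always finds the department, so
# the newly delegated CS TLD becomes unreachable by its short name:
print(resolve("cs"))  # cs.university.edu
```

Flip the order of SEARCH_LIST and the opposite failure appears: local users lose their own department instead. Either way, the collision is between a TLD and an existing lower-level label, not between two TLDs.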
Personally, I would have preferred it had the IAB been explicit about what issues they were and were not addressing if they decided to avoid a comprehensive discussion -- at least then, people might be debating their choices but not whether they are exhibiting their ignorance. But that is just my opinion. john
Re: I-D Action: draft-barnes-healthy-food-07.txt
--On Tuesday, July 16, 2013 18:09 +0100 Adrian Farrel adr...@olddog.co.uk wrote: ... Personally, I will strongly try to be vegetarian, but eat meat rather than starve (a situation that arises when travelling). ... I'm in much the same situation, but suggest that part of this feeds back into some of the venue selection issues: if a venue is chosen that forces you (or me or others) into a "meat or starve" or, much worse, "eat something severely damaging to health or beliefs or starve" situation, is that really an acceptable venue? And, if it is not and it is chosen anyway (either deliberately after considering other factors or out of ignorance), who is accountable and to whom? john
Re: IAB Statement on Dotless Domains
--On Saturday, July 13, 2013 16:28 + John Levine jo...@taugh.com wrote: I guess I'm missing something. How exactly is having a gTLD going to bring in the Big Bucks? Do people actually type addresses into the address bars on their browsers any more, or do they just type what they're looking for into the search bar? Let's just say you're not allowed to ask that question, any more than you can ask a fundamentalist Christian how he knows he's going to heaven. Noel asked at least two different questions. One is not supposed to ask either of them, whether your analogy is appropriate or not. (And see my note from yesterday.) You are definitely not allowed to look at the history of .AERO, .TRAVEL, .JOBS, .ASIA, .MUSEUM, .COOP, .MOBI, .TEL or .PRO. One could quibble about that list -- I'd think about deleting one or two that actually met the rather narrow expectations for them and maybe add a few others that didn't. But, yes, the track record of big profits from selling names out of new gTLDs, especially if defensive registrations are excluded, has been abysmal. As far as I know, the only completely successful business model for post-2001 new gTLDs that were not intended as a service for a restricted community has involved an extreme form of the "encourage defensive registrations" model, so extreme that others have described it as extortion. john
Re: IAB Statement on Dotless Domains
Hi. I've been trying to stay out of the broader conversation here, but it seems to have gone far enough into general issues... Disclaimer and context: I have felt that the DNS was better off with deep hierarchy since before the work that led to RFC 1591 started. I hadn't changed my mind when the NRC report [1] tried to stress that it was much more important to look at navigation issues than at how many names one could sell. I felt the same way during the gTLD-MOU effort and, during the period leading up to ICANN, argued that generic TLDs should be encouraged to compete on services, not only price. I think we would have been better off if we had called this critter the "domain mnemonic system" because we may have been doomed as soon as the word "name" and the folks who design user interfaces and marketing campaigns caught up with each other. For the same reason, I thought TLD labels should be treated as codes with names being a user interface property and have had misgivings about top-level IDNs because I was concerned that they would immediately introduce name translation problems [2]. I haven't changed my mind much in the last several years and believe that the only likely effect of having a few thousand TLDs will be to increase the rate at which users --most of whom already don't know the difference between a domain name and a search term-- go to search engines rather than trying to remember and use any but a very few domain names. I assume there are folks around ICANN who aren't aware of those views and the reasoning behind them, but it isn't because either my versions of them or those of others have been a secret. That said: (1) It is clear to me that ICANN is committed to the gTLD course --including generic terms, IDNs and variants, and a number of other things that may be ill-advised-- and that they, case-by-case decisions about a few names notwithstanding, are not going to change course unless something happens externally that gives them no choice.
(2) In the context of the above, making statements at this time is largely an effort in a**-covering: allowing various entities to say, if something goes wrong, "don't blame us, we warned you." If the IAB really wanted to make a statement that might have affected the overall situation, the window on that probably closed a year or two ago. Perhaps they should have done that, perhaps not, but it is too late. (3) If the IAB is going to make statements now, for whatever reason, I believe those statements should be technically comprehensive. Because I don't expect such statements to have any real effect, that has as much or more to do with IAB long-term credibility as it does with statement content. For that reason, focusing this one on the DNS and ignoring the applications consequences is probably suboptimal. (4) There may be an IETF community issue with how the IAB is handling statements like this. On the one hand, I believe it is very important that the IAB be able to reach conclusions and expose them to the wider world without IETF consensus approval. On the other, I think that their taking advantage of that too often, especially when there should be reason to believe that there are useful perspectives in the community that they may not have internally, represents poor judgment. IMO, there has to be a balance, the IAB has to decide where that balance lies, and the community's best recourse if they regularly get it wrong involves conversations with the Nomcom. My own guess is that this new gTLD stuff is going to work out badly for the Internet. In one scenario, some new gTLD applicants get the domains they asked for, things don't work out as they expected when they applied (whether technically or economically makes no difference), and they respond unhappily (which might involve lawyers but probably doesn't really affect the IETF or the Internet in a substantive way). In another, users go even more to search engines and the value of domain names drops significantly.
That could, indirectly, have bad effects on ISOC and how the IETF budget is supported. In still another, there could be some nasty effects on ICANN and/or its leadership that could disrupt whatever balance now exists in Internet governance and/or the interactions among players in, e.g., the Internet protocol space. But, IMO, the thing that all these issues and discussion threads have in common is that we are in between the time that different plans could have been made and the time that we find out how things are really going to sort themselves out. A statement here and there aside, we mostly need to wait... and debates about what happened in the past and why might be interesting
Re: IETF registration fee?
--On Thursday, July 11, 2013 10:34 -0400 Phillip Hallam-Baker hal...@gmail.com wrote: ... Using paid conferences as a profit center is a risky long term prospect at best. Refusing to adapt the format of the conferences to protect the profit center is worse. Or adapting the format to attract more paying attendees, such as what we have sometimes called "tourists," with no real expectation that they will do work, because it increases the income. Still better than building a funding structure based on sale of publications, however :-( john
Re: Regarding call Chinese names
--On Thursday, July 11, 2013 11:26 -0400 Noel Chiappa j...@mercury.lcs.mit.edu wrote: From: Simon Perreault simon.perrea...@viagenie.ca I think I've seen Chinese names written in both orders. That is, sometimes Hui Deng will be written Deng Hui. Am I right? Does this happen often? I'm not certain about Chinese, but I know that with Japanese names, which have the same issue (family name first in native language), both orders happen a fair amount. (This also happens with Hungarian names, which also use family-first - rare in the West, but it does happen.) ... Not that rare if one includes family name in the middle as another case that is unusual relative to normal English usage. Worse, the standard convention for expressing a Chinese names in Latin characters differs among Chinese-speaking populations/ countries. One can usually deduce which one is the family name/ surname from context (not limited to knowing a certain amount about the language, but it sometimes requires a lot of context. And, as you know, the most family names are one character and most personal names are more than one rule mentioned in the draft works for Chinese names but not for Japanese ones. Incidentally, purging first name and last name from our vocabulary would be a big step in avoiding confusion. Hence the common practise in some academic circles of giving the family name in all capitals, to show which it is. So whether you see Junichiro KOIZUMI or KOIZUMI Junichiro, you know what you're seeing. Not just in academic circles but in some UN ones and elsewhere. I strongly recommend wider adoption of this convention and note that it has been used in some RFCs (but not consistently). Maybe the IETF should adopt that practise in, e.g., attendee lists? I'm not sure what to do about e.g. RFC's - there's a pretty strict X. Yyyy form for names, where X is the given initial, the Yyyy the family name. 
Do we want to change that, or just say 'sorry, family-first people, you'll have to mangle your name to fit the RFC format'? The upper-case family name convention helps if there is any possible ambiguity. I also note that this is all sufficiently confusing that at least one RFC was published with First-letter-of-family-name. Personal-name for the editor. Comments to the RSE might be in order and some of the suggestions in this draft probably belong in the Style Guide, especially as things evolve toward having author/editor names in their original scripts. ... Do we want to encourage people to do the capitalization in their email addresses (the full-name part, not the mailbox name part), so that people know? That's obviously not under our control, but we might _suggest_ it. Indeed. Folks, might I encourage making editorial and similar suggestions in notes to the authors rather than trying to edit the document on the IETF list? best, john
Re: IETF registration fee?
--On Wednesday, July 10, 2013 14:50 -0400 Donald Eastlake d3e...@gmail.com wrote: The IETF values cross area interaction at IETF meeting and attendees have always been encouraged to attend for the week. Allowing one day passes is a recent phenomenon to which some people, including myself, are on balance opposed. I would add that the registration fee covers a number of IETF expenses that are fixed and independent of the number of people-days at a meeting. So, independent of Donald's concern, even if one were to use a formula similar to the one you suggest, it would be more like (1/5 * day-cost-part-of-fee) + fixed-expense-part-of-fee, or at least a much larger fraction of the latter. john
Re: Final Announcement of Qualified Volunteers
--On Saturday, July 06, 2013 14:53 -0700 NomCom Chair 2013 nomcom-chair-2...@ietf.org wrote: I am pleased to announce that we have 140 qualified individuals who have generously volunteered to serve as voting members of this year's Nomcom. Allison, Given my previous comment about statistical assertions, a quick look at this list sheds some additional light on cross-sections of the community. (1) While you have 140 people from whom to do a random draw, the "no more than two volunteers with the same primary affiliation" rule (RFC 3777, Section 4, Bullet 17) means that the actual nomcom pool is only 81 even though some of the people are indeterminate (I've made some assumptions about company boundaries -- other assumptions would yield very slightly different results). That number is obtained by listing the companies and then counting any company with more than two volunteers as having only two. Neither the 140 number nor the 81 number is incorrect; they just measure different things. (2) Four companies account for 44.3% of the volunteers. As others have pointed out, while having a very large number of volunteers cannot produce more than two Nomcom voting seats due to that restriction, it can virtually guarantee (statistical randomness notwithstanding) that one will end up with one or two seats. Specifically, if the people selected constitute a cross-section of the volunteers, then more than 10% of the pool (14 volunteers) would predict to at least one seat. Two companies exceed that number and a third is very close. Randomness could make the actual numbers either better or worse, but, if an organization's goal were to assure at least one seat, that seems easily accomplished. Of course, having lots of volunteers doesn't imply that an organization was trying to accomplish any such thing -- it could just have a lot of public-spirited IETF participants and policies that allow people to commit the time.
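The "14 of 140 predicts to at least one seat" arithmetic can be checked directly. A sketch in Python (it assumes a uniform random draw of the ten voting seats and ignores the two-per-affiliation cap, which only trims the upper tail):

```python
from math import comb

POOL, SEATS, COMPANY = 140, 10, 14  # this year's volunteer numbers

# Hypergeometric draw: probability that none of the company's 14
# volunteers lands among the 10 voting seats, and its complement.
p_none = comb(POOL - COMPANY, SEATS) / comb(POOL, SEATS)
p_at_least_one = 1 - p_none

# Expected number of seats for the company: exactly 1.0 on average.
expected_seats = SEATS * COMPANY / POOL

print(f"P(at least one seat) = {p_at_least_one:.2f}")  # roughly 0.66
```

So a company supplying 10% of the pool expects one seat on average and lands at least one about two times in three; "virtually guarantee" overstates it slightly, but the point about large volunteer blocks stands.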
(3) It is probably too late to even discuss it for this year (see below) but it occurs to me that, if one wanted to minimize the odds of organizations trying to game the nomcom selection process, it would be rational to do a two-step draw, first randomly selecting two volunteers from any organization offering more than two and then including only those two in the final selection process. On the other hand, that would give you around 81 candidates for the final selection this year. If running the final selection against order 140 people rather than order 81 causes the community to believe that it has a better sample, then that option probably would not be appropriate. I am not, however, convinced that we would actually have consensus for minimizing those odds, nor about whether a company's ability to nearly guarantee that at least one of its employees will be on the Nomcom by providing a large fraction of the volunteers violates the provision of Section 4, Bullet 16, of RFC 3777 requiring that the method be "...unbiased if no one can influence its outcome in favor of a specific outcome." Actually, if I read Bullet 16 correctly, the choice of methods is up to you, with RFC 2777 (and 3797) being just examples. So, in theory, you could make that change. I wouldn't personally recommend it, but you are the one who has personal responsibility for things being fair and unbiased while this is just a statistical exercise for me. best, john
Re: Final Announcement of Qualified Volunteers
--On Tuesday, July 09, 2013 13:49 + Ted Lemon ted.le...@nominum.com wrote: I find the presumption that IETF attendees employed by companies that send large number of attendees are robots to be somewhat distasteful. It also doesn't match my experience. I am sure that _some_ attendees from large companies are just as partisan as you fear, but some are not. So I am not convinced that your proposal would have a good outcome. It was _not_ a proposal, merely an analysis of the numbers and an exploration of alternatives. I also didn't say "robots" or anything that could be reasonably construed as "robots." I don't even have any experience that would lead me to expect that a company would expect any selected Nomcom members to march in lockstep. I do note that the "no more than two people with the same primary affiliation" rule is part of RFC 3777 (BCP 10) and take that as an indication that the community was unhappy with at least the appearance of one company having more than 1/5 of Nomcom voting membership. That limit is not part of any proposal I or, to my knowledge, others have made recently. With regard to that limit, my analysis is merely an exploration of how the intent of that rule might best be satisfied. I'd welcome a discussion of whether the analysis is correct or not. You might reasonably believe that it is irrelevant. But, as far as disagreeing with a proposal or not, please wait until someone makes one (fwiw, it won't be me). john
Re: Final Announcement of Qualified Volunteers
--On Tuesday, July 09, 2013 19:43 -0400 Donald Eastlake d3e...@gmail.com wrote: Hi John, Excuse me for replying to just part of your message below: No problem. I found your explanation helpful. Two observations, at the risk of repeating myself: (1) I did not make a proposal. I did point out that there were other possibilities within the general bounds of the "no more than two" rule. Beyond that, about as far as I'm willing to go is to say that different mechanisms (the current ordered list, selecting a company down to two candidates, your "square root of V" alternate suggestion, and maybe some other things) each have advantages and disadvantages and, probably, each optimizes for something different or deals with a different marginal case. I suspect that, most of the time, the differences in practice are likely to be trivial (or smaller). But I haven't even tried to do the analyses in large part because I think they would be much too speculative. (2) In response to Brian, you wrote... That said, the not more than two from the same employer rule was written in anticipation of a theoretical problem; it seems... I think not. If I recall correctly, there was at least one nomcom with three voting members with the same affiliation. While there was no particular evidence that these voting members were acting as other than individuals, the consensus was that it just smelled bad and so the limit of two ... The bad smell issue is, IMO, far more important than any discussions of whether people or organizations have misbehaved or might misbehave in the future. It is especially important should we have another round of discussions about antitrust policies because, for an SDO, the ability, even in principle, for one organization or set of interests to dominate a body or its leadership selection process can easily create a lawyer's playground.
I don't think it would be wise to try to completely eliminate that risk even if it were possible, but passing a smell test is nonetheless important for multiple reasons. best, john
Re: Final Announcement of Qualified Volunteers
--On Sunday, July 07, 2013 19:50 +0300 skar...@science.alquds.edu wrote: I am just wondered why there is any names from Arab world, no volunteers or no acceptance for Arab people. Thanks for giving chance to ask. Keeping in mind that people have to volunteer their own names (no one else can do it for them) and that volunteers for the Nomcom need to meet various requirements (of which the most restrictive are face to face attendance at three of the last IETF meetings and a commitment to attend up to a year or 18 months of upcoming IETF meetings face to face), I think your question turns into three or maybe four others: (1) Assuming that there are "Arab people" (quotes because I'm not certain everyone in the IETF community would identify that group the same way you do) who can satisfy those requirements to volunteer, why didn't any of them volunteer? (2) If more Arab people don't participate sufficiently in IETF to meet those requirements, and hence to be eligible to volunteer, why not? (3) Are there Arab people participating actively in the IETF who would volunteer for the Nomcom if the requirements were different but still met the IETF community's reasonable needs? Note that there is a draft on that subject floating around (draft-moonesamy-nomcom-eligibility).
Speaking quite personally and with the understanding that I may be in the minority, I believe that the current 3 of 5 rule tries to provide a single measure that captures two entirely reasonable requirements: that a nomcom volunteer understand the IETF community and how it functions and that the volunteer understand the current situation in the IETF well enough to meaningfully participate and contribute to the Nomcom's work. It may be that the formula is wrong or that, as I now believe, we should be separating those concerns into two separate requirements, or not, but I think that it will be a very long time before the f2f prerequisite is eliminated entirely and perhaps longer before the community agrees that the Nomcom doesn't need to meet f2f. (4) If you meet the 3 of 5 requirement and assuming you are an instance of "Arab people," why didn't you volunteer? And, especially if you don't meet that requirement, does the IETF expect to see you in Berlin as a first step to qualifying for future Nomcoms? best, john
RE: IETF 87 Registration Suspended
--On Friday, July 05, 2013 07:40 +0100 l.w...@surrey.ac.uk wrote: It strikes me that 'membership fees' as opposed to 'entrance fees' could work around this payment issue. Or incur a different tax... But the use of a term like "membership fee" has profound implications for what the IETF claims is our way of doing business. Folks, it is clear that this is both inconvenient and complicated. Would it be possible to just let the IAOC engage in whatever discussions and consultations are needed --i.e., allow them to do their jobs-- without endless amateur [1] speculation on what is going on and what should or could be done about it? Since people are obviously curious and in the interest of openness and transparency, I hope that the IAOC will explain the details and the solutions when they have things under control. But let's let them get them under control first. Just my opinion, of course. john [1] both amateur lawyers and amateur international taxation experts. Anyone who is part of the conversation who is _not_ an amateur should probably volunteer (or market) her or his services offlist to the IAOC or directly to Ray at i...@ietf.org
Re: [IETF] Re: Appeal Response to Abdussalam Baryun regarding draft-ietf-manet-nhdp-sec-threats
--On Wednesday, July 03, 2013 13:02 -0400 Warren Kumari war...@kumari.net wrote: Thank you -- another worthwhile thing to do is look at who all has appealed and ask yourself "Do I really want to be part of this club?" I am honored to be a member of that club. Remembering that appeals are, as others have pointed out, a mechanism for requesting a second look at some issue, they are an important, perhaps vital, part of our process. We probably don't have enough of them. Effectively telling people not to appeal because they will be identified as kooks hurts the process model by suppressing what might be legitimate concerns. In addition, it is important to note that the page does _not_ list every appeal since 2002. If one reads Section 6.5 of RFC 2026, it describes a multi-step process for appeals in each of a collection of categories. The web page lists only those that were escalated to full IESG review. That is important for two reasons: * The majority of appeals, and a larger majority of those that are consistent with community consensus or technical reasonableness, are resolved well before the issues involved come to the formal attention of the full IESG. If an issue is appealed but discussions with WG Chairs, individual ADs, or the IETF Chair result in a review of the issues and a satisfactory resolution, then that is an outcome that is completely successful in every respect (including minimization of IETF time) but does not show up in the list on the web page or statistics derived from it. * A few minutes of thought will probably suffice to show you that appeals that have significant merit are far more likely to be resolved at stages prior to full IESG review. By contrast, a hypothetical appeal that was wholly without merit, or even filed with the intent of annoying the IESG, is almost certain to reach the IESG and end up on the list, badly distorting the actual situation. best, john p.s.
to any IESG members who are reading this: community understanding of the process might be enhanced by putting a note on the appeals page that is explicit about what that list represents, i.e., only appeals that reached full IESG review and not all appeals. Other than a *very* small minority of well known and well respected folk the http://www.ietf.org/iesg/appeal.html page is basically a handy kook reference.
RE: NOMCOM 2013-14 Volunteering - 3rd and Final Call for Volunteers
Allison, Just one or two observations...

--On Tuesday, July 02, 2013 13:50 +0800 rex corpuz rex_corpuz2...@yahoo.com wrote: ... The more volunteers we get, the better chance we have of choosing a random yet representative cross section of the IETF. Respond to this challenge and strengthen our statistical significance...

You must know that statement isn't true, and I think it would actually help the IETF if we stopped kidding ourselves about it. From an almost-elementary statistical standpoint, increasing the sample size drawn from a particular subset of a population does nothing to strengthen the statistical significance or representativeness of the sample with respect to the total population. As other recent conversations have illustrated, that particular subset excludes those who don't attend lots of meetings f2f, it excludes those who are willing to serve in positions the Nomcom appoints (and who believe that they have or could obtain the resources and support to do so), and it excludes anyone who lacks the time, support, and resources to make a major commitment to the Nomcom for an extended period.

Some of the community would argue that those restrictions are A Good Thing and create a better Nomcom than we would have with a representative cross section of the IETF. Others would (and have) argued that those restrictions are both a problem in themselves and an important contributor to less diversity than they think desirable. But it is, IMO, impossible to argue that a group selected under those restrictions represents a statistically valid sample of the community (or, in different language, a cross-section of that community). Selecting more people from that same pool doesn't increase the odds of getting a valid cross-section of the community; it just increases the odds of getting a valid cross-section of the pool.
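The statistical point above can be illustrated with a toy simulation (the population sizes, the "attendee" attribute, and all of the numbers below are invented purely for illustration, not drawn from any real IETF data): drawing a larger sample from a biased pool drives the sample mean toward the pool's mean, not the population's.

```python
import random

random.seed(42)

# Hypothetical population of 1000 people; code some attribute of interest
# as 1 or 0. Suppose 30% of the population has the attribute.
population = [1] * 300 + [0] * 700   # population mean = 0.30

# Hypothetical volunteer pool of 300, skewed so that 90% have the attribute
# (e.g., frequent f2f meeting attendance). The skew is the point: the pool
# is not a random draw from the population.
pool = [1] * 270 + [0] * 30          # pool mean = 0.90

def sample_mean(group, n):
    """Mean of a simple random sample of size n drawn without replacement."""
    return sum(random.sample(group, n)) / n

# A larger sample from the biased pool converges to the *pool's* mean
# (0.90), not the population's (0.30). Sample size reduces variance;
# it never corrects selection bias.
small = sample_mean(pool, 10)
large = sample_mean(pool, 200)
```

Under these assumed numbers, `large` always lands within 0.1 of the pool mean 0.9, far from the population mean of 0.3, no matter how many more volunteers are drawn.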
Again, I'm not arguing, at least in this note, that the pool from which you are soliciting volunteers is inappropriate, or that it isn't the best we can do, or that it isn't what we want, whether it is or not. But we should stop pretending that it represents a statistically valid cross-section of the community, much less that getting more volunteers has anything to do with that. best, john
Re: RFC 6234 code
--On Friday, June 28, 2013 16:41 -0400 Joe Abley jab...@hopcount.ca wrote: If you really think you see a legal difference in doing the second, fine; I propose that you are just searching for problems that do not exist. Quite possibly they don't, and I'm not presuming to talk for John. But the vague thoughts that crossed my mind were the issues that allowed Phil Zimmermann to publish PGP source code in book form and avoid conviction for munitions export without a licence.

That was one of the examples I had in mind, along with a few others. However, given Paul's extensive legal experience and knowledge of the details and applicability of export regulations and associated case law, I apologize for wasting the time of people on the list by raising this obviously spurious issue. I am delighted that, on the basis of that knowledge and experience, Paul was willing to step in on this issue and assure us that there is nothing to worry about. john