RE: Internet SYN Flooding, spoofing attacks
Steve, Let's be clear: a DoS attack is something the end point itself can do very little to prevent, since it usually fails or succeeds upstream of that end point. Therefore, the end point relies on its upstream ISPs to "do the right thing" and indeed, each of those ISPs relies on other ISPs to similarly filter. Each point can mitigate the damage to the point where in sum these attacks become ineffective. Each RPF check can remove bad packets. Each violated ACL can remove and LOG the bad packets. These are the best controls available today. Shall we not use them? Also, we raise the bar from some kid injecting packets to someone breaking into an ISP, a more difficult challenge (at least a level 3 attack on my Dungeons and Dragons guide of Hackers ;-). - Original Message - From: Stephen Kent [EMAIL PROTECTED] Newsgroups: cisco.external.ietf Sent: Saturday, February 12, 2000 1:55 PM Subject: Re: Internet SYN Flooding, spoofing attacks Paul, When one suggests that a first-tier ISP would not need to filter traffic from downstream providers, because IF they do the filtering, then the problem will not arise via those links, one is suggesting precisely this sort of model. You're approaching this from the wrong perspective, in my opinion. There is no assumption implied that RFC2267 filtering is needed -- it is required. What good is it if one or two or 300 people do it, and another 157,000 do not? Well, there is a little good, but the more people that do it, the better off we all are. The bottom line here is that RFC2267-style filtering (or unicast RPF checks, or what have you) stops spoofed source address packets from being transmitted into the Internet from places they have no business being originated from to begin with.
In even the worst case, those conscientious network admins that _do_ do it can say without remorse that they are doing their part, and can at least be assured that DoS attacks using spoofed source addresses are not being originated from their customer base. And this is a Bad Thing? It is a bad thing if one bases defenses on the assumption that ALL the access points into the Internet will perform such filtering, and will do it consistently. Even if all ISPs and downstream providers performed the filtering, there is no guarantee that attackers could not circumvent the filter controls, either through direct attack on the routers, or through indirect attack on the management stations used to configure them. I'm just saying that while edge filtering is potentially useful, it would not be a good idea to assume that it will be effective. Edge filtering would often be helpful, but it is not a panacea, as pointed out by others in regard to the current set of attacks, nor is the performance impact trivial with most current routers. It is negligible at the edge in most cases, but you really need to define "edge" a little better. In some cases, it is very low-speed links, in others it is an OC-12. In talking with the operations folks at GTE-I, they expressed concern over the performance hit for many of their edge routers, based on the number of subscribers involved and other configuration characteristics. Because most routers are optimized for transit traffic forwarding, the ability to filter on the interface cards is limited, as I'm sure you know. No, I don't know that at all. _Backbone_routers_ are optimized for packet forwarding -- I do know that. I would state that devices that examine IP headers and make routing decisions entirely on interface cards are optimized for traffic forwarding, vs. firewall-style devices that focus on header examination and ACL checking, and which typically do this by passing a packet through a general-purpose processor, vs. on I/O interfaces.
But, these are just generalizations. Also, several of the distributed DoS attacks we are seeing do not use fake source addresses from other sites, so simple filtering of the sort proposed in 2267 would not be effective in these cases. Again, you're missing the point. If attackers are limited to launching DoS attacks using traceable addresses, then not only can their zombies be traced and found, but so can their controller (the perpetrator himself). Of this, make no mistake. Not necessarily. The traffic from a controller to the clients may be sufficiently offset in time as to make tracing back to the controller hard. I agree that tracing to the traffic sources (or at least to the sites where the traffic sources are) would be easier if edge filtering were in place, and if it were not compromised. Finally, I am aware of new routers for which this sort of filtering would be child's play, but they are not yet deployed. One ought not suggest that edge filtering is not being applied simply because of laziness on the part of ISPs. Steve, you said that -- I didn't. I think ISPs will do what their customers
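For those following along who don't configure routers, the RFC 2267 / uRPF idea discussed above is simple enough to sketch in a few lines of Python. This is only an illustration -- the prefixes, interface names, and FIB entries here are invented, and real routers do these checks in the forwarding path, not in software like this:

```python
import ipaddress

# Hypothetical prefixes legitimately assigned to a downstream customer
# (illustrative addresses from the documentation ranges).
CUSTOMER_PREFIXES = [ipaddress.ip_network("192.0.2.0/24"),
                     ipaddress.ip_network("198.51.100.0/24")]

def ingress_permitted(src_addr: str) -> bool:
    """RFC 2267-style check: accept a packet arriving on the customer
    interface only if its source address falls inside the customer's
    assigned prefixes.  Anything else is spoofed (or misrouted)."""
    src = ipaddress.ip_address(src_addr)
    return any(src in net for net in CUSTOMER_PREFIXES)

def urpf_permitted(src_addr: str, arrival_if: str, fib: dict) -> bool:
    """Strict unicast RPF expressed via the FIB: accept only if the
    best route back toward the source points out the interface the
    packet arrived on."""
    src = ipaddress.ip_address(src_addr)
    best = None
    for net, out_if in fib.items():
        if src in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, out_if)
    return best is not None and best[1] == arrival_if
```

Strict uRPF is exactly the second check: if the route back to the source doesn't point out the arrival interface, the source is presumed spoofed and the packet is dropped.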
Re: recommendation against publication of draft-cerpa-necp-02.txt
Part of the problem here is that a knife may be used as a food utensil or a weapon. Safe handling, however, is always required, and should be documented. I would add two other comments. I tried to locate the RFC for HTTP/0.9, but the best I could find was a reference to a CERN ftp site for the protocol. In any case, by the time HTTP got to the IETF it was deployed over a vast number of end stations, and comparisons to it are probably not apt. Finally, rechartering is precisely what you ought to have done, and should do, IMHO.
Re: NAT-IPv6
It is a complete fallacy that NAT provides any sort of security. It does no such thing. Security is provided by a firewall, and (more importantly) by strong security policies that are policed and enforced. - Original Message - From: Leonid Yegoshin [EMAIL PROTECTED] Newsgroups: cisco.external.ietf Sent: Tuesday, April 25, 2000 10:11 PM Subject: Re: NAT-IPv6 From: John Stracke [EMAIL PROTECTED] "J. Noel Chiappa" wrote: So, you're the CIO for Foondoggle Corp, and you're trying to figure out whether to spend any of your Q3 funds on IPv6 conversion. Let's see, benefits are not very many (autoconfig may be the best one), and the cost is substantial. Sure. Then you buy out Moondoggle Corp, which used some of the same private IP numbers you did, and you're faced with having to renumber everything. While you're at it, you decide to convert both networks to v6 so it'll be easier next time. (Yes, I know you could put a NAT between the two former companies; but it'll *hurt*.) Once in a company where I worked, somebody brought in a virus and it crashed a lot of Windows hosts. I don't remember the details of its fast propagation, but I remember how the terrified IS staff wanted to put firewalls/NATs between each floor! They considered it the only guarantee and _asked_ for money for that. - Leonid Yegoshin, LY22
Re: draft-ietf-nat-protocol-complications-02.txt
From: Bill Manning [EMAIL PROTECTED] So, of the 7763 visible servers, 45 are improperly configured in the visible US. tree. That's 4.53% of those servers being "not well maintained." Keith, These two data points seem to bear your assertion out. It is always possible to do something poorly. You can take the best-engineered product and misconfigure it, or otherwise not maintain it. Sorta like not changing the oil. On the other hand, how many times do you see a name service failure for someone in the Keynote 40? How often do you get a failure for a site that uses a commercial service like Akamai? If you still get lots of failures when people are using the mechanism as recommended, then the mechanism itself is poorly designed. It also strikes me that DNS is continuing to mature, and that things are getting better as more products come to market, but this is more gut feel. There are some open questions out there: does DNS provide sufficient granularity? Are its semantics rich enough for a mobile world? People are currently playing tricks with DNS that PVM had not envisioned, and Paul Vixie loves to say something like, "DNS is not the droid you're looking for" (forgive me, Paul, if I didn't get that quite right). Just some thoughts...
Re: NAT-IPv6
It's also completely naive to think that source routing is your only threat. One can break into a NAT. One can forge packets and address them appropriately. Firewalls prevent this, not NATs.
Re: BGP4
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

I have multi-ISP connectivity, multiple exits to the internet. I am planning to use BGP with MEDs as the primary attribute to choose the best path. Are there any other suggestions? MEDs are only useful if you are connecting multiple points of one AS to multiple points of another AS. In your case, if you refer to "multi-ISP" as more than one ISP, then MEDs are not necessary. If, on the other hand, you have multiple connections to the same ISP, then MEDs are useful, assuming your ISP cooperates. Also, if you have connectivity to two or more ISPs, then you don't want MEDs to be your PRIMARY selection criterion, but a tie-breaker when you're going to the same ISP. Eliot Lear

-----BEGIN PGP SIGNATURE-----
Version: PGPfreeware 6.5.3 for non-commercial use http://www.pgp.com

iQA/AwUBOTf9pW6AD2cTbjy4EQLRYQCfWklQuPk1vboU6rGVwo17VNMFd+sAoMza
33+WXENlYRDREbpTmrEUz0a6
=m9YP
-----END PGP SIGNATURE-----
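To make the MED point concrete, here is a toy sketch (Python, with invented attribute values) of why MEDs only matter among paths learned from the same neighboring AS -- the decision process never compares MEDs across different ISPs, which is why LOCAL_PREF, not MED, is the knob for choosing between providers:

```python
from dataclasses import dataclass

@dataclass
class Path:
    neighbor_as: int   # AS that advertised the route
    local_pref: int    # primary policy knob across different ISPs
    med: int           # compared only among paths from the same AS

def best_path(paths: list) -> Path:
    """Greatly simplified BGP decision sketch: prefer highest
    LOCAL_PREF first; use lowest MED only to break ties between
    paths from the same neighboring AS (the standard MED rule)."""
    best = paths[0]
    for p in paths[1:]:
        if p.local_pref > best.local_pref:
            best = p
        elif (p.local_pref == best.local_pref
              and p.neighbor_as == best.neighbor_as
              and p.med < best.med):
            best = p
    return best
```

With two connections to AS 65001 and one to AS 65002, the MEDs from 65001 break the tie between its two paths, but 65002's MED is simply never looked at.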
Re: Mobile Multimedia Messaging Service
Mohsen BANAN-Public wrote: Remember the web (http, ...)? What was IETF's role in Internet's main modern application? You mean aside from MIME? -- Eliot Lear [EMAIL PROTECTED]
Proposal to deal with archiving of I-Ds
Greg, it is so rare that I disagree, but I'm seeing a pattern now of loss of institutional memory, and that scares me. This especially holds for bad ideas or transition documents. So here's what I propose to address the matter: Convert the I-Ds to ps or pdf files (something hard to change) and stamp them on each page from end to end with a bright red watermark, like "Not a current IETF Document. For historical use only." Allow search ONLY on the original text, though, and require agreement that the document will not be mentioned or referred to in any publication. Don't make a unique URL available (use post mode to generate dynamic HTML). How's that? -- Eliot Lear [EMAIL PROTECTED]
Re: Proposal to deal with archiving of I-Ds
Hi Bill, Postscript files are straightforward for a postscript hacker to change. I imagine the same is true for pdf files. If you want to make the files hard to change, try a pgp signature. I have no problem with that, but it's not enough. I'm interested in putting something in front of a trade press person that they cannot ignore. Perhaps the watermark should simply be "REJECTED or EXPIRED". Hmmm I think this stuff could all be done with commonly available UNIX tools...
Re: Topic drift Re: An Internet Draft as reference material
Would everybody please stop sending me search results!? Google seems to have it on the front page. Yahoo doesn't. People are getting mixed results out of Altavista. [Talk about a dumb message that shouldn't have been archived ;-]. The document that Christian found on the IETF server is NOT what I referred to.
Re: Topic drift Re: An Internet Draft as reference material
John, Let's assume that Mike O'Dell never submits his idea as an RFC. What then? It's not just Mike O'Dell who loses. Perhaps he doesn't lose at all, since he'll be able to reproduce what he wrote from his personal archives. So while you may argue what's morally correct (and others might argue otherwise), the fact is that we don't have the document, and that's a loss to us. Also, I'm picking on Mike's document (not saying he's right or wrong for not publishing the doc. as an RFC), but there are many other examples where this has happened (something similar has happened to a few of my own drafts that other authors have then asked me to reproduce). Sometimes the ideas are BAD. More often they're either undeveloped or just so far out that the standards process doesn't know how to cope with them - YET. Also, quite frankly, often times a working group will "compromise" away some good ideas for the sake of expedience in the standards process, and a useful idea is dropped or neutered, and we've all seen this. And there have been a number of documents which were meant to discuss an ephemeral topic, where the author has no intent of going through the rigmarole of getting the document turned into an RFC. I think there was great concern about this during IPng, which is why MOST of those documents eventually turned into RFCs. But even in this VISIBLE case one has slipped through the cracks. Eliot - Original Message - From: John C Klensin [EMAIL PROTECTED] To: Eliot Lear [EMAIL PROTECTED] Cc: [EMAIL PROTECTED]; "Mike O'Dell" [EMAIL PROTECTED] Sent: Friday, September 29, 2000 8:45 AM Subject: Re: Topic drift Re: An Internet Draft as reference material --On Thursday, 28 September, 2000 12:02 -0700 Eliot Lear [EMAIL PROTECTED] wrote: John, I would accept your interpretation if you can go to a major search engine, like Yahoo or Altavista, and find me in a brief period of time ANY version of Mike O'Dell's 8+8 proposal. 
Don't you think it shameful that there is no permanent record of a serious effort to deal with a serious problem (multihoming)? And this is a recent (read: current) problem! Eliot, I agree with Bob Braden's identification of this particular document's slipping through the cracks as "shameful" and hope we can fix it. But, if one looks at the bottom line, Mike essentially chose to let it expire, and no one else chose to pick it up and push for RFC publication. The shame lies there, not in issues with the I-D expiration/archiving process. john
Re: Topic drift Re: An Internet Draft as reference material
Rather than debate the matter people seem to like to go hyperbolic and that's not useful. (Yes, Brian, you could use a search engine to find times when I've gone hyperbolic). I never suggested that I or anyone else should republish someone else's work without their permission. I cited examples of lost institutional memory and I proposed some mechanism as a straw man. I think we are continuing a mistake by not removing the six month limit on I-Ds. I'll not repeat the rest of my argument, but I do wish you would address my point about institutional memory. -- Eliot Lear [EMAIL PROTECTED]
Announcing a new mailing list on middleware
Please redistribute to appropriate forums. As I promised in the MIDCOM working group in San Diego, I've created a mailing list for discussion of diagnostics and discovery of intermediate devices. Here are the particulars: List name: [EMAIL PROTECTED] Subscribe: [EMAIL PROTECTED] Archive: none as of yet. "march" is short for Middleware ARCHitecture. All I mean by that is how it all fits together: diagnostics, discovery, and communication with middleware devices are in scope for this list. So is OM as it relates to those devices and their end points. Flaming about intermediate devices is not in scope. I'd like to start, however, by focusing discussion on diagnostics and discovery. Please join me on this list to consider how these devices make themselves known, what the implications of diagnostic messages such as ICMP errors could be in these cases, and what additional mechanisms are needed. Cheers! -- Eliot Lear [EMAIL PROTECTED]
CORRECTION: Middleware/Middle Boxes Architecture List information
I know, this is completely silly, but the subscription email address I gave out previously is not working. The correct subscription and list information is as follows: List name: [EMAIL PROTECTED] Subscribe: [EMAIL PROTECTED] While the service is run by majordomo, the majordomo alias does not exist. Again, the topic of this list will be focused on the architecture, diagnosis, and discovery of devices in the middle of the network, so that applications can then communicate with them, or otherwise report errors to their users. This work is complementary to the MIDCOM working group. My apologies for the confusion. -- Eliot Lear [EMAIL PROTECTED]
Re: redesign[ing] the architecture of the Internet
I strongly disagree. The IETF essentially "owns" the Internet Protocol specification and has change control over it. Well, I still disagree, but at least you've taken a step in the right direction by being more specific. Internet architecture is an amorphous blob. A small group of individuals with a cute idea can have dramatic impact, no matter what the IETF thinks. Witness WWW and NAT. No argument that such a group can have "dramatic impact" (for good or ill), but that's not the same thing as changing the architecture. Today we have transparent proxies, reverse caches, global DNS redirectors, and all sorts of other amusing Things. You can say they're not part of the architecture. But what does that mean? They're there because the functionality needed to be there and otherwise wasn't. The same could be said about NATs, as bad as they are. Remember Ritchie's famous quote, "you can fill a void and it could still suck"? The fact is X is here as opposed to something better. Did the people at MIT have the right to write X???
Re: [midcom] WG scope/deliverables
Dave, Technogeeks, perhaps. The vast majority of people on the Internet who are behind NATs most likely don't even know it. With all the discussion of Napster and so-called "peer to peer" networking, I think NATs are going to become far more visible to users as these applications grow in popularity. Today, you can use something like Gnutella if at least one party is not behind a NAT.
Re: Why XML is perferable
You know, the people on this list make great computer scientists, network architects, application and protocol designers. I'm not so sure how many of us understand CHI. Some of us like to think we do, but I suspect very few of us actually do. So, given this, why don't we ask some people who really DO understand this problem to come up with a decent recommendation, rather than perennially flaming about it? Then we can ask them to help us with NATs ;-)
Re: IETF Travel Woes (was Deja Vu)
Lyndon Nerenberg wrote: For travel planning purposes it's important to me that the location of the London meeting be announced as early as possible. I doubt very much I'll be staying in the conference hotel (or anywhere near it), which means I need to book alternate accommodation as early as possible. (BTW, if you want to reproduce the Minneapolis-in-winter experience in Europe, I highly recommend Brighton in February.) --lyndon The cost of a ticket from SF is approximately $1100US right now. Can't say which way it's going, given the economy and hoof-and-mouth disease, but if last year's silliness was any predictor, prices peaked at around $3000US for a two-week advance fare.
Re: Why is IPv6 a must?
Actually, the engineering cost of building IPv6 into operating systems is already essentially paid. The cost of building it into routers and the like is paid. The vendors are all (basically) IPv6-ready. Valdis, Your message is generally well put. However, while it is possible to send the packets on the wire, the fundamental underlying scaling point, the routing system, has not been properly addressed. Perhaps a solution can be retrofitted in, but then again, who knows? This, I thought, was largely the point of multi6. Eliot
Re: Why is IPv6 a must?
Perry, Geoff, Quite simply, a bunch of us *are* searching for a paradigm shift. Geoff's good work in this area reveals the complexity of the whys and wherefores of the routing system. Given that 8+8 was a serious consideration (and to some deserves some amount of revisiting -- at least as a starting point), I don't believe we can say that the deployment of v6 is completely divorced from the routing system. 8+8 is merely an existence proof of how the routing system and v6 can be intertwined, for better or worse. Eliot
example .procmailrc stuff for announce lists
For those who don't know procmail, here is a sample config for just the IETF-announce list. I think this is what Harald is talking about. Change the flags if you need locking. YMMV.

MAILDIR=YOUR-HOME-DIRECTORY-HERE

:0
* ^To.*IETF-Announce.*
* ^Subject:.*Last Call:.*
last_calls

:0
* ^To.*IETF-Announce.*
* ^Subject: WG Review:.*
wg_reviews

:0
* ^To.*IETF-Announce.*
* ^Subject: I-D ACTION:.*
internet-drafts

:0
* ^To.*IETF-Announce.*
other-stuff
Re: Why is IPv6 a must?
Does the rendezvous location really have to be the original topological location of the host, or is that just how folks started thinking about it? And given that the rendezvous location has to be somewhere in the network, how can we get around the problem that that location might become unreachable no matter where it's located? I mean, you could replicate the information to multiple sites and use anycast to find a location, but would this fundamentally change the mobile IP protocol? How would you do this such that you did not further aggravate route scaling problems? If you do it with mobile-ip and anycast, would that require such mobile addresses having either additional IGP or (worse) BGP paths? On the other hand, if you have the rendezvous be a database that maps some sort of a name to an IP address, you do not perturb the routing system, and you impose the overhead on those who are mobile, and their agents. It requires a model without TTLs but with initialization and active notification. Whether this can be done is questionable and a valid topic of research (a bunch of us have been questioning for a while ;-) Eliot
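As a thought experiment (nothing here is a real protocol; all of the class and method names are invented), the TTL-less, notification-based name-to-address mapping described above might look something like this in Python:

```python
class RendezvousDirectory:
    """Sketch of a name-to-locator mapping with active notification:
    when a mobile node moves and re-registers, subscribed peers are
    told the new address immediately, rather than waiting for a
    cached TTL to expire as they would with plain DNS."""

    def __init__(self):
        self._locators = {}   # name -> current IP address
        self._watchers = {}   # name -> list of callbacks

    def register(self, name, address):
        """Called by the mobile node (or its agent) on each move."""
        self._locators[name] = address
        for cb in self._watchers.get(name, []):
            cb(name, address)  # push the update to interested peers

    def subscribe(self, name, callback):
        """A correspondent asks to be notified of future moves."""
        self._watchers.setdefault(name, []).append(callback)

    def resolve(self, name):
        return self._locators.get(name)
```

The overhead lands exactly where the message suggests: on the mobile node (which must re-register) and on its agents (which hold the mapping), not on the routing system.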
Re: Proposal for a revised procedure-making process for the IETF
The IETF list should be reserved for proper technical discussions, such as the format of RFCs and Internet Drafts, NATs are good/bad/ugly, add me/remove me messages, and conference location debates.
RE: Generic Client Server Protocol
Have you looked at BEEP? RFC 3080. Eliot
7/8 bit wars
Dan, If you have data on interoperability regarding 8-bits, can you send a pointer? I'd be interested to know just what we could expect to break, and what we could expect not to break... Eliot
Re: Palladium (TCP/MS)
Christian Huitema wrote: Your fears appear to be based more on emotions than facts. To the best of my knowledge, the TCP/IP stack that ships in Windows conforms to the IETF standards and interoperates with the stacks that ship on other platforms -- it is certainly meant to. Several Microsoft employees participate in the IETF, volunteering a sizable amount of their time. Microsoft itself has a history of working with the IETF, including providing financial support to the RFC editor through ISOC. Christian, most of this note sounds like an apology of the form "Microsoft is okay because we give to charity," not because they do the right thing. If Microsoft is doing development on enhancements to the stack, this organization has good reason not to trust the results, based on past experience (i.e., Kerberos). Eliot
Re: kernelizing the network resolver
Bill, The field may have been well plowed by NIMROD, but the IETF forgot to water it. This organization has never sufficiently answered the route scaling problem, and the ISPs are paying for it today. The question is really whether IPv6 is properly deployable over the long term without a new multihoming paradigm (for `pick your definition of multihoming`). The usual suspects (including Noel) are working through these scaling issues on multi6. Eliot Bill Manning wrote: % The multi6 wg is working on scalable multihoming in IPv6. It looks like % this will be done by separating identifiers from locators. One (radical) % way to do this would be to use the full host name as the identifier and % IP(v6) addresses as the locators. Other than the fact that it breaks % all known transport protocols, this makes a lot of sense IMO. % % Iljitsch van Beijnum For those doomed to repeat history... check the archives for NIMROD, or current work in DCCP. This is not greenfield; it's well-plowed ground. Have you done your homework?
Re: namedroppers mismanagement, continued
Dan, Were you one of those kids who had trouble following directions? Randy has given you a pretty plain solution that even my mother could follow (and my mother barely knows how to find the on button of a computer). Join the list already. How hard is that for a so-called mail guru? Eliot
Re: a personal opinion on what to do about the sub-ip area
increasingly often I find WGs whose definition of the best possible outcome is inconsistent with, and in some cases almost diametrically opposed to, the interests of the larger community. I have two problems with this statement. First, while I am all for being critical of our processes for the purposes of improving them, we as a group should avoid making these sorts of generalizations. Say what you will about Dan Bernstein. At least his complaints are specific and backed up. Second, I believe the complaints that are alluded to have been raised again and again and again. Can we as a community learn to agree to disagree on points of architecture, once decisions have been made? Eliot
Re: Dan Bernstein's issues about namedroppers list operation
Paul, I would settle for the message being logged as dropped on some web site. Or, if disk space is really an issue, I would also find it acceptable to have a global IETF whitelist and bounce mail from people who are not on it. That having been said, I have no problem with the message returning as a bounce above the MTA level, so long as a bounce is returned. That means that you have to have a valid return address. I don't believe the process has to be so open that we should respond to invalid return addresses. Eliot Paul Vixie wrote: thomas, et al, i have a bone to pick. while dr. bernstein would most likely say that i am no friend to him nor to his chosen issues, the fact is that his complaint has some validity if you look at it edge-on rather than face-on. Namedroppers is a posters-only mailing list that is run in conformance with the policies outlined in http://www.ietf.cnri.reston.va.us/IESG/STATEMENTS/mail-submit-policy.txt. Specifically, all mail sent to namedroppers is: 1) first run through spamassassin. Mail that is rejected here is not archived, as the number of such messages is large. All mail sent to mailing lists on the server hosting namedroppers is run through spamassassin, so this is not a namedroppers-specific procedure. this is just wrong. spamassassin operates at the MUA level, and as such, the originating MTA receives a final OK and drops the connection before spamassassin begins its job. the MUA's only choices when dropping mail due to spamassassin's filtering are: issue a new message back to the sender to inform them of the bounce, or silently drop. because most of the mail spamassassin will drop has an invalid sender address, choice #2 is common. for an IETF list, if submitted mail is being dropped, there must be notice back to the sender. because the sender address will generally not be usable, the only possible choice is to issue the rejection at the MTA level.
this will require tighter integration of spamassassin into the MTA level than is currently done on randy's system, or indeed, currently done anywhere at all. silent drops are fatal to the IETF's policy of open discourse, and dr. bernstein is right to complain about that aspect of the namedroppers problems he is experiencing. every time one of dr. bernstein's messages is dropped, dr. bernstein's originating MTA should experience an SMTP delivery error and therefore have the option of informing dr. bernstein that such rejection has occurred. one other very minor point: I've noticed that Randy Bush discarded Len Budney's note on this topic: http://groups.google.com/groups?selm=asnul4%24640g%241%40isrv4.isc.org Not so. Len's note was posted to usenet, not to the namedroppers mailing list. Mail from usenet cannot be assumed to get gatewayed back to the mailing list. as the usenet moderator of comp.protocols.dns.std, i set up a bidirectional gateway which isc operates. this gateway is far from perfect, and many posts have been dropped (silently, of course) over the years, and anyone who wants to really ensure that their words appear on namedroppers should avoid the use of the usenet-namedroppers gateway, and mail their words directly to the namedroppers list. paul
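To make the MTA-level point concrete: the filter has to deliver its verdict while the SMTP transaction is still open, so the reply to the DATA command can be a 5xx and the sending MTA learns of the rejection in-band. A toy sketch in Python follows; the scoring function is an invented stand-in, not spamassassin, and the threshold is arbitrary:

```python
# Hypothetical content filter hooked in at SMTP DATA time.  Because
# the verdict is computed before the server replies to DATA, a spammy
# message gets a 550 on the open connection -- the sending MTA is
# told, and can bounce the mail to its author.  No silent loss.

SPAM_THRESHOLD = 5.0

def spam_score(message: str) -> float:
    # Toy stand-in for a real scorer: count a few marker phrases.
    markers = ("viagra", "lottery", "free money")
    return sum(3.0 for m in markers if m in message.lower())

def smtp_data_response(message: str) -> tuple:
    """Return the (code, text) SMTP reply to give at end of DATA."""
    if spam_score(message) >= SPAM_THRESHOLD:
        return (550, "5.7.1 Message rejected by content filter")
    return (250, "2.0.0 Message accepted for delivery")
```

Contrast this with the MUA-level arrangement Paul describes, where the 250 has already been sent and the connection closed before the filter runs, leaving only the bad choices of a new bounce message to a likely-forged sender, or a silent drop.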
SF restaurant stuff
Do you constantly run into your fellow convention goers at dinner? Are you concerned you're getting ripped off by restaurants that serve low quality food? Do you feel that you've just stayed five days in a city and know nothing more about it than when you arrived? If you answered yes to these questions you are suffering from conventionitis. Don't let that happen to you in San Francisco. Here are some of my favorite restaurants. The fancier ones require reservations, but during the week they shouldn't be too hard to come by. If you don't see a genre here, that certainly doesn't mean San Francisco lacks it. It just means I haven't eaten there. http://www.ofcourseimright.com/~lear/sfrestaurant.html Eliot ps: no, I'm not keeping a list to be updated.
Re: Fw: Welcome to the InterNAT...
Tony Hain wrote: Trying to use SL for routing between sites is what is broken. But that's not all... The space identified in RFC 1918 was set aside because people were taking whatever addresses they could find in documentation. Not as I recall. Jon Postel received several requests for extraordinarily large chunks of address space, particularly from Europe. I believe Daniel Karrenberg might have more information. This forced his hand. In addition, people such as Paul Vixie were trying to do the best they could to make random address space work, which is admittedly a trick in a small name space. Recall that at the time CIDR was a new thing. You couldn't simply use a portion of network 10, for instance. The same cannot be said for IPv6. SL was set aside because there are people that either want unrouted space, or don't want to continuously pay a registry to use a disconnected network. Any address space can be unrouted address space. Fix the underlying problem, Tony: make renumbering easy. If we don't do that, IPv6 is no better than IPv4 (with the possible exception of MIPv6). It is far cheaper to train an app developer (though there may be an exception or two) to deal with it than it is to fix all the ad-hoc solutions that people will come up with to replace SL. Fix the renumbering problem and this isn't an issue. Eliot
Re: Fw: Welcome to the InterNAT...
Tony Hain wrote: History shows people will use private address space for a variety of reasons. Getting rid of a published range for that purpose will only mean they use whatever random numbers they can find. This has also been shown to create operational problems, so we need to give them the tool they want to use in a way we can contain the fallout. Site local is defined to do that job, and we do not have WG consensus on deprecating it. As I wrote previously, one must understand the history in order to understand its applicability to the future. The reason there was a problem at all was that there was not just a non-zero chance of an address clash, but that this percentage rose based on the class of address chosen. If you took 80.1.4.6 you clashed with all of 80 because classless addressing had not made it into the base yet. The chance of this sort of clash in IPv6 is minuscule. Not that it's a good idea. Even so, if you used upper-range class B or C address space, the chance of a clash at the time was low (it still is). And indeed, in the case of a merger you were better off than using net 10, because the likelihood of a clash with net 10 is high. Eliot
Re: Fw: Welcome to the InterNAT...
Ultimately, as I wrote with others some nine years ago, some practices should not be codified. With IPv4 at least there was a plausible argument for network 10. I didn't like it, nor did I agree with it, but it was plausible. The same cannot be said for v6. Incidentally, Sun's and HP's use of default network numbers didn't really cause any great consternation or motivation for RFC 1597, so far as I could tell. Were that the case we wouldn't have needed either a /16 or a /8. Eliot
Re: site local addresses (was Re: Fw: Welcome to the InterNAT...)
Michel, What you say is possible, and has happened. But dumb things happen. Those dumb things could happen with non-site-local addresses as well. But look. Ultimately I think we as a community do need to own up to better tooling, which can lead to better expectations. Also, I don't see any reason why an IPv6 prefix allocation can't linger for a very long time after a contract ends. The tools need to set expectations, and perhaps some of the DHCP prefix delegation code can help here. Regards, Eliot
Re: Thinking differently about the site local problem (was: RE: site local addresses (was Re: Fw: Welcome to the InterNAT...))
Tony Hain wrote: Margaret Wasserman wrote: Of course, in the case of site-local addresses, you don't know for sure that you reached the _correct_ peer, unless you know for sure that the node you want to reach is in your site. Since the address block is ambiguous, routing will assure that if you reach a node it is the correct one. This FUD needs to stop! Right up till the point where two companies start communicating with one another directly with site-locals. Even if there is a router frob to keep the scopes scoped, you can bet it won't be used until someone realizes that the above problem occurred. Eliot
Re: Thinking differently about names and addresses
Keith Moore wrote: HIP only solves part of the problem. It lets you use something besides an address as a host identity, but it doesn't provide any way of mapping between that identity and an address where you can reach the host. That's not entirely true. It doesn't give you a very scalable way to do a reverse lookup, but forward lookups are quite possible with DNS.
Re: The utility of IP is at stake here
Tony Hain wrote: The IETF needs to recognize that the ISPs don't really have a good alternative, and work on providing one. If they have an alternative and continue down the path, you are right there is not much the IETF can do. At the same time, market forces will fix that when customers move to the ISP that implements the alternative. This is very well said. That first sentence could arguably be the credo of the IETF, only perhaps not limiting to ISPs. Eliot
Re: The utility of IP is at stake here
Dave, Please indicate some historical basis for moving an installed base of users on this kind of scale and for this kind of reason. History is replete with examples. From the Internet Worm to Code Red, consumers do install software when they perceive either a threat or a benefit. Getting rid of spam is a HUGE benefit. Heck. What I've found so amusing is that people seem to upgrade their Microsoft systems just 'cause, with no perceived benefit, but merely protecting from Bit Rot. Eliot
Re: The utility of IP is at stake here
Paul Hoffman / IMC wrote: At 11:36 PM -0700 5/29/03, Dave Crocker wrote: The POP-IMAP example is excellent, since it really demonstrates my point. IMAP is rather popular in some local area network environments. However, its long history has failed utterly to seriously displace POP on a global scale. Exactly right. The benefits of IMAP are obvious to everyone who has looked at it in any depth, and yet it is very thinly deployed. The main reason: the perceived additional administrative overhead. I think what this shows is that client and server implementations still do a lousy job with IMAP. In particular, the few IMAP implementations I'm familiar with don't do a very good job of online/offline separation. In one case, you have to modify the properties of each folder, and it doesn't indicate whether or not a message is downloaded. Also, since most consumers don't use procmail, there's probably not much of a direct benefit to them -- yet. Eliot
Mailing list or bust (was Spam, nasty exchanges, and the like)
Can we please move along? Does anyone care to start a mailing list for technical proposals? If not, perhaps all of these debates are a waste of bandwidth?! Eliot
Re: A modest proposal - allow the ID repository to hold xml
I don't know about you, Paul, but I'm writing my drafts using EMACS and Marshall's tool. That allows for generation of HTML, NROFF, and text. The HTML allows for hyperlinks, which is REALLY nice. Eliot
Re: Appeal to the IAB on the site-local issue
Stephen Sprunk wrote: Or we all just got sick of the bickering and accepted defeat (unlike Tony). For the record, I can't support deprecating site locals until we have something else approved to replace them -- at which point I say good riddance. There are several drafts in the WG to that end which haven't gained any momentum thus far. And this is why the appeal is in my opinion untimely. The working group has not yet been given a chance to provide any documentation on that "What next?" question Tony asked. For the IESG or the IAB to interfere at this stage would be micromanagement. But here we are. Tony has decided to push an appeal of a vote, and not even one that generated a document. That the appeal has even gotten this far seems to me a flaw in the process. Had the IESG been able to disallow the appeal, simply on this basis, then perhaps the working group could actually provide documents about which we could argue over technical merit. As it stands, I predict yet another round of appeals once IETF last call closes, and the document is reviewed by the IESG. Welcome to Court TV, IETF style. Eliot
Re: accusations of cluelessness
Vernon Schryver wrote: 15 years ago a defining difference between the IETF and the ISO was that the IETF cared about what happens in practice and the ISO cared about what happens in theory. As far as I can tell, the IPv6 site local discussion on both sides is only about moot theories. Without getting into the other points of this argument, let me suggest that Vernon is correct on this point, and in as much as we think site-locals are bad we must provide a better alternative. In order to do that we must address the underlying needs for site-locals. Many have written at length on this subject, but it all seems to boil down to this: We need a way for sites to be internally stable even when their relationship to the world around them changes for whatever reason. This goes to the heart of identity and service location. In as much as we can address this problem we would do much to nullify the arguments for site-locals. IMHO this is where the IETF, IRTF, and other bodies should put our efforts. Eliot
Re: IETF mission boundaries (Re: IESG proposed statement on the IETF mission )
The example I'm thinking about involved predecessors to OpenGL. As this example doesn't even involve communication over a network, I would agree that it is out of scope. ... [OpenGL example] It's not that other examples such as X couldn't have used more network knowledge to avoid problems (e.g. the mouse stuff), but that the network stuff is the tail of that and many other dogs. Because of my employment history, I may know a little more about how to do graphics in general or over IP networks than many IETF participants, but I know that I'm abjectly completely utterly incompetent for doing exactly what the IETF started to do in that case. Great scope example. The issue for OpenGL, however, demonstrates a gap in as much as the developers would probably have liked something like DCCP so that they could use a library to get Nagle, backoff, etc. While we're a wire protocol sort of a group, we all should realize the importance of generality and good library support ;-) If out of scope were removed as an acceptable reason to not do things, then you would never squelch bad efforts. An effort isn't bad because it's out of scope. An effort is bad because it's bad, and we invest our faith in the IESG that they will use good judgment to catch bad efforts. If anyone on the IESG does not feel empowered to say no they should not be on the IESG. WG chairs need to vet their own group's work first, of course. And we could certainly do a better job on that. Eliot
Re: IETF mission boundaries
Vernon, I'm not much for mission statements either. But it's easy to fall into a Dilbert view of the world, even when such things might actually help. I think the intent is to derive from some community consensus on goals how to evolve the organization. And we are at a crossroads. Either we pretty much limit our scope to a more confined area, or we need to grow. And growing is painful. It involves changing of responsibilities, delegation, and the like. If a mission statement serves as a point of consensus from which the IETF management can march, I'm all for it, in principle. Of course, it has to have sufficient substance that one can in fact derive a direction for management of the organization. Eliot
Re: IETF58 - Network Status
I have no first-hand information on how much time this costs. So I'll dream up what I think the right number of people should be! I think part of the blame should go to the access points that kept disappearing. Someone told me this was because the AP transmitters were set to just 1 mW. If this is true, it was obviously a very big mistake. Oh really?! Please explain why. Your words strike me as those of a volunteer for South Korea. Do I have that about right? As long as we're bitching about the network: would it be possible to start doing some unicast streaming of sessions in the future? Access to multicast hasn't gotten significantly better in the past decade, but streaming over unicast is now routine, as the codecs are so much better these days, as is typical access bandwidth. I'll happily take 40 kbps MPEG-4 audio only; the video is so badly out of sync that it is unwatchable most of the time anyway. Will you happily pay for the privilege? Eliot
Re: arguments against NAT?
I've argued strongly against NAT, but he's one of those people who seem to be willing to accept arbitrary amounts of pain (we don't need to use [protocols that put IP addresses in payload], timeouts aren't a problem). I'm now pointing him at some relevant RFCs. My question for the list is: is there a web page or other document anywhere that comprehensively states the case against NAT? This organization makes its opinions known through not only standards but RFCs that have been reviewed by IETF working groups, the community, and the IESG. RFC 2663 is the one you want. See in particular scaling issues, multihoming, DNS, and IPsec, just to name a few. Eliot
just a brief note about anycast
I realize that the anycast discussion was meant by Karl as an example. But there was precisely one technical concern I had when discussion got going. And that was that if something went wrong -- meaning that someone was returning bad data -- the IP address wouldn't necessarily provide a clear answer as to who the source of the bad data is. I expressed this concern privately to Paul Vixie, who provided me a very satisfactory answer: you can query the name server for a record that will provide you uniquely identifying information. I'll let Paul describe this, but it amounts to the borrowing of an unused class for management purposes. While there is always room for improvement of course, Paul's answers make it clear to me that the root folk have given this some fairly careful thought. I also agree with Paul on another point -- different methods used by different servers ARE a good thing, so that no one logical attack could take them all out. Good documentation is also really important. It turns out there is some for F, at least. See http://www.isc.org/tn/isc-tn-2003-1.html by Joe Abley. Eliot
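To make the "borrowed unused class" concrete: the mechanism being alluded to is, as I understand it, a TXT query in the CHAOS class (hostname.bind), which servers such as BIND answer with a per-instance identity string. Here is a minimal sketch of the wire-format query, assuming nothing beyond the standard DNS message format; the helper name is my own:

```python
import struct

def dns_query(name: str, qtype: int, qclass: int, qid: int = 0x1234) -> bytes:
    """Build a bare DNS query message: 12-byte header plus one question."""
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0.
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, qclass)

# TXT (type 16) in the CHAOS class (class 3) -- the "unused class"
# borrowed for management purposes.
query = dns_query("hostname.bind", qtype=16, qclass=3)
```

Sending this over UDP to an anycast instance (or simply running `dig @<server> hostname.bind chaos txt`) returns a name identifying which physical server actually answered.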
Re: PKIs and trust
[EMAIL PROTECTED] wrote: I'd put this a different way. Until PKIs are able to represent the rich diversity of trust relationships that exist in the real world, they are mere curiosities with marginal practical value. That's a true statement whether it's the PKI's fault or not. I think Keith has mixed up authentication with authorization. It is true that I will only trust certain people in certain ways. But whether those certain people are who they are, and whether a message is in fact from them, is something we can determine with PKIs. That having been said, they still don't work. Why? Because nobody actually has the patience for them, so far as I can tell. CRLs are not managed at ALL on the Internet, and so far as I know, every Tom, Dick, and Jane will ignore PKI warnings. Eliot
Re: The IETF Mission
Bob, I agree that many works of great value can be found in early RFCs. But here's my question to you: if the focus is too much on standards, how do we scale the process so that we can have great works that are NOT standards? Clearly neither the IESG nor the IETF need be involved in that process, so long as the work does not misrepresent itself as a standard or other IETF-type work. Now, you may say that the RFC Editor should take that job. Okay. Then I presume we need to scale the RFC Editor to this sort of task. Another way of looking at this would be to create some sort of refereed track. That would be fine, but then it seems to me that the day of ASCII has to really come to an end (IMHO). (and with the mention of ASCII, why do I feel as though I've just violated Godwin's Law?) By the way, I think we should also ask where our role ends and where the role of SIGCOMM and the like pick up (he says trying to pay his dues ;-). Eliot * * Let's take an example. I have been involved in QoS work, and there have been * a number of specifications written on the subject; much of that started * with white papers, including especially * * 0896 Congestion control in IP/TCP internetworks. J. Nagle. * Jan-06-1984. (Format: TXT=26782 bytes) (Status: UNKNOWN) * * 0970 On packet switches with infinite storage. J. Nagle. Dec-01-1985. * (Format: TXT=35316 bytes) (Status: UNKNOWN) * Fred, Please note that these references are to published RFCs, which are available in perpetuity in the official document archive of the Internet community, the RFC series. It is an illustration that publishing significant white papers and discussion papers as RFCs has real value, which is being lost in what you correctly characterize as an over-emphasis on standards. We are letting the marketing types rule. 
* * To leave white papers and internet drafts, many of which are never * published as RFCs and a relatively small portion ever become standards, * out, and to leave the discussion part out is, I think, to leave out much of * the real value of the IETF. Yes, both of those predate the IETF as we now * know it, but had the IETF existed then, they would have been very * appropriate in it. Today's counterparts include papers like some I * currently have posted (not intending to self-aggrandize, but they're the * ones I know most quickly). The posting of questions, problems, and ideas * is perhaps *the* key part; standards are from my perspective only one of * the products, and perhaps a byproduct. Yes. So let's consciously endeavor to ensure that significant non-standards documents -- responsible position papers, white papers, new ideas, etc. -- become RFCs. (Making Internet Drafts into an archival series seems like a terrible idea to me, but that is a different topic.) Bob Braden
Re: Death of the Internet - details at 11
Dave, RRS> But of course the whole point is that we don't need this... at least RRS> not with SCTP There is a small matter of getting 500 million hosts to convert to SCTP and then to convert all Internet applications over to it. I think this argument can be taken too far. Yes, there are 500 million hosts out there. So long as we don't break the existing world for them, there is nothing wrong with adding new functionality. The next question is whether the new functionality is useful until it's implemented in 500 million hosts. I claim the answer is yes. Like any other advance, it can be used in limited deployments for limited purposes until such time as it is generally available. Imagine an application checkbox that says try SCTP first. I think part of the SCTP transition, however, should be something more than simply modifying socket options. I would rather see our notion of services and well known ports more comprehensively addressed at the same time, so as to avoid such checkbox solutions in the future. Eliot
Re: Proposed Standard and Perfection
Sam, As the person who most recently complained, let me elaborate on my comments. The problem I believe we all are facing is that the distinction between Proposed, Draft, and Internet Standard has been lost. I agree with you 100% that... The point of proposed standard is to throw things out there and get implementation experience. But when it comes to... If specs are unclear, then we're not going to get implementation experience; we are going to waste time. We disagree (slightly). In my experience one needs to actually get the implementation experience to recognize when things are unclear. And my understanding is that this is precisely why we have PS and DS. I've had a lot of experience with a rather unclear spec with some significant problems that managed to make its way to proposed standard: For the past 10 years I have been dealing with problems in Kerberos (RFC 1510). This leads me to believe very strongly that catching problems before documents reach PS is worth a fairly high price in time. We come to different conclusions here. My conclusion is that no standard should remain at proposed for more than 2 years unless it's revised. Either it goes up, it goes away, or it gets revised and goes around again. Your fundamental problem with RFC 1510 is that it is too painful for people to go and fix the text. And that's a problem that should be addressed as well. Thus, let the IESG have a bias towards approval for PS, and let implementation experience guide them on DS and full standard. But set a clock. This has impact on the WG process of course. People want to do their work and go home. We like WGs to end. Well, what really needs to happen is that either the WG hangs around to push the thing forward, or the doc needs to be assigned some sort of standing WG, akin to an area directorate, who will take responsibility for moving it forward or killing it. And moving it forward shouldn't be that hard EITHER. 
Mostly in the editing of clarifications, removal of functions not found to be used, or perhaps changing a few SHOULDs to MUSTs and vice versa. Let's take an example: COPS-PR, RFC 3084. How many people actually implement it? If we can't find anyone who is, won't it just cause confusion to leave it at PS? And I'd like to know when someone plans to do the work to get Kerberos to DS. Heck, at least it's used by people. Consider HARPOON, RFC 1496, on the downgrading of X.400/88 to X.400/84 with MIME. Ya think Harald wants to take the time to update that one now?! Well, why didn't it happen in some reasonable period of time, when perhaps it might have been more interesting? Was it because nobody actually implemented it or was it simply because nobody felt the need to update it? That said, I realize too much time can be spent on a review. When we're not sure we understand the implications of an issue well enough to know whether it will be a problem, letting a document go to PS and getting implementation experience can be useful. Don't get me wrong. Some review is definitely in order. In as much as they are going to happen they should happen either prior to sending the doc to the IESG at all (remaining within the WG) or in parallel with IESG review. Similarly, if the review process will never successfully conclude, then having the review early is good. Also, I am simply saying that waiting for complete reviews is good and the pressure to get things out as PS faster with less review is dangerous. Only because today PS = Internet Standard, in reality. And that's what needs to change. Eliot
Re: IESG review of RFC Editor documents
Keith, Okay, I read draft-iesg-rfced-documents-00.txt regarding a proposed change in IESG policy regarding RFC-Ed documents. I'm opposed to the change, because I believe it would make it too easy for harmful documents to be published as RFCs. As I'm sure you well know, the RFC Editor takes very seriously their obligations to provide thorough review of non-IETF documents. In fact, what makes you think that the IESG is more conservative than the RFC Editor? Eliot
Re: IESG review of RFC Editor documents
Personally, I'm more concerned by WGs demanding their right to have their half-baked specifications published as RFCs, and for the IESG to approve them without any IETF review or other community review, or (as has happened in the past) even when substantial oversights or design flaws in those specifications were pointed out by individuals. Please cite an example. In what case was there not a last call? Eliot
Re: IESG review of RFC Editor documents
Keith, These days, for a protocol specification to be of reasonable use on a wide scale it needs to avoid causing harm. First, something can be of reasonable use while still causing harm. Fossil-based fuels prove that. And while I agree that there are certain areas where causing harm to others needs to be considered (such as UDP-based protocols that lack well known congestion avoidance algorithms), we as a community cannot be so risk averse that we drive development elsewhere. Consider the case where someone *DID* invent a UDP-based file transfer protocol (FSP). The work was done completely outside the IETF and satisfied a demand. When that demand subsided, use of that protocol diminished. And yet we do not have a specification for this Historic protocol. Similarly, SOCKS went quite far before the IETF ever got a look at it. Why? Because we are no longer viewed as a place where development can seriously take place. Risk averse. You know that thing about running code? Taken too far we fail what I think part of our mission is, which is to be a place to collaborate, because everyone will have shown up with their respective running code, only to fight over whose running code (if anybody's) will become the standard. See, for instance, the XMPP/IMPP wars. There have been too many exploits of security holes and privacy holes in poorly-designed protocols. While it might be useful to publish an informational specification of a widely-deployed protocol on the theory that publishing it will make the public more aware of its limitations and help them migrate to better protocols, publishing a specification of a hazardous protocol that is not widely deployed can encourage wider deployment and increase the risk of harm. Keith is trying to raise the bar. I prefer to keep the bar low. I, frankly, don't see a problem with there being more crap published as RFCs, whether produced by WGs or produced by individuals. 
Publishing crap dilutes the value of the RFC series, and makes it more difficult for the public to recognize the good work that IETF does. It also costs money which could be better put to other uses. This was never the series' intent. We've attempted to warp it into this, and the result has been The Official Dogma, with a corresponding lack of development within the IETF. If we want to allow for REAL innovation WITHIN the IETF, then you have to let some crap through, and you have to trust the RFC Editor and others to hold the bar at some level. Eliot
Re: RFC 3164 i.e. BSD Syslog Protocol
In theory there's no reason multicast SYSLOG shouldn't work. The packet format doesn't need to change and you just need to bind to a multicast socket. I haven't any idea how implementations will currently behave. But you're addressing two separate problems- distribution and reliability. Reliability is addressed with SYSLOG/BEEP, which I believe is a Proposed Standard (RFC 3195). Eliot ___ Ietf mailing list [EMAIL PROTECTED] https://www1.ietf.org/mailman/listinfo/ietf
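A minimal sketch of what "bind to a multicast socket" would look like on the receiving side, with a parser for the RFC 3164 <PRI> prefix. The group address here is an arbitrary choice of mine from the administratively scoped range, not anything the thread or the RFC specifies:

```python
import socket
import struct

SYSLOG_PORT = 514
GROUP = "239.0.0.42"  # assumed admin-scoped group; pick your own

def open_multicast_syslog(group: str = GROUP, port: int = SYSLOG_PORT) -> socket.socket:
    """Bind UDP port 514 and join the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def parse_pri(datagram: bytes):
    """Split an RFC 3164 '<PRI>' prefix into (facility, severity)."""
    if datagram.startswith(b"<") and b">" in datagram[:6]:
        pri = int(datagram[1:datagram.index(b">")])
        return pri // 8, pri % 8
    return None  # no valid PRI; RFC 3164 says treat as unformatted
```

The datagram body is exactly what a unicast sender would emit; only the destination address and the receiver's group membership change, which is the sense in which the packet format doesn't need to change.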
Re: Options for IETF administrative restructuring
[EMAIL PROTECTED] wrote: All, My two cents worth... 5. Section 3.1 of Carl's Report (Page 20) states Evaluation of applicants might consist of a search committee appointed by the IETF Chair. Isn't the appointment of committee members what the IETF empowers the Nomcom for? Not any committee. The NOMCOM focuses on filling our leadership positions. We then need to trust that the NOMCOM has done a good job by allowing their choices the freedom to manage the administrative stuff (IMHO). Eliot
Re: isoc's skills
Dave Crocker wrote: The IETF is choosing ISOC to do a job. The IETF is specifying the job. If the IETF does not like the job that ISOC is doing, the IETF will get someone else to do it. And you think that isn't called contractor? See below. What label would you use? And how does it describe something different from contracting? How about parent organization? But all this does lead to the thought that a basic (inter-) organizational chart would be helpful, showing who reports to whom in terms of giving direction and making hire/fire decisions. Indeed. Eliot
Re: Reminder: Poll about restructuring options
Harald Tveit Alvestrand wrote: John, what I expected when I caused this poll to be created was that there would be a significant number of people choosing No, I do not wish to state an opinion. For multiple reasons - I trust the leadership to decide better than I can was one that people talking to me gave me, in addition to all the ones you mention below. Hi. Just to be clear, I trust the leadership to decide better than I can. I don't know about the rest of you, but I have a day job that has absolutely nothing whatsoever to do with IETF governance. I'd like to have the time to go over all this fun stuff, but even if I did I'd rather spend it pestering some CEOs as to how they would structure such organization(s) for success. I prefer to think of myself as an engineer, not an MBA. Eliot
Re: Reminder: Poll about restructuring options
Hi Margaret, My reading of the situation is that the differences between scenarios 0 and M revolve around contract and corporate law, potentially in multiple jurisdictions. I'm not a subject matter expert in this area. If you're asking that I run this by lawyers, I'd reluctantly do so. But I would hope that the IAB and IESG could do so and report back. Don't get me wrong. It's not that I don't care. I do. And I'm not saying don't ask for community input. I'm just saying that we need you guys to do leg work for us. Eliot
Re: Reminder: Poll about restructuring options
Kai Henningsen wrote: Only Harald disagrees with that, because that is certainly not the question his poll asked - there was no "neither" option. Nor need there be. If the leadership is down to these two choices and one of them is going to be The One(tm), then you might as well run with those two choices. Even if the leadership want to know hypothetically which one is best, there is still no need to have to state I don't like either. Everyone has had plenty of airtime on this issue. Decisions, please. Eliot
Re: Reminder: Poll about restructuring options
John, I agree with you that there is reason to be concerned about a group of technical people who are not lawyers having to make decisions about the organization. However, I don't see delay at this point in time assisting our cause. In fact, the general membership of the IETF (whatever that means) has very few lawyers, and probably very few MBAs. One would have to wait a LONG time for community consensus. As it is I question the validity of the poll answers simply based on the qualifications of the respondents to answer. Rather I hope that the considerably smaller group has been consulting subject matter experts on the best ways to go forward. As I responded to Margaret, if you want me to lawyer up, fine, but that costs time, and quite frankly which one of 0 or M (or any other) gets chosen doesn't seem worth waiting. That a decision gets made by people we in fact empowered through the NOMCOM process (the IAB and IESG) seems to me more important. If you do not like the decision you have every right to make your displeasure known to the NOMCOM. And if the [Ll]eadership of this organization screws up badly enough, the Internet Community *WILL* route around the damage. It's happened before. That's how W3C came to be. Eliot John C Klensin wrote: --On Friday, 01 October, 2004 20:09 +0200 Eliot Lear [EMAIL PROTECTED] wrote: Kai Henningsen wrote: Only Harald disagrees with that, because that is certainly not the question his poll asked - there was no "neither" option. Nor need there be. If the leadership is down to these two choices and one of them is going to be The One(tm), then you might as well run with those two choices. Even if the leadership want to know hypothetically which one is best, there is still no need to have to state I don't like either. Eliot, I have hoped to not have to get into this explicitly, and have been talked out of it (or talked myself out of it) several times. But part of your note calls for a response. 
The Leadership (the reason I'm capitalizing that term will be clear below) has only the authority that the IETF community, rather explicitly, gives them. We have, for example, given the IESG rather broad authority to interpret the standards process and to determine community consensus on what should be standardized. Even there, that distinction is important -- the IESG has no authority to make or proclaim standards on their own and has not attempted to do so. But, to take the IESG as an example, there is nothing in the various procedural documents that gives them _any_ authority to reorganize the IETF, create new organizations, etc. Without such authority, there is rather little difference between * Harald (whom I'm picking out only because of the chair he occupies, not because of anything he has or has not done) standing up and saying I am the IETF Chair. With IESG and IAB support, I have decided X and * Joe Blow (a hypothetical person) standing up and saying I am the Bozo. With the support of a dozen or two of my close friends, I have decided Y Now, I'm actually a big fan of leadership (small l). Without it, especially in a complex discussion, there are high odds of everyone going off wandering in the weeds. So I appreciate the efforts and good intentions of Leslie, Harald, the IAB and IESG, and those they have worked with, to draw these issues together into coherent form and to offer good summaries and advice to the community on what _The Community_ should decide. Those efforts to summarize the issues and delineate and differentiate choices have been at least moderately successful in some parts of this and, in my personal opinion, rather disappointing in others (as has been pointed out, my producing a summary of an even more complex discussion and not being able to get it under 360 lines is not a sign that we have understood all of the issues in a crisp way). Everyone has had plenty of airtime on this issue. Decisions, please. 
But I get very concerned, partially because of my doubts about the extensive training and experience of most of the IAB and IESG in organizational behavior and structures, enterprise-level management, large-organization budget management, contracting and handling multiple subcontractors with interlocking tasks and critical deadlines, etc., when someone says something that sounds to me like ok, start behaving like the King(s) (or Tyrant(s)) you are and decide for us. We need either clear community consensus on where we are headed, or clear community consensus that the IESG and/or IAB (or their Chairs) really should have decision authority for the community in this non-standards area. I suggest we have neither at the moment, although community consensus seems to be becoming more clear on some parts of the issues. john
Re: Reminder: Poll about restructuring options
Spencer Dawkins wrote: Erk! I haven't been involved with W3C since 2000, but I WAS involved in W3C during the late 1990s. It's worth pointing out that the alternate routing mechanism _did_ include a king - at that time, Tim was doing final endorsement for all recommendations, and it looks like Director Endorsement is still the case (see http://www.w3.org/2004/02/Process-20040205/tr.html#q73). You know, Spencer, we *had* a king for a VERY long time, and it was Jon Postel as RFC Editor and IANA. And somehow we survived. While Jon was around, somehow a vast plethora of standards got vetted, not the least of which were IP, TCP, UDP, ICMP, SMTP, FTP, NNTP and DNS. I agreed with some of his decisions and disagreed with others. Probably the same would be true with Tim Berners-Lee. But you missed my point. Don't like the IETF or the W3C? Try the TMF or the DMTF or the ITU or the GGF or the IEEE, or roll your own (everyone else has ;-). I'm not saying don't make the IETF better. I think you do, by the way, through your participation (same with John Klensin and Dave Crocker, fwiw). I am saying that people have and will route around damage. Over this decision, though, I doubt it will come to that. I have faith that the people in the IAB and the IESG care enough about the organization to listen to experts and make a good decision. I hope my faith is not misplaced. Regards, Eliot
Re: Reminder: Poll about restructuring options
Hi Dave, I am trying to imagine any sort of serious protocol development process that used that sort of logic and then had acceptance and/or success. Herein lies the rub. If you try to use our rules of protocol development to develop an organization, we'll never get there. And you and I agree that there are bigger fish to fry, and right now the kitchen is backing up. No, I'm not going to pick apart the rest of your message. I disagree with some of your points, but aside from one point to John and perhaps a response to Mr. Raymond, I've had my say. Eliot
Re: Copying conditions
Simon, What is your goal here? What is it you want to do that you can't do because of either RFC 3667 or RFC 2026? Eliot
Re: Shuffle those deck chairs!
On my way to the dust bin of history, I happened to notice this posting from Eric S. Raymond: In what way? Microsoft now knows that with the mere threat of a patent it either can shut down IETF standards work it dislikes or seize control of the results through the patent system. The IETF has dignaled [sic] that it will do nothing to oppose or prevent these outcomes. If you want an SDO to not allow use of work based on asserted or claimed patent rights, then you have to accept that a company such as Microsoft will always be able to intervene and shut down a group or attempt to force decisions for its strategic benefit. This is actually a strong argument for a compromise position, such as RAND. In short, we've come full circle. Can you take this conversation to the IPR group? Thank you, Eliot
Re: Why people buy NATs
Right. While I didn't want to continue this discussion on the IETF list, as I understand it this is precisely what prefix delegation was meant to be able to handle. Eliot Fred Baker wrote: At 12:35 PM 11/22/04 -0500, Eric A. Hall wrote: One potentially technical hurdle here is the way that the device discovers that a range/block of addresses is available to it. Some kind of DHCP sub-lease, or maybe a collection of options (is it a range of addresses or an actual subnet? how big is it, and does that include net/bcast addresses?), is going to be required. I think you're saying that the router/firewall/gateway thingie needs to have some sequence like:

- initial boot or expiration of previous lease occurs
- CPE router has or forms link-local association with upstream router (note that a non-link-local address on the upstream interface is optional)
- CPE router sends DHCP request for configuration
- upstream router replies with address of DHCP server, DNS server, and a prefix with a lease. It also configures itself with a local route to that prefix via the CPE router.
- CPE router configures interior interface with said prefix and starts some combination of autoconfiguration and DHCP configuration of downstream hosts.
- If Dynamic DNS is in use, some hosts may advise the DNS server of their new address. If there is a management contract (ISP knows about and does something with the CPE router), supplying the router's address upstream is one possible use of DDNS.

Note that in the case that DDNS is in use and we are triggering off lease expiration, the process needs to take the concepts and issues of http://www.ietf.org/internet-drafts/draft-ietf-v6ops-renumbering-procedure-02.txt into account. I have added Ralph Droms to this. Ralph, your suggestion? So it would obviously be useful that Linksys et al make sure that the specs are there to help them continue providing the same kind of high-value low-management experience. 
This is the kind of cross-industry participation I'm talking about needing. I'll argue that this is pretty much what the IETF has always done. It comes down to someone who sees the need proposing a solution and making sure the other folks who are likely to be interested buy into it. It is fundamental to what we do.
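The prefix-delegation sequence Fred describes can be sketched in miniature. This is a toy illustration only -- the function names and the pool and prefix lengths are assumptions for the sketch, not any real DHCPv6 API; it models just the prefix arithmetic (the upstream router delegates a prefix, and the CPE carves an interior subnet from it for its downstream hosts):

```python
import ipaddress

def delegate_prefix(upstream_pool, length=56):
    """Upstream router hands out the first /56 from its pool (hypothetical step:
    in real DHCPv6-PD this would come back in a reply with a lease)."""
    return next(upstream_pool.subnets(new_prefix=length))

def configure_cpe(delegated):
    """CPE router carves a /64 for its interior interface from the delegated
    prefix; downstream hosts would then autoconfigure within it (not modeled)."""
    return next(delegated.subnets(new_prefix=64))

# ISP's delegation pool (documentation prefix, purely illustrative)
pool = ipaddress.ip_network("2001:db8::/48")
delegated = delegate_prefix(pool)     # what the upstream router assigns
interior = configure_cpe(delegated)   # what the CPE announces inside
print(delegated, interior)
```

The point of the sketch is only that the CPE never needs to know its addressing ahead of time: everything interior is derived from whatever prefix the upstream hands it, which is why renumbering on lease expiry (the draft Fred cites) becomes the interesting case.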
Re: Why people buy NATs
Eric S. Raymond wrote: Indeed. I think this is true. Several people on this list have tried to tell me that I don't really want the IP address space on my local net to be decoupled from the server address. They are wrong. I want to be able to change ISPs by fixing *one* IP address in *one* place, and I want to control the mapping from global IP addresses to local ones. This desire has nothing to do with IPv4 vs. IPv6 and everything to do with wanting to be able to make only small, conservative changes in my network configuration rather than having to completely disrupt it. You wouldn't care about touch points if even a large number were reliable and secure, and that is the key. At the consumer level I think it's VERY important that most people not care about the IP address they are assigned. In fact it's important that they not have to know anything about how they're addressed! And you're right: it doesn't matter whether it's v4 or v6. So. Where are the gaps? Eliot
Re: Another document series?
Mike, As the other co-author to http://www.ietf.org/internet-drafts/draft-ietf-newtrk-cruft-00.txt, [...] It's unclear that either the work in progress or the cited drafts will ever be published as RFCs. It's also unclear that this (restructuring etc.) will be resolved within the 6 month lifetime of any given ID. It's also unclear that we can afford to either have these expire, or continually resubmit them. And finally, we NEED to have this set of documents as permanent archivable documents to maintain the historical record. I think the current cruft draft is best suited only as an I-D. We are right now conducting a cruft experiment. It is quite possible -- in fact darn near certain -- that we will need to update that draft based on the experiences of the old-standards mailing list, making the current draft, well, cruft. Depending on the results of the experiment, I think the WG could produce either an update to RFC 2026/3667 or an informational RFC. Why is this not sufficient? What is it you want to capture? Eliot
List of Old Standards to be retired
Hello, This is an update from the Old Standards experiment. Below is a list of proposed standards that are candidates to be obsoleted. The old standards mailing list has vetted a good number, but a good number still remain. We are looking for experts who can say affirmatively whether a standard is implemented and in use. In particular, many of the docs fall into four categories:

- telnet options
- MIBs (for X.25, 802.5, FDDI, and others)
- SOCKS
- interaction with other protocol stacks (ISO, IPX, AppleTalk, SNA, etc.)

If you see a document on the list below and you know it to be in use, would you please reply to this message indicating the RFC number, and whether you believe the doc should be advanced beyond proposed? Also, if you know of work to update anything on the list below, please include that. A note along these lines is generally sufficient to remove a document from the list below.

RFC0698 Telnet extended ASCII option
RFC0726 Remote Controlled Transmission and Echoing Telnet option
RFC0727 Telnet logout option
RFC0735 Revised Telnet byte macro option
RFC0736 Telnet SUPDUP option
RFC0749 Telnet SUPDUP-Output option
RFC0779 Telnet send-location option
RFC0885 Telnet end of record option
RFC0927 TACACS user identification Telnet option
RFC0933 Output marking Telnet option
RFC0946 Telnet terminal location number option
RFC1041 Telnet 3270 regime option
RFC1043 Telnet Data Entry Terminal option: DODIIS implementation
RFC1053 Telnet X.3 PAD option
RFC1234 Tunneling IPX traffic through IP networks
RFC1239 Reassignment of experimental MIBs to standard MIBs
RFC1269 Definitions of Managed Objects for the Border Gateway Protocol: Version 3
RFC1276 Replication and Distributed Operations extensions to provide an Internet Directory using X.500
RFC1277 Encoding Network Addresses to Support Operation over Non-OSI Lower Layers
RFC1285 FDDI Management Information Base
RFC1314 A File Format for the Exchange of Images in the Internet
RFC1328 X.400 1988 to 1984 downgrading
RFC1370 Applicability Statement for OSPF
RFC1372 Telnet Remote Flow Control Option
RFC1378 The PPP AppleTalk Control Protocol (ATCP)
RFC1381 SNMP MIB Extension for X.25 LAPB
RFC1382 SNMP MIB Extension for the X.25 Packet Layer
RFC1397 Default Route Advertisement In BGP2 and BGP3 Version of The Border Gateway Protocol
RFC1414 Identification MIB
RFC1415 FTP-FTAM Gateway Specification
RFC1418 SNMP over OSI
RFC1419 SNMP over AppleTalk
RFC1420 SNMP over IPX
RFC1421 Privacy Enhancement for Internet Electronic Mail: Part I: Message Encryption and Authentication Procedures
RFC1422 Privacy Enhancement for Internet Electronic Mail: Part II: Certificate-Based Key Management
RFC1423 Privacy Enhancement for Internet Electronic Mail: Part III: Algorithms, Modes, and Identifiers
RFC1424 Privacy Enhancement for Internet Electronic Mail: Part IV: Key Certification and Related Services
RFC1461 SNMP MIB extension for Multiprotocol Interconnect over X.25
RFC1469 IP Multicast over Token-Ring Local Area Networks
RFC1471 The Definitions of Managed Objects for the Link Control Protocol of the Point-to-Point Protocol
RFC1472 The Definitions of Managed Objects for the Security Protocols of the Point-to-Point Protocol
RFC1473 The Definitions of Managed Objects for the IP Network Control Protocol of the Point-to-Point Protocol
RFC1474 The Definitions of Managed Objects for the Bridge Network Control Protocol of the Point-to-Point Protocol
RFC1478 An Architecture for Inter-Domain Policy Routing
RFC1479 Inter-Domain Policy Routing Protocol Specification: Version 1
RFC1494 Equivalences between 1988 X.400 and RFC-822 Message Bodies
RFC1496 Rules for downgrading messages from X.400/88 to X.400/84 when MIME content-types are present in the messages
RFC1502 X.400 Use of Extended Character Sets
RFC1512 FDDI Management Information Base
RFC1513 Token Ring Extensions to the Remote Network Monitoring MIB
RFC1518 An Architecture for IP Address Allocation with CIDR
RFC1519 Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy
RFC1525 Definitions of Managed Objects for Source Routing Bridges
RFC1552 The PPP Internetworking Packet Exchange Control Protocol (IPXCP)
RFC1553 Compressing IPX Headers Over WAN Media (CIPX)
RFC1582 Extensions to RIP to Support Demand Circuits
RFC1584 Multicast Extensions to OSPF
RFC1598 PPP in X.25
RFC1618 PPP over ISDN
Re: [newtrk] List of Old Standards to be retired
Margaret, Thanks for your note. Please see below for responses: Margaret Wasserman wrote: RFC0885 Telnet end of record option This option was, at least at one time, used for telnet clients that connected to IBM mainframes... It was used to indicate the end of a 3270 datastream. I don't know if it is still used in that fashion, but Bob Moskowitz might know. Thanks. It sounds about right. I'm sure tn3270 is out there and used but I don't know what options it uses. RFC1041 Telnet 3270 regime option I'm not sure what this was ever used for, but again Bob Moskowitz would be a good person to ask if this is still in-use. Right. RFC1269 Definitions of Managed Objects for the Border Gateway Protocol: Version 3 Why would this be cruft? The BGP4 MIB was just recently approved... Good thing too. Take a good look at 1269. I don't think it would pass a MIB compiler test today. If you approved the BGP4-MIB, ought not that have obsoleted this guy? RFC1518 An Architecture for IP Address Allocation with CIDR RFC1519 Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy CIDR is still in-use and rather frequently discussed in other documents. Are there newer references or something? Yah. Something slipped here. One of these two docs is cruftier than the other, and while we don't have a newer reference, we're likely to cruftify one of them and recommend that the other be revised and advanced. Eliot
Re: RFC1269 - [was: RE: [newtrk] List of Old Standards to be retired]
Bert, I'll remove it from the list with the expectation that the new MIB will obsolete the old one. However, I note that this is currently not stated in the header of the draft. Eliot Wijnen, Bert (Bert) wrote: W.r.t. RFC1269 Definitions of Managed Objects for the Border Gateway Protocol: Version 3 Why would this be cruft? The BGP4 MIB was just recently approved... Good thing too. Take a good look at 1269. I don't think it would pass a MIB compiler test today. If you approved the BGP4-MIB, ought not that have obsoleted this guy? The new doc (in IESG evaluation status) is draft-ietf-idr-bgp4-mib-15.txt And its abstract says it all: Abstract This memo defines a portion of the Management Information Base (MIB) for use with network management protocols in the Internet community. In particular, it describes managed objects used for managing the Border Gateway Protocol Version 4 or lower. The origin of this memo is from RFC 1269 Definitions of Managed Objects for the Border Gateway Protocol (Version 3), which was updated to support BGP-4 in RFC 1657. This memo fixes errors introduced when the MIB module was converted to use the SMIv2 language. This memo also updates references to the current SNMP framework documents. This memo is intended to document deployed implementations of this MIB module in a historical context, provide clarifications of some items and also note errors where the MIB module fails to fully represent the BGP protocol. Work is currently in progress to replace this MIB module with a new one representing the current state of the BGP protocol and its extensions. This document obsoletes RFC 1269 and RFC 1657. Distribution of this memo is unlimited. Please forward comments to [EMAIL PROTECTED] So 1269 will soon be made OBSOLETED. I will look at other MIB related documents next week. Bert
Re: List of Old Standards to be retired
The way we've been running the experiment, if there is a portion of the doc that is still useful, then we leave it alone but recommend an update. In other words, no status change until someone takes further action. Having said that, I will remove RFC 1618 from the list, but it would be great if you did the update and in the process then obsoleted RFC 1618. Eliot Carsten Bormann wrote: On Dec 16 2004, at 12:46, Eliot Lear wrote: RFC1618 PPP over ISDN We had a short discussion about this in pppext. The gist was: The document is pretty bad (partly because things were murky in 1994, but also because it was written by Martians that had no space ship to take them to the ISDN planet), but some parts of it do describe what currently shipping, actively marketed products do (and should do) in this domain. Having done some ISDN work in the late 80s/early 90s, I all but volunteered during this discussion to do a DS version of the thing (probably by striking 80 % of the text and rewriting the rest). If this is what it takes to keep an active standards-track specification about IP over ISDN, I'll do that tiny amount of work. Or we could leave it where it is. Greetings, Carsten
Re: [newtrk] Re: List of Old Standards to be retired
Eric Rosen wrote: Let me echo Bob Braden's "if it's not broken, why break it?" query. Because maybe it is broke. Even if someone *has* implemented the telnet TACACS user option, would a user really want to use it? The process is broke. We say in 2026 that proposed standards shouldn't just hang around, and there is good reason why they shouldn't. The stuff that does hang around falls into three categories:

1. that which works so well that we just never got around to advancing it
2. that which was useful at some point but is no longer
3. that which was never useful, where what we really had was a failed experiment

In both [2] and [3], if someone lacking experience decides to (re)implement one of these, that person is likely in for a snoot full of trouble by way of security and interoperability, owing to the fact that the world has changed since many of these documents were written. And if something wasn't useful in the first place, perhaps smarter people than that someone figured out why. This is a simple way to do what we said we were going to do. Eliot
Re: [newtrk] Re: List of Old Standards to be retired
Eric Rosen wrote: Even if someone *has* implemented the telnet TACACS user option, would a user really want to use it? I don't know. Do you? Yes, I do. Many of us do. And that's the point. How do we go about answering a question like that? We will spend less time discussing that option than we will on the meta-conversation, it seems.
Re: [newtrk] Why old-standards (Re: List of Old Standards to be retired)
John, Harald, while I agree in principle, I would suggest that some of the comments Eric, Bill, and others have made call for the beginnings of an evaluation of your experiment. I further suggest that evaluation is appropriate at almost any time, once data start to come in. First a reminder of what the procedure for this experiment is, before we talk about modifying it:

1. Start with the list of RFCs prior to RFC 2001 and marked PS.
2. Remove all RFCs known to still be relevant where new implementations might be necessary. Be liberal in what we accept for removal purposes.
3. Take the list to the IETF and other relevant mailing lists so that as wide an audience as possible can have a say.
4. Present the list to the IETF and the NEWTRK WG in Minneapolis.
5. Here things depend on the state of John's draft. If it's ready and takes into account cruft, then we follow that procedure. Otherwise, extended WG last call, extended IETF last call, IESG consideration, and then take one of the following two steps:
6. Either reclassify documents as Historic and write up a BCP, or don't, and write up a failed experiment.

This follows the WG milestones:

Dec 04: If the consensus was to create a new RFC cleanup process, then initiate a trial of the cleanup process based on the description in the Internet Draft.
Mar 05: Determine if there is WG consensus that the trial of the RFC cleanup process was successful enough to proceed to finalize the process.
Jul 05: If there was WG consensus that the trial of the RFC cleanup process was successful, submit an ID describing the process to the IESG for publication as a BCP RFC.

Now to your points:

1. Nobody in the IETF *has* to participate in this experiment. The fewer who do, the more likely we'll end up with a substandard list, and the less likely the IESG will buy the list. We don't seem to be having this problem at the moment.
2. Changing the procedure should only be necessary if we can identify specific flaws in the procedure. 
The objections I hear right this very moment fall into two categories:

[a] There's obviously a lot of trash out there, so can't you just focus on that? [vjs, others] The working group agreed to Larry Masinter's suggestion that we make people take the minimal step of having to send an email to request that a document remain PS.

[b] This whole effort is a waste of time, and you could do more damage than good. [braden, rosen, perhaps you] Stopping the procedure I agreed to carry out, without WG consensus, is not something I would consider short of a family or health emergency.

Eliot
Re: Excellent choice for summer meeting location!
Elwyn Davies wrote: Oh, and BTW I can go there on an (air-conditioned) train in only 3 hours. The USA should invest in a few high speed trains; they are the world's best way to travel. There's a slightly bigger pond between the U.S. and France... Eliot
looking for archives
Hello, Does anyone have an archive of the IETF list prior to 1991? I am specifically looking for 88-90 incl. Thanks, Eliot
Re: individual submission Last Call -- default yes/no.
Dave, You make an assumption here that there is some relationship between the usefulness of a standard done from a working group and those individual submissions. Is that assumption borne out in fact? Just asking. I haven't checked too much. Eliot Dave Crocker wrote: On Fri, 07 Jan 2005 10:46:41 +0100, Harald Tveit Alvestrand wrote: The usual case for an individual submission is, I think: - there are a number of people who see a need for it - there are a (usually far lower) number of people who are willing to work on it - nobody's significantly opposed to getting the work done Harald, Given that we are talking about an individual submission, two points from your list are curious: 1. The last point is at least confusing, since the submission comes *after* the work has been done; otherwise it would be a working group effort; so I do not know what additional work you are envisioning. 2. Since there is no track record for the work -- given that it has not been done in an IETF working group -- then what is the basis for assessing its community support, absent Last Call comments? If one has no concern for the IETF's producing useless and unsupported specifications, then it does not much matter whether marginal specifications are passed. However the IESG's diligence at seeking perfection in working group output submitted for approval suggests that, indeed, there is concern both for efficacy and safety. How are either of these assessed for an individual submission, if not by requiring a Last Call to elicit substantial and serious commentary of support? d/ ps. The IESG used to be very forceful in requiring explicit statements (demonstrations) of community support. I suspect we have moved, instead, towards delegating the assessment almost entirely to our representatives and their subjective preferences for work that is submitted. -- Dave Crocker Brandenburg InternetWorking +1.408.246.8253 dcrocker a t ... 
WE'VE MOVED to: www.bbiw.net
Re: Appeals text in IASA BCP -06
On administrative decisions, the board of directors of a non-profit ought to have final say, and we should trust that they're not going to overturn a decision, causing us to break a contract or otherwise subjecting us to liability, without VERY good cause. In short, we can't do this stuff without some trust in those folks. Eliot
Re: MARID back from the grave?
Keith Moore wrote: IMHO, charters should not be bound to specific documents. It's one thing to say WG X will produce a document describing protocol Y, quite another to say WG X shall publish draft-ietf-x-joe's-specification-for-y. It's up to the WG, not the ADs, to decide which specification to submit to IESG to meet a particular charter requirement. And WGs should be able to change their minds about such things. I agree with this. In fact I think it should take rough consensus to keep a draft from becoming a WG draft. Alternatives are good! Eliot
Re: Last Call: 'Requirements for IETF Draft Submission Toolset' to Informational RFC
Bruce Lilly wrote: Such as line breaks in the middle of words, followed by loss of indentation? N.B. no smiley. So what? The nice thing about an XML format is that if you don't like the representation, you can change it without changing the source. Isn't that nice?! Eliot
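A toy illustration of that point (the element names and the two little renderers here are invented for the sketch, and have nothing to do with the real xml2rfc vocabulary): the same XML source can be given entirely different presentations while the source itself stays untouched:

```python
import xml.etree.ElementTree as ET

# Hypothetical draft source; "section" and "t" are made-up element names.
SOURCE = "<section><t>Hello, world.</t><t>This is a draft.</t></section>"

def render_flat(src):
    """One paragraph per line, no indentation."""
    root = ET.fromstring(src)
    return "\n".join(t.text for t in root.iter("t"))

def render_indented(src):
    """Same content, different presentation: a three-space indent."""
    root = ET.fromstring(src)
    return "\n".join("   " + t.text for t in root.iter("t"))

# Two renderings, one unchanged source.
print(render_flat(SOURCE))
print(render_indented(SOURCE))
```

If a renderer mangles line breaks or indentation, you fix or replace the renderer; the source, and every other output derived from it, is unaffected.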
Re: Last Call: 'Requirements for IETF Draft Submission Toolset' to Informational RFC
Bruce Lilly wrote: Not if the primary output is unusable. But maybe I missed your point... Yes. Don't like the software? Write your own... Eliot
Re: Last Call: 'Requirements for IETF Draft Submission Toolset' to Informational RFC
Scott W Brim wrote: I wonder how many of those have actually written a draft using both? Isn't it sufficient to have suffered *roff in other contexts?
Re: Voting (again)
Yakov, Perhaps the IETF traditional motto, "rough consensus and working code", should be revised to make it clear that the rough consensus goes only up to a certain point, but after that point the IETF operates solely by a decree from the IESG. You and I were both in the room when the Ethernet-MIB WG LOUDLY objected to Jon Postel having tweaked the Ethernet MIB to properly align its definitions with the IEEE counters. The WG chair was rip-roaring upset. Here you had rough consensus and running code. How dare others interfere! Of course it was a broken spec, but never mind that! The WG knew better. Good thing *someone* was minding the store. Eliot
Re: Voting (again)
An individual has the ability to write a draft. The IESG has the ability to gauge consensus as to whether that draft should become a standard. So in essence they have the capability today. The fact that it is rarely used is a testament to the good judgement these good people display. In other words, at least this part ain't broke. Eliot william(at)elan.net wrote: On Tue, 19 Apr 2005, Eliot Lear wrote: Yakov, Perhaps the IETF traditional motto, rough consensus and working code should be revised to make it clear that the rough consensus goes only up to a certain point, but after that point the IETF operates solely by a decree from the IESG. You and I were both in the room when the Ethernet-MIB WG LOUDLY objected to Jon Postel having tweaked the Ethernet-MIB to properly align definitions with what the IEEE counters were. The WG chair was rip roaring upset. Here you had rough consensus and running code. How dare others interfere! Of course it was a broken spec, but nevermind that! The WG knew better. Good thing *someone* was minding the store. At the same time the reverse is not true, i.e. I do not think the IESG should be allowed to make a decision on a document on its own if there is no consensus.
Re: Voting (again)
Dave Crocker wrote: the current ietf's track record is quite poor, both with respect to timeliness and quality. quite simply we are taking a long time to turn out lots of specifications that tend not to get used very much. I think we can each find examples on either side of this assertion. MIME took quite some time to get right, for instance, but it's a mighty fine standard. SIP similarly so. IPv6? Remains to be seen. There are others that are less tested and some that are probably unused, but if everything we did was used it would mean that our bar was so high to get stuff through that even more good stuff could have been done. It would be better to have some sort of analysis of this. so it is unclear that the considerable costs of the current 'quality' mechanisms are working very well. Fair enough as an assertion, and an area about which I'm concerned, having been told by IESG members previously that they've held up documents just because they were concerned about premature ossification. However, the other side of this argument is that it is onerous to update a standard, and I think that is due more to real installed-base matters than to IETF procedures. Again, IPv6 probably shows the most extreme example of this. you might say that that means we need to get the 10% to a smaller number, but my point is that the 90% is being approved without much benefit to the community (or, apparently, much harm) so it's not clear that all that effort to enforce quality criteria imposed by individual area directors is all that useful... You need to back up your estimate. It's not reasonable to just accept those numbers without some analysis. except for frustrating the heck out of all those hard-working working group participants who took so long to create the now-delayed specification... Certainly this happens. And as to this: Note the kinds of examples that John cited; there are plenty of opportunities for the IESG to do a good job, but without taking so much time out of an AD's life. 
However, this requires a) prioritizing what ADs will spend their time on and, almost certainly, b) giving up some power. Depends on who gets that power. If it's the WG chair, we have any number of examples where a WG really doesn't do the job well enough for release, the most famous being the one I related earlier. Eliot
Re: improving WG operation
Margaret, The words I hate most when I am in a WG meeting are these: "take it to the mailing list". That is usually short for "we can't agree in person, so we'll now continue to disagree by email." Debate has been shut off, and usually prematurely, because there is something else on the agenda. I'd rather that never happen. I think it's fair to specify the parameters for a decision and then go to the mailing list so that people can evaluate different solutions based on those parameters, but simply blowing off a topic because the group cannot agree is a failure of leadership. So, in answer to the question you asked Dave, I would agree with him about [1] and [2] in his message. I don't fully understand [3]. I would go a bit further to say that the agendas should be approved by an AD. Why? Because it forces the AD to pay attention to the group. No group should run on auto-pilot. Any AD that cannot do this with little or no effort should spend more time with the WG in question. The AD gets to approve the order. If agenda bashing shows that the chair missed something, then there was a failure on the mailing list, and corrective action should be taken to fix the problem. I would not penalize a WG for not getting to the end of its agenda. That, in fact, is a call for an interim meeting, perhaps. I would add one more thing. We need whiteboards, ones with erasers. We used to have them years and years ago. Being able to draw out solutions and list and reorder problems is a good thing. So, a not-so-fictitious example: The ISMS WG is currently struggling to choose between one of three architectures for integrated SNMP security models. A call for consensus has been issued, and thus far there is none. The reason there is none is that people do not yet agree on the underlying requirements, IMHO. This is all good fodder for an in-person meeting. If neither mailing list nor in-person meeting can solve the problem, then the AD needs to step in and do something. 
Prior to the meeting there should be a short summary of the issues, the pros and cons of each alternative, as well as proposed evaluation criteria. The meeting may be a good venue to expand or contract those criteria.

Eliot ___ Ietf mailing list Ietf@ietf.org https://www1.ietf.org/mailman/listinfo/ietf
Re: Unnecessary slowness.
Working groups are expensive. Very expensive. To me this discussion shows that documents are as expensive as working groups, and maybe more so. Unless the document is relatively straightforward, it's easier for someone doing an individual draft to be funneled to a working group so that there's a first round of review. In fact, a working group that doesn't produce anything seems pretty darn cheap, especially if it doesn't meet.

Eliot
Re: Unnecessary slowness.
Dave,

You described the charter as a contract between the WG and the rest of the IETF. I'll argue an alternative below, but let's stick with contracts for the moment. My basic understanding of contract law tells me that there are certain real-world and legal limitations on contracts that must be considered:

1. A contract requires due consideration from all parties, meaning that nobody performs without an expectation of receiving some sort of benefit. What are the benefits to the WG, and what are the benefits to the rest of the community?

2. Most contracts don't require the dissolution of a party should it default. Rather, there are remedies, oftentimes monetary. Sometimes there are bonuses. My point: dissolving a working group because it is late would often be imprudent, and short of replacing the chair there are very few other viable remedies.

3. A contract in which one of the parties cannot reasonably be expected to carry out its obligations is not a contract at all. Again, if the WG is a party and the community is a party, what is the expectation of the community, and is it reasonable? But even within the WG, is it reasonable ever to expect innovation as part of the solution under such a circumstance?

4. Contracts generally have limitations when it comes to unforeseen circumstances, force majeure, etc. One such unforeseen circumstance is where a working group comes up with a solution and the IESG finds sufficient fault with it to require a review. In this case, the terms of the contract need to allow for a resetting of the charter.

As I wrote above, I don't think the contract analogy is perfect. I think we want a cross between a research proposal, à la NSF, and a contract, where we allow for changes in direction based on developments within the working group. I'm not saying we should ignore non-performing or under-performing groups, but we should certainly allow some flexibility.
Eliot
Re: Unnecessary slowness.
Hi Dave,

Dave Crocker wrote:
> interesting note. it is always provocative to challenge long-standing precepts, in this case as per section 2.2 of RFC 2418, Working Group Guidelines, first published as RFC 1603, in 1994. That does not guarantee that your challenge is misplaced but rather that it is fundamental to the long-established view (or that it isn't so much a challenge but rather a call to make sure we know what we have been meaning.)

Yes, I understand. My point is that the analogy is strained. Please see below.

> Contracts typically DO have an out. So, dissolution of the relationship is not at all uncommon. Yes, having steps for first attempting to cure the problem is, equally, typical, but cures often do not work.

First, I agree that terms of dissolution are important, and I probably wasn't as clear as I could have been. I meant that one shouldn't start with the nuclear option, if you'll pardon the use of the term.

> The community expects working groups to produce material that is timely, relevant -- to the targeted consumer base -- and useful. (For anyone who hasn't been tracking the ietf list recently, these 3 items are my own mantra, though I think they are entirely or largely compatible with the views of many others.)

Yes. You and I differ on what "timely" means. We also differ about the nature of work done in WGs.

> As much as the IESG does, indeed, seem to be a force of nature, your example nicely underscores the problem of last-minute requirements-creation by the IESG. They have a choice in the matter. It does not come from some external, uncontrollable source.

We have a working group that did something unexpected. It wasn't against its charter, but it doesn't sit well with various IESG members. What should happen? Should the IESG suffer the choice? If so, why do they exist? Keep in mind, by the way, that IESG members change. What happens then? Should the new member let such a concern go just because the old guy didn't catch it earlier?
All of this just happened in ISMS.

> In the larger and more diverse IETF, I must entirely disagree with any suggestion that IETF working groups can operate in a research mode of any sort. That's what the IRTF is for. The IETF side of things is for developing a community consensus about the engineering of a technology or practise for the global Internet.

I guess my concern is that we're not building cookie-cutter houses. We're not a fab factory. Companies with hierarchical control can't predict the exact release of a product when new techniques are used. They can't do this precisely because the techniques are new. Who knew, for instance, that going into ietf-smtp we would end up with MIME? Not I. Who knew going into the IPng effort that we'd end up with a 16-byte address? Definitely not I. Why? Because some of this stuff is Hard. While recognizing that good is the enemy of great, I would hate it if a WG felt so under the gun to complete that it shipped schlock out the door. I can point to several examples of that.

Regards,

Eliot
Re: IANA Action: Assignment of an IPV6 Hop-by-hop Option
Rob,

| is whether the proposed mechanism will interfere
| with existing or other proposed mechanisms.

> This is totally irrelevant. We're talking about an option. Options, by their very nature, are optional. If use of an option interferes with some other processing that you require, then you simply don't use the option.

Let's look at an analogy that worked just as you suggest: the assignment of 10/8, 172.16/12, and 192.168/16 in RFC 1597. Jon Postel's decision absent community review (and it was absent) had a profound impact on the Internet architecture. While we can argue till the cows come home as to whether that impact was good or bad [I really am over it! ;-], people will agree that we live with the results today and will continue to do so for years to come. I claimed at the time, and still believe, that Jon's actions were reckless and that the IAB was wrong not to at least review the allocation request.

Your argument that it's just an option doesn't wash with me, not so much because intervening devices will have to ignore it but quite the opposite: what happens if intervening devices implement it? Had nobody made use of Network 10, the debates wouldn't rage on to this day. If people make use of Dr. Roberts' work where it provides some marginal value to some subset of the community but at the expense of the rest of us, what do we do then to repair the damage?

And so with respect to the following:

> The only potential cost here is the loss of an option code value, until such time as this option is certified extinct (assuming that it fails, of course; if it succeeds, then the option would be in active use, and no-one would want to reclaim it).

I would disagree, for the same reasons. I care more about what happens if the thing gets used and causes a bad interaction. I don't see us getting into a situation of having to implement the equivalent of the telnet EXOPL, given that Bob is talking about deprecating hop-by-hop altogether.
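As an aside, the three blocks RFC 1597 set aside are easy to test for mechanically. A minimal sketch using Python's standard ipaddress module (which of course postdates this thread; the helper name is invented for illustration):

```python
import ipaddress

# The three private blocks assigned in RFC 1597 (later carried forward in RFC 1918).
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1597_private(addr: str) -> bool:
    """Return True if addr falls inside one of the RFC 1597 private blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in PRIVATE_BLOCKS)

# 172.16/12 spans 172.16.0.0 through 172.31.255.255, so:
inside = is_rfc1597_private("172.31.0.1")    # True
outside = is_rfc1597_private("172.32.0.1")   # False
```

The subtle point the /12 illustrates is exactly the one at issue above: an allocation made without review still ends up hard-coded into filters and checks like this one, everywhere, forever.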
In this case, I can't say whether or not there will be bad interactions, but clearly the IESG is very concerned.

Eliot
Re: IANA Action: Assignment of an IPV6 Hop-by-hop Option
Please see below:

> Whether that discussion amounted to consensus or not I wouldn't like to say after all of this time, but it certainly occurred.

Not publicly. Certainly there was a problem. Indeed, someone (I forget who) had made a request for a /8, which forced the issue.

| What happens if intervening devices implement it?

> And, I presume you mean, people actually send packets containing the option. Then either it does useful things (for someone), or it does not. In the former case, the option remains, and whether you or I like the effects produced, someone clearly would have to. And if they're getting a benefit, they are going to do that, whether blessed or not, registered or not (just consider the amount of rewriting of the v4 TOS byte that happened, long before anyone considered blessing that practice). If it doesn't work, or no-one likes it, then it gets disabled (the vendor who added it gets pressure to at least make processing the option optional, and it is disabled).

My concern is the ground in between, where some people derive some benefit from it to the detriment of others because of some interaction introduced on the device (be it a queuing behavior that leads to fairness problems or what have you), where the function has gotten implemented and is used where perhaps it wasn't intended (say, like, oh... network 10). Is this architectural purity? Perhaps. It's really not what I would aim for. I think the standard should simply be "will the thing cause harm?", and the judges of that standard should be the IESG and not IANA. That's all I'm saying.

Eliot
Re: IANA Considerations
Joe,

> It seems like such [IANA] considerations are, by definition, relevant only for standards-track RFCs. It is not useful to require it for other documents.

I think you're correct in general, but this is not always the case. Consider URI schemes. I think they're often informational, and in general I see no reason for them to be otherwise.

Eliot
Re: Question about Obsoleted vs. Historic
I would point out that it is historically useful to be able to track changes between Draft and Full, or Proposed and Draft, and we don't list status information in the RFCs...

Eliot

[EMAIL PROTECTED] wrote:
> Hi, I was wondering if someone could help me out on this one. I was doing a bit of analysis on the current RFC list, and noticed that some Draft Standard documents are obsoleted. For example:
>
> 954 NICNAME/WHOIS. K. Harrenstien, M.K. Stahl, E.J. Feinler. Oct-01-1985. (Format: TXT=7397 bytes) (Obsoletes RFC0812) (Obsoleted by RFC3912) (Status: DRAFT STANDARD)
>
> This really made me scratch my head. One would imagine that if a protocol is obsoleted by another, it would not be listed as a Draft Standard any longer. What is the reason for continuing to list something obsolete as a Draft Standard?
>
> John
calendar file for IETF
For the daring, there is http://www.ofcourseimright.com/~lear/ietf63.ics. I claim no competence in any of this. No responsibility if you miss your meetings. No promises to update it. But it works for me.

Eliot
Re: calendar file for IETF
> Thanks for the file. Unfortunately it is not a valid iCalendar file. To fix this, just add the following line below the 'BEGIN:VCALENDAR' line: VERSION:2.0

Done!

> In addition, each VEVENT component needs to have a UID property with a unique identifier in each one.

Done!

> Also, I note there is a new updated agenda out that is different from the one you posted.

NOT DONE - will work on it tomorrow.

Eliot

> BTW I think it might be worthwhile for the folks working on tools for IETF processes to look into having an automatic iCalendar generator for IETF agendas, as a lot of people now have iCal-capable clients that they could use to display the agendas. Another case where we should eat our own dog food!

AGREE!
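The two fixes described above are simple to apply mechanically. A minimal sketch in Python (patch_ics is an invented name, not part of any real tool): it inserts VERSION:2.0 after BEGIN:VCALENDAR if the file lacks a version, and gives any VEVENT without a UID a freshly generated one before its END:VEVENT line.

```python
import uuid

def patch_ics(text: str) -> str:
    """Insert VERSION:2.0 after BEGIN:VCALENDAR and add a UID to each
    VEVENT, when either is missing. Already-valid input passes through."""
    out, has_uid = [], True
    for line in text.splitlines():
        if line.strip() == "END:VEVENT" and not has_uid:
            # Each VEVENT needs a globally unique identifier.
            out.append(f"UID:{uuid.uuid4()}@example.invalid")
        out.append(line)
        if line.strip() == "BEGIN:VCALENDAR" and "VERSION:" not in text:
            out.append("VERSION:2.0")
        if line.strip() == "BEGIN:VEVENT":
            has_uid = False
        elif line.strip().startswith("UID:"):
            has_uid = True
    return "\n".join(out)

broken = ("BEGIN:VCALENDAR\n"
          "BEGIN:VEVENT\nSUMMARY:plenary\nEND:VEVENT\n"
          "END:VCALENDAR")
fixed = patch_ics(broken)
```

The @example.invalid domain in the generated UIDs is a placeholder; a real generator would use its own host name. Running the function twice changes nothing the second time, since both checks are skipped once the properties are present.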
Re: calendar file for IETF
An additional update reflecting yesterday's changes is now available at http://www.ofcourseimright.com/~lear/ietf63.ics. Additional stuff:

- UIDs *should* be stable across changes.
- An attempt has been made to make proper use of SEQUENCE.
- An attempt has been made to parse out LOCATION information.
- Garbage-in/garbage-out problem repaired.
- Several bad dates have been corrected.

Usual cautions apply. This calendar file could blow up any tool it is applied to. But it didn't blow up iCal, at least.

Eliot
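The first two bullets go together: a UID that stays the same across regenerated agendas, plus a SEQUENCE that increases with each revision, is what lets a client replace an event it already imported instead of duplicating it. A hedged sketch of one way to get stable UIDs (deriving them from the meeting and session names via a hash; the function names and the example.invalid domain are invented for illustration, not how the actual ietf63.ics was built):

```python
import hashlib

def stable_uid(meeting: str, session: str) -> str:
    """Derive a UID deterministically from the meeting and session names,
    so regenerating the calendar yields the same UID for the same slot."""
    digest = hashlib.sha1(f"{meeting}/{session}".encode("utf-8")).hexdigest()
    return f"{digest}@example.invalid"  # placeholder domain

def vevent(meeting: str, session: str, sequence: int) -> str:
    # SEQUENCE increases with each revision of the same event; that is
    # how a client knows the new copy supersedes the one it has.
    return ("BEGIN:VEVENT\r\n"
            f"UID:{stable_uid(meeting, session)}\r\n"
            f"SEQUENCE:{sequence}\r\n"
            f"SUMMARY:{session}\r\n"
            "END:VEVENT")

v1 = vevent("ietf63", "isms", 0)  # first published agenda
v2 = vevent("ietf63", "isms", 1)  # revised agenda: same UID, higher SEQUENCE
```

With random UIDs, by contrast, every regenerated agenda would look like a set of brand-new events, and clients would pile up duplicates.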
Re: calendar file for IETF
Bill,

I couldn't agree with you more regarding multiple overlapping events. These programs are all designed for the case where one might double-book, and even on occasion triple-book, but 8 or 9 events? None of them deals with that correctly. I could go on and on about what these calendar programs don't do, but as I'm not going to fix them...

Eliot