RE: accusations of cluelessness
I have a feeling I'm gonna regret getting involved in this rat-hole, but here goes anyway... I mostly agree with the "let the market decide" philosophy, but there have to be some limits. Following it strictly leads to pathological cases. Let me point out incidents from the IETF PKIX group over the years that demonstrate what can happen:

1 - A gentleman (I'll be polite and call him that) asked PKIX to adopt as a work item a time-stamping technology he had developed. This gentleman was given an opportunity (at Adelaide and I believe at one other meeting, as well as on the list) to present his material to the working group. In the end, PKIX didn't pick up his work as a work item because it didn't fall under the charter of the WG. A total of only about 3 people even thought it was interesting, and they weren't sure it was within the working group's purview. This gentleman has devoted much energy since then to disparaging the IETF, PKIX, and the PKIX WG chair personally for this. His view, stated in public numerous times, is that the way the IETF should work is that any item which is properly formatted must be published as an RFC. That is, any individual or group can create a document; if it meets the formatting requirements, it must be published as an RFC. It is not the job of the WG, the IESG, the IAB, or any other group to approve/disapprove, evaluate, analyze, or otherwise pass judgement on the technical contents of the document. Furthermore, all RFCs must be given the same status; it is not appropriate for the IETF to favor one RFC, or the technology contained in it, over any other RFC or technology. Any other strategy constitutes restraint of trade, interference in the marketplace, and basically a violation of all that a standards group should do. Now, fortunately, not too many people agree with this gentleman, but it does represent a pathological case of "let the market decide".

2 - The PKIX WG published two competing, non-interoperable protocols for the same function.
One is referred to as CMP; it's in RFC 2510. The other is referred to as CMC; it's specified in RFC 2797. There are a lot of reasons why we did this, but it boiled down to a schism. One of the two largest PKI players at that time preferred CMP, and its allies refused to yield. The other of the two largest players preferred CMC, and its allies refused to yield. This was because those protocols represented what their products did, and nobody wanted to change his product. So, to prevent bogging down and making no progress at all, the WG decided to progress both protocols and let the market decide.

Most people now agree that this was a mistake. The strategy of publishing both protocols as equals did not lead to interworking in the Internet; it arguably did the opposite. It was also horribly misunderstood by those not in the know; it was believed in some quarters that one protocol was the interim strategy and the other was the long-range target. (The supporters of each side did nothing to discourage this misunderstanding.) While this is not the biggest factor in the failure of PKI to become ubiquitous, it didn't help.

My bottom line on this: while I'm not a strong believer in having the cognoscenti dictate to the unwashed masses the one true way to do things, it helps a lot if the Internet Engineering Task Force actually does some Internet Engineering.

Al Arsenault

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Christian Huitema
Sent: Sunday, October 12, 2003 1:59 AM
To: Keith Moore
Cc: [EMAIL PROTECTED]
Subject: RE: accusations of cluelessness

It is perfectly fine to review a specification, understand the intent of the original designer, and suggest ways to better achieve the same result. That is exactly what working groups are supposed to do. It is also perfectly fine, if the original designer won't change their design, to publish an alternative design that hopefully works better, and then rely on market forces to sort it out.
But it is not fine to try to prevent the original designer from actually shipping products, either by preventing publication of the specification or by trying to prevent deployment.

> On the contrary, it is our duty to do all of these things. What is not fine is for participants to expect IETF to lend its support to bad designs.

Well, who made us kings? It is one thing to work and publish designs that hopefully will be good. It is quite another to judge someone else's design and brand it bad. It is far better to let the market be judge.

-- Christian Huitema
Re: Let the market be judge (Re: accusations of cluelessness)
Mark Seery wrote:
[..]
> The ethos of running code is all about establishing a proof that something works.

I never said otherwise. Running code has been a useful means for reducing the solution space from which the IETF publishes. It has never been a hard and fast metric of good design. (Indeed, if anything the IETF's history has shown a number of protocol designs that were running code and yet still only waypoints towards better designs in the future.)

But that's really not the point here. The role of an _engineering_ taskforce is to act like engineers, not a vanity press. Our output should be educated guidance to the wider community - created with diligence and offered with humility. We can do no more and should do no less.

(Arguments that the IETF prevents deployment by preventing publication are red herrings and should be discarded as such. The market has shown a remarkable affinity, at times, for protocols that have either never been fully published or been published outside the IETF.)

cheers,
gja
--
Grenville Armitage
http://caia.swin.edu.au
I come from a LAN downunder.
Re: accusations of cluelessness
Scott's data point gives a better view of what is reserved/not allocated, but another data point is that only 30%-32% of the IPv4 address space is currently being advertised (according to RIB dumps at routeviews.org).

-mark

--- Scott Bradner [EMAIL PROTECTED] wrote:
> > the reason I don't try to repudiate BCP 5 is that it's clear that for IPv4 we're out of addresses
> total BS
> http://www.iana.org/assignments/ipv4-address-space
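(As a rough illustration of the computation behind that 30%-32% figure: take the prefixes seen in a RIB dump, collapse them so nothing is double-counted, and divide by 2^32. The prefix list below is a tiny made-up sample, not real routeviews data; Python's ipaddress module does the bookkeeping.)

```python
import ipaddress

# Hypothetical sample of advertised prefixes; a real routeviews.org
# RIB dump would yield on the order of 10^5 entries.
advertised = ["4.0.0.0/8", "12.0.0.0/8", "192.0.2.0/24"]

# Collapse overlapping and adjacent prefixes so no address is counted twice.
nets = list(ipaddress.collapse_addresses(
    ipaddress.ip_network(p) for p in advertised))

covered = sum(n.num_addresses for n in nets)
total = 2 ** 32  # the whole IPv4 address space
print(f"advertised: {covered}/{total} = {covered / total:.2%}")
```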
Re: Let the market be judge (Re: accusations of cluelessness)
* But that's really not the point here. The role of an _engineering_ taskforce
* is to act like engineers, not a vanity press. Our output should be educated
* guidance to the wider community - created with diligence and offered with humility.
* We can do no more and should do no less.

Grenville,

Nicely said!! I would like to see your motto inscribed over the IETF portals... We create with diligence and offer with humility.

Bob Braden
Re: Let the market be judge (Re: accusations of cluelessness)
Bob Braden wrote:
> * But that's really not the point here. The role of an _engineering_ taskforce
> * is to act like engineers, not a vanity press. Our output should be educated
> * guidance to the wider community - created with diligence and offered with humility.
> * We can do no more and should do no less.
>
> Grenville, Nicely said!! I would like to see your motto inscribed over the IETF portals... We create with diligence and offer with humility. Bob Braden

Bob,

I agree this is a good ethos, which I support, and in general the IETF is trending back in this direction; the discussion over the last ~year of meatier meetings is one example of the trend, and yours, Keith's, and Grenville's expressions are others. The specific issue in this case, IMO, was whether engineering was conflicting with pragmatics (WRT existing deployments) - happily, the OT thread appears to be flushing this out and moving on. Obviously, the long-lived tension will always be between under-engineering a solution and over-engineering one. As this subthread shows, discussing the philosophy doesn't advance the ball much on any one particular technical issue; only skilled engineers arguing opposing views (a good thing) can do that.

Best
RE: accusations of cluelessness
It is perfectly fine to review a specification, understand the intent of the original designer, and suggest ways to better achieve the same result. That is exactly what working groups are supposed to do. It is also perfectly fine, if the original designer won't change their design, to publish an alternative design that hopefully works better, and then rely on market forces to sort it out. But it is not fine to try to prevent the original designer from actually shipping products, either by preventing publication of the specification or by trying to prevent deployment.

> On the contrary, it is our duty to do all of these things. What is not fine is for participants to expect IETF to lend its support to bad designs.

Well, who made us kings? It is one thing to work and publish designs that hopefully will be good. It is quite another to judge someone else's design and brand it bad. It is far better to let the market be judge.

-- Christian Huitema
Re: accusations of cluelessness
On Sat, 11 Oct 2003 22:58:55 PDT, Christian Huitema said:

> Well, who made us kings? It is one thing to work and publish designs that hopefully will be good. It is quite another to judge someone else's design and brand it bad.

On the other hand, the same skills that allow us to evaluate our own designs as workable should also be able to evaluate others' designs for workability. And if we find fatal flaws in a design, it is our responsibility to say so, rather than let it be deployed in the real world. This applies whether we are civil engineers, aeronautic designers, or software creators.

> It is far better to let the market be judge.

If letting the market be judge is a good idea, why is such a large percentage of my bandwidth chewed up by emails bearing viruses?
Re: accusations of cluelessness
On Sunday, Oct 12, 2003, at 03:23 Europe/Amsterdam, Scott Bradner wrote:

> > If you have $2500 to ante up for the allocation.
> you might take a look at the RIR web pages - it does not cost an ISP $2500 to get additional address space allocated - the additional fee for additional space for large ISPs is generally zero.

Yes, really fair. Let the newcomers pay for a resource but not the people using up most of it.

> end site allocations are a different story but there are a few routing table issues with doing many of those

The problem is that the RIRs require the use of at least a /22 (I think even a full /20 for ARIN) before they allocate a PA block. This makes it hard for newcomers to get their own block. In theory this policy helps keep down the size of the routing table. But in practice it doesn't, because:

1. many people take a smaller block from their ISP and announce that
2. large networks keep getting /20s and /19s that they announce individually

Where is the routing table police when you need them?
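(For concreteness, here is what those minimum-allocation thresholds amount to, and why deaggregation defeats the policy. A quick sketch with Python's ipaddress module, using documentation prefixes rather than anyone's real allocations:)

```python
import ipaddress

# The minimum PA allocation sizes mentioned above.
quad22 = ipaddress.ip_network("198.51.100.0/22")   # 1024 addresses
quad20 = ipaddress.ip_network("198.51.96.0/20")    # 4096 addresses
print(quad22.num_addresses, quad20.num_addresses)

# Why the policy fails to cap routing table growth: a /19 announced
# whole is one route, but deaggregated into /24s it becomes 32 routes.
big = ipaddress.ip_network("203.0.0.0/19")
print(len(list(big.subnets(new_prefix=24))), "routes if deaggregated")
```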
Re: accusations of cluelessness
| Well, who made us kings? It is one thing to work and publish designs
| that hopefully will be good. It is quite another to judge someone else's
| design and brand it bad. It is far better to let the market be judge.

I agree. But should we not as a community try to produce results which attempt to be consistent? If a feature in the network layer breaks assumptions in the application layer, then (setting aside the discussion of whose fault the inconsistency is) if we want to be consistent one has to give. I put it to you that the difference between 'keeping' and 'removing' in this case is just in our heads. E.g., keeping SL in the original form (I agree this is a hypothetical situation) would have implied removing features from several protocols. Did we remove SL or did we keep those protocols intact?

Cheers Leif
Re: accusations of cluelessness
Christian Huitema wrote:
[..]
> Well, who made us kings?

a track record of being decent engineers. which is why most of us are ignored when we offer legal advice, and some of us are acknowledged when we offer clues on networking. (true, others of us are merely princes, pages or court jesters, but still...)

> It is one thing to work and publish designs that hopefully will be good.

that's a good way to be crowned a king, yes.

> It is quite another to judge someone else's design and brand it bad.

it is called critique. even vendors go through internal processes designed to weed out bad product ideas, poorly thought out features, nasty implementation trade-offs. (academics have a similar, imperfect process called peer review. nothing new here.)

> It is far better to let the market be judge.

who blessed the market with engineering insight? the market has sight. market analysts have hind-sight. engineering is about fore-sight. therein lies a world of difference in roles and responsibilities.

cheers,
gja
--
Grenville Armitage
http://caia.swin.edu.au
I come from a LAN downunder.
Re: accusations of cluelessness
> Well, who made us kings? It is one thing to work and publish designs that hopefully will be good. It is quite another to judge someone else's design and brand it bad. It is far better to let the market be judge.

You're not insisting that the market be judge; you're insisting that IETF lend its support to bad designs in order to help mislead the market. Also, you're simply wrong that it's better to let the market judge things - that is, if you want things to actually work. When vendors insist on a level playing field, what they mean is that they want things slanted in a way that gives them an advantage.
RE: accusations of cluelessness
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Keith Moore
Sent: Sunday, October 12, 2003 1:25 AM
To: Scott Bradner
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: accusations of cluelessness

> Just what would you suggest in the way of relaxing?

since I view this as a hypothetical situation anyway (and one that isn't likely to happen in the real world) I don't think it's necessary to pin down exactly how they'd go about relaxing the criteria - only to realize that it is possible to relax those criteria

There are currently several discussions in the ARIN region regarding IPv4 allocations, both in regard to minimum size and criteria. Discussions about IPv4 policy and IPv6 policy are of a continuous nature in all of the RIRs, reflecting changes in the operational community.

Ray
Let the market be judge (Re: accusations of cluelessness)
--- grenville armitage [EMAIL PROTECTED] wrote:
> snip
> the market has sight. market analysts have hind-sight. engineering is about fore-sight. therein lies a world of difference in roles and responsibilities.

The fore-sight of any role is constrained by assumptions. An engineer who designs a building to withstand a collision with a 707 suddenly has a lot more hindsight (than fore-sight) when it is hit with a 767. So let's acknowledge there is no such thing as perfect fore-sight for any role, only better or worse given a set of assumptions. The ethos of running code is all about establishing a proof that something works. Challenging design assumptions is worthwhile, especially if you can see the 767 coming, but this institution appears to give weight to running code. So in the context of running code, the market can be the judge.

--- [EMAIL PROTECTED] wrote:
> snip
> If letting the market be judge is a good idea, why is such a large percentage of my bandwidth chewed up by emails bearing viruses?

A large percentage of your bandwidth is chewed up because there is no market - no cost associated with sending an email. A large percentage of those emails have viruses because a) viruses are to cyber-assets what WMD are to physical assets, i.e. widespread damage from one device; b) the criminal justice system has not kept pace with technological change; and c) there is resistance to allowing the criminal justice system to keep pace (which may or may not be a good thing depending on your view). Put another way, if the criminal justice system had the same level of effectiveness at protecting physical assets, I wonder whether civilization as we know it would exist.
Re: Let the market be judge (Re: accusations of cluelessness)
On Sun, 12 Oct 2003 08:02:09 PDT, Mark Seery [EMAIL PROTECTED] said:
> thing depending on your view). Put another way, if the criminal justice system had the same level of effectiveness at protecting physical assets, I wonder whether civilization as we know it would exist.

If you want to take it in that direction, we have a case where the market decided that locks weren't needed on doors in a high-crime area of town. OK, so maybe the crime rate was still low when the door was installed - there's been a lot of delay and negligence in installing said locks.
Re: Let the market be judge (Re: accusations of cluelessness)
That is a fair point, but I would observe that there are plenty more cases where locks were not sufficient. If there are no tradeoffs to actions/feedback loops, then agents in a system cannot make optimization/fitness decisions and evolution does not occur.

[EMAIL PROTECTED] wrote:
> On Sun, 12 Oct 2003 08:02:09 PDT, Mark Seery [EMAIL PROTECTED] said:
> > thing depending on your view). Put another way, if the criminal justice system had the same level of effectiveness at protecting physical assets, I wonder whether civilization as we know it would exist.
> If you want to take it in that direction, we have a case where the market decided that locks weren't needed on doors in a high-crime area of town. OK, so maybe the crime rate was still low when the door was installed - there's been a lot of delay and negligence in installing said locks.
RE: accusations of cluelessness
Nobody made us kings. In fact, we are not kings. On the other hand, we are not a publication house. Someone having an idea and writing it up does not require us to publish it as an IETF standard. Even if it is a good idea. If someone wants to have the IETF work on something and produce a standard for a problem, then one dimension of that is agreeing to accept IETF review and modification of the idea and solution.

We cannot stop someone from doing their own thing. For example, if you want to use a new header called X-CHRISTIAN-TYPE and define content type values as you please, as long as you do not request IETF agreement that it is a good idea, we cannot stop you. In that regard, the market can still choose. It is true that the market attaches value to IETF standardization. To that degree, the market has made us judges and asked us to judge.

Yours, Joel M. Halpern

At 10:58 PM 10/11/2003 -0700, Christian Huitema wrote:
> Well, who made us kings? It is one thing to work and publish designs that hopefully will be good. It is quite another to judge someone else's design and brand it bad. It is far better to let the market be judge.
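(Joel's point is mechanical as well as procedural: nothing in the mail infrastructure stops a private, unregistered header. A toy sketch with Python's standard email library; the header name and value here are of course hypothetical, echoing his example.)

```python
from email.message import EmailMessage

# Mint a private extension header; MTAs carry unregistered X- headers
# untouched, with or without IETF blessing.
msg = EmailMessage()
msg["To"] = "ietf@ietf.org"
msg["Subject"] = "private extension demo"
msg["X-Christian-Type"] = "experimental/my-own-format"  # hypothetical value
msg.set_content("The market can still choose.")

print(msg["X-Christian-Type"])
```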
Re: Let the market be judge (Re: accusations of cluelessness)
> > snip the market has sight. market analysts have hind-sight. engineering is about fore-sight. therein lies a world of difference in roles and responsibilities.
> The fore-sight of any role is constrained by assumptions. An engineer who designs a building to withstand a collision with a 707 suddenly has a lot more hindsight (than fore-sight) when it is hit with a 767. So let's acknowledge there is no such thing as perfect fore-sight for any role, only better or worse given a set of assumptions.

there is no perfect foresight. there is no perfect hindsight either. nor is the market either efficient or reliable at choosing good solutions. but we are probably better off using some combination of foresight, hindsight, and market forces than by excluding any of these or relying on any of these exclusively.
RE: accusations of cluelessness
For those who wish to become involved in the policy process or view the archives:

http://www.apnic.net/community/lists/index.html
http://www.arin.net/mailing_lists/index.html
http://lacnic.net/en/discussion_boards.html
http://www.ripe.net/ripe/about/maillists.html

Ray

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Ray Plzak
Sent: Sunday, October 12, 2003 10:09 AM
To: [EMAIL PROTECTED]
Subject: RE: accusations of cluelessness

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Keith Moore
Sent: Sunday, October 12, 2003 1:25 AM
To: Scott Bradner
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: accusations of cluelessness

> Just what would you suggest in the way of relaxing?

since I view this as a hypothetical situation anyway (and one that isn't likely to happen in the real world) I don't think it's necessary to pin down exactly how they'd go about relaxing the criteria - only to realize that it is possible to relax those criteria

There are currently several discussions in the ARIN region regarding IPv4 allocations, both in regard to minimum size and criteria. Discussions about IPv4 policy and IPv6 policy are of a continuous nature in all of the RIRs, reflecting changes in the operational community.

Ray
Re: Let the market be judge (Re: accusations of cluelessness)
Keith Moore wrote:
> > snip the market has sight. market analysts have hind-sight. engineering is about fore-sight. therein lies a world of difference in roles and responsibilities.
> > The fore-sight of any role is constrained by assumptions. An engineer who designs a building to withstand a collision with a 707 suddenly has a lot more hindsight (than fore-sight) when it is hit with a 767. So let's acknowledge there is no such thing as perfect fore-sight for any role, only better or worse given a set of assumptions.
> there is no perfect foresight. there is no perfect hindsight either. nor is the market either efficient or reliable at choosing good solutions.

It appears the word market is overloaded with lots of social/political ideology, so rather than go down that road, let me redirect the point by simply observing that running code allows the designer(s) to get feedback about a design and its implementation implications, which is the point about letting the market decide. Many can imagine a better design for some problem space, according to some fitness bias we have, but imagining a better design doesn't help unless it can be run and produce results, against the fitness bias the consumers of a solution have. There is nothing wrong with basing an implementation on previous experience and foresight, but there are many things which are more complex than what we can readily understand, and therefore our foresight is limited - our implementation experience helps here. This is the age-old debate about waterfall vs. iteration - probably another dead-end discussion point.

> but we are probably better off using some combination of foresight, hindsight, and market forces than by excluding any of these or relying on any of these exclusively.

ack. so as usual, there are no absolutes and a fair amount of subjective judgement, which I guess is the reason why discussion occurs.
Re: Let the market be judge (Re: accusations of cluelessness)
> > there is no perfect foresight. there is no perfect hindsight either. nor is the market either efficient or reliable at choosing good solutions.
> It appears the word market is overloaded with lots of social/political ideology so rather than go down that road, let me redirect the point by simply observing that running code allows the designer(s) to get feedback about a design and its implementation implications, which is the point about letting the market decide.

Within IETF, running code has usually meant interoperability tests of multiple implementations of a single protocol. These days we find we need to be more concerned about the interaction between protocols than we used to be. The notion that protocols can be examined in isolation no longer holds. So for instance: we expect non-TCP-based protocols to share bandwidth fairly with TCP-based protocols; and we expect all protocols to be reasonably secure, because when even a single protocol is compromised it can disrupt operation of the network, other protocols, and hosts that weren't directly compromised.

The market is not in a good position to evaluate these characteristics, because many of these effects do not become visible to the market until the market has already substantially invested in a particular path. Indeed, the very discipline of engineering exists to minimize such catastrophes. If you make design decisions based on guesswork, that's guesswork; if you make design decisions based on analysis of how well a solution meets well-defined criteria, that's engineering. These days, a lot of people seem to be arguing that IETF shouldn't do engineering.
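(The "share bandwidth fairly with TCP" expectation is commonly made concrete with the simplified steady-state TCP throughput model of Mathis et al. - rate roughly (MSS/RTT) * sqrt(3/2) / sqrt(p) - which later TFRC work refines. A sketch with illustrative, made-up numbers:)

```python
import math

def tcp_friendly_rate(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Simplified Mathis et al. model of steady-state TCP throughput:
    rate ~= (MSS / RTT) * sqrt(3/2) / sqrt(p), in bytes per second.
    A non-TCP flow sending faster than this under the same loss rate
    is taking more than its TCP-fair share."""
    return (mss_bytes / rtt_s) * math.sqrt(1.5) / math.sqrt(loss)

# Illustrative numbers: 1460-byte MSS, 100 ms RTT, 1% loss.
rate = tcp_friendly_rate(1460, 0.1, 0.01)
print(f"{rate * 8 / 1e6:.2f} Mbit/s")
```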
Re: Let the market be judge (Re: accusations of cluelessness)
Keith Moore wrote:
> snip These days, a lot of people seem to be arguing that IETF shouldn't do engineering.

Hopefully not - not I, for sure; I mean, that would require a name change ;-) What I think exists is a difference of opinion about what the definition of engineering is. Another well-known institution seems also to be struggling with this same question: "where numbers count for everything and hunches are scorned" http://www.cnn.com/2003/TECH/space/10/12/nasa.reformers.ap/index.html
Re: Let the market be judge (Re: accusations of cluelessness)
> > These days, a lot of people seem to be arguing that IETF shouldn't do engineering.
> What I think exists is a difference of opinion about what the definition of engineering is.

another way to put it is - after ~17 years of IETF, we need to start defining what Internet Engineering means.

> Another well-known institution seems also to be struggling with this same question: "where numbers count for everything and hunches are scorned"

ignoring hunches didn't cause the demise of Challenger and Columbia. ignoring hard data did. of course, paying attention to intuition might be useful in those inevitable (hopefully rare) cases where you do ignore hard data.
Re: accusations of cluelessness
From: Keith Moore [EMAIL PROTECTED]
> ... I don't have any problem with IETF/IANA saying the addresses formerly allocated to site-local will never be re-assigned. I do have a problem with IETF giving any support to the notion that it's reasonable to use site-local addresses.

In the real world among adults, and outside the delusions of those who think standards committees are Powerful and In Charge, having the IETF/IANA say the addresses formerly allocated to site-local will never be re-assigned is indistinguishable from having the IETF/IANA say here are some site-local addresses; have fun.

The talk about the evils of site-local addresses (and NAT) and the errors of those who want them may be accurate (I'm inclined to agree), but it is also functionally indistinguishable from the talk about IPv8 and the foolishness of those someone likes to call legacy internet engineers. Neither side is doing anything to help get IPv6 deployed, but just the opposite. Partisans on either side who wanted to see IPv6 in use by the end of the century would concede the point just to get on to something worth doing... well, unless they don't have any designs to complete, protocols to implement, code to debug, applications to teach about IPv6, IPv6 networks to get running, or anything else worth doing.

Don't the IESG and IAB have anything better to do than hearing appeals, counter-appeals, and counter-counter-appeals from people with nothing better to do than prove their analytic and political powers by arguing this issue? Letting the IESG and IAB spend time on this issue is as bad as letting them decide between IPv8 and IPv16. Instead of playing childish lawyer games, why not write successors to RFC 1627 and RFC 1597, or equivalents to RFC 3027, and let the market decide? The market will decide no matter what the IETF says or how many zillion times it says it.
I suppose the next pressing network standards issue the IETF and this mailing list will consider is the wrongheaded evilness of phrases like IP network bandwidth, and whether to use band-size or frequespan after bandwidth is declared anathema. (See http://www.postel.org/pipermail/end2end-interest/2003-October/date.html )

Vernon Schryver [EMAIL PROTECTED]

P.S. I meant my question about ::ffff:10.0.0.0/104 seriously. Are those IPv6 site local addresses that are already available and impossible to retract or even deprecate? If so, how can anyone justify arguing (not to mention appealing) this issue?
Re: accusations of cluelessness
Vernon Schryver wrote:
| From: Keith Moore [EMAIL PROTECTED]
| snip
| is also functionally indistinguishable from the talk about IPv8 and the
| foolishness of those someone likes to call legacy internet engineers.

That is a bit below the belt, isn't it? It's one thing to step up to the mike and claim to channel Keith Moore, but to be accused of being functionally equivalent to a troll is a different kettle of fish altogether.
Re: accusations of cluelessness
> > I don't have any problem with IETF/IANA saying the addresses formerly allocated to site-local will never be re-assigned. I do have a problem with IETF giving any support to the notion that it's reasonable to use site-local addresses.
> In the real world among adults and outside the delusions of those who think standards committees are Powerful and In Charge, having the IETF/IANA say the addresses formerly allocated to site-local will never be re-assigned is indistinguishable from having the IETF/IANA say here are some site-local addresses; have fun.

some people can't read. I'm not sure what we can do about that. things that we write can only benefit people who are capable of reading. fortunately, some people can read, so writing can have a useful effect.

> The talk about the evils of site local addresses (and NAT) and the errors of those who want them may be accurate (I'm inclined to agree), but it is also functionally indistinguishable from the talk about IPv8 and the foolishness of those someone likes to call legacy internet engineers.

some people lack the background to understand technical arguments. I'm not sure what we can do about that either. fortunately, some people do have such a background, and those people do have some influence. it might not be as much as we'd like, but it's not zero.

> Neither side is doing anything to help get IPv6 deployed, but just the opposite.

further deployment of IPv6 with site-local might be worse than not deploying it. ultimately the goal isn't to get IPv6 deployed; it's to get a usable Internet deployed and make sure it works well. saying that we should just go ahead and deploy IPv6 without bothering to make sure that everything works is what got us into this mess.

> Don't the IESG and IAB have anything better to do than hearing appeals, counter-appeals, and counter-counter-appeals from people with nothing better to do than prove their analytic and political powers by arguing this issue?

absolutely they do.
but I'm not the one trying to waste their time on a senseless appeal.

> Instead of playing childish lawyer games, why not write successors to RFC 1627 and RFC 1597 or equivalents to RFC 3027 and let the market decide? The market will decide no matter what the IETF says or how many zillion times it says it.

the market has demonstrated that it's not capable of making technically sound decisions. letting the market decide is no substitute for good design. do we let the market decide whether a plane will crash or a bridge will stay up?

> P.S. I meant my question about ::ffff:10.0.0.0/104 seriously. Are those IPv6 site local addresses that are already available and impossible to retract or even deprecate? If so, how can anyone justify arguing (not to mention appealing) this issue?

of course they're not going to be reallocated for other purposes. but it makes perfect sense to say don't use these; they will cause various kinds of problems and various kinds of failures.
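(For readers puzzling over the /104 in that P.S.: it is presumably the IPv4-mapped block ::ffff:10.0.0.0/104, i.e. RFC 1918 net 10 as seen through IPv6 sockets. A quick check with Python's ipaddress module:)

```python
import ipaddress

# The IPv4-mapped range ::ffff:0.0.0.0/96, restricted to net 10,
# gives a /104: private IPv4 space viewed through an IPv6 socket.
mapped_net10 = ipaddress.ip_network("::ffff:10.0.0.0/104")

addr = ipaddress.ip_address("::ffff:10.1.2.3")
print(addr in mapped_net10)           # membership in the /104
print(addr.ipv4_mapped)               # the embedded IPv4 address
print(addr.ipv4_mapped.is_private)    # RFC 1918 space
```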
Re: accusations of cluelessness
Vernon Schryver wrote:
> 15 years ago a defining difference between the IETF and the ISO was that the IETF cared about what happens in practice and the ISO cared about what happens in theory. As far as I can tell, the IPv6 site local discussion on both sides is only about moot theories.
Without getting into the other points of this argument, let me suggest that Vernon is correct on this point, and inasmuch as we think site-locals are bad we must provide a better alternative. In order to do that we must address the underlying needs for site-locals. Many have written at length on this subject, but it all seems to boil down to this: we need a way for sites to be internally stable even when their relationship to the world around them changes for whatever reason. This goes to the heart of identity and service location. Inasmuch as we can address this problem we would do much to nullify the arguments for site-locals. IMHO this is where the IETF, IRTF, and other bodies should put our efforts.
Eliot
Re: accusations of cluelessness
> As far as I can tell, the IPv6 site local discussion on both sides is only about moot theories.
That's because you aren't trying to write apps that operate across addressing realm boundaries, and you're apparently not willing to listen to those who are. OTOH, you're quite willing to make abusive statements about things that you don't understand.
I used to live in a building that had dumbwaiters and garbage chutes. Both were found to be dangerous. So they told people to stop using them. They didn't try to outlaw them or to make them go away or pretend that they never existed. And occasionally someone did try to use them. Fortunately, merely discouraging their use was sufficient to eliminate most of the danger.
Keith
Re: accusations of cluelessness
> and inasmuch as we think site-locals are bad we must provide a better alternative.
this kind of thinking is a dead end. we must not accept all of the proposed uses of site-locals without question, because some of those uses are the very source of the problem.
> We need a way for sites to be internally stable even when their relationship to the world around them changes for whatever reason.
see draft-moore-ipv6-prefix-substitution-aa-00.txt for one attempt to solve this particular problem within a site. though I don't think it's sufficient to solve it only within a site.
Keith
Re: accusations of cluelessness
From: Keith Moore [EMAIL PROTECTED]
> > As far as I can tell, the IPv6 site local discussion on both sides is only about moot theories.
> That's because you aren't trying to write apps that operate across addressing realm boundaries, and you're apparently not willing to listen to those who are.
I am working on an application that works across addressing realm boundaries. It involves a global network of thousands of clients and servers on the public Internet talking to each other through NAT boxes, firewalls, and all sorts of other nasty stuff, to the tune of more than 51 million operations the day before yesterday. There is IPv6 support in the code that seems to work, in the sense of IPv6-only systems talking to IPv4-only systems indirectly through servers that do both IPv6 and IPv4. That nasty stuff causes all kinds of real world trouble as opposed to academic ivory tower empty talk and chest thumping. I and others waste plenty of time every week trying to get people running clients and servers for that application to fix their NAT boxes and firewalls. A common firewall error regularly causes Mega-packet/day/site wastes of bandwidth.
> OTOH, you're quite willing to make abusive statements about things that you don't understand.
I understand all too well what's going on here.
> I used to live in a building that had dumbwaiters and garbage chutes. Both were found to be dangerous. So they told people to stop using them. They didn't try to outlaw them or to make them go away or pretend that they never existed. And occasionally someone did try to use them. Fortunately, merely discouraging their use was sufficient to eliminate most of the danger.
So why are you only repeating what you've said many times before instead of writing an RFC that will repudiate and replace BCP 5?
Vernon Schryver [EMAIL PROTECTED]
Re: accusations of cluelessness
> > > As far as I can tell, the IPv6 site local discussion on both sides is only about moot theories.
> > That's because you aren't trying to write apps that operate across addressing realm boundaries, and you're apparently not willing to listen to those who are.
> I am working on an application that works across addressing realm boundaries. [...] That nasty stuff causes all kinds of real world trouble
just a couple of messages ago you were claiming that the discussion was about moot theories and now you're claiming that it causes real world trouble. which is it?
> So why are you only repeating what you've said many times before instead of writing an RFC that will repudiate and replace BCP 5?
the reason I repeat these things once in a while is because a significant fraction of people still don't get it, and they're hindering progress. the reason I don't try to repudiate BCP 5 is that it's clear that for IPv4 we're out of addresses, and you can't really solve the problem in IPv4 any other way except to move to another address space.
Re: accusations of cluelessness
> the reason I don't try to repudiate BCP 5 is that it's clear that for IPv4 we're out of addresses,
total BS
http://www.iana.org/assignments/ipv4-address-space
Re: accusations of cluelessness
From: Keith Moore [EMAIL PROTECTED]
> > ... I am working on an application that works across addressing realm boundaries. [...] That nasty stuff causes all kinds of real world trouble
> just a couple of messages ago you were claiming that the discussion was about moot theories and now you're claiming that it causes real world trouble. which is it?
It's not a moot theory that NAT/site local addressing causes real world problems. The academic (in the bad sense of the word), ISO style moot theorizing is the repeated haranguing about the evils of NAT and site local addresses and the implication that the IETF could issue an edict and prevent their use in (or near) an IPv6 Internet.
> > So why are you only repeating what you've said many times before instead of writing an RFC that will repudiate and replace BCP 5?
> the reason I repeat these things once in a while is because a significant fraction of people still don't get it, and they're hindering progress.
Here progress is measured only by published RFCs. Repetitions of your inflammatory litanies of the evils of NAT and site local addresses prevent progress. Every replay resets things to the start of the we gotta have them; no we don't; you're ignorant and wrong; no you are; you're so clueless that you use Windows; no you are and your mother too groove. I doubt there is anything that can be done about NAT and site locals, but one thing is clear. Your repetitions prevent progress as much as the manipulations of the IETF's bureaucratic machinery by your moot debate opponents. No, debate is wrong. When I was an academic debater, the judges allowed only a fixed number of fairly short opportunities to speak. You claim that you do the IETF a service by replaying your old position statements because people are ignorant, stupid, illiterate, and didn't understand before. Do you really think that makes sense?
> the reason I don't try to repudiate BCP 5 is that it's clear that for IPv4 we're out of addresses, and you can't really solve the problem in IPv4 any other way except to move to another address space.
Let's assume that's true. Where is your I-D for a BCP against site local addresses for IPv6? I dislike NAT and site local addresses in theory and practice. I'm not volunteering to work on an anti-site-local BCP because I have no hope that it could affect the real world, and because I don't have a good enough alternative for site local addresses. That application of mine and every other real world application will have to deal with NAT and site local addresses forever. Life Sucks, but I've not found an alternative I prefer.
Vernon Schryver [EMAIL PROTECTED]
Re: accusations of cluelessness
> > the reason I don't try to repudiate BCP 5 is that it's clear that for IPv4 we're out of addresses,
> total BS
okay, let me state it in more detail: to the best of my understanding, even if people were willing to stop using RFC 1918 address space, there aren't enough global address blocks to accommodate all of the networks now using private addresses.
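For scale (my illustration, not a figure from the thread): the three RFC 1918 private ranges together cover only about 17.9 million addresses, barely more than one /8, yet they are reused concurrently by an enormous number of sites, which is why giving each of those sites equivalent global space would consume far more than one /8:

```python
import ipaddress

# Total size of the three RFC 1918 private ranges.
private = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]
total = sum(net.num_addresses for net in private)
print(total)               # 17891328 addresses in all
print(total / 2 ** 32)     # ~0.004 of the whole IPv4 space
```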
Re: accusations of cluelessness
On Sat, 11 Oct 2003 16:11:07 EDT, Keith Moore said:
> the reason I don't try to repudiate BCP 5 is that it's clear that for IPv4 we're out of addresses, and you can't really solve the problem in IPv4 any other way except to move to another address space.
IANA gave out 61/8 in April 97. 69/8 was August 2002. Except for 3 /8s given to RIPE, there's NOTHING all the way to 126/8. 56 /8s at a burn rate of 2 /8s per year gives us some 28 more years. There's also a big chunk up around 173-190/8 for another few years' worth...
There's also reason to suspect that for at least 10-15 years, we've reached somewhat of a plateau in address consumption - the dot-bomb bubble bursting released a lot of address space allocated to since-departed end users, and the relatively flat performance by both Microsoft and PC manufacturers indicates that in the US/Europe area and most of the more advanced parts of the Pacific Rim, the majority of people who want to be online already are. There's high percentage growth in South America/Africa/Asia, but quite frankly, there are some major infrastructure issues those areas have to deal with first.
On the other hand, a /28 and a /16 both flap just as hard
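The arithmetic behind the 28-year figure is straightforward; here is a quick sketch (my addition; the steady burn rate is the post's assumption, not a measured constant):

```python
# Projection using the figures cited in the post above.
free_slash8s = 56          # unallocated /8 blocks up to 126/8
burn_rate = 2              # /8s handed out per year (assumed steady)
addrs_per_slash8 = 2 ** 24 # 16,777,216 addresses in each /8

years_left = free_slash8s / burn_rate
print(years_left)                          # 28.0 years at that rate
print(free_slash8s * addrs_per_slash8)     # ~940 million addresses in reserve
```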
Re: accusations of cluelessness
as much as I feel compelled to respond to personal attacks, I don't think it's relevant to the IETF discussion list any more. so I'll respond in private. Keith
Re: accusations of cluelessness
Tell you what. If you can convince the RIRs that it's feasible to relax the allocation criteria for IPv4 blocks, and you can convince the ISPs to make address blocks available to their customers at reasonable prices, I'll happily co-author one or more drafts that explain:
- why in hindsight RFC 1918 was a bad idea
- why new sites should use global addresses rather than private addresses
- why sites currently using private addresses should consider migrating to global addresses
- techniques for minimizing the pain of renumbering in IPv4, and
- alternate techniques for providing a similar degree of security to that obtained with private addresses
(I say co-author because this is a LOT of ground to cover. Even if I could write it all myself, I doubt that I could defend it all against the various attacks it would draw from many quarters.)
But last I knew, there were several technical hurdles associated with doing this - a perceived scarcity of addresses, route flapping, widespread use of IP addresses in various kinds of configuration files (making renumbering more difficult), and widespread conventional wisdom (supported by marketing BS) that NATs and private addresses are a Good Thing.
Keith
Re: accusations of cluelessness
On Saturday, Oct 11, 2003, at 23:46 Europe/Amsterdam, [EMAIL PROTECTED] wrote:
> > it's clear that for IPv4 we're out of addresses
> IANA gave out 61/8 in April 97. 69/8 was August 2002. Except for 3 /8s given to RIPE, there's NOTHING all the way to 126/8. 56 /8s at a burn rate of 2 /8s per year gives us some 28 more years. There's also a big chunk up around 173-190/8 for another few years worth...
Have a look at Geoff Huston's latest work in this area: http://www.potaroo.net/papers.html
> There's also reason to suspect that for at least 10-15 years, we've reached somewhat of a plateau in address consumption - the dot-bomb bubble bursting released a lot of address space allocated to since-departed end users,
I don't buy this.
> and the relatively flat performance by both Microsoft and PC manufacturers indicates that in the US/Europe area and most of the more advanced parts of the Pacific Rim, the majority of people who want to be online already are.
Maybe there is something to this, but there is also a significant possibility that there isn't. There is still a large move from dial-up to always-on types of access (cable, DSL) going on. It's likely the same will happen for mobile: today you dial in and use an address for a few minutes; in the future you're going to occupy an address 24/7 with your IP-enabled cell phone. It looks like the growth in address usage is linear. Since there are significant factors driving this growth down (use of private addressing with or without NAT, addressless virtual servers, denser packing of subnets), the actual need for addresses must still be going up pretty fast. But at some point the subnets are as dense as they're going to get, and everything that can be virtualized or NATed is. At this point life is going to become much more interesting.
I guess it all depends on how you look at it: either the glass is half empty (still 1.5 billion addresses free) or it's 97% full (31 bits down, one to go).
Iljitsch van Beijnum
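The closing half-empty vs. 97%-full framing is two ways of measuring the same numbers; a sketch (my addition; the 1.5 billion figure is taken from the message, and the bit-counting view assumes roughly exponential growth in demand):

```python
import math

total = 2 ** 32      # all IPv4 addresses
free = 1.5e9         # free addresses, per the message
used = total - free

print(used / total)      # ~0.65: the "glass half empty" view
print(math.log2(used))   # ~31.4: the "31 bits down, one to go" view
```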
Re: accusations of cluelessness
From: Keith Moore [EMAIL PROTECTED]
> as much as I feel compelled to respond to personal attacks, I don't think it's relevant to the IETF discussion list any more. so I'll respond in private.
If I'd known you were going to switch from public+courtesy copy to private lectures on how much I don't know and what I am not doing, I would have put your address back into my mail defenses. As it is, I'm almost but not quite moved to publish the private epistle that arrived before I got my defenses repaired. Your closing words in that private message are a motto you might consider acting on. In case you've forgotten, they were Get over it.
What kind of pathology would lead someone to reason that private flames would be more welcome or effective than public flames? Am I supposed to be more readily convinced in private that I'm ignorant and foolish?
Vernon Schryver [EMAIL PROTECTED]
Re: accusations of cluelessness
> If you can convince the RIRs that it's feasible to relax the allocation criteria for IPv4 blocks,
Keith - just what would you suggest in the way of relaxing? The basic rule is now: if you (the requester) can show you are going to use the space, you can get it. Relaxing from that would seem to indicate that the RIR should provide more space than the requester thinks they can use. That does not seem like a real good idea.
Scott
Re: accusations of cluelessness
> If you have $2500 to ante up for the allocation.
you might take a look at the RIR web pages - it does not cost an ISP $2500 to get additional address space allocated - the additional fee for additional space for large ISPs is generally zero. end site allocations are a different story but there are a few routing table issues with doing many of those
Scott
Re: accusations of cluelessness
On Sat, 11 Oct 2003, Scott Bradner wrote:
> > If you can convince the RIRs that it's feasible to relax the allocation criteria for IPv4 blocks,
> Keith - just what would you suggest in the way of relaxing? The basic rule is now - if you (the requester) can show you are going to use the space you can get it.
If you have $2500 to ante up for the allocation.
> Relaxing from that would seem to indicate that the RIR should provide more space than the requester thinks they can use. That does not seem like a real good idea.
> Scott
sleekfreak pirate broadcast world tour 2002-3 live from the pirate hideout http://sleekfreak.ath.cx:81/
Re: accusations of cluelessness
On Fri, 10 Oct 2003 21:48:23 EDT, shogunx said:
> If you have $2500 to ante up for the allocation.
If the $2,500 is a stumbling block, you're probably WAY undercapitalized for the project in the first place. Why do you need your own allocation? Either because you're getting pretty big, or you want to multihome a /26 or some such tiny allocation. Let's say you're getting big enough to want your own /19 - even if you're in a /20 and growing, that's still 4,000 machines (either your own or customers)... plus admin salaries, rent, etc, you're a fairly good sized business. That $2500 shouldn't be a breaking cost - if it is, you're close to failing already and need to be thinking about consolidating, not expanding...
If you're tiny and trying to multihome, and can't afford $2500, you're probably not going to be able to afford the router and 2 or more leased lines, and the expertise to do it - you probably should be looking at a colo instead.
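The block sizes behind that argument, for reference (my illustration; the prefix lengths are the ones mentioned in the message):

```python
# Address counts for the IPv4 prefix lengths mentioned above.
def block_size(prefix_len: int) -> int:
    """Number of addresses in an IPv4 block of the given prefix length."""
    return 2 ** (32 - prefix_len)

print(block_size(20))   # 4096 - the "4,000 machines" /20
print(block_size(19))   # 8192 - the next step up
print(block_size(26))   # 64   - the "tiny allocation" multihoming case
```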
Re: accusations of cluelessness
On Sat, 11 Oct 2003, Scott Bradner wrote:
> > If you have $2500 to ante up for the allocation.
> you might take a look at the RIR web pages - it does not cost an ISP $2500 to get additional address space allocated - the additional fee for additional space for large ISPs is generally zero.
ahhh. additional address space. i was referring to initial address space. from IANA anyway.
> end site allocations are a different story but there are a few routing table issues with doing many of those
> Scott
Re: accusations of cluelessness
Valdis,
On Fri, 10 Oct 2003 21:48:23 EDT, shogunx said:
> > If you have $2500 to ante up for the allocation.
> If the $2,500 is a stumbling block, you're probably WAY undercapitalized for the project in the first place
A situation I'm used to.
> Why do you need your own allocation? Either because you're getting pretty big, or you want to multihome a /26 or some such tiny allocation.
Or you are building new infrastructure.
> Let's say you're getting big enough to want your own /19 - even if you're in a /20 and growing, that's still 4,000 machines (either your own or customers).. plus admin salaries, rent, etc, you're a fairly good sized business. That $2500 shouldn't be a breaking cost - if it is, you're close to failing already and need to be thinking about consolidating, not expanding...
Actually just starting to do things this way. In the past I have found it expedient to simply bypass any security measures and connect a machine wherever necessary. And the point is, who decided that IP addresses are a commodity item anyway?
> If you're tiny and trying to multihome, and can't afford $2500, you're probably not going to be able to afford the router and 2 or more leased lines, and the expertise to do it - you probably should be looking at a colo instead.
No need to multihome. In fact, for this PARTICULAR application, I can probably use a free v6 /24 from a tunnel broker routed over a private address network, although the users will probably notice the packet latency from their terminal to the tunnel broker, and it's a nasty kludge. Router? I can build that if necessary. Colo? Why, when I have access to a Tier 1 NOC with backbone running through it, though that is 600 miles away.
Scott
RE: accusations of cluelessness
> I don't have any problem with IETF/IANA saying the addresses formerly allocated to site-local will never be re-assigned. I do have a problem with IETF giving any support to the notion that it's reasonable to use site-local addresses. In the real world among adults and outside the delusions of those who think standards committees are Powerful and In Charge, having the IETF/IANA say the addresses formerly allocated to site-local will never be re-assigned is indistinguishable from having the IETF/IANA say here are some site-local addresses; have fun.
I think Vernon hits the nail on the head. A lot of the IETF's energy appears to be now devoted to shutting down someone else's ideas, on the basis that we, the clueful, must protect the Internet against the attacks of greedy and incompetent amateurs. In short, I see a lot of negative energy floating around.
It is perfectly fine to review a specification, understand the intent of the original designer, and suggest ways to better achieve the same result. That is exactly what working groups are supposed to do. It is also perfectly fine, if the original designer won't change their design, to publish an alternative design that hopefully works better, and then rely on market forces to sort it out. But it is not fine to try to prevent the original designer from actually shipping products, either by preventing publication of the specification or by trying to prevent deployment.
The publication game can be played in the working group, but the best tricks are used by the IESG. The two delaying tactics that I have seen so far are to request complete consensus on a requirement document before any progress can be made on a system design, or to simply sit on a document submitted by the working group and let it rot in the IESG queue. The requirement trick is probably the most efficient: in practice, it can delay any work by two or three years.
However, in the days of the web, the IESG can only control publication as an RFC; the unloved specification most often ends up published elsewhere. There is a lot of damage done to the IETF, but the ideas, good or bad, get documented. To really exercise power, one has to prevent deployment. The practical way to prevent deployment is to control the assignment of a resource required for the unloved design. The normal loop is to require that IANA allocate some numbers only following publication of an RFC. For example, a new MIME type can only be registered that way. Very often, this control is only nominal. For example, nothing prevents consenting software implementations from using unregistered MIME types. One of the remaining strongholds of the IESG, however, is the assignment of a range of addresses for a specific purpose. Today, the IANA will not do that without IESG approval, hence all the hoopla about site local addresses.
I wish the attitude would change, but I am not too hopeful. So, maybe a solution is to remove, as much as possible, blocking powers from the IETF process. In particular, we should do something about the control of address range assignments. OK, for IPv4 that will be hard, but IPv6 has more resources. But maybe we should have a specific registry that assigns ranges of addresses to specific purposes, and have that registry managed in much the same way as the current registry of port numbers.
-- Christian Huitema
Re: accusations of cluelessness
> What kind of pathology would lead someone to reason that private flames would be more welcome or effective than public flames?
you deserved what you got in private email. the list didn't.
Re: accusations of cluelessness
> Just what would you suggest in the way of relaxing?
since I view this as a hypothetical situation anyway (and one that isn't likely to happen in the real world) I don't think it's necessary to pin down exactly how they'd go about relaxing the criteria - only to realize that it is possible to relax those criteria. they could ask for less money, or less supporting evidence, or they could give out bigger blocks. whatever.
note that I didn't say it was a good idea to do this. but if, as some people assert, the v4 address shortage is a myth - then I'd be happy to try to work on getting people to move away from NAT. personally I still believe there's some substance to the address shortage. but I'd love to find out that this isn't the case.
Re: accusations of cluelessness
> It is perfectly fine to review a specification, understand the intent of the original designer, and suggest ways to better achieve the same result. That is exactly what working groups are supposed to do. It is also perfectly fine, if the original designer won't change their design, to publish an alternative design that hopefully works better, and then rely on market forces to sort it out. But it is not fine to try to prevent the original designer from actually shipping products, either by preventing publication of the specification or by trying to prevent deployment.
On the contrary, it is our duty to do all of these things. What is not fine is for participants to expect IETF to lend its support to bad designs.
Re: accusations of cluelessness
> Well, one fairly good indicator of a clueless person is when they insist that things have to be a certain way, but seem unwilling or unable to explain why. ...
That's all fine, except that it would be more accurate without the words starting with but... People who absolutely positively know they are right never are. Being certain you are right is the second best defense against being penetrated by clues. (The best defense is being dead.)
> > Perhaps the person being so accused should simply say my customer insists on this. I can't tell you who he is, and I can't amplify on his reasons for insisting on this.
> To which the reply should be then your customer needs to represent his own needs here. Because the solution your customer insists on causes problems, and without actually having some insight into your customer's real requirements, we have no basis on which to try to build a compromise.
Once upon a time there was an ad hoc group that hashed out interoperable network protocols based on consensus, but generally did not try to impose its view of right or wrong. The group was filled with sentiments along the lines of let the market decide. The group had many academic ties, but knew better than to try to require that any participant disclose anything secret. This was partly because the incumbent official standards bodies condemned as completely wrong, unworkable, and heretical everything the ad hoc group was doing, and it would have been silly for the ad hoc group to dictate anything. Then the group became powerful and infected with delusions of grandeur, omnipotence, and provincialism. Forever after, or until the next ad hoc group with incomplete pretensions, it grew increasingly insane and like the older organizations it used to decry for reasons other than turf wars.
In other words, what happened to the old IETF that would have said Site local addresses are utterly stupid and wrong; how large a block did you say you wanted? Where are the old IETF vendors and other outfits that, if told No, we will not let you have site local addresses, would have said Ok, give us a block and by the way, if I were you, I'd filter routes and packets carrying those addresses. Isn't that only a slightly bent picture of how the SAP addresses used for IP on 802 networks came about?
What happened is that they all turned into a bunch of provincial amateur lawyers, each utterly convinced of omniscience, and with no designing, implementing, or debugging to do, since otherwise they'd be doing it instead of spending time writing legalistic appeals and decisions. This is all utterly silly. If some outfit decides it must have site local addresses for any notion of the phrase, it will get them. The most you can do is brand site local anathema and reasonably expect that pointy haired IT bosses will not use them until there is a real need that overcomes the political risks of violating an oh fish all fur shure standard.
Just for my own ignorance, how are you going to outlaw the use of ::10.0.0.0/104 in the bright new IPv6 world? Why won't existing uses of site local addresses be grandfathered?
Vernon Schryver [EMAIL PROTECTED]
Re: accusations of cluelessness
> In other words, what happened to the old IETF that would have said Site local addresses are utterly stupid and wrong; how large a block did you say you wanted?
It went away with the old Internet that was mostly an experiment and research tool used by a relatively small, elite group with largely common interests, and a fairly high overall clue level (as compared to today), to support a relatively small set of apps.
> If some outfit decides it must have site local addresses for any notion of the phrase, it will get them.
well, if they expect apps to actually work with those things, they're seriously deluded. and no amount of money will stop the tide.
> Why won't existing uses of site local addresses be grandfathered?
people can be deluded if they want to. vendors can encourage those delusions, and try to sell them products based on those delusions. the question is only whether IETF should lend support to those delusions.
Re: accusations of cluelessness
> Keith, I don't understand what you are saying here. As I read his note, Vernon isn't saying make all the applications recognize a particular address range and do something special. He is saying ok, we don't think this is useful, but, if it would help you to have an address range to do your own thing in your own way, addresses are just not that scarce.
scarcity isn't the problem. just because something is plentiful doesn't mean you want to pollute the water supply with it. and no, it's not okay to say you can pollute your own water supply if you want to, because that stuff flows downstream to everybody else.
> I'd love to stamp out all of the wrong-headedness and stupidity in the world, but I would not expect to succeed and have largely given up trying except for isolated local cases. Efforts through the centuries to make and enforce laws against stupidity and stupid behavior have not been very successful.
nobody is proposing we make or enforce laws against stupidity. what is being proposed is that we take a practice that is now widely acknowledged to be stupid, or at least harmful, and say oops, sorry, this turns out to have been a bad idea. please don't do this. maybe we can't outlaw stupidity, but that doesn't mean we have to encourage it, or even be silent about it.
> I'm not as convinced as you are that, to use Vernon's description, Site local addresses are utterly stupid and wrong, but, even if I were, I'd be having some trouble convincing myself that taking the relevant address range out of the allocation pool and leaving it out would be seriously harmful to the network and to interoperability.
well, so would I, and as I understand it, that's just what is being proposed. I don't have any problem with IETF/IANA saying the addresses formerly allocated to site-local will never be re-assigned. I do have a problem with IETF giving any support to the notion that it's reasonable to use site-local addresses.