Re: Re[4]: www.isoc.org unreachable when ECN is used
On Sun, 2003-12-14 at 23:34, Anthony G. Atkielski wrote:
> jamal writes:
> > So the Linux decision was in fact a very good one. An award of some
> > form is in order.
>
> Maybe Microsoft will be inspired to do things the same way: it can
> change its implementations in order to break 10% of all sites around
> the world, and when anyone complains, it can say that it was forcing
> those sites to move to more modern software, and that it really
> deserves an award in consequence.

The issue is a standard that defines ECN in the RFC; Linux implemented that standard to the spec. All MUSTs are met. Other, older devices made assumptions about what the future would be and hard-coded certain behavior. It's clear that the older devices are the ones that are broken. Now, if Linux had done what the older devices did (as MS did with Kerberos), then you would be correct. I claim that ECN would have been a failure in deployment, i.e. not as transparent as it is today, if Linux had implemented the workaround. That's where the award is deserved.

> > One could argue that, in the end, a better network is one with fewer
> > broken devices, and that better interop really means conformance, as
> > opposed to adaptation to broken implementations.
>
> This conflicts with Linux having a broken implementation (and yes, it
> is broken, because it is not interoperatively better).

Your definition of "broken" is a little off. I would think the broken implementation is the one that misunderstood the definition. "reserved", as I have been enlightened privately, has been clearly defined at the IETF as:

a) Must be set to zero on transmission.
b) Should be ignored upon reception.

Some systems don't follow b). I believe those are broken. Linux does follow b). It is true that an implementation would be considered robust if it were able to recover from interacting with problematic, non-conformant devices, i.e. those that break b). But robustness does not equate to conformance. It is also true that some systems may break b) _by design_, for paranoia reasons.

You make none of the above points.

> > The main contention, it seems, is the definition of "reserved".
>
> The main contention seems to be the system with the problem. If it's
> Linux, it's not a bug, it's a feature. If it's Microsoft, it's not a
> feature, it's a bug.

You are working hard to turn this into a Linux vs. MS debate? Here's some help for you: MS sucks! ;-) In case you didn't hear that, here it goes again: MS sucks! ;-) Now go and look for your sword and meet me at the fountain by the town square for a duel, for I have dishonored your family. Oh, I forgot to mention the time: make it at sunset, because I have other things I have to go to after (like work, for example).

cheers,
jamal
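The two rules jamal cites for reserved bits can be made concrete with a short sketch. This is a hypothetical model of the pre-ECN TCP flag word (data offset, six reserved bits, six defined flag bits), not code from any real stack; the helper names are invented for illustration. RFC 3168 later reused two of those reserved bits as CWR and ECE, which only interoperates because conforming receivers follow rule b):

```python
# Conformant handling of "reserved" TCP header bits, per the two rules
# quoted above (hypothetical helpers, not a real stack):
#   a) set reserved bits to zero on transmission,
#   b) ignore them on reception.
# Model: the 16-bit offset/reserved/flags word, flags in bits 0-5.

FLAG_MASK = 0x003F      # pre-ECN: only FIN..URG (bits 0-5) are defined
RESERVED_MASK = 0x0FC0  # bits 6-11 were "reserved" before RFC 3168

def build_flags(fin=False, syn=False, rst=False, psh=False, ack=False, urg=False):
    """Rule a): a conforming sender leaves the reserved bits zero."""
    flags = 0
    for bit, on in enumerate((fin, syn, rst, psh, ack, urg)):
        if on:
            flags |= 1 << bit
    return flags  # reserved bits 6-11 are never set here

def parse_flags(raw16):
    """Rule b): a conforming receiver masks the reserved bits off
    instead of rejecting the segment when they are nonzero."""
    return raw16 & FLAG_MASK

# A SYN with the CWR/ECE bits set (as an RFC 3168 sender emits during
# ECN setup) is still understood by a conforming pre-ECN receiver:
ecn_syn = build_flags(syn=True) | 0x00C0  # 0x00C0: the bits ECN reused
assert parse_flags(ecn_syn) == build_flags(syn=True)
```

A receiver that instead rejected segments with nonzero reserved bits -- the behavior jamal calls broken -- would have refused the ECN-setup SYN outright.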
Re: Re[4]: www.isoc.org unreachable when ECN is used
----- Original Message -----
From: "jamal" [EMAIL PROTECTED]
To: "Anthony G. Atkielski" [EMAIL PROTECTED]
Cc: "IETF Discussion" [EMAIL PROTECTED]
Sent: Monday, December 15, 2003 6:12 AM
Subject: Re: Re[4]: www.isoc.org unreachable when ECN is used

> On Sun, 2003-12-14 at 23:34, Anthony G. Atkielski wrote:
> > This conflicts with Linux having a broken implementation (and yes, it
> > is broken, because it is not interoperatively better).
>
> Your definition of "broken" is a little off. I would think the broken
> implementation is the one that misunderstood the definition.
> "reserved", as I have been enlightened privately, has been clearly
> defined at the IETF

A citation here (from anyone) would be really helpful. This is also my understanding, but I have no idea why I think so, and would prefer to continue the discussion knowing whether we've actually written this down, or whether this is a commonly held belief resulting from being conservative in what you send and liberal in what you accept.

> as:
> a) Must be set to zero on transmission.
> b) Should be ignored upon reception.
Re: Re[4]: www.isoc.org unreachable when ECN is used
On 15-dec-03, at 14:03, Spencer Dawkins wrote:
> > Your definition of "broken" is a little off. I would think the broken
> > implementation is the one that misunderstood the definition.
> > "reserved", as I have been enlightened privately, has been clearly
> > defined at the IETF as:
> > a) Must be set to zero on transmission.
> > b) Should be ignored upon reception.
>
> A citation here (from anyone) would be really helpful. This is also my
> understanding, but I have no idea why I think so, and would prefer to
> continue the discussion knowing whether we've actually written this
> down, or whether this is a commonly held belief resulting from being
> conservative in what you send and liberal in what you accept.

If we set our time machine for the year 1981 and look at the text in RFC 791 describing the bits in the TOS byte, the picture shows bits 6 and 7 holding the value 0, while the text says "Bit 6-7: Reserved for Future Use" without further discussion. So this doesn't help.

However, I don't see any way to reserve fields for future backward-compatible use without requiring them to be set to a predictable value (i.e., zero) upon transmission and ignoring their contents upon reception. Obviously, setting the field to a random value precludes adding new values with new meanings, since non-zero values could then be set by implementations that aren't aware of the new functionality. Requiring the fields to have a certain value upon reception makes future use of the field impossible, as implementations would then have to be upgraded across the board first, which is hard to do with a few hundred million systems deployed.
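The argument above can be illustrated against the RFC 791 layout itself. The sketch below models a 1981-era receiver of the TOS byte (precedence in the top three bits, the delay/throughput/reliability flags, bits 6-7 reserved); the helper names are invented, and the point is only that a receiver which ignores the reserved bits is unaffected when RFC 3168 later reuses them as the ECN field:

```python
# RFC 791 TOS byte, bits 0 (MSB) to 7 (LSB):
#   bits 0-2: precedence; bit 3: delay; bit 4: throughput;
#   bit 5: reliability; bits 6-7: "Reserved for Future Use".
# RFC 3168 reused bits 6-7 as the IP-header ECN field. Hypothetical
# helpers, for illustration only:

ECT0 = 0b10  # RFC 3168 ECT(0) codepoint, living in the old reserved bits

def tos_reserved_bits(tos):
    """Extract bits 6-7 (the two least significant bits)."""
    return tos & 0x03

def old_receiver_interpret(tos):
    """A forward-compatible 1981-era receiver: use the defined fields,
    ignore the reserved ones entirely."""
    return {
        "precedence": (tos >> 5) & 0x07,
        "low_delay": bool(tos & 0x10),
        "high_throughput": bool(tos & 0x08),
        "high_reliability": bool(tos & 0x04),
        # bits 6-7 (mask 0x03): reserved -> not interpreted at all
    }

# An ECN-marked packet looks identical to the old receiver:
plain = old_receiver_interpret(0xB0)         # precedence 5, low delay
marked = old_receiver_interpret(0xB0 | ECT0)
assert plain == marked
```

Had RFC 791 instead required the reserved bits to be zero on reception, every deployed host would have needed an upgrade before ECN could be used at all, which is exactly the impossibility described above.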
Re[4]: www.isoc.org unreachable when ECN is used
jamal writes:
> So the Linux decision was in fact a very good one. An award of some
> form is in order.

Maybe Microsoft will be inspired to do things the same way: it can change its implementations in order to break 10% of all sites around the world, and when anyone complains, it can say that it was forcing those sites to move to more modern software, and that it really deserves an award in consequence.

> One could argue that, in the end, a better network is one with fewer
> broken devices, and that better interop really means conformance, as
> opposed to adaptation to broken implementations.

This conflicts with Linux having a broken implementation (and yes, it is broken, because it is not interoperatively better).

> The main contention, it seems, is the definition of "reserved".

The main contention seems to be the system with the problem. If it's Linux, it's not a bug, it's a feature. If it's Microsoft, it's not a feature, it's a bug.
Re: Re[4]: www.isoc.org unreachable when ECN is used
On Mon, 15 Dec 2003 05:34:53 +0100, Anthony G. Atkielski [EMAIL PROTECTED] said:
> The main contention seems to be the system with the problem. If it's
> Linux, it's not a bug, it's a feature. If it's Microsoft, it's not a
> feature, it's a bug.

Linux could at least stand on the claim that it was implementing the RFCs as written, and that the interoperability problem was due to the other end failing to implement the RFCs. Feel free to point at examples of Microsoft saying "We're MS, and we're going to implement the RFC, because if everybody did it the RFC way, the world would be a better place, even if there's breakage along the way." Be prepared to find a LOT of examples -- you'll need them to outweigh the damage that has been done by active content in e-mail, even when the very first set of MIME RFCs cautioned against it due to security concerns.
Re[4]: www.isoc.org unreachable when ECN is used
Theodore Ts'o writes:
> To continue quoting from RFC 3360, there were some good reasons stated
> in that document for why reasonable implementors might not choose to
> implement the workaround:
>
>   * The work-arounds would result in ECN-capable hosts not responding
>     properly to the first valid reset received in response to a SYN
>     packet.
>
>   * The work-arounds would limit ECN functionality in environments
>     without broken equipment, by disabling ECN where the first SYN or
>     SYN-ACK packet was dropped in the network.
>
>   * The work-arounds in many cases would involve a delay of six
>     seconds or more before connectivity is established with the remote
>     server, in the case of broken equipment that drops ECN-setup SYN
>     packets. By accommodating this broken equipment, the work-arounds
>     have been judged as implicitly accepting both this delay and the
>     broken equipment that would be causing this delay.

It sounds like ECN is pretty badly designed; I'm glad it wasn't my idea. But since it is out there now, it still seems better to provide a workaround that provides _some_ connectivity (even with a six-second delay) than an implementation that provides none at all. Better still, just turn ECN off.

> It should also be noted that RFC 3168 did not require the workaround
> as a MUST. If RFC 3168 had required the use of the workaround (which
> presumably would have required the working group to come to a
> consensus that the tradeoffs listed above were less important than
> coddling trashy firewall implementations), then I'm sure the Linux
> implementors would have respected such a MUST requirement in RFC 3168.

The problem is that RFC 3168 postdates all the RFCs that came before it, and when something needs to be compatible with real-world systems that are not all instantly and simultaneously upgraded, it needs to behave in a way that works acceptably with systems that haven't quite reached RFC 3168. This problem will only get worse, you know. More and more systems on the Net, with more and more variable maintenance, mean ever greater difficulty in making any non-backwards-compatible change at all to anything. For better or for worse, the earliest design decisions of the Internet will be haunting us for decades to come, and it will be imperative to design anything new in a way that accommodates them. Obviously this wasn't done for ECN, and I daresay it isn't being done for lots of new specifications. I guess this would be second on the list of most common mistakes made by engineers, after the woeful misjudgement of necessary or available capacity (in addressing schemes, for example).
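For concreteness, the workaround being debated amounts to retrying the connection without ECN when the ECN-setup SYN draws no answer. A toy sketch of that logic (the function names and reply model are invented, not from any real stack; real implementations use exponential backoff and per-destination caching):

```python
# Sketch of the RFC 3360 "workaround" under discussion: fall back to a
# non-ECN SYN when the ECN-setup SYN is answered with a RST or nothing.
# send_syn(ecn=...) is a hypothetical primitive returning
# "synack", "rst", or None (timeout).

def connect(send_syn, first_try_ecn=True):
    """Return a description of how (or whether) the connection came up."""
    if first_try_ecn:
        reply = send_syn(ecn=True)
        if reply == "synack":
            return "connected with ECN"
        # First trade-off quoted above: a RST here may be a perfectly
        # valid refusal, but the workaround assumes an ECN-hostile
        # middlebox and retries anyway. Third trade-off: the timeout
        # path (reply is None) costs seconds before this retry fires.
    reply = send_syn(ecn=False)
    return "connected without ECN" if reply == "synack" else "failed"
```

The second quoted trade-off also shows up here: if the ECN-setup SYN was simply lost in the network, the retry silently disables ECN even though nothing was broken.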
Re: Re[4]: www.isoc.org unreachable when ECN is used
On Fri, Dec 12, 2003 at 09:01:09PM +0100, Anthony G. Atkielski wrote:
> The problem is that RFC 3168 postdates all the RFCs that came before
> it, and when something needs to be compatible with real-world systems
> that are not all instantly and simultaneously upgraded, it needs to
> behave in a way that works acceptably with systems that haven't quite
> reached RFC 3168. This problem will only get worse, you know. More and
> more systems on the Net, with more and more variable maintenance, mean
> ever greater difficulty in making any non-backwards-compatible change
> at all to anything. For better or for worse, the earliest design
> decisions of the Internet will be haunting us for decades to come, and
> it will be imperative to design anything new in a way that
> accommodates them. Obviously this wasn't done for ECN, and I daresay
> it isn't being done for lots of new specifications.

There are a lot of broken firewalls out there. Some of them stop any new TCP/UDP port that wasn't known about at the time they were constructed. Should we therefore stop inventing new protocols? Some of them stop various VOIP stacks. Are they broken? Should we give up on VOIP just because of some stupidly designed boxes?

Some middleware boxes reach into TCP packets and modify them while they are in flight, either to adjust the Max Segment Size option (to deal with other breakage caused by things like PPP over Ethernet combined with firewalls that drop ICMP "fragmentation needed" packets, which therefore breaks Path MTU discovery), or to adjust the TCP window size because they are going over satellite links --- and encryption and integrity protection prevent such hacks from working. Does that mean that Path MTU discovery was badly designed, because it failed to take into account stupid firewalls? Does it mean that backwards compatibility is **SO** important that we cannot add security, lest we break some badly designed, but already deployed, infrastructure boxes?

Of course, we do need to be pragmatic, which in some cases means rewarding bad behaviour. But in the case of ECN, most of the major sites on the net have fixed their broken firewalls. It's unfortunate that ISOC happens to be one that hasn't, but if we accommodate every single stupidly designed box out there, we might as well not bother having IETF meetings, and just pack up and go home. After all, no matter what we do, even if it is to design a new protocol that uses a newly assigned TCP or UDP port, I guarantee that *somewhere* out there, there will be a stupidly designed firewall that will not do the right thing when we deploy that new protocol.

Ultimately, given that market pressure often got us into this kind of mess, sometimes using market pressure is the only way to get us out of it. It's amazing how quickly most commercial storefront sites fixed their ECN-buggy firewalls when they realized that they might be losing potential customers as a result of their bogus firewalls.

- Ted
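The MSS-clamping hack mentioned above can be sketched in a few lines. The numbers assume PPPoE (MTU 1492) and plain IPv4 plus TCP headers; the (kind, value) pairs are a simplification of the actual RFC 793 option wire format:

```python
# MSS clamping as described above: a middlebox rewrites the TCP MSS
# option on SYNs so the endpoints never emit segments larger than the
# tunnel can carry, papering over broken Path MTU discovery.
# Assumed values: PPPoE link, no IP or TCP options on the data packets.

PPPOE_MTU = 1492
CLAMPED_MSS = PPPOE_MTU - 40  # minus 20-byte IPv4 + 20-byte TCP headers

def clamp_mss(options):
    """options: list of (kind, value) TCP options from a SYN segment.
    Rewrite kind 2 (Maximum Segment Size) downward; leave the rest."""
    return [(kind, min(value, CLAMPED_MSS)) if kind == 2 else (kind, value)
            for kind, value in options]

# An Ethernet-sized MSS of 1460 gets clamped to fit the PPPoE path:
assert clamp_mss([(2, 1460)]) == [(2, 1452)]
```

This rewriting works only because nothing authenticates the TCP header end to end; with integrity protection, a middlebox that modified the option would just cause the segment to be discarded -- which is the tension between security and such hacks noted above.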
Re: Re[4]: www.isoc.org unreachable when ECN is used
On 12-dec-03, at 22:24, Theodore Ts'o wrote:
> Does that mean that Path MTU was badly designed, because it failed to
> take into account stupid firewalls?

Path MTU discovery was implemented very poorly, because implementations tend to expect certain functionality in routers, and usually don't recover when this functionality is absent. (For whatever reason.)

> Does it mean that backwards compatibility is **SO** important that we
> cannot add security, lest we break some badly designed, but yet
> deployed infrastructure boxes?

The way things are today (and will probably stay for a long time), there is no course of action that is completely problem-free. In the meantime, can anyone explain to me which real-world problem ECN solves?
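As a partial answer to that closing question: what ECN is meant to buy is congestion signaling without packet loss. A router whose queue is filling can mark ECN-capable packets (set CE) instead of dropping them, so senders back off without waiting out a retransmission timeout. A toy queue sketch, where the threshold/limit model stands in for RED-style active queue management and the codepoints are the RFC 3168 IP-header ECN values:

```python
# Toy router queue showing what ECN changes: under early congestion,
# ECN-capable packets are marked CE and still delivered, while
# non-ECN traffic can only be signaled by dropping it.

NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11  # RFC 3168 codepoints

def handle(queue, pkt, threshold, limit):
    """pkt: dict with an 'ecn' field holding the ECN codepoint.
    Returns the action taken for this packet."""
    if len(queue) >= limit:
        return "dropped"                 # queue full: drop regardless
    if len(queue) >= threshold:          # early congestion signal
        if pkt["ecn"] in (ECT0, ECT1):   # sender advertised ECN support
            pkt["ecn"] = CE              # mark instead of dropping
        else:
            return "dropped"            # non-ECT: only loss can signal
    queue.append(pkt)
    return "queued"
```

Whether that benefit was worth the deployment pain with broken middleboxes is, of course, exactly what this thread is arguing about.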
Re[4]: www.isoc.org unreachable when ECN is used
Mark Smith writes:
> I think you might be missing the point. ECN only breaks when used with
> previous *bad* implementations of the relevant RFCs.

Perhaps my point isn't clear: ECN implementations prevent communication, rather than enhance it. I don't see what advantage ECN provides, but it has become obvious from this discussion what it removes. A protocol and implementation that leave you worse off than you would be without them are a waste of time.

And what, exactly, is a "bad" implementation of the relevant RFCs? One that does not presciently foresee every good and not-so-good implementation of every conceivable functionality that someone might dream up in a spare moment, for the eternal future? I don't like solutions looking for problems. They tend to cause problems themselves. Apparently, FreeBSD and Windows don't implement ECN, since they both seem to work for me with any site. This is how things should be.

> At a guess, I find that it is only 1% or less of the web sites I visit
> that I have trouble with when using ECN.

How many Web sites do you have trouble accessing _without_ ECN?

> That indicates that the other 99% of web sites' firewalls got it
> right. Following your logic, the 99% should be penalised for the
> mistakes of the 1%.

No. Following my logic, if I can access 100% of sites without ECN, there is no point in implementing ECN so that I can access only 99% of them. I lose something with ECN, and I gain nothing. Therefore ECN is a bad idea -- feature bloat.

> In the long term, accommodating developer naivety, rather than
> penalising it, can only lead everybody down a dead-end path.

Yes. That's why I'm not interested in ECN. Why encourage mistakes?

> Improvement stops, at which point everybody suffers.

You assume that improvement is required. If I have a system that does everything I require, I don't need improvements. Improvements will destabilize my system uselessly, while providing no benefit that I need.

The assumption that information systems must be perpetually upgraded and improved even in the absence of clear need, and the axiomatic character that this assumption seems to have among so many people in IT, is an affliction largely peculiar to the information-technology industry, and to a lesser extent to all high-tech industries.
Re: Re[4]: www.isoc.org unreachable when ECN is used
> If I have a system that does everything I require, I don't need
> improvements.

So your current requirements are exactly the same as those of all the other users of the Internet? I find it hard to believe that your requirements are exactly the same as mine, and I'm only one of the approximately 500 million other people currently accessing the Internet.

Do you know what ECN does? Can you explain why you don't need it? If you can't, I don't think you should be making statements about whether it is good or bad, or whether it has been designed well or not.
Re[4]: www.isoc.org unreachable when ECN is used
[EMAIL PROTECTED] writes:
> The problem is that the most common failure mode is *not* getting an
> RST back, but getting NOTHING back, because some squirrely firewall
> between here and there is silently dropping packets with bits it
> doesn't understand.

Ah ... that would definitely be a bug with the firewall, then. However, a slight complication is that firewalls normally do not enter into TCP/IP conversations as proxies for the true correspondents -- so is it really appropriate for a firewall to send an RST on behalf of some other host? If the firewall really is a legitimate proxy as well, no problem; but if it is intended to be fairly transparent, holding conversations with a distant host in a way that gives the latter the impression that it is talking to someone else is risky business.

I also don't see why a firewall would drop packets just because reserved bits are set, although I can see why it might be a configurable option for the most paranoid users.
Re[4]: www.isoc.org unreachable when ECN is used [was: Re: ITU takes over?]
Scott Bradner writes:
> woe be to new applications through such a firewall

It's important to understand that the Internet is not monolithic, and no matter what the latest and greatest standards may be, there will always be parts of the Net that run older software. Expecting the entire Net to upgrade to this morning's version of the TCP/IP stack in two hours is a fantasy.
Re: Re[4]: www.isoc.org unreachable when ECN is used
On Thu, Dec 11, 2003 at 09:06:06PM +0100, Anthony G. Atkielski wrote:
> I also don't see why a firewall would drop packets just because
> reserved bits are set, although I can see why it might be a
> configurable option for the most paranoid users.

There are a lot of really dumb, dumb, dumb firewall authors out there; that's why.

- Ted