Re: www.isoc.org unreachable when ECN is used
Iljitsch; On 12-dec-03, at 22:24, Theodore Ts'o wrote: Does that mean that Path MTU was badly designed, because it failed to take into account stupid firewalls? Yes, PMTUD was badly designed. Path MTU discovery was implemented very poorly because implementations tend to expect certain functionality in routers, and usually don't recover when this functionality is absent. (For whatever reason.) No, it is not an implementation issue. It is not a good design to periodically send a large packet expecting that it may be larger than the current path MTU and be dropped. PMTUD should have been designed into TCP over IPv4, which should never have set the don't-fragment bit. TCP should be able to know that packets are reassembled at the IP layer. IPv6 was badly designed. IPv6 should never use PMTUD, and TCP over IPv6 should work with a PMTU of 1280, unless it gets OOB information that all the paths have a larger MTU. An alternative was to let routers record the MTU, say, in the flow label field. Masataka Ohta
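Ohta's alternative, assuming only the guaranteed IPv6 minimum MTU instead of probing the path, can be sketched as a sender-side segment-size clamp. This is a hypothetical illustration; the function and constant names are mine, not from any mail in the thread.

```python
# Sketch: clamping the TCP segment size to the IPv6 minimum MTU (1280 bytes)
# instead of probing the path with large don't-fragment packets, as classic
# PMTUD does.

IPV6_MIN_MTU = 1280
IPV6_HEADER = 40   # fixed IPv6 header
TCP_HEADER = 20    # TCP header without options

def conservative_mss(oob_path_mtu=None):
    """Return a TCP MSS that is safe without PMTUD.

    oob_path_mtu: optional out-of-band knowledge that every path has a
    larger MTU; otherwise assume only the guaranteed minimum.
    """
    mtu = oob_path_mtu if oob_path_mtu is not None else IPV6_MIN_MTU
    return mtu - IPV6_HEADER - TCP_HEADER

print(conservative_mss())      # 1220: safe on any conforming IPv6 path
print(conservative_mss(9000))  # 8940: only with out-of-band assurance
```

The design choice being argued is visible in the fallback: without out-of-band information the sender never exceeds the minimum, so no router on the path ever needs to generate a Packet Too Big message.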
Re: www.isoc.org unreachable when ECN is used
Fred Templin; It should have been designed into TCP over IPv4 and should have never set don't fragment bit. TCP should be able to know that packets are reassembled at the IP layer. Yes - it would have been great if this was designed into the architecture way back when. But it wasn't, so it would be up to receiving implementations to add hooks into the IPv4 reassembly cache if they want to inform TCP of fragmentation. As IP and TCP are implemented with a lot of interaction, which is a good thing, it was OK to modify both IP and TCP to introduce a properly designed PMTUD. IPv6 was badly designed. IPv6 should never use PMTUD and TCP over IPv6 should work with a PMTU of 1280, unless it gets OOB information that all the paths have a larger MTU. An alternative was to let routers record the MTU, say, in the flow label field. Huh? First, this is an alternative for those who insist on having PMTUD. I prefer to just send 1280B. First, the flow label field is needed for other purposes. Because of the interaction between fragmentation and QoS assurance, the PMTU of a QoS-assured flow must also be assured, so that only best-effort packets can reuse the unused 16 bits of the field. Second, untrusted routers can't be counted on to record accurate MTU values. It was possible to specify that all IPv6 routers behave so, without a backward compatibility problem. Finally, not all forwarding nodes can be counted on to understand IPv6. What is the problem? When you receive an IPv6 packet, all the routers on the path of the packet are IPv6 capable. The only reliable PMTU information is that which comes from a trusted end node. As the P of PMTU means PATH, the end node does not have PMTU information, unless the information is collected by routers on the path. Masataka Ohta
Re: /48 micro allocations for v6 root servers, was: national security
Bill Manning; % Expect to see routers being optimized that will only route % the upper 64bits of the address, so you might not want to do % anything smaller than that. This, if it happens, will be exactly opposed to the IPv6 design goal, which was to discourage/prohibit hardware/software designers from making presumptions or assumptions about the size of prefixes and HARDCODING them into products. That is a foolish design goal demonstrating a lack of understanding of how large the 2^64 space is. A better design, for example, is to use only the upper 16 bits (note that the 100,000 routes of IPv4 are already overkill) at the core backbone, which makes high-speed backbone routers less expensive and faster. Though multicast and QoS need more bits, they need separate mechanisms anyway. Masataka Ohta
Re: ITU takes over?
Vint; Unfortunately, the discussion has tended to center on ICANN as the only really visible example of an organization attempting to develop policy (which is being treated as synonymous with governance). To be practical, considering that ICANN never acted as the authority of Internet governance, what's wrong with making ICANN the scapegoat by giving ICANN governance, which has nothing to do with Internet governance, to some of the countries? Masataka Ohta
Re: national security
Joe Abley; I don't think this is an oversight, I'm pretty sure this was intentional. However, since in practice the BGP best path selection algorithm boils down to looking at the AS path length and this has the tendency to be the same length for many paths, BGP is fairly useless for deciding the best path for even low ambition definitions of the word. For the service aspects of F we're more concerned with reliability than performance. Recursive resolvers ask questions to the root relatively infrequently, and the important thing is that they have *a* path to use to talk to a root server, not necessarily that they are able to automagically select the instance with the lowest instantaneous RTT (and continue to find a root regardless of what damage might exist in the network elsewhere). I'm afraid the F servers do not follow the intention of my original proposal of anycast root servers. The intention is to allow millions or trillions of root servers. While you can rely on someone else's root server with BGP best path selection, it is a lot better to have your own root server. In addition, it is not necessary to have any hierarchy between anycast servers at all, as long as there is a single source of information. Hierarchy may be useful if a single entity manages all the anycast root servers. However, you can manage your own. Finally, using only a single address, F, does not provide any real robustness. Masataka Ohta
Re: national security
Joe Abley; I'm afraid the F servers do not follow the intention of my original proposal of anycast root servers. This may well be the case (I haven't read your original proposal). The IDs have expired. I'm working on a revised one. Apologies if I gave the impression that I thought to the contrary. No, there is no need for apologies. Finally, using only a single address, F, does not provide any real robustness. Fortunately there are twelve other root nameservers. But one should have one's own three root servers with different addresses. Masataka Ohta
Re: IPv6 addressing limitations (was national security)
Anthony G. Atkielski; I've described variable-length addresses in the past. Essentially a system like that of the telephone network, with addresses that can be extended as required at either end. Such addressing allows unlimited ad hoc extensibility at any time without upsetting any routing already in place. Unlimited? The limitation on the public part is 20 digits. Ad hoc extension beyond the hardware-supported length at that time will fatally hurt performance. Masataka Ohta
Re: IPv6 addressing limitations (was national security)
Anthony G. Atkielski; Unlimited? The limitation on the public part is 20 digits. That's just a matter of programming these days. On the Internet these days, it is a matter of hardware. Ad hoc extension beyond the hardware-supported length at that time will fatally hurt performance. What hardware limits numbers to 20 digits today? On a pseudo packet network, such as X.25 or ATM, full of connections, packets are forwarded by hardware using short connection IDs, while E.164 numbers are used only at the time of complex signalling processed by software. Masataka Ohta
Re: Ietf ITU DNS stuff
John C Klensin; ITU-T is quite insistent that they make _Recommendations_ only. W.r.t. enforcement, ITU-T makes standards, regardless of whether they are called recommendations or requests for comments. Interpretation and enforcement is up to each individual government. No. The WTO agreement helps them a lot in enforcing ITU standards. Masataka Ohta
Re: IPv6 addressing limitations (was national security)
jfcm; Dear Masataka, my interest in this is national security. I see a problem with IPv6 in two areas. Only two? IPv6 contains a lot of unnecessary features, such as stateless autoconfiguration, and is too complex to be deployable. Comments welcome. As it is too complex, it naturally has a lot of security problems. I'm not surprised some of them are national ones. Masataka Ohta
Re: IPv6 addressing limitations (was national security)
Iljitsch; We need to keep the size of the global routing table in check, which means wasting a good deal of address space. That's not untrue. However, as the size of the global routing table is limited, we don't need so many bits for routing. 61 bits, allowing 4 layers of routing each with 32K entries, is a lot more than enough. Even in IPv4, where addresses are considered at least somewhat scarce, a significant part of all possible addresses is lost because of this. Only 20 bits or so for routing is, certainly, no good. If we want to keep stateless autoconfig and be modestly future-proof we need at least a little over 80 bits. 96 would have been a good number, but I have no idea what the tradeoffs are in using a broken power of two. If we assume at least 96 bits are necessary, IPv6 only wastes 2 x 32 bits = 8 bytes per packet, or about 0.5% of a maximum size packet. Not a huge deal. And there's always header compression. Stateless autoconfig is a mostly useless feature, applicable only to hosts within a private IP network, for which 64 bits could have worked. 128 bits are here to enable separation of a 64-bit structured ID and a 64-bit locator. Masataka Ohta
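The routing arithmetic in the exchange above can be checked with a one-liner. Note this is illustrative only: four layers of 32K (2^15) entries consume 60 bits, so the 61-bit figure in the mail leaves one spare bit.

```python
import math

# Address bits consumed by hierarchical routing: each layer holding up to
# N entries needs ceil(log2(N)) bits of the address.
def routing_bits(layers, entries_per_layer):
    return layers * math.ceil(math.log2(entries_per_layer))

print(routing_bits(4, 32 * 1024))  # 60: four layers of 32K entries each
print(routing_bits(1, 100_000))    # 17: a flat table like IPv4's ~100K routes
```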
Re: IPv6 addressing limitations (was national security)
Iljitsch; Putting a 64-bit crypto-based host identifier in the bottom 64 bits of IPv6 addresses shouldn't get in the way of regular IPv6 addressing mechanisms and/or operation. Putting a crypto-based host identifier in the address is unnecessary, since there's really no need to include a strong host identifier in every packet sent to a host. The locator alone is usually sufficient, and if that's not sufficient, the sender can generally encrypt the traffic with a secret known only to the intended destination. Putting a 64 bit crypto-based identifier in IPv6 addresses isn't something that would be done because it's the only way to arrive at certain functionality, but rather because it's a convenient way to do it. Putting a 64-bit crypto-based identifier in the address means people won't type that long hexadecimal sequence. That is, even if most people use DNS or something like that, it is still inconvenient for DNS administrators. Note that the value is pseudo-random and completely different from host to host, so that copying some digits from another host does not work. People relying on DNS do not notice even if a 64-bit ID from DNS is different from the specified one. There is no one who specifies the ID. So, there is neither security nor convenience. Masataka Ohta
Re: arguments against NAT?
Michel Py; Melinda Shore wrote: The problems we're seeing from NATs - and they're considerable It depends on the situation; don't generalize, the reality of numbers is against you. The number of sites where NAT works just fine is orders of magnitude greater than the number of sites where it causes more than minor inconveniences. How can you say the number of sites where NAT works just fine? Have you operated such sites with and without NAT and compared the results by asking all the users? Or does it just mean that network operators think they operate their networks just fine? We're the IETF; we don't design the Internet for the select few that have issues with NAT, we design it for everyone. Design what? An IP network beyond a NAT is not part of the Internet. I have the greatest respect for Economics Nobel prize winners, but I have never met one that has half a clue about what it takes to operate an enterprise network on a daily basis. There is a difference between the market and what the market would/could be if this or that. How many of these Nobel prize winners understand the relationship between NAT and renumbering (as opposed to the obvious and moot save-IP-addresses and security ones)? The only thing economists should observe is that ISP service with a lot of IP addresses is charged a lot more than that with a single IP address. The difference reflects the real-world evaluation of the cost of NAT. Masataka Ohta
Re: national security
Paul Vixie; The switch to anycast for root servers is a good thing. again there's a tense problem. there was no switch to anycast. the last time those thirteen (or eight) ip addresses were each served by a single host in a single location was some time in the early 1990's. So? Service by multiple hosts in a single location is hardly anycast. When was it switched to anycast? But it was hardly without risks. For example, do we really fully comprehend the dynamics of anycast should there be a large scale disturbance to routing on the order of 9/11? yes, actually, we do. (or at least the f-root operator does.) Can you explain the reactions of people who have been engaged in root server operations and argue against anycast without comprehending its dynamics, as observed in the last month on the IETF DNSOP mailing list? Masataka Ohta
Re: [58crew] RE: IETF58 - Network Status
Perry; Because of exponential backoff, the aggregated bandwidth of multiple TCPs over a congested WLAN should not be so bad. However, a RED-like approach that attempts retries only a few times may be a good strategy to improve latency. A RED approach would be good, but in general there has to be a limit on the queue. Your wireless interface should not become a packet long-term storage facility. We are saying the same thing. I've seen RTTs to the base stations on the order of 25000ms (that's 25 SECONDS) or more. If you can't deliver a packet through a 300ns distance of air in 10ms, it is probably time to drop it. With exponential back-off of base 2, 10ms of initial delay becomes 40s after 12 retries. Note that 25000ms of delay does not necessarily mean that a station has a large buffer. Masataka Ohta
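The 10ms-to-40s figure in the mail above follows directly from base-2 exponential back-off. A quick worked check (not part of the original mail):

```python
# Delay reached after a number of doublings under base-2 exponential back-off.
def backoff_delay_ms(initial_ms, retries, base=2):
    return initial_ms * base ** retries

print(backoff_delay_ms(10, 12))  # 40960 ms: roughly the 40 s cited

# Cumulative time across the initial try plus 12 doubled retries:
print(sum(backoff_delay_ms(10, i) for i in range(13)))  # 81910 ms
```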
Re: [58crew] RE: IETF58 - Network Status
Iljitsch; I think there is some middle ground between 25000 and 10 ms. 10ms is the middle ground. That's enough for a bunch of retransmits on modern hardware. Retransmits on what type of hardware? At 1 Mbps transmitting a 1500 byte packet already takes 12 ms, without any link layer overhead, acks/naks or retransmits. End-to-end retransmits take even longer because of speed of light delays. That is an improper calculation. The requirement from TCP is that, on a lightly loaded link, the probability of packet drop should be very small. For example, if the probability of collision is (despite CSMA/CA) 0.1, 2 retries are enough to make the packet drop probability 0.1%. If there are hidden terminals, the probability of interference gets worse, so that more retries are desirable. And, with exponential back-off, the delay doubles as the number of retries is increased by 1. Coupled with aggressive FEC, that's more than enough time. FEC sucks because it also eats away at usable bandwidth when there are no errors. Wrong reason. FEC is fine against thermal noise. However, FEC does not work at all here, because, with a collision, the contents of the colliding packets are lost almost entirely. So let's: 1. Make sure access points don't have to contend with each other for airtime on the same channel 2. Make sure access points transmit with enough power to be heard over clients associated with other access points 3. Refrain from using too much bandwidth 4. Make use of higher-bandwidth wireless standards such as 802.11g No. 2 should be: Make sure clients and access points transmit with equal power to be heard over each other, to enable CSMA/CA collision avoidance with random delay. 1 is a performance improvement but is not a serious problem if CSMA/CA is working well. 3 is not specific to wireless and is irrelevant. 4 helps little and is not very meaningful, as PPS, rather than BPS, is becoming the limiting factor, especially for applications with small packets such as VOIP.
By the way, it would also be a good idea if the standard did proper power control of the mobile stations. Why? Raising the power output of the stuff you want to hear over these clients is much, much simpler. It is a good idea for some wireless protocols. However, it is the worst thing to do for CSMA/CA. Others won't notice your presence and won't reduce their transmission rate. By the way, I did some testing today and the results both agree with and contradict conventional wisdom with regard to 802.11 channel utilization. When two sets of systems communicating over 802.11b/g are close together, they'll start interfering when the channels are 3 apart (ie, 5 and 8), slowing down data transfer significantly. This indicates that in the US only three channels can be used close together, but four in Europe: 1, 5, 9, 13. However, when the two sets of stations are NOT close together (but still well within range), such as with a wall in between them, 3 channels apart doesn't lead to statistically significant slowdowns, and even 2 channels apart is doable: 25% slowdown at 802.11b and 50% slowdown at 802.11g. So that would easily give us four usable channels in the US (1, 4, 8, 11) and 5 in Europe (1, 4, 7, 10, 13), or even six / seven (all odd channels) in a pinch. (As always, your mileage may vary. These results were obtained with spare hardware lying around my house.) The results, it seems to me, completely agree with the conventional wisdom. Masataka Ohta
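The retry arithmetic in the message above can be verified with a couple of one-liners (illustrative only; link-layer overhead and ACKs are deliberately ignored, as in the mail):

```python
# A packet is dropped only if the first transmission and every retry collide.
def drop_probability(p_collision, retries):
    return p_collision ** (retries + 1)

print(drop_probability(0.1, 2))  # ~0.001: the 0.1% figure for 2 retries

# Time to put one frame on the air, ignoring link-layer overhead and ACKs.
def tx_time_ms(frame_bytes, rate_bps):
    return frame_bytes * 8 / rate_bps * 1000

print(tx_time_ms(1500, 1_000_000))  # 12.0 ms for 1500 bytes at 1 Mbps
```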
Re: [58crew] RE: IETF58 - Network Status
Perry; Radio links like this are simply too unreliable to run without additional protection: TCP isn't equipped to operate in environments with double digit packet loss percentages. I agree with you, Iljitsch. A protocol that had been tuned for use with TCP would have been fine -- heavy FEC and some moderate retransmissions in case of corruption or station handoffs are okay. The problem is not when you try to be reliable in the face of radio interference, but when you also try to be reliable in the face of congestion -- which is precisely what the system does. Storing huge queues of packets when there is congestion does no one any favors. TCP needs packet drops and fairly low standard deviations on packet delays to do its job well, and 802.11 does precisely the wrong thing. But Ethernet does (or did, when it was CSMA) the same thing, and RFC 1958 says: To quote from [Saltzer], The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the endpoints of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible. (Sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement.) Because of exponential backoff, the aggregated bandwidth of multiple TCPs over a congested WLAN should not be so bad. However, a RED-like approach that attempts retries only a few times may be a good strategy to improve latency. Masataka Ohta
LEA Bottom Line (was Re: IEA Bottom Line)
John; Third, while it would certainly make global interoperability easier, there is no way to prevent people from deciding to use local characters, and maybe even local codings, in local environments. They will do it. That something is technically possible does not mean it is useful. In this case of localized (not internationalized at all) email addresses, it is not useful and will not be used widely, so it can safely be ignored. They will believe (probably correctly) that they have good reasons for doing it. Our choices are between * Finding a rational, plausible, global way to let them do it while still preserving global interoperability or Mails with localized addresses do not interoperate globally precisely because they use localized addresses, so that is a meaningless goal. Fourth, there was, and is, a case to be made that internationalization of domain names is unnecessary and dumb Right. Localized domain names are not useful and not used at all, at least in Japan. Maybe there are some names registered for protection purposes, but none are used. Even if there are a few countries where such domain names are used, it proves that the issue is purely local, with no international issues, and no global interoperability is necessary. You can argue that localized domain names were useful for registries and registrars, as names registered for protection purposes increased their revenues, though I guess that, in Japan, the advertisement fees paid to various media exceeded the registration income. But even such an argument is not applicable to email names. Masataka Ohta
Re: IESG proposed statement on the IETF mission
Simon Woodside; Yes, and towards a possibly more contentious application, see Voice over IP. Lots of VoIP work is being done without involving the internet at all. Used by telecoms for telecoms applications, where best effort isn't good enough because it needs to keep working when the power goes out. IP, yes, Internet, no. Why, do you think, would the Internet stop working when the power goes out? It should not, and this is required of certain ISPs by regulation, at least in Japan. Against that you have voice over internet which is AKA voice chat and already abounds in true internet p2p apps like iChat, GnomeMeeting, and some programs on that other OS. These run on the public internet and benefit from the IETF design paradigms like edge-to-edge (aka end2end) and best effort but also have to accept the relevant drawbacks. voice chat? Are you assuming PCs? POTS telephone devices and terminal adaptors are the natural way of voice over the Internet. Note that the end-to-end architecture means the ultimate availability of fate sharing. Masataka Ohta PS According to the end-to-end principle, end-user equipment should have its own power backup, of course, which is also the case with ISDN TAs or cellular phone devices. Then, with multihoming, your connection is a lot more robust than that of a single telephone company.
Re: Persistent applications-level identifiers, the DNS, and RFC 2428
John; I just had occasion to look again at RFC 2428, FTP Extensions for IPv6 and NATs, Please consider this a fairly narrow question. I'm afraid that your question is still too broad. Are you asking the question for IPv6 or for NATs? I am asking the question about FTP, about a piece of syntax in the protocol, and about the options that syntax permits. For what purpose? I am asking only about why we haven't structured that syntax Seemingly because there was no reason to have one. If you think you now have a reason, you should state it. If you don't, we can debate forever on infinite variations of reasons and associated extensions. For example, how about extending FTP to be able to handle realtime streams? That is a question about FTP, about a piece of syntax (MODE/STRU commands) in the protocol, and about the options that syntax permits. Masataka Ohta
Re: [Fwd: [Asrg] Verisign: All Your ...
Dean; Specifically, you insist that DNS queries, via the DNS _protocol_, can be used to check if a domain exists. No, I never did. Masataka Ohta
Re: conclusion for ALL YOUR WILDCARDS
Keith; Your mistake (or is it intentional?) is to have narrowed the focus of the discussion so that your point is a minor protocol issue of an e-mail protocol. Yes, you should conclude it. In general, trying to teach things to people with read-only minds is an exercise in futility. Exactly. Masataka Ohta
Re: [Fwd: [Asrg] Verisign: All Your ...
Dean; When you get an NXDOMAIN DNS protocol reply, the DNS protocol (RFC 1034, etc.) defines a specific meaning. Neither RFC 1034 nor RFC 1035 defines an NXDOMAIN DNS protocol reply. But when you don't get NXDOMAIN, there is no meaning to be implied. This is a fact due to the inclusion of wildcard records in the DNS protocol. Wrong. As is clearly stated in RFC 1034: The general idea is that any name in that zone which is presented to server in a query will be assumed to exist, with certain properties, unless explicit evidence exists to the contrary. Domain names matching a wildcard are assumed to exist. Masataka Ohta
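The RFC 1034 wildcard behavior being argued over can be modeled with a toy lookup. This is a deliberately simplified sketch (real wildcard matching involves closest-encloser rules and more), using the 64.94.110.11 address mentioned elsewhere in the thread as the wildcard target.

```python
# Toy model of RFC 1034 wildcard matching: a query for a name with no
# explicit data, but covered by a wildcard, gets a synthesized answer
# instead of NXDOMAIN. Illustrative sketch, not a full resolver.

def lookup(zone, qname):
    """zone: dict of owner name -> record data. Returns (rcode, data)."""
    if qname in zone:
        return ("NOERROR", zone[qname])
    # Try the wildcard covering this name: replace the leftmost label with '*'.
    labels = qname.split(".")
    wildcard = ".".join(["*"] + labels[1:])
    if wildcard in zone:
        return ("NOERROR", zone[wildcard])  # synthesized from the wildcard
    return ("NXDOMAIN", None)

zone = {"example.com": "192.0.2.1", "*.com": "64.94.110.11"}
print(lookup(zone, "example.com"))       # ('NOERROR', '192.0.2.1')
print(lookup(zone, "unregistered.com"))  # ('NOERROR', '64.94.110.11'), never NXDOMAIN
```

This is exactly Ohta's point: once a wildcard is present, every matching name "is assumed to exist", so the absence of NXDOMAIN says nothing about registration.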
Re: [Fwd: [Asrg] Verisign: All Your ...
Dean; You say names. But is it whois names or domain names? I mean names useful to people. Whois is a protocol for accessing the registration of names. DNS is a protocol for distributing Records Wrong. The whois protocol is a protocol for accessing the registration of names, not specifically domain names. DNS, the domain name system, is not a protocol but the system to manage domain names. I've never heard of such a system. DNS is an acronym of Domain Name System. This acronym, used in RFC 1034 and elsewhere, refers to a protocol. As is clearly stated in the second paragraph of RFC 1034: A subset of DNS functions and data types constitute an official protocol. The DNS system includes the DNS protocol but is not the DNS protocol. RFC 1034 and subsequent RFCs describe the protocol. Yes, RFC 1034 does describe the protocol and other things. So what? You seem to think that Verisign operations are restricted by this protocol. No. DNS is a system, not a protocol. Verisign operations directly related to the protocol are, of course, restricted by the protocol, though that is irrelevant. The fact still remains that DNS entries do not necessarily imply registration, and that the DNS protocol cannot be used to make registry queries. The domain registry is a part of the DNS system and is of no importance as long as proper names are returned for DNS queries. Masataka Ohta
Re: [Fwd: [Asrg] Verisign: All Your ...
Dean; The domain registry is a part of the DNS system and is of no importance as long as proper names are returned for DNS queries. Good. Then we agree: Verisign is not doing anything wrong. As long as Verisign's registry has nothing to do with the result of a com/net TLD query, yes. Wildcards are part of the DNS protocol. You are still trying to confuse the system and a protocol, in vain. Our concern is not merely with a protocol but with the DNS system as a whole. Masataka Ohta
Re: [Fwd: [Asrg] Verisign: All Your ...
James; Dean wrote: The fact still remains that DNS entries do not necessarily imply registration, and that the DNS protocol cannot be used to make registry queries. This is getting so far from the topic it's not funny. Do any of the systems broken by Verisign try and do REGISTRY queries through DNS? No. So how is this relevant AT ALL to what Verisign have done? As you can observe, Dean keeps saying just names and registry, trying to hide the fact that we are discussing domain names and the registry of domain names. Masataka Ohta
Re: [Fwd: [Asrg] Verisign: All Your ...
Dean; You say names. But is it whois names or domain names? I mean names useful to people. Whois is a protocol for accessing the registration of names. DNS is a protocol for distributing Records Wrong. The whois protocol is a protocol for accessing the registration of names, not specifically domain names. DNS, the domain name system, is not a protocol but the system to manage domain names. I've never heard of such a system. DNS is an acronym of Domain Name System. Uhh, no. You don't seem to understand that names are abstract concepts. And the names are called domain names. PERIOD. Masataka Ohta
Re: [Fwd: [Asrg] Verisign: All Your ...
Dean; The set is the set of *registered* names. The proper and only way to query this set is through whois. The only reason to have domain names registered is to use them through DNS. Whois may be a useful tool for registration convenience, but it is of secondary importance. If you disagree, let me control the DNS reply for your domain av8.com, while keeping the whois response for av8.com as is. DNS has nothing to do with registration If you are arguing that Verisign's registration has nothing to do with DNS, I have no reason to disagree. Then, for DNS use, we need another registration which has much to do with DNS. Verisign can still continue to operate their current registry for com and net for their whois queries, though it has nothing to do with the com and net TLDs in DNS replies. Masataka Ohta
Re: [Fwd: [Asrg] Verisign: All Your ...
Tim; So when are Verisign's rights to handle .net/.com up for renewal? It seems we should see that they don't get a renewal. Which .net/.com? Whois ones or DNS ones? Whois ones may be updated by Verisign, forever. :-) Tim On Sat, Sep 20, 2003 at 03:04:52PM +0859, Masataka Ohta wrote: DNS has nothing to do with registration If you are arguing that verisign registration has nothing to do with DNS, I have no reason to disagree. Then, for DNS use, we need another registration which has much to do with DNS. Verisign can still continue to operate their current registry for com and net for their whois query, though it has nothing to do with com and net TLDs in DNS reply. Masataka Ohta
Re: [Fwd: [Asrg] Verisign: All Your Misspelling Are Belong To Us]
Carl; http://www.isc.org/products/BIND/delegation-only.html As I just posted to the DNSOP WG ML (detailed discussion should be done there), it is not an effective protection against synthesised (from a wildcarded NS, in this case) NS records and synthesised (from scratch) child zone contents. A protection is to reject NS answers if they are identical to the wildcarded one, though this has several side effects. Masataka Ohta
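The protection described above, rejecting answers identical to the wildcarded one, might look roughly like this resolver-side filter. This is a hypothetical sketch (real deployments used BIND's delegation-only zones instead); the address is the SiteFinder target mentioned elsewhere in the thread.

```python
# Sketch of the countermeasure: drop a TLD server answer whose target is
# identical to the known wildcard-synthesized target. Side effect, as noted
# in the mail: a legitimate name pointing at the same target is also lost.

WILDCARD_TARGETS = {"64.94.110.11"}  # Verisign's wildcard A-record target

def filter_answer(qname, rdata):
    """Return rdata, or None if it looks synthesized from the wildcard."""
    if rdata in WILDCARD_TARGETS:
        return None  # treat the name as nonexistent
    return rdata

print(filter_answer("example.com", "192.0.2.1"))          # passes through
print(filter_answer("unregistered.com", "64.94.110.11"))  # None: rejected
```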
Re: Stupid DNS tricks
Adam Roach; Because this is probably a community of interest for the topic of DNS, I thought it would be worthwhile mentioning that Verisign has apparently unilaterally put in place wildcard DNS records for *.com and *.net. All unregistered domains in .com and .net now resolve to 64.94.110.11, which runs a Verisign-operated web search engine on port 80. A very interesting stupidity. However, when I checked Verisign's whois database for *.com and *.net, the answers were negative: No match for *.COM. No match for *.NET. That is, according to Verisign, they are not registered domain names. So, I tried to register the names myself, for obvious reasons :-), through www.verisign.com. :-) But my request was denied with: Please use only letters, numbers or dashes [-]. Do not enter spaces, periods [.] or other punctuation. So they are, according to Verisign, not domain names open for registration. So, the remaining question is whether the stupidity is interesting enough to disqualify Verisign from being a gTLD registry or not. Masataka Ohta
Re: Stupid DNS tricks
Any comment on the attached draft ID? Abstract This memo describes actions against broken content of a primary server of a TLD. Without waiting for an action by some, if any, central authority, distributed actions by TLD server operators and ISPs can settle the issue in the short term. Masataka Ohta --- INTERNET DRAFT M. Ohta draft-ohta-broken-tld--1.txt Tokyo Institute of Technology September 2003 Distributed Actions Against Broken TLD Status of this Memo This document is an Internet-Draft and is subject to all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as work in progress. The list of current Internet-Drafts can be accessed at http://www.ietf.org/1id-abstracts.html The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html Abstract This memo describes actions against broken content of a primary server of a TLD. Without waiting for an action by some, if any, central authority, distributed actions by TLD server operators and ISPs can settle the issue in the short term. 1. Introduction DNS is a fully distributed database of domain names and their associated values with loose integrity. However, the primary server of a zone is a single point of failure of the zone, holding the most current copy of the zone, and such a failure at a TLD can cause a lot of damage to the Internet. As it may take time for a central authority, if any, to take care of the problem, this memo describes distributed actions as a short term solution to protect the Internet against broken TLD zone content.
The long term solution is to let the primary server operator fix the content or to change the primary server operator, which may involve a central authority. M. Ohta Expires on March 17, 2004 [Page 1] A similar technique is applicable to root servers with broken contents. 2. Actions of TLD Server Operators A TLD server operator who has found that the TLD zone content is broken should disable zone transfer and use a copy of old zone content known not to be broken. Or, if the fix for the zone content is obvious and easy, the operator may manually or automatically edit the most current content without updating the SOA serial number. In this case, zone transfer need not be disabled, though the actions of ISPs described in section 3 may make transfer from servers with broken content impossible. 3. Actions of ISPs ISPs should disable routes to TLD servers with broken content and/or filter packets to/from those TLD servers. ISPs should periodically check whether the servers still contain broken content or not. 4. Security Considerations As for security, TLD servers should never have broken content. 5. Author's Address Masataka Ohta Graduate School of Information Science and Engineering Tokyo Institute of Technology 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8552, JAPAN Phone: +81-3-5734-3299 Fax: +81-3-5734-3299 EMail: [EMAIL PROTECTED]
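The operator action in section 2 is essentially a fallback rule: serve the most current copy known not to be broken. A minimal sketch of that rule, where `looks_broken()` and the zone representation are hypothetical stand-ins for whatever integrity check an operator actually uses:

```python
def looks_broken(zone):
    # Hypothetical check: a TLD zone that lost all its NS delegations
    # is certainly broken. Real checks would be operator-specific.
    return not any(rtype == "NS" for (_, rtype) in zone["records"])

def select_zone(current, last_known_good):
    """Serve the most current copy known not to be broken (section 2)."""
    if looks_broken(current):
        # Fall back to the old copy; zone transfer from this server
        # should also be disabled so the broken copy does not spread.
        return last_known_good
    return current

good = {"serial": 2003091700,
        "records": [("example.com.", "NS"), ("example.com.", "A")]}
bad = {"serial": 2003091800, "records": [("example.com.", "A")]}

assert select_zone(bad, good) is good   # broken current copy is rejected
assert select_zone(good, good) is good  # healthy current copy is served
```

Note that, as the draft says, the fallback deliberately does not bump the SOA serial, so secondaries do not mistake the reverted copy for a newer zone.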
Re: where the indirection layer belongs
Robert Honore; It seems to me though, that nobody has stated clearly what those wrong perceptions and ideas are, much less what is wrong with them, so as to replace them with correct perceptions and ideas. A draft ID Simple Internet Protocol, Again on the real problems can be found at ftp://ftp.hpcl.titech.ac.jp/draft-ohta-sipa--1.txt Abstract IPv6 has failed to be deployed partly because of its complexity. IPv6 is mostly a descendant of SIP (Simple Internet Protocol) but has fatally bloated by merging with other proposals and by trying to give IPv6 better functionality than IPv4, which makes IPv6 unnecessarily complex and, thus, worse than IPv4. SIPA (Simple Internet Protocol, Again) is a proposal attempting to restore simplicity to IPv6 by sticking to the real world operational requirements of the IPv4 Internet, to make IPv6 more acceptable to ISPs and end users. Masataka Ohta
Re: where the indirection layer belongs
Robert Honore; (regarding the complexity of putting a general-purpose layer to survive address changes between L4 and L7) It is not merely complex but also useless to have such a layer. Right now I am not fully aware of all of the specifics of the issues in trying to implement such a layer, but the statement you make in the paragraph just below does not seem to support your statement that It is not merely complex but also useless to have such a layer. I will explain my position below. The issue has been discussed and analyzed long ago. Just making a complex proposal with new layers, which is known to be useless, is not a constructive way of discussion, and the proposal should be ignored without much discussion. The basic problem of the approach of having such a layer is that only the application layer has proper knowledge of the proper timeout value to try other addresses. If such a layer is useless as you say, then the application could see no benefit in being able to parametrise the transport or network layers with such information as the proper timeout values to try other addresses. It would also be impossible for the application to benefit from being able to find other addresses to try. Read the draft and say TCP. Another objection I have to your statement: If the application layer is the one that has the proper knowledge of things like timeouts (I suppose there are other such things), then it should be possible to implement something on the stack that is close to the application wherein it can collect that information and parametrise the behaviour of the rest of the lower layers. If by your statement you are saying that it is impossible to implement such a thing, then you will have to prove it. Read the draft and say TCP. So, the issue can be solved only by involving the application layer, though most applications over TCP can be satisfied by a default timeout of TCP. In either case, there is no room for an additional layer.
I agree completely with your first sentence in this paragraph. The problem we have right now is that, save for the application specifying to the transport which peer port it wants to connect with, it has no further control over the behaviour of the transport or the network layers other than possibly to drop the connection. Wrong. An additional API to the existing layers is just enough. I would also prefer not to modify any of the libraries or implementations of those protocols, lest we break something. That is the source of your misunderstanding. Your requirement is not only meaningless but also harmful. See other mail on how MAST is not transparent for details. In short, it is as complex, non-transparent, meaningless and harmful as NAT. Having looked at draft-ohta-e2e-multihoming-05.txt, I am not convinced that it supports your statement that It is not merely complex but useless to have such a layer, nor do I believe that the presence of See above. Layering is abstraction and not indirection at all. Agreed. We should use the correct terminology. And a new layer, from which information in the other layers is hidden by the abstraction, is useless for implementing mechanisms that use that information. Masataka Ohta
Re: the VoIP Paradox
Simon; Voice over IP is paradoxically both internet and telephony at the same time. This article presents the paradox, and associated arguments. There is no paradox. The internet carries information. You should, at least, distinguish VoIP as a telephone network from Internet telephony. There is no internet telephony... See my paper Simple Internet Phone presented at INET2000. http://www.isoc.org/inet2000/cdproceedings/4a/4a_3.htm It, for example, says: However, it is obvious that the telephone network will be replaced by the Internet, and will eventually disappear. there is IP telephony which is There is no internet telephony... there is IP telephony which is not running on the public internet. There is also VoIP on the public internet which I like to call Voice Chat. Apparently, you don't recognize the current situation, which I foresaw several years ago. Voice chat, of course, is not internet telephony. Paradoxical regulation on voice in the US is a US local issue. Please cite a document, I don't find any japanese regulation that makes it any different there... Japanese telecommunication laws (available at http://law.e-gov.go.jp/cgi-bin/idxsearch.cgi) do not treat telephony or voice as anything special, and the requirement on providers is the same, though detailed requirements vary. In Japan, TAs connecting the Internet and POTS telephone devices are rapidly replacing the telephone network, including VoIP ones. ... do they provide PSTN-level availability? In theory, yes. In practice, there is no such thing as PSTN-level availability. in an emergency / power failure? In an emergency, a best effort network works better than a circuit switched one, of course. As for power, have you ever used ISDN with TAs? Masataka Ohta
Re: VoIP regulation... Japan versus USA approaches (RE: Masataka Ohta,Simon)
Simon; Internet Telephony another paradox. How can the public internet possibly support telephony? We have as axiomatic the edge-to-edge principle which guarantees that the person at the other end may not have UPS power supply. This is a DESIGN GOAL of the internet, hence, the paradox. Is that design goal changing? You are talking about a paradox of PSTN, then. Emergency power supply is not required directly by law. Over ISDN, emergency power supply is not required even by finer regulation, and even though NTT voluntarily supplies power, it is not enough to drive an analog phone device over a TA, which is the way almost everyone uses ISDN. In addition, a consequence of the end-to-end (not edge-to-edge) principle is the fate sharing property to maximize reliability and availability, which has nothing to do with having a UPS. The fact of the paradox is going to lead to paradoxical situations like internet regulation for VoIP. No, not at all. It is merely that some countries, such as the US, have a paradoxical regulation on voice. There is no internet telephony... See my paper Simple Internet Phone presented at INET2000. http://www.isoc.org/inet2000/cdproceedings/4a/4a_3.htm in the paper introduction: However, it is obvious that the telephone network will be replaced by the Internet, Why is this obviously true? It was obvious, if you had enough knowledge of PSTN. See above. But, as the replacement is happening, it is even more obvious. of the features of VoIP protocols will become obsolete. Instead, the Simple Internet Phone is designed placing the priority on affinity to the Internet and its architectural principles as an end-to-end, globally connected and scalable IP network. You do not include reliable or more importantly available in your list of architectural principles of the internet, Reliability and availability are automatically derived from the end-to-end principle. but as I pointed out in my paradox paper, available is the top principle of the telephone network.
I believe that BY DESIGN the two are mutually exclusive, thus, it is a paradox to say internet telephony. By design, the telephone network, violating the end-to-end principle by having central servers, is faulty. So is MGCP, which is a major cause of the loss of availability of Internet telephony or of VoIP networks using MGCP. In an emergency, a best effort network works better than a circuit switched one, of course. If the power goes out it doesn't matter! See above. As for power, have you ever used ISDN with TAs? No. Have more experience with PSTN. Masataka Ohta
Re: VoIP regulation... Japan versus USA approaches (RE: Masataka Ohta,Simon)
Bob; I am curious how Japan does this, but the island size and density makes the whole argument different to some extent. So, how's it work under the wise rule of NHK/MTT ??? That'd be MPHPT at http://www.soumu.go.jp/ Though the cabinet set a wise strategy, MPHPT has no idea what Internet telephony is and keeps taking stupid actions. However, the actions are so delayed and so weakly against the cabinet strategy that they are not very harmful. The uptake in VOIP in Japan has been driven by the success of cheap/fast broadband (see http://www.itu.int/osg/spu/newslog/2003/07/21.html#a72 Progress of Internet telephony in Japan is driven by private ISPs, which are convinced that free long distance telephony (with an additional charged (but inexpensive) service for PSTN gatewaying) is the most powerful sales promotion tool for their service. Many countries have moved beyond the regulatory debates that characterize the US very-much sector-specific regulatory framework. There are a number of indications the landscape is changing rapidly in the US too (see http://www.itu.int/osg/spu/newslog/categories/voip/2003/08/22.html#a159) Too bad. They are still talking about voice. No one can regulate individuals' use of VoIP over the Internet when there is no central authority similar to Napster. The basic problem of US regulation is not that they don't regulate VoIP but that their model of universal access charge is not economically feasible. Universal access charge is to help people in sparsely populated areas. So, the charge should be paid by regional providers in densely populated areas (regardless of whether the providers provide PSTN, TV or Internet service). Can't ITU-T perform some study to let the USG recognize its fault? Masataka Ohta
Re: names, addresses, routes
Dave; This being a technical discussion, we had better have some useful, technical definitions. Technical definitions? Of the problems, yes. Of the terminology, no. Then we can switch to having a debate about problems and solutions. That is a useful approach if you want to continue the debate forever. However, to solve the problem, anyone proposing a solution may use any terminology, as long as it is clearly defined. There is technical history to the terms at issue, here. However they suffer different definitions in different technical discussions. Different technologies need different definitions. We would really suffer if you tried to unify them. Masataka Ohta
Re: the VoIP Paradox
Simon; Voice over IP is paradoxically both internet and telephony at the same time. This article presents the paradox, and associated arguments. There is no paradox. The internet carries information. You should, at least, distinguish VoIP as a telephone network from Internet telephony. In Japan, TAs connecting the Internet and POTS telephone devices are rapidly replacing the telephone network, including VoIP ones. a. VoIP is telephony and should be regulated. b. VoIP is internet and should not be regulated. Why, do you think, should the Internet without voice not be regulated? It is. Paradoxical regulation on voice in the US is a US local issue. Masataka Ohta
Re: Criminals
Valdis; MIME is too much e-mail centric. For an E-mail centric protocol, it's worked pretty well on port 80 MIME worked on port 80 only as well as pre-MIME 822 did. MIME worked pretty badly with the rest of the OS. All that was necessary was to document file name extensions such as tar, uu and 822, and email could have worked better with the OS. MIME complexities such as multipart and base64 are just a reinvention of the wheel. On most OSes, including but not limited to UNIX, that's the way to designate content types of files. But it's not *universally* true, so you have to come up with some sideband way of conveying information. And in fact, even if two systems both support extensions as a *mandatory* flagging, you can still run into trouble No. - what if the two systems don't use the *same* extension for a filetype that should be portable? It's just an issue of escaping. Add, say, X, if some extension is reserved on some OS. Should a postscript file end in .PS, .ps, .PST? Should a VisualBasic script be .VB or .VBS? Is an image/jpeg file a .JPG or .JPEG? Should the MIME content type of a postscript file be PS, ps, PST, postscript or PostScript? Should a VisualBasic script be VB, VBS or visual-basic? Is an image/jpeg file a JPG or JPEG? It's just an issue of registration and a registration authority. Instead, MIME developers arrogantly claimed that OSes should be modified to support MIME content-type (and even that text files on OSes should use MIME format to support other tags such as charset). No. This claim is right up there with SMTP developers arrogantly claiming that OSes should be modified to support network-standard EOL. Wrong. It is not a line format issue but an issue of OS internal features. MIME developers arrogantly assume OSes should support file types using MIME content types. And poor OS developers, such as Microsoft, actually did so. And of course they didn't. They merely insisted that the user agent at either end convert to/from the local format.
The local format does support ASCII file names but has no room to store a content type. Masataka Ohta
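The registry-plus-escaping idea argued for above can be sketched in a few lines. Everything here is an assumption for illustration: the registry contents, the set of locally reserved extensions, and the "prefix X on collision" rule the post proposes:

```python
# A central registry of filename extensions (the post's alternative to
# MIME content types), which an authority like IANA would maintain.
REGISTRY = {"ps": "PostScript", "jpeg": "JPEG image", "tar": "tar archive"}

# Hypothetical: on this OS, "ps" already means something else.
RESERVED_ON_LOCAL_OS = {"ps"}

def local_extension(ext):
    """Map a registered extension to a safe local one, escaping any
    collision with a locally reserved extension by prepending X."""
    if ext not in REGISTRY:
        raise ValueError("unregistered extension: " + ext)
    return "X" + ext if ext in RESERVED_ON_LOCAL_OS else ext

assert local_extension("jpeg") == "jpeg"   # no collision, used as-is
assert local_extension("ps") == "Xps"      # collision, escaped
```

The point of the sketch is that portability across systems reduces to one registry lookup plus a purely local escaping rule, with no sideband content-type header at all.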
Re: Criminals
Keith; MIME developers are. MIME is too much e-mail centric. Whether one uses content-type or a file name is irrelevant to mail security, just as whether one uses uuencode or base64 is irrelevant; MIME developers wasted a lot of time on both. It also produced mail readers that didn't properly label content on outgoing mail and ignored the content-type parameter on incoming mail, instead looking at the suffix of a nonstandard filename parameter (which was only intended for use with application/octet-stream). On most OSes, including but not limited to UNIX, that's the way to designate content types of files. MIME should have followed the practice, and IANA could have been the central authority to register filename extensions along with their possible security holes. Instead, MIME developers arrogantly claimed that OSes should be modified to support MIME content-type (and even that text files on OSes should use MIME format to support other tags such as charset). The rest of us righteously ignored it. Words that come to mind to describe this include: Willful, Criminal, and Negligence. Exactly. But, see above. Masataka Ohta
Re: where the indirection layer belongs
Keith; (regarding the complexity of putting a general-purpose layer to survive address changes between L4 and L7) It is not merely complex but also useless to have such a layer. The basic problem of the approach of having such a layer is that only the application layer has proper knowledge of the proper timeout value to try other addresses. So, the issue can be solved only by involving the application layer, though most applications over TCP can be satisfied by a default timeout of TCP. In either case, there is no room for an additional layer. I documented it long ago (April 2000) in draft-ohta-e2e-multihoming-*.txt The Architecture of End to End Multihoming (the most current version is 05): To support the end to end multihoming, no change is necessary on routing protocols. Instead, APIs and applications must be modified to detect and react against the loss of connection. In the case of TCP, where there is a network wide de facto agreement on the semantics (timeout period) of the loss of connectivity, most of the work can be done by the kernel code at the transport layer, though some timing may be adjusted for some applications. However, in general, the condition of loss of connectivity varies application by application, so the multihoming must be controlled directly by application programs. Masataka Ohta PS Layering is abstraction and not indirection at all.
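The draft's point, that the application supplies the timeout and drives the retry over other addresses, can be sketched without any new layer. The connect function is injected so the sketch is self-contained; in real code it would wrap a socket connect with the given timeout:

```python
def connect_multihomed(addresses, connect_fn, timeout):
    """Application-driven end-to-end multihoming: try each of the peer's
    addresses with an application-chosen timeout; return the first that
    succeeds, or None if all fail. No layer between L4 and L7 needed."""
    for addr in addresses:
        # The application, not a generic middle layer, decides what
        # "loss of connectivity" means via its choice of timeout.
        if connect_fn(addr, timeout):
            return addr
    return None

# Fake connect for demonstration: only the second address is reachable.
reachable = {"2001:db8::2"}
fake_connect = lambda addr, timeout: addr in reachable

assert connect_multihomed(["2001:db8::1", "2001:db8::2"],
                          fake_connect, timeout=3.0) == "2001:db8::2"
```

For most TCP applications `timeout` would just be TCP's default; an interactive application could pass a much shorter value, which is exactly the per-application knowledge the post says a generic intermediate layer cannot have.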
Re: re the plenary discussion on partial checksums
Gruesse, Carsten; How would an app know to set this bit? The problem is that different L2s will have different likelihoods of corruption; you may decide that it's safe to set the bit on Ethernet, but not on 802.11*. Aah, there's the confusion. The apps we have in mind would think that it is pointless (but harmless) to set the bit on Ethernet, but would be quite interested in setting it on 802.11*. If the application is voice and the datalink is SONET, it may be fine to ignore some errors. However, the problem, in general, is that packets are often corrupted so much as to be almost meaningless, even if the content is voice. Still, it may be possible to design a datalink protocol to ignore a few bits of errors in packets, but not beyond. But, then, it is so easy to have ECC fix such errors that no sane person would design such a protocol. in the middle. (Reducing packet sizes to achieve a similar effect is highly counterproductive on media with a very high per-packet overhead such as 802.11.) Of course, 802.11 has retransmissions, so maybe this is a bad example, but it does illustrate the point. The problem of 802.11 is that packets are often corrupted a lot by collisions (from hidden terminals). *) e.g., in order to salvage half of a video packet that got an error You need an extra checksum, which consumes bandwidth. Note that, in the multicast case, you can't behave adaptively to all the receivers, so all the receivers suffer from the bandwidth loss. Masataka Ohta
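The "partial checksum" being debated works like UDP-Lite (RFC 3828): the checksum covers only a prefix of the packet, so bit errors in the uncovered tail (e.g. voice samples) do not cause the packet to be dropped. A minimal sketch, with a simplified 4-byte "header" standing in for the sensitive part:

```python
def ones_complement_sum(data):
    """16-bit one's complement sum, as used by the Internet checksum."""
    s = 0
    for i in range(0, len(data) - 1, 2):
        s += (data[i] << 8) | data[i + 1]
    if len(data) % 2:          # pad an odd trailing byte with zero
        s += data[-1] << 8
    while s >> 16:             # fold carries back in
        s = (s & 0xFFFF) + (s >> 16)
    return s

def partial_checksum(packet, coverage):
    """Checksum only the first `coverage` bytes of the packet."""
    return (~ones_complement_sum(packet[:coverage])) & 0xFFFF

pkt = bytearray(b"HDR!" + b"\x10" * 8)   # 4-byte "header" + payload
c = partial_checksum(pkt, 4)
pkt[6] ^= 0xFF                           # corrupt a payload byte
assert partial_checksum(pkt, 4) == c     # still verifies: tail not covered
```

This also makes the post's bandwidth objection concrete: protecting a second region of the packet would require a second checksum field, and in the multicast case its cost is paid by every receiver whether they want it or not.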
Re: Multicast Last Mile BOF report
Keith; ] and 500 gadgets do not make a technology adopted when it has no ] business model folk can understand. one business model that might be understandable is: you should support multicast if/when it saves you enough bandwidth (over the same content being sent over separate unicast streams) to make it worth your cost. It means it saves nothing for receivers, so they don't want to receive a multicast stream. Then, senders are not motivated to send a multicast stream. Also, the bandwidth saving is beneficial to ISPs, so ISPs supporting multicast should charge their customers less. That's not bad if ISPs are competitive and backbone bandwidth is expensive. Masataka Ohta
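The bandwidth argument both sides are invoking is simple arithmetic on the sender's link: unicast carries one copy per receiver, multicast carries one copy total. A trivial sketch with illustrative numbers:

```python
def sender_link_cost(stream_kbps, receivers, multicast):
    """Bandwidth on the sender's link for one stream to n receivers."""
    return stream_kbps if multicast else stream_kbps * receivers

# A 300 kbps stream to 1000 receivers: the saving accrues to the
# sender and the ISPs along the way, not to any individual receiver,
# which is exactly the incentive problem the post describes.
assert sender_link_cost(300, 1000, multicast=False) == 300000
assert sender_link_cost(300, 1000, multicast=True) == 300
```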
Re: Multicast Last Mile BOF report
Keith; ] one business model that might be understandable is: you should support ] multicast if/when it saves you enough bandwidth (over the same content ] being sent over separate unicast streams) to make it worth your cost. ] ] It means it saves nothing for receivers, so they don't want to ] receive a multicast stream. ] ] Then, senders are not motivated to send a multicast stream. maybe the ISPs supporting multicast could prioritize that traffic, thus providing better service on multicast than unicast, thus providing an incentive for receivers to use multicast over unicast. Prioritization is orthogonal to the uni/multicast issue. Users will favour those ISPs which prioritize unicast, and pay more money. Other users may use the prioritized multicast for 1:1 communication. Masataka Ohta
Re: the end-to-end name problem
Simon; We all know what the end-to-end principle means. It's (reportedly) THE guiding principle of the IETF, and THE guiding principle of IETF design decisions. The problem I am trying to demonstrate with this dictionary analysis, is that the average non-indoctrinated person needs to travel a long way from the simple word end in order to get to the definition that the IETF actually means. Wrong. It is the principle not of the IETF but of the Internet. As you should recognize, most, if not all, recent IETF activity is against the principle by which the Internet is operated. A well organized standardization body such as the IETF is an intermediate intelligent entity between end users and the Internet. That is, according to the principle, the intermediate intelligent entity of the IETF MUST be removed. Masataka Ohta
Re: Network Working Group
At 01:27 PM 3/10/2003 -0800, Bob Braden wrote: The archive of early NWG discussions is the RFC series itself. I started to reply saying that, but I think he's referring to a pointer to the working group's discussions. A pointer to the working group's discussions??? NWG could be replaced by IETF or ISOC, but... Are you all assuming that individuals can not submit RFCs? Masataka Ohta
Re: Multihoming in IPv6
Perry; Iljitsch van Beijnum [EMAIL PROTECTED] writes: As it looks like the long term solution will be some kind of identifier/locator separation which will have a huge impact on all aspects of IPv6, I think this topic deserves attention from a wider audience than it's getting now. Identifier/locator separation has been a topic of conversation at the IETF for at least the last decade if not longer. In spite of this continuous interest, an actual fruitful proposal has yet to arrive. A problem was that the multi6 WG was required to first produce a requirements document. It is only recently that so many people have recognized that this is not a productive approach. I highly recommend attempting to do a fully worked proposal complete with documents before bringing up the topic in a broad audience -- it will increase your credibility markedly. Meanwhile, I wrote an ID, draft-ohta-e2e-multihoming-03.txt, on the general architecture of scalable multihoming. Then, we designed and implemented a solution based on the LIN6 proposal. A paper on it is available at: http://www.lab1.kuis.kyoto-u.ac.jp/~arifumi/paper/saint2003html/ Masataka Ohta
Re: Datagram? Packet? (was : APEX)
Valdis; I'm routing based on circuit ID. Current RSVP does not. Like I said - RSVP is *NOT* circuit based. You don't have to make the confusion on terminology even worse by insisting on yours. Circuit ID is introduced by Noel w.r.t. ATM and you can use your favourite wording for RSVP. You misunderstand the problem. Not in the slightest. I understand RSVP - what I objected to was the statement that RSVP is circuit based. As Noel said: : It's easy to imagine an ATM-like system : in which circuit ID's are global in scope. we are discussing circuit-ID-like things of ATM-like systems. In RSVP terminology, flows are identified by addresses and port numbers, which is the circuit-ID-like thing. That RSVP does not have a circuit-like thing is fine, as long as there is a circuit-ID-like thing. The problem is that a protocol to tweak the control of an underlying bandwidth allocation is pretty useless if there isn't enough bandwidth to tweak. I'm aware of that. Then, there is no reason to insist on the detailed terminology of useless protocols such as ATM and RSVP. Masataka Ohta
Re: TCP/IP Terms
Michel; In terms of design, if you do TCP/IP *only* design, the TCP/IP model is probably enough. However, the Internet is not only TCP/IP. Carriers, for example, don't care much if their fiber transports TCP/IP or IPX or voice or video or GigE. No. Anything at or above the transport layer is internal to end systems and has nothing to do with networking or network protocols. Separation of the transport and application layers is overkill for a best effort network, though it may help standardize the internal design of end systems, such that anything supported by the kernel belongs to the transport layer. You can check the reality that the application and transport areas of the IETF are now almost identical, though, historically, the transport area was working on protocols likely to be implemented in the kernel. In addition, defining a thin transport layer may be useful over a hypothetical port-number-aware network such as one supporting RSVP. However, forcibly defining a session-layer-aware network is a layer violation. And, there are complex multi-protocol networks that a) don't use only TCP/IP and b) would not be able to use the TCP/IP model anyway because it's too simple. Unless you are trying to standardize the internal design of application layer gateways, which is like standardizing the way of structured programming and is hopeless, the separation of the upper layers is meaningless. The bottom line is: lots of people are going to continue using the OSI model. We don't need two different models. I have no difficulty teaching my students, even though I often forget the names of the two OSI layers between transport and application. In writing this mail, I can only remember one: session. Newcomers don't need two different models. Masataka Ohta
Re: Datagram? Packet? (was : APEX)
Valdis; In this thread, as Noel said: : It's easy to imagine an ATM-like system : in which circuit ID's are global in scope. the circuit ID does not necessarily imply special routing. If you're not routing based on circuit ID, why are you bothering to have one? I'm routing based on circuit ID. Current RSVP does not. RSVP is bothering to have one for prioritized queueing. However, you should also be aware that RSVP is virtually useless without QoS routing. Yes, a protocol to tweak the control of an underlying XYZ is pretty useless if there isn't an XYZ to tweak... You're overlooking the basic distinction between a circuit and RSVP - if something happens along the way to break the previously established circuit, the circuit is *BROKEN*, and nothing moves until it is either re-established or re-negotiated. You misunderstand the problem. The problem is that a protocol to tweak the control of an underlying bandwidth allocation is pretty useless if there isn't enough bandwidth to tweak. You can't reserve 1Gbps on a best effort path over a T1 circuit, even if the T1 circuit is not broken. Masataka Ohta
Re: Datagram? Packet? (was : APEX)
Noel; From: Caitlin Bestler [EMAIL PROTECTED] Given the source interface, the *meaning* of an IP header is not supposed to be dependent on the routing tables. .. By contrast, the meaning of an ATM circuit is dependent on the context in which it was established. There is no expectation that there is any meaning to this circuit identifier beyond those imparted when the circuit was created. Yes, but that's just a minor engineering decision, i.e. the use of a That's the essential requirement for networks capable of best effort service: to let all the packets have globally routable addresses. non-global namespace for circuit ID's. It's easy to imagine an ATM-like system in which circuit ID's are global in scope. RSVP is a case. The real crucial *architectural* difference is in the fact that there is per-flow state, along with the need to set up state before the packets can flow. RSVP establishes the per-flow state before the packets can flow. It is just a minor engineering decision to allow an optional circuit switched service over a best-effort-capable network. Masataka Ohta
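The "circuit-ID-like thing" being argued about here is RSVP's flow identification: per-flow state keyed on globally scoped fields already present in every packet (addresses and ports), installed before data arrives. A minimal sketch, with hypothetical field names rather than real RSVP message formats:

```python
flow_state = {}

def flow_key(pkt):
    """The globally scoped 5-tuple that plays the circuit-ID-like role."""
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

def reserve(pkt_template, bandwidth):
    """RESV-like step: install per-flow state before packets can flow."""
    flow_state[flow_key(pkt_template)] = {"bandwidth": bandwidth}

def classify(pkt):
    """Data path: look up the reservation; best effort if none exists."""
    return flow_state.get(flow_key(pkt), {"bandwidth": "best-effort"})

p = {"src": "2001:db8::1", "dst": "2001:db8::2",
     "sport": 5004, "dport": 5004, "proto": "UDP"}
reserve(p, "64kbps")
assert classify(p)["bandwidth"] == "64kbps"
```

Unlike an ATM VC identifier, the key is global and self-describing, so a packet that misses the per-flow state can still be forwarded best effort instead of being dropped.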
Re: Datagram? Packet? (was : APEX)
Lixia; RSVP establishes the per-flow state before the packets can flow. I missed Ohta-san's original post; thanks to Valdis for catching this incorrect statement. IP packets can flow anytime. Fixing your statement: IP packets can be forwarded anytime. To say flow of packets, not of flows, is incorrect. Masataka Ohta
Re: Datagram? Packet? (was : APEX)
Valdis; RSVP establishes the per-flow state before the packets can flow. It is just a minor engineering decision to allow an optional circuit switched service over a best-effort-capable network. 1) I wasn't aware that RSVP caused packets to be routed according to a flow ID contained in the packet rather than the IP address in the packet. Huh? In this thread, as Noel said: : It's easy to imagine an ATM-like system : in which circuit ID's are global in scope. the circuit ID does not necessarily imply special routing. However, you should also be aware that RSVP is virtually useless without QoS routing. Masataka Ohta
Re: Datagram? Packet? (was : APEX)
Lloyd; At 05:44 PM 9/24/2002 +0200, TOMSON ERIC wrote: Last, while I definitely, clearly prefer calling Layer 2 data units FRAMES, I sometimes [over-]simplify the terminology of Layer 3 by making the following distinction : a datagram is the data unit before fragmentation ; a packet is a piece of a fragmented datagram. :^) A fragment of a datagram is itself a datagram; after you re-assemble them, the result is still a datagram. A datagram is self-describing; full source and destination. That's the essential property. That is, an ATM cell or an X.25 packet is actually not a datagram, because it does not have full information on its source or destination. It, instead, has full information on a VC, forwarding for which relies on rather static information stored in intermediate systems at the time of signalling. A fragment (IPv4 fragment) may not be. An IPv4 fragment, however, has full information on the source and destination hosts and is a datagram, though it does not have full information on the source and destination ports, which is not an internetworking issue. Masataka Ohta
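The last point above can be made concrete: every IPv4 fragment carries the full source and destination addresses (so it is self-describing, hence a datagram), while the transport ports travel only in the first fragment's payload. A simplified sketch; the field names are illustrative assumptions, not a full IPv4 header, and real fragment offsets are in 8-byte units:

```python
def fragment(src, dst, payload, mtu):
    """Split a payload into fragments, each a self-describing datagram."""
    frags, offset = [], 0
    while offset < len(payload):
        chunk = payload[offset:offset + mtu]
        frags.append({"src": src, "dst": dst,            # in EVERY fragment
                      "offset": offset,
                      "mf": offset + mtu < len(payload),  # more-fragments bit
                      "data": chunk})
        offset += mtu
    return frags

# The transport "ports" sit at the front of the payload, so only the
# first fragment carries them.
frags = fragment("192.0.2.1", "192.0.2.2", b"PORTS" + b"x" * 20, mtu=8)
assert all(f["src"] == "192.0.2.1" and f["dst"] == "192.0.2.2" for f in frags)
assert b"PORTS" in frags[0]["data"] and b"PORTS" not in frags[1]["data"]
```

Each fragment can thus be routed independently with no per-flow state, unlike an ATM cell, whose VC identifier is meaningful only to the switches that took part in signalling.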
Re: 802.11b access in Tokyo and Kyoto with IP mobility
Fred; For the record, I'm sitting at this instant in Tokyo Station, and am on my way from Narita to Yokohama. I am sitting in the green car, and I accessed the appropriate web page. Note that that is an experimental service of JR, which itojun mentioned, and has nothing to do with the MIS service I mentioned, whose subject line you copied. Anyway, I have wonderful 802.11 connectivity, and I have an IP address. Whether that means I can use the Internet is another question. On the parts of the track where the connectivity is there, we see ping round trip delays varying from 380 ms to over four seconds. There are fairly large parts of the track where the NTT DoCoMo 3G data connectivity appears to simply not be there - especially when in concrete tubes and ditches, but also on places with open track. As you know, JR uses not IP mobility but telephone network mobility. So, it is a problem of a plain old telephone system, so-called 3G. :-) So I think here the term seamless, when applied to connectivity, doesn't really apply. Direct WLAN service, today, has a much smaller service area. However, in the future, inexpensive WLAN base stations are essential to cover small holes in connectivity. Masataka Ohta
Re: Unicode is so flawed that 7 or 8 bit encoding is not an issue
James; As you could have seen, on the IETF mailing list, Harald and I have, at least, agreed that, if you use unicode based encoding, local context (or locale) must be carried out of band Few will disagree (including me) that using Unicode to do localization is almost impossible without locale context. Huh? No one said such a thing. What is agreed is that, to use unicode, local context must be supplied out of band. So, could you explain how the context is supplied with the current IDN spec? But I am touching on a sensitive topic about what you considered as I18N L10N and what the rest of the world thinks. Totally irrelevant. But, anyway, the discussion so far on the IETF list is enough to deny IDN. I am not aware there was an IETF wide last call on IDN yet. To deny IDN, the IESG can simply say IDN is disbanded. No last call is necessary. I have found a theory to deny PKI and to explain why public key cryptography is not so popular despite the efforts of ISO, which will destroy the entire business of a company or companies owning several important TLDs. Very interesting. Could you share the theory, privately if you prefer? I will post it later if I have time. Masataka Ohta
Re: I don't want to be facing 8-bit bugs in 2013
Kre; | IDNA does _not_ work, because Unicode does not work in International | context. This argument is bogus, and always has been. If (and where) unicode is defective, the right thing to do is to fix unicode. Unicode is not usable in international context. There is no Unicode implementation work in international context. Unicode is usable in some local contexts. There is some Unicode implementation work in local contexts. However, the context information must be supplied out of band. And, the out-of-band information is equivalent to charset information, regardless of whether you call it charset or not. So, stop arguing against unicode (10646) - just fix any problems it has. The fix is to supply context information out of band to specify which Unicode-based local character set to use. With MIME, it is doable by using different charset names for different local character sets. See, for example, RFC 1815. As for IDN, it can't just say use charset of utf-7 or use charset of utf-8. IDN can say, for Japanese, use charset of utf-7-japanese. Or, if you insist on not distinguishing different local character sets by MIME charset names, IDN can say use charset of utf-7, but, for Japanese, use the Japan-local version of utf-7 and somehow specify how a name is used for Japanese. Anyway, with the fix, there is no reason to prefer Unicode-based local character sets, which are not widely used today, over existing local character sets already used worldwide. Masataka Ohta
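Under MIME, the proposal above amounts to one charset label per Unicode-based local character set, carried in the Content-Type header. A minimal Python sketch, assuming a hypothetical, unregistered charset name such as "utf-7-japanese" (the name is an illustration, not an IANA-registered charset):

```python
# Sketch: carrying the local-context label in the MIME charset
# parameter, as the text proposes. "utf-7-japanese" is a hypothetical
# charset name used only to illustrate the idea.
from email.message import EmailMessage

def tag_local_context(text: str, locale_charset: str) -> bytes:
    """Build a message whose charset parameter names the local variant."""
    msg = EmailMessage()
    msg.set_content(text)  # body is plain ASCII here
    # Replace the auto-generated charset label with the local one.
    msg.replace_header("Content-Type",
                       f'text/plain; charset="{locale_charset}"')
    return bytes(msg)

raw = tag_local_context("IDN tte zenzen dame", "utf-7-japanese")
```

A receiver would then select the Japanese-local interpretation of the encoded text from the charset label alone, with no in-band tagging.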
Re: I don't want to be facing 8-bit bugs in 2013
Harald; [EMAIL PROTECTED] wrote: Unicode is usable in some local context. Agreed. Note that some is changing to many as time goes on. Irrelevant, because some contexts are not compatible. The point here is that there can be no universal context. There is some unicode implementation work in local contexts. However, the context information must be supplied out of band. Agreed. Then, how can you provide the information with IDN? And, the out-of-band information is equivalent to charset information, regardless of whether you call it charset or not. Do not agree. For most values of what we currently have registered as charset, it is not sufficient to identify the context. That is a problem of current and past charset reviewers, including you. Therefore, depending on charset to identify context is not only useless, but actively harmful. Agreed. The ISO 2022 escape sequence is the way to go. My opinion, which I stated in RFC 1766, and have found no reason to change. It was already denied by real-world examples. Masataka Ohta
Re: I don't want to be facing 8-bit bugs in 2013
Erkki I. Kolehmainen; The use of local character sets (encoding) is doomed, particularly for ww information interchange. Interestingly enough, ww information interchange is working very well with local character sets. The reason is that only people sharing a language, a scripting system and a character encoding system join each exchange, regardless of whether it is ww or intranational. For example, ww IETF communication is in English, Latin script and ASCII. Introduction of ISO-8859-1 or Unicode does not make IETF use Finnish. Your attempt to put in ISO-8859-1 characters is not acceptable to me, and your mail is filtered to pure ASCII by my mailer, which is fair because many of us have no way to input non-ASCII ISO-8859-1 characters. Masataka Ohta
Unicode is so flawed that 7 or 8 bit encoding is not an issue
James; Trying to reply to your mail, my mailer says: [Charset Windows-1252 unsupported, skipping...] so, could you learn not to be Microsoft-centric and to use a proper charset for the international discussion of IETF? While the discussion of the use of various character sets is an interesting topic, one which is also of interest to the IDN WG, such prolonged discussions are better carried out in a forum which is dedicated to this, such as [EMAIL PROTECTED], a list which is formed to talk about the generic problem of I18N and L10N in IETF, and not IDN. I believe IDN, or the bogosity of it, is worth the generic attention of IETF. Please bring it over to the other list and when/if there is a conclusion, please keep the IDN informed. As you could have seen, on the IETF mailing list, Harald and I have, at least, agreed that, if you use Unicode-based encoding, local context (or locale) must be carried out of band. I think Harald's language tag is highly semantical and is not useful for purely lexical (not even syntactic) issues such as charset disambiguation or space elimination for format=flowed. For example, the following Japanese text in Romaji script: IDN tte zenzen dame follows, as you can easily guess, the usual folding rules for Latin-based scripts. But, anyway, the discussion so far on the IETF list is enough to deny IDN. I'm happy to discontinue the thread, then. Masataka Ohta PS I have found a theory to deny PKI and to explain why public key cryptography is not so popular despite the efforts of ISO, which will destroy the entire business of a company or companies owning several important TLDs. So, don't bother to say that there are so many so-called-international-but-actually-local domain names registered.
Re: Netmeeting - NAT issue
Keith; I think you missed the important point. It's not the NAT vendors, it's the ISPs. I'll grant that ISPs have something to do with it. But there is a shortage of IPv4 addresses, so it's not as if anybody can have as many as they want. Wrong. There actually is no shortage of IPv4 addresses. The primary reason why NAT is so popular is that NICs do not offer IPv4 addresses promptly, because NICs feared a shortage of IPv4 addresses. The wrong policy on IPv4 address assignment made NAT profitable. Masataka Ohta
Re: I don't want to be facing 8-bit bugs in 2013
D. J. Bernstein; Paul Robinson writes: You tell him that although it's gobbledygook to people without greek alphabet support, it will still work. It's not convenient, but it WILL work. Guaranteed. False. IDNA does _not_ work. IDNA causes interoperability failures. IDNA does _not_ work, because Unicode does not work in international context. People who say that IDN is purely a DNS issue are confused. It's purely a cultural issue. In fact, the cost of fixing UTF-8 displays is much _smaller_ than the cost of fixing IDNA displays. UTF-8 has been around for many years, has built up incredible momentum (as illustrated by RFC 2277), and already works in a huge number of programs. In international context, it is technically impossible to properly display Unicode characters. No such implementation exists. While some implementations work in some localized contexts, a local character set serves the context better. Masataka Ohta
Re: [idn] WG last call summary
D. J. Bernstein; What Seng fails to do is compare IDNA to the status quo. Sure, the status quo forces sites to stick to ASCII, which is visually unpleasant for many users. Though I, personally, can read, for example, Hangul (the Korean alphabet), for almost all international users, it is much, much more unpleasant to be shown a domain name in Hangul characters than to be shown the Latin-character equivalent of it. Though I, personally, can read many Chinese characters, for almost all Japanese users, it is much, much more unpleasant to be shown a domain name in Chinese characters than to be shown the Latin-character equivalent of it. Note that the Kanji characters of Japan are not Chinese characters any more, just as Greek characters are not Latin characters. So, even if your software can display Hangul and/or Chinese characters, it does not make the situation better. Masataka Ohta
Re: Multicast
Ali; Hello, First the CBT protocol was created to use shared-tree solutions because DVMRP and the other dense-mode protocols weren't scalable. There were many problems with CBT (which is bidirectional), so PIM-SM was created, which provides some switching (between shared tree and source tree). And after that there are some discussions about bidirectional PIM, which is like CBT. Are we in a circle here or what?? If there were some progress somewhere, you might be able to say "circle". But, in IETF, no proposal makes any sense, so there is absolutely no movement. Here, you are at a point behind the start line. SSM is no different. The real difficulty of multicast is in the various relationships to resource reservation. I've heard that ISPs, which were expecting to receive a special charge for multicast service, are now bitterly recognizing one aspect of the relationship: even if multicast is free, customers do not bother to use multicast if they pay a flat rate to receive any amount of unicast traffic. Those serving the customers simply increase the number and capacity of servers, of course. Masataka Ohta
Re: Multicast
Ali; are you suggesting that there is no multicast problem :-) There is none remaining. and that is the ordinary unicast problems related to resource reservation and by expanding the capacity of servers and also of the link bandwidth that problem will be solved. :-) I'm saying it was simple and easy to solve the resource reservation and multicast problems at once (www.real-internet.org). You can see that RSVP has a fatal flaw in trying to accommodate all the possible and impossible multicast protocols. Another interesting aspect of the relationship is that each multicast group consumes a precious resource, a routing table entry, so multicast is a resource-reserving communication. Masataka Ohta
Re: Multicast
Kobayashi-san; Does the ISP talking with you intend to charge the sender, the receiver or both? I guess some ISPs would not like to start multicast service, even if subscribers request it. They might sometimes say as an excuse what you heard. I think there are a lot of ways to make a special charge for a special service without a resource reservation protocol as you think, even if you say "It is not a scalable way" :) You completely miss the point. Because best-effort service is flat rated, receivers are not motivated to use multicast. Senders learned to have more servers. See my column article in the most recent IPSJ magazine for details. That's all. Masataka Ohta
Re: Multicast
Sean; | Because best effort service is flat rated, receivers are not motivated | to use multicast. Senders learned to have more servers. Yes, but some senders would like to send a bit less :-) By definition, the majority is the receivers. ;-) Masataka Ohta
Re: Multicast
Sean; | Is that the draft: Simple Resource ReSerVation Protocol (SRSVP) (in English) | or is there another one?? | I didn't find it in the internet-drafts at the IETF !!! Good, RSVP for multicast is an insane idea. There is a finite but nearly zero chance that an ISP will ever squeeze more money out of someone by promising them via RSVP that their multicast packets will make it through from source to destination, whether the someone is a source or a listener. ISPs can squeeze more money with resource-reserved unicast. ISPs can squeeze less money with resource-reserved multicast, which is why, with resource-reserved communication, sources and listeners are motivated to use multicast. Then, with free competition between ISPs, some ISP is motivated to offer multicast. However, if you demonstrate compliance with an SLA that works like many unicast ones (x% packet loss, y ms delay, from network edge to network edge), you may be able to charge more for "best efforts" multicast, and pick up customers who are frustrated with RSVP stuff. To make it operationally scale without spending an infinite amount of money on network operators (:-), we need a fully automated signaling protocol. "Simple" RSVP: say yes always, or say no always. Choose one. RSVP is complex partly because of the filter spec, which is unnecessary with shared-tree unidirectional PIM, and partly because of the half-hearted support of half-reliable link-local multicast, which is unnecessary as, following the CATENET model, the internet should not include a large link layer, which the ATM network dreamed of being. Masataka Ohta
Re: NATs *ARE* evil!
Itojun; You do not consider IPv6 an option? ipv6 is working just fine even here at the IETF49 venue; it's so much more convenient than IPv4, for a couple of reasons. We can't use IPv6 until multihoming issues are properly solved and the global routing table size and the number of ASes are controlled to be below a reasonable upper bound. IETF is intensively working on the issue: a new WG (multi6) will be created to draft a framework document. So, you can expect a lengthy framework document effectively stating nothing more than that the issue is hard, within a year or two. Well, I have a solution. But, the last time I tried this kind of thing (proposed that subscribers be assigned /48 IPv6 address ranges or renumbering and other things would be too hard, just before IPv6 went to PS), it was rejected with the reason that it was too late. As you can see, 5 years were wasted until IAB and IESG made the same statement that assignments should be /48. Masataka Ohta
Re: Internationalization and the IETF
Kyle; There seems to be a debate to split "DNS" from "Directory" services, whereas, in the long term, it is inevitable that DNS will merge with Directory services, even if current technology isn't that way. Huh? URLs are the result of such a merge. URLs have an ASCII domain name part followed by a path and search part. You can put any local characters with any local encoding scheme into the path or search part of a URL. Browsers further encode them with % escapes. Localized web servers that recognize such URLs are available. That's all, and it's working. Masataka Ohta
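The mechanism described above can be shown in a few lines of Python: the domain part stays ASCII, while local characters in the path travel as %-escapes. The host name and path here are hypothetical examples:

```python
# Sketch of the URL mechanism above: ASCII host, local characters in
# the path carried via percent-encoding. Host and path are made up.
from urllib.parse import quote, unquote

host = "www.example.jp"       # ASCII domain name part
path = "/案内/index.html"      # local (Japanese) characters in the path

url = f"http://{host}{quote(path)}"
# The wire form is pure ASCII; a localized server decodes it back.
print(url)   # http://www.example.jp/%E6%A1%88%E5%86%85/index.html
```

The browser-side %-escaping and the server-side decoding are symmetric, so no change to DNS itself is involved.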
Re: Will Language Wars Balkanize the Web?
Keith; you missed it. Suppose you could not exchange in commerce with a person of a given nationality, not because you did not have a language in common with him or her, but because your system could not interpret his or her name. That would mean that you could not spend money in that person's direction, because you could not communicate with him or her. And it means that person is at a disadvantage in your marketspace, and that it's not your problem. why in the world do people think they can justify or not justify actions based on whether something is an advantage/disadvantage in some "marketspace"? They can justify them locally within local marketspaces, of course. However, they can't justify calling them internationalization. Masataka Ohta
Re: Internationalization and the IETF (Re: Will Language Wars Balkanizethe Web?)
Vernon; MIME character sets are an example of a battle fought and won. When MIME is used to pass special forms among people whose common understandings include more or other than ASCII, MIME is a battle fought and won. FYI, we Japanese have, since long before MIME, been and still are exchanging local characters purely within the framework of RFCs 821 and 822. See RFC 1468. MIME is a good *localization* mechanism, either in geography or culture. No. ISO 2022 gives the good localization mechanism. Unlike MIME, you can use it, and we are using it, in UNIX files without mail headers, file types, charset tagging or POSIX locales. ISO 2022 gives proper localization information. It can be used in internationalized computer files to store international characters and on internationalized computer terminals to display international characters. However, even with ISO 2022, it is meaningless to "internationalize" domain names, of course, because ISO 2022 does not "internationalize" people using domain names. The only problem of ISO 2022 is that it is too complex, having too many optional features beyond the localization. So, proper profiling, such as that specified in RFC 1468, is essential. Then, ISO 10646 *simplified* ISO 2022 by removing the essential feature of localization while keeping all the other complexities, many of which are now, though ignored, mandated. MIME charset may be useful for ISO 10646. MIME charset can supply the localization information to ISO 10646, as I demonstrated, as a silly joke, in RFC 1815. Masataka Ohta PS Note that the MIME charsets of "ISO-8859-*" also remove the essential but optional feature of ISO 2022 to give localization information inline, which makes MIME useful for "ISO-8859-*".
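The inline localization that ISO 2022 provides can be seen directly in ISO-2022-JP, the RFC 1468 profile mentioned above: the designation escape sequences are embedded in the byte stream itself, so no external header or locale is needed. A small Python illustration:

```python
# ISO-2022-JP (RFC 1468 profile of ISO 2022): character-set switches
# are carried inline as escape sequences, and the whole stream is 7-bit.
text = "日本語"
encoded = text.encode("iso2022_jp")

# ESC $ B designates JIS X 0208; ESC ( B switches back to ASCII.
assert encoded.startswith(b"\x1b$B")
assert encoded.endswith(b"\x1b(B")
assert all(b < 0x80 for b in encoded)  # fits RFC 821/822 as-is
```

This is why such text could pass through pre-MIME RFC 821/822 mail: the localization information travels with the bytes and everything stays 7-bit.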
Re: Will Language Wars Balkanize the Web?
Claus; vint cerf [EMAIL PROTECTED] schrieb/wrote: Incorporating other character sets without deep technical consideration will risk the inestimable value of interworking across the Internet. It CAN be done but there is a great deal of work to make it function properly. How do I type Chinese characters? I can't. So I can't write mail to someone whose email address contains non-ASCII characters if I don't already have the address in electronic form (e.g. within a webpage). Right. And, if a mailto URL is within a webpage with a Chinese-character anchor, it does not matter whether a mail address in the URL consists of pure ASCII characters or not. It's worth nothing that my computer could handle the address if I can't. You properly understand that the current ASCII DNS is already fully internationalized. Masataka Ohta
Re: Will Language Wars Balkanize the Web?
John; (Can we please move this discussion to the IDN list, where it belongs?) The point is that the IDN WG is purposeless and is wrong to exist. Of course, it is a waste of time to discuss it on the IDN list. So, the only reasonable reaction is to ignore it (I dropped the improper CC:). The only necessary discussion on domain names, IF ANY, is localization issues, for which there is no specific WG of IETF. (iii) Regardless of how the names in the DNS are coded, it is important to have analogies to "two-sided business cards". A typical business card of a Japanese person has Chinese characters. When we internationalize it, we use the other side to put a Latin-character version. As we already have a fully internationalized DNS with Latin characters, Chinese characters in DNS are localization against internationalization. And, because of the registration issue, there is no plausible way to impose a requirement that every host (or other DNS entry) have a name in ASCII if it has a name in some other script: people and hosts not visible outside their own countries may not care enough to go to the trouble. Those are local issues. If people want local names, let them have them under local domains, with all the local conventions on encoding and everything. The administrators of the local domains may or may not force people to have additional internationalized domain names. Note that local, here, means culturally (not necessarily geographically) local, so ccTLDs may or may not be the local domains. But, it can be said that gTLDs are not a proper place to put local names. Masataka Ohta
Re: Will Language Wars Balkanize the Web?
Ran; At 02:53 05/12/00, Martin J. Duerst wrote: At 00/12/04 10:42 -0800, Christian Huitema wrote: So, at a minimum, we need an IETF specification on how to detect that a domain name part is using a non-ASCII encoding, so that DNS servers don't get lost. Why not just use UTF-8? It is an encoding of the UCS (aka Unicode/ISO 10646), the encoding is fully compatible with ASCII (all 7-bit bytes are ASCII and only ASCII), and it is IETF policy (RFC 2277). All, Please MOVE this conversation to the IDN WG list, where it would be in scope. Btw, this specific question has been raised and answered several times now on the IDN list. I encourage folks to read the sundry IDN proposals before diving in any deeper here. IDN is the perfect place for repeated silly conversations like the above. But it is not the place to discuss localized domain names, which have nothing to do with internationalization. Masataka Ohta
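As an aside on the detection question quoted above: for UTF-8 the test really is trivial, since every byte of a multibyte UTF-8 sequence has the high bit set, so a label is plain ASCII iff all its bytes are below 0x80. A one-function sketch:

```python
# Detecting whether a DNS label uses a non-ASCII encoding, per the
# quoted UTF-8 property: all 7-bit bytes are ASCII and only ASCII.
def label_is_ascii(label: bytes) -> bool:
    """True iff the label contains no high-bit (non-ASCII) bytes."""
    return all(b < 0x80 for b in label)

print(label_is_ascii(b"example"))           # True
print(label_is_ascii("例".encode("utf-8")))  # False: b'\xe4\xbe\x8b'
```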
Re: Will Language Wars Balkanize the Web?
Dave; Thank you. I was hoping someone would point out the support for parallel operation so we could go further down that path. As you note, it seems to be the closest to providing local/global support already. Silly comparison. An efficient postal system works with numbers, the so-called zip code. A postal address with various characters needs human intervention for complex matching and is similar not to DNS but to search engines. Masataka Ohta
Re: Will Language Wars Balkanize the Web?
Graham; Leaving aside the issues of competing registries, touched upon in that article, I had been wondering with the formation of the IDN WG how I18N would affect cross-character-type-boundary Internet activities. Nothing. Cross-character-type-boundary is a pure localization issue and has nothing to do with people wrongly working on I18N. PS: I think it is without doubt that it is a Good Thing that we make efforts to internationalize protocols; If only you understood what "internationalize protocols" means. ASCII (Latin letters, digits and hyphen) characters are the only characters internationally recognizable by so many people. Masataka Ohta
Re: mobile orthogonal to wide-area wireless
James; The prevailing view seems to be that wide-area wireless devices need to be "mobile" in the sense that they are able to move from one network to another. This is not the case, and maybe not even desirable. I believe that this view has led to easily avoidable delays in wireless internet services. Why do you think "mobile" delays wireless Internet services? IP mobility protocol is out there and AAA is a problem of wireless, not mobile, Internet services. Masataka Ohta
Re: wireless Internet in the U.S.
James; This is interesting: http://www.whitehouse.gov/library/hot_releases/October_13_2000_2.html It's a US local problem. However, it is interesting as an innocent reaction of people who have never suffered from the heavy charges of iMODE, WAP or whatever over second-generation mobile systems. Masataka Ohta
Re: cell phone audio email
James; So get a PBX that does VPIM, and dial into it. Our current solution is very similar to this, and has multiple problems, the most important being that it is much more complicated for the end-user than it needs to be. It needs to be complicated. The simple and straightforward way to transmit audio over the phone network is to use the voice channel. But, it has nothing to do with IETF nor the Internet. You think MIME audio attachments work over iMODE but not over WAP. I don't know what the new Docomo products do; that's why I'm asking. This problem is not specific to VPIM, or email; the mobile aspects (e.g., the lack of mobile carriers providing end-to-end Internet) form a problem bigger than the IETF. Your problem may be 25 times bigger than IETF. But, it is your problem and has nothing to do with IETF. Masataka Ohta
Re: cell phone audio email
James; Or, are you spamming IETF acting as a sales agent of Docomo? No, I'm not affiliated with any part of the cellphone business. I ask because my employer and I have multiple, specific applications for cellphone-based email with MIME audio attachments. OK. I acknowledge that you are involved in the cellphone business. There are many companies who -- if you believe their announcements -- are so close, but they seem to be throwing good money after bad, on dead-end technologies like WAP. You think MIME audio attachments work over iMODE but not over WAP. OK. You can discuss it in a specific WG (VPIM). But you should not bother people on the IETF mailing list by repeating an obvious fact that both WAP and iMODE are dead-end technologies. Masataka Ohta
Re: Topic drift Re: An Internet Draft as reference material
Eliot; I would accept your interpretation if you can go to a major search engine, like Yahoo or Altavista, and find me in a brief period of time ANY version of Mike O'Dell's 8+8 proposal. You should really check the archive of the big-internet mailing list (does someone know where it is archived?) for various 8+8 proposals (including mine), some of which were available as I-Ds (thus, an archive of expired I-Ds is useful). Mike's is the latest but the least useful one. Don't you think it shameful that there is no permanent record of a serious effort to deal with a serious problem (multihoming)? And this is a recent (read: current) problem! The handling of multihoming in Mike's draft is a very old one (even at the time of his proposal). As is briefly explained in draft-ohta-e2e-multihoming-00.txt, the method violates the end-to-end principle and does not scale. Masataka Ohta
Re: An Internet Draft as reference material
Harald; What do you think about the "de facto" situation that many technical documents are currently using Internet Drafts as reference material? I've seen the following two cases: 1. An Internet Draft refers to another Internet Draft. Common. It means that if the reference is normative, the I-D cannot be published as an RFC before the other. If both refer normatively to each other, they must be published at the same time. (If the reference is not normative, the draft name is replaced by "work in progress" when the RFC is published. Then, sometimes, the draft is lost...) 2. A book refers to another Internet Draft. Stupid, but nothing the IETF can do about it. If you say it is stupid (regardless of whether I-Ds are cited as work in progress, or not), those who are publishing RFCs citing I-Ds (regardless of whether I-Ds are cited as work in progress, or not) are stupid. Actually, it is not stupid. Instead, it would be stupid if the I-D editors were not responsible for keeping records of expired I-Ds. It is wise if all the expired I-Ds are put under: ftp://ftp.ietf.org/expired-internet-drafts/ or somewhere else. Masataka Ohta
Re: Mobile Multimedia Messaging Service
Mohsen; James (1) End-to-End Internet Services for Mobile Devices James Scope: Specifications and interoperability guidelines for James end-to-end mobile IP connection and transport services required James for support of standard Internet messaging ... The beauty of the Internet end-to-end model is that people don't have to wait for the IETF to create a working group, a charter, a chair, blessings, to move forward. Yes, in this case, people are using different URLs already. It is solved at the far end of the communication, so it works even with iMODE or WAP without the Internet end-to-end model. What was IETF's role in the Internet's main modern application? The role was to delay IANA registration, trying to keep influential power over application protocols but resulting only in making IANA registration less important. As far as the domain of Internet services for mobile devices goes, the key issue is "Efficiency". No. It is wrong to assume that bandwidth for mobile devices is limited forever. Simplicity is the key issue. Masataka Ohta
Re: getting IPv6 space without ARIN (Re: PAT )
Keith; GET A CLUE. I did read the draft. Huh? You obviously didn't. What's more, I have tried implementing multihoming in a way very similar to what you are describing, which is what informed my earlier comments. I know "very similar" often means "completely different", especially when you don't read the draft carefully. Without routing support from the network, it cannot be made to work well. Wrong. With the end-to-end principle, it is silly to distinguish routers and nodes and give routers more intelligence. With end-to-end multihoming, there is no reason that the global routing table is large. So, you can assume a host has a full routing table. However, unless you use host routes, the existence of a route entry means that there is reachability to some part of an address range of the entry, but not necessarily to the target host. Thus, routing systems may give you a hint, but routing information cannot be fully relied upon. All the possible addresses should be tried with a certain timeout. Applications needing quicker recovery should use smaller timeouts, of course. It is all written in the draft. READ THE DRAFT. Especially if you're also trying to support mobility. No. My draft says nothing about mobility, because it is not difficult. Masataka Ohta
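The recovery strategy sketched above (try every address of the destination with a timeout, using smaller timeouts for quicker recovery) looks roughly like the following in Python. This is an illustrative sketch, not the draft's specification; the function name and the 2-second default are assumptions:

```python
# Sketch of end-to-end multihoming recovery: try each address of the
# destination in turn with a per-attempt timeout, so a dead path on
# one address falls back to the next. Defaults are made up.
import socket

def connect_multihomed(host: str, port: int, timeout: float = 2.0):
    last_err = None
    for family, type_, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, type_, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)   # first reachable address wins
            s.settimeout(None)
            return s
        except OSError as err:
            last_err = err
            s.close()
    raise last_err if last_err else OSError("no addresses for " + host)
```

An application wanting quicker failover would simply pass a smaller timeout; nothing in the network needs to change, which is the end-to-end point being made.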
Re: *implement* the drafts (was: getting IPv6 space without ARIN)
Mau; GET A CLUE. I did read the draft. Huh? You obviously didn't. What's more, I have tried implementing multihoming in a way very similar to what you are describing, which is what informed my earlier comments. I know "very similar" often means "completely different", especially when you don't read the draft carefully. That may well be true. I expect therefore that you can point to a correct implementation of your draft, so that we can check for ourselves if multihoming works or not... Before stating stupid things about my draft, READ THE DRAFT. Two implementations are pointed to. It's regrettable that you are using both of them. Masataka Ohta
Re: Deployment vs the IPv6 community's ambivalence towards large providerss
Vint; I hope I will be forgiven for adding another message to this long thread. 1. we have to discuss the practical problems of deploying IPv6 and especially a bunch of corner cases or it won't work. 2. there are still a lot of "holes" in my opinion that need filling in any credible deployment scenario - and we'll learn the most from trying to get serious, operational IPv6 networks up and running. That's the easier half of the problem. When I visited a major router vendor last June and asked about IPv6 support, the answer was that IPv6 will be supported at the end of next year, because it involves a lot of software work, even though their hardware is ready. They then asked me which of the complex features of IPv6, such as tunneling or NAT-like capability, should be supported. 3. the ietf general list is probably the wrong place for further extended discussion Maybe. But it means that the entire IETF is the wrong place. Here is a better place than the IPNG WG committee list, where removal of features cannot be accepted. Masataka Ohta
Re: Deployment vs the IPv6 community's ambivalence towards large providers
Thomas; The other changes/benefits (simplified autoconfiguration, improved mobility, tools to help with renumbering, etc.), while important, are secondary. Huh? Compared to their IPv4 equivalents, all three features of IPv6 are unnecessarily complex without the necessary functionality. IPv6 is only rationally justified as a modest but necessary enhancement to IPv4. I agree with this, and suspect that much of the core IPv6 community does as well. That's a silly statement. Committees add all the features considered by some a modest but necessary enhancement, of course. Masataka Ohta
Re: imode far superior to wap
Nilsson; I'm afraid that ssh for phone is just another telephantism. :-) Compared to your much healthier view, yes. I do agree that it is very interesting. I was perhaps thinking the same but failed to express it; the hand terminal is going to be a computing device with voice capability rather than a phone with datacom kludges. Then it is immediately obvious why one wants to make a VT-style terminal connection to it. :-) (ie SSH) Huh? You may have your reasons. However, commercially speaking, the next obvious step in the evolution of mobile Internet terminals is removal of the ten-key pad, replaced by the controls of Nintendos and PlayStations. People will be busy playing network games with their own special protocols over the flat-rated mobile Internet. That's why end-to-end transparency is commercially important. People can still input numbers and other characters slowly (some quickly) for less important applications like phone calling or web browsing. Masataka Ohta
Re: Sequentially assigned IP addresses--why not?
Sean; Brian Carpenter writes to Anthony Atkielski: | The telephone company figured out how to avoid problems decades ago. Why | the computer industry has to rediscover things the hard way mystifies me. | | The telephone company has milliseconds to seconds to resolve an address | into a route. The Internet has microseconds to nanoseconds to do so. You are missing the difference between "what" and "where". The telephone company takes milliseconds to translate the equivalent of 6.6.9.9.9.6.6.8.6.4.e164.net into the equivalent of 192.36.143.3. That is, the phone number is merely an identity name, which is converted into a location name by a database lookup. In that sense, DNS names are randomly (more aggressively than sequentially) assigned addresses. Masataka Ohta
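The e164.net example above maps a phone number to a domain name by reversing its digits, one label per digit, so the usual DNS delegation hierarchy follows the number's own hierarchy. A sketch of that mapping, treating the e164.net suffix as the text gives it:

```python
# Map an E.164 phone number to a reversed-digit domain name, as in the
# 6.6.9.9.9.6.6.8.6.4.e164.net example above.
def e164_to_domain(number: str, suffix: str = "e164.net") -> str:
    digits = [d for d in number if d.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

print(e164_to_domain("+46 8 669 99 66"))
# -> 6.6.9.9.9.6.6.8.6.4.e164.net
```

The lookup then proceeds like any other DNS query, which is exactly the "identity name to location name by a database lookup" step the text describes.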
Re: imode far superior to wap
John;

Renfield Kuroda wrote:

James Seng wrote: Not sure if it is relevant, but i-mode is working on an end-to-end IP system now, which will be deployed sometime next year.

Really?

No. The guy from NTT Docomo who spoke at Adelaide mentioned it. I don't remember details, though.

The detail you wrote on the IETF ML on:

Date: Mon, 01 May 2000 14:30:54 -0400
Message-ID: [EMAIL PROTECTED]

is:

: Didn't someone from DOCOMO present in Adelaide, and say they were
: planning to go to running IP in the handsets?

That's all. I guess you don't understand the phrase "end-to-end", the essence of the Internet.

Masataka Ohta
Re: imode far superior to wap
Francis;

= according to an IPv6 Forum internal mail: NTTDoCoMo confirmed that IPv6
= will be used in their backbone starting Jan 2001 in a panel session with
= Fujitsu and the IPv6 Forum at WTC.

FYI, it's equally easy for DoCoMo to use IPv4, IPv6, OSI or any other protocol, because their backbone is a private network.

Masataka Ohta
Re: imode far superior to wap
Nilsson;

I doubt that you will find support from IETF folks for something that breaks the end-to-end model of IP (as Imode and WAP do as they are implemented today). I want to be able to ssh to my phone (or equivalent). Anything below that is just telephantisms.

I'm afraid that ssh for phone is just another telephantisms. :-)

If you want a phone with real Internet style, see our INET paper:

http://www.isoc.org/inet2000/cdproceedings/4a/4a_3.htm

The "Simple Internet Phone" has an architecture tuned for a future situation in which non-Internet networks, such as IP-based private telephone networks, will disappear. While the "Simple Internet Phone" is a form of voice over Internet Protocol (VoIP), most, if not all, VoIP protocols are designed giving priority to affinity with the telephone network. However, it is obvious that the telephone network will be replaced by the Internet, and will eventually disappear. At that time, most of the features of VoIP protocols will become obsolete. Instead, the "Simple Internet Phone" is designed giving priority to affinity with the Internet and its architectural principles as an "end-to-end," "globally connected" and "scalable" IP network. As a result, most features of VoIP are substituted by existing Internet protocols. With Internet phones, callees are required to have a persistent connection to the Internet with globally unique addresses, which helps to promote the healthy development of the Internet.

Despite all the hype of VoIP people on QoS with private IP networks (if you are hyped, your phone can not be free and can not compete with the telephone network), I just confirmed voice quality good enough between Taiwan and Japan through the USA.

Run this kind of protocol over a Ricochet terminal, and WAP and iMODE will disappear.

Masataka Ohta

PS You can purchase a prototype terminal adapter.
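As a minimal illustration of the end-to-end model the paper argues for (this is not the Simple Internet Phone protocol itself), two directly addressable sockets can exchange datagrams with no gateway or ALG in between; loopback stands in here for two globally addressed hosts:

```python
import socket

# Callee: a persistent, directly reachable datagram socket.
callee = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
callee.bind(("127.0.0.1", 0))        # loopback stands in for a global address
callee_addr = callee.getsockname()   # the address the caller dials

# Caller: sends straight to the callee's address; no proxy in the path.
caller = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
caller.sendto(b"voice-frame", callee_addr)

data, _ = callee.recvfrom(2048)
print(data.decode())                 # -> voice-frame
```

Behind a NAT or an application gateway (as with WAP or i-mode), the callee's socket would not be directly reachable, which is precisely the "end-to-end" property at stake in this thread.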
Re: belated apology
Keith;

p.s. I do however think that, given the tendency of various providers these days to violate the Internet protocol specifications and erode the ability of applications to run on the network, the community might benefit from some kind of "standardized" (in the loose sense) description of IP service (rather than ISPs) that could be specified by customers. And the IETF would appear to be the organization best suited to define such terms.

That is exactly what the IETF is (or, at least, was) doing, where the loose sense means "standard to be used on the Internet".

The IETF is the "Internet Engineering Task Force", not the "IP Engineering Task Force". The IETF can't say anything about the braindead protocols of braindead providers in their purely private IP networks.

Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
Bob;

* but yes, likely some things in this world are not acceptable to some
* segment of the population.  so don't accept them.  but life goes on and
* things change.
*
* randy

Changes are already implied by RFC1958, to which I referred. As things change, new RFCs can be issued.

Resist entropy.

You can't. Entropy and the number of RFCs monotonically increase.

Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
Randy;

My intention is to provide a semi-permanent definition as an Informational RFC. It is important to protect the definition from the bogus opinions of various bodies, including the IETF.

of course you will exuse the providers if we continue to be perverse and find new business models.

Exuse? If you mean execution or decapitalization, yes, I will.

Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
Jon;

Any comments on the content of the draft?

I would go further - first to define by exclusion, secondly to define a new class of providers (according to common usage) so that discussion can proceed.

My intention is to provide a semi-permanent definition as an Informational RFC. It is important to protect the definition from the bogus opinions of various bodies, including the IETF.

An ISP _hosts_ its own and its customers' hosts. Hosts follow the host requirements RFC, at least. An ISP uses routers to interconnect its own, its customers', and other ISPs' networks. Routers follow the router requirements RFC, at least.

They are requirements by the IETF. Worse, even in the IETF, there is no Internet Standard of router requirements yet, and the newest revision to the Proposed Standard is a BCP. So, please don't attempt to rely on it.

Service organisations that don't allow a host or router following the above definition to exercise the capabilities defined are what we now know as Content Service Providers, which must provide application-level gateways, and Application Service Providers, which offer portals or ALGs. In each case there may be good performance or security reasons for this mode of service, but there will usually be a lack of flexibility or ease of introduction of new services, content and applications in general.

I think my draft covers the case to make such network providers not ISPs.

personal comment

Other classes of organisation may simply be providing a subset of Internet services - I don't see a market or technical case for these, and in fact would encourage regulatory bodies to see if these types of organisations are trying to achieve lock-out or are engaged in other types of monopolistic or anti-competitive behaviour.

:-)

I just want to make it illegal for these types of organisations to call their service "Internet" or "internet". It's something like "Olympic".

Masataka Ohta
Re: draft-ietf-nat-protocol-complications-02.txt
Dear all;

Based on the previous discussion,

Jon In message [EMAIL PROTECTED], Masataka Ohta typed:
Jon
Jon     Is it fair if providers using iMODE or WAP are advertised
Jon     to be ISPs?
Jon
Jon     Is it fair if providers using NAT are advertised to be ISPs?
Jon
Jon     My answer to both questions is
Jon
Jon     No, while they may be Internet Service Access Providers and
Jon     NAT users may be IP Service Providers, they don't provide
Jon     Internet service and are no ISPs.
Jon
Jon i agree:
Jon in the UK, i would say that someone claiming internet access via WAP
Jon would be in breach of the trades description act.
Jon
Jon     Any oppositions?
Jon
Jon not from here (for wap - i dont know enough about iMODE to comment)

and the lack of opposition in the thread, I have drafted the attached ID.

However, the IESG is blocking the publication of the ID (it is just an Internet Draft, NOT an Informational RFC)! The IESG says they have not even seen the content, which means a lot more time will be wasted. So, I post the draft to the mailing list.

Any comments on the content of the draft? Comments on the blockage, if any, should be given under a separate subject.

I hope the IESG has not taken the obvious next step of moderating the mailing list.

Masataka Ohta

---

INTERNET DRAFT                                                   M. Ohta
draft-ohta-isps-00.txt                     Tokyo Institute of Technology
                                                               July 2000

                         The Internet and ISPs

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   Copyright (C) The Internet Society (May/1/2000). All Rights
   Reserved.

Abstract

   This memo gives definitions of the Internet and ISPs (Internet
   Service Providers).

1. The Internet

   The Internet is a public IP [1, 2] network globally connected end to
   end [3] at the Internetworking layer.

2. ISPs

   A network provider is an ISP, if and only if its network, including
   the access parts of the network to its subscribers, is a part of the
   Internet.  As such, ISPs must preserve the end-to-end and globally
   connected principles of the Internet at the Internetworking layer.

M. Ohta                Expires on January 1, 2001               [Page 1]

INTERNET DRAFT                    ISPs                         July 2000

   A network provider of a private IP or non-IP network, which is
   connected to the Internet through an application and/or transport
   gateway, is not an ISP.

   Despite the requirement of "global connectivity", a network provider
   may use transparent firewalls to the Internet, with no translation,
   to filter out a limited number of problematic well-known ports of
   TCP and/or UDP, and can still be an ISP.  However, if filtering out
   is the default and only a limited number of protocols are allowed to
   pass the firewalls (which means snooping of transport/application
   layer protocols), it can not be regarded as full connectivity to the
   Internet and the provider is not an ISP.

3. Security Considerations

   While some people may think that filtering by application/transport
   gateways offers some sort of security, they should recognize that
   macro viruses in e-mails can pass and are passing through all such
   gateways.

4. References

   [1] J. Postel, "Internet Protocol", RFC791, September 1981.

   [2] S. Deering, R. Hinden, "Internet Protocol, Version 6 (IPv6)
       Specification", RFC2460, December 1998.

   [3] B. Carpenter, "Architectural Principles of the Internet",
       RFC1958, June 1996.

5.
Author's Address

   Masataka Ohta
   Computer Center, Tokyo Institute of Technology
   2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550, JAPAN

   Phone: +81-3-5734-3299
   Fax:   +81-3-5734-3415
   EMail: [EMAIL PROTECTED]

6. Full Copyright Statement

   "Copyright (C) The Internet Society (July/1/2000). All