Here is my third attempt at understanding and critiquing ILNP. I think I understand it better than in the past.
Please see my previous message for why I think ILNP does not fit into any of Bill's strategies, and why I think it needs to have a Summary and Analysis document.

  - Robin

My previous critiques of ILNP were:

  http://psg.com/lists/rrg/2008/msg02464.html                  2008-09-15
  http://www.irtf.org/pipermail/rrg/2008-November/000169.html  2008-11-17

The current I-Ds are from 2008-12-10:

  ILNP Concept of Operations
  http://tools.ietf.org/html/draft-rja-ilnp-intro-02

  Additional DNS Resource Records
  http://tools.ietf.org/html/draft-rja-ilnp-dns-01

  ICMP Locator Update message
  http://tools.ietf.org/html/draft-rja-ilnp-icmp-00

  Nonce Destination Option
  http://tools.ietf.org/html/draft-rja-ilnp-nonce-01

My general critique of requiring hosts to handle multihoming etc. applies to ILNP:

  http://www.irtf.org/pipermail/rrg/2008-November/000268.html  2008-11-24

Here are some notes and critiques of the current I-Ds.

ILNP Concept of Operations
http://tools.ietf.org/html/draft-rja-ilnp-intro-02

The lack of guaranteed uniqueness for IDs (the lower 64 bits of the address) strikes me as a potential problem - for obvious reasons. Although a host is supposed to be able to tell the difference, based on the nonce value (P15), between two correspondent hosts which use the same ID, it is not clear how this would be conveyed to applications, since the stack apparently gives only the lower 64 bits to the applications.

It is not clear to me how ILNP would work with unmodified applications. The app would look up DNS and the stack would intercept the response, replacing the AAAA result with one created from the I record and one or more L records. The stack would give the application the resulting 128-bit IP address and the application would open a socket bound to that address. Is this right? A detailed example would be helpful.

TCP needs to be changed to use only the lower 64 bits in its pseudo-header calculations - but this changed behavior must be implemented only on sessions with other ILNP hosts.
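As a minimal sketch of my reading of that change (not code from the I-Ds; the function names are mine), the ILNP pseudo-header would cover only the IDs, so a locator change mid-session does not invalidate checksums:

```python
# Sketch of the TCP pseudo-header change as I understand it. Classic
# IPv6 (RFC 2460) checksums cover all 128 bits of each address; an
# ILNP session would cover only the low 64 bits (the IDs).

def pseudo_header_v6(src: bytes, dst: bytes, length: int, nxt: int) -> bytes:
    """Standard IPv6 pseudo-header over the full 128-bit addresses."""
    return (src + dst + length.to_bytes(4, "big")
            + b"\x00\x00\x00" + bytes([nxt]))

def pseudo_header_ilnp(src: bytes, dst: bytes, length: int, nxt: int) -> bytes:
    """Hypothetical ILNP variant: only the low 64 bits (the IDs) are
    covered, so the locator (high 64 bits) can change mid-session
    without breaking the checksum."""
    return (src[8:] + dst[8:] + length.to_bytes(4, "big")
            + b"\x00\x00\x00" + bytes([nxt]))
```

A host that moves from one locator to another then produces the same checksum input - which is the point - but, as noted above, the stack must apply the variant only on sessions it knows to be ILNP.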
(P6) Each ILNP host is required to use Secure Dynamic DNS to update its I and L records in the DNS server which is authoritative for its FQDN. As I noted earlier, this involves some potentially prohibitive security and administrative arrangements. For instance, if the ILNP host is at a hosting company and runs web sites for 20 companies, it will need the authority to update the I and L records of its FQDN in the authoritative DNS servers of these 20 companies - which are not necessarily run by this hosting company. I know nothing about RFC 3007, but is it technically possible to give some remote host the credentials to do this update, and only this update, on a company's primary DNS server?

There is also an option for CE routers to change the LOC bits of the source and destination addresses of outgoing packets, and to update DNS to reflect these choices. So now it is not just the host but also CE routers changing these things - and they need the credentials too. It is not clear how the CE router tells the host of these changes.

How can all these permissions be reliably and securely organised when someone walks into an end-user site and connects their own personal device (for instance a laptop computer) to the network, via cabled or wireless Ethernet?

(P8) Localised addressing - how to handle packets sent between hosts in a site. The split-horizon DNS system sounds to me like a nightmare. Are there any arguments to the contrary? The alternative is simpler in terms of DNS, but involves trickier and more expensive routing which I haven't yet fully understood.

(P9) IPsec needs to be changed, so that it has additional modes just for use with ILNP sessions. Since applications can't know which sessions are ILNP and which are not, I guess the IPsec code has to know and behave accordingly. The changes are not just to the packet handling code for AH and ESP, but to the key management code too. This sounds very messy.
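Regarding the RFC 3007 question above: I believe per-name, per-type restriction is at least possible with some servers. For example, BIND's update-policy can grant a single TSIG key rights over one name and specific RR types only. A hypothetical named.conf fragment (the key and zone names are made up, and the new ILNP I and L RR types would need equivalent support to AAAA here):

```
zone "customer1.example" {
    type master;
    file "customer1.example.db";
    // Allow only the hosting provider's key, and only for www's
    // address records - nothing else in the zone. The ILNP I and L
    // RR types would need to be grantable in the same way.
    update-policy {
        grant webhost-key.example. name www.customer1.example. AAAA;
    };
};
```

Even so, this fragment has to exist in each of the 20 customer zones, each administered by a different organisation, which is the administrative burden I am pointing at.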
DNS servers for any ILNP hosts need to be changed - but this wouldn't be too hard. On page 10 there is mention of DNS servers automatically creating an AAAA record (one or more?) for non-ILNP hosts from the I and L records.

(P10) It is not at all clear how reverse DNS can be implemented. The reverse DNS is often administered by a completely different organisation than the ordinary DNS. In the case of a 2-ISP multihomed site, there would be two separate reverse DNS systems, one for each ISP. How could they be updated every time a host changed its LOC bits - such as in a multihoming failure, where a set of LOC bits formerly in use should no longer be used? It is bad enough that the host has to securely update the DNS of its one or more FQDNs (a web-host company's hosts might have dozens or a hundred such FQDNs, each in a different customer company's DNS). To update the reverse DNS as well, each host would need to be able to securely update a subset of both ISPs' reverse DNS systems. How could all this authority be securely given to some guest host in an end-user network, such as a phone or laptop connected temporarily via Wi-Fi?

There need to be very short or zero caching times for DNS queries on these FQDNs. Otherwise, if a host changes its LOC bits, other hosts using DNS to try to establish a session with this host will have stale information and will send packets to a non-functional address. This looks like it would dramatically increase the load on the entire DNS system.

(P10-11) I am not sure how address referrals can work. Referrals occur at the application level and there is no DNS involvement. Suppose host A is having an ILNP session with B, using LA as its upper 64 bits and IA as its lower 64 bits, and these 128 bits are given to an application running on ILNP-capable C. Assuming A is still reachable on this address when C sends packets to it, how is the stack in either C or A to know this is supposed to be an ILNP session?
Would C recognise that this address for A was not in its ILNP-specific session cache, and so generate a fresh nonce, sending Nonce Destination Options along with the initial packets?

Clearly, referrals can't work if A is no longer available on this 128-bit address. So multihoming can't work for referrals. Hence the mention on page 11 of creating a new API, and modifying applications, so they can operate differently and avoid referrals. But that involves changing applications, which raises probably the most prohibitive barriers imaginable to development and adoption.

Multihoming only works between upgraded hosts anyway, so in the early days of deployment ILNP provides its benefits (multihoming and perhaps inbound TE) to only a very small fraction of sessions, on average.

Additional DNS Resource Records
http://tools.ietf.org/html/draft-rja-ilnp-dns-01

I understand how a 2-ISP multihomed host with I bits "zzzz" can work with two sets of L bits: "xxxx" for the /64 of ISP 1's address range where this host resides, and likewise "yyyy" for ISP 2. A DNS query for its FQDN returns either "xxxx zzzz" or "yyyy zzzz" for the AAAA record (for non-ILNP hosts to use) and, for instance, giving priority to ISP 1:

  L (first of two)   xxxx   priority 1
  L (second of two)  yyyy   priority 2
  I                  zzzz   (priority any value)

Conventional round-robin techniques enable one DNS name to give out multiple AAAA addresses, spreading the load of sessions from multiple requesting hosts over multiple responding hosts. Since each AAAA address can be for any host on the Net, this enables the spreading of load over hosts in different networks. AFAIK, such round-robin techniques are not possible with ILNP. In the multihomed example above, I guess it would be possible to spread incoming sessions over the two links, for the one physical host, by setting both L priorities to the same value. However, suppose there was a need to spread the load over multiple physical hosts.
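As an aside, here is how I understand the semantics of those records - a sketch (my assumptions, not the I-D's text) of a stack picking the preferred L record, MX/SRV-style with the lowest value winning, and forming the 128-bit address:

```python
# Sketch: combining an I record with the preferred L record to form a
# 128-bit address. Lowest-priority-value-wins is my assumption; the
# placeholder constants stand in for "xxxx", "yyyy" and "zzzz".

def pick_address(l_records, identifier):
    """l_records: list of (locator, priority) with 64-bit locators;
    identifier: the 64-bit ID. Returns the 128-bit address as an int."""
    locator, _priority = min(l_records, key=lambda r: r[1])
    return (locator << 64) | identifier

L_ISP1 = 0x20010DB800000001   # "xxxx", via ISP 1, priority 1
L_ISP2 = 0x20010DB900000001   # "yyyy", via ISP 2, priority 2
IDENT  = 0x021234FFFE56789A   # "zzzz"

addr = pick_address([(L_ISP1, 1), (L_ISP2, 2)], IDENT)
```

With equal priorities the tie-break rule would matter for load spreading, but min() here simply takes the first entry - the I-D would need to specify the actual behaviour.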
The only obvious approach would be to add a second I record "gggg" and give it the same priority as "zzzz". Then there could be two physical hosts, both getting incoming sessions via both ISP links. However, the two hosts need to be in the same end-user network. There seems to be no way of doing round-robin session sharing between physical hosts in separate networks with ILNP.

How does the extra length of this new response information affect the ability to reply in a single packet?

ICMP Locator Update message
http://tools.ietf.org/html/draft-rja-ilnp-icmp-00

I think it would be helpful to add that the authentication for this message is always provided by the Nonce Destination Option.

This message provides a new list of LOCs for a host's ID address and nonce. Normally it is sent by the host. However, it could be sent by a CE router. Would the CE router spoof the host's sending address? (Then the host, not the CE router, gets any ICMP error messages.) If CE routers are to do this, then I think the protocol needs to be extended to include the address of the host concerned. Presumably the CE router would pick up the nonce from passing traffic. If this is sent by a CE router, as noted above, how does the CE router tell the host whose LOC addresses it has just updated in some remote correspondent host?

Presumably this message overrides whatever the destination host has learned about LOCs and priorities from a DNS lookup, or from any previous such Update message. There appears to be no provision for acknowledging this message - yet there surely needs to be an acknowledgement, since the message is so important that the sending host (or router) should persist in trying to get it through.

Nonce Destination Option
http://tools.ietf.org/html/draft-rja-ilnp-nonce-01

The ILNP stack can add this Destination Option to any packet. The destination option may be 8 or 20 bytes long.
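Since the option can apparently be added to any packet, the stack presumably has to reserve headroom for the worst case. A sketch of that bookkeeping (the constants and function names are mine, not from the I-Ds):

```python
# Sketch: headroom an ILNP stack might reserve for a worst-case
# 20-byte Nonce Destination Option on any outgoing packet.

IPV6_HDR = 40        # fixed IPv6 header
TCP_HDR = 20         # minimum TCP header
MAX_NONCE_OPT = 20   # worst-case Nonce Destination Option

def ilnp_safe_mss(path_mtu: int) -> int:
    """TCP MSS to use so that a worst-case nonce option still fits
    under the PMTU; 20 bytes lower than the classic calculation."""
    return path_mtu - IPV6_HDR - TCP_HDR - MAX_NONCE_OPT

def payload_limit(path_mtu: int, session_is_ilnp: bool) -> int:
    """RFC 1191-style payload limit: only sessions known to be ILNP
    pay the nonce-option tax."""
    reserve = MAX_NONCE_OPT if session_is_ilnp else 0
    return path_mtu - IPV6_HDR - reserve
```

On a 1500-byte path this costs 20 bytes of payload on every full-sized packet of an ILNP session, which is the efficiency loss discussed below.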
The application can't know that this is to be added, so I think the stack has to tell applications to always send packets at least 20 bytes shorter than what could normally be carried according to the PMTUD limits of the path to the destination host. This would seem to reduce the efficiency of many ILNP sessions. Also, I guess the RFC 1191 part of the stack would need to be aware of the presence or absence of this option.

Tentative conclusion

ILNP suffers from all the problems I noted about any proposal which pushes the responsibility for multihoming out to hosts. It also does not provide portability. I am not sure how ACLs would work with ILNP's multiple addresses for each host.

It offers early adopters very little incentive to use it, since multihoming only works with other upgraded networks. Until essentially all hosts on the Net are upgraded, it will not provide robust multihoming for all communications. In addition to upgrading host stacks, DNS servers need to be changed, and elaborate trust relationships will often need to be established and managed between different organisations for multihoming to work.

This is an "elimination" strategy, which could, in principle, remove the need for end-user networks to use PI addresses. However, all such networks need to be motivated to adopt this modified form of IPv6.

Could DNS servers be multihomed in this fashion? Wouldn't that require additional I and L information in responses to one request, pointing to the address of another server to query next? There appears to be a scaling problem due to ILNP apparently requiring much more frequent requests to the authoritative DNS servers.
The level of complexity seems to be minimal, with no actual cryptography - quite a contrast with crypto-laden HIP:

  http://infrahip.hiit.fi/index.php?index=how

Unless reverse DNS is fully supported - and this does not seem to be discussed in detail in these I-Ds - there is no mapping from a 64-bit ID to its one or more 64-bit LOCs. There is only a mapping from an FQDN to one ID with multiple LOCs - for multihoming.

There appears to be no method of doing round-robin for multiple hosts in separate networks - which would be a major barrier to adoption for some end-user networks.

_______________________________________________
rrg mailing list
[email protected]
https://www.irtf.org/mailman/listinfo/rrg
