Hi,

FWIW, I spent some time going through Y.IPv6RefModel and really really want 
RIPE NCC to be strong and blunt in their answer. The short response is clearly 
what Jordi wrote:

> In short the document makes no sense at all and is plenty of errors even in 
> the way some notation, protocol names and references are used.

The slightly longer response is that there are a number of fundamental 
misunderstandings in the document, and they are not small ones. They will 
(not might) create operational issues if implemented as described. Let me 
give a few examples:

1. Transition from IPv4 to IPv6

In a number of places the document claims that a transition from IPv4 to 
IPv6 is easier if there is a mechanical mapping between the addresses in 
the two protocols. This is wrong in so many ways. First of all, the premise 
that there is a transition at all. What is happening is that IPv6 is 
deployed. Then one day maybe IPv4 is depleted, or rather, made less important. 
Sure, various applications will move from using IPv4 to using IPv6, but having 
the application layer change its default protocol has little or nothing to 
do with the address plan. Further, due to the limited number of IPv4 addresses, 
the subnetting (if any) in IPv4 space is constrained in a way that IPv6 does 
not have to be. Limiting IPv6 address plans to the same scheme we have been 
squeezed into for IPv4 is a bad idea from the beginning. It is like adding ICT 
to existing processes instead of reviewing and optimizing the processes in the 
ways made possible by ICT.

This is mentioned in, for example, the Summary and Sections 10, 11, 12.1, 
12.2, 12.3, 12.4, and more.
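To make concrete what such a "mechanical mapping" looks like, and why it 
imports the IPv4 constraints into IPv6, here is a small Python sketch. The 
prefix and the embedding scheme are my own invention for illustration, not 
something taken from the document:

```python
import ipaddress

# Hypothetical mechanical mapping: embed the 32-bit IPv4 address in the
# low 32 bits of an illustrative IPv6 /96 (the documentation prefix).
PREFIX = ipaddress.IPv6Network("2001:db8::/96")

def map_v4_to_v6(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address into the low bits of PREFIX (toy scheme)."""
    return ipaddress.IPv6Address(
        int(PREFIX.network_address) | int(ipaddress.IPv4Address(v4))
    )

print(map_v4_to_v6("192.0.2.1"))   # 2001:db8::c000:201

# The mapped space inherits the IPv4 limit: at most 2**32 addresses,
# no matter how large the IPv6 address plan could otherwise be.
print(PREFIX.num_addresses)        # 4294967296
```

The point is that any such mapping freezes the IPv6 plan at 2^32 host slots, 
which is exactly the scarcity IPv6 is supposed to remove.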

2. Consultation with IETF

The Summary claims that Y.IPv6RefModel has been developed in coordination 
with the IETF. This claim is made without reference to any discussion, 
Internet-Draft or similar.

3. Deployment of IPv6

There is a claim in Section 7 that IPv6 might bring about a new Digital 
Divide. Some countries are mentioned as having high deployment of IPv6 and 
others very low. The examples are clearly chosen to strengthen the argument, 
as developed countries with low IPv6 deployment (Sweden, for example) are not 
mentioned. Further, in Section 8, where large-scale deployment of IPv6 is 
discussed, China is listed as a clear winner, although China is not listed as 
an exception when exceptions are enumerated in Section 7. In short, taking 
such statistics and drawing conclusions from them in such broad strokes is 
just a stupid idea, inflammatory, and creates unnecessary discussion. If the 
document is sound on technical grounds, it should be able to stand on its own 
without such arguments.

4. Reference Model for IPv6 deployment

There is a claim in Section 7 that having a reference model for how to subnet 
end-user allocations developed in the ITU-T, and this specific model in 
particular, will address issues regarding the Digital Divide. As the earlier 
portion of Section 7 and the arguments elsewhere in the document are at best 
weak, if not wrong, claims cannot be based on them. Further, there are no 
examples or arguments for why the reference model described would help solve 
the low IPv6 deployment in, for example, Sweden.

5. Reference Model for IoT deployment

In Section 8 it is claimed that this model should be particularly interesting 
for large-scale IoT deployments. As IoT devices are connected to the Internet 
like any other Internet-connected device, there is no difference between IoT 
deployment and non-IoT deployment. On the contrary: we will have IoT 
deployment everywhere. In every house, home, car, oven, airplane, ship and 
watch we will have some IoT devices or IoT functionality. Having specific IoT 
reference models, and then separate non-IoT allocation models as well, will 
absolutely create a digital divide and fragmentation of the Internet. If any 
reference model (or Best Practice) is to be created, it must be created from 
the general use of IPv6 in the world, and not for IoT specifically.

6. Explicit mapping between geographical location and addresses

In Section 11 there is a requirement that the geographical location of a 
device be visible as a specific set of bits (the first four). This implies 
that a device changes IPv6 address when moved from one building to another, 
and that one cannot have a subnet covering more than 16 buildings. I will not 
comment further on this.
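As a sanity check on the arithmetic, here is a tiny Python sketch. The prefix 
and the exact bit positions are my assumptions for illustration; the document 
only says the location is encoded in a specific set of four bits:

```python
BUILDING_BITS = 4

# Four location bits can only distinguish this many buildings:
print(2 ** BUILDING_BITS)  # 16

def address_for(building: int) -> str:
    """Toy encoding: building id in the top 4 bits of a 16-bit subnet field."""
    assert 0 <= building < 2 ** BUILDING_BITS
    return f"2001:db8:0:{building << 12:x}::1"  # prefix is illustrative

# Moving a device from building 3 to building 4 changes its address:
print(address_for(3))  # 2001:db8:0:3000::1
print(address_for(4))  # 2001:db8:0:4000::1
```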

7. DMZ

Section 11 talks about the DMZ in the singular. This is built on 20th-century 
thinking: that one has a firewall (i.e. one firewall) that separates the 
"inside" from the "outside", that everything on the "inside" of the firewall 
is secure, safe and trusted, and that devices that are to be reached from the 
outside have to be in a so-called DMZ. This model is no longer valid. Not 
every device and service should be reachable, and this cannot be enforced by 
a single firewall. Every active network element that can filter traffic must 
do so, and the devices themselves must filter explicitly which services they 
expose and make available. And this independently of whether they are on the 
"inside" or "outside" of whatever layer of the layered protection is designed 
into the network.

8. Internal servers

Similar to the misunderstanding of network design described in (7) above, the 
concept of an "internal server" is confused. Today's network structures 
consist of various services being exposed by various virtual and physical 
processes that announce themselves to clients. Various discovery mechanisms 
exist for such services, and the dynamic allocation of services, failover 
mechanisms, etc. in today's architectures make discussing "servers" at the IP 
addressing layer feel like discussing the most optimal way to move a rune 
stone.

9. Proposed addressing schemes

Based on the above and more, the proposed models in Section 12 are just wrong.


If, and this is a big if, some coordination is to be created regarding best 
practices for address management of whatever is allocated to the end user, it 
obviously must be developed in the IETF, or possibly the RIR community (or a 
mix of the two, as is normal). The IETF and RIR communities must in this 
specific case be clear and crisp in their response to the ITU-T: the ITU-T 
injecting itself into matters like this is stepping too far into areas that 
other SDOs are responsible for.

   paf
