Dear all:

This looks like the problem of local namespaces, which is the reason why MPLS 
switches labels.
The user may access a limited set of resources from a number of servers, and 
servers are read by many users.
You cannot ensure that you have a single alias space that satisfies them all.

So, what if the user had its own set of aliases and the server also had its own 
set?

The aliases could be mapped in a table, the way a uSAP and a pSAP are mapped 
across layers. Broadly:
- First time a user requests a resource in a server, it comes with a tuple 
(userid, useralias, fullname)
- The server responds with (serverid, serveralias, userid, useralias).

After that, if the server is a constrained device, the user talks to the 
server with (serverid, serveralias), and the user maintains a switching table 
for the resources it uses on various servers.
If the user is the constrained device and the server is not, the user could 
talk with (userid, useralias), and the server maintains the switching table.

For large servers, the server may allocate aliases dynamically based on what it 
is being asked for. We'd then need to talk about alias update or revocation, 
probably in terms of sessions and lifetimes.
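A minimal sketch of the handshake and switching table described above, assuming 
the server allocates its aliases sequentially; all class and method names here 
are my own invention, not from any draft:

```python
# Hypothetical sketch of the (userid, useralias) / (serverid, serveralias)
# exchange. Names (Server, User, request_resource, learn) are illustrative.

class Server:
    def __init__(self, serverid):
        self.serverid = serverid
        self.next_alias = 0
        # (userid, useralias) -> (serveralias, fullname)
        self.table = {}

    def request_resource(self, userid, useralias, fullname):
        # First request: allocate a server-side alias for this resource
        # and answer with both sides' identifiers.
        serveralias = self.next_alias
        self.next_alias += 1
        self.table[(userid, useralias)] = (serveralias, fullname)
        return (self.serverid, serveralias, userid, useralias)


class User:
    def __init__(self, userid):
        self.userid = userid
        # (serverid, serveralias) -> fullname: the user's switching table
        self.switching = {}

    def learn(self, serverid, serveralias, fullname):
        self.switching[(serverid, serveralias)] = fullname
```

After the first exchange, the constrained side only ever carries the short 
(id, alias) pair on the wire; the full name lives in the peer's table.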

Does that look usable?

Pascal

From: 6tisch [mailto:[email protected]] On Behalf Of Andy Bierman
Sent: Friday, June 5, 2015 20:55
To: Michel Veillette
Cc: [email protected]; Alexander Pelov; [email protected]
Subject: Re: [6tisch] Reserve space for aliases



On Fri, Jun 5, 2015 at 11:41 AM, Michel Veillette 
<[email protected]> wrote:
Hi Alexander

I have some concerns about allowing CoMI clients control of the list of 
aliases.
This approach works fine for the first CoMI application (e.g., 6TiSCH), but 
what do we do with subsequent CoMI applications?

One possible solution is that each CoMI client sends a list of (alias, data 
node ID) pairs to the CoMI server. In this case, the CoMI client might receive 
an error if one or more of these aliases are already reserved.

A second possible solution is that each CoMI client sends a list of data node 
IDs and gets a list of (alias, data node ID) pairs from the CoMI server. In 
this case, CoMI clients will have to deal with a mixed population of aliases.
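The two registration options above could be sketched as follows; the function 
names, return conventions, and error shape are assumptions for illustration, 
not CoMI syntax:

```python
# Option 1: the client proposes (alias, data_node_id) pairs and the
# server rejects any alias that is already reserved.
def register_client_chosen(server_aliases, proposals):
    conflicts = [alias for alias, _ in proposals if alias in server_aliases]
    if conflicts:
        return ("error", conflicts)
    server_aliases.update(dict(proposals))
    return ("ok", None)


# Option 2: the client sends only data node IDs; the server picks free
# aliases itself and returns the resulting (alias, data_node_id) mapping.
def register_server_assigned(server_aliases, data_node_ids):
    assigned = []
    alias = 0
    for node_id in data_node_ids:
        while alias in server_aliases:
            alias += 1
        server_aliases[alias] = node_id
        assigned.append((alias, node_id))
    return assigned
```

Option 1 keeps aliases predictable across servers but needs conflict handling; 
option 2 never conflicts but leaves each client holding a mixed, 
server-chosen alias population.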

The proposed solution needs to scale to a multi-vendor, multi-application 
environment.
With this in mind, do you have any alternative solutions to propose?



Does this approach allow each client to have a different set of aliases,
so the server has to maintain a configured mapping for each client?
This seems like a lot of overhead and NV-storage requirements
on the server.

Changing the schema identifiers based on which client is
sending the request seems like a complicated design change
from RESTCONF, NETCONF, or SNMP, where there is only
one schema tree, which does not depend on the client's identity.


Andy




[cid:[email protected]]

Michel Veillette
System Architecture Director
Trilliant Inc.
Tel: 450-375-0556 ext. 237
[email protected]<mailto:[email protected]>
www.trilliantinc.com<http://www.trilliantinc.com/>



From: Alexander Pelov [mailto:[email protected]]
Sent: June 5, 2015 12:03
To: Michel Veillette
Cc: Andy Bierman; [email protected]; [email protected]
Subject: Re: Reserve space for aliases

Hi Michel,


On June 5, 2015, at 17:17, Michel Veillette 
<[email protected]> wrote:

Hi Alexander

In your presentation at 6TiSCH, you propose the following data node ID 
structure.

•       32 bits YANG ID
–      20 bits for module ID (assigned by IETF)
–      10 bits for data node ID (generated deterministically)


Based on this structure, we can reserve module ID zero for aliases.
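The bit layout discussed here (a 20-bit module ID followed by a 10-bit data 
node ID, with module ID zero reserved for aliases) could be sketched like 
this; the helper names are mine, not from any draft:

```python
# Sketch of the proposed YANG ID layout: 20-bit module ID (assigned by
# IETF) in the high bits, 10-bit data node ID in the low bits.

MODULE_BITS = 20
NODE_BITS = 10

def pack_yang_id(module_id, node_id):
    assert 0 <= module_id < (1 << MODULE_BITS)
    assert 0 <= node_id < (1 << NODE_BITS)
    return (module_id << NODE_BITS) | node_id

def unpack_yang_id(yang_id):
    return yang_id >> NODE_BITS, yang_id & ((1 << NODE_BITS) - 1)

def is_alias(yang_id):
    # Module ID zero is reserved for aliases in this proposal, so any
    # ID whose high 20 bits are zero is an alias.
    return (yang_id >> NODE_BITS) == 0
```

A module-level reference (the "10 bits set to 0" case below) is then just 
pack_yang_id(module_id, 0).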


I was just summarizing what was discussed in CoMI. Actually, it is a 30-bit 
YANG ID, but that's to stay consistent with the YANG hash, and I don't mind 
keeping it 30 bits.


If we want to minimize both the network and node resources required by this 
alias mechanism, we can map an entire module to this space.
This can be implemented by a single integer resource (e.g., leaf aliasedModule 
{ type uint32 }).
This approach requires 4 bytes per CoMI server accessed.


It is possible to map an entire module to the alias space, and this is up to 
the operator. However, if you load two modules which redefine the same alias, 
you will lose the benefit from it. Maybe it would be interesting to be able to 
specify that you want the aliases from THIS specific module to be used.

If the /mg/0 alias is reserved for managing the aliases, this could be simply 
saying:
POST /mg/0
{
 "source_uri" : "/mg/BAA"
}

where /mg/BAA is the YANG id of the module (20 bits module ID + 10 bits set to 
0). This way, the server will know: OK, get the alias mapping from the YANG 
scheme of module with ID = B (as defined by the IETF).

Or, you can dynamically configure the mapping:
POST /mg/0
{
  YANG_id : alias,
  YANG_id : alias,
  YANG_id : alias
}

If we want a more complex but more flexible alias mechanism, your proposed 
map of (alias, YANG ID) seems to be the solution.
However, we have to consider that this structure can be as large as 1250 bytes 
per CoMI server accessed if limited to 256 aliases.

This assumes you need a separate mapping for each server. I would suppose 
that in a network you'll have a single alias mapping (maybe two?); after all, 
you're trying to optimize the management of your devices (servers). So, 
typically you'd map all 6tisch devices with aliases 1-10 (channel, slot, etc.) 
and use those on them, and on all others you'd access the full IDs.

Best,
Alexander





_______________________________________________
6tisch mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/6tisch
