Hi, Jensen:
Thanks for summarizing the discussion in the last IETF meeting; please see my 
comments inline.

From: alto [mailto:[email protected]] On Behalf Of Jensen Zhang
Sent: August 16, 2022, 21:04
To: IETF ALTO <[email protected]>
Subject: [alto] Open discussions of ALTO O&M data model

Hi ALTOers,

During the WG session at IETF 114, we had many discussions about the open 
issues for ALTO O&M. The authors appreciate all the comments and are working on 
the next revision.

Below we quickly summarize the major debates; we would welcome more discussion 
to move this work forward. To be more efficient, we may separate the 
discussions into different threads later.

1. How to handle data types defined by IANA registries

There are actually two arguments:

1.a. Which YANG statement is better suited to define the IANA-related data 
types (e.g., cost modes, cost metrics)? Two options: enumeration typedef or 
identity.

The main limitation of enumeration is extensibility. As ALTO may have multiple 
ongoing extensions, new private values will need to be added to existing data 
types for experimental purposes. Identity is the better choice to support this.
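To make the trade-off concrete, here is a minimal sketch of the two options 
for cost modes. These are fragments, not complete modules, and the names are 
illustrative (the "numerical"/"ordinal" values are the cost modes from RFC 
7285), not taken from the draft:

```yang
// Option 1: enumeration typedef -- the value set is fixed at module
// publication; adding a new cost mode requires revising this module.
typedef cost-mode {
  type enumeration {
    enum numerical;
    enum ordinal;
  }
}

// Option 2: identity -- any other module can extend the set by deriving
// a new identity, without touching this module.
identity cost-mode {
  description "Base identity for ALTO cost modes.";
}
identity numerical {
  base cost-mode;
}
identity ordinal {
  base cost-mode;
}

// A hypothetical experimental extension module could later add:
//   identity path-vector { base alto:cost-mode; }
```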

1.b. Whether to put the data type definitions into an IANA-maintained YANG 
module

According to the guidelines provided by Med 
(https://datatracker.ietf.org/doc/html/draft-boucadair-netmod-iana-registries-03),
 an IANA-maintained module is RECOMMENDED.
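For reference, an IANA-maintained module along the lines of that guidance 
would follow the pattern of existing iana-* modules (e.g., iana-if-type): IANA 
adds a new identity and a new revision each time the registry changes. A 
hypothetical skeleton (all names illustrative):

```yang
module iana-alto-cost-mode {
  namespace "urn:ietf:params:xml:ns:yang:iana-alto-cost-mode";
  prefix iana-cost-mode;

  revision 2022-08-16 {
    description
      "Initial revision, mirroring the IANA 'ALTO Cost Modes' registry.
       IANA appends a new revision statement on each registry update.";
  }

  identity cost-mode {
    description "Base identity for ALTO cost modes.";
  }
  identity numerical {
    base cost-mode;
  }
  identity ordinal {
    base cost-mode;
  }
}
```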
[Qin Wu] If you choose to use the identity data type, I think it is not 
necessary for you to use an IANA-maintained YANG module; an IANA-maintained 
YANG module allows you to update the data type later if needed.
If you don't expect frequent changes to the data types, identity looks like 
the best option, but it is not necessary to create an IANA-maintained module.
Otherwise, it seems like overdesign to me.
2. Whether and how to support server-to-server communication for multi-domain 
settings

There is no draft defining a standard for an ALTO east-west bound API 
(server-to-server communication). Defining a data model for this may be 
premature. But this problem is important in practice. We have several potential 
choices:

2.a. Each ALTO server connects to data sources in its own domain, and servers 
build inter-domain connections with each other (using an east-west bound API).

2.b. A single ALTO server connects to data sources from multiple domains. The 
data sources provide inter-domain information for the ALTO server to build a 
global network view.
[Qin Wu] You might refer to the multi-domain case in RFC7165; it did describe 
a few requirements and use cases for an ALTO east-west bound API, but I think 
it leaves the door open for the solution.
I think if you use a protocol other than ALTO to define the ALTO east-west 
bound API, it is clearly not in the scope of the ALTO WG; if you use the ALTO 
protocol to define server-to-server communication, I think it is in the scope 
of ALTO O&M YANG.
Also, don't forget the ALTO discovery mechanisms: one is the intra-domain 
discovery mechanism, the other is the inter-domain discovery mechanism.
3. How to build the connection between data sources and the algorithm data 
model

Consider that each algorithm data model defines an interface of an ALTO 
service implementation. It declares types for a list of arguments. Those 
arguments can be references to data collected from data sources.

In practice, there are two cases of using data to calculate ALTO information 
resources:

3.a. The ALTO service (algorithm plugin) directly reads data from data sources 
to calculate ALTO information resources. 
https://datatracker.ietf.org/doc/html/draft-hzx-alto-network-topo-00 can be 
one such example.

3.b. The ALTO server preprocesses data collected from data sources and writes 
it to a data broker. The algorithm plugin reads data from the data broker to 
calculate ALTO information resources. FlowDirector 
(https://dl.acm.org/doi/10.1145/3359989.3365430) can be such an example.

These two cases may coexist in the same ALTO server implementation.
[Qin Wu] We did discuss this in the Philadelphia meeting. ALTO focuses on the 
query interface; we didn't specify where the data comes from and how it is 
collected. I don't think we should put too many constraints on how the data is 
collected, exported, stored, and fed into the ALTO server. Therefore, based on 
our discussion in the ALTO weekly meeting on Tuesday this week, one suggestion 
to address this is to factor out a common data retrieval mechanism and move 
implementation-specific or protocol-specific parameters to an appendix as an 
example.
Supporting 3.a in the O&M data model is easy; Sec 7 of the draft provides such 
an example. However, considering that the O&M data model MUST NOT assume the 
schema/interface of the data broker is fixed, it will be hard to support 3.b.

One potential solution is to allow the data model to define references to data 
in the data broker, and dependencies between data in the data broker and the 
data sources.
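The idea above could be sketched roughly as follows. This is a hypothetical 
fragment (YANG 1.1, since it uses leafrefs inside a union); none of the node 
names come from the draft, and real paths would carry module prefixes:

```yang
// Inventory of raw data sources the ALTO server connects to.
list data-source {
  key "source-id";
  leaf source-id { type string; }
  leaf source-type { type string; }  // e.g., "yang-datastore", "bgp-ls"
}

// Entries in the data broker, each declaring which raw data
// sources it is derived from (the dependency mentioned above).
list broker-data {
  key "data-id";
  leaf data-id { type string; }
  leaf-list depends-on {
    type leafref { path "/data-source/source-id"; }
  }
}

// An algorithm argument can then reference either layer:
container algorithm-input {
  leaf topology-input {
    type union {
      type leafref { path "/data-source/source-id"; }  // case 3.a
      type leafref { path "/broker-data/data-id"; }    // case 3.b
    }
  }
}
```

The point of the sketch is that the broker's internal schema stays opaque; the 
O&M model only names broker entries and tracks their provenance.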

Looking forward to seeing feedback and further discussions.

Best regards,
Jensen
_______________________________________________
alto mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/alto
