Hi ALTOers,

From the WG session at IETF 114, we had a lot of discussion about the open
issues for ALTO O&M. The authors appreciate all the comments and are working
on the next revision.

We briefly summarize the major debates below and would like to have more
discussion to move this work forward. To be more efficient, we may split
the discussions into different threads later.

1. How to handle data types defined by IANA registries

There are actually two points of debate:

1.a. Which YANG statement should be used to define the IANA-related data
types (e.g., cost modes, cost metrics)? There are two options: an
enumeration typedef or an identity.

The main limitation of the enumeration is extensibility. As ALTO may have
multiple ongoing extensions, it will be necessary to add new private values
to existing data types for experimental purposes. Identity is the better
choice to support this.
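
For illustration, here is a minimal sketch of the two options in YANG (the
statement names below are ours for illustration, not taken from the current
draft):

  // Option 1: enumeration typedef. The value set is closed, so adding
  // an experimental cost mode requires updating this typedef itself.
  typedef cost-mode {
    type enumeration {
      enum numerical;
      enum ordinal;
    }
  }

  // Option 2: identity. Another module (e.g., an experimental
  // extension) can derive new cost modes from the base identity
  // without touching the module that defines it.
  identity cost-mode {
    description "Base identity for ALTO cost modes.";
  }
  identity numerical {
    base cost-mode;
  }
  identity ordinal {
    base cost-mode;
  }

  // A leaf using the identity-based option:
  leaf cost-mode {
    type identityref {
      base cost-mode;
    }
  }

With option 2, an experimental module would only need to add, e.g.,
"identity exp-cost-mode { base cost-mode; }".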

1.b. Whether to put the data type definitions into an IANA-maintained YANG module

From the guidelines provided by Med (
https://datatracker.ietf.org/doc/html/draft-boucadair-netmod-iana-registries-03),
an IANA-maintained module is RECOMMENDED.
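
In that case, the ALTO O&M module would only import the IANA-maintained
module and reference its base identity, so future registry updates touch
the IANA-maintained module alone. A rough sketch, assuming a hypothetical
module name "iana-alto-cost-mode":

  import iana-alto-cost-mode {
    prefix iacm;
  }

  leaf cost-mode {
    type identityref {
      base iacm:cost-mode;
    }
    description
      "A cost mode supported by this resource; the set of valid
       values is maintained by IANA in iana-alto-cost-mode.";
  }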

2. Whether and how to support server-to-server communication for
multi-domain settings

There is no draft defining any standard for an ALTO east-west bound API
(server-to-server communication), so defining a data model for it may be
too early. But this problem is important in practice. We have several
potential choices (a rough data-model sketch of both follows after 2.b):

2.a. Each ALTO server connects to the data sources in its own domain, and
the servers build inter-domain connections with each other (using an
east-west bound API).

2.b. A single ALTO server connects to data sources from multiple domains.
The data sources provide inter-domain information for the ALTO server to
build a global network view.
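
To make the two options more concrete in data-model terms, here is a rough
sketch (all node names are placeholders, not taken from the draft): 2.a
mainly needs a list of peer ALTO servers, while 2.b mainly needs a
per-data-source domain attribute.

  // Option 2.a: the server manages only its own domain's data sources
  // and keeps a list of peer ALTO servers reachable via an east-west
  // bound API (whatever that API turns out to be).
  list peer-alto-server {
    key "peer-id";
    leaf peer-id { type string; }
    leaf uri { type string; }
  }

  // Option 2.b: a single server aggregates data sources from several
  // domains, so each data source records the domain it belongs to.
  list data-source {
    key "source-id";
    leaf source-id { type string; }
    leaf domain { type string; }
  }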

3. How to build the connection between data sources and the algorithm data model

Consider that each algorithm data model defines an interface of an ALTO
service implementation: it declares types for a list of arguments, and
those arguments can be references to data collected from data sources.
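
For example (a rough sketch with placeholder node names, not the draft's
actual schema), an argument of an algorithm could be declared as a leafref
pointing into the list of configured data sources:

  // A hypothetical algorithm data model whose single argument refers
  // to one of the data sources configured on the ALTO server.
  container routing-cost-algorithm {
    leaf topology-source {
      type leafref {
        path "/alto-server/data-source/source-id";
      }
      description
        "The data source from which the algorithm reads the raw
         topology used to compute routing costs.";
    }
  }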

In practice, there are two cases of using data to compute ALTO information
resources:

3.a. The ALTO service (algorithm plugin) directly reads data from the data
sources to compute ALTO information resources.
https://datatracker.ietf.org/doc/html/draft-hzx-alto-network-topo-00 is one
such example.

3.b. The ALTO server preprocesses data collected from the data sources and
writes it to a data broker. The algorithm plugin then reads data from the
data broker to compute ALTO information resources. FlowDirector (
https://dl.acm.org/doi/10.1145/3359989.3365430) is such an example.

These two cases may coexist in the same ALTO server implementation.

Supporting 3.a in the O&M data model is easy; Sec 7 of the draft provides
such an example. However, considering that the O&M data model MUST NOT
assume the schema/interface of the data broker is fixed, it will be hard to
support 3.b.

One potential solution is to allow the data model to define references to
data in the data broker, and dependencies between data in the data broker
and the data sources.
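
One way to express this (again a rough sketch with placeholder names, since
the broker schema itself cannot be assumed) is to model broker entries as
opaque named items whose dependencies on data sources are recorded
explicitly, and to let algorithm arguments refer to those entries:

  // Broker entries are opaque to the O&M model: only a name and the
  // data sources they are derived from are recorded, not their schema.
  list broker-entry {
    key "entry-id";
    leaf entry-id { type string; }
    leaf-list depends-on {
      type leafref {
        path "/alto-server/data-source/source-id";
      }
      description "Data sources this broker entry is derived from.";
    }
  }

  // An algorithm argument can then reference a broker entry (case 3.b)
  // instead of a raw data source (case 3.a).
  leaf input-data {
    type leafref {
      path "/alto-server/broker-entry/entry-id";
    }
  }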

Looking forward to your feedback and further discussion.

Best regards,
Jensen