Actually, in most OSPF implementations, there are two relevant data
stores. OSPF often has one data store internally, and routes get
installed into a common RIB, managed by a RIB manager, which is clearly
a datastore.
It seems to me that "datastore" is just a descriptive term for the set
of manipulable state information. It has been clear from the outset
that I2RS had to store the operations it applied in some meaningful
fashion; otherwise it could not do collision detection or operation
removal.
Yours,
Joel
On 10/1/14, 1:31 PM, Igor Bryskin wrote:
Alia,
Your question makes sense if I2RS is limited to routing data manipulation.
In that case it could be thought of as an additional routing protocol. After all,
OSPF does not need any data store to install its routes.
What if an I2RS client wants to configure other things?
Igor
________________________________________
From: i2rs [[email protected]] on behalf of Alia Atlas [[email protected]]
Sent: Wednesday, October 1, 2014 9:54 AM
To: Dean Bogdanovic
Cc: [email protected]
Subject: Re: [i2rs] Why do we need a datastore?
Hi Dean,
Thanks for the explanation. It matches with what I understand for
configuration.
Where I am confused is why I2RS - which handles ephemeral state only and is
closer to the proprietary APIs that talk directly to
the routing processes - is being tied up in this.
Regards,
Alia
On Wed, Oct 1, 2014 at 9:39 AM, Dean Bogdanovic
<[email protected]<mailto:[email protected]>> wrote:
Hi Alia,
today networking devices can be managed in three ways:
1. CLI - the UI we all know and use quite heavily. For the device to boot
into a known state, a set of CLI commands has to be saved somewhere on the
device. For that you have a data store, which can be a flat file or a
database. The datastore is then read by the network device daemons, which
change their state based on it.
Before the APIs were exposed, Tcl/Expect and Perl ruled the automation world,
and everything was based on screen scraping. To make machine-to-machine
communication easier, NETCONF was created as a standard protocol for
communicating with the devices.
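For illustration, a NETCONF <edit-config> request is just XML over the wire. The base namespace below is the real NETCONF one; the interface payload and its namespace are invented for this sketch, not a real model:

```xml
<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target>
      <running/>
    </target>
    <config>
      <!-- hypothetical payload; urn:example:interfaces is made up -->
      <interfaces xmlns="urn:example:interfaces">
        <interface>
          <name>ge-0/0/0</name>
          <description>uplink to core</description>
        </interface>
      </interfaces>
    </config>
  </edit-config>
</rpc>
```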
2. via APIs - there are several types of APIs, but the most widely available
ones provide the same functionality as the CLI. The difference is that the CLI
is intended for humans and the APIs for machines. One example is the Junos XML
API, where (almost) every CLI command is represented by an XML API. It still
requires knowing how to configure the device via the CLI, and the result of an
application written with such APIs is stored in the data store from which the
daemons read how to change their state.
There are some vendors that provide APIs that allow communication with the
daemons directly, bypassing the management infrastructure, but those are
highly proprietary mechanisms.
The APIs that were released by vendors were not standardized, and each vendor
had different configuration models. So although communication with the devices
was standardized, there was no easy way to communicate semantics. This led to
YANG as a standard language that all vendors would understand, and now to
standardized configuration and operational models, so the same configuration
statement can be sent to all supporting devices (instead of expressing the
same desired functionality in several vendor-specific configuration statements).
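As a sketch of what that standard language looks like, here is a tiny, hypothetical YANG module; the module name and namespace are invented for the example and are not any standard model:

```yang
module example-interfaces {
  namespace "urn:example:interfaces";
  prefix exif;

  container interfaces {
    list interface {
      key "name";
      leaf name        { type string; }
      leaf description { type string; }
      leaf enabled     { type boolean; default true; }
    }
  }
}
```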
In the end it really doesn't matter how you communicate with the devices;
it boils down to wanting the device to boot into a known state. This was
done by having a local data store containing a set of instructions
describing what state the device should be in after reading it.
It really doesn't matter which of the two mechanisms above is used;
configuration data has to be provided. Historically, the data was on the
device, as the connection between the device and the network management
system was slow. Today (as in MSDCs), devices don't have a local
configuration; when they boot, they look for a provisioning system, get
their configuration from a remote location, and then change their state
based on it. This has led to a third way:
3. some vendors use Linux and allow management via the pseudo file system
(/proc), where the state of the device is changed directly without any need
for a data store on the device. Keep in mind that only Linux allows changing
the state of non-process data through /proc; the *BSD flavors don't allow
that.
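A minimal sketch of that third style, for illustration only (Linux-specific; the sysctl path below is a real kernel knob, and writing it requires root):

```python
# Read and change kernel state directly through /proc -- no on-device
# datastore is involved; a write takes effect immediately and is lost
# on reboot unless something re-applies it.
PATH = "/proc/sys/net/ipv4/ip_forward"

def get_ip_forward():
    with open(PATH) as f:
        return f.read().strip()          # "0" or "1"

def set_ip_forward(enabled):
    # Requires root; this is the moral equivalent of
    # `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    with open(PATH, "w") as f:
        f.write("1" if enabled else "0")
```

Nothing here is persisted: the "configuration" lives only as live kernel state, which is exactly why such boxes pull their config from a provisioning system at boot.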
So today, most network operators (by which I mean any entity that operates a
network, whether enterprise or carrier) need to simplify network management,
make it more efficient, and ensure that devices behave very predictably, so
that the network is in a known state. Because of the legacy, this is most
easily done by having a local datastore on a device, through which the state
of the device is changed.
Hope this helps
Dean
On Oct 1, 2014, at 12:25 AM, Alia Atlas
<[email protected]<mailto:[email protected]>> wrote:
Hi,
I'd like to really understand why I2RS needs a datastore and what that actually
means.
My initial conception of what an I2RS agent would do for, say, writing a
route in the RIB
model is that the I2RS agent would simply parse a received request from a
standard format
and model into the internal representation and pass that to a RIB Manager -
just as an OSPF implementation
might install a route via the RIB manager. An I2RS agent could also query the
RIB Manager to
read routes, and there'd be events coming out.
With the introduction of priorities to handle multi-headed writers and
collision errors, the I2RS agent would need to store what was written by which
client.
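A minimal sketch of that bookkeeping (all names here are hypothetical, not any agreed I2RS design): each write records the owning client and its priority, a colliding write from a lower-priority client is rejected, and a client's writes can be removed wholesale, e.g. when its session ends.

```python
class CollisionError(Exception):
    pass

class EphemeralStore:
    """Hypothetical per-client, priority-aware write bookkeeping."""

    def __init__(self):
        self._entries = {}   # key -> (client_id, priority, value)

    def write(self, key, value, client_id, priority):
        existing = self._entries.get(key)
        if existing is not None:
            owner, owner_prio, _ = existing
            # A different client may overwrite only with strictly
            # higher priority; otherwise report a collision.
            if owner != client_id and priority <= owner_prio:
                raise CollisionError(
                    f"{key} owned by {owner} at priority {owner_prio}")
        self._entries[key] = (client_id, priority, value)

    def remove_client(self, client_id):
        # "Operation removal": drop everything a client wrote.
        self._entries = {k: v for k, v in self._entries.items()
                         if v[0] != client_id}

    def read(self, key):
        entry = self._entries.get(key)
        return entry[2] if entry else None
```

A client rewriting its own entry is always allowed in this sketch; only cross-client overwrites are priority-checked.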
What benefit does a YANG datastore add, and what is the rationale? Why does
using one need to be standardized?
I apologize if this seems a naive question, but it's been quite a while since I
read up on YANG and NetConf/RestConf.
Regards,
no-hats Alia
_______________________________________________
i2rs mailing list
[email protected]<mailto:[email protected]>
https://www.ietf.org/mailman/listinfo/i2rs