Hi Dean,

Sorry for the delay in responding. Telechats do that to one.
On Wed, Oct 1, 2014 at 10:12 AM, Dean Bogdanovic <[email protected]> wrote:
>
> On Oct 1, 2014, at 9:54 AM, Alia Atlas <[email protected]> wrote:
>
> Hi Dean,
>
> Thanks for the explanation. It matches what I understand for
> configuration. Where I am confused is why I2RS - which is doing ephemeral
> state only and matches more closely the direct proprietary APIs into the
> routing processes - is being tied up in this.
>
> Today there is no standard protocol that exposes writing to the daemons
> directly, except through a datastore. If you compare the routing daemon in
> Junos with the routing daemon in IOS or NX-OS, the data models are very
> different. You need a standard API to communicate with all of them.

Right - but that's what I2RS is asking for and trying to do. I'm sure I
didn't just hear you say that we can't do it because it doesn't exist yet.

> Here is a reason why a local datastore comes in handy:
>
> The routing daemon crashes and the state is gone. The routing state has
> to be reinstated. Either:
>
> 1. replay config data from the local data store(s) and reactivate the
> configuration, or
> 2. replay config data from the local data store (single data store) and
> request that all I2RS clients replay their data again.
>
> Now, if some I2RS client tried to execute routing changes during the
> crash, with a local store those changes can be accepted and put into a
> wait state until the daemon recovers. I2RS clients can be informed that
> their changes are pending.

I do understand that a datastore is useful for config. For I2RS, if the
relevant routing process crashes, then the I2RS clients are simply notified
that their state is gone. If an I2RS client tries to request a change and
the routing daemon isn't up, then the change fails! This is part of being
in a dynamic and distributed system. (There is a rough sketch of what I
have in mind at the end of this message.)

> Both approaches have pluses and minuses.
>
> Another issue is with config statements. If you go directly to the
> daemons, only a few very basic things can be exposed (like interfaces,
> routes, and stateless firewall filters). If you want to expose more
> complex statements like L3VPN, then you have to create logic that
> communicates with multiple daemons to create such state on the device.
> You are duplicating such logic on the device.

Right - but here I have to pull our attention back to the charter. We can
do: RIB, topology (read only), BGP, DDoS mitigation, hub-and-spoke routing
improvement, and traffic exit point optimization. If there are proprietary
mechanisms for everything else, that's ok. Let's get this designed
correctly - given the WG agreement on the architecture and the charter
constraints on the use-cases.

> I want to allow network administrators to decide for themselves what is
> exposed through I2RS, not us (vendors, SDOs).

Then we boil the ocean, claim everything is safe, and try to solve all the
hard problems instead of taking reasonable efforts with the simple parts
first. Let's get something done that can be implemented.

Regards,
Alia

> Dean
>
> Regards,
> Alia
>
> On Wed, Oct 1, 2014 at 9:39 AM, Dean Bogdanovic <[email protected]> wrote:
>
>> Hi Alia,
>>
>> Today networking devices can be managed in three ways:
>>
>> 1. CLI - the UI we all know and use quite heavily. In order for the
>> device to boot into a known state, a set of CLI commands has to be saved
>> somewhere on the device. For that you have a data store. That data store
>> can be a flat file or a database. The datastore is then read by the
>> network device daemons, which change their state based on it.
>> Before the APIs were exposed, TCL/Expect and Perl ruled the automation
>> world and everything was based on screen scraping. In order to make
>> machine-to-machine communication easier, NETCONF was created as a
>> standard protocol to communicate with devices.
>>
>> 2. APIs - there are several types of APIs, but the most widely available
>> ones provide the same functionality as the CLI. The difference is that
>> the CLI is intended for humans and APIs are intended for machines. One
>> example is the Junos XML API, where (almost) every CLI command is
>> represented by an XML API. It still requires knowing how to configure the
>> device via CLI, and the result of an application written with such APIs
>> is stored in the data store, from which the daemons read how to change
>> their state.
>> There are some vendors that provide APIs that allow communication with
>> the daemons directly, bypassing the management infrastructure, but those
>> are highly proprietary mechanisms.
>> The APIs released by vendors were not standardized, and each vendor had
>> different configuration models. So although communication with devices
>> was standardized, there was no easy way to communicate semantics. This
>> led to YANG as a standard language that all vendors would understand,
>> and now to standardized configuration and operational models, so the
>> same configuration statement can be sent to all supporting devices
>> (instead of writing the same desired functionality as several
>> vendor-specific configuration statements).
>>
>> In the end it really doesn't matter how you communicate with the
>> devices; it boils down to wanting the device to boot into a known state.
>> This was done by having a local data store containing a set of
>> instructions describing what state the device should be in after reading
>> it.
>>
>> It really doesn't matter which of the above two mechanisms is used;
>> configuration data has to be provided. Historically, the data was on the
>> device, as the connection between the device and the network management
>> system was slow. Today (for example in MSDCs), devices don't have local
>> configuration; when they boot, they look for a provisioning system, get
>> their configuration from a remote location, and then change the state of
>> the device based on it. This has led to:
>>
>> 3. some vendors using Linux and allowing management via a pseudo file
>> system (/proc), where the state of the device is changed directly
>> without needing a data store on the device. You have to keep in mind
>> that only Linux allows changing the state of non-process data through
>> /proc; *BSD flavors don't allow that.
>>
>> So today, most network operators (by this I mean any entity that
>> operates a network, either enterprise or carrier) need to simplify
>> network management, make it more efficient, and have devices behave very
>> predictably, so that the network is in a known state. Because of the
>> legacy, this is easiest done by having a local datastore on the device,
>> through which the state of the device is changed.
>>
>> Hope this helps
>>
>> Dean
>>
>>
>> On Oct 1, 2014, at 12:25 AM, Alia Atlas <[email protected]> wrote:
>>
>> > Hi,
>> >
>> > I'd like to really understand why I2RS needs a datastore and what that
>> > actually means.
>> > In my initial conception of what an I2RS agent would do for, say,
>> > writing a route in the RIB model, the I2RS agent would simply parse a
>> > received request from a standard format and model into internal form
>> > and pass that to a RIB Manager - just as an OSPF implementation might
>> > install a route via the RIB Manager. An I2RS agent could also query
>> > the RIB Manager to read routes, and there'd be events coming out.
>> >
>> > With the introduction of priorities to handle multi-headed writers and
>> > collision errors, the I2RS agent would need to store what was written
>> > by which client.
>> >
>> > What benefits and rationale does a YANG datastore add? Why does using
>> > one need to be standardized?
>> >
>> > I apologize if this seems a naive question, but it's been quite a
>> > while since I read up on YANG and NETCONF/RESTCONF.
>> >
>> > Regards,
>> > no-hats Alia
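P.S. To make a couple of the points above concrete, here are some rough
sketches in Python. They are illustrative only: the class names, data
structures, and behavior are assumptions of mine, not any vendor's API and
not an agreed I2RS design.

First, the crash case. An agent holding only ephemeral I2RS state can
reject writes while the routing daemon is down and tell clients their state
is gone when it comes back, with no replay from a datastore. A minimal
sketch, assuming a purely in-memory agent:

    # Hypothetical sketch of ephemeral-state handling in an I2RS agent.
    # No datastore: state lives only in memory, keyed by client.

    class DaemonDownError(Exception):
        """Raised when a write is attempted while the routing daemon is down."""

    class EphemeralI2rsAgent:
        def __init__(self):
            self.daemon_up = True
            # route key -> (client_id, route_data); lost if the daemon restarts
            self.ephemeral_routes = {}
            self.clients = set()

        def write_route(self, client_id, key, route_data):
            if not self.daemon_up:
                # No wait state, no replay: the change simply fails.
                raise DaemonDownError("routing daemon unavailable, write rejected")
            self.clients.add(client_id)
            self.ephemeral_routes[key] = (client_id, route_data)

        def daemon_crashed(self):
            self.daemon_up = False

        def daemon_restarted(self):
            # Ephemeral state is gone; clients are notified and may re-install.
            lost = list(self.ephemeral_routes)
            self.ephemeral_routes.clear()
            self.daemon_up = True
            for client_id in self.clients:
                self.notify(client_id, "ephemeral state lost", lost)

        def notify(self, client_id, event, details):
            # Placeholder for the agent-to-client notification channel.
            print(f"notify {client_id}: {event} ({len(details)} routes)")

The contrast with option 2 above is that nothing is queued or put into a
wait state; a client that cares simply re-installs after the notification.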
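Second, the "boot into a known state" point from Dean's earlier mail.
Whatever the transport (CLI, XML API/NETCONF, or /proc-style direct
access), a device that depends on a local datastore essentially replays it
at boot and hands each stanza to the daemon that owns it. A toy sketch of
that replay loop; the JSON file format and the daemon hand-off are invented
for illustration:

    import json

    # Hypothetical on-disk datastore: a JSON file mapping daemon names to
    # the configuration stanzas they own.

    def replay_datastore(path, daemons):
        """Read the local datastore and push each stanza to its owning daemon."""
        with open(path) as f:
            datastore = json.load(f)
        for daemon_name, stanzas in datastore.items():
            daemon = daemons.get(daemon_name)
            if daemon is None:
                print(f"skipping {daemon_name}: no such daemon on this device")
                continue
            for stanza in stanzas:
                daemon.apply(stanza)   # device reaches its known state here

    class FakeDaemon:
        """Stand-in for a routing or interface daemon."""
        def __init__(self, name):
            self.name = name
        def apply(self, stanza):
            print(f"{self.name} applying {stanza}")

    if __name__ == "__main__":
        demo = {"rpd": [{"route": "192.0.2.0/24", "next-hop": "198.51.100.1"}],
                "dcd": [{"interface": "ge-0/0/0", "mtu": 9000}]}
        with open("demo-datastore.json", "w") as f:
            json.dump(demo, f)
        daemons = {"rpd": FakeDaemon("rpd"), "dcd": FakeDaemon("dcd")}
        replay_datastore("demo-datastore.json", daemons)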
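Third, the multi-headed writer question from my original mail. Once
priorities are introduced, the agent does need to remember which client
wrote each entry, but that is modest bookkeeping rather than a full
datastore. A sketch of one plausible collision rule (a strictly higher
priority overrides, anything else is reported back as a collision); again,
the structures and the exact rule are assumptions, not the agreed I2RS
semantics:

    # Hypothetical bookkeeping for multi-headed writes to a RIB model.
    # Each installed entry remembers the winning client and its priority.

    class CollisionError(Exception):
        pass

    class RibWriteTable:
        def __init__(self):
            # prefix -> (client_id, priority, route_data)
            self.entries = {}

        def write(self, client_id, priority, prefix, route_data):
            current = self.entries.get(prefix)
            if current is not None:
                cur_client, cur_prio, _ = current
                # One plausible rule: a strictly higher priority overrides;
                # anything else is reported back as a collision.
                if cur_client != client_id and priority <= cur_prio:
                    raise CollisionError(
                        f"{prefix} already written by {cur_client} "
                        f"at priority {cur_prio}")
            self.entries[prefix] = (client_id, priority, route_data)
            # Here the agent would pass the route on to the RIB Manager,
            # just as an IGP implementation would.

        def owner(self, prefix):
            return self.entries.get(prefix)

    if __name__ == "__main__":
        table = RibWriteTable()
        table.write("clientA", 10, "203.0.113.0/24", {"next-hop": "198.51.100.1"})
        try:
            table.write("clientB", 5, "203.0.113.0/24", {"next-hop": "198.51.100.2"})
        except CollisionError as e:
            print("collision:", e)
        table.write("clientB", 20, "203.0.113.0/24", {"next-hop": "198.51.100.2"})
        print(table.owner("203.0.113.0/24"))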
_______________________________________________
i2rs mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/i2rs
