> On Oct 20, 2015, at 12:48 PM, Martin Bjorklund <[email protected]> wrote:
>
> "Einar Nilsen-Nygaard (einarnn)" <[email protected]> wrote:
>> Carl,
>>
>> A bit of a late response, sorry, but I was on vacation. Please see
>> inline.
>>
>>> On Oct 15, 2015, at 2:53 PM, Robert Wilton -X (rwilton - ENSOFT
>>> LIMITED at Cisco) <[email protected]> wrote:
>>>
>>> Hi Carl,
>>>
>>> On 15/10/2015 14:05, Carl Moberg (camoberg) wrote:
>>>> Maybe we're coming down to a definition of "requirement" here. But the
>>>> issue I raised can be summarized as follows:
>>>>
>>>> """
>>>> The assumption of a 1:1 mapping ignores situations where a change to
>>>> an intended configuration leaf value may result in several instances
>>>> of applied configuration leaf values (operational state) to be updated
>>>> in the backend framework across several subsystems.
>>>> """
>>
>> The operational state you are talking about here is not, per my
>> understanding of "applied configuration", what is meant by applied
>> configuration. The "several instances" are instead implementation
>> details that are perhaps changed as a result of changing a single
>> intended configuration leaf.
>>
>> And at this point I agree with Rob and will say that, from an end
>> user's perspective, it is not reasonable to expect them to patch
>> together the 1:N relationship between intended config and what I will
>> call "operational state" (NOT applied configuration) to determine if
>> the device is actually doing what it was asked to do. The fact that
>> this is exactly what we have typically asked users to do is, I
>> believe, one of the key reasons why we are seeing requirements
>> articulated this way.
>
> OK. Do you mean that instead you want the server to do this job?
> I.e., if it happens to distribute a single leaf to N internal
> components, it should aggregate the status of the leaf's value in
> these N internal components into one single value, and report that
> value?
To be a little more precise, I mean that it is the server's job to make
the information about whether or not intended config has been
instantiated correctly available in a way that is really easy to
correlate with the intended config asserted by the client. Aggregating
the status of the operational leaves into a single value that could,
for example, be diff'd with the intended config would meet the
criterion of being easy to correlate with the intended config.

What we may find is that thinking about requirements like this makes us
more mindful of how we define the relationship between intended config
and how our internal components work together to achieve it, and of how
we expose the success or failure of config operations in a way that is
easier for users to make sense of.

Additionally, I'm not saying that there isn't operational state that
each of the N internal components has. That still exists, and will
continue to exist as part of ongoing operational management.

Cheers,

Einar

> /martin
>
>
>>
>>>> My issue is that the requirement seems to ignore these situations, and my
>>>> suggestion is to relax the requirement.
>>> I think that the operator requirement is stating that the multiple
>>> actual applied configuration leaves across multiple subsystems need to
>>> be mapped back to a single logical leaf for the purposes of the
>>> applied config leaf that is exposed to the operators.
>>>
>>> If no such mapping is possible, then this would imply that there is no
>>> way for the operator to determine whether a particular item of
>>> configuration is actually in effect on a system. Is this reasonable
>>> behaviour for a system?
>>>
>>> If such a mapping is possible, then I can see benefit in performing
>>> such a mapping on the device itself rather than requiring each and
>>> every operator to know and program the mapping.
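[Editor's aside] The server-side mapping discussed above, where the N per-subsystem applied instances of a single intended leaf are collapsed into one reported value that a client can then diff against intended config, might be sketched roughly as follows. This is hypothetical Python; the function names, data layout, and example values are invented for illustration and do not come from the thread.

```python
# Hypothetical sketch of server-side aggregation of applied config.
# Names and data shapes are invented; they are not from the thread.

def applied_value(subsystem_values):
    """Collapse per-subsystem applied values into one reported value.

    Only if every subsystem reports the same value can the server claim
    a single applied value for the leaf; otherwise the leaf is still
    converging (or failed), and no single value is reported.
    """
    unique = set(subsystem_values)
    return unique.pop() if len(unique) == 1 else None

def diff_intended_applied(intended, applied_by_subsystem):
    """Return leaves whose aggregated applied value differs from intended."""
    out_of_sync = {}
    for leaf, want in intended.items():
        got = applied_value(applied_by_subsystem.get(leaf, []))
        if got != want:
            out_of_sync[leaf] = {"intended": want, "applied": got}
    return out_of_sync

# Example: "mtu" was distributed to three subsystems and one has not yet
# converged, so the server cannot report a single applied value for it.
intended = {"mtu": 9000, "description": "uplink"}
applied = {
    "mtu": [9000, 9000, 1500],   # e.g. line card, hardware, forwarding plane
    "description": ["uplink"],   # only one backend component holds this leaf
}
print(diff_intended_applied(intended, applied))
# {'mtu': {'intended': 9000, 'applied': None}}
```

The design choice here is deliberately conservative: the server reports no single applied value until all N components agree, which is one way to make a 1:N backend look like the 1:1 view a client can easily correlate.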
>>
>> +1
>>
>> Cheers,
>>
>> Einar
>>
>>
>>> Is the main concern here that the cost of implementing this mapping in
>>> the system may be prohibitively expensive?
>>>
>>> Thanks,
>>> Rob
>>>
>>>>
>>>> I don't believe 1.C addresses the actual concern with the requirement.
>>>>
>>>>> On Oct 14, 2015, at 8:14 PM, Kent Watsen <[email protected]> wrote:
>>>>>
>>>>>
>>>>> I believe that you are correct; it seems that we've doubled down on 1.C,
>>>>> and so #5 should now be marked as DEAD.
>>>>>
>>>>> This action will be taken if no objection is made before tomorrow's
>>>>> interim.
>>>>>
>>>>> Thanks,
>>>>> Kent
>>>>>
>>>>>
>>>>> From: Robert Wilton <[email protected]>
>>>>> Date: Tuesday, October 13, 2015 at 9:29 AM
>>>>> To: Kent Watsen <[email protected]>, "[email protected]"
>>>>> <[email protected]>
>>>>> Subject: Re: [netmod] opstate-reqs #5: Support for situations when
>>>>> structure of intended configuration is not the same as applied
>>>>>
>>>>> From the interim meeting two weeks ago, it was clarified that the
>>>>> schema of the intended configuration nodes is expected to be the same
>>>>> as the schema of the applied configuration nodes, so that clients can
>>>>> easily relate the two.
>>>>>
>>>>> I think that the requirement text for 1.C and the proposed updated
>>>>> text for 1.D make this reasonably clear.
>>>>>
>>>>> Hence, is issue 5 now at the state where it can be closed as not being
>>>>> a requirement? Or is there something further that needs to be
>>>>> discussed first?
>>>>>
>>>>> Thanks,
>>>>> Rob
>>>>>
>>>>>
>>>>> On 30/09/2015 16:44, Kent Watsen wrote:
>>>>>> It's time to tackle another issue, just before tomorrow's meeting, and
>>>>>> this time I'm picking a hard one:
>>>>>>
>>>>>> https://github.com/netmod-wg/opstate-reqs/issues/5
>>>>>>
>>>>>> Already Carl, Mahesh, Einar, and Andy have posted 18 comments on the
>>>>>> GitHub issue tracker.
>>>>>> Please first read the comments posted there, and
>>>>>> then continue the discussion here on the mailing list (not on the
>>>>>> GitHub issue tracker).
>>>>>>
>>>>>> Note that this issue is closely tied to the definition of "applied
>>>>>> configuration", which is exactly what issue #4 concerns
>>>>>> (https://github.com/netmod-wg/opstate-reqs/issues/4), and on which
>>>>>> Mahesh and Einar have already posted comments. As these two issues
>>>>>> (#4 and #5) are so highly related, I'm going to simultaneously open
>>>>>> the other issue for discussion now as well.
>>>>>>
>>>>>> Thanks,
>>>>>> Kent
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> netmod mailing list
>>>>>> [email protected]
>>>>>> https://www.ietf.org/mailman/listinfo/netmod
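[Editor's aside] The working conclusion in this thread is that applied configuration is expected to use the same schema as intended configuration, so that clients can easily relate the two. Under that assumption, client-side correlation reduces to a plain structural diff of two same-shaped trees. The sketch below is hypothetical Python; the tree layout, paths, and example leaves are invented for illustration and are not from any YANG model referenced in the thread.

```python
# Hypothetical sketch: when intended and applied config share one schema,
# a client can correlate them with a generic structural diff.
# The data layout here is invented for illustration.

def tree_diff(intended, applied, path=""):
    """Yield (path, intended_value, applied_value) wherever the trees differ."""
    if isinstance(intended, dict):
        for key, want in intended.items():
            sub = applied.get(key) if isinstance(applied, dict) else None
            yield from tree_diff(want, sub, f"{path}/{key}")
    elif intended != applied:
        yield (path, intended, applied)

intended = {"interface": {"eth0": {"mtu": 9000, "enabled": True}}}
applied  = {"interface": {"eth0": {"mtu": 1500, "enabled": True}}}

print(list(tree_diff(intended, applied)))
# [('/interface/eth0/mtu', 9000, 1500)]
```

Because both trees follow the same schema, the diff needs no per-model mapping logic, which is precisely the operational benefit the requirement text (1.C/1.D) is after.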
