> You either expose the state or you require the different apps to maintain a
> copy of the state, which is sort of

> A) dangerous, because an app could be building a corrupted/biased view of
> the state if they cannot "spoof" the configuration traffic from other apps and
> then start creating havoc
> B) tedious, because we need all apps to replicate the state

I'm not seeing the planes as being "replicated by various apps," though. If we 
take a simple model --

Internal device state -- agent -- client(s)

Then the internal device state is the actual running state, the agent is 
holding the "planes of glass," and each client is holding its desired state, 
the current state (as the feedback loop), and whatever policies/actions/code it 
needs to do its job. I don't see where client 1 is going to need a full copy of 
the "planes of glass" the agent is holding, so I don't see any complexity for 
anyone but the agent here -- and that should be optional.
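
To put the division of state in rough code terms -- this is only a sketch, in 
Go, and every name in it is mine for illustration, not from any draft:

    // Who holds what, per the simple model above.
    type Plane map[string]string // one client's overlay of ephemeral state

    type Agent struct {
        running map[string]string // mirror of the internal device state
        planes  map[string]Plane  // the "planes of glass," one per client
    }

    type Client struct {
        desired  map[string]string                 // what it wants installed
        current  map[string]string                 // feedback from the agent
        policies []func(current map[string]string) // its own actions/code
    }

Note there's no field anywhere for client 1 to hold client 2's plane -- only 
the agent ever sees them all, and even that is optional.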

Consider what happens when some internal device state changes that will 
(ultimately) trigger some policy on some particular client. There are two 
options here -- 

1. The agent informs all the clients that have subscribed to that particular 
event, and then processes the commands that come back.
2. The agent keeps some sort of "cache," or "planes of glass," that allows it 
to replace information in the internal store as needed, based on predetermined 
factors the clients have given it in advance. (Both reactions are sketched 
below.)
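
In the same sketch terms (again, all names are mine), the two reactions look 
something like this:

    type Event struct {
        Kind string // e.g., a link going down
        Key  string // which object in the tree changed
    }

    type Subscriber interface{ Notify(Event) }

    // Option 1: tell the subscribed clients, then act on whatever
    // commands they send back -- the longer loop.
    func notifyClients(subs map[string][]Subscriber, ev Event) {
        for _, s := range subs[ev.Kind] {
            s.Notify(ev)
        }
    }

    // Option 2: react from pre-positioned backup state, based on
    // factors the clients configured in advance -- the tight loop.
    func reactLocally(cache map[string]string, install func(k, v string), ev Event) {
        if backup, ok := cache[ev.Key]; ok {
            install(ev.Key, backup)
        }
    }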

I don't see either of these as invalid or "wrong" -- just different responses. 
In fact, we might want one reaction in some cases, and another in others. There 
may be some cases where time is of the essence, and hence having the 
information needed within the agent to make a decision locally is important. 
There may be other cases where it's not worth the agent locally storing this 
information; the loop can be a bit longer. We've already admitted this with the 
inclusion of "backup routes" in the RIB model -- which is already an unlimited 
cache of "backup information" for one particular node in the tree, albeit just 
for next hops, a restriction that is disturbing in many ways from a routing 
perspective.
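
For reference, the nexthop precedent already has roughly this shape (my field 
names, not the draft's):

    // A RIB entry already carries an unbounded cache of backup state --
    // but only for this one node in the tree.
    type Nexthop struct{ Address string }

    type RIBEntry struct {
        Prefix  string
        Active  Nexthop
        Backups []Nexthop // pre-positioned "backup information"
    }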

The point I think is being made here is that there's no reason not to 
generalize the "next hop" situation -- the WG has said, "no, we don't want to 
do that," but the nexthop example has already broken that rule, so I'm not 
certain we're even being consistent about our objections, or clear about what 
it is we want to do.

IMHO, there's nothing wrong with the agent being _allowed_ to hold such backup 
state, or "planes of glass," even if it's for a subset of the entire model, and 
letting users and vendors work together to decide what makes sense and what 
doesn't in terms of holding this sort of information locally. I don't see why, 
as a WG, we should pre-emptively decide that _only_ the nexthop has this sort 
of speed requirement, and force everything else to go through the
notify/reply loop. What this is going to lead to, in the long run, I think, is 
actually more complexity -- as use cases are put forward to support tighter 
loops for something other than the nexthop, we're going to have to put forward 
drafts extending the models for each and every case. I don't see how this is 
simpler than putting the protocol mechanics in place up front to support three 
responses to an attempt to install some piece of state --

1. Can't do it for reason x
2. Done
3. Stored, but not installed for reason x (same reason codes as above)

And the one additional query -- can you please give me all the state you have 
stored for this object?
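
Sketched out -- names and codes illustrative only, not a proposed wire 
encoding -- that's all the protocol would need to add:

    // The three responses to an attempt to install some piece of state.
    type InstallResult int

    const (
        Failed    InstallResult = iota // 1. can't do it for reason x
        Installed                      // 2. done
        Stored                         // 3. stored, but not installed
    )

    type InstallResponse struct {
        Result InstallResult
        Reason string // reason code x, shared by Failed and Stored
    }

    // ...plus the one additional query: everything the agent has
    // stored (installed or not) for a given object.
    type StoredStateQuery struct{ ObjectID string }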

If there's any additional complexity, I'd love to hear about it, because I 
don't see it right now -- I'm perfectly willing to have my mind changed. But in 
terms of complexity tradeoff -- and complexity is always a tradeoff -- I see a 
couple of extra things to define now, versus going through the entire model and 
somehow building what's already there for nexthop into anything that someone 
can present a valid use case for.

> APP1: gets (and locks) the state, calculates the new state and then writes
> (and unlocks) the state
> APP2: sees the state is locked and waits until it is unlocked to get and
> lock it to do its operations

We don't do locks/unlocks -- that's a key point here. This is all atomic. It's 
not a database; it's more like a web app with a RESTful interface.
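
A sketch of what I mean -- any serialization happens inside the agent, if an 
implementation even wants it, and nothing lock-like is ever visible to a 
client:

    // Each write is one whole operation with one response -- there is
    // no lock/unlock protocol for clients to speak.
    type writeOp struct {
        key, value string
        done       chan error // nil on success, a reason code on failure
    }

    func serveWrites(ops <-chan writeOp, state map[string]string) {
        for o := range ops {
            state[o.key] = o.value // applied whole, in one step
            o.done <- nil
        }
    }

A client that "loses" a race simply sees the other client's state on its next 
read and reacts; nobody ever waits on a lock.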

No locks. Ever. 

:-)

Russ

