Joel:
On the reading of composite panes: if you believe that ephemeral state can
be set on any configuration node, or can exist as an independent node, then
consider the following:
Config-node-1
  ephemeral node-1 (client 1)
  ephemeral node-1 (client 2)
Each ephemeral node could have an ID and a priority. The composite view can
apply a consistent policy (e.g., highest priority wins, with tagging for
first-wins tie-breaking). Asking for the composite in the read RPC is one
possible use; asking for all of the individual nodes is also possible.
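As a minimal sketch of that resolution policy in Python (EphemeralEntry and
resolve_composite are names I am making up for illustration, not terms from
any I2RS draft):

    from dataclasses import dataclass

    @dataclass
    class EphemeralEntry:
        client_id: str
        priority: int   # higher value wins
        sequence: int   # arrival order, for first-wins tie-breaking
        value: object

    def resolve_composite(entries):
        """Highest priority wins; the earliest arrival breaks ties."""
        return max(entries, key=lambda e: (e.priority, -e.sequence))

    entries = [
        EphemeralEntry("client-1", 10, 1, "value-A"),
        EphemeralEntry("client-2", 10, 2, "value-B"),
    ]
    assert resolve_composite(entries).client_id == "client-1"  # first-wins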
I believe this caching is required to address the "tight-loop" issue of
< 1 second query/response times.
If we define the group issues in the YANG language, then I think we can
handle multiple I2RS ephemeral entries, at the cost of more memory. The
amount of memory used by the I2RS caching entries can be set by the Agent
implementation and expressed through the model's capabilities. I agree with
Andy's original position that caching will be necessary for high
performance on medium-to-large-scale data models.
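As a sketch of what I mean (EphemeralCache is a made-up class, with the
capacity standing in for an advertised model capability):

    from collections import OrderedDict

    class EphemeralCache:
        def __init__(self, max_entries):
            self.max_entries = max_entries  # set by the Agent implementation
            self.entries = OrderedDict()

        def store(self, key, entry):
            """Hold an unapplied entry; refuse once the budget is spent."""
            if key not in self.entries and len(self.entries) >= self.max_entries:
                return False  # Agent declines to cache more state
            self.entries[key] = entry
            return True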
Lastly, I am not sure our consensus on "no caching" was strong enough to
refuse to consider it now. A consensus of fewer than 10 people in interims
or on email should not prevent a larger group from reconsidering; that is a
different type of consensus than a long debate on the list plus IETF
discussions.
Sue
-----Original Message-----
From: i2rs [mailto:[email protected]] On Behalf Of Joel M. Halpern
Sent: Tuesday, November 03, 2015 6:49 PM
To: Andy Bierman; Susan Hares
Cc: [email protected]; Russ White
Subject: Re: [i2rs] Conversation on Priority and Panes
The basic problem I have with your description is that it treats over-writes
as normal and desirable, and assumes that the priority handling is producing
the correct results. If we actually believed that, I suppose making them
more efficient would be sensible.
But that is not actually what we are doing. Priority over-write is a
disambiguation mechanism. There is no expectation that it is even a good
heuristic for correctness. It is merely predictable. Trying to optimize
the control loop for cases of improper behavior seems the wrong place to
optimize.
Having said that, if we want to get into multiple panes of ephemeral glass,
then we are going to need mechanisms to:
- read the composite effect,
- read what I as a client have applied, and
- indicate in the response to a write request that the agent has accepted
the request, even though it is not actually taking effect.
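A rough sketch of the first two read mechanisms (Pane and its fields are
illustrative names only, not terminology from any draft):

    from dataclasses import dataclass

    @dataclass
    class Pane:
        client_id: str
        priority: int
        value: object

    def read_composite(panes):
        """The effective value after priority resolution across panes."""
        return max(panes, key=lambda p: p.priority).value

    def read_client_pane(panes, client_id):
        """What one client wrote, whether or not it is taking effect."""
        return next((p.value for p in panes if p.client_id == client_id), None)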
And if we do all that, clients whose state is pending will need to keep
monitoring all related activity even though their network application's
state is not in effect.
And if an operation has multiple aspects, and one of them gets over-written
but retained, then the client probably can't leave it there; it has to go
in and remove that state, since it will likely be removing the rest of the
related state that would otherwise sit there with its linchpin missing.
And then we get into the question of how much unapplied state an agent is
going to store. So in all probability the client still has to be prepared
to be told that not only was its state over-written (which is technically
an error) but that it got deleted too.
Yours,
Joel
On 11/3/15 6:14 PM, Andy Bierman wrote:
> Hi,
>
> This raises a data modeling issue.
> Should every "backup parameter" be modeled explicitly in the YANG
> module, or can the ephemeral datastore be used for that, without any
> additional data model objects?
>
> The I2RS architecture supports this "backup" concept (lower priority
> client), except it requires a notification from the agent and
> subsequent request from the client to make the backup active. With
> NETCONF or RESTCONF today, that round-trip will probably take around
> 500 to 1000 milliseconds, and probably much worse under high load.
>
> Our original proposal to the design team included a pane of glass for
> each client (and unique priorities for each client), because some
> people were talking about sub-milli-sec latency, and I know there is
> no way NETCONF or RESTCONF is going to support this sort of tight
> control loop.
>
> Whether the server rejects lower-priority edits right away, or whether
> the agent caches the request in the form of a client-specific pane,
> the client still needs to observe the data resources with pub/sub and
> decide whether its own particular state is still relevant.
> IMO the client complexity is the same either way, especially since the
> caching is only done if the client requests it.
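>
> As a minimal sketch of that client-side loop in Python (the
> notification field name below is made up for illustration; it is not a
> NETCONF/RESTCONF or pub/sub draft term):
>
>     # React to a pub/sub notification for a resource this client wrote.
>     def on_update(notification, my_client_id):
>         winner = notification["current-winner"]  # hypothetical field
>         if winner != my_client_id:
>             # Our edit was over-written or parked in a lower-priority
>             # pane; decide whether to withdraw it or keep it as backup.
>             reevaluate_own_state(notification)
>
>     def reevaluate_own_state(notification):
>         ...  # application-specific policy decision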
>
> The only difference is likely to be almost a million times faster
> fail-over to the backup on the server.
>
>
> Andy
>
>
>
>
> On Mon, Nov 2, 2015 at 8:32 PM, Susan Hares <[email protected]
> <mailto:[email protected]>> wrote:
>
> Russ, thank you for kicking off this discussion. It is an interesting
> approach. I know that certain RIB implementations allow a back-up
> route.
>
> Sue
>
> -----Original Message-----
> From: i2rs [mailto:[email protected]
> <mailto:[email protected]>] On Behalf Of Russ White
> Sent: Monday, November 02, 2015 7:39 PM
> To: [email protected] <mailto:[email protected]>
> Subject: [i2rs] Conversation on Priority and Panes
>
> Y'all --
>
> After sleeping on the discussion last night, I think the panes of
> glass (or is it pains of glass?? :-) ) idea is still by and large
> another expression of the priority concept within I2RS. The concept
> does bring up one specific point of interest, however -- what about
> backup information? Some vendor RIBs, for instance, allow a routing
> process to install not only the best path as calculated by the
> process: if the install fails, some RIB implementations allow the
> process to place the route in the "backup pool." This allows the
> local RIB process to drop to the "second best path," in terms of
> priority, so the local RIB doesn't need to query the routing
> processes to switch in the case of a withdraw or change in topology.
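>
> As a rough sketch of that behavior (Rib, Route, and backups are names
> I am inventing here; real vendor RIBs of course differ):
>
>     class Route:
>         def __init__(self, prefix, next_hop, priority):
>             self.prefix, self.next_hop, self.priority = prefix, next_hop, priority
>
>     class Rib:
>         def __init__(self):
>             self.active = {}   # prefix -> installed Route
>             self.backups = {}  # prefix -> list of held Routes
>
>         def install(self, route):
>             cur = self.active.get(route.prefix)
>             if cur is None or route.priority > cur.priority:
>                 if cur is not None:
>                     self.backups.setdefault(route.prefix, []).append(cur)
>                 self.active[route.prefix] = route
>             else:
>                 self.backups.setdefault(route.prefix, []).append(route)
>
>         def withdraw(self, prefix):
>             """Fail over to the best backup without querying the processes."""
>             pool = self.backups.get(prefix, [])
>             if pool:
>                 best = max(pool, key=lambda r: r.priority)
>                 pool.remove(best)
>                 self.active[prefix] = best
>             else:
>                 self.active.pop(prefix, None)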
>
> To convert this to I2RS terms, it does seem worthwhile to me to have
> the capability for a local agent to accept an install instruction for
> some particular ephemeral state, and if the install fails, to hold
> that state for future use. This would apply to any sort of ephemeral
> data, including that which is configured locally on the network
> device. Rather than trying to think of this as "panes of glass,"
> though, this would convert it to simply a backup list of
> lower-priority items the local agent can use in the case of the
> failure of the highest-priority item currently in use.
>
> The nice thing about this view is that it doesn't require a lot of
> changes at the protocol level. The only thing that needs to be
> available is the option for an agent to send three different types of
> answers to an install request --
>
> 1. This ephemeral state was installed and is now being used.
> 2. This ephemeral state was rejected/not installed -- with potential
> codes for why (out of range parameter, etc.).
> 3. This ephemeral state was not installed, but is being held as a
> backup.
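>
> In code form, a minimal sketch (InstallResult is a name I'm making up
> for illustration, not a proposed protocol element):
>
>     from enum import Enum
>
>     class InstallResult(Enum):
>         INSTALLED = 1        # state is in use
>         REJECTED = 2         # not installed; carries a reason code
>         HELD_AS_BACKUP = 3   # not installed, retained for future use
>
>     class InstallResponse:
>         def __init__(self, result, reason=None):
>             self.result = result
>             self.reason = reason  # e.g. "out-of-range parameter" on REJECTED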
>
> Using these semantics, the actual implementation of such a feature is
> up to the local agent. It might be that some agents don't know how to
> hold backup information, or that it doesn't make any sense to hold
> some sorts of information in a backup list. This is fine -- the
> install can just be rejected without further note. Locally configured
> information could simply be treated as an item on the backup list,
> such that the priorities can be considered, as always, in deciding
> what to install when any particular action is taken.
>
> It seems, to me, that this is a simpler way to consider the same
> problem set, and reduces to an actual protocol mechanism that appears
> (?) to be fairly simple, and leaves as much flexibility as possible
> for any given agent implementation.
>
> Thoughts?
>
> :-)
>
> Russ
>
_______________________________________________
i2rs mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/i2rs