On Tue, Apr 29, 2014 at 10:19:14AM -0400, Jamal Hadi Salim wrote:
> This is back again with node overload.
> Our experience with ForCES made us prioritize events and request-response
> differently. This is important only when there is an overload case.
> As an example, if I had sufficient cycles/bandwidth/RAM to respond to
> either
> an ADD or an event, I would choose to use those resources to process and
> respond to the ADD; which means events are not reliably delivered to the
> clients.
> 
> I think something like this would be needed for I2RS.
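The overload policy Jamal describes above can be sketched as a two-class scheduler: request/response work (e.g. an ADD) always outranks event notifications, and under overload events are shed first, making event delivery best-effort. This is only an illustrative toy, not the ForCES or I2RS mechanism; the class names, capacity limit, and shedding rule are all assumptions for the sketch.

```python
import heapq

# Priority classes: lower value wins. These names are hypothetical.
REQUEST, EVENT = 0, 1

class OverloadQueue:
    """Toy scheduler: requests (e.g. ADD) preempt queued events.

    When the queue is at capacity, a newly arriving request displaces
    the newest queued event; a newly arriving event is simply dropped.
    Request/response thus stays reliable while events are best-effort.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []
        self.seq = 0  # FIFO tie-break within a priority class

    def submit(self, priority, item):
        if len(self.heap) >= self.capacity:
            events = [e for e in self.heap if e[0] == EVENT]
            if priority == REQUEST and events:
                # Overloaded: shed the newest queued event to make room.
                self.heap.remove(max(events))
                heapq.heapify(self.heap)
            else:
                return False  # dropped: no room, nothing sheddable
        heapq.heappush(self.heap, (priority, self.seq, item))
        self.seq += 1
        return True

    def next(self):
        """Pop the highest-priority pending item, or None if idle."""
        return heapq.heappop(self.heap)[2] if self.heap else None
```

Submitting two events and then an ADD to a full queue of capacity 2 displaces the second event, so the ADD is served first and one event is never delivered, which mirrors the trade-off described above.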

In our architecture, we permit multiple clients to communicate with
one agent, which somewhat compounds the issue.  It also opens up
discussion of what we can do about a few issues in this problem space:

- Pipelining: If you can submit multiple requests but they must be satisfied
  in the order submitted (e.g. as per netconf), the amount of work that a
  given request implies has an impact on overall throughput.
- Even if you're able to bypass this to some extent using multiple client
  sessions, if the resources you're working with rendezvous at a common
  blocking point (e.g. some RIB service that has internal mutual-exclusion
  semantics), then this can be problematic.  We're not doing locking, and
  thus we're not exploring the semantics of saying "would block".
- The work in a reply may be huge.  Consider a single client having in its
  work queue an "add route" and a "give me the entire BGP RIB".  Ordering
  will clearly have impacts on how quickly the add route operation
  completes.
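The ordering point in the last bullet can be made concrete with a small calculation: in an in-order (netconf-style) pipeline, each request's completion time is the cumulative cost of everything ahead of it. The request names and unit costs below are purely illustrative assumptions, chosen so that a full BGP RIB dump dwarfs a single route add.

```python
from itertools import accumulate

def completion_times(pipeline):
    """In-order pipeline model: each request waits for all earlier ones.

    'pipeline' is a list of (name, cost) pairs, where cost is an
    abstract unit of work.  Returns {name: finish_time}.
    """
    names, costs = zip(*pipeline)
    return dict(zip(names, accumulate(costs)))

# Hypothetical costs: dumping the whole RIB is ~1000x a single add.
inorder = completion_times([("dump-bgp-rib", 1000), ("add-route", 1)])
reordered = completion_times([("add-route", 1), ("dump-bgp-rib", 1000)])
```

With the dump queued first, the route add finishes at time 1001 instead of time 1, while the dump itself barely notices the difference (1000 vs. 1001); that asymmetry is why ordering policy matters far more to the small request than to the large one.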

I suspect that (the mutex example aside) operational semantics across more
than one client session may address many of these issues.  What isn't clear
is what needs to be addressed in our documents about such issues.

-- Jeff

_______________________________________________
i2rs mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/i2rs