(1) As Daya mentioned, the ORCHESTRATOR-RECONCILE flag (I like the arbitrator/autonomous naming better) should be set at controller startup, because in a production environment devices are generally already configured with the controller settings. In that case devices can connect while your controller is coming up, and they will take the normal reconciliation path, which is exactly what we want to avoid. We can use iptables rules to block the connections etc., but that is not a realistic option for data centers with hardware devices (admins don't like to play with that).
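To make the "set it before any switch can connect" point concrete, here is a minimal sketch of how the plugin side could gate device connection handling on a mode that is fixed at startup. All class and method names below are hypothetical placeholders, not the actual OFP code:

    // Sketch only: ReconciliationModeGate and its callers are hypothetical.
    public class ReconciliationModeGate {

        public enum ReconciliationMode { AUTONOMOUS, ARBITRATED }

        // Fixed at controller startup, before any switch is allowed to connect,
        // so that early connections can never race into the autonomous path.
        private final ReconciliationMode mode;

        public ReconciliationModeGate(ReconciliationMode startupMode) {
            this.mode = startupMode;
        }

        /** Called from the device connection handler once mastership is won. */
        public void onDeviceConnected(String nodeId,
                                      Runnable autonomousReconcile,
                                      Runnable publishToOperationalOnly) {
            if (mode == ReconciliationMode.AUTONOMOUS) {
                autonomousReconcile.run();        // today's behaviour: reconcile, then publish
            } else {
                publishToOperationalOnly.run();   // arbitrated: publish the node and wait
            }                                     // for the orchestrator to drive reconciliation
        }
    }

Because the mode is read once at startup, a device that connects while the controller is still coming up gets the same treatment as one that connects later.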
(2) We should refrain from providing any API that says "all" (e.g. commit all bundles). It is not really a clean API, because if you implement an ALL API you have to answer questions such as: How should FRM proceed if one of the bundle commits fails? How should FRM proceed if one of the devices gets disconnected while its bundle is being committed? There can be multiple answers to these questions, depending on how someone designs or deploys their application. So I would say the lifecycle of the API should relate to a single switch only, from the plugin's perspective. The orchestrator can always write an ALL wrapper API on top of that and implement its own control flow for what it should do when it hits any of the above scenarios. The only state the plugin should care about is the individual switch state.

On Mon, Apr 16, 2018 at 2:27 AM, Dayavanti Gopal Kamath < [email protected]> wrote: > Hi arun, > > Please see inline. > > > > Thanks, > > daya > > > > *From:* D Arunprakash > *Sent:* Monday, April 16, 2018 9:46 AM > *To:* Josh Hershberg <[email protected]>; Dayavanti Gopal Kamath < > [email protected]> > *Cc:* Anil Vishnoi <[email protected]>; openflowplugin-dev@lists. > opendaylight.org; Prasanna Huddar <[email protected]>; > Muthukumaran K <[email protected]>; Vinayak Joshi < > [email protected]> > *Subject:* RE: [openflowplugin-dev] Approaches for making budle service > more generic (normal resync and upgrade resync being special cases) > > > > Daya, > > Comments inline. > > > > Regards, > > Arun > > > > *From:* Josh Hershberg [mailto:[email protected] <[email protected]>] > *Sent:* Sunday, April 15, 2018 5:29 PM > *To:* Dayavanti Gopal Kamath <[email protected]> > *Cc:* D Arunprakash <[email protected]>; Anil Vishnoi < > [email protected]>; [email protected]; > Prasanna Huddar <[email protected]>; Muthukumaran K < > [email protected]>; Vinayak Joshi <[email protected]> > *Subject:* Re: [openflowplugin-dev] Approaches for making budle service > more generic (normal resync and upgrade resync being special cases) > > > > inline... > > > > On Fri, Apr 13, 2018 at 5:18 PM, Dayavanti Gopal Kamath < > [email protected]> wrote: > > Hi arun, > > Few comments- > > 1. do you guys want the ‘upgrading’ flag to stay in genius, and a > separate flag orchestrator-reconcile to be set for the plugin. > > [Arun] orchestrator-reconcile would be an apt flag for the plugin, > corresponding to the upgrading flag present in genius. > > 2. didn’t understand steps 3 and 4, and 23/24 - isn’t this flag > directly set by the external orchestrator, what else needs to be set by > upgrade manager? Or will you maintain 2 flags, one external, 1 internal ? > > [Arun] Orchestrator sets the orchestrator-reconcile flag to enabled in > config DS and the upgrade manager receives the DTCN change and sets it > locally in cache. Both are same. No need to set 2 flags. > > Daya->in that case, it's better to not have multiple states, ofp should > just base its logic directly off the orchestrator-reconcile flag. > > 3. In step 12, any bundles opened need to be cleaned up on > controller and switch on failure, the bundle (if created) should not get > inadvertently committed, or left open for an hour or so. > > [Arun] Yes, we are cleaning up the bundle id from controller. Do we need > to close the bundle on the ovs as well? As per your previous comments > bundle will become invalid if there is a switch disconnect. > > Daya->no, that would not be possible since it's for a connection loss > scenario.
We should ask the ovs to delete any open bundles if there is a > connection failure. Can we please check if this is already in place? if > not, Josh, is this something Flavio can add as well? > > 4. this flag(as well as upgrading flag) will be set after > controller down, and then odl will be restarted followed by config replay, > so want to make sure this is possible in the datamodel for the flag > > In a worse case scenario, we can keep blocking the connections (we use > iptables now) until after ODL comes up and the orechestrator sets things up. > > > [Arun] we are not aware that the flag will be set when the controller is > down. Will check the possibility of setting the flat when controller is > down. Also, I hope unsetting will be done while the controller the running. > > > > Daya->yes, commit will be during controller up. Its safer to set it before > the controller is rebooted to prevent any leaks. > > > > 5. can frm not ensure, that if a bundle commit is received, and in > progress, no other thread should get executed parallely, and try to program > flows/groups directly? It will be difficult for applications to know about > these transient states in the plugin. > > [Arun] ofp just wants to just provide infra and the applications has to > code the logic. There will be checks and queues to be added in frm for this > and it might affect the performance. So, community reviewers feels to keep > to this way. Check if the logic can be added in genius mdsal-util. > > Daya->I don’t understand how any non ofp entitiy can be aware of transient > state, to act on it its not being pushed into the frm data model as far as > I can tell. So, for example, lets say external orchestrator has issued a > bundle commit, now a new api comes in, and application listener fires, its > just going to create new flows into frm related to that. only ofp will know > if a bundle commit has gone through before the new flow can be programmed. > We need something like a barrier with bundle commit in the plugin. overall > I think, this is an SLA the plugin should provide, it’s a matter of > consistency and sequencing of flows. > > 6. timebound commit defaults could be set statically, in the plugin > for now. Should we think about doing a rollback instead of commit on the > bundle if the timer expires? > > [Arun] We can decide on the timeout logic. > > Daya->for now, we can just go with a static approach, do some trial and > error, and come up with a fixed value. I am inclined towards discarding the > bundle if it times out in the plugin, rather than commiting it. what do you > guys think? > > > > Thanks, > > daya > > > > *From:* D Arunprakash > *Sent:* Friday, April 13, 2018 11:31 AM > *To:* Anil Vishnoi <[email protected]> > *Cc:* Josh Hershberg <[email protected]>; Dayavanti Gopal Kamath < > [email protected]>; openflowplugin-dev@lists. > opendaylight.org; Prasanna Huddar <[email protected]>; > Muthukumaran K <[email protected]>; Vinayak Joshi < > [email protected]> > > *Subject:* RE: [openflowplugin-dev] Approaches for making budle service > more generic (normal resync and upgrade resync being special cases) > > > > Updated the slide deck with the comments received from openflowplugin > community and local team. > > > > Please go through the slides and provide comments. We have started > implementing the same design, so it be good if we receive comments asap. > > > > Note: Will raise the spec review in couple of days. 
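One way to picture the "barrier with bundle commit" Daya asks for in point 5 is a per-node gate that direct flow/group programming has to pass through while a commit is in flight. A rough sketch, with hypothetical names rather than actual FRM code:

    // Illustrative only: CommitBarrier and its method names are hypothetical.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class CommitBarrier {

        private final Map<String, ReadWriteLock> perNode = new ConcurrentHashMap<>();

        private ReadWriteLock lockFor(String nodeId) {
            return perNode.computeIfAbsent(nodeId, id -> new ReentrantReadWriteLock());
        }

        /** Wrap a bundle commit: blocks new direct programming for this node. */
        public void runBundleCommit(String nodeId, Runnable commit) {
            ReadWriteLock lock = lockFor(nodeId);
            lock.writeLock().lock();
            try {
                commit.run();
            } finally {
                lock.writeLock().unlock();
            }
        }

        /** Wrap a direct flow/group push: waits if a commit is in progress. */
        public void runDirectProgramming(String nodeId, Runnable program) {
            ReadWriteLock lock = lockFor(nodeId);
            lock.readLock().lock();
            try {
                program.run();
            } finally {
                lock.readLock().unlock();
            }
        }
    }

Whether such a gate belongs in FRM, in genius mdsal-util, or inside the plugin itself is exactly the open question in the exchange above.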
> > > > Regards, > > Arun > > > > *From:* Anil Vishnoi [mailto:[email protected] <[email protected]>] > > *Sent:* Tuesday, April 10, 2018 12:28 AM > *To:* D Arunprakash <[email protected]> > *Cc:* Josh Hershberg <[email protected]>; Dayavanti Gopal Kamath < > [email protected]>; openflowplugin-dev@lists. > opendaylight.org; Prasanna Huddar <[email protected]>; > Muthukumaran K <[email protected]>; Vinayak Joshi < > [email protected]> > *Subject:* Re: [openflowplugin-dev] Approaches for making budle service > more generic (normal resync and upgrade resync being special cases) > > > > Hi Arun, > > > > Can you please update the details that we discussed today? That will > finalize the proposal so folks can take a final look. > > > > @Josh/@Daya, please spare some time to have a look at the final proposal > (no surprises as such, but want to make sure that we all are on same page). > > > > Thanks > > Anil > > > > On Mon, Apr 9, 2018 at 12:18 AM, D Arunprakash <[email protected]> > wrote: > > Updated the slides based on the comments received and there are few open > items to discuss. > > > Will discuss in today’s community meeting and get a closure on this. > Please join. > > > > Regards, > > Arun > > > > *From:* Anil Vishnoi [mailto:[email protected]] > *Sent:* Wednesday, April 04, 2018 9:59 PM > *To:* D Arunprakash <[email protected]> > *Cc:* Josh Hershberg <[email protected]>; Dayavanti Gopal Kamath < > [email protected]>; openflowplugin-dev@lists. > opendaylight.org; Prasanna Huddar <[email protected]>; > Muthukumaran K <[email protected]>; Vinayak Joshi < > [email protected]> > > *Subject:* Re: [openflowplugin-dev] Approaches for making budle service > more generic (normal resync and upgrade resync being special cases) > > > > Hi Arun, > > > > Slide decks looks good. > > > > I think using the mode for opening/closing reconciliation is not very > clear API. So we need to look at the whole control flow from the individual > switch lifecycle. > > > > At a very high level, > > > > (1) If OFP is in Arbitrator mode, it won't trigger normal reconciliation. > It will just wait for a trigger to start reconciliation for specific switch. > > (1) OFP should get some notification (RPC/DS) to open the reconciliation > window. > > (2) OFP should get some notification (RPC/DS) to commit the reconciliation > window. > > > > Once the reconciliation is done in both the mode (plugin will know about > it), it will move back to normal way of installing flow/groups. Once we > open the bundle API to applications, they can decide how they want to > program the device (bundle base or normal). > > > > UPGRADE_MODE=false to trigger the commit for all the node is going to be > bit messy for error handling, because we need to make lot of decisions > here, like > > what should plugin do when one of the switch fails the reconciliation -- > abort the commit on other switches or continue ? > > > > So keeping the reconciliation lifecycle per switch will be more clean, so > whenever Orchestrator will call to close the reconciliation window, it will > get error for that switch and it can decide what it wants to do in that > state. > > > > Anil > > > > On Thu, Mar 29, 2018 at 10:22 AM, D Arunprakash < > [email protected]> wrote: > > Hi All, > > Attached the ppt with the implementation details for the replay based > upgrade support from openflowplugin. > > Please go through this and provide the comments. 
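The "ALL is just an orchestrator wrapper over a per-switch API" argument (point 2 at the top of this mail, and Anil's per-switch lifecycle comment above) could look roughly like this on the orchestrator side. The per-switch commit call and its failure policy are hypothetical placeholders for whatever RPC the plugin finally exposes:

    // Sketch of an orchestrator-side commit-all wrapper; names are hypothetical.
    import java.util.ArrayList;
    import java.util.List;

    public class CommitAllWrapper {

        public interface PerSwitchReconciliation {
            /** Commits the reconciliation window for one switch; throws on failure. */
            void commitReconciliation(String nodeId) throws Exception;
        }

        public enum FailurePolicy { CONTINUE_ON_ERROR, ABORT_ON_FIRST_ERROR }

        /** Returns the node ids whose commit failed, for the orchestrator to act on. */
        public List<String> commitAll(PerSwitchReconciliation plugin,
                                      List<String> connectedNodes,
                                      FailurePolicy policy) {
            List<String> failed = new ArrayList<>();
            for (String nodeId : connectedNodes) {
                try {
                    plugin.commitReconciliation(nodeId);
                } catch (Exception e) {
                    // retry, skip, abort, or mark for manual resync -- this policy
                    // lives entirely in the orchestrator, not in the plugin
                    failed.add(nodeId);
                    if (policy == FailurePolicy.ABORT_ON_FIRST_ERROR) {
                        break;
                    }
                }
            }
            return failed;
        }
    }

The plugin only ever sees single-switch commits, so the awkward "what if one switch fails mid-ALL" decisions never leak into its contract.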
> > > > Regards, > > Arun > > *From:* Josh Hershberg [mailto:[email protected]] > *Sent:* Wednesday, March 14, 2018 12:40 PM > *To:* Dayavanti Gopal Kamath <[email protected]> > *Cc:* D Arunprakash <[email protected]>; Anil Vishnoi < > [email protected]>; [email protected] > > > *Subject:* Re: [openflowplugin-dev] Approaches for making budle service > more generic (normal resync and upgrade resync being special cases) > > > > I just want to check, when we say that OFP will public the switch early to > the operational DS, this will include any port status events as well. Is > this correct? > > > > On Wed, Mar 14, 2018 at 9:02 AM, Dayavanti Gopal Kamath < > [email protected]> wrote: > > Hi arun, > > Looks good. So, will the system be in autonomous mode by default > (applicable for switch connection resets, and switch, controller reboots), > and external orchestrator has to set it to arbitrer mode when it sets the > upgrading flag to true (in genius). > > > > Please see inline for other comments. > > > > > > Thanks, > > daya > > > > *From:* D Arunprakash > *Sent:* Tuesday, March 13, 2018 11:21 AM > *To:* Josh Hershberg <[email protected]>; Anil Vishnoi < > [email protected]> > *Cc:* Dayavanti Gopal Kamath <[email protected]>; > [email protected] > *Subject:* RE: [openflowplugin-dev] Approaches for making budle service > more generic (normal resync and upgrade resync being special cases) > > > > Last week’s discussion on reconciliation/upgrade profiles, > > > > 1. There shall be two profiles/modes configured which can change OFP > behavior in context of reconciliation > > Reconciliation Mode : <ARBITARTED|AUTONOMOUS> > > AUTONOMOUS mode implies that OFPlugin shall perform reconciliation > autonomously as it does now without any change in the workflow - ie. > Switch- > connection-handling followed by bundle-based Reconciliaion execution > followed by publishing of switch to the Inventory Opertaional DS > > ARBITRATED mode implies that OFPlugin shall not execute reconciliation > by itself but by allowing higher level app "Reconciliation arbiter / > orchestrator" to initiate and > complete the bundle-based reconciliation including any error-handling > thereof > > 2. 
This implies two changes for OFPlugin > a) OFP must publish the switch to Inventory Operational soon after the > connection-handling and master-election phases complete without triggering > Reconciliation by > itself in ARBITRATED mode > b) OFP must expose RPCs to start, modify and close bundle so that > "Reconciliation arbiter / orchestrator" is able to invoke the same to > realize reconciliation > > Reconciliation Arbiter / Orchestrator (which is functional only in > scenario when Reconciliation-Mode==ARBITRATED) > ============================================================ > ================ > Listeners for Reconciliation Arbiter / Orchestrator : > This is more or less similar to current reconciliation function of FRM > a) Inventory Oper DTCN for switch status sensing > b) Config Inventory DTCN for flows / groups and meters > > Conditions: > ------------ > a) Whether or not data is available for given switch in Config Inventory DS > b) Whether the system is in upgrade mode > > Common reaction for Switch Connection handling when > - Condition 1 : UPGRADE_MODE=true > - Condition 2 : UPGRADE_MODE=false > For all conditions, reconciliation can be executed by the arbiter as per > below steps > > Trigger reconciliation by > a) Issuing implicit close of implicit bundle-id - this is mainly > to address following failure scenario in OF-HA context : For OFHA purge the > incomplete implicit bundles > which might have been started by previous switch-master in > case leadership changes. Since we use a single implicit bundle-id, issuing > a cancel-bundle from new > switch-master implicitly would not result in any issues and > can be considered as safety measure. > > > > Daya-> According to Vinayak, an ovs closes any bundle opened by a > controller if the connection to the controller breaks. So, we probably > don’t need step a to be pushed to the switch, only need to handle idle > bundles on the controller itself. > > > > > b) starting implicit bundle using OFP RPC > b) reading flows / groups / meters from DS if present > > > > daya->There should be no flows in the frm when we are in arbitrer mode I > guess. But if there are such flows, will there be any conflicts between frm > listener processing these, vs new flows and groups getting triggered due to > replay? I am assuming multithreading will just work as is, and we will > handle the dependent action programming issues separately (using local > caching). > > > c) add the same to the implicit bundle if available > d) defer the bundle close for a period of time (which is not a > timer managed by OFP but by some static configuration managed within the > scope of > Reconciliation arbiter / orchestrator > e) During the deferred period - if any flows / group / meter > configurations are received via Inventory Config DTCN, arbiter shall push > those flows to current bundle > f) When timer expires, issue bundle closure OFP RPC call to all > switches in one close loop > > > > daya->how about a hard timeout for closing the bundle in case the rpc is > never received. 
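Putting steps (a)-(f) and the hard-timeout question together, a per-switch sketch of the arbiter flow might look like the following. The BundleRpcs interface is a stand-in for whatever open/add/commit/discard RPCs end up being exposed, and the timeout here discards the bundle rather than committing it, which is still an open choice in this thread:

    // Rough sketch only; BundleRpcs and ArbitratedReconciler are hypothetical names.
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    public class ArbitratedReconciler {

        public interface BundleRpcs {
            void discardOpenBundle(String nodeId);                            // step (a)
            void openBundle(String nodeId);                                   // step (b)
            void addMessages(String nodeId, List<Object> flowsGroupsMeters);  // steps (c)-(e)
            void commitBundle(String nodeId);                                 // step (f)
        }

        private final BundleRpcs rpcs;
        private final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();
        private final long hardTimeoutSeconds;

        public ArbitratedReconciler(BundleRpcs rpcs, long hardTimeoutSeconds) {
            this.rpcs = rpcs;
            this.hardTimeoutSeconds = hardTimeoutSeconds;
        }

        /** Called by the arbiter when the switch shows up in operational inventory. */
        public ScheduledFuture<?> startWindow(String nodeId, List<Object> configFromDs) {
            rpcs.discardOpenBundle(nodeId);   // safety: purge anything left by a previous master
            rpcs.openBundle(nodeId);
            if (!configFromDs.isEmpty()) {
                rpcs.addMessages(nodeId, configFromDs);
            }
            // Hard timeout (Daya's point): if nobody ever closes the window,
            // discard rather than commit a half-filled bundle.
            return timer.schedule(() -> rpcs.discardOpenBundle(nodeId),
                                  hardTimeoutSeconds, TimeUnit.SECONDS);
        }

        /** Called when the orchestrator decides the window is complete. */
        public void closeWindow(String nodeId, ScheduledFuture<?> timeout) {
            if (timeout.cancel(false)) {      // commit only if the timeout has not already fired
                rpcs.commitBundle(nodeId);
            }
        }
    }

Flows and groups arriving via the Inventory Config DTCN during the deferred period would simply be funnelled through the same addMessages() path before closeWindow() is called.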
> > > > > Other key decisions : > 1) Since we switch-agnostic implicit bundle inventory data model need not > be changed and that kind of change reuqires a more rigorous > impact-assessment and can be taken up in future > > > > > > Regards, > > Arun > > > > *From:* [email protected] [ > mailto:[email protected] > <[email protected]>] *On Behalf Of *Josh > Hershberg > *Sent:* Wednesday, February 28, 2018 1:26 PM > *To:* Anil Vishnoi <[email protected]> > *Cc:* Dayavanti Gopal Kamath <[email protected]>; > [email protected] > *Subject:* Re: [openflowplugin-dev] Approaches for making budle service > more generic (normal resync and upgrade resync being special cases) > > > > A few comments inline... > > > > On Tue, Feb 27, 2018 at 10:58 PM, Anil Vishnoi <[email protected]> > wrote: > > Hi Muthu, > > > > Sorry for delayed response. Please see inline .. > > > > On Fri, Feb 23, 2018 at 2:44 AM, Muthukumaran K < > [email protected]> wrote: > > Hi all, > > > > @Arun, @Gobi, please feel free to add anything which I might have missed > out from our discussions > > > > Context : > > ======= > > During last OFPlugin weekly call, we were discussing about how > bundle-based resync flow would behave in upgrade context. In normal resync > scenarios – ie. a) switch reconnecting and/or b) controller reboots (and > subsequent switch reconnects), flows and groups which can be dispatched via > resync bundle is sourced from the inventory Config DS. In case of upgrade, > inventory Config DS would be completely nuked and hence a different > handling is required. During this discussion, a concern was raised as to > why OFPlugin should become extensively cognizant of “upgrade” as a special > case. The thought process was more to expose the bundle service generically > from OFPlugin’s end so that upgrade-orchestration becomes a higher level > functionality without switching behaviors from with OFP > > > > So, following this mail boostraps the discussions with following > approaches and choosing an approach which is less invasive. We can continue > to discuss the merits, demerits and fine-tune the approach to arrive at an > agreeable mechanism. > > > > Premises : > > ======= > 1) Bundle construct is currently used in Reconciliation and Resync context > 2) The objective is to expose bundle as a generic construct for > applications to use whenever a set of flows and/groups have to be > provisioned to the switch in "atomic" manner > 3) Usage scenarios > a) Common bundle shared across applications - usage example - > coordinated resync > b) Enabling applications to use bundles with specific set of > constraints if the merit of atomicity provided by bundles offsets the > constraints (see approach 2) > > NOTE: Bundle is transactional concept, so to expose these generically we > need to make sure that the > > > > API control loop is properly expose, so when user configure any bundle, > it can also fetch status of the bundle some way. > > > > Bundle lifecycle : > > =========== > Approach 1 : > > ======== > Tying up bundle permanently with a flow/group at DS level optionally. In > other words, a given flow is always associated with a fixed bundle-id. This > can > > > > tantamount to adding non-persistable (config = false) bundle-id attribute > to flow and group models. 
This appears to be a very rigid approach which > would not be flexible to associate flows and groups with different bundles > for different scenarios (normal operations and resync – to be specific) > > This approach breaks the transactional nature of the bundle because now > flows will be tied to the bundle-id and it needs to be persisted till > flow/groups lifetime. Also if we really want to tag the flow/group we will > have to user the config data store, because from northbound side, > operational data store can not be modified. If we put in the config data > store that means once FRM need to use the same bundle-id, and that can be a > major limitation because now application also need to make sure that they > are using the unique bundle-id. It can be done in plugin as well, but point > is somebody need to do this logistic handling and that won't be simple. > > > Approach 2 : > > ======== > Allowing applications to create and close the bundle with not more than > one bundle active per switch. > All groups and flows provisioned during the window of create and close of > the bundle by a given application can be pushed in context of the > current-active > bundle. This approach has a benefit that flow / group object in DS need > not explicitly refer the bundle. But separate flavor of flow and group > add/mod/del API > needs to be invoked by apps to indicate that corresponding flows and group > actions have to be encapsulated within a bundle. It would be the > responsibility of > the application to close the bundle explictly > > I think this make sense to me as a first cut implementation. Although we > need to provide programming interface (trough rpc, datastore) so that > internal and external application both can consume this functionality. > Given that at any point of time there is only active bundle for a switch, > it should provide the following interface to the applications > > > > (1) Created/update/close bundle -- Already exist, but we need to > > (2) Get bundles with the status (with current implementation it will > return only one bundle). > > (3) Report proper error if application is trying to modify the bundle > that is already committed. > > (4) notification/rpc for caching the last error message, because bundle > can throw error while staging as well and application should have some way > to make sure that the bundle they are staging is in error-free condition. > > > > I like this option. One minor issue is, and I could be wrong about this, > that as far as I can tell the spec allows for multiple concurrent bundles > per switch. If this is true, then this design does not match the spec since > it only allows for a single bundle per switch at any given time. A possible > mitigation to this approach (which does not need to be implemented > immediately) is to have the notion of the "default bundle" which is what is > described in Approach 2 and in the future we could augment that with the > ability to specify bundles both in the CRUD/bundle management APIs that > Anil mentioned and in the flow and croup config writes. I don't think this > is a major issue, just something that occurred to me. > > > > > > > > > Approach 3 : > > ======== > Keeping bundle(s) as floating, leasable, non-persistent resouce(s) which > can be associated with one or more flows/groups combination. Application > functionality - resync being one of them, can lease one or more bundle(s) > as required from a pool of bundles, associate the same with a switch and > start using the same. 
Some involved yang-modeling and service-logic would > be required for providing such leasing service and corresponding validation > logic (eg. handling normal releasing and abnormal releasing - lease-onwer > node dies etc.) would require a non-trivial service implementation and > testing for various failure scenarios. But this approach would be provide > more flexibility from usage perspective. > > In my humble opinion, this probably is not a good implementation given > the transactional nature of the bundles. I see bundle service as a platform > feature that user can leverage and it's should be pretty-much without any > business logic. Just like a normal finite state machine type API's where > user create a resource and take care of it's state during it's lifetime. > Plugin should just throw an error message whenever user try to do the > operation that is not allowed as per the openflowplugin specification, like > using the already committed bundle. So in this scenario, application is the > one that need to handle/coordinate the bundle lifetime, like the other > resource flow/group. That i think give more freedom to applications and it > probably make the programming model more simpler. > > > > Approach 2 appears to be less invasive and is nothing but a special case > of Approach 3 (of course we can discuss on any other approaches and agree > upon a more acceptable way forward) > > > > I would like to throw one more approach here, which probably can be more > simpler and have same control flow for most of the scenarios. Plugin should > provide a "reconcile" rpc that user can use to trigger reconciliation for > individual switch. So FRM won't do any reconciliation till application > sends a rpc request to trigger the reconciliation. This functionality will > be enabled when user enable the "orchestrator" profile in the > openflowplugin config file (we can choose any other name, it's just for > discussion), otherwise everything will work the way it works in the > existing implementation. > > > > This is how it should work in the two scenario you mentioned above > > (1) Switch connection/disconnection : Configuration is already present in > the data store, so whenever application see the switch connects to the > controller, it can trigger the reconciliation. If bundle based > reconciliation is enabled it will use the bundle based reconciliation > otherwise it will use the normal reconciliation. In clustered environment, > even if switch disconnect from one of the controller, it won't notify the > application about disconnection, so there is no action required from the > application. But if switch is disconnected from all the controller, that > will send notification to application and only "owner" controller will > react to the "reconcile" request. > > > > (2) Controller reboot : This will work the same way as above. Only "owner" > controller will react to the "reconcile" request. > > > > (3) Controller Upgrade : Same as above, but in this case user might want > to push all of the configuration to the switch and fire the reconcile > request for that switch. Rest works the same. > > > > So from plugin perspective, it's very simple contract to the application > -- Plugin will reconcile the configuration present in the config data store > at the time of reconcile request and then open it for general device > configuration. Although this approach does not give a control over the > bundles, but it provides a control to user on when to start the > reconciliation. 
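On the application side, the "reconcile" RPC contract described above could be consumed roughly as follows. EntityOwnership and ReconcileRpc are hypothetical stand-ins for the real MD-SAL services; the point is only the control flow, in particular that only the owner instance reacts:

    // Application-side sketch of the proposed per-switch reconcile trigger.
    public class ReconcileTrigger {

        public interface EntityOwnership {
            boolean isOwner(String nodeId);
        }

        public interface ReconcileRpc {
            /** Asks the plugin to replay config-DS state for this switch, then open it up. */
            void reconcile(String nodeId) throws Exception;
        }

        private final EntityOwnership ownership;
        private final ReconcileRpc plugin;

        public ReconcileTrigger(EntityOwnership ownership, ReconcileRpc plugin) {
            this.ownership = ownership;
            this.plugin = plugin;
        }

        /** Invoked from an operational-inventory listener when a switch appears. */
        public void onSwitchConnected(String nodeId) {
            if (!ownership.isOwner(nodeId)) {
                return;                    // non-owner instances stay quiet in a cluster
            }
            try {
                plugin.reconcile(nodeId);
            } catch (Exception e) {
                // per-switch error handling stays with the application, as argued above
            }
        }
    }

The upgrade case differs only in that the orchestrator first pushes the replayed configuration into the config DS and then fires the same per-switch reconcile call.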
> > > > This mechanism can be further enhanced when openflowplugin will expose > bundle to the application. Applications can use the bundles to reconcile in > that scenario, but this "reconcile" rpc will still be require to notify > plugin to open the gate for general configuration. > > > > On a side note, we can think of using the data store as well to provide a > reconcile flag, but we need to discuss how we are going to maintain it's > state in data store for various event like switch > connection/disconnection. > > > > Let me know your thoughts. > > > > So, we discussed this in some of the upgrade meetings and my opinion is > that it does not benefit the upgrade use case enough to justify the > feature. Either way we need a way for the application to control the > bundles in the case upgrade and however that needs to be trigger the > upgrade installer/orchestrator can trigger that before the switches > reconnect. Also, note that even if we are in reconciliation mode waiting to > fill up the bundle, we still need to receive the port status updates > because we can't generate the flows without them. I assume that shouldn't > be too much of a problem. > > > > > > > > > Common model change across these approaches is following (assuming we > choose Approach 2) > > ============================================================= > 1) A separate bunde-list to be added as child to flowcapablenode - to > represent generic 1:MANY relationship between switch and bundle for future > extensions > 2) Each element of bundle-list shall be a bundle object which has > attributes - bundle-id, bundle-status (OPEN|CLOSE), bundle-type (REGULAR > for now and can > be added with additional enum values to provide one more level of > indirection in future if we want to add types like RESYNC - for example in > future) > 3) Especially for supporting approach-2, there must not be more than 1 > bundle active for a DPN at any point in time. This should be imposed as a > constraint in > the bundle-list. If we have more than 1 bundle active, then there > must be explicit mapping of flow and group entities to respective bundles > > > > Or, as I suggest above, this can be postponed to a later stage of > development by introducing the addition convenient notion of a "default > bundle". > > > > > > > > > > I was wondering if we really want to maintain the bundle related data in > config data store? Given that bundle is transactional in nature and they > does not make sense across switch reconnect, controller reboot, i think > providing the rpc's for CRUD to maintain the lifecycle of bundle probably > make more sense here ? We can maintain the status of bundles in operational > data store as well because we will require that for clustering environment > (to avoid routed rpc). > > > CRUD Handling of new bundle in bundle-list (assuming we choose Approach 2) > > ================================================== > 1) Applications create a common shared bundle (bundle-id=<bundle-id;bundle- > status=OPEN;bundle-type=REGULAR> under bundle-list and subsequently > invoke special flowmod and groupmod APIs (to indicate that corresponding > flow-mods and group-mods are to be added in context of OPEN'ed bundle) > > By special means to add a leaf in the flow/group mod that we write to > config data store? 
> > > > 2) Bundle created under bundle-list can be disposed as part of nodal > reboot (since the bundle-list is operational in nature) > > Yeah, I think using rpc here will help, because then we can just > maintain the status in operational data store and that's non-persistent > anyways. > > 3) Bundle gets updated only for close status. ie. when applications decide > to close bundle which they have opened > 4) Bundle is read mainly by OFPlugin via DTCN to get update events on > bundle OPENing and CLOSing (for initial case, this would be mainly for > triggering resync) > > > > E2E flow (assuming Approach 2) : > > ====================== > > The proposal is to use > > a) Reconciliation API which is present in OFPlugin to announce the > available of switch to apps > > b) In reaction, apps OPEN a bundle using the approach 2 and push > flows and/or groups > > c) When completed, application can CLOSE > > > > Please see “app” referred above as normal resync or upgrade-specific > resync functionality and not in general notion of visualizing apps as ITM, > IFM or L3VPN service > > > > Let us continue the discussion during next week’s call as well as over > this mail chain > > > > Regards > > Muthu > > > > > > > > > _______________________________________________ > openflowplugin-dev mailing list > [email protected] > https://lists.opendaylight.org/mailman/listinfo/openflowplugin-dev > > > > > > -- > > Thanks > > Anil > > > _______________________________________________ > openflowplugin-dev mailing list > [email protected] > https://lists.opendaylight.org/mailman/listinfo/openflowplugin-dev > > > > > > > > > > -- > > Thanks > > Anil > > > > > > -- > > Thanks > > Anil > > > -- Thanks Anil
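Tying the E2E flow above (OPEN a bundle, push flows/groups, CLOSE) and the "not more than one active bundle per switch" constraint of Approach 2 back to the per-switch lifecycle argued at the top of this mail, an application-facing facade might behave roughly as follows. This is a hypothetical facade over whatever RPCs get exposed, not the actual openflowplugin interfaces:

    // Sketch of Approach 2 usage: a single active ("default") bundle per switch.
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SingleBundlePerSwitch {

        private final Map<String, Long> activeBundle = new ConcurrentHashMap<>();
        private long nextBundleId = 1;

        /** OPEN: fails if this switch already has an active bundle (Approach 2 constraint). */
        public synchronized long open(String nodeId) {
            if (activeBundle.containsKey(nodeId)) {
                throw new IllegalStateException("bundle already open for " + nodeId);
            }
            long bundleId = nextBundleId++;
            activeBundle.put(nodeId, bundleId);
            return bundleId;
        }

        /** Push flow/group mods in the context of the currently open bundle. */
        public void addToBundle(String nodeId, List<Object> flowAndGroupMods) {
            Long bundleId = activeBundle.get(nodeId);
            if (bundleId == null) {
                throw new IllegalStateException("no open bundle for " + nodeId);
            }
            // a real implementation would translate the mods into bundle-add
            // messages and send them to the switch under bundleId
        }

        /** CLOSE: commit the bundle and clear the per-switch slot. */
        public synchronized void close(String nodeId) {
            Long bundleId = activeBundle.remove(nodeId);
            if (bundleId == null) {
                throw new IllegalStateException("no open bundle to close for " + nodeId);
            }
            // a real implementation would send bundle-commit for bundleId and
            // report any commit error back to the calling application
        }
    }

Errors such as reopening or modifying an already-committed bundle surface directly to the calling application, which keeps the plugin's contract per switch and free of cross-switch policy.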
_______________________________________________ openflowplugin-dev mailing list [email protected] https://lists.opendaylight.org/mailman/listinfo/openflowplugin-dev
