What do we want to do with upgrades:
* per feature
* per node

Assumptions:
* we will not do ISSU (in-service software upgrade) to start?
    * not do in place upgrades inside a JVM?
    * can we get away without doing in-place upgrades inside a cluster?

Let's do the easiest thing first:
* do we need it to be hitless?
* I'd argue for now, we don't need it to be hitless, we just need it to be
automated
* configuration

OpenDaylight Upgrades:
1.) code upgrades (changing the actual artifacts)
2.) configuration upgrades (all the XML files from the Config Subsystem)
    * the hope is that this really becomes 3 (data upgrades)
    * except for a few things that have to load before the data store
3.) data upgrades (stuff stored in the MD-SAL)
    * schema migration: well studied in RDBMSes (Flyway, Liquibase)
    * we really want one upgrade "script" which then calls (per-model?
per-project?) sub-scripts which can do individual bits of data munging
    * we probably (at least eventually) need a maintenance mode which loads
the data store and YANG models, but not any bundles that will take action
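The "one upgrade script which calls per-model sub-scripts" idea above might look roughly like this; a minimal sketch in the spirit of Flyway/Liquibase versioned migrations, where every name (UpgradeStep, UpgradeRunner, runAll) is invented for illustration and is not a real OpenDaylight API:

```java
// Sketch of a Flyway/Liquibase-style runner that applies versioned,
// per-model migration steps in order. All names are hypothetical.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

interface UpgradeStep {
    int fromVersion();                    // schema version this step upgrades from
    void apply(StringBuilder dataStore);  // the per-model/per-project data munging
}

class UpgradeRunner {
    private final List<UpgradeStep> steps = new ArrayList<>();

    void register(UpgradeStep step) {
        steps.add(step);
    }

    // Applies each step whose fromVersion matches the current version,
    // in ascending order, and returns the final version. The real thing
    // would run in a maintenance mode where the data store and YANG
    // models are loaded but no application bundles act on the data.
    int runAll(int currentVersion, StringBuilder dataStore) {
        steps.sort(Comparator.comparingInt(UpgradeStep::fromVersion));
        int version = currentVersion;
        for (UpgradeStep step : steps) {
            if (step.fromVersion() == version) {
                step.apply(dataStore);
                version++;
            }
        }
        return version;
    }
}
```

Each project would contribute its own UpgradeStep; the runner only sequences them, which keeps the per-model munging decoupled from the orchestration.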

Other things:
* Karaf?
* Akka config files?
    * already done, handled in upgrades
* Needs to be tied into HA.
    * at least in the long run, want to be able to do zero-downtime upgrades

Maybe look at Cisco open source project to generate code to help translate
one version of a model to another as configured via a GUI. (Get it from
Jan.)


Lost structure:
* JSON loses information
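One concrete example of JSON losing information: JSON has a single number type, and many parsers hand numbers back as IEEE-754 doubles, which cannot represent every 64-bit integer (think a YANG uint64 counter). The helper below simulates that double round-trip; the class and method names are invented for illustration:

```java
// Simulates what a double-based JSON parser does to a large integer:
// doubles have 53 bits of mantissa, so values above 2^53 may not
// survive serialization and parsing.
class JsonPrecisionLoss {
    static boolean survivesDoubleRoundTrip(long value) {
        return (long) (double) value == value;
    }
}
```

For example, 2^53 survives the round trip but 2^53 + 1 comes back as a different number.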

Java frameworks for mapping one Java type to another:
* Dozer
* MapStruct
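Hand-written, the kind of version-to-version mapping these frameworks automate looks like the sketch below. NodeV1/NodeV2 are invented types, not real models; the "split a packed ip:port field" change is just an example of the per-model munging an upgrade sub-script might do:

```java
// Old model packed address and port into one string field.
class NodeV1 {
    String id;
    String ipAndPort;   // e.g. "10.0.0.1:6633"
}

// New model splits them apart.
class NodeV2 {
    String id;
    String ip;
    int port;
}

// The boilerplate Dozer or MapStruct would generate for us.
class NodeMapper {
    static NodeV2 toV2(NodeV1 old) {
        NodeV2 mapped = new NodeV2();
        mapped.id = old.id;
        int colon = old.ipAndPort.lastIndexOf(':');
        mapped.ip = old.ipAndPort.substring(0, colon);
        mapped.port = Integer.parseInt(old.ipAndPort.substring(colon + 1));
        return mapped;
    }
}
```

The appeal of a framework is that fields which kept the same name (id here) get mapped without any hand-written code, leaving only the genuinely changed fields to spell out.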
_______________________________________________
controller-dev mailing list
controller-dev@lists.opendaylight.org
https://lists.opendaylight.org/mailman/listinfo/controller-dev
