Excerpts from Zane Bitter's message of 2013-11-15 12:41:53 -0800:
> Good news, everyone! I have created the missing whiteboard diagram that
> we all needed at the design summit:
>
> https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat/The_Missing_Diagram
>
> I've documented 5 possibilities. (1) is the current implementation,
> which we agree we want to get away from. I strongly favour (2) for the
> reasons listed. I don't think (3) has many friends. (4) seems to be
> popular despite the obvious availability problem and doubts that it is
> even feasible. Finally, I can save us all some time by simply stating
> that I will -2 on sight any attempt to implement (5).
>
> When we're discussing this, please mention explicitly the number of the
> model you are talking about at any given time.
>
> If you have a suggestion for a different model, make your own diagram!
> jk, you can sketch it or something for me and I'll see if I can add it.
Thanks for putting this together, Zane. I just now got around to looking at it closely. Option 2 is good. I'd love for option 1 to be made automatic by making the client smarter, but parsing templates in the client will require some deep thought before we decide it is a good idea.

I'd like to consider a 2a, in which the same Heat engines the user is talking to also do the orchestration in whichever region they live in. I think that is actually the intention of the diagram, but as drawn it looks like there is a "special" engine that talks to the engines that actually do the work.

2 may actually morph into 3: if users don't like the nested-stack requirement of 2, we can do the work to make the engine create a nested stack per region itself (there is a rough sketch of what that could look like at the end of this message). That makes 2 a stronger choice for a first implementation.

4 has an unstated pro, which is that the attack surface is reduced. This makes more sense when you consider the TripleO case, where you may want the undercloud (hardware cloud) to orchestrate things in the overcloud (VM cloud), but you don't want the overcloud administrators to be able to control your entire stack.

Given the CAP theorem, option 5, the global orchestrator, would be doable without much change as long as partition tolerance were the property we gave up; we would just need a cross-region RPC bus and database. Of course, since regions are exactly the things most likely to be partitioned from one another, that is not really a good choice. Keeping partition tolerance at the cost of consistency lands us in the complexity black hole, and trading away availability makes it no better than option 4.
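
To make model 2 a bit more concrete, here is a rough sketch of what a top-level template with one nested stack per region might look like. This is purely illustrative: the OS::Heat::Stack resource type and the context/region_name property are assumptions about a possible design, not syntax Heat supports today, and the per-region child templates are just placeholders.

heat_template_version: 2013-05-23

description: >
  Hypothetical model-2 layout: one nested stack per region, each one
  orchestrated by the Heat engine local to that region.

resources:
  app_region_one:
    # Assumed resource type for a remote nested stack; not current syntax.
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionOne    # assumed way to target a region
      template: { get_file: app_region_one.yaml }    # placeholder child template

  app_region_two:
    type: OS::Heat::Stack
    properties:
      context:
        region_name: RegionTwo
      template: { get_file: app_region_two.yaml }

The engine the user talks to would only manage the two nested-stack resources; each region's engine would own the resources declared in its child template, which is what keeps the failure domains separate.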
