I'm trying to hash out where data will live for Tuskar (both long term and for its Icehouse deliverables). Based on the expectations for Icehouse (a combination of the wireframes and what's in Tuskar client's api.py), we have the following concepts:

= Nodes =
A node is a baremetal machine on which the overcloud resources will be deployed. The ownership of this information lies with Ironic. The Tuskar UI will accept the information needed to register nodes and pass it to Ironic. Ironic is consulted directly when information on a specific node or the list of available nodes is needed.
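
To make that division of labour concrete, here's a rough sketch of what talking to Ironic might look like from the Tuskar side, assuming python-ironicclient; the credentials, driver name and driver_info keys are purely illustrative:

# Rough sketch only: register and list nodes through python-ironicclient.
# The driver name and driver_info keys are placeholders, not a recommendation.
from ironicclient import client as ironic_client

ironic = ironic_client.get_client(
    1,                                     # Ironic API version
    os_username='admin',
    os_password='secret',
    os_tenant_name='admin',
    os_auth_url='http://keystone:5000/v2.0',
)

# Register a node with the details the Tuskar UI collected from the user.
node = ironic.node.create(
    driver='pxe_ipmitool',
    driver_info={'ipmi_address': '10.0.0.42',
                 'ipmi_username': 'root',
                 'ipmi_password': 'secret'},
)

# Node data is never cached in Tuskar; it's always read back from Ironic.
for n in ironic.node.list():
    print(n.uuid)

The point being that Tuskar itself stores nothing about nodes.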


= Resource Categories =
A specific type of "thing" that will be deployed into the overcloud. These are static definitions that describe the entities the user will want to add to the overcloud and are owned by Tuskar. For Icehouse, the categories themselves are added during installation for the four types listed in the wireframes.

Since this is a new model (as compared to other things that live in Ironic or Heat), I'll go into some more detail. Each Resource Category has the following information:

== Metadata ==
My intention here is that if we change one of the original four categories, or (more importantly) add more or allow users to add their own, the information describing each category is centralized in Tuskar and not reliant on the UI to tell the user what it is.

ID - Unique ID for the Resource Category.
Display Name - User-friendly name to display.
Description - User-friendly description to display alongside the name.
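
To make that concrete, here's a minimal sketch of what the Tuskar-side model could look like (SQLAlchemy is used purely as an example; the count and image_uuid columns correspond to the Count and Image sections below, and none of the names are settled):

# Illustrative only: one possible shape for the Resource Category model.
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class ResourceCategory(Base):
    __tablename__ = 'resource_categories'

    id = Column(Integer, primary_key=True)      # Unique ID
    name = Column(String(64), nullable=False)   # Display Name
    description = Column(String(255))           # Description
    count = Column(Integer, default=0)          # see the Count section below
    image_uuid = Column(String(36))             # see the Image section below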

== Count ==
In the Tuskar UI, the user selects how many of each category is desired. This count is stored in Tuskar's domain model for the category and is used when generating the template that is passed to Heat to make it happen.

These counts are what the Tuskar UI displays to the user for each category. The staging concept has been removed for Icehouse; in other words, the wireframes that cover the "waiting to be deployed" state aren't relevant for now.
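
So the counts only become "real" when the template is generated. Something along these lines is all Tuskar needs to hand over, where the session and helper are hypothetical and the four names are just stand-ins for whatever the wireframes define:

# Hypothetical helper: gather the per-category counts that drive template
# generation (builds on the ResourceCategory sketch above).
def get_desired_counts(session):
    """Return a {category name: count} mapping from Tuskar's own data."""
    return {c.name: c.count for c in session.query(ResourceCategory).all()}

# e.g. {'Controller': 1, 'Compute': 4, 'Object Storage': 2, 'Block Storage': 1}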

== Image ==
For Icehouse, each category will have one image associated with it. Last I remember, there was discussion on whether or not we need to support multiple images for a category, but for Icehouse we'll limit it to 1 and deal with it later.

Metadata for each Resource Category is owned by the Tuskar API. The images themselves are managed by Glance, with each Resource Category keeping track of just the UUID for its image.
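
A quick sketch of the Glance side, assuming python-glanceclient (endpoint and token handling are elided, and 'category' is a ResourceCategory from the sketch above):

# Sketch only: the image itself is owned by Glance; Tuskar keeps just the UUID.
from glanceclient import Client as GlanceClient

glance = GlanceClient('1', endpoint='http://glance:9292',
                      token='<keystone token>')

image = glance.images.get(category.image_uuid)
print(image.name)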


= Stack =
There is a single stack in Tuskar, the "overcloud". The Heat template for the stack is generated by the Tuskar API based on the Resource Category data (image, count, etc.). The template is handed to Heat to execute.
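
To make the flow concrete, here's a very rough sketch of the generate-and-hand-to-Heat step, assuming python-heatclient. The resource type, properties and sample categories are placeholders; the real generation logic is exactly the discussion I want to defer (see below).

# Very rough sketch: turn Resource Category data into a Heat template and
# create the single "overcloud" stack. The resource type, properties and
# sample categories are placeholders, not a proposal.
from collections import namedtuple

from heatclient.client import Client as HeatClient

Category = namedtuple('Category', 'name count image_uuid')


def generate_overcloud_template(categories):
    """Build a minimal template dict from the category counts and images."""
    resources = {}
    for category in categories:
        name = category.name.replace(' ', '')
        for i in range(category.count):
            resources['%s-%d' % (name, i)] = {
                'type': 'OS::Nova::Server',   # placeholder resource type
                'properties': {'image': category.image_uuid},
            }
    return {'heat_template_version': '2013-05-23', 'resources': resources}


# In Tuskar these would come from the Resource Category model, not literals.
categories = [Category('Controller', 1, '<image uuid>'),
              Category('Compute', 4, '<image uuid>')]

heat = HeatClient('1', endpoint='http://heat:8004/v1/<tenant id>',
                  token='<keystone token>')
heat.stacks.create(stack_name='overcloud',
                   template=generate_overcloud_template(categories))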

Heat owns information about running instances and is queried directly when the Tuskar UI needs to access that information.
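
Correspondingly, when the UI needs the live picture it would read it straight from Heat, roughly like this (reusing the heatclient handle from the sketch above):

# Sketch: Tuskar never caches instance state; the UI asks Heat directly.
stack = heat.stacks.get('overcloud')
print(stack.stack_status)

for resource in heat.resources.list('overcloud'):
    print('%s: %s' % (resource.resource_name, resource.resource_status))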

----------

Next steps for me are to start to work on the Tuskar APIs around Resource Category CRUD and their conversion into a Heat template. There's some discussion to be had there as well, but I don't want to put too much into one e-mail.


Thoughts?
