On 2013/06/12 21:26, Tzu-Mainn Chen wrote:
* can be allocated as one of four node types
It's pretty clear from the current verbiage, but I'm going to ask anyway:
"one and only one"?
Yep, that's right!
Confirming. One and only one.

My gut reaction is that we want to bite this off sooner rather than
later. This will have data model and API implications that, even if we
don't commit to it for Icehouse, should still be in our minds during it,
so it might make sense to make it a first class thing to just nail down now.
That is entirely correct, which is one reason it's on the list of requirements.  The forthcoming API design will have to account for it.  Not recreating the entire data model between releases is a key goal :)
Well yeah, that's why we should try to think longer-term, and the wireframes also cover a bit more than might land in Icehouse, so that we are aware of the future direction and don't have to completely rebuild the underlying models later on.
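To make the data-model point concrete, here is a minimal sketch (Python, purely illustrative -- not Tuskar's actual schema, and the four type names are my own assumption about the usual controller/compute/object storage/block storage split) of what 'one and only one' implies: a node carries a single resource-class reference rather than a many-to-many relation.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class NodeType(Enum):
        CONTROLLER = "controller"
        COMPUTE = "compute"
        OBJECT_STORAGE = "object storage"
        BLOCK_STORAGE = "block storage"

    @dataclass
    class ResourceClass:
        name: str
        node_type: NodeType        # every resource class maps to one node type

    @dataclass
    class Node:
        uuid: str
        # None means unallocated; otherwise the node belongs to exactly one
        # resource class, and therefore to exactly one node type.
        resource_class: Optional[ResourceClass] = None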

              * optional node profile for a resource class (M)
                  * acts as filter for nodes that can be allocated to that class (M)
To my understanding, once this is in Icehouse, we'll have to support
upgrades. If this filtering is pushed off, could we get into a situation
where an allocation created in Icehouse would no longer be valid in
Icehouse+1 once these filters are in place? If so, we might want to make
it more of a priority to get them in place earlier and not eat the
headache of addressing these sorts of integrity issues later.
Hm, can you be a bit more specific about how the allocation created in I might no longer be valid in I+1?

That's true.  The problem is that to my understanding, the filters we'd
need in nova-scheduler are not yet fully in place.
I think at the moment there are 'extra params' which we might use to some extent. But yes, AFAIK part of the support for filtered scheduling is still missing in nova.

I also think that this is an issue that we'll need to address no matter what.
Even once filters exist, if a user applies a filter *after* nodes are allocated,
we'll need to do something clever if the already-allocated nodes don't meet the
filter criteria.
Well, here is the thing. Once nodes are allocated, you can get a warning that the nodes in the resource class are no longer fulfilling the criteria (if they were changed), but that's all. It will be up to the user to decide whether to keep them in or unallocate them. The profiles are important at the point when the decision 'which node can get in' is being made.
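A rough sketch of that behaviour, just to illustrate (the names and the profile fields are made up, this is not an existing Tuskar or nova API): the profile gates which unallocated nodes may join the class, while already-allocated nodes that stop matching only produce a warning.

    def matches(profile, node):
        """True if the node's hardware satisfies every profile requirement."""
        return (node["cpus"] >= profile.get("min_cpus", 0)
                and node["ram_mb"] >= profile.get("min_ram_mb", 0)
                and node["disk_gb"] >= profile.get("min_disk_gb", 0))

    def allocatable_nodes(profile, unallocated):
        """Only matching nodes are candidates to get into the class."""
        return [n for n in unallocated if matches(profile, n)]

    def profile_warnings(profile, allocated):
        """Allocated nodes that no longer match are flagged, never evicted."""
        return ["node %s no longer meets the class profile" % n["uuid"]
                for n in allocated if not matches(profile, n)]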

          * nodes can be viewed by node types
                  * additional group by status, hardware specification
          * controller node type
             * each controller node will run all openstack services
                * allow each node to run specified service (F)
             * breakdown by workload (percentage of cpu used per node) (M)
      * Unallocated nodes
Is there more still being fleshed out here? Things like:
   * Listing unallocated nodes
   * Unallocating a previously allocated node (does this make it a vanilla resource or does it retain the resource type? Is this the only way to change a node's resource type?)
If we use a policy-based approach then yes, this is correct: first unallocate a node, then increase the number of resources in the other class.

But I believe that we need to keep control over the infrastructure and not rely only on policies. So I hope we can get to something like 'reallocate'/'allocate manually', which will force a node to be part of a specific class.
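In terms of the hypothetical model sketched earlier in this thread, the two flows would look roughly like this (again purely illustrative, the function names are made up):

    def unallocate(node):
        # Policy-based flow: the node drops back into the unallocated pool
        # and does not keep its previous resource class / node type.
        node.resource_class = None

    def reallocate(node, target_class):
        # Manual override: force the node into a specific class, regardless
        # of what the policy-driven counts would otherwise decide.
        node.resource_class = target_class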

   * Unregistering nodes from Tuskar's inventory (I put this under
unallocated under the assumption that the workflow will be an explicit
unallocate before unregister; I'm not sure if this is the same as
"archive" below).
Ah, you're entirely right.  I'll add these to the list.

      * Archived nodes (F)
Can you elaborate a bit more on what this is?
To be honest, I'm a bit fuzzy about this myself; Jarda mentioned that there was
an OpenStack service in the process of being planned that would handle this
requirement.  Jarda, can you give us a bit more detail?
So the thing is based on historical data. At the moment there is no service which would keep this type of data (might be a new project?). Since Tuskar will not only be deploying but also monitoring your deployment, it is important to have historical data available. If a user removes some nodes from the infrastructure, he would lose all the data and we would not be able to generate graphs. That's why archived nodes = nodes which were registered in the past but are no longer available.
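Just to illustrate the idea (nothing like this exists yet, and the field names are invented): instead of deleting the node record when it leaves the infrastructure, it would be flagged as archived so its history stays available for graphs.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class NodeRecord:
        uuid: str
        archived_at: Optional[datetime] = None   # set when the node is unregistered
        metric_samples: List[dict] = field(default_factory=list)  # kept for graphs

    def archive(node: NodeRecord) -> None:
        # The node disappears from the active inventory, but its samples
        # remain available for historical graphs.
        node.archived_at = datetime.utcnow()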

-- Jarda