On 12/06/2013 09:39 PM, Tzu-Mainn Chen wrote:
Thanks for the comments and questions!  I fully expect that this list of
requirements will need to be fleshed out, refined, and heavily modified,
so the more the merrier.

Comments inline:


*** Requirements are assumed to be targeted for Icehouse, unless marked
otherwise:
    (M) - Maybe Icehouse, dependency on other in-development features
    (F) - Future requirement, after Icehouse

* NODES

Note that everything in this section should be Ironic API calls.

    * Creation
       * Manual registration
          * hardware specs from Ironic based on MAC address (M)

Ironic today will want IPMI address + MAC for each NIC + disk/cpu/memory
stats
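
To make that concrete, here is a minimal sketch of what manual registration
could look like through python-ironicclient; the driver name, credentials,
addresses, and hardware figures are purely illustrative assumptions, not a
settled interface:

    # Illustrative sketch only: manual enrollment via python-ironicclient.
    # Driver name, credentials, and hardware figures are assumptions.
    from ironicclient import client

    ironic = client.get_client(
        1,
        os_username='admin',
        os_password='secret',
        os_tenant_name='admin',
        os_auth_url='http://undercloud.example.com:5000/v2.0')

    # IPMI details plus disk/cpu/memory stats go on the node record.
    node = ironic.node.create(
        driver='pxe_ipmitool',
        driver_info={'ipmi_address': '192.0.2.10',
                     'ipmi_username': 'root',
                     'ipmi_password': 'calvin'},
        properties={'cpus': 8, 'memory_mb': 16384, 'local_gb': 500})

    # One port per NIC, keyed by MAC address.
    ironic.port.create(node_uuid=node.uuid, address='52:54:00:ab:cd:ef')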

          * IP auto populated from Neutron (F)

Do you mean the IPMI IP?  I'd say IPMI address managed by Neutron here.

       * Auto-discovery during undercloud install process (M)
    * Monitoring
        * assignment, availability, status
        * capacity, historical statistics (M)

Why is this under 'nodes'? I challenge the idea that it should be
there. We will need to surface some stuff about nodes, but the
underlying idea is to take a cloud approach here - so we're monitoring
services that happen to be on nodes. There is room to monitor nodes,
as an undercloud feature set, but let's be very specific about
what is sitting at what layer.

That's a fair point.  At the same time, the UI does want to monitor both
services and the nodes that the services are running on, correct?  I would
think that a user would want this.

Would it be better to explicitly split this up into two separate requirements?

That was my understanding as well: Tuskar would not only care about the services of the undercloud but also the health of the actual hardware on which it's running. As I write that, I think you're correct - two separate requirements feel much more explicit about how that differs from elsewhere in OpenStack.

    * Management node (where triple-o is installed)

This should be plural :) - TripleO isn't a single service to be
installed - we've got Tuskar, Ironic, Nova, Glance, Keystone, Neutron,
etc.

I misspoke here - this should be "where the undercloud is installed".  My
current understanding is that our initial release will only support the
undercloud being installed onto a single node, but my understanding could
very well be flawed.

        * created as part of undercloud install process
        * can create additional management nodes (F)
     * Resource nodes

                         ^ nodes is again confusing layers - nodes are
what things are deployed to, but they aren't the entry point

         * searchable by status, name, cpu, memory, and all attributes from
         ironic
         * can be allocated as one of four node types

Not by users though. We need to stop thinking of this as 'what we do
to nodes' - Nova/Ironic operate on nodes, we operate on Heat
templates.

Right, I didn't mean to imply that users would be doing this allocation.
But once Nova does this allocation, the UI does want to be aware of how
the allocation is done, right?  That's what this requirement meant.

             * compute
             * controller
             * object storage
             * block storage
         * Resource class - allows for further categorization of a node type
             * each node type specifies a single default resource class
                 * allow multiple resource classes per node type (M)

What's a node type?

Compute/controller/object storage/block storage.  Is another term besides
"node type" more accurate?


             * optional node profile for a resource class (M)
                 * acts as filter for nodes that can be allocated to that
                 class (M)

I'm not clear on this - you can list the nodes that have had a
particular thing deployed on them; we probably can get a good answer
to being able to see what nodes a particular flavor can deploy to, but
we don't want to be second-guessing the scheduler...

Correct; the goal here is to provide a way through the UI to send
additional filtering requirements that will eventually be passed into the
scheduler, allowing the scheduler to apply additional filters.

         * nodes can be viewed by node types
                 * additional group by status, hardware specification

*Instances* - e.g. hypervisors, storage, block storage, etc.

         * controller node type

Again, need to get away from node type here.

            * each controller node will run all OpenStack services
               * allow each node to run specified service (F)
            * breakdown by workload (percentage of cpu used per node) (M)
     * Unallocated nodes

This implies an 'allocation' step that we don't have - how about
'Idle nodes' or something.

Is it imprecise to say that nodes are allocated by the scheduler?  Would
something like 'active/idle' be better?

     * Archived nodes (F)
         * Will be separate OpenStack service (F)

* DEPLOYMENT
    * multiple deployments allowed (F)
      * initially just one
    * deployment specifies a node distribution across node types

I can't parse this. Deployments specify how many instances to deploy
in what roles (e.g. 2 control, 2 storage, 4 block storage, 20
hypervisors), plus some minor metadata about the instances (such as
'kvm' for the hypervisor, and what undercloud flavors to deploy on).

That's what I meant by node distribution (the first word that jumped out
at me when I looked at the wireframes).
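
For illustration only, the rough shape I have in mind is something like the
sketch below, where pre-created per-role fragments are repeated according to
the requested counts; the fragment contents and names are assumptions, not
Tuskar's actual implementation:

    # Illustrative sketch: build a Heat template from per-role fragments,
    # scaled by the requested distribution. Fragment contents are assumed.
    ROLE_FRAGMENTS = {
        'controller': {'Type': 'OS::Nova::Server',
                       'Properties': {'flavor': 'baremetal-control',
                                      'image': 'overcloud-control'}},
        'compute': {'Type': 'OS::Nova::Server',
                    'Properties': {'flavor': 'baremetal-compute',
                                   'image': 'overcloud-compute'}},
    }

    def build_template(distribution):
        """distribution: e.g. {'controller': 2, 'compute': 20}"""
        resources = {}
        for role, count in distribution.items():
            for i in range(count):
                resources['%s%d' % (role, i)] = ROLE_FRAGMENTS[role]
        return {'HeatTemplateFormatVersion': '2012-12-12',
                'Resources': resources}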

       * node distribution can be updated after creation
    * deployment configuration, used for initial creation only

Can you enlarge on what you mean here?

This is the equivalent of the options that a user would have before
running, say, packstack.  My understanding is that passing those options
in is non-trivial, so we would just want to expose the default options
being used, without allowing the user to change them.

       * defaulted, with no option to change
          * allow modification (F)
    * review distribution map (F)
    * notification when a deployment is ready to go or whenever something
    changes

Is this an (M) ?

Ah, you're right to call this out; it needs expansion.  I'm guessing that
having a notification when a deployment completes successfully should be
possible during the Icehouse timeframe; having a notification when there's
an error is more of an (M).

* DEPLOYMENT ACTION
    * Heat template generated on the fly
       * hardcoded images
          * allow image selection (F)

We'll be spinning images up as part of the deployment, I presume - so
this is really M, isn't it? Or do you mean 'allow supplying images
rather than building just in time'? Or - I dunno, but let's get some
clarity here.

Tuskar currently doesn't allow you to actually specify what image is spun up
as part of the deployment.  This requirement is about allowing users to upload
their own images and have them be used in the deployment.
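
As a rough sketch of what the API side of that could look like (assuming
the Glance v1 images client; names and formats here are illustrative), a
user-supplied image would be uploaded and then referenced by the generated
template:

    # Illustrative only: upload a user-supplied deployment image to Glance.
    import glanceclient

    glance = glanceclient.Client('1', 'http://undercloud.example.com:9292',
                                 token='ADMIN_TOKEN')
    with open('overcloud-compute.qcow2', 'rb') as image_data:
        glance.images.create(name='overcloud-compute',
                             disk_format='qcow2',
                             container_format='bare',
                             data=image_data)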

       * pre-created template fragments for each node type
       * node type distribution affects generated template
    * nova scheduler allocates nodes
       * filters based on resource class and node profile information (M)

What does this mean?

The proposed requirement is that within a deployment, a user can specify
a resource class, and then specify a 'node profile' that lists the
requirements necessary for a node to be placed in that class.  Upon
deployment, that information is passed through to the scheduler, which
uses it in its filters when picking nodes.
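
For what it's worth, one plausible way to express such a profile with
existing pieces is as extra specs on an undercloud flavor, which nova's
ComputeCapabilitiesFilter can then match when picking nodes; this is just a
sketch under that assumption, not a settled design:

    # Illustrative sketch: express a node profile as flavor extra specs
    # that the nova scheduler's capability filter can match against nodes.
    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client('admin', 'secret', 'admin',
                              'http://undercloud.example.com:5000/v2.0')

    flavor = nova.flavors.create(name='baremetal-control',
                                 ram=16384, vcpus=8, disk=500)
    # The scheduler filters on these when allocating nodes to the class.
    flavor.set_keys({'capabilities:profile': 'control'})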


Sorry for having so many questions :)

Not at all!  Again, thank you for asking them, and sorry for adding some
of my own :)

Mainn

-Rob

--
Robert Collins <[email protected]>
Distinguished Technologist
HP Converged Cloud

_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

