Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-17 Thread Jay Pipes
On Thu, 2014-01-16 at 20:59 -0800, Clint Byrum wrote:
 Excerpts from Jay Pipes's message of 2014-01-12 11:40:41 -0800:
  On Fri, 2014-01-10 at 10:28 -0500, Jay Dobies wrote:
So, it's not as simple as it may initially seem :)
   
Ah, I should have been clearer in my statement - my understanding is that
we're scrapping concepts like Rack entirely.
   
   That was my understanding as well. The existing Tuskar domain model was 
   largely placeholder/proof of concept and didn't necessarily reflect 
   exactly what was desired/expected.
  
  Hmm, so this is a bit disappointing, though I may be less disappointed
  if I knew that Ironic (or something else?) planned to account for
  datacenter inventory in a more robust way than is currently modeled.
  
  If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
  that an enterprise would use to deploy bare-metal hardware in a
  continuous fashion, then the modeling of racks, and the attributes of
  those racks -- location, power supply, etc -- are a critical part of the
  overall picture.
  
 
 To be clear, the goal first is to have them be the deployment tooling that
 _somebody_ would use in production. Enterprise is pretty amorphous. If
 I'm running a start-up but it is a start-up that puts all of its money
 into a 5000 node public cloud, am I enterprise?

I, too, hate the term enterprise. Sorry for using it.

In my post, you can replace "enterprise" with "business that has legacy
hardware inventory or datacenter deployment tooling/practices already in
place".

 Nothing in the direction that has been laid out precludes Tuskar and
 Ironic from consuming one of the _many_ data center inventory management
 solutions and CMDB's that exist now.

OK, cool.

 If there is a need for OpenStack to grow one, I think we will. Lord
 knows we've reinvented half the rest of the things we needed. ;-)

LOL, touché.

 For now I think Tuskar should focus on feeding multiple groups into Nova,
 and Nova and Ironic should focus on making sure they can handle multiple
 group memberships for compute resources and schedule appropriately. Do
 that and it will be relatively straight forward to adapt to racks, pods,
 power supplies, or cooling towers.

Completely agreed, which is why I said in my post to have it somewhere on
the roadmap; it doesn't have to be done tomorrow.

  As an example of why something like power supply is important... inside
  AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
  42U or 44U rack, deployments would be limited to a certain number of
  compute nodes, based on that power supply.
  
  The average power draw for a particular vendor model of compute worker
  would be used in determining the level of compute node packing that
  could occur for that rack type within a particular datacenter. This was
  a fundamental part of datacenter deployment and planning. If the tooling
  intended to do bare-metal deployment of OpenStack in a continual manner
  does not plan to account for these kinds of things, then the chances
  that tooling will be used in enterprise deployments is diminished.
 
 Right, the math can be done in advance and racks/PSUs/boxes grouped
 appropriately. Packing is one of those things that we need a holistic
 scheduler for to be fully automated. I'm not convinced that is even a
 mid-term win, when there are so many big use-cases that can be handled
 with so much less complexity.

No disagreement from me at all.

  And, as we all know, when something isn't used, it withers. That's the
  last thing I want to happen here. I want all of this to be the
  bare-metal deployment tooling that is used *by default* in enterprise
  OpenStack deployments, because the tooling fits the expectations of
  datacenter deployers.
  
  It doesn't have to be done tomorrow :) It just needs to be on the map
  somewhere. I'm not sure if Ironic is the place to put this kind of
  modeling -- I thought Tuskar was going to be that thing. But really,
  IMO, it should be on the roadmap somewhere.
 
 I agree, however I think the primitive capabilities, informed by helpful
 use-cases such as the one you describe above, need to be understood
 before we go off and try to model a UI around them.

Yup, totally. I'll keep a-lurking and following along various threads,
offering insight where I can.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-16 Thread Jaromir Coufal

Hi Jay,

Awesome. I'll just add a quick note inline (and sorry for the slight delay):

On 2014-01-09 18:22, Jay Dobies wrote:

I'm trying to hash out where data will live for Tuskar (both long term
and for its Icehouse deliverables). Based on the expectations for
Icehouse (a combination of the wireframes and what's in Tuskar client's
api.py), we have the following concepts:


[snip]


= Resource Categories =

[snip]


== Count ==
In the Tuskar UI, the user selects how many of each category is desired.
This is stored in Tuskar's domain model for the category and is used when
generating the template to pass to Heat to make it happen.
Based on the latest discussions, the instance count is a bit tricky, but it 
should be specific to a Node Profile if we care which hardware we want in 
play.


Later, we can add the possibility to enter just the number of instances for 
the whole resource category and let the system decide which node profile to 
deploy. But I believe this is something for the future.



These counts are what is displayed to the user in the Tuskar UI for each
category. The staging concept has been removed for Icehouse. In other
words, the wireframes that cover the waiting to be deployed aren't
relevant for now.

+1



== Image ==
For Icehouse, each category will have one image associated with it. Last
I remember, there was discussion on whether or not we need to support
multiple images for a category, but for Icehouse we'll limit it to 1 and
deal with it later.

Metadata for each Resource Category is owned by the Tuskar API. The
images themselves are managed by Glance, with each Resource Category
keeping track of just the UUID for its image.

I think we were discussing keeping track of the image's name there.

Thanks for this great work
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-16 Thread Jaromir Coufal


On 2014-01-12 20:40, Jay Pipes wrote:

On Fri, 2014-01-10 at 10:28 -0500, Jay Dobies wrote:

So, it's not as simple as it may initially seem :)


Ah, I should have been clearer in my statement - my understanding is that
we're scrapping concepts like Rack entirely.


That was my understanding as well. The existing Tuskar domain model was
largely placeholder/proof of concept and didn't necessarily reflect
exactly what was desired/expected.


Hmm, so this is a bit disappointing, though I may be less disappointed
if I knew that Ironic (or something else?) planned to account for
datacenter inventory in a more robust way than is currently modeled.

If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
that an enterprise would use to deploy bare-metal hardware in a
continuous fashion, then the modeling of racks, and the attributes of
those racks -- location, power supply, etc -- are a critical part of the
overall picture.

As an example of why something like power supply is important... inside
AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
42U or 44U rack, deployments would be limited to a certain number of
compute nodes, based on that power supply.

The average power draw for a particular vendor model of compute worker
would be used in determining the level of compute node packing that
could occur for that rack type within a particular datacenter. This was
a fundamental part of datacenter deployment and planning. If the tooling
intended to do bare-metal deployment of OpenStack in a continual manner
does not plan to account for these kinds of things, then the chances
that tooling will be used in enterprise deployments is diminished.

And, as we all know, when something isn't used, it withers. That's the
last thing I want to happen here. I want all of this to be the
bare-metal deployment tooling that is used *by default* in enterprise
OpenStack deployments, because the tooling fits the expectations of
datacenter deployers.

It doesn't have to be done tomorrow :) It just needs to be on the map
somewhere. I'm not sure if Ironic is the place to put this kind of
modeling -- I thought Tuskar was going to be that thing. But really,
IMO, it should be on the roadmap somewhere.

All the best,
-jay


Perfect write up, Jay.

I can second these needs based on talks I had previously.

The goal is primarily to support enterprise deployments, and they work 
with racks, so all of that information, such as location, power supply, 
etc., is important.


Though this is a pretty challenging area and we need to start somewhere. 
As a proof of concept, Tuskar tried to provide similar views; then we 
ran into reality. OpenStack has no strong support for racks at the 
moment. Since we want to deliver a working deployment solution ASAP and 
enhance it over time, we started with currently available features.


We are not giving up on racks entirely; they are just pushed back a bit, 
since there is no real support in OpenStack yet. On a more optimistic 
note, based on the last OpenStack summit, Ironic intends to work with 
all of the rack information (location, power supply, ...). So once 
Ironic contains all of that information, we can happily start providing 
such capabilities for deployment setups, hardware overviews, etc.


Having said that, for Icehouse I pushed for Node Tags to get in. It is 
not the best experience, but using Node Tags we can actually support 
various use cases for the user (who tags nodes manually at the moment).


Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-16 Thread Jay Pipes
On Thu, 2014-01-16 at 11:25 +0100, Jaromir Coufal wrote:
 On 2014-01-12 20:40, Jay Pipes wrote:
  On Fri, 2014-01-10 at 10:28 -0500, Jay Dobies wrote:
  So, it's not as simple as it may initially seem :)
 
  Ah, I should have been clearer in my statement - my understanding is that
  we're scrapping concepts like Rack entirely.
 
  That was my understanding as well. The existing Tuskar domain model was
  largely placeholder/proof of concept and didn't necessarily reflect
  exactly what was desired/expected.
 
  Hmm, so this is a bit disappointing, though I may be less disappointed
  if I knew that Ironic (or something else?) planned to account for
  datacenter inventory in a more robust way than is currently modeled.
 
  If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
  that an enterprise would use to deploy bare-metal hardware in a
  continuous fashion, then the modeling of racks, and the attributes of
  those racks -- location, power supply, etc -- are a critical part of the
  overall picture.
 
  As an example of why something like power supply is important... inside
  AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
  42U or 44U rack, deployments would be limited to a certain number of
  compute nodes, based on that power supply.
 
  The average power draw for a particular vendor model of compute worker
  would be used in determining the level of compute node packing that
  could occur for that rack type within a particular datacenter. This was
  a fundamental part of datacenter deployment and planning. If the tooling
  intended to do bare-metal deployment of OpenStack in a continual manner
  does not plan to account for these kinds of things, then the chances
  that tooling will be used in enterprise deployments is diminished.
 
  And, as we all know, when something isn't used, it withers. That's the
  last thing I want to happen here. I want all of this to be the
  bare-metal deployment tooling that is used *by default* in enterprise
  OpenStack deployments, because the tooling fits the expectations of
  datacenter deployers.
 
  It doesn't have to be done tomorrow :) It just needs to be on the map
  somewhere. I'm not sure if Ironic is the place to put this kind of
  modeling -- I thought Tuskar was going to be that thing. But really,
  IMO, it should be on the roadmap somewhere.
 
  All the best,
  -jay
 
 Perfect write up, Jay.
 
 I can second these needs based on talks I had previously.
 
 The goal is primarily to support enterprise deployments, and they work 
 with racks, so all of that information, such as location, power supply, 
 etc., is important.
 
 Though this is a pretty challenging area and we need to start somewhere. 
 As a proof of concept, Tuskar tried to provide similar views; then we 
 ran into reality. OpenStack has no strong support for racks at the 
 moment. Since we want to deliver a working deployment solution ASAP and 
 enhance it over time, we started with currently available features.
 
 We are not giving up on racks entirely; they are just pushed back a bit, 
 since there is no real support in OpenStack yet. On a more optimistic 
 note, based on the last OpenStack summit, Ironic intends to work with 
 all of the rack information (location, power supply, ...). So once 
 Ironic contains all of that information, we can happily start providing 
 such capabilities for deployment setups, hardware overviews, etc.
 
 Having said that, for Icehouse I pushed for Node Tags to get in. It is 
 not the best experience, but using Node Tags we can actually support 
 various use cases for the user (who tags nodes manually at the moment).

Totally cool, Jarda. I appreciate your response and completely
understand the prioritization.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-16 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-01-12 11:40:41 -0800:
 On Fri, 2014-01-10 at 10:28 -0500, Jay Dobies wrote:
   So, it's not as simple as it may initially seem :)
  
   Ah, I should have been clearer in my statement - my understanding is that
   we're scrapping concepts like Rack entirely.
  
  That was my understanding as well. The existing Tuskar domain model was 
  largely placeholder/proof of concept and didn't necessarily reflect 
  exactly what was desired/expected.
 
 Hmm, so this is a bit disappointing, though I may be less disappointed
 if I knew that Ironic (or something else?) planned to account for
 datacenter inventory in a more robust way than is currently modeled.
 
 If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
 that an enterprise would use to deploy bare-metal hardware in a
 continuous fashion, then the modeling of racks, and the attributes of
 those racks -- location, power supply, etc -- are a critical part of the
 overall picture.
 

To be clear, the goal first is to have them be the deployment tooling that
_somebody_ would use in production. Enterprise is pretty amorphous. If
I'm running a start-up but it is a start-up that puts all of its money
into a 5000 node public cloud, am I enterprise?

Nothing in the direction that has been laid out precludes Tuskar and
Ironic from consuming one of the _many_ data center inventory management
solutions and CMDB's that exist now.

If there is a need for OpenStack to grow one, I think we will. Lord
knows we've reinvented half the rest of the things we needed. ;-)

For now I think Tuskar should focus on feeding multiple groups into Nova,
and Nova and Ironic should focus on making sure they can handle multiple
group memberships for compute resources and schedule appropriately. Do
that and it will be relatively straight forward to adapt to racks, pods,
power supplies, or cooling towers.

 As an example of why something like power supply is important... inside
 AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
 42U or 44U rack, deployments would be limited to a certain number of
 compute nodes, based on that power supply.
 
 The average power draw for a particular vendor model of compute worker
 would be used in determining the level of compute node packing that
 could occur for that rack type within a particular datacenter. This was
 a fundamental part of datacenter deployment and planning. If the tooling
 intended to do bare-metal deployment of OpenStack in a continual manner
 does not plan to account for these kinds of things, then the chances
 that tooling will be used in enterprise deployments is diminished.
 

Right, the math can be done in advance and racks/PSUs/boxes grouped
appropriately. Packing is one of those things that we need a holistic
scheduler for to be fully automated. I'm not convinced that is even a
mid-term win, when there are so many big use-cases that can be handled
with so much less complexity.

 And, as we all know, when something isn't used, it withers. That's the
 last thing I want to happen here. I want all of this to be the
 bare-metal deployment tooling that is used *by default* in enterprise
 OpenStack deployments, because the tooling fits the expectations of
 datacenter deployers.
 
 It doesn't have to be done tomorrow :) It just needs to be on the map
 somewhere. I'm not sure if Ironic is the place to put this kind of
 modeling -- I thought Tuskar was going to be that thing. But really,
 IMO, it should be on the roadmap somewhere.

I agree, however I think the primitive capabilities, informed by helpful
use-cases such as the one you describe above, need to be understood
before we go off and try to model a UI around them.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-13 Thread Jay Dobies

Excellent write-up, Jay.

I don't actually know the answer. I'm not 100% bought into the idea that 
Tuskar isn't going to store any information about the deployment and 
will rely entirely on Heat/Ironic as the data store there. Losing this 
extra physical information may be a strong reason why we need to 
capture additional data beyond what is or will be utilized by Ironic.


For now, I think the answer is that this is the first pass for Icehouse. 
We're still a ways off from being able to do what you described 
regardless of where the model lives. There are ideas around how to 
partition things as you're suggesting (configuring profiles for the 
nodes; I forget the exact term but there was a big thread about manual 
v. automatic node allocation that had an idea) but there's nothing in 
the wireframes to account for it yet.


So not a very helpful reply on my part :) But your feedback was 
well described, which will help keep those concerns in mind post-Icehouse.



Hmm, so this is a bit disappointing, though I may be less disappointed
if I knew that Ironic (or something else?) planned to account for
datacenter inventory in a more robust way than is currently modeled.

If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
that an enterprise would use to deploy bare-metal hardware in a
continuous fashion, then the modeling of racks, and the attributes of
those racks -- location, power supply, etc -- are a critical part of the
overall picture.

As an example of why something like power supply is important... inside
AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
42U or 44U rack, deployments would be limited to a certain number of
compute nodes, based on that power supply.

The average power draw for a particular vendor model of compute worker
would be used in determining the level of compute node packing that
could occur for that rack type within a particular datacenter. This was
a fundamental part of datacenter deployment and planning. If the tooling
intended to do bare-metal deployment of OpenStack in a continual manner
does not plan to account for these kinds of things, then the chances
that tooling will be used in enterprise deployments is diminished.

And, as we all know, when something isn't used, it withers. That's the
last thing I want to happen here. I want all of this to be the
bare-metal deployment tooling that is used *by default* in enterprise
OpenStack deployments, because the tooling fits the expectations of
datacenter deployers.

It doesn't have to be done tomorrow :) It just needs to be on the map
somewhere. I'm not sure if Ironic is the place to put this kind of
modeling -- I thought Tuskar was going to be that thing. But really,
IMO, it should be on the roadmap somewhere.

All the best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-12 Thread Jay Pipes
On Fri, 2014-01-10 at 10:28 -0500, Jay Dobies wrote:
  So, it's not as simple as it may initially seem :)
 
  Ah, I should have been clearer in my statement - my understanding is that
  we're scrapping concepts like Rack entirely.
 
 That was my understanding as well. The existing Tuskar domain model was 
 largely placeholder/proof of concept and didn't necessarily reflect 
 exactly what was desired/expected.

Hmm, so this is a bit disappointing, though I may be less disappointed
if I knew that Ironic (or something else?) planned to account for
datacenter inventory in a more robust way than is currently modeled.

If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
that an enterprise would use to deploy bare-metal hardware in a
continuous fashion, then the modeling of racks, and the attributes of
those racks -- location, power supply, etc -- are a critical part of the
overall picture.

As an example of why something like power supply is important... inside
AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
42U or 44U rack, deployments would be limited to a certain number of
compute nodes, based on that power supply.

The average power draw for a particular vendor model of compute worker
would be used in determining the level of compute node packing that
could occur for that rack type within a particular datacenter. This was
a fundamental part of datacenter deployment and planning. If the tooling
intended to do bare-metal deployment of OpenStack in a continual manner
does not plan to account for these kinds of things, then the chances
that tooling will be used in enterprise deployments is diminished.
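
As a purely illustrative version of that packing math (every number below
is hypothetical, not a real datacenter figure):

    # All figures hypothetical, for illustration only.
    rack_power_watts = 16000   # 16kW supply; try 8000 for the 8kW case
    node_draw_watts = 350      # average draw of one compute node model
    rack_units = 42            # 42U rack, assuming 1U nodes

    max_by_power = rack_power_watts // node_draw_watts   # 45
    max_by_space = rack_units                             # 42
    print(min(max_by_power, max_by_space))                # 42 here; 22 with an 8kW supply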

And, as we all know, when something isn't used, it withers. That's the
last thing I want to happen here. I want all of this to be the
bare-metal deployment tooling that is used *by default* in enterprise
OpenStack deployments, because the tooling fits the expectations of
datacenter deployers.

It doesn't have to be done tomorrow :) It just needs to be on the map
somewhere. I'm not sure if Ironic is the place to put this kind of
modeling -- I thought Tuskar was going to be that thing. But really,
IMO, it should be on the roadmap somewhere.

All the best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Imre Farkas

Thanks Jay, this is a very useful summary! Some comments inline:

On 01/09/2014 06:22 PM, Jay Dobies wrote:

I'm trying to hash out where data will live for Tuskar (both long term
and for its Icehouse deliverables). Based on the expectations for
Icehouse (a combination of the wireframes and what's in Tuskar client's
api.py), we have the following concepts:


= Nodes =
A node is a baremetal machine on which the overcloud resources will be
deployed. The ownership of this information lies with Ironic. The Tuskar
UI will accept the needed information to create them and pass it to
Ironic. Ironic is consulted directly when information on a specific node
or the list of available nodes is needed.


= Resource Categories =
A specific type of thing that will be deployed into the overcloud.
These are static definitions that describe the entities the user will
want to add to the overcloud and are owned by Tuskar. For Icehouse, the
categories themselves are added during installation for the four types
listed in the wireframes.

Since this is a new model (as compared to other things that live in
Ironic or Heat), I'll go into some more detail. Each Resource Category
has the following information:

== Metadata ==
My intention here is that we do things in such a way that if we change
one of the original 4 categories, or more importantly add more or allow
users to add more, the information about the category is centralized and
not reliant on the UI to provide the user information on what it is.

ID - Unique ID for the Resource Category.
Display Name - User-friendly name to display.
Description - Equally self-explanatory.

== Count ==
In the Tuskar UI, the user selects how many of each category is desired.
This is stored in Tuskar's domain model for the category and is used when
generating the template to pass to Heat to make it happen.

These counts are what is displayed to the user in the Tuskar UI for each
category. The staging concept has been removed for Icehouse. In other
words, the wireframes that cover the waiting to be deployed aren't
relevant for now.

== Image ==
For Icehouse, each category will have one image associated with it. Last
I remember, there was discussion on whether or not we need to support
multiple images for a category, but for Icehouse we'll limit it to 1 and
deal with it later.

Metadata for each Resource Category is owned by the Tuskar API. The
images themselves are managed by Glance, with each Resource Category
keeping track of just the UUID for its image.


= Stack =
There is a single stack in Tuskar, the overcloud.

A small nit here: in the long term Tuskar will support multiple overclouds.

 The Heat template

for the stack is generated by the Tuskar API based on the Resource
Category data (image, count, etc.). The template is handed to Heat to
execute.

Heat owns information about running instances and is queried directly
when the Tuskar UI needs to access that information.

--

Next steps for me are to start to work on the Tuskar APIs around
Resource Category CRUD and their conversion into a Heat template.
There's some discussion to be had there as well, but I don't want to put
too much into one e-mail.


Thoughts?


There are a few concepts which I think are missing from the list:
- overclouds: after Heat successfully created the stack, Tuskar needs to 
keep track of whether it applied the post-configuration steps (Keystone 
initialization, registering services, etc.) or not. It also needs to know 
the name of the stack (only 1 stack, named 'overcloud', for Icehouse).
- service endpoints of an overcloud: e.g. Tuskar-ui in the undercloud 
will need the URL of the overcloud Horizon. The overcloud Keystone owns 
the information about this (after post-configuration is done) and Heat 
owns the information about the overcloud Keystone.
- user credentials for an overcloud: these will be used by Heat during 
stack creation, by Tuskar during post-configuration, by Tuskar-ui when 
querying various information (e.g. running VMs on a node) and finally by 
the user logging in to the overcloud Horizon. Currently they can be found 
in the Tuskar-ui settings file [1].
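
To make the shape of that extra state concrete, a minimal sketch of what
such a Tuskar-side record could hold (every field name below is
hypothetical, not an agreed schema or API):

    # Hypothetical structure, illustration only.
    overcloud = {
        "stack_name": "overcloud",       # only one stack for Icehouse
        "post_config_done": False,       # Keystone init, service registration, ...
        "endpoints": {},                 # e.g. the overcloud Horizon URL, once known
        "admin_credentials_ref": None,   # pointer to wherever credentials end up living
    }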


Imre

[1] 
https://github.com/openstack/tuskar-ui/blob/master/local_settings.py.example#L351 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Jay Dobies

Thanks for the feedback  :)


= Stack =
There is a single stack in Tuskar, the overcloud.

A small nit here: in the long term Tuskar will support multiple overclouds.


Yes, absolutely. I should have added "for Icehouse" like I did in other 
places. Good catch.



There are a few concepts which I think are missing from the list:
- overclouds: after Heat successfully created the stack, Tuskar needs to
keep track of whether it applied the post-configuration steps (Keystone
initialization, registering services, etc.) or not. It also needs to know
the name of the stack (only 1 stack, named 'overcloud', for Icehouse).


I assumed this sort of thing was captured by the resource status, though 
I'm far from a Heat expert. Is it not enough to assume that if the 
resource started successfully, all of that took place?



- service endpoints of an overcloud: eg. Tuskar-ui in the undercloud
will need the url of the overcloud Horizon. The overcloud Keystone owns
the information about this (after post configuration is done) and Heat
owns the information about the overcloud Keystone.



- user credentials for an overcloud: it will be used by Heat during
stack creation, by Tuskar during post configuration, by Tuskar-ui
querying various information (eg. running vms on a node) and finally by
the user logging in to the overcloud Horizon. Now it can be found in the
Tuskar-ui settings file [1].


Both of these are really good points that I haven't seen discussed yet. 
The wireframes cover the allocation of nodes and displaying basic 
details of what's created (even that is still placeholder) but not much 
beyond that.


I'd like to break that into a separate thread. I'm not saying it's 
unrelated, but since it's not even wireframed out I'd like to have a 
dedicated discussion about what it might look like. I'll start that 
thread up as soon as I collect my thoughts.



Imre

[1]
https://github.com/openstack/tuskar-ui/blob/master/local_settings.py.example#L351


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Jay Dobies

As much as the Ironic Chassis model is lacking compared to the Tuskar
Rack model, the opposite problem exists for each project's model of
Node. In Tuskar, the Node model is pretty bare and useless, whereas
Ironic's Node model is much richer.


Thanks for looking that deeply into it :)


So, it's not as simple as it may initially seem :)


Ah, I should have been clearer in my statement - my understanding is that
we're scrapping concepts like Rack entirely.


That was my understanding as well. The existing Tuskar domain model was 
largely placeholder/proof of concept and didn't necessarily reflect 
exactly what was desired/expected.



Mainn


Best,
-jay

[1]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py
[2]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py#L83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread Imre Farkas

On 01/10/2014 04:27 PM, Jay Dobies wrote:

Thanks for the feedback  :)


= Stack =
There is a single stack in Tuskar, the overcloud.

A small nit here: in the long term Tuskar will support multiple
overclouds.


Yes, absolutely. I should have added For Icehouse like I did in other
places. Good catch.


There are a few concepts which I think are missing from the list:
- overclouds: after Heat successfully created the stack, Tuskar needs to
keep track of whether it applied the post-configuration steps (Keystone
initialization, registering services, etc.) or not. It also needs to know
the name of the stack (only 1 stack, named 'overcloud', for Icehouse).


I assumed this sort of thing was captured by the resource status, though
I'm far from a Heat expert. Is it not enough to assume that if the
resource started successfully, all of that took place?



I am also far from a Heat expert; I just had some really hard times 
when I previously expected my Tuskar-deployed overcloud to be ready 
to use. :-)


In short, having the resources started is not enough; Heat stack-create 
is only part of the deployment story. There were a few emails on the 
mailing list about this:

http://lists.openstack.org/pipermail/openstack-dev/2013-December/022217.html
http://lists.openstack.org/pipermail/openstack-dev/2013-December/022887.html

There was also a discussion during the last TripleO meeting in December, 
check the topic 'After heat stack-create init operations (lsmola)'
http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html 




- service endpoints of an overcloud: eg. Tuskar-ui in the undercloud
will need the url of the overcloud Horizon. The overcloud Keystone owns
the information about this (after post configuration is done) and Heat
owns the information about the overcloud Keystone.



- user credentials for an overcloud: it will be used by Heat during
stack creation, by Tuskar during post configuration, by Tuskar-ui
querying various information (eg. running vms on a node) and finally by
the user logging in to the overcloud Horizon. Now it can be found in the
Tuskar-ui settings file [1].


Both of these are really good points that I haven't seen discussed yet.
The wireframes cover the allocation of nodes and displaying basic
details of what's created (even that is still placeholder) but not much
beyond that.

I'd like to break that into a separate thread. I'm not saying it's
unrelated, but since it's not even wireframed out I'd like to have a
dedicated discussion about what it might look like. I'll start that
thread up as soon as I collect my thoughts.



Fair point, sorry about that. I haven't seen the latest wireframes; I 
had a few expectations based on the previous version.



Imre

[1]
https://github.com/openstack/tuskar-ui/blob/master/local_settings.py.example#L351



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread James Slagle
On Fri, Jan 10, 2014 at 10:27 AM, Jay Dobies jason.dob...@redhat.com wrote:
 There are a few concepts which I think are missing from the list:
 - overclouds: after Heat successfully created the stack, Tuskar needs to
 keep track of whether it applied the post-configuration steps (Keystone
 initialization, registering services, etc.) or not. It also needs to know
 the name of the stack (only 1 stack, named 'overcloud', for Icehouse).


 I assumed this sort of thing was captured by the resource status, though I'm
 far from a Heat expert. Is it not enough to assume that if the resource
 started successfully, all of that took place?

Not currently.  Those steps are done separately from a different host
after Heat reports the stack as completed and running.  In the Tuskar
model, that host would be the undercloud.  Tuskar would have to know
what steps to run to do the post configuration/setup of the overcloud.

I believe it would be possible to instead automate that so that it
happens as part of the os-refresh-config cycle that runs scripts at
boot time in an image.  At the end of the initial os-refresh-config
run there is a callback to Heat to indicate success.  So, if we did
that, the overcloud would basically configure itself and then call back
to Heat to indicate it all worked.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-10 Thread James Slagle
On Fri, Jan 10, 2014 at 11:01 AM, Imre Farkas ifar...@redhat.com wrote:
 On 01/10/2014 04:27 PM, Jay Dobies wrote:
 There are a few concepts which I think are missing from the list:
 - overclouds: after Heat successfully created the stack, Tuskar needs to
 keep track of whether it applied the post-configuration steps (Keystone
 initialization, registering services, etc.) or not. It also needs to know
 the name of the stack (only 1 stack, named 'overcloud', for Icehouse).


 I assumed this sort of thing was captured by the resource status, though
 I'm far from a Heat expert. Is it not enough to assume that if the
 resource started successfully, all of that took place?


 I am also far from a Heat expert; I just had some really hard times when I
 previously expected my Tuskar-deployed overcloud to be ready to use. :-)

 In short, having the resources started is not enough, Heat stack-create is
 only a part of the deployment story. There was a few emails on the mailing
 list about this:
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022217.html
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022887.html

 There was also a discussion during the last TripleO meeting in December,
 check the topic 'After heat stack-create init operations (lsmola)'
 http://eavesdrop.openstack.org/meetings/tripleo/2013/tripleo.2013-12-17-19.02.log.html

Thanks for posting the links :) Very helpful.  There are some really
good points there in the IRC log about *not* doing what I suggested
with the local machine os-refresh-config scripts :).

So, I think it's likely that Tuskar will need to orchestrate this
setup in some fashion.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Jay Dobies
I'm trying to hash out where data will live for Tuskar (both long term 
and for its Icehouse deliverables). Based on the expectations for 
Icehouse (a combination of the wireframes and what's in Tuskar client's 
api.py), we have the following concepts:



= Nodes =
A node is a baremetal machine on which the overcloud resources will be 
deployed. The ownership of this information lies with Ironic. The Tuskar 
UI will accept the needed information to create them and pass it to 
Ironic. Ironic is consulted directly when information on a specific node 
or the list of available nodes is needed.



= Resource Categories =
A specific type of thing that will be deployed into the overcloud. 
These are static definitions that describe the entities the user will 
want to add to the overcloud and are owned by Tuskar. For Icehouse, the 
categories themselves are added during installation for the four types 
listed in the wireframes.


Since this is a new model (as compared to other things that live in 
Ironic or Heat), I'll go into some more detail. Each Resource Category 
has the following information:


== Metadata ==
My intention here is that we do things in such a way that if we change 
one of the original 4 categories, or more importantly add more or allow 
users to add more, the information about the category is centralized and 
not reliant on the UI to provide the user information on what it is.


ID - Unique ID for the Resource Category.
Display Name - User-friendly name to display.
Description - Equally self-explanatory.

== Count ==
In the Tuskar UI, the user selects how many of each category is desired. 
This is stored in Tuskar's domain model for the category and is used when 
generating the template to pass to Heat to make it happen.


These counts are what is displayed to the user in the Tuskar UI for each 
category. The staging concept has been removed for Icehouse. In other 
words, the wireframes that cover the waiting to be deployed aren't 
relevant for now.


== Image ==
For Icehouse, each category will have one image associated with it. Last 
I remember, there was discussion on whether or not we need to support 
multiple images for a category, but for Icehouse we'll limit it to 1 and 
deal with it later.


Metadata for each Resource Category is owned by the Tuskar API. The 
images themselves are managed by Glance, with each Resource Category 
keeping track of just the UUID for its image.
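
Pulling the above together, a Resource Category record would look roughly
like this (field names are illustrative only, not a proposed schema):

    # Illustrative only.
    resource_category = {
        "id": "compute",                       # unique ID
        "display_name": "Compute",             # user-friendly name
        "description": "Overcloud compute nodes",
        "count": 3,                            # how many the user asked for
        "image_uuid": "<glance-image-uuid>",   # the one Glance image for Icehouse
    }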



= Stack =
There is a single stack in Tuskar, the overcloud. The Heat template 
for the stack is generated by the Tuskar API based on the Resource 
Category data (image, count, etc.). The template is handed to Heat to 
execute.
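
As a very rough sketch of that generation step (this is not actual Tuskar
code, and the template format, resource type and property names are only
examples):

    def build_overcloud_template(categories):
        """Turn Resource Category data into a Heat template structure."""
        resources = {}
        for cat in categories:
            for i in range(cat["count"]):
                resources["%s%d" % (cat["id"], i)] = {
                    "Type": "OS::Nova::Server",              # example type only
                    "Properties": {"image": cat["image_uuid"]},
                }
        return {
            "HeatTemplateFormatVersion": "2012-12-12",
            "Description": "Generated overcloud template",
            "Resources": resources,
        }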


Heat owns information about running instances and is queried directly 
when the Tuskar UI needs to access that information.


--

Next steps for me are to start to work on the Tuskar APIs around 
Resource Category CRUD and their conversion into a Heat template. 
There's some discussion to be had there as well, but I don't want to put 
too much into one e-mail.



Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen
Thanks!  This is very informative.  From a high-level perspective, this
maps well with my understanding of how Tuskar will interact with various
OpenStack services.  A question or two inline:

- Original Message -
 I'm trying to hash out where data will live for Tuskar (both long term
 and for its Icehouse deliverables). Based on the expectations for
 Icehouse (a combination of the wireframes and what's in Tuskar client's
 api.py), we have the following concepts:
 
 
 = Nodes =
 A node is a baremetal machine on which the overcloud resources will be
 deployed. The ownership of this information lies with Ironic. The Tuskar
 UI will accept the needed information to create them and pass it to
 Ironic. Ironic is consulted directly when information on a specific node
 or the list of available nodes is needed.
 
 
 = Resource Categories =
 A specific type of thing that will be deployed into the overcloud.
 These are static definitions that describe the entities the user will
 want to add to the overcloud and are owned by Tuskar. For Icehouse, the
 categories themselves are added during installation for the four types
 listed in the wireframes.
 
 Since this is a new model (as compared to other things that live in
 Ironic or Heat), I'll go into some more detail. Each Resource Category
 has the following information:
 
 == Metadata ==
 My intention here is that we do things in such a way that if we change
 one of the original 4 categories, or more importantly add more or allow
 users to add more, the information about the category is centralized and
 not reliant on the UI to provide the user information on what it is.
 
 ID - Unique ID for the Resource Category.
 Display Name - User-friendly name to display.
 Description - Equally self-explanatory.
 
 == Count ==
 In the Tuskar UI, the user selects how many of each category is desired.
 This is stored in Tuskar's domain model for the category and is used when
 generating the template to pass to Heat to make it happen.
 
 These counts are what is displayed to the user in the Tuskar UI for each
 category. The staging concept has been removed for Icehouse. In other
 words, the wireframes that cover the waiting to be deployed aren't
 relevant for now.
 
 == Image ==
 For Icehouse, each category will have one image associated with it. Last
 I remember, there was discussion on whether or not we need to support
 multiple images for a category, but for Icehouse we'll limit it to 1 and
 deal with it later.
 
 Metadata for each Resource Category is owned by the Tuskar API. The
 images themselves are managed by Glance, with each Resource Category
 keeping track of just the UUID for its image.
 
 
 = Stack =
 There is a single stack in Tuskar, the overcloud. The Heat template
 for the stack is generated by the Tuskar API based on the Resource
 Category data (image, count, etc.). The template is handed to Heat to
 execute.
 
 Heat owns information about running instances and is queried directly
 when the Tuskar UI needs to access that information.

The UI will also need to be able to look at the Heat resources running
within the overcloud stack and classify them according to a resource
category.  How do you envision that working?

 --
 
 Next steps for me are to start to work on the Tuskar APIs around
 Resource Category CRUD and their conversion into a Heat template.
 There's some discussion to be had there as well, but I don't want to put
 too much into one e-mail.
 

I'm looking forward to seeing the API specification, as Resource Category
CRUD is currently a big unknown in the tuskar-ui api.py file.


Mainn


 
 Thoughts?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Jay Dobies

The UI will also need to be able to look at the Heat resources running
within the overcloud stack and classify them according to a resource
category.  How do you envision that working?


There's a way in a Heat template to specify arbitrary metadata on a 
resource. We can add flags in there and key off of those.
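
A hedged sketch of what that could look like on both sides, with a made-up
metadata key (nothing below is an agreed convention):

    CATEGORY_KEY = "tuskar_resource_category"   # hypothetical key name

    # The generated template would flag each resource:
    resource = {
        "Type": "OS::Nova::Server",             # example type only
        "Metadata": {CATEGORY_KEY: "compute"},
        "Properties": {"image": "<glance-image-uuid>"},
    }

    # The UI can then classify a stack's resources by reading the flag back:
    def category_of(heat_resource):
        return heat_resource.get("Metadata", {}).get(CATEGORY_KEY)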



Next steps for me are to start to work on the Tuskar APIs around
Resource Category CRUD and their conversion into a Heat template.
There's some discussion to be had there as well, but I don't want to put
too much into one e-mail.



I'm looking forward to seeing the API specification, as Resource Category
CRUD is currently a big unknown in the tuskar-ui api.py file.


Mainn




Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Dougal Matthews
I'm glad we are hashing this out, as I think there is still some debate 
around whether Tuskar will need a database at all.


One thing to bear in mind: I think we need to make sure the terminology 
matches that described in the previous thread. I think it mostly does 
here, but I'm not sure the Tuskar models do.


A few comments below.

On 2014-01-09 17:22, Jay Dobies wrote:

= Nodes =
A node is a baremetal machine on which the overcloud resources will be
deployed. The ownership of this information lies with Ironic. The Tuskar
UI will accept the needed information to create them and pass it to
Ironic. Ironic is consulted directly when information on a specific node
or the list of available nodes is needed.


= Resource Categories =
A specific type of thing that will be deployed into the overcloud.


nit - Won't they be deployed into the undercloud to form the overcloud?



These are static definitions that describe the entities the user will
want to add to the overcloud and are owned by Tuskar. For Icehouse, the
categories themselves are added during installation for the four types
listed in the wireframes.

Since this is a new model (as compared to other things that live in
Ironic or Heat), I'll go into some more detail. Each Resource Category
has the following information:

== Metadata ==
My intention here is that we do things in such a way that if we change
one of the original 4 categories, or more importantly add more or allow
users to add more, the information about the category is centralized and
not reliant on the UI to provide the user information on what it is.

ID - Unique ID for the Resource Category.
Display Name - User-friendly name to display.
Description - Equally self-explanatory.

== Count ==
In the Tuskar UI, the user selects how many of each category is desired.
This is stored in Tuskar's domain model for the category and is used when
generating the template to pass to Heat to make it happen.

These counts are what is displayed to the user in the Tuskar UI for each
category. The staging concept has been removed for Icehouse. In other
words, the wireframes that cover the waiting to be deployed aren't
relevant for now.

== Image ==
For Icehouse, each category will have one image associated with it. Last
I remember, there was discussion on whether or not we need to support
multiple images for a category, but for Icehouse we'll limit it to 1 and
deal with it later.


+1, that matches my recollection.



Metadata for each Resource Category is owned by the Tuskar API. The
images themselves are managed by Glance, with each Resource Category
keeping track of just the UUID for its image.


= Stack =
There is a single stack in Tuskar, the overcloud. The Heat template
for the stack is generated by the Tuskar API based on the Resource
Category data (image, count, etc.). The template is handed to Heat to
execute.

Heat owns information about running instances and is queried directly
when the Tuskar UI needs to access that information.

--

Next steps for me are to start to work on the Tuskar APIs around
Resource Category CRUD and their conversion into a Heat template.
There's some discussion to be had there as well, but I don't want to put
too much into one e-mail.


Thoughts?


There are a number of other models in the tuskar code[1], do we need to 
consider these now too?


[1]: 
https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen


- Original Message -
 I'm glad we are hashing this out as I think there is still some debate
 around if Tuskar will need a database at all.
 
 One thing to bear in mind, I think we need to make sure the terminology
 matches that described in the previous thread. I think it mostly does
 here but I'm not sure the Tuskar models do.
 
 A few comments below.
 
 On 09/01/14 17:22, Jay Dobies wrote:
  = Nodes =
  A node is a baremetal machine on which the overcloud resources will be
  deployed. The ownership of this information lies with Ironic. The Tuskar
  UI will accept the needed information to create them and pass it to
  Ironic. Ironic is consulted directly when information on a specific node
  or the list of available nodes is needed.
 
 
  = Resource Categories =
  A specific type of thing that will be deployed into the overcloud.
 
 nit - Won't they be deployed into the undercloud to form the overcloud?
 
 
  These are static definitions that describe the entities the user will
  want to add to the overcloud and are owned by Tuskar. For Icehouse, the
  categories themselves are added during installation for the four types
  listed in the wireframes.
 
  Since this is a new model (as compared to other things that live in
  Ironic or Heat), I'll go into some more detail. Each Resource Category
  has the following information:
 
  == Metadata ==
  My intention here is that we do things in such a way that if we change
  one of the original 4 categories, or more importantly add more or allow
  users to add more, the information about the category is centralized and
  not reliant on the UI to provide the user information on what it is.
 
  ID - Unique ID for the Resource Category.
  Display Name - User-friendly name to display.
  Description - Equally self-explanatory.
 
  == Count ==
  In the Tuskar UI, the user selects how many of each category is desired.
  This is stored in Tuskar's domain model for the category and is used when
  generating the template to pass to Heat to make it happen.
 
  These counts are what is displayed to the user in the Tuskar UI for each
  category. The staging concept has been removed for Icehouse. In other
  words, the wireframes that cover the waiting to be deployed aren't
  relevant for now.
 
  == Image ==
  For Icehouse, each category will have one image associated with it. Last
  I remember, there was discussion on whether or not we need to support
  multiple images for a category, but for Icehouse we'll limit it to 1 and
  deal with it later.
 
 +1, that matches my recollection.
 
 
  Metadata for each Resource Category is owned by the Tuskar API. The
  images themselves are managed by Glance, with each Resource Category
  keeping track of just the UUID for its image.
 
 
  = Stack =
  There is a single stack in Tuskar, the overcloud. The Heat template
  for the stack is generated by the Tuskar API based on the Resource
  Category data (image, count, etc.). The template is handed to Heat to
  execute.
 
  Heat owns information about running instances and is queried directly
  when the Tuskar UI needs to access that information.
 
  --
 
  Next steps for me are to start to work on the Tuskar APIs around
  Resource Category CRUD and their conversion into a Heat template.
  There's some discussion to be had there as well, but I don't want to put
  too much into one e-mail.
 
 
  Thoughts?
 
 There are a number of other models in the tuskar code[1], do we need to
 consider these now too?
 
 [1]:
 https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py

Nope, these are gone now, in favor of Tuskar interacting directly with Ironic, 
Heat, etc.

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Jay Pipes
On Thu, 2014-01-09 at 16:02 -0500, Tzu-Mainn Chen wrote:
  There are a number of other models in the tuskar code[1], do we need to
  consider these now too?
  
  [1]:
  https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py
 
 Nope, these are gone now, in favor of Tuskar interacting directly with 
 Ironic, Heat, etc.

Hmm, not quite.

If you compare the models in Ironic [1] to Tuskar's (link above), there are
some dramatic differences. Notably:

* No Rack model in Ironic. Closest model seems to be the Chassis model
[2], but the Ironic Chassis model doesn't have nearly the entity
specificity that Tuskar's Rack model has. For example, the following
(important) attributes are missing from Ironic's Chassis model:
 - slots (how does Ironic know how many RU are in a chassis?)
 - location (very important for integration with operations inventory
management systems, trust me)
 - subnet (based on my experience, I've seen deployers use a
rack-by-rack or paired-rack control and data plane network static IP
assignment. While Tuskar's single subnet attribute is not really
adequate for describing production deployments that typically have 3+
management, data and overlay network routing rules for each rack, at
least Tuskar has the concept of networking rules in its Rack model,
while Ironic does not)
 - state (how does Ironic know whether a rack is provisioned fully or
not? Must it query each Node's power_state field that has a
chassis_id matching the Chassis' id field?)
 - 
* The Tuskar Rack model has a field chassis_id. I have no idea what
this is... or its relation to the Ironic Chassis model.

As much as the Ironic Chassis model is lacking compared to the Tuskar
Rack model, the opposite problem exists for each project's model of
Node. In Tuskar, the Node model is pretty bare and useless, whereas
Ironic's Node model is much richer.

So, it's not as simple as it may initially seem :)

Best,
-jay

[1]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py
[2]
https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py#L83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen


- Original Message -
 On Thu, 2014-01-09 at 16:02 -0500, Tzu-Mainn Chen wrote: There are a
 number of other models in the tuskar code[1], do we need to
   consider these now too?
   
   [1]:
   https://github.com/openstack/tuskar/blob/master/tuskar/db/sqlalchemy/models.py
  
  Nope, these are gone now, in favor of Tuskar interacting directly with
  Ironic, Heat, etc.
 
 Hmm, not quite.
 
 If you compare the models in Ironic [1] to Tuskar's (link above), there are
 some dramatic differences. Notably:
 
 * No Rack model in Ironic. Closest model seems to be the Chassis model
 [2], but the Ironic Chassis model doesn't have nearly the entity
 specificity that Tuskar's Rack model has. For example, the following
 (important) attributes are missing from Ironic's Chassis model:
  - slots (how does Ironic know how many RU are in a chassis?)
  - location (very important for integration with operations inventory
 management systems, trust me)
  - subnet (based on my experience, I've seen deployers use a
 rack-by-rack or paired-rack control and data plane network static IP
 assignment. While Tuskar's single subnet attribute is not really
 adequate for describing production deployments that typically have 3+
 management, data and overlay network routing rules for each rack, at
 least Tuskar has the concept of networking rules in its Rack model,
 while Ironic does not)
  - state (how does Ironic know whether a rack is provisioned fully or
 not? Must it query each Node's power_state field that has a
 chassis_id matching the Chassis' id field?)
  -
 * The Tuskar Rack model has a field chassis_id. I have no idea what
 this is... or its relation to the Ironic Chassis model.
 
 As much as the Ironic Chassis model is lacking compared to the Tuskar
 Rack model, the opposite problem exists for each project's model of
 Node. In Tuskar, the Node model is pretty bare and useless, whereas
 Ironic's Node model is much richer.
 
 So, it's not as simple as it may initially seem :)

Ah, I should have been clearer in my statement - my understanding is that
we're scrapping concepts like Rack entirely.

Mainn

 Best,
 -jay
 
 [1]
 https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py
 [2]
 https://github.com/openstack/ironic/blob/master/ironic/db/sqlalchemy/models.py#L83
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-09 Thread Tzu-Mainn Chen
- Original Message -
  The UI will also need to be able to look at the Heat resources running
  within the overcloud stack and classify them according to a resource
  category.  How do you envision that working?
 
 There's a way in a Heat template to specify arbitrary metadata on a
 resource. We can add flags in there and key off of those.

That seems reasonable, but I wonder - given a resource in a template,
how do we know what category metadata needs to be set?  Is it simply based
off the resource name?  If that's the case, couldn't we use that
name-to-category mapping directly, without the need to set metadata?
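
For illustration, the name-based alternative might look something like this
(the naming convention and the category names are made up):

    # Illustrative only: derive the category from a resource naming convention.
    def category_from_name(resource_name, categories=("compute", "controller")):
        for cat in categories:
            if resource_name.startswith(cat):
                return cat
        return None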

Mainn

  Next steps for me are to start to work on the Tuskar APIs around
  Resource Category CRUD and their conversion into a Heat template.
  There's some discussion to be had there as well, but I don't want to put
  too much into one e-mail.
 
 
  I'm looking forward to seeing the API specification, as Resource Category
  CRUD is currently a big unknown in the tuskar-ui api.py file.
 
 
  Mainn
 
 
 
  Thoughts?
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev