Excellent write-up, Jay.

I don't actually know the answer. I'm not 100% bought into the idea that Tuskar isn't going to store any information about the deployment and will rely entirely on Heat/Ironic as the data store there. Losing this extra physical information may be a strong reason why we need to capture additional data beyond what is or will be utilized by Ironic.

For now, I think the answer is that this is the first pass for Icehouse. We're still a ways off from being able to do what you described, regardless of where the model lives. There are ideas around how to partition things as you're suggesting (configuring profiles for the nodes; I forget the exact term, but there was a big thread about manual vs. automatic node allocation that touched on this), but there's nothing in the wireframes to account for it yet.

So, not a very helpful reply on my part :) But your feedback is well articulated, which will help keep those concerns in mind post-Icehouse.

Hmm, so this is a bit disappointing, though I might be less disappointed
if I knew that Ironic (or something else?) planned to account for
datacenter inventory in a more robust way than is currently modeled.

If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
that an enterprise would use to deploy bare-metal hardware in a
continuous fashion, then the modeling of racks and their attributes --
location, power supply, etc. -- is a critical part of the overall
picture.
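
To make that concrete, here is roughly the kind of rack record I have in
mind -- to be clear, this is purely a sketch, not an existing Tuskar or
Ironic schema, and every field name below is hypothetical:

    # Purely illustrative -- not an existing Tuskar/Ironic model.
    class Rack(object):
        def __init__(self, name, location, height_units, power_supply_kw):
            self.name = name                  # e.g. "dc1-row3-rack07"
            self.location = location          # datacenter / row / position
            self.height_units = height_units  # e.g. 42 or 44
            self.power_supply_kw = power_supply_kw  # e.g. 8 or 16
            self.nodes = []                   # bare-metal nodes racked here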

As an example of why something like power supply is important... inside
AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
42U or 44U rack, deployments would be limited to a certain number of
compute nodes, based on that power supply.
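
To spell out that arithmetic with hypothetical numbers (the per-node
draw below is made up; the real figure would be the measured average for
the vendor model in question, as described next):

    # Hypothetical figures, for illustration only.
    AVG_NODE_DRAW_KW = 0.3   # average draw per compute node (made up)
    RACK_POWER_KW = 8.0      # 8kW power supply
    RACK_UNITS = 42          # 42U rack, assuming 1U per compute node

    limit_by_power = int(RACK_POWER_KW // AVG_NODE_DRAW_KW)  # 26 nodes
    limit_by_space = RACK_UNITS                              # 42 slots
    max_nodes = min(limit_by_power, limit_by_space)          # power is the
                                                             # binding limit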

The average power draw for a particular vendor model of compute worker
would be used in determining the level of compute node packing that
could occur for that rack type within a particular datacenter. This was
a fundamental part of datacenter deployment and planning. If the tooling
intended to do bare-metal deployment of OpenStack in a continual manner
does not plan to account for these kinds of things, then the chances
that the tooling will be used in enterprise deployments are diminished.

And, as we all know, when something isn't used, it withers. That's the
last thing I want to happen here. I want all of this to be the
bare-metal deployment tooling that is used *by default* in enterprise
OpenStack deployments, because the tooling "fits" the expectations of
datacenter deployers.

It doesn't have to be done tomorrow :) It just needs to be on the map
somewhere. I'm not sure if Ironic is the place to put this kind of
modeling -- I thought Tuskar was going to be that thing. But really,
IMO, it should be on the roadmap somewhere.

All the best,
-jay


_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

