Steven,

It's important to note that two of the blueprints you reference: 

https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery

are both very unlikely to land in Ironic -- these are configuration and 
discovery pieces that best fit inside an operator-deployed CMDB, rather than 
Ironic significantly extending its scope to include this type of 
function. I expect the scoping of Ironic with regard to hardware 
discovery/interrogation, as well as configuration of hardware (as I will 
outline below), to be hot topics in the Ironic design summit sessions in Paris.

A good way of looking at it is that Ironic is responsible for hardware *at 
provision time*. Registering nodes in Ironic, as well as hardware settings, 
maintenance, etc. while a workload is provisioned, is left to the operator's 
CMDB.

This means what Ironic *can* do is modify the configuration of a node at 
provision time, based on information passed down the provisioning pipeline. 
For instance, if you wanted to configure certain firmware settings at 
provision time, you could do something like this:

A Nova flavor sets capabilities:vm_hypervisor in the flavor that maps to the 
Ironic node. This would map to an Ironic driver that exposes vm_hypervisor as 
a capability and, upon seeing capabilities:vm_hypervisor requested, could 
then configure the firmware/BIOS of the machine with 'hypervisor friendly' 
settings, such as the VT bit on and Turbo mode off. You could map multiple 
different combinations of capabilities to different Nova flavors, and have 
them all represent different configurations of the same pool of nodes. So you 
end up with two categories of abilities: inherent abilities of the node (such 
as the amount of RAM or CPU installed), and configurable abilities (i.e. 
things that can be turned on/off at provision time on demand) -- or perhaps, 
in the future, even things like RAM and CPU will be dynamically provisioned 
into nodes at provision time. 
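
As a minimal sketch of wiring that capability up on both sides, using 
python-novaclient and python-ironicclient (the credentials, endpoint, node 
UUID, and flavor name below are placeholders, and 'vm_hypervisor' is just the 
example capability from above):

from ironicclient import client as ironic_client
from novaclient import client as nova_client

# Placeholder credentials/endpoint -- substitute your own.
AUTH_URL = 'http://keystone.example.com:5000/v2.0'

nova = nova_client.Client('2', 'admin', 'secret', 'admin', AUTH_URL)
ironic = ironic_client.get_client(
    1, os_username='admin', os_password='secret',
    os_tenant_name='admin', os_auth_url=AUTH_URL)

# Advertise the configurable capability on the Ironic node.
ironic.node.update('NODE_UUID', [{
    'op': 'add',
    'path': '/properties/capabilities',
    'value': 'vm_hypervisor:true',
}])

# Request it from the Nova side via a flavor extra spec; the scheduler's
# ComputeCapabilitiesFilter matches this against the node property above.
flavor = nova.flavors.find(name='baremetal-hypervisor')
flavor.set_keys({'capabilities:vm_hypervisor': 'true'})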

-Jay Faulkner

________________________________________
From: Steven Hardy <sha...@redhat.com>
Sent: Monday, September 15, 2014 4:44 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [tripleo][heat][ironic] Heat Ironic resources and 
"ready state" orchestration

All,

Starting this thread as a follow-up to a strongly negative reaction by the
Ironic PTL to my patches[1] adding initial Heat->Ironic integration, and
subsequent very detailed justification and discussion of why they may be
useful in this spec[2].

Back in Atlanta, I had some discussions with folks interested in making
"ready state"[3] preparation of bare-metal resources possible when
deploying bare-metal nodes via TripleO/Heat/Ironic.

The initial assumption is that there is some discovery step (either
automatic or static generation of a manifest of nodes), that can be input
to either Ironic or Heat.

Following discovery, but before the undercloud deploys OpenStack onto the
nodes, there are a few steps which may be desired to get the hardware into
a state where it's ready and fully optimized for the subsequent deployment:

- Updating and aligning firmware to meet requirements of qualification or
  site policy
- Optimization of BIOS configuration to match workloads the node is
  expected to run
- Management of machine-local storage, e.g. configuring local RAID for
  optimal resilience or performance (a rough sketch follows this list).
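
For instance, the RAID step might be driven through Ironic's vendor passthru
interface. This is only a hypothetical sketch: the passthru method name
'create_raid_configuration' and its arguments are invented for illustration,
pending whatever interface the DRAC blueprints actually land, and the
credentials and node UUID are placeholders.

from ironicclient import client

ironic = client.get_client(
    1, os_username='admin', os_password='secret',  # placeholder credentials
    os_tenant_name='admin',
    os_auth_url='http://keystone.example.com:5000/v2.0')

# Hypothetical vendor-specific call to build a RAID-1 set from two disks.
ironic.node.vendor_passthru(
    node_id='NODE_UUID',
    method='create_raid_configuration',
    args={'raid_level': '1', 'disk_count': 2})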

Interfaces to Ironic are landing (or have landed)[4][5][6] which make many
of these steps possible, but there's no easy way to either encapsulate the
(currently mostly vendor-specific) data associated with each step, or to
coordinate sequencing of the steps.

What is required is some tool to take a text definition of the required
configuration, turn it into a correctly sequenced series of API calls to
Ironic, expose any data associated with those API calls, and declare
success or failure on completion.  This is what Heat does.

So the idea is to create some basic (contrib, disabled by default) Ironic
Heat resources, then explore orchestrating ready-state configuration via
Heat.
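
To make that concrete, a hypothetical contrib resource might look roughly
like the sketch below. The class name, property names, the 'OS::Ironic::Node'
type, and the client wiring are all assumptions for illustration, not the
actual proposed code:

from heat.engine import properties, resource


class IronicNode(resource.Resource):
    """Hypothetical contrib resource registering a node with Ironic."""

    properties_schema = {
        'driver': properties.Schema(
            properties.Schema.STRING,
            'Ironic driver to manage this node, e.g. pxe_drac.',
            required=True),
        'driver_info': properties.Schema(
            properties.Schema.MAP,
            'Driver-specific configuration (BMC address, credentials).'),
    }

    def handle_create(self):
        # Assumes an Ironic client plugin is available to Heat; the
        # real patches may wire this up differently.
        node = self.client('ironic').node.create(
            driver=self.properties['driver'],
            driver_info=self.properties['driver_info'])
        self.resource_id_set(node.uuid)


def resource_mapping():
    # Hook Heat uses to discover contrib/plugin resources.
    return {'OS::Ironic::Node': IronicNode}

A template could then declare OS::Ironic::Node resources with their
dependencies, and Heat would handle the sequencing and the reporting of
success or failure.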

Given that Devananda and I have been banging heads over this for some time
now, I'd like to get broader feedback on the idea, my interpretation of
"ready state" as applied to the TripleO undercloud, and any alternative
implementation ideas.

Thanks!

Steve

[1] https://review.openstack.org/#/c/104222/
[2] https://review.openstack.org/#/c/120778/
[3] http://robhirschfeld.com/2014/04/25/ready-state-infrastructure/
[4] https://blueprints.launchpad.net/ironic/+spec/drac-management-driver
[5] https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
[6] https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
