On Mon, Sep 15, 2014 at 1:08 PM, Steven Hardy <sha...@redhat.com> wrote:
> On Mon, Sep 15, 2014 at 05:51:43PM +0000, Jay Faulkner wrote:
>> Steven,
>> It's important to note that two of the blueprints you reference:
>> https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
>> https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery
>> are both very unlikely to land in Ironic -- these are configuration and 
>> discovery pieces that best fit inside an operator-deployed CMDB, rather than 
>> Ironic trying to extend its scope significantly to include these types of 
>> functions. I expect the scoping of Ironic with regard to hardware 
>> discovery/interrogation, as well as configuration of hardware (as I will 
>> outline below), to be hot topics in Ironic design summit sessions in Paris.
> Hmm, okay - not sure I really get how a CMDB is going to help you configure
> your RAID arrays in an automated way?
> Or are you subscribing to the legacy datacentre model where a sysadmin
> configures a bunch of boxes via whatever method, puts their details into
> the CMDB, then feeds those details into Ironic?
>> A good way of looking at it is that Ironic is responsible for hardware *at 
>> provision time*. Registering the nodes in Ironic, as well as hardware 
>> settings/maintenance/etc while a workload is provisioned is left to the 
>> operators' CMDB.
>> This means what Ironic *can* do is modify the configuration of a node at 
>> provision time based on information passed down the provisioning pipeline. 
>> For instance, if you wanted to configure certain firmware pieces at 
>> provision time, you could do something like this:
>> Nova flavor sets capability:vm_hypervisor in the flavor that maps to the 
>> Ironic node. This would map to an Ironic driver that exposes vm_hypervisor 
>> as a capability, and upon seeing capability:vm_hypervisor has been 
>> requested, could then configure the firmware/BIOS of the machine to 
>> 'hypervisor friendly' settings, such as VT bit on and Turbo mode off. You 
>> could map multiple different combinations of capabilities as different 
>> Nova flavors, and have them all represent different configurations of the 
>> same pool of nodes. So, you end up with two categories of abilities: 
>> inherent abilities of the node (such as the amount of RAM or CPU installed), and 
>> configurable abilities (i.e. things that can be turned on/off at provision 
>> time on demand) -- or perhaps, in the future, even things like RAM and CPU 
>> will be dynamically provisioned into nodes at provision time.
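The capability-to-configuration flow described above could be sketched roughly as below. This is a minimal illustration, not real Ironic code; the names (CAPABILITY_PROFILES, settings_for_capabilities) and the exact BIOS setting keys are hypothetical.

```python
# Hypothetical sketch: how a driver might translate capabilities requested
# via the flavor into firmware/BIOS settings at provision time.
# All names and setting keys here are illustrative, not the Ironic API.

# Illustrative capability -> BIOS-settings profiles.
CAPABILITY_PROFILES = {
    # "hypervisor friendly": VT bit on, Turbo mode off, per the example above.
    "vm_hypervisor": {"vt_enabled": True, "turbo_enabled": False},
}

def settings_for_capabilities(requested_caps):
    """Merge the BIOS settings implied by each requested capability."""
    settings = {}
    for cap in requested_caps:
        settings.update(CAPABILITY_PROFILES.get(cap, {}))
    return settings

# A driver's provision-time hook could then push the merged settings to
# the node's BIOS via its vendor-specific management interface.
print(settings_for_capabilities(["vm_hypervisor"]))
# {'vt_enabled': True, 'turbo_enabled': False}
```

Each combination of capabilities a flavor requests resolves to one merged settings profile, which is why the same pool of nodes can back several flavors.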
> So you advocate pushing all the vendor-specific stuff down into various
> Ironic drivers,

... and providing a common abstraction / representation for it. Yes.
That is, after all, what OpenStack has done for compute, storage, and
networking, and what Ironic has set out to do for hardware
provisioning from the beginning.

>  is any of what you describe above possible today?

No. We had other priorities in Juno. It's probably one of the things
we'll prioritize in Kilo.

If you can't wait for Ironic to implement a common abstraction layer
for such functionality, then by all means, implement vendor-native
templates in Heat, but keep in mind that our goal is to move any
functionality which multiple vendors provide into the common API over
time. Vendor passthru is there as an early proving ground for vendors
to add their unique capabilities while we work towards cross-vendor
abstractions.

OpenStack-dev mailing list