On Tue, Sep 16, 2014 at 12:42 PM, Zane Bitter <zbit...@redhat.com> wrote:
> On 16/09/14 15:24, Devananda van der Veen wrote:
>>
>> On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter <zbit...@redhat.com> wrote:
>>>
>>> On 16/09/14 13:56, Devananda van der Veen wrote:
>>>>
>>>>
>>>> On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy <sha...@redhat.com> wrote:
>>>>>
>>>>>
>>>>> For example, today, I've been looking at the steps required for driving
>>>>> autodiscovery:
>>>>>
>>>>> https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
>>>>>
>>>>> Driving this process looks a lot like application orchestration:
>>>>>
>>>>> 1. Take some input (IPMI credentials and MAC addresses)
>>>>> 2. Maybe build an image and ramdisk (could drop credentials in)
>>>>> 3. Interact with the Ironic API to register nodes in maintenance mode
>>>>> 4. Boot the nodes, monitor state, wait for a signal back containing
>>>>>    some data obtained during discovery (same as WaitConditions or
>>>>>    SoftwareDeployment resources in Heat..)
>>>>> 5. Shutdown the nodes and mark them ready for use by nova
>>>>>
>>>>
>>>> My apologies if the following sounds snarky -- but I think there are a
>>>> few misconceptions that need to be cleared up about how and when one
>>>> might use Ironic. I also disagree that 1..5 looks like application
>>>> orchestration. Step 4 is a workflow, which I'll go into in a bit, but
>>>> this doesn't look at all like describing or launching an application
>>>> to me.
>>>
>>>
>>>
>>> +1 (Although step 3 does sound to me like something that matches Heat's
>>> scope.)
>>
>>
>> I think it's a simplistic use case, and Heat supports a lot more
>> complexity than is necessary to enroll nodes with Ironic.
>>
>>>
>>>> Step 1 is just parse a text file.
>>>>
>>>> Step 2 should be a prerequisite to doing -anything- with Ironic. Those
>>>> images need to be built and loaded in Glance, and the image UUID(s)
>>>> need to be set on each Node in Ironic (or on the Nova flavor, if going
>>>> that route) after enrollment. Sure, Heat can express this
>>>> declaratively (ironic.node.driver_info must contain key:deploy_kernel
>>>> with value:NNNN), but are you suggesting that Heat build the images,
>>>> or just take the UUIDs as input?
>>>>
>>>> Step 3 is, again, just parse a text file.
>>>>
>>>> I'm going to make an assumption here [*], because I think step 4 is
>>>> misleading. You shouldn't "boot a node" using Ironic -- you do that
>>>> through Nova. And you _don't_ get to specify which node you're booting.
>>>> You ask Nova to provision an _instance_ on a _flavor_ and it picks an
>>>> available node from the pool of nodes that match the request.
>>>
>>>
>>>
>>> I think your assumption is incorrect. Steve is well aware that
>>> provisioning
>>> a bare-metal Ironic server is done through the Nova API. What he's
>>> suggesting here is that the nodes would be booted - not Nova-booted, but
>>> booted in the sense of having power physically applied - while in
>>> maintenance mode in order to do autodiscovery of their capabilities.
>>
>>
>> Except simply applying power doesn't, in itself, accomplish anything
>> besides causing the machine to power on. Ironic will only prepare the
>> PXE boot environment when initiating a _deploy_.
>
>
> From what I gather elsewhere in this thread, the autodiscovery stuff is a
> proposal for the future, not something that exists in Ironic now, and that
> may be the source of the confusion.
>
> In any case, the etherpad linked at the top of this email was written by
> someone in the Ironic team and _clearly_ describes PXE booting a "discovery
> image" in maintenance mode in order to obtain hardware information about the
> box.
>

Huh. I should have looked at that earlier in the discussion. It is
referring to out-of-tree code whose spec was not approved during Juno.

Apparently, and unfortunately, throughout much of this discussion,
folks have been referring to potential features Ironic might someday
have, whereas I have been focused on the features we actually support
today. That is probably why it seems we are "talking past each other."
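For what it's worth, the "parse a text file" part of steps 1 and 3 really is
that small. A minimal sketch (the inventory format here is hypothetical; the
driver_info keys match Ironic's pxe_ipmitool driver, and the payloads are
shaped for POST /v1/nodes and POST /v1/ports -- everything else is
illustrative, not a definitive implementation):

```python
import csv
import io

def parse_inventory(text):
    """Parse lines of 'ipmi_address, ipmi_user, ipmi_password, mac'
    into (node, port) payload pairs for Ironic's REST API.

    The node payload is what POST /v1/nodes expects; the port payload
    goes to POST /v1/ports once the node's UUID is known. Maintenance
    mode would be flipped on afterwards (step 3) via a separate call.
    """
    pairs = []
    for row in csv.reader(io.StringIO(text)):
        ipmi_address, user, password, mac = (f.strip() for f in row)
        node = {
            "driver": "pxe_ipmitool",
            "driver_info": {
                "ipmi_address": ipmi_address,
                "ipmi_username": user,
                "ipmi_password": password,
            },
        }
        port = {"address": mac}  # add node_uuid after the node is created
        pairs.append((node, port))
    return pairs

inventory = "10.0.0.5, admin, secret, 52:54:00:aa:bb:cc"
node, port = parse_inventory(inventory)[0]
print(node["driver_info"]["ipmi_address"])  # 10.0.0.5
print(port["address"])  # 52:54:00:aa:bb:cc
```

No orchestration engine needed for this part -- it's a loop over a file and
two API calls per node.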

-Devananda

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev