Re: [openstack-dev] [ironic] Summary of ironic sessions from Sydney

2017-11-22 Thread Michael Still
Thanks for this summary. I'd say the cinder-booted IPA is definitely of
interest to the operators I've met. Building new IPAs, especially when
trying to iterate on what drivers are needed, is a pain, so being able
to iterate faster would be very useful. That said, I guess this implies
booting more than one machine off a volume at once?

Michael


Re: [openstack-dev] [ironic] Summary of ironic sessions from Sydney

2017-11-22 Thread Ruby Loo
Thank you, Julia, for sacrificing yourself and going to Australia; I'm glad
the koalas didn't get you :)

This summary is GREAT! I'm trying to figure out how we take all these asks
into consideration with all the existing asks and TODOs that are on our
plate. I guess the best plan of action (and a bit more procrastination) is
to discuss this at our virtual mid-cycle meetup next week [1].

--ruby

[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-November/124725.html



[openstack-dev] [ironic] Summary of ironic sessions from Sydney

2017-11-14 Thread Julia Kreger
Greetings ironic folk!

Like many other teams, we had very few ironic contributors make it to
Sydney. As such, I wanted to go ahead and write up a summary that
covers takeaways, questions, and obvious action items for the
community that were raised by operators and users present during the
sessions, so that we can use this as feedback to help guide our next
steps and feature planning.

Much of this is from my memory combined with notes on the various
etherpads. I would like to explicitly thank NobodyCam for reading
through this in advance to see if I was missing anything at a high
level since he was present in the vast majority of these sessions, and
dtantsur for sanity checking the content and asking for some
elaboration in some cases.

-Julia



Ironic Project Update
=====================

Questions largely arose around use of boot from volume, including some
scenarios we anticipated that would arise, as well as new scenarios
that we had not considered.

Boot nodes booting from the same volume
---------------------------------------

From a technical standpoint, when BFV is used with iPXE chain loading,
the chain loader reads the boot loader and related data from the
cinder volume (or, realistically, any iSCSI volume). This means that a
skilled operator is able to craft a specific volume that may just turn
around and unpack a ramdisk and operate the machine solely from RAM,
or that utilizes an NFS root.

This sort of technical configuration would not be something an average
user would make use of, but there are actual use cases among some
large-scale deployment operators where it would provide them real
value.

Additionally, this topic and the desire for this capability also came
up during the “Building a bare metal cloud is hard” talk Q&A.

Action Item: Check the data model to see if we prohibit using the
same volume across multiple nodes, and if so, consider removing that
prohibition.
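
To make the open question concrete, below is a rough sketch (Python
using requests; the endpoint, token, and UUIDs are all hypothetical
placeholders) of what an operator would attempt: creating a
boot-index-0 volume target on two nodes that both point at the same
cinder volume. Whether the second create currently succeeds is exactly
what the data model check needs to answer.

    import requests

    IRONIC = "http://ironic.example.com:6385"  # hypothetical endpoint
    HEADERS = {
        "X-Auth-Token": "...",                     # valid keystone token
        "X-OpenStack-Ironic-API-Version": "1.32",  # BFV API support
    }

    # The same cinder volume, declared as the boot target of two nodes.
    volume_id = "hypothetical-cinder-volume-uuid"
    for node_uuid in ["node-a-uuid", "node-b-uuid"]:
        resp = requests.post(
            IRONIC + "/v1/volume/targets",
            headers=HEADERS,
            json={
                "node_uuid": node_uuid,
                "volume_type": "iscsi",
                "boot_index": 0,  # boot_index 0 marks the boot volume
                "volume_id": volume_id,
            },
        )
        resp.raise_for_status()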

Cinder-less BFV support
-----------------------

Some operators are curious about booting ironic-managed nodes without
cinder in a BFV context. This is something we anticipated, and we
built the API and CLI interfaces to support it. Realistically, we just
need to offer the ability for the data to be read and utilized.

Action Item: Review code and ensure that we have some sort of no-op
driver or method that allows cinder-less node booting. For existing
drivers, this would be the shipment of the information to the BMC or
the write-out of iPXE templates as necessary.
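
As a very rough sketch of what such a no-op driver might look like
(the class name here is hypothetical; the StorageInterface base class
and its attach_volumes/detach_volumes/should_write_image methods are
what the BFV work introduced):

    from ironic.drivers import base


    class ExternallyManagedStorage(base.StorageInterface):
        """Hypothetical storage interface for cinder-less BFV.

        Assumes the operator created the volume target records out
        of band; ironic simply consumes them.
        """

        def get_properties(self):
            return {}

        def validate(self, task):
            # Nothing to talk to; targets are operator-provided.
            pass

        def attach_volumes(self, task):
            # No cinder attachment to perform; the volume already
            # exists and is reachable by the node.
            pass

        def detach_volumes(self, task):
            pass

        def should_write_image(self, task):
            # Skip deploying an image when a boot volume is defined.
            return not task.volume_targets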

Boot IPA from a cinder volume
-----------------------------

With larger IPA images, specifically in cases where the image contains
a substantial amount of utilities or tooling to perform cleaning,
providing a mechanism to point the deployment ramdisk at a cinder
volume would allow more efficient IO access.

Action Item: Discuss further - specifically how we could support this,
as we would need to better understand how some of the operators might
use such functionality.

Dedicated Storage Fabric support
--------------------------------

A question of dedicated storage fabric/networking support arose.
Users of Fibre Channel generally have a dedicated storage fabric by
the very nature of the separate infrastructure. However, with Ethernet
networking where iSCSI software initiators are used, or even possibly
converged network adapters, things get a little more complex.

Presently, with the iPXE boot from volume support, we boot using the
same interface details as the neutron VIF that the node is attached
with.

Moving forward, with BFV, the concept was to support the use of
explicitly defined interfaces as storage interfaces, denoted in ironic
as "volume connectors" with a type of "mac". In theory, we begin to
get functionality along these lines once
https://review.openstack.org/#/c/468353/ lands, as the user could
define two networks, and the storage network should then fall to the
explicit volume connector interface(s). The operator would just need
to ensure that the settings used on that storage network are such
that the node can boot and reach the iSCSI endpoint, and that a
default route is not provided.
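
For reference, declaring such a dedicated storage NIC might look
roughly like this (Python using requests; the endpoint, token, node
UUID, and MAC address are all hypothetical placeholders):

    import requests

    IRONIC = "http://ironic.example.com:6385"  # hypothetical endpoint
    HEADERS = {
        "X-Auth-Token": "...",
        "X-OpenStack-Ironic-API-Version": "1.32",
    }

    # Declare the second NIC as the storage path by creating a volume
    # connector of type "mac" pointing at that port's address.
    resp = requests.post(
        IRONIC + "/v1/volume/connectors",
        headers=HEADERS,
        json={
            "node_uuid": "node-a-uuid",
            "type": "mac",
            "connector_id": "52:54:00:12:34:56",  # storage NIC MAC
        },
    )
    resp.raise_for_status()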

The question then may be: does ironic do this quietly for the user
requesting the instance or not, and how do we document the use such
that operators can conceptualize it? How do we make this work at a
larger scale? How could this fit, or not fit, into multi-site
deployments?

In order to determine if there is more to do, we need to have more
discussions with operators.

Action items:

* Determine overall needs for operators, since this is implementation
architecture centric.
* Plan a forward path from there, if it makes sense.

Note: This may require more information to be stored or leveraged in
terms of structural or location-based data.

Migration questions from classic drivers to Hardware types
----------------------------------------------------------

One explicit question from the operator community was if we intended
to perform a migration.