Re: [openstack-dev] [ironic] automatic migration from classic drivers to hardware types?

2017-11-20 Thread Ruby Loo
Thanks for bringing this up Dmitry. See inline...

On Tue, Nov 14, 2017 at 5:05 PM, Alex Schultz  wrote:

> On Tue, Nov 14, 2017 at 8:10 AM, Dmitry Tantsur wrote:
> > Hi folks!
> >
> > This was raised several times; now I want to bring it to the wider
> > audience. We're planning [1] to deprecate classic drivers in Queens and
> > remove them in Rocky. It was pointed out at the Forum that we'd better
> > provide an automatic migration.
> >
> > I'd like to hear your opinion on the options:
> >
> > (1) Migration as part of 'ironic-dbsync upgrade'
> >
> > Pros:
> > * nothing new to do for the operators
> >
> > Cons:
> > * upgrade will fail completely, if for some nodes the matching hardware
> > types and/or interfaces are not enabled in ironic.conf
> >
>

ironic-dbsync upgrade has always (I think) been used to *only* update the
database schema, not to change the actual data in the database. I don't think
this is the right place to be doing this 'migration'.


> > (2) A separate script for migration
> >
> > Pros:
> > * can be done in advance (even while still on Pike)
> > * a failure won't fail the whole upgrade
> > * will rely on drivers enabled in actually running conductors, not on
> > ironic.conf
> >
> > Cons:
> > * a new upgrade action before Rocky
> > * won't be available in packaging
> > * unclear how to update nodes that are in some process (e.g. cleaning),
> > will probably have to be run several times
> >
> > (3) Migration as part of 'ironic-dbsync online_data_migrations'
> >
> > Pros:
> > * nothing new to do for the operators, similar to (1)
> > * probably a more natural place to do this than (1)
> > * can rely on drivers enabled in actually running conductors, not on
> > ironic.conf
> >
> > Cons:
> > * data migration will fail, if for some nodes the matching hardware types
> > and/or interfaces are not enabled in ironic.conf
> >
>

The online_data_migrations step exists for situations just like this:
migration of data :) So this is my vote. It is fine if the call fails; that
lets the operators know that they will need to make changes. If we go with
this, I envision it working something like:

Queens: deprecate classic drivers.
Queens: 'ironic-dbsync online_data_migrations' is run (at any point in time,
while the ironic services are running). Among other things, it would migrate
classic drivers to hardware types, failing if the needed hardware types or
interfaces are not enabled in the config file. This must succeed before
upgrading to Rocky.
Rocky: if the Queens 'ironic-dbsync online_data_migrations' did not complete
successfully, you will not be able to upgrade.
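
To make that concrete, here is a minimal, self-contained sketch of the
per-node logic such a migration could apply. Everything in it is illustrative
(the mapping has only a few entries, the function names are invented, and a
real hook would load and save nodes through the DB API in batches); the
(found, done) return value follows the convention nova uses for online data
migration hooks, and ironic's exact hook signature may differ:

# Illustrative sketch only, not the actual ironic implementation.

CLASSIC_TO_HARDWARE_TYPE = {
    'pxe_ipmitool': 'ipmi',
    'agent_ipmitool': 'ipmi',
    'pxe_ilo': 'ilo',
}


def migrate_batch(nodes, enabled_hardware_types, max_count):
    """Migrate up to max_count node records and return (found, done).

    'nodes' stands in for a batch of node rows; a real hook would load and
    save them through the DB API.
    """
    batch = nodes[:max_count]
    done = 0
    for node in batch:
        new_driver = CLASSIC_TO_HARDWARE_TYPE.get(node['driver'])
        if new_driver is None or new_driver not in enabled_hardware_types:
            # The failure case described above: the operator has to enable
            # the matching hardware type (and interfaces) and re-run.
            raise RuntimeError('cannot migrate node %s: hardware type %s '
                               'is not enabled' % (node['uuid'], new_driver))
        node['driver'] = new_driver  # interface fields would be set here too
        done += 1
    return len(batch), done


# Example run: succeeds only because 'ipmi' is enabled.
print(migrate_batch([{'uuid': 'abc', 'driver': 'pxe_ipmitool'}], {'ipmi'}, 50))

The point of returning (found, done) is that the command can be re-run in
batches; a failure only tells the operator what to enable before re-running.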


> Rather than failing in various ways, why not do what nova does with a
> pre-upgrade status check[0] and then just handle it in ironic-dbsync
> upgrade? This would allow operators to check, prior to running the upgrade,
> what might need to be changed. Additionally, the upgrade command itself
> could leverage the status check to fail nicely.
>
>
I like the idea of a status check, although I am doubtful that it is
do-able in Queens, given the goals we have. But of course, that can be up
for discussion etc.


> > (4) Do nothing, let operators handle the migration.
> >
>
> Please no.
>
> >
> > The most reasonable option for me seems to be (3), then (4). What do you think?
> >
>
> So this was chatted about in relation to some environment tooling we have,
> where we currently have the older 'pxe_ipmitool' defined and this will
> need to switch to 'ipmi'[1]. The issue with the hard cutover is that any
> tooling that currently works with multiple OpenStack releases to generate
> the required JSON for ironic will now have to take that into account. I
> know in our case we'll need to support Newton for longer, so making the
> tooling release-aware around this is just further tech debt that we'll be
> creating. Is there a better solution, either in the ironic client or in
> the API, to gracefully handle this transition for a longer period of
> time? I think this may be one of those decisions that has a far-reaching
> impact on deployers/operators, due to the changes they will have to make
> to support multiple versions or as they upgrade between versions, and
> that they aren't fully aware of yet since many may not be on Ocata. This
> change seems to have a high UX impact and IMHO should be done very
> carefully.
>


It seems to me that if this is going to cause undue hardship, we should
consider prolonging the deprecation period for, e.g., another cycle. I guess
I'd like to get an idea of how long is reasonable to handle this
transition... How do we get more data points on this, or is Alex the
representative for our users out there? :)

--ruby


>
> Thanks,
> -Alex
>
> [0] https://docs.openstack.org/nova/pike/cli/nova-status.html
> [1] http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2017-11-14.log.html#t2017-11-14T15:36:45
>
>
> > Dmitry

Re: [openstack-dev] [ironic] automatic migration from classic drivers to hardware types?

2017-11-14 Thread Alex Schultz
On Tue, Nov 14, 2017 at 8:10 AM, Dmitry Tantsur  wrote:
> Hi folks!
>
> This was raised several times; now I want to bring it to the wider audience.
> We're planning [1] to deprecate classic drivers in Queens and remove them in
> Rocky. It was pointed out at the Forum that we'd better provide an automatic
> migration.
>
> I'd like to hear your opinion on the options:
>
> (1) Migration as part of 'ironic-dbsync upgrade'
>
> Pros:
> * nothing new to do for the operators
>
> Cons:
> * upgrade will fail completely, if for some nodes the matching hardware
> types and/or interfaces are not enabled in ironic.conf
>
> (2) A separate script for migration
>
> Pros:
> * can be done in advance (even while still on Pike)
> * a failure won't fail the whole upgrade
> * will rely on drivers enabled in actually running conductors, not on
> ironic.conf
>
> Cons:
> * a new upgrade action before Rocky
> * won't be available in packaging
> * unclear how to update nodes that are in some process (e.g. cleaning), will
> probably have to be run several times
>
> (3) Migration as part of 'ironic-dbsync online_data_migrations'
>
> Pros:
> * nothing new to do for the operators, similar to (1)
> * probably a more natural place to do this than (1)
> * can rely on drivers enabled in actually running conductors, not on
> ironic.conf
>
> Cons:
> * data migration will fail, if for some nodes the matching hardware types
> and/or interfaces are not enabled in ironic.conf
>

Rather than failing in various ways, why not do what nova does with a
pre-upgrade status check[0] and then just handle it in ironic-dbsync
upgrade? This would allow operators to check, prior to running the upgrade,
what might need to be changed. Additionally, the upgrade command itself
could leverage the status check to fail nicely.
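
For what it's worth, a read-only check along those lines could be as simple
as the sketch below. Nothing like this exists in ironic today; the mapping,
messages and exit-code convention are just borrowed from the nova-status
idea for illustration:

import sys

# Hypothetical pre-upgrade check, loosely modelled on nova-status.
CLASSIC_TO_HARDWARE_TYPE = {
    'pxe_ipmitool': 'ipmi',
    'agent_ipmitool': 'ipmi',
}


def check_classic_drivers(node_drivers, enabled_hardware_types):
    """Return (exit_code, messages) without modifying any data."""
    messages = []
    for driver in sorted(set(node_drivers)):
        if driver not in CLASSIC_TO_HARDWARE_TYPE:
            continue  # already a hardware type, or unknown to this check
        target = CLASSIC_TO_HARDWARE_TYPE[driver]
        if target not in enabled_hardware_types:
            messages.append('nodes using %s need hardware type %s enabled '
                            'before migration' % (driver, target))
    return (1 if messages else 0), messages


if __name__ == '__main__':
    code, msgs = check_classic_drivers(['pxe_ipmitool'], {'redfish'})
    for msg in msgs:
        print('WARNING: %s' % msg)
    sys.exit(code)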


> (4) Do nothing, let operators handle the migration.
>

Please no.

>
> The most reasonable option for me seems to be (3), then (4). What do you think?
>

So this was chatted about in relation to some environment tooling we have,
where we currently have the older 'pxe_ipmitool' defined and this will need
to switch to 'ipmi'[1]. The issue with the hard cutover is that any tooling
that currently works with multiple OpenStack releases to generate the
required JSON for ironic will now have to take that into account. I know in
our case we'll need to support Newton for longer, so making the tooling
release-aware around this is just further tech debt that we'll be creating.
Is there a better solution, either in the ironic client or in the API, to
gracefully handle this transition for a longer period of time? I think this
may be one of those decisions that has a far-reaching impact on
deployers/operators, due to the changes they will have to make to support
multiple versions or as they upgrade between versions, and that they aren't
fully aware of yet since many may not be on Ocata. This change seems to have
a high UX impact and IMHO should be done very carefully.
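
To make the tech debt concrete, the JSON-generating tooling would end up
carrying something like the following (hypothetical helper, not anything
that exists today):

# The generator has to know, per target cloud/release, whether hardware
# types can be used at all.

def ipmi_driver_name(supports_hardware_types):
    """Pick the driver name to put into the generated node JSON.

    Whether a given release supports the 'ipmi' hardware type is exactly
    the per-release knowledge the tooling would have to carry around.
    """
    return 'ipmi' if supports_hardware_types else 'pxe_ipmitool'


def node_json(name, bmc_address, supports_hardware_types):
    return {
        'name': name,
        'driver': ipmi_driver_name(supports_hardware_types),
        'driver_info': {'ipmi_address': bmc_address},
    }


# The same inventory now produces different output per release.
print(node_json('node-0', '10.0.0.1', supports_hardware_types=False))
print(node_json('node-0', '10.0.0.1', supports_hardware_types=True))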

Thanks,
-Alex

[0] https://docs.openstack.org/nova/pike/cli/nova-status.html
[1] 
http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2017-11-14.log.html#t2017-11-14T15:36:45


> Dmitry
>
> [1]
> http://specs.openstack.org/openstack/ironic-specs/specs/approved/classic-drivers-future.html
>


[openstack-dev] [ironic] automatic migration from classic drivers to hardware types?

2017-11-14 Thread Dmitry Tantsur

Hi folks!

This was raised several times; now I want to bring it to the wider audience.
We're planning [1] to deprecate classic drivers in Queens and remove them in
Rocky. It was pointed out at the Forum that we'd better provide an automatic
migration.


I'd like to hear your opinion on the options:

(1) Migration as part of 'ironic-dbsync upgrade'

Pros:
* nothing new to do for the operators

Cons:
* upgrade will fail completely, if for some nodes the matching hardware types 
and/or interfaces are not enabled in ironic.conf


(2) A separate script for migration

Pros:
* can be done in advance (even while still on Pike)
* a failure won't fail the whole upgrade
* will rely on drivers enabled in actually running conductors, not on 
ironic.conf

Cons:
* a new upgrade action before Rocky
* won't be available in packaging
* unclear how to update nodes that are in some process (e.g. cleaning), will 
probably have to be run several times


(3) Migration as part of 'ironic-dbsync online_data_migrations'

Pros:
* nothing new to do for the operators, similar to (1)
* probably a more natural place to do this than (1)
* can rely on drivers enabled in actually running conductors, not on ironic.conf

Cons:
* data migration will fail, if for some nodes the matching hardware types and/or 
interfaces are not enabled in ironic.conf


(4) Do nothing, let operators handle the migration.


The most reasonable option for me seems to be (3), then (4). What do you think?
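
Whichever of (1)-(3) we go with, the per-node operation is roughly the same:
map the classic driver to a hardware type plus a set of interfaces, and check
them against what is enabled. A rough sketch (the table entries and interface
names below are examples only, not a complete or verified mapping):

CLASSIC_DRIVERS = {
    'pxe_ipmitool': ('ipmi', {'boot': 'pxe', 'deploy': 'iscsi'}),
    'agent_ipmitool': ('ipmi', {'boot': 'pxe', 'deploy': 'direct'}),
}


def resolve(classic_driver, enabled_hardware_types, enabled_interfaces):
    """Return (hardware_type, interfaces) for a node, or raise if the
    required hardware type or interfaces are not enabled."""
    try:
        hw_type, interfaces = CLASSIC_DRIVERS[classic_driver]
    except KeyError:
        raise RuntimeError('no hardware type known for %s' % classic_driver)
    if hw_type not in enabled_hardware_types:
        raise RuntimeError('hardware type %s is not enabled' % hw_type)
    for kind, name in interfaces.items():
        if name not in enabled_interfaces.get(kind, ()):
            raise RuntimeError('%s interface %s is not enabled' % (kind, name))
    return hw_type, interfaces


# Example: fails unless 'ipmi' plus the pxe/iscsi interfaces are enabled.
print(resolve('pxe_ipmitool', {'ipmi'},
              {'boot': {'pxe'}, 'deploy': {'iscsi'}}))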

Dmitry

[1] 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/classic-drivers-future.html

