On 08/12/2016 01:10 PM, Walter A. Boring IV wrote:
> 
>> I was leaning towards a separate repo until I started thinking about all
>> the overhead and complications this would cause. It's another repo for
>> cores to watch. It would cause everyone extra complication in setting up
>> their CI, which is already one of the biggest roadblocks. It would make
>> it a little harder to do things like https://review.openstack.org/297140
>> and https://review.openstack.org/346470 to be able to generate this:
>> http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
>> setup, more moving parts to break, and just generally more
>> complications.
>>
>> All things that can be solved for sure. I just question whether it would
>> be worth having that overhead. Frankly, there are better things I'd like
>> to spend my time on.
>>
>> I think at this point my first preference would actually be to define a
>> new tag. This addresses both the driver removal issue as well as the
>> backporting of driver bug fixes. I would like to see third party drivers
>> recognized and treated as being different, because in reality they are
>> very different from the rest of the code. Having something like
>> follows_deprecation_but_has_third_party_drivers_that_dont would make a
>> clear statement that there is a vendor component to this project that
>> really has to be treated differently and has different concerns
>> deployers need to be aware of.
>>
>> Barring that, I think my next choice would be to remove the tag. That
>> would really be unfortunate as we do want to make it clear to users that
>> Cinder will not arbitrarily break APIs or do anything between releases
>> without warning when it comes to non-third-party drivers. But if that is
>> what we need to do to effectively communicate what to expect from
>> Cinder, then I'm OK with that.
>>
>> My last choice (of the ones I'm favorable towards) would be marking a
>> driver as untested/unstable/abandoned/etc rather than removing it. We
>> could flag these a certain way and have them spam the logs like crazy
>> after upgrade to make it painfully clear that they are not
>> being maintained. But as Duncan pointed out, this doesn't have as much
>> impact for getting vendor attention. It's amazing the level of executive
>> involvement that can happen after a patch is put up for driver removal
>> due to non-compliance.
>>
>> Sean
>>
> I believe there is a compromise we could implement in Cinder that lets
> us deprecate unsupported drivers that aren't meeting the Cinder driver
> requirements, and still allows upgrades to work, without outright
> removing a driver immediately.
> 
>  1. Add a 'supported = True' attribute to every driver.
>  2. When a driver no longer meets Cinder community requirements, put a
>     patch up against the driver setting its supported flag to False.
>  3. When c-vol service starts, check the supported flag.  If the flag is
>     False, then log an exception, and disable the driver.
>  4. Allow the admin to put an entry in cinder.conf for the driver in
>     question "enable_unsupported_driver = True".  This will allow the
>     c-vol service to start the driver and allow it to work.  Log a
>     warning on every driver call.
>  5. This is a positive acknowledgement by the operator that they are
>     enabling a potentially broken driver. Use at your own risk.
>  6. If the vendor doesn't get the CI working in the next release, then
>     remove the driver. 
>  7. If the vendor gets the CI working again, then set the supported flag
>     back to True and all is good. 
> 
> 
> This allows a deprecation period for a driver, and keeps operators who
> upgrade their deployment from losing access to the volumes they have
> on those back-ends.  It will give them time to contact the community
> and/or do some research, and find out what happened to the driver.
> This also potentially gives the operator time to find a new supported
> backend and start migrating volumes.  I say potentially, because the
> driver may be broken, or it may work well enough to migrate volumes
> off of it to a new backend.
> 
> Having unsupported drivers in tree is terrible for the Cinder community,
> and in the long run terrible for operators.  Instantly removing drivers
> because CI is unstable is terrible for operators in the short term,
> because as soon as they upgrade OpenStack, they lose all access to
> managing their existing volumes.  Just because we leave a driver in tree
> in this state doesn't mean that the operator will be able to migrate if
> the driver is broken, but they'll have a chance depending on the state
> of the driver in question.  It could be horribly broken, but the
> breakage might be something fixable by someone who just knows Python.
> If the driver is gone from tree entirely, then that's a lot more to
> overcome.
> 
> I don't think there is a way to make everyone happy all the time, but I
> think this buys operators a small window of opportunity to still manage
> their existing volumes before the driver is removed.  It also still
> allows the Cinder community to deal with unsupported drivers in a way
> that will motivate vendors to keep their stuff working.

This seems very reasonable. It allows the cinder team to mark stuff
unsupported at any point where vendors do not meet their upstream
commitments, but still provides some path forward for operators who
didn't realize their chosen vendor had abandoned them and the community
until they are in the midst of an upgrade. It's very important that
the cinder team is able to keep a very visible hammer for vendors not
living up to their commitments.

Keeping some visible data around drivers that are flapping (going
unsupported, showing up with CI to get back out of that state,
disappearing again) would be great as well, to further give operators
data on which vendors are working in good faith and which aren't.
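
As a rough illustration of how steps 3-5 of the quoted proposal could
hang together, here is a minimal sketch. It assumes oslo.config and
stdlib logging; the ExampleDriver class, the load_driver() helper, and
registering enable_unsupported_driver as a global option (in a real
cinder.conf it would more likely live in the backend's own config
section) are all illustrative assumptions, not actual Cinder code.

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('enable_unsupported_driver',
                    default=False,
                    help='Positive acknowledgement from the operator '
                         'that an unsupported driver may still be '
                         'loaded. Use at your own risk.'),
    ])


    class ExampleDriver(object):
        # Steps 1/2: the patch marking the driver as out of compliance
        # would flip this class attribute to False.
        supported = True

        def create_volume(self, volume):
            if not self.supported:
                # Step 4: warn on every driver call while running in
                # the unsupported state.
                LOG.warning('%s is unsupported and may be removed in a '
                            'future release.', type(self).__name__)
            # ... backend-specific work would go here ...


    def load_driver(driver_cls):
        # Step 3: the check the c-vol service would make at startup.
        if not driver_cls.supported and not CONF.enable_unsupported_driver:
            LOG.error('%s no longer meets Cinder driver requirements. '
                      'Set enable_unsupported_driver = True in '
                      'cinder.conf to load it anyway, at your own risk.',
                      driver_cls.__name__)
            # The service would catch this and leave just this one
            # backend disabled.
            raise RuntimeError('Unsupported driver %s disabled'
                               % driver_cls.__name__)
        return driver_cls()

Failing loudly at startup (rather than silently skipping the backend)
keeps the problem visible, while the config option is the explicit
opt-in from step 5 that puts the risk squarely on the operator.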

        -Sean

-- 
Sean Dague
http://dague.net
