Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Jeremy Stanley
On 2017-11-15 00:37:26 +0000 (+0000), Fox, Kevin M wrote:
[...]
> One idea is the one at the root of Chaos Monkey: if something is
> hard, do it frequently. If upgrading is hard, we need to be doing
> it constantly so the pain gets largely eliminated. One idea would
> be to discourage devs from standing up a fresh devstack all the
> time and have them upgrade an existing one instead. If it's hard,
> then it's likely someone will chip in to make it less hard.

This is also the idea behind running grenade in CI. The previous
OpenStack release is deployed, an attempt at a representative (if
small) dataset is loaded into it, and then it is upgraded to the
release under development with the proposed change applied and
exercised to make sure the original resources built under the
earlier release are still in working order. We can certainly do more
to make this a better representation of "The Real World" within the
resource constraints of our continuous integration, but we do at
least have a framework in place to attempt it.
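
For anyone who hasn't dug into it, the flow has roughly this shape (a
sketch only, in Python for brevity; grenade itself is shell, and the
deploy.sh/upgrade.sh helpers here are hypothetical stand-ins for its
scripts):

#!/usr/bin/env python3
# Sketch of the grenade-style flow: deploy the previous release, create
# a representative resource, upgrade in place to the release under
# test, then verify the pre-upgrade resource still works.
import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

def server_status(name):
    out = subprocess.run(
        ["openstack", "server", "show", name, "-f", "value", "-c", "status"],
        check=True, capture_output=True, text=True)
    return out.stdout.strip()

sh("./deploy.sh", "stable/pike")     # hypothetical: stand up the old release
sh("openstack", "server", "create", "--image", "cirros",
   "--flavor", "m1.tiny", "--wait", "smoke0")  # "representative dataset"
sh("./upgrade.sh", "master")         # hypothetical: upgrade in place
assert server_status("smoke0") == "ACTIVE", "pre-upgrade resource broken"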

> Another is devstack in general. The tooling used by devs and that
> used by ops are so different as to isolate the devs from ops'
> pain. If devs used more ops-ish tooling, they would hit the same
> issues and would be more likely to find solutions that work for
> both parties.

Keep in mind that DevStack was developed to have a quick framework
anyone could use to locally deploy an all-in-one OpenStack from
source. It was not actually developed for CI automation, to the
extent that we developed a separate wrapper project to make DevStack
usable within our CI (the now somewhat archaically-named
devstack-gate project). It's certainly possible to replace that with
a more mainstream deployment tool, I think, so long as it maintains
the primary qualities we rely on: 1. rapid deployment, 2. can work
on a single system with fairly limited resources, 3. can deploy from
source and incorporate proposed patches, 4. pluggable/extensible so
that new services can be easily integrated even before they're
officially released.

> A third one is supporting multiple-version upgrades in the gate. I
> rarely have a problem with a cloud whose database is one version
> back. I have seen lots of issues with databases that contain data
> dating back to when the cloud was instantiated and then upgraded
> multiple times.

I believe this will be necessary anyway if we want to officially
support so-called "fast forward" upgrades, since anything that's not
tested is assumed to be (and in fact usually is) broken.
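
Exercising that in CI would not have to be fancy to be useful.
Something with this shape could catch schema breakage across several
cycles (a sketch; "nova-manage db sync" is a real command, while the
branch walk and the editable install are glue assumed for
illustration):

#!/usr/bin/env python3
# Sketch: apply schema migrations release-by-release against one
# database, the way a fast-forward upgrade would, and stop at the
# first broken step. Assumes it runs inside a nova git checkout and
# that /etc/nova/nova.conf points at the test database.
import subprocess
import sys

BRANCHES = ["stable/newton", "stable/ocata", "stable/pike", "master"]

for branch in BRANCHES:
    subprocess.run(["git", "checkout", branch], check=True)
    subprocess.run(["pip", "install", "-e", "."], check=True)
    result = subprocess.run(["nova-manage", "db", "sync"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit("migrations broke at %s:\n%s" % (branch, result.stderr))
    print("%s: migrations applied cleanly" % branch)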

> Another option is trying to unify/detangle the upgrade procedure.
> Upgrading the compute kit should be one or two commands if you can
> live with the defaults, not weeks of poring through release notes,
> finding the correct order from pages of text, and testing
> rigorously on test systems.

This also sounds like a defect in our current upgrade testing, if
we're somehow embedding upgrade automation in our testing without
providing the same tools to easily perform those steps in production
upgrades.

> How about a tool that does the following: dump the database
> somewhere temporary, iterate over all the upgrade job components,
> and see whether it successfully avoids corrupting your database.
> That takes a while to do manually. Ideally it could even upload
> stack traces back to a bug tracker for attention.

Without a clearer definition of "successfully avoids corrupting
your database" suitable for automated checking, I don't see how
this one is realistic. Do we have a database validation tool now?
If we do, is it deficient in some way? If we don't, what
specifically should it be checking? It seems like something we
would also want to run at the end of all our upgrade tests.
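
One narrow, mechanically checkable definition would at least be
referential integrity after the migrations run. A minimal sketch of
that check (the connection URL is made up, and a real tool would need
service-specific knowledge on top of this):

#!/usr/bin/env python3
# Sketch: walk every declared foreign key and count orphaned rows,
# that is, rows whose reference points at nothing. Zero hits is a
# necessary (not sufficient) condition for "the upgrade didn't
# corrupt the database".
from sqlalchemy import create_engine, inspect, text

engine = create_engine("mysql+pymysql://nova:secret@localhost/nova")  # made up
insp = inspect(engine)

with engine.connect() as conn:
    for table in insp.get_table_names():
        for fk in insp.get_foreign_keys(table):
            if not fk["constrained_columns"]:
                continue
            col = fk["constrained_columns"][0]
            ref_t = fk["referred_table"]
            ref_c = fk["referred_columns"][0]
            orphans = conn.execute(text(
                "SELECT COUNT(*) FROM %s t LEFT JOIN %s r ON t.%s = r.%s "
                "WHERE t.%s IS NOT NULL AND r.%s IS NULL"
                % (table, ref_t, col, ref_c, col, ref_c))).scalar()
            if orphans:
                print("%s.%s: %d rows reference missing %s.%s"
                      % (table, col, orphans, ref_t, ref_c))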
-- 
Jeremy Stanley




Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Thierry Carrez
John Dickinson wrote:
> What I heard from ops in the room is that they want (to start) one
> release a year whose branch isn't deleted after a year. What if that's
> exactly what we did? I propose that OpenStack only do one release a year
> instead of two. We still keep N-2 stable releases around. We still do
> backports to all open stable branches. We still do all the things we're
> doing now, we just do it once a year instead of twice.

I started a thread around this specific suggestion on the -sigs list at:

http://lists.openstack.org/pipermail/openstack-sigs/2017-November/000149.html

Please continue the discussion there, to avoid the cross-posting.

If you haven't already, please subscribe at:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

-- 
Thierry Carrez (ttx)





Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Thierry Carrez
As suggested by Rocky, I moved the discussion to the -sigs list by
posting my promised summary of the session at:

http://lists.openstack.org/pipermail/openstack-sigs/2017-November/000148.html

Please continue the discussion there, to avoid the cross-posting.

If you haven't already, please subscribe at:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

-- 
Thierry Carrez (ttx)



Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-15 Thread Thierry Carrez
Rochelle Grober wrote:
> Folks,
> 
> This discussion and the people interested in it seem like a perfect 
> application of the SIG process.  By turning LTS into a SIG, everyone can 
> discuss the issues on the SIG mailing list and the discussion shouldn't end 
> up split.  If it turns into a project, great.  If a solution is found that 
> doesn't need a new project, great.  Even once there is a decision on how to
> move forward, there will still be implementation issues and enhancements, so 
> the SIG could very well be long-lived.  But the important aspect of this is:  
> keeping the discussion in a place where both devs and ops can follow the 
> whole thing and act on recommendations.

That's an excellent suggestion, Rocky.

Moving the discussion to a SIG around LTS / longer-support / post-EOL
support would also be a great way to form a team to work on that.

Yes, there is a one-time pain involved with subscribing to the -sigs ML,
but I'd say that it's a good idea anyway, and this minimal friction
might limit the discussion to people who can actually help with
setting something up.

So join:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

While I'm not sure that's the best name for it, let's use [lts] as
the topic prefix there, as Rocky suggested.

I'll start a couple of threads.

-- 
Thierry Carrez (ttx)



Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread John Dickinson


On 14 Nov 2017, at 16:08, Davanum Srinivas wrote:

> On Wed, Nov 15, 2017 at 10:44 AM, John Dickinson  wrote:
>>
>>
>> On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
>>
>>> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M  wrote:
>>>> The pressure for #2 comes from the inability to skip upgrades and the fact
>>>> that upgrades are hugely time consuming still.
>>>>
>>>> If you want to reduce the push for #2 and help developers get their
>>>> wish of getting features into users' hands sooner, the path to upgrade
>>>> really needs to be much less painful.
>>>>
>>>
>>> +1000
>>>
>>> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
>>> execute the upgrade. (and we skipped a version)
>>> Scheduling all the relevant internal teams is a monumental task
>>> because we don't have dedicated teams for those projects and they have
>>> other priorities.
>>> Upgrading affects a LOT of our systems, some we don't fully have
>>> control over. And it can take months to get new deployments on those
>>> systems. (and after, we have to test compatibility, of course)
>>>
>>> So I guess you can understand my frustration when I'm told to upgrade
>>> more often and that skipping versions is discouraged/unsupported.
>>> At the current pace, I'm just falling behind. I *need* to skip
>>> versions to keep up.
>>>
>>> So for our next upgrades, we plan on skipping even more versions if
>>> the database migration allows it. (except for Nova which is a huge
>>> PITA to be honest due to CellsV1)
>>> I just don't see any other ways to keep up otherwise.
>>
>> ?!?!
>>
>> What does it take for this to never happen again? No operator should need to 
>> plan and execute an upgrade for a whole year to upgrade one year's worth of 
>> code development.
>>
>> We don't need new policies, new teams, more releases, fewer releases, or 
>> anything like that. The goal is NOT "let's have an LTS release". The goal 
>> should be "How do we make sure Mattieu and everyone else in the world can 
>> actually deploy and use the software we are writing?"
>>
>> Can we drop the entire LTS discussion for now and focus on "make upgrades 
>> take less than a year" instead? After we solve that, let's come back around 
>> to LTS versions, if needed. I know there's already some work around that. 
>> Let's focus there and not be distracted about the best bureaucracy for not 
>> deleting two-year-old branches.
>>
>>
>> --John
>
> John,
>
> So... Any concrete ideas on how to achieve that?
>
> Thanks,
> Dims
>

Depends on what the upgrade problems are. I'd think the project teams that 
can't currently do seamless or skip-level upgrades would know best about what's 
needed. I suspect there will be both small and large changes needed in some 
projects.

Mathieu's list of realities in a different reply seems very normal. Operators
are responsible for more than just OpenStack projects, and they've got to 
coordinate changes in deployed OpenStack projects with other systems they are 
running. Working through that list of realities could help identify some areas 
of improvement.

Spitballing process ideas...
* use a singular tag in launchpad to track upgrade stories. better yet, report
on the status of these across all openstack projects so anyone can see what's
needed to get to a smooth upgrade (see the sketch after this list)
* redouble efforts on multi-node and rolling upgrade testing. make sure every 
project is using it
* make smooth (and skip-level) upgrades a cross-project goal and don't set 
others until that one is achieved
* add upgrade stories and tests to the interop tests
* allocate time for ops to specifically talk about upgrade stories at the PTG. 
make sure as many devs are in the room as possible.
* add your cell phone number to the project README so that any operator can 
call you as soon as they try to upgrade (perhaps not 100% serious)
* add testing infrastructure that is locked to distro-provided versions of 
dependencies (eg install on xenial with only apt or install on rhel 7 with only 
yum)
* only do one openstack release a year. keep N-2 releases around. give ops a 
chance to upgrade before we delete branches
* do an openstack release every month. severely compress the release cycle and 
force everything to work with disparate versions. this will drive good testing, 
strong, stable interfaces, and smooth upgrades
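
To make the first item on that list concrete, even a dumb report would
be a start. Something like this (a sketch; anonymous launchpadlib
access works this way, but the "upgrade" tag and the project list are
just examples):

#!/usr/bin/env python3
# Sketch: count open bugs tagged "upgrade" across a few projects so
# the state of upgrade pain is visible in one place.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously("upgrade-report", "production",
                                 version="devel")
for name in ("nova", "neutron", "cinder", "glance", "keystone"):
    tasks = lp.projects[name].searchTasks(
        tags=["upgrade"],
        status=["New", "Confirmed", "Triaged", "In Progress"])
    print("%s: %d open upgrade bugs" % (name, len(tasks)))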


Ah, just saw Kevin's reply in a different message. I really like his idea of 
"use ops tooling for day-to-day dev work. stop using devstack".


Ultimately it will come down to typing in some code and merging it into a 
project. I do not know what's needed there. It's probably different for every 
project.



--John




>>
>>
>> /me puts on asbestos pants
>>
>>>
>>> --
>>> Mathieu
>>>

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Fox, Kevin M
I can think of a few ideas, though some sound painful on paper. Not really
recommending anything, just thinking out loud...

One idea is the one at the root of Chaos Monkey: if something is hard, do it
frequently. If upgrading is hard, we need to be doing it constantly so the pain
gets largely eliminated. One idea would be to discourage devs from standing up a
fresh devstack all the time and have them upgrade an existing one instead. If
it's hard, then it's likely someone will chip in to make it less hard.

Another is devstack in general. The tooling used by devs and that used by ops
are so different as to isolate the devs from ops' pain. If devs used more
ops-ish tooling, they would hit the same issues and would be more likely to
find solutions that work for both parties.

A third one is supporting multiple-version upgrades in the gate. I rarely have
a problem with a cloud whose database is one version back. I have seen lots of
issues with databases that contain data dating back to when the cloud was
instantiated and then upgraded multiple times.

Another option is trying to unify/detangle the upgrade procedure. Upgrading the
compute kit should be one or two commands if you can live with the defaults,
not weeks of poring through release notes, finding the correct order from pages
of text, and testing rigorously on test systems.

How about a tool that does the following: dump the database somewhere
temporary, iterate over all the upgrade job components, and see whether it
successfully avoids corrupting your database. That takes a while to do
manually. Ideally it could even upload stack traces back to a bug tracker for
attention.
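
Roughly this shape, say (just a sketch to make the idea concrete; the
uniform "-manage db sync" entry point is an oversimplification, neutron
for one uses neutron-db-manage, and the scratch MySQL on port 3307 is
assumed to already be running and to be what the services' configs
point at):

#!/usr/bin/env python3
# Sketch: snapshot the production DB, load the copy into a scratch
# MySQL, run each service's schema migrations against it, and collect
# tracebacks (candidates for auto-filing against a bug tracker).
import subprocess
import sys
import tempfile

SERVICES = ["nova", "cinder", "glance"]  # simplification: uniform CLI

with tempfile.NamedTemporaryFile(suffix=".sql") as dump:
    subprocess.run(["mysqldump", "--all-databases"], stdout=dump, check=True)
    dump.flush()
    with open(dump.name, "rb") as copy:
        subprocess.run(["mysql", "--port=3307"], stdin=copy, check=True)

failures = []
for svc in SERVICES:
    r = subprocess.run(["%s-manage" % svc, "db", "sync"],
                       capture_output=True, text=True)
    if r.returncode != 0:
        failures.append((svc, r.stderr))

for svc, err in failures:
    sys.stderr.write("%s: migration failed\n%s\n" % (svc, err))
sys.exit(1 if failures else 0)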

Thanks,
Kevin

From: Davanum Srinivas [dava...@gmail.com]
Sent: Tuesday, November 14, 2017 4:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-oper.
Subject: Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

On Wed, Nov 15, 2017 at 10:44 AM, John Dickinson  wrote:
>
>
> On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
>
>> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M  wrote:
>>> The pressure for #2 comes from the inability to skip upgrades and the fact 
>>> that upgrades are hugely time consuming still.
>>>
>>> If you want to reduce the push for #2 and help developers get their
>>> wish of getting features into users' hands sooner, the path to upgrade
>>> really needs to be much less painful.
>>>
>>
>> +1000
>>
>> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
>> execute the upgrade. (and we skipped a version)
>> Scheduling all the relevant internal teams is a monumental task
>> because we don't have dedicated teams for those projects and they have
>> other priorities.
>> Upgrading affects a LOT of our systems, some we don't fully have
>> control over. And it can take months to get new deployments on those
>> systems. (and after, we have to test compatibility, of course)
>>
>> So I guess you can understand my frustration when I'm told to upgrade
>> more often and that skipping versions is discouraged/unsupported.
>> At the current pace, I'm just falling behind. I *need* to skip
>> versions to keep up.
>>
>> So for our next upgrades, we plan on skipping even more versions if
>> the database migration allows it. (except for Nova which is a huge
>> PITA to be honest due to CellsV1)
>> I just don't see any other ways to keep up otherwise.
>
> ?!?!
>
> What does it take for this to never happen again? No operator should need to 
> plan and execute an upgrade for a whole year to upgrade one year's worth of 
> code development.
>
> We don't need new policies, new teams, more releases, fewer releases, or 
> anything like that. The goal is NOT "let's have an LTS release". The goal 
> should be "How do we make sure Mattieu and everyone else in the world can 
> actually deploy and use the software we are writing?"
>
> Can we drop the entire LTS discussion for now and focus on "make upgrades 
> take less than a year" instead? After we solve that, let's come back around 
> to LTS versions, if needed. I know there's already some work around that. 
> Let's focus there and not be distracted about the best bureaucracy for not 
> deleting two-year-old branches.
>
>
> --John

John,

So... Any concrete ideas on how to achieve that?

Thanks,
Dims

>
>
> /me puts on asbestos pants
>
>>
>> --
>> Mathieu
>>



--
Davanum Srinivas :: https://twitter.com/dims


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Erik McCormick
On Tue, Nov 14, 2017 at 4:10 PM, Rochelle Grober
 wrote:
> Folks,
>
> This discussion and the people interested in it seem like a perfect 
> application of the SIG process.  By turning LTS into a SIG, everyone can 
> discuss the issues on the SIG mailing list and the discussion shouldn't end 
> up split.  If it turns into a project, great.  If a solution is found that 
> doesn't need a new project, great.  Even once there is a decision on how to
> move forward, there will still be implementation issues and enhancements, so 
> the SIG could very well be long-lived.  But the important aspect of this is:  
> keeping the discussion in a place where both devs and ops can follow the 
> whole thing and act on recommendations.
>
> Food for thought.
>
> --Rocky
>
Just to add more legs to the spider that is this thread: I think the
SIG idea is a good one. It may evolve into a project team some day,
but for now it's a free-for-all polluting two mailing lists and
multiple etherpads. How do we go about creating one?

-Erik

>> -----Original Message-----
>> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com]
>> Sent: Tuesday, November 14, 2017 8:31 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> ; openstack-oper. <openstack-operat...@lists.openstack.org>
>> Subject: Re: [openstack-dev] Upstream LTS Releases
>>
>> Hi all - please note this conversation has been split variously across -dev 
>> and -
>> operators.
>>
>> One small observation from the discussion so far is that it seems as though
>> there are two issues being discussed under the one banner:
>> 1) maintain old releases for longer
>> 2) do stable releases less frequently
>>
>> It would be interesting to understand if the people who want longer
>> maintenance windows would be helped by #2.
>>
>> On 14 November 2017 at 09:25, Doug Hellmann 
>> wrote:
>> > Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>> >> >> The concept, in general, is to create a new set of cores from
>> >> >> these groups, and use 3rd party CI to validate patches. There are
>> >> >> lots of details to be worked out yet, but our amazing UC (User
>> >> Committee) will begin working out the details.
>> >> >
>> >> > What is the most worrying is the exact "take over" process. Does it
>> >> > mean that the teams will give away the +2 power to a different
>> >> > team? Or will our (small) stable teams still be responsible for
>> >> > landing changes? If so, will they have to learn how to debug 3rd party 
>> >> > CI
>> jobs?
>> >> >
>> >> > Generally, I'm scared of both overloading the teams and losing the
>> >> > control over quality at the same time :) Probably the final proposal 
>> >> > will
>> clarify it..
>> >>
>> >> The quality of backported fixes is expected to be a direct (and
>> >> only?) interest of those new teams of new cores, coming from users
>> >> and operators and vendors. The more parties to establish their 3rd
>> >> party
>> >
>> > We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> > should not assume that they are needed or will be present. They may
>> > be, but we shouldn't build policy around the assumption that they
>> > will. Why would we have third-party jobs on an old branch that we
>> > don't have on master, for instance?
>> >
>> >> checking jobs, the better proposed changes communicated, which
>> >> directly affects the quality in the end. I also suppose, contributors
>> >> from ops world will likely be only struggling to see things getting
>> >> fixed, and not new features adopted by legacy deployments they're used
>> to maintain.
>> >> So in theory, this works and as a mainstream developer and
>> >> maintainer, you need not fear losing control over LTS code :)
>> >>
>> >> Another question is how not to block on each other, and not push
>> >> contributors away when things are getting awry, jobs failing and
>> >> merging is blocked for a long time, or there is no consensus reached
>> >> in a code review. I propose the LTS policy to enforce CI jobs be
>> >> non-voting, as a first step on that way, and giving every LTS team
>> >> member core rights, maybe? Not sure if that works though.
>> >
>> > I'm not sure what change you're proposing for CI jobs and their voting
>> > status. Do you mean we should make the jobs non-voting as soon as the
>> > branch passes out of the stable support period?
>> >
>> > Regarding the review team, anyone on the review team for a branch that
>> > goes out of stable support will need to have +2 rights in that branch.
>> > Otherwise there's no point in saying that they're maintaining the
>> > branch.
>> >
>> > Doug
>> >
>> >

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Erik McCormick
On Tue, Nov 14, 2017 at 6:44 PM, John Dickinson  wrote:
>
>
> On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
>
>> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M  wrote:
>>> The pressure for #2 comes from the inability to skip upgrades and the fact 
>>> that upgrades are hugely time consuming still.
>>>
>>> If you want to reduce the push for #2 and help developers get their
>>> wish of getting features into users' hands sooner, the path to upgrade
>>> really needs to be much less painful.
>>>
>>
>> +1000
>>
>> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
>> execute the upgrade. (and we skipped a version)
>> Scheduling all the relevant internal teams is a monumental task
>> because we don't have dedicated teams for those projects and they have
>> other priorities.
>> Upgrading affects a LOT of our systems, some we don't fully have
>> control over. And it can take months to get new deployments on those
>> systems. (and after, we have to test compatibility, of course)
>>
>> So I guess you can understand my frustration when I'm told to upgrade
>> more often and that skipping versions is discouraged/unsupported.
>> At the current pace, I'm just falling behind. I *need* to skip
>> versions to keep up.
>>
>> So for our next upgrades, we plan on skipping even more versions if
>> the database migration allows it. (except for Nova which is a huge
>> PITA to be honest due to CellsV1)
>> I just don't see any other ways to keep up otherwise.
>
> ?!?!
>
> What does it take for this to never happen again? No operator should need to 
> plan and execute an upgrade for a whole year to upgrade one year's worth of 
> code development.
>
> We don't need new policies, new teams, more releases, fewer releases, or 
> anything like that. The goal is NOT "let's have an LTS release". The goal 
> should be "How do we make sure Mattieu and everyone else in the world can 
> actually deploy and use the software we are writing?"
>
> Can we drop the entire LTS discussion for now and focus on "make upgrades 
> take less than a year" instead? After we solve that, let's come back around 
> to LTS versions, if needed. I know there's already some work around that. 
> Let's focus there and not be distracted about the best bureaucracy for not 
> deleting two-year-old branches.
>
>
> --John
>
>
>
> /me puts on asbestos pants
>

OK, let's tone down the flamethrower there a bit Mr. Asbestos Pants
;). The LTS push is not in lieu of the quest for simpler upgrades. There
is also an effort to enable fast-forward upgrades going on. However,
this is a non-trivial task that will take many cycles to get to a
point where it's truly what you're looking for. The long term desire
of having LTS releases encompasses being able to hop from one LTS to
the next without stopping over. We just aren't there yet.

However, what we *can* do is make it so when mgagne finally gets to
Newton (or Ocata or wherever) on his next run, the code isn't
completely EOL and it can still receive some important patches. This
can be accomplished in the very near term, and that is what a certain
subset of us are focused on.

We still desire to skip versions. We still desire to have upgrades be
non-disruptive and non-destructive. This is just one step on the way
to that. This discussion has been going on for cycle after cycle with
little more than angst between ops and devs to show for it. This is
the first time we've had progress on this ball of goo that really
matters. Let's all be proactive contributors to the solution.

Those interested in having a say in the policy, put your $0.02 here:
https://etherpad.openstack.org/p/LTS-proposal

Peace, Love, and International Grooviness,
Erik

>>
>> --
>> Mathieu
>>


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Mike Smith
For those wondering why operators can’t always upgrade sooner, I can add a 
little bit of color:  In our clouds, we have a couple vendors (one network 
plugin, one cinder driver) and those vendors typically are 1-3 releases behind 
‘cutting edge’.  By the time they support the version we want to go to, that 
version is almost end-of-life, which can make things interesting.  On the 
bright side, by then there are usually some helpful articles out there about 
the issues upgrading from A to B.

As for the planning time required - for us, it mostly boils down to testing or 
doing it at a time when some amount of disruption is at least somewhat 
tolerable.  For example, for online retail folks like me, upgrading between 
October and December would be out of the question due to the busy shopping 
season that is almost upon us.

I will say that I was very impressed with some of the containerized demos that 
were given at the Summit last week.  I plan to look into some containerized 
options next year which hopefully could ease the upgrade process for us.  
Still, there is a lot of testing involved, coordination with 3rd parties, and 
other stars that would still have to align.

At Overstock we have also started maintaining two completely separate 
production clouds and have orchestration to build/rebuild VMs on either one as 
needed. Most of the time we spread all our apps across both clouds. So next year
when we get the chance to upgrade cloud A, we can either rebuild things on B, 
or just shut them down while we rebuild A.  Then we would repeat on cloud B.  
Hopefully this eases our upgrade process…at least that’s what we are hoping!

My 2 cents.  Thanks



On Nov 14, 2017, at 4:44 PM, John Dickinson wrote:



On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:

On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M wrote:
The pressure for #2 comes from the inability to skip upgrades and the fact that 
upgrades are hugely time consuming still.

If you want to reduce the push for #2 and help developers get their wish
of getting features into users' hands sooner, the path to upgrade really needs
to be much less painful.


+1000

We are upgrading from Kilo to Mitaka. It took 1 year to plan and
execute the upgrade. (and we skipped a version)
Scheduling all the relevant internal teams is a monumental task
because we don't have dedicated teams for those projects and they have
other priorities.
Upgrading affects a LOT of our systems, some we don't fully have
control over. And it can take months to get new deployments on those
systems. (and after, we have to test compatibility, of course)

So I guess you can understand my frustration when I'm told to upgrade
more often and that skipping versions is discouraged/unsupported.
At the current pace, I'm just falling behind. I *need* to skip
versions to keep up.

So for our next upgrades, we plan on skipping even more versions if
the database migration allows it. (except for Nova which is a huge
PITA to be honest due to CellsV1)
I just don't see any other ways to keep up otherwise.

?!?!

What does it take for this to never happen again? No operator should need to 
plan and execute an upgrade for a whole year to upgrade one year's worth of 
code development.

We don't need new policies, new teams, more releases, fewer releases, or 
anything like that. The goal is NOT "let's have an LTS release". The goal 
should be "How do we make sure Mattieu and everyone else in the world can 
actually deploy and use the software we are writing?"

Can we drop the entire LTS discussion for now and focus on "make upgrades take 
less than a year" instead? After we solve that, let's come back around to LTS 
versions, if needed. I know there's already some work around that. Let's focus 
there and not be distracted about the best bureaucracy for not deleting 
two-year-old branches.


--John



/me puts on asbestos pants


--
Mathieu






Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread John Dickinson


On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:

> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M  wrote:
>> The pressure for #2 comes from the inability to skip upgrades and the fact 
>> that upgrades are hugely time consuming still.
>>
>> If you want to reduce the push for #2 and help developers get their
>> wish of getting features into users' hands sooner, the path to upgrade really
>> needs to be much less painful.
>>
>
> +1000
>
> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
> execute the upgrade. (and we skipped a version)
> Scheduling all the relevant internal teams is a monumental task
> because we don't have dedicated teams for those projects and they have
> other priorities.
> Upgrading affects a LOT of our systems, some we don't fully have
> control over. And it can take months to get new deployments on those
> systems. (and after, we have to test compatibility, of course)
>
> So I guess you can understand my frustration when I'm told to upgrade
> more often and that skipping versions is discouraged/unsupported.
> At the current pace, I'm just falling behind. I *need* to skip
> versions to keep up.
>
> So for our next upgrades, we plan on skipping even more versions if
> the database migration allows it. (except for Nova which is a huge
> PITA to be honest due to CellsV1)
> I just don't see any other ways to keep up otherwise.

?!?!

What does it take for this to never happen again? No operator should need to 
plan and execute an upgrade for a whole year to upgrade one year's worth of 
code development.

We don't need new policies, new teams, more releases, fewer releases, or 
anything like that. The goal is NOT "let's have an LTS release". The goal 
should be "How do we make sure Mattieu and everyone else in the world can 
actually deploy and use the software we are writing?"

Can we drop the entire LTS discussion for now and focus on "make upgrades take 
less than a year" instead? After we solve that, let's come back around to LTS 
versions, if needed. I know there's already some work around that. Let's focus 
there and not be distracted about the best bureaucracy for not deleting 
two-year-old branches.


--John



/me puts on asbestos pants

>
> --
> Mathieu
>




Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Mathieu Gagné
On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M  wrote:
> The pressure for #2 comes from the inability to skip upgrades and the fact 
> that upgrades are hugely time consuming still.
>
> If you want to reduce the push for #2 and help developers get their
> wish of getting features into users' hands sooner, the path to upgrade really
> needs to be much less painful.
>

+1000

We are upgrading from Kilo to Mitaka. It took 1 year to plan and
execute the upgrade. (and we skipped a version)
Scheduling all the relevant internal teams is a monumental task
because we don't have dedicated teams for those projects and they have
other priorities.
Upgrading affects a LOT of our systems, some we don't fully have
control over. And it can take months to get new deployments on those
systems. (and after, we have to test compatibility, of course)

So I guess you can understand my frustration when I'm told to upgrade
more often and that skipping versions is discouraged/unsupported.
At the current pace, I'm just falling behind. I *need* to skip
versions to keep up.

So for our next upgrades, we plan on skipping even more versions if
the database migration allows it. (except for Nova which is a huge
PITA to be honest due to CellsV1)
I just don't see any other ways to keep up otherwise.

--
Mathieu



Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Fox, Kevin M
The pressure for #2 comes from the inability to skip upgrades and the fact that 
upgrades are hugely time consuming still.

If you want to reduce the push for #2 and help developers get their wish
of getting features into users' hands sooner, the path to upgrade really needs
to be much less painful.

Thanks,
Kevin

From: Erik McCormick [emccorm...@cirrusseven.com]
Sent: Tuesday, November 14, 2017 9:21 AM
To: Blair Bethwaite
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-oper.
Subject: Re: [openstack-dev] [Openstack-operators]  Upstream LTS Releases

On Tue, Nov 14, 2017 at 11:30 AM, Blair Bethwaite
 wrote:
> Hi all - please note this conversation has been split variously across
> -dev and -operators.
>
> One small observation from the discussion so far is that it seems as
> though there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
>
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
>

I would like to hear from people who do *not* want #2 and why not.
What are the benefits of 6 months vs. 1 year? I have heard objections
in the hallway track, but I have struggled to retain the rationale for
more than 10 seconds. I think this may be more of a religious
discussion that could take a while though.

#1 is something we can act on right now with the eventual goal of
being able to skip releases entirely. We are addressing the
maintain-old-releases issue right now. As we get farther down the
road of fast-forward upgrade tooling, then we will be able to please
those wishing for a slower upgrade cadence, and those that want to
stay on the bleeding edge simultaneously.

-Erik

> On 14 November 2017 at 09:25, Doug Hellmann  wrote:
>> Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>>> >> The concept, in general, is to create a new set of cores from these
>>> >> groups, and use 3rd party CI to validate patches. There are lots of
>>> >> details to be worked out yet, but our amazing UC (User Committee) will
>>> >> begin working out the details.
>>> >
>>> > What is the most worrying is the exact "take over" process. Does it mean 
>>> > that
>>> > the teams will give away the +2 power to a different team? Or will our 
>>> > (small)
>>> > stable teams still be responsible for landing changes? If so, will they 
>>> > have to
>>> > learn how to debug 3rd party CI jobs?
>>> >
>>> > Generally, I'm scared of both overloading the teams and losing the 
>>> > control over
>>> > quality at the same time :) Probably the final proposal will clarify it..
>>>
>>> The quality of backported fixes is expected to be a direct (and only?)
>>> interest of those new teams of new cores, coming from users and
>>> operators and vendors. The more parties to establish their 3rd party
>>
>> We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> should not assume that they are needed or will be present. They may be,
>> but we shouldn't build policy around the assumption that they will. Why
>> would we have third-party jobs on an old branch that we don't have on
>> master, for instance?
>>
>>> checking jobs, the better proposed changes communicated, which directly
>>> affects the quality in the end. I also suppose, contributors from ops
>>> world will likely be only struggling to see things getting fixed, and
>>> not new features adopted by legacy deployments they're used to maintain.
>>> So in theory, this works and as a mainstream developer and maintainer,
>>> you need not fear losing control over LTS code :)
>>>
>>> Another question is how not to block on each other, and not push
>>> contributors away when things are getting awry, jobs failing and merging
>>> is blocked for a long time, or there is no consensus reached in a code
>>> review. I propose the LTS policy to enforce CI jobs be non-voting, as a
>>> first step on that way, and giving every LTS team member core rights,
>>> maybe? Not sure if that works though.
>>
>> I'm not sure what change you're proposing for CI jobs and their voting
>> status. Do you mean we should make the jobs non-voting as soon as the
>> branch passes out of the stable support period?
>>
>> Regarding the review team, anyone on the review team for a branch
>> that goes out of stable support will need to have +2 rights in that
>> branch. Otherwise there's no point in saying that they're maintaining
>> the branch.
>>
>> Doug
>>
>
>
>
> --
> Cheers,
> ~Blairo
>

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Rochelle Grober
Folks,

This discussion and the people interested in it seem like a perfect application 
of the SIG process.  By turning LTS into a SIG, everyone can discuss the issues 
on the SIG mailing list and the discussion shouldn't end up split.  If it turns 
into a project, great.  If a solution is found that doesn't need a new project, 
great.  Even once there is a decision on how to move forward, there will still
be implementation issues and enhancements, so the SIG could very well be 
long-lived.  But the important aspect of this is:  keeping the discussion in a 
place where both devs and ops can follow the whole thing and act on 
recommendations.

Food for thought.

--Rocky

> -----Original Message-----
> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com]
> Sent: Tuesday, November 14, 2017 8:31 AM
> To: OpenStack Development Mailing List (not for usage questions)
> ; openstack-oper. <openstack-operat...@lists.openstack.org>
> Subject: Re: [openstack-dev] Upstream LTS Releases
> 
> Hi all - please note this conversation has been split variously across -dev 
> and -
> operators.
> 
> One small observation from the discussion so far is that it seems as though
> there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
> 
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
> 
> On 14 November 2017 at 09:25, Doug Hellmann 
> wrote:
> > Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
> >> >> The concept, in general, is to create a new set of cores from
> >> >> these groups, and use 3rd party CI to validate patches. There are
> >> >> lots of details to be worked out yet, but our amazing UC (User
> >> >> Committee) will begin working out the details.
> >> >
> >> > What is the most worrying is the exact "take over" process. Does it
> >> > mean that the teams will give away the +2 power to a different
> >> > team? Or will our (small) stable teams still be responsible for
> >> > landing changes? If so, will they have to learn how to debug 3rd party CI
> jobs?
> >> >
> >> > Generally, I'm scared of both overloading the teams and losing the
> >> > control over quality at the same time :) Probably the final proposal will
> clarify it..
> >>
> >> The quality of backported fixes is expected to be a direct (and
> >> only?) interest of those new teams of new cores, coming from users
> >> and operators and vendors. The more parties to establish their 3rd
> >> party
> >
> > We have an unhealthy focus on "3rd party" jobs in this discussion. We
> > should not assume that they are needed or will be present. They may
> > be, but we shouldn't build policy around the assumption that they
> > will. Why would we have third-party jobs on an old branch that we
> > don't have on master, for instance?
> >
> >> checking jobs, the better proposed changes communicated, which
> >> directly affects the quality in the end. I also suppose, contributors
> >> from ops world will likely be only struggling to see things getting
> >> fixed, and not new features adopted by legacy deployments they're used
> to maintain.
> >> So in theory, this works and as a mainstream developer and
> >> maintainer, you need not fear losing control over LTS code :)
> >>
> >> Another question is how not to block on each other, and not push
> >> contributors away when things are getting awry, jobs failing and
> >> merging is blocked for a long time, or there is no consensus reached
> >> in a code review. I propose the LTS policy to enforce CI jobs be
> >> non-voting, as a first step on that way, and giving every LTS team
> >> member core rights, maybe? Not sure if that works though.
> >
> > I'm not sure what change you're proposing for CI jobs and their voting
> > status. Do you mean we should make the jobs non-voting as soon as the
> > branch passes out of the stable support period?
> >
> > Regarding the review team, anyone on the review team for a branch that
> > goes out of stable support will need to have +2 rights in that branch.
> > Otherwise there's no point in saying that they're maintaining the
> > branch.
> >
> > Doug
> >
> >
> 
> 
> 
> --
> Cheers,
> ~Blairo
> 

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Dmitry Tantsur

On 11/14/2017 06:21 PM, Erik McCormick wrote:

On Tue, Nov 14, 2017 at 11:30 AM, Blair Bethwaite
 wrote:

Hi all - please note this conversation has been split variously across
-dev and -operators.

One small observation from the discussion so far is that it seems as
though there are two issues being discussed under the one banner:
1) maintain old releases for longer
2) do stable releases less frequently

It would be interesting to understand if the people who want longer
maintenance windows would be helped by #2.



I would like to hear from people who do *not* want #2 and why not.
What are the benefits of 6 months vs. 1 year? I have heard objections
in the hallway track, but I have struggled to retain the rationale for
more than 10 seconds. I think this may be more of a religious
discussion that could take a while though.


One point is maintenance burden. Everything that has to be deprecated and 
removed will have to be kept twice as long in the worst case.


The second point is that contributors, from my experience, don't like waiting 
many months for their shiny feature to get released. That will increase pressure 
on the teams at the end of every release cycle to get everything in - or it will 
have to wait a year.


Note that both points apply even if you do "less-stable" releases between stable 
ones.




#1 is something we can act on right now with the eventual goal of
being able to skip releases entirely. We are addressing the
maintain-old-releases issue right now. As we get farther down the
road of fast-forward upgrade tooling, then we will be able to please
those wishing for a slower upgrade cadence, and those that want to
stay on the bleeding edge simultaneously.

-Erik


On 14 November 2017 at 09:25, Doug Hellmann  wrote:

Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:

The concept, in general, is to create a new set of cores from these
groups, and use 3rd party CI to validate patches. There are lots of
details to be worked out yet, but our amazing UC (User Committee) will
begin working out the details.


What is the most worrying is the exact "take over" process. Does it mean that
the teams will give away the +2 power to a different team? Or will our (small)
stable teams still be responsible for landing changes? If so, will they have to
learn how to debug 3rd party CI jobs?

Generally, I'm scared of both overloading the teams and losing the control over
quality at the same time :) Probably the final proposal will clarify it..


The quality of backported fixes is expected to be a direct (and only?)
interest of those new teams of new cores, coming from users and
operators and vendors. The more parties to establish their 3rd party


We have an unhealthy focus on "3rd party" jobs in this discussion. We
should not assume that they are needed or will be present. They may be,
but we shouldn't build policy around the assumption that they will. Why
would we have third-party jobs on an old branch that we don't have on
master, for instance?


checking jobs, the better proposed changes communicated, which directly
affects the quality in the end. I also suppose, contributors from ops
world will likely be only struggling to see things getting fixed, and
not new features adopted by legacy deployments they're used to maintain.
So in theory, this works and as a mainstream developer and maintainer,
you need not fear losing control over LTS code :)

Another question is how not to block on each other, and not push
contributors away when things are getting awry, jobs failing and merging
is blocked for a long time, or there is no consensus reached in a code
review. I propose the LTS policy to enforce CI jobs be non-voting, as a
first step on that way, and giving every LTS team member core rights,
maybe? Not sure if that works though.


I'm not sure what change you're proposing for CI jobs and their voting
status. Do you mean we should make the jobs non-voting as soon as the
branch passes out of the stable support period?

Regarding the review team, anyone on the review team for a branch
that goes out of stable support will need to have +2 rights in that
branch. Otherwise there's no point in saying that they're maintaining
the branch.

Doug





--
Cheers,
~Blairo






Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Erik McCormick
On Tue, Nov 14, 2017 at 11:30 AM, Blair Bethwaite
 wrote:
> Hi all - please note this conversation has been split variously across
> -dev and -operators.
>
> One small observation from the discussion so far is that it seems as
> though there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
>
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
>

I would like to hear from people who do *not* want #2 and why not.
What are the benefits of 6 months vs. 1 year? I have heard objections
in the hallway track, but I have struggled to retain the rationale for
more than 10 seconds. I think this may be more of a religious
discussion that could take a while though.

#1 is something we can act on right now with the eventual goal of
being able to skip releases entirely. We are addressing the
maintain-old-releases issue right now. As we get farther down the
road of fast-forward upgrade tooling, then we will be able to please
those wishing for a slower upgrade cadence, and those that want to
stay on the bleeding edge simultaneously.

-Erik

> On 14 November 2017 at 09:25, Doug Hellmann  wrote:
>> Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>>> >> The concept, in general, is to create a new set of cores from these
>>> >> groups, and use 3rd party CI to validate patches. There are lots of
>>> >> details to be worked out yet, but our amazing UC (User Committee) will
>>> >> begin working out the details.
>>> >
>>> > What is the most worrying is the exact "take over" process. Does it mean 
>>> > that
>>> > the teams will give away the +2 power to a different team? Or will our 
>>> > (small)
>>> > stable teams still be responsible for landing changes? If so, will they 
>>> > have to
>>> > learn how to debug 3rd party CI jobs?
>>> >
>>> > Generally, I'm scared of both overloading the teams and losing the 
>>> > control over
>>> > quality at the same time :) Probably the final proposal will clarify it..
>>>
>>> The quality of backported fixes is expected to be a direct (and only?)
>>> interest of those new teams of new cores, coming from users and
>>> operators and vendors. The more parties to establish their 3rd party
>>
>> We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> should not assume that they are needed or will be present. They may be,
>> but we shouldn't build policy around the assumption that they will. Why
>> would we have third-party jobs on an old branch that we don't have on
>> master, for instance?
>>
>>> checking jobs, the better proposed changes communicated, which directly
>>> affects the quality in the end. I also suppose, contributors from ops
>>> world will likely be only struggling to see things getting fixed, and
>>> not new features adopted by legacy deployments they're used to maintain.
>>> So in theory, this works and as a mainstream developer and maintainer,
>>> you need not fear losing control over LTS code :)
>>>
>>> Another question is how not to block on each other, and not push
>>> contributors away when things are getting awry, jobs failing and merging
>>> is blocked for a long time, or there is no consensus reached in a code
>>> review. I propose the LTS policy to enforce CI jobs be non-voting, as a
>>> first step on that way, and giving every LTS team member core rights,
>>> maybe? Not sure if that works though.
>>
>> I'm not sure what change you're proposing for CI jobs and their voting
>> status. Do you mean we should make the jobs non-voting as soon as the
>> branch passes out of the stable support period?
>>
>> Regarding the review team, anyone on the review team for a branch
>> that goes out of stable support will need to have +2 rights in that
>> branch. Otherwise there's no point in saying that they're maintaining
>> the branch.
>>
>> Doug
>>
>
>
>
> --
> Cheers,
> ~Blairo
>



Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Davanum Srinivas
Blair,

Please add #2 as a line proposal in:
https://etherpad.openstack.org/p/LTS-proposal

So far it's focused on #1

Thanks,
Dims

On Wed, Nov 15, 2017 at 3:30 AM, Blair Bethwaite
 wrote:
> Hi all - please note this conversation has been split variously across
> -dev and -operators.
>
> One small observation from the discussion so far is that it seems as
> though there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
>
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
>
> On 14 November 2017 at 09:25, Doug Hellmann  wrote:
>> Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>>> >> The concept, in general, is to create a new set of cores from these
>>> >> groups, and use 3rd party CI to validate patches. There are lots of
>>> >> details to be worked out yet, but our amazing UC (User Committee) will
>>> >> begin working out the details.
>>> >
>>> > What is the most worrying is the exact "take over" process. Does it mean 
>>> > that
>>> > the teams will give away the +2 power to a different team? Or will our 
>>> > (small)
>>> > stable teams still be responsible for landing changes? If so, will they 
>>> > have to
>>> > learn how to debug 3rd party CI jobs?
>>> >
>>> > Generally, I'm scared of both overloading the teams and losing the 
>>> > control over
>>> > quality at the same time :) Probably the final proposal will clarify it..
>>>
>>> The quality of backported fixes is expected to be a direct (and only?)
>>> interest of those new teams of new cores, coming from users and
>>> operators and vendors. The more parties to establish their 3rd party
>>
>> We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> should not assume that they are needed or will be present. They may be,
>> but we shouldn't build policy around the assumption that they will. Why
>> would we have third-party jobs on an old branch that we don't have on
>> master, for instance?
>>
>>> checking jobs, the better proposed changes are communicated, which directly
>>> affects the quality in the end. I also suppose contributors from the ops
>>> world will mostly be striving to see things get fixed, rather than new
>>> features adopted by the legacy deployments they're used to maintaining.
>>> So in theory this works, and as a mainstream developer and maintainer
>>> you need not fear losing control over LTS code :)
>>>
>>> Another question is how to avoid everyone blocking on each other, and not push
>>> contributors away when things go awry, jobs fail and merging
>>> is blocked for a long time, or no consensus is reached in a code
>>> review. I propose the LTS policy enforce that CI jobs be non-voting, as a
>>> first step down that path, and maybe give every LTS team member core
>>> rights? Not sure if that works though.
>>
>> I'm not sure what change you're proposing for CI jobs and their voting
>> status. Do you mean we should make the jobs non-voting as soon as the
>> branch passes out of the stable support period?
>>
>> Regarding the review team, anyone on the review team for a branch
>> that goes out of stable support will need to have +2 rights in that
>> branch. Otherwise there's no point in saying that they're maintaining
>> the branch.
>>
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Cheers,
> ~Blairo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Blair Bethwaite
Hi all - please note this conversation has been split variously across
-dev and -operators.

One small observation from the discussion so far is that it seems as
though there are two issues being discussed under the one banner:
1) maintain old releases for longer
2) do stable releases less frequently

It would be interesting to understand if the people who want longer
maintenance windows would be helped by #2.

On 14 November 2017 at 09:25, Doug Hellmann  wrote:
> Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>> >> The concept, in general, is to create a new set of cores from these
>> >> groups, and use 3rd party CI to validate patches. There are lots of
>> >> details to be worked out yet, but our amazing UC (User Committee) will
>> >> begin working out the details.
>> >
>> > What is the most worrying is the exact "take over" process. Does it mean
>> > that the teams will give away the +2 power to a different team? Or will
>> > our (small) stable teams still be responsible for landing changes? If so,
>> > will they have to learn how to debug 3rd party CI jobs?
>> >
>> > Generally, I'm scared of both overloading the teams and losing the
>> > control over quality at the same time :) Probably the final proposal
>> > will clarify it..
>>
>> The quality of backported fixes is expected to be a direct (and only?)
>> interest of those new teams of new cores, coming from users and
>> operators and vendors. The more parties establish their 3rd party
>
> We have an unhealthy focus on "3rd party" jobs in this discussion. We
> should not assume that they are needed or will be present. They may be,
> but we shouldn't build policy around the assumption that they will. Why
> would we have third-party jobs on an old branch that we don't have on
> master, for instance?
>
>> checking jobs, the better proposed changes are communicated, which directly
>> affects the quality in the end. I also suppose contributors from the ops
>> world will mostly be striving to see things get fixed, rather than new
>> features adopted by the legacy deployments they're used to maintaining.
>> So in theory this works, and as a mainstream developer and maintainer
>> you need not fear losing control over LTS code :)
>>
>> Another question is how to avoid everyone blocking on each other, and not push
>> contributors away when things go awry, jobs fail and merging
>> is blocked for a long time, or no consensus is reached in a code
>> review. I propose the LTS policy enforce that CI jobs be non-voting, as a
>> first step down that path, and maybe give every LTS team member core
>> rights? Not sure if that works though.
>
> I'm not sure what change you're proposing for CI jobs and their voting
> status. Do you mean we should make the jobs non-voting as soon as the
> branch passes out of the stable support period?
>
> Regarding the review team, anyone on the review team for a branch
> that goes out of stable support will need to have +2 rights in that
> branch. Otherwise there's no point in saying that they're maintaining
> the branch.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-11 Thread Davanum Srinivas
+1 to " If there are no contributors for an LTS release, there will be
no LTS release. If there *are* contributors, then we'll find a way to
make some sort of LTS model work within the other constraints we
have."

+1 to giving the folks who are interested a place to collaborate
and talk to each other and get things up and running. That
would be the first step. If it ends up in LTS and skip-level upgrades
for all projects, we all win!

That's a destination, not the first step!

Thanks,
Dims

On Sun, Nov 12, 2017 at 6:04 AM, Doug Hellmann  wrote:
> Excerpts from Clint Byrum's message of 2017-11-11 08:41:15 -0800:
>> Excerpts from Doug Hellmann's message of 2017-11-10 13:11:45 -0500:
>> > Excerpts from Clint Byrum's message of 2017-11-08 23:15:15 -0800:
>> > > Excerpts from Samuel Cassiba's message of 2017-11-08 08:27:12 -0800:
>> > > > On Tue, Nov 7, 2017 at 3:28 PM, Erik McCormick  wrote:
>> > > > > Hello Ops folks,
>> > > > >
>> > > > > This morning at the Sydney Summit we had a very well attended and very
>> > > > > productive session about how to go about keeping a selection of past
>> > > > > releases available and maintained for a longer period of time (LTS).
>> > > > >
>> > > > > There was agreement in the room that this could be accomplished by
>> > > > > moving the responsibility for those releases from the Stable Branch
>> > > > > team down to those who are already creating and testing patches for
>> > > > > old releases: The distros, deployers, and operators.
>> > > > >
>> > > > > The concept, in general, is to create a new set of cores from these
>> > > > > groups, and use 3rd party CI to validate patches. There are lots of
>> > > > > details to be worked out yet, but our amazing UC (User Committee) will
>> > > > > begin working out the details.
>> > > > >
>> > > > > Please take a look at the Etherpad from the session if you'd like to
>> > > > > see the details. More importantly, if you would like to contribute to
>> > > > > this effort, please add your name to the list starting on line 133.
>> > > > >
>> > > > > https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
>> > > > >
>> > > > > Thanks to everyone who participated!
>> > > > >
>> > > > > Cheers,
>> > > > > Erik
>> > > > >
>> > > > > __
>> > > > > OpenStack Development Mailing List (not for usage questions)
>> > > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > > >
>> > > > In advance, pardon the defensive tone. I was not in a position to
>> > > > attend, or even be in Sydney. However, as this comes across the ML, I
>> > > > can't help but get the impression this effort would be forcing more
>> > > > work on already stretched teams, ie. deployment-focused development
>> > > > teams already under a crunch as contributor count continues to decline
>> > > > in favor of other projects inside and out of OpenStack.
>> > > >
>> > >
>> > > I suspect if LTS's become a normal part of OpenStack, most deployment
>> > > projects will decline to support the interim releases. We can infer this
>> > > from the way Ubuntu is used. This might actually be a good thing for the
>> > > chef OpenStack community. 3 out of 3.5 of you can focus on the LTS bits,
>> > > and the 0.5 person can do some best effort to cover the weird corner
>> > > case of "previous stable release to master".
>> > >
>> > > The biggest challenge will be ensuring that the skip-level upgrades
>> > > work. The current grenade based upgrade jobs are already quite a bear to
>> > > keep working IIRC. I've not seen if chef or any of the deployment
>> > > projects test upgrades like that.
>> > >
>> > > However, if people can stop caring much about the interim releases and
>> > > just keep "previous LTS to master" upgrades working, then that might be
>> > > good for casual adoption.
>> > >
>> > > Personally I'd rather we make it easier to run "rolling release"
>> > > OpenStack. Maybe we can do that if we stop cutting stable releases every
>> > > 6 months.
>> > >
>> >
>> > We should stop calling what we're talking about "LTS". It isn't
>> > going to match the expectations of anyone receiving LTS releases
>> > for other products, at least at first. Perhaps "Deployer Supported"
>> > or "User Supported" are better terms for what we're talking about.
>> >
>>
>> I believe this state we're in is a stop-gap on the way to the full
>> LTS. People are already getting stuck. We're going to help them stay stuck
>> by upstreaming bug fixes.  We should be mindful of that and provide a way
>> to get less-stuck. The LTS model from other projects has proven quite
>> popular, and it would make sense for us to embrace it if our operators
>> are hurting with the current model, which I believe they are.
>>
>> > 

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-11 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2017-11-11 08:41:15 -0800:
> Excerpts from Doug Hellmann's message of 2017-11-10 13:11:45 -0500:
> > Excerpts from Clint Byrum's message of 2017-11-08 23:15:15 -0800:
> > > Excerpts from Samuel Cassiba's message of 2017-11-08 08:27:12 -0800:
> > > > On Tue, Nov 7, 2017 at 3:28 PM, Erik McCormick  wrote:
> > > > > Hello Ops folks,
> > > > >
> > > > > This morning at the Sydney Summit we had a very well attended and very
> > > > > productive session about how to go about keeping a selection of past
> > > > > releases available and maintained for a longer period of time (LTS).
> > > > >
> > > > > There was agreement in the room that this could be accomplished by
> > > > > moving the responsibility for those releases from the Stable Branch
> > > > > team down to those who are already creating and testing patches for
> > > > > old releases: The distros, deployers, and operators.
> > > > >
> > > > > The concept, in general, is to create a new set of cores from these
> > > > > groups, and use 3rd party CI to validate patches. There are lots of
> > > > > details to be worked out yet, but our amazing UC (User Committee) will
> > > > > begin working out the details.
> > > > >
> > > > > Please take a look at the Etherpad from the session if you'd like to
> > > > > see the details. More importantly, if you would like to contribute to
> > > > > this effort, please add your name to the list starting on line 133.
> > > > >
> > > > > https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
> > > > >
> > > > > Thanks to everyone who participated!
> > > > >
> > > > > Cheers,
> > > > > Erik
> > > > >
> > > > > __
> > > > > OpenStack Development Mailing List (not for usage questions)
> > > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > > 
> > > > In advance, pardon the defensive tone. I was not in a position to
> > > > attend, or even be in Sydney. However, as this comes across the ML, I
> > > > can't help but get the impression this effort would be forcing more
> > > > work on already stretched teams, ie. deployment-focused development
> > > > teams already under a crunch as contributor count continues to decline
> > > > in favor of other projects inside and out of OpenStack.
> > > > 
> > > 
> > > I suspect if LTS's become a normal part of OpenStack, most deployment
> > > projects will decline to support the interim releases. We can infer this
> > > from the way Ubuntu is used. This might actually be a good thing for the
> > > chef OpenStack community. 3 out of 3.5 of you can focus on the LTS bits,
> > > and the 0.5 person can do some best effort to cover the weird corner
> > > case of "previous stable release to master".
> > > 
> > > The biggest challenge will be ensuring that the skip-level upgrades
> > > work. The current grenade based upgrade jobs are already quite a bear to
> > > keep working IIRC. I've not seen if chef or any of the deployment projects
> > > test upgrades like that.
> > > 
> > > However, if people can stop caring much about the interim releases and
> > > just keep "previous LTS to master" upgrades working, then that might be
> > > good for casual adoption.
> > > 
> > > Personally I'd rather we make it easier to run "rolling release"
> > > OpenStack. Maybe we can do that if we stop cutting stable releases every
> > > 6 months.
> > > 
> > 
> > We should stop calling what we're talking about "LTS". It isn't
> > going to match the expectations of anyone receiving LTS releases
> > for other products, at least at first. Perhaps "Deployer Supported"
> > or "User Supported" are better terms for what we're talking about.
> > 
> 
> I believe this state we're in is a stop-gap on the way to the full
> LTS. People are already getting stuck. We're going to help them stay stuck
> by upstreaming bug fixes.  We should be mindful of that and provide a way
> to get less-stuck. The LTS model from other projects has proven quite
> popular, and it would make sense for us to embrace it if our operators
> are hurting with the current model, which I believe they are.
> 
> > In the "LTS" room we did not agree to stop cutting stable releases
> > or to start supporting upgrades directly from N-2 (or older) to N.
> > Both of those changes would require modifying the support the
> > existing contributor base has committed to provide.
> > 
> 
> Thanks, I am just inferring those things from what was agreed on. However,
> it would make a lot of sense to discuss the plans for the future, even
> if we don't have data from the present proposal.
> 
> > Fast-forward upgrades will still need to run the migration steps
> > of each release in order, one at a time. The team working on that
> > is going to produce a document describing what works today so we
> > can 

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-11 Thread Thomas Goirand
On 11/08/2017 05:27 PM, Samuel Cassiba wrote:
> ie. deployment-focused development
> teams already under a crunch as contributor count continues to decline
> in favor of other projects inside and out of OpenStack.

Did you ever consider that one of the reasons for such a decline is that
OpenStack is moving too fast and has no LTS? Some major public clouds
(which I will deliberately not name) are still running Kilo, which was
released 3 years ago! Three to five years of support for an LTS version
is the industry standard, and OpenStack is doing only 1 year. This has
driven people away, and will continue to do so if nothing is done.

Instead of thinking "this will be more work", why not think of the LTS
as an opportunity to release OpenStack Chef only for the LTS? That'd be
a lot less work indeed, and IMO a very good opportunity for you to
scale down.

Cheers,

Thomas Goirand (zigo)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-11 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2017-11-10 13:11:45 -0500:
> Excerpts from Clint Byrum's message of 2017-11-08 23:15:15 -0800:
> > Excerpts from Samuel Cassiba's message of 2017-11-08 08:27:12 -0800:
> > > On Tue, Nov 7, 2017 at 3:28 PM, Erik McCormick  wrote:
> > > > Hello Ops folks,
> > > >
> > > > This morning at the Sydney Summit we had a very well attended and very
> > > > productive session about how to go about keeping a selection of past
> > > > releases available and maintained for a longer period of time (LTS).
> > > >
> > > > There was agreement in the room that this could be accomplished by
> > > > moving the responsibility for those releases from the Stable Branch
> > > > team down to those who are already creating and testing patches for
> > > > old releases: The distros, deployers, and operators.
> > > >
> > > > The concept, in general, is to create a new set of cores from these
> > > > groups, and use 3rd party CI to validate patches. There are lots of
> > > > details to be worked out yet, but our amazing UC (User Committee) will
> > > > begin working out the details.
> > > >
> > > > Please take a look at the Etherpad from the session if you'd like to
> > > > see the details. More importantly, if you would like to contribute to
> > > > this effort, please add your name to the list starting on line 133.
> > > >
> > > > https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
> > > >
> > > > Thanks to everyone who participated!
> > > >
> > > > Cheers,
> > > > Erik
> > > >
> > > > __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > 
> > > In advance, pardon the defensive tone. I was not in a position to
> > > attend, or even be in Sydney. However, as this comes across the ML, I
> > > can't help but get the impression this effort would be forcing more
> > > work on already stretched teams, ie. deployment-focused development
> > > teams already under a crunch as contributor count continues to decline
> > > in favor of other projects inside and out of OpenStack.
> > > 
> > 
> > I suspect if LTS's become a normal part of OpenStack, most deployment
> > projects will decline to support the interim releases. We can infer this
> > from the way Ubuntu is used. This might actually be a good thing for the
> > chef OpenStack community. 3 out of 3.5 of you can focus on the LTS bits,
> > and the 0.5 person can do some best effort to cover the weird corner
> > case of "previous stable release to master".
> > 
> > The biggest challenge will be ensuring that the skip-level upgrades
> > work. The current grenade based upgrade jobs are already quite a bear to
> > keep working IIRC. I've not seen if chef or any of the deployment projects
> > test upgrades like that.
> > 
> > However, if people can stop caring much about the interim releases and
> > just keep "previous LTS to master" upgrades working, then that might be
> > good for casual adoption.
> > 
> > Personally I'd rather we make it easier to run "rolling release"
> > OpenStack. Maybe we can do that if we stop cutting stable releases every
> > 6 months.
> > 
> 
> We should stop calling what we're talking about "LTS". It isn't
> going to match the expectations of anyone receiving LTS releases
> for other products, at least at first. Perhaps "Deployer Supported"
> or "User Supported" are better terms for what we're talking about.
> 

I believe this state we're in is a stop-gap on the way to the full
LTS. People are already getting stuck. We're going to help them stay stuck
by upstreaming bug fixes.  We should be mindful of that and provide a way
to get less-stuck. The LTS model from other projects has proven quite
popular, and it would make sense for us to embrace it if our operators
are hurting with the current model, which I believe they are.

> In the "LTS" room we did not agree to stop cutting stable releases
> or to start supporting upgrades directly from N-2 (or older) to N.
> Both of those changes would require modifying the support the
> existing contributor base has committed to provide.
> 

Thanks, I am just inferring those things from what was agreed on. However,
it would make a lot of sense to discuss the plans for the future, even
if we don't have data from the present proposal.

> Fast-forward upgrades will still need to run the migration steps
> of each release in order, one at a time. The team working on that
> is going to produce a document describing what works today so we
> can analyze it for ways to improve the upgrade experience, for both
> fast-forward and "regular" upgrades.  That was all discussed in a
> separate session.
> 

We are what we test. If we're going to test fast-forwards, how far into
the past do we test? It 

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-10 Thread Blair Bethwaite
I missed this session but the discussion strikes a chord as this is
something I've been saying on my user survey every 6 months.

On 11 November 2017 at 09:51, John Dickinson  wrote:
> What I heard from ops in the room is that they want (to start) one release a
> year whose branch isn't deleted after a year. What if that's exactly what we
> did? I propose that OpenStack only do one release a year instead of two. We
> still keep N-2 stable releases around. We still do backports to all open
> stable branches. We still do all the things we're doing now, we just do it
> once a year instead of twice.

+1

-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-10 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2017-11-08 23:15:15 -0800:
> Excerpts from Samuel Cassiba's message of 2017-11-08 08:27:12 -0800:
> > On Tue, Nov 7, 2017 at 3:28 PM, Erik McCormick  wrote:
> > > Hello Ops folks,
> > >
> > > This morning at the Sydney Summit we had a very well attended and very
> > > productive session about how to go about keeping a selection of past
> > > releases available and maintained for a longer period of time (LTS).
> > >
> > > There was agreement in the room that this could be accomplished by
> > > moving the responsibility for those releases from the Stable Branch
> > > team down to those who are already creating and testing patches for
> > > old releases: The distros, deployers, and operators.
> > >
> > > The concept, in general, is to create a new set of cores from these
> > > groups, and use 3rd party CI to validate patches. There are lots of
> > > details to be worked out yet, but our amazing UC (User Committee) will
> > > begin working out the details.
> > >
> > > Please take a look at the Etherpad from the session if you'd like to
> > > see the details. More importantly, if you would like to contribute to
> > > this effort, please add your name to the list starting on line 133.
> > >
> > > https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
> > >
> > > Thanks to everyone who participated!
> > >
> > > Cheers,
> > > Erik
> > >
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > In advance, pardon the defensive tone. I was not in a position to
> > attend, or even be in Sydney. However, as this comes across the ML, I
> > can't help but get the impression this effort would be forcing more
> > work on already stretched teams, ie. deployment-focused development
> > teams already under a crunch as contributor count continues to decline
> > in favor of other projects inside and out of OpenStack.
> > 
> 
> I suspect if LTS's become a normal part of OpenStack, most deployment
> projects will decline to support the interim releases. We can infer this
> from the way Ubuntu is used. This might actually be a good thing for the
> chef OpenStack community. 3 out of 3.5 of you can focus on the LTS bits,
> and the 0.5 person can do some best effort to cover the weird corner
> case of "previous stable release to master".
> 
> The biggest challenge will be ensuring that the skip-level upgrades
> work. The current grenade based upgrade jobs are already quite a bear to
> keep working IIRC. I've not seen if chef or any of the deployment projects
> test upgrades like that.
> 
> However, if people can stop caring much about the interim releases and
> just keep "previous LTS to master" upgrades working, then that might be
> good for casual adoption.
> 
> Personally I'd rather we make it easier to run "rolling release"
> OpenStack. Maybe we can do that if we stop cutting stable releases every
> 6 months.
> 

We should stop calling what we're talking about "LTS". It isn't
going to match the expectations of anyone receiving LTS releases
for other products, at least at first. Perhaps "Deployer Supported"
or "User Supported" are better terms for what we're talking about.

In the "LTS" room we did not agree to stop cutting stable releases
or to start supporting upgrades directly from N-2 (or older) to N.
Both of those changes would require modifying the support the
existing contributor base has committed to provide.

Fast-forward upgrades will still need to run the migration steps
of each release in order, one at a time. The team working on that
is going to produce a document describing what works today so we
can analyze it for ways to improve the upgrade experience, for both
fast-forward and "regular" upgrades.  That was all discussed in a
separate session.
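
To illustrate the shape of that loop: a fast-forward upgrade is
essentially a replay of each skipped release's schema migrations in
order, with services kept down until the final release starts. A minimal
Python sketch, assuming a hypothetical /opt/openstack/<release>/ layout
of per-release virtualenvs and using nova-manage as the example service
command -- not the team's actual tooling:

    import subprocess

    # Ordered list of releases between the deployed one and the target.
    # Hypothetical here; a real run would derive it from the deployment.
    RELEASES = ["newton", "ocata", "pike", "queens"]

    def db_sync(release):
        # Run one release's schema migrations using that release's own
        # code. The per-release virtualenv layout is assumed for this
        # sketch only.
        nova_manage = "/opt/openstack/%s/bin/nova-manage" % release
        subprocess.check_call([nova_manage, "db", "sync"])
        subprocess.check_call([nova_manage, "api_db", "sync"])

    # Nothing here ever starts the intermediate releases' services;
    # only their migrations run, one release at a time.
    for release in RELEASES:
        db_sync(release)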

Doug

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-09 Thread Jonathan Proulx
On Thu, Nov 09, 2017 at 04:34:24PM +, Jeremy Stanley wrote:
:On 2017-11-08 23:15:15 -0800 (-0800), Clint Byrum wrote:
:[...]
:> The biggest challenge will be ensuring that the skip-level upgrades
:> work. The current grenade based upgrade jobs are already quite a bear to
:> keep working IIRC. I've not seen if chef or any of the deployment projects
:> test upgrades like that.
:[...]
:
:Another challenge which has been mostly hand-waved away at this
:stage is the distro support piece. Queens is being tested on Ubuntu
:16.04 but our "S" release will likely be tested on Ubuntu 18.04
:instead... so effective testing for a skip-level upgrade between
:those two LTS releases will _also_ involve in-place upgrading of
:Ubuntu.

Having done in-place Ubuntu upgrades from 12.04->14.04->16.04
underneath "production"* OpenStack, I'm not too worried about the
mechanics of that.

Currently, in the Ubuntu case, when you get to a release boundary you
need to bring the old distro release up to the newest OpenStack, then
move to the new distro release.

You can push production load to another node, reinstall the new version
of Ubuntu with current configs, then move the load back. This does make
some assumptions about the deployed architecture, but hopefully if
nonstop cloud is what you're after you've deployed something that's
resilient to single-node faults, which is basically what a distro
upgrade takes.
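
Per node, that dance looks roughly like the following -- a hypothetical
Python sketch around the standard CLI calls, where reinstall_os() is a
placeholder for whatever provisioning system you use, not a real tool:

    import subprocess

    def reinstall_os(host):
        # Placeholder: trigger your provisioning system (PXE, MAAS, ...)
        # to lay down the new Ubuntu release with the current configs.
        raise NotImplementedError("site-specific provisioning step")

    def drain_and_reinstall(host):
        # Stop scheduling new instances onto the node being upgraded.
        subprocess.check_call(["openstack", "compute", "service", "set",
                               "--disable", host, "nova-compute"])
        # Live-migrate everything off (assumes shared storage or block
        # migration is configured).
        subprocess.check_call(["nova", "host-evacuate-live", host])
        reinstall_os(host)
        # Put the node back into the scheduling pool.
        subprocess.check_call(["openstack", "compute", "service", "set",
                               "--enable", host, "nova-compute"])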

-Jon

:-- 
:Jeremy Stanley
:
:___
:OpenStack-operators mailing list
:OpenStack-operators@lists.openstack.org
:http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-- 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-09 Thread Jeremy Stanley
On 2017-11-08 23:15:15 -0800 (-0800), Clint Byrum wrote:
[...]
> The biggest challenge will be ensuring that the skip-level upgrades
> work. The current grenade based upgrade jobs are already quite a bear to
> keep working IIRC. I've not seen if chef or any of the deployment projects
> test upgrades like that.
[...]

Another challenge which has been mostly hand-waved away at this
stage is the distro support piece. Queens is being tested on Ubuntu
16.04 but our "S" release will likely be tested on Ubuntu 18.04
instead... so effective testing for a skip-level upgrade between
those two LTS releases will _also_ involve in-place upgrading of
Ubuntu.
-- 
Jeremy Stanley

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-08 Thread Clint Byrum
Excerpts from Samuel Cassiba's message of 2017-11-08 08:27:12 -0800:
> On Tue, Nov 7, 2017 at 3:28 PM, Erik McCormick  wrote:
> > Hello Ops folks,
> >
> > This morning at the Sydney Summit we had a very well attended and very
> > productive session about how to go about keeping a selection of past
> > releases available and maintained for a longer period of time (LTS).
> >
> > There was agreement in the room that this could be accomplished by
> > moving the responsibility for those releases from the Stable Branch
> > team down to those who are already creating and testing patches for
> > old releases: The distros, deployers, and operators.
> >
> > The concept, in general, is to create a new set of cores from these
> > groups, and use 3rd party CI to validate patches. There are lots of
> > details to be worked out yet, but our amazing UC (User Committee) will
> > begin working out the details.
> >
> > Please take a look at the Etherpad from the session if you'd like to
> > see the details. More importantly, if you would like to contribute to
> > this effort, please add your name to the list starting on line 133.
> >
> > https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
> >
> > Thanks to everyone who participated!
> >
> > Cheers,
> > Erik
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> In advance, pardon the defensive tone. I was not in a position to
> attend, or even be in Sydney. However, as this comes across the ML, I
> can't help but get the impression this effort would be forcing more
> work on already stretched teams, ie. deployment-focused development
> teams already under a crunch as contributor count continues to decline
> in favor of other projects inside and out of OpenStack.
> 

I suspect if LTS's become a normal part of OpenStack, most deployment
projects will decline to support the interim releases. We can infer this
from the way Ubuntu is used. This might actually be a good thing for the
chef OpenStack community. 3 out of 3.5 of you can focus on the LTS bits,
and the 0.5 person can do some best effort to cover the weird corner
case of "previous stable release to master".

The biggest challenge will be ensuring that the skip-level upgrades
work. The current grenade based upgrade jobs are already quite a bear to
keep working IIRC. I've not seen if chef or any of the deployment projects
test upgrades like that.
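
The shape of such a skip-level job is easy enough to sketch, though --
hypothetical Python, not grenade's actual code, with the *.sh scripts
standing in for whatever a deployment project provides:

    import subprocess

    OLD, TARGET = "ocata", "queens"  # hypothetical pair: skips pike entirely

    def sh(*cmd):
        subprocess.check_call(list(cmd))

    sh("./deploy-release.sh", OLD)   # stand up the old cloud
    sh("./create-resources.sh")      # representative servers/volumes/networks
    sh("./upgrade-to.sh", TARGET)    # one hop over the skipped release
    sh("./verify-resources.sh")      # original resources must still work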

However, if people can stop caring much about the interim releases and
just keep "previous LTS to master" upgrades working, then that might be
good for casual adoption.

Personally I'd rather we make it easier to run "rolling release"
OpenStack. Maybe we can do that if we stop cutting stable releases every
6 months.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-08 Thread Samuel Cassiba
On Tue, Nov 7, 2017 at 3:28 PM, Erik McCormick  wrote:
> Hello Ops folks,
>
> This morning at the Sydney Summit we had a very well attended and very
> productive session about how to go about keeping a selection of past
> releases available and maintained for a longer period of time (LTS).
>
> There was agreement in the room that this could be accomplished by
> moving the responsibility for those releases from the Stable Branch
> team down to those who are already creating and testing patches for
> old releases: The distros, deployers, and operators.
>
> The concept, in general, is to create a new set of cores from these
> groups, and use 3rd party CI to validate patches. There are lots of
> details to be worked out yet, but our amazing UC (User Committee) will
> begin working out the details.
>
> Please take a look at the Etherpad from the session if you'd like to
> see the details. More importantly, if you would like to contribute to
> this effort, please add your name to the list starting on line 133.
>
> https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
>
> Thanks to everyone who participated!
>
> Cheers,
> Erik
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

In advance, pardon the defensive tone. I was not in a position to
attend, or even be in Sydney. However, as this comes across the ML, I
can't help but get the impression this effort would be forcing more
work on already stretched teams, ie. deployment-focused development
teams already under a crunch as contributor count continues to decline
in favor of other projects inside and out of OpenStack.

As a friendly reminder, Chef is still actively developed, though we've
not had a great return from recruiting more people. We have about 3.5
active developers, including active cores, and non-cores who felt it
worthwhile to contribute back upstream. There is no major corporate
backer here, merely a handful of potentially stubborn volunteers.
Nobody is behind the curtain, but Chef OpenStack still has a few
active users (once again, I point to the annual User Survey results)
and contributors. However, we do not use the MLs as our primary means
of communication, so I can see how we might be forgotten or ignored.

In my experience, no one likes talking about Chef OpenStack, in either
the Chef or OpenStack communities. However, as a maintainer, I keep
making it a point to bring it up when it seems the project gets papered
over, or the core team gets signed up for more work decided in a room
half a world away. Admittedly, the whole deployment method is a hard
sell if you're not using Chef in some way. It has always been my
takeaway that the project was merely tolerated under the OpenStack
designation, neither embraced nor even liked, despite being the
"official" OpenStack deployment method for a major deployment toolset.
The Foundation's support has been outstanding when we've needed it, but
that's about as far as the delight goes. The Chef community is a bit
more tolerant of someone using the Chef moniker for OpenStack, but
migrating from Gerrit to GitHub is a major undertaking that the
development team may or may not be able to reasonably support without
more volunteers.

Now that there is a proposal to create a Stable Release liaison drawn
from existing cores, I can't help but get the impression that, for
active-but-quiet projects, it'll be yet another PTL responsibility to
keep up with, in addition to the rigors that already come with the
role. I'm hoping I'll be proven wrong here, but I can and do get in
trouble for hoping.

-- 
Best,
Samuel Cassiba

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-07 Thread Erik McCormick
On Nov 8, 2017 1:52 PM, "James E. Blair"  wrote:

Erik McCormick  writes:

> On Tue, Nov 7, 2017 at 6:45 PM, James E. Blair  wrote:
>> Erik McCormick  writes:
>>
>>> The concept, in general, is to create a new set of cores from these
>>> groups, and use 3rd party CI to validate patches. There are lots of
>>> details to be worked out yet, but our amazing UC (User Committee) will
>>> begin working out the details.
>>
>> I regret that due to a conflict I was unable to attend this session.
>> Can you elaborate on why third-party CI would be necessary for this,
>> considering that upstream CI already exists on all active branches?
>
> The chief reasons, I think, were a lack of infra resources, the fact
> that people are already maintaining their own testing for old
> releases, and distributing the work across organizations. Someone
> else feel free to chime in and expand on it.

> Which resources are lacking?  I wasn't made aware of a shortage of
> upstream CI resources affecting stable branch work, but if there is, I'm
> sure we can address it -- this is a very important effort.

It's not a matter of things lacking for today's release cadence and
deprecation policy. That is working fine. The problems would come if you
had to, say, continue to run it for Mitaka until Queens is released.

> The upstream CI system is also a collaboratively maintained system with
> folks from many organizations participating in it.  Indeed we're now
> distributing its maintenance and operation into projects themselves.
> It seems like an ideal place for folks from different organizations to
> collaborate.


Monty, as well as the Stable Branch cores, were in the room, so perhaps
they can elaborate on this for us.  I'm no expert on what can and cannot be
done.

> -Jim
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-07 Thread Thierry Carrez
Erik McCormick wrote:
> This morning at the Sydney Summit we had a very well attended and very
> productive session about how to go about keeping a selection of past
> releases available and maintained for a longer period of time (LTS).
> 
> There was agreement in the room that this could be accomplished by
> moving the responsibility for those releases from the Stable Branch
> team down to those who are already creating and testing patches for
> old releases: The distros, deployers, and operators.
> 
> The concept, in general, is to create a new set of cores from these
> groups, and use 3rd party CI to validate patches. There are lots of
> details to be worked out yet, but our amazing UC (User Committee) will
> begin working out the details.

I took the action of summarizing the discussion in more detail, will do
as soon as my brain is not as mushy, which might take a couple of weeks :)

Note that it's not really about devs vs. ops, with devs abdicating all
responsibility on stable branches: it's about allowing collaboration on
patches beyond EOL (beyond what we are able to support with "live"
stable branches on an evolving OS/PyPI substrate) and enabling whoever
steps up to maintain longer-lived branches to come up with a set of
tests that actually match their needs (tests that would be less likely
to bitrot due to a changing OS/PyPI substrate).

A number of people from all backgrounds volunteered to flesh out a more
detailed proposal. Watch that space!

-- 
Thierry Carrez (ttx)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-07 Thread Erik McCormick
On Tue, Nov 7, 2017 at 6:45 PM, James E. Blair  wrote:
> Erik McCormick  writes:
>
>> The concept, in general, is to create a new set of cores from these
>> groups, and use 3rd party CI to validate patches. There are lots of
>> details to be worked out yet, but our amazing UC (User Committee) will
>> begin working out the details.
>
> I regret that due to a conflict I was unable to attend this session.
> Can you elaborate on why third-party CI would be necessary for this,
> considering that upstream CI already exists on all active branches?
>
> Thanks,
>
> Jim

The chief reasons, I think, were a lack of infra resources, the fact
that people are already maintaining their own testing for old releases,
and distributing the work across organizations. Someone else feel free
to chime in and expand on it.

-Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators