Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-17 Thread Steven Dake
On Thu, Feb 16, 2017 at 11:24 AM, Joshua Harlow 
wrote:

> Alex Schultz wrote:
>
>> On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe  wrote:
>>
>>> On Feb 16, 2017, at 10:07 AM, Doug Hellmann
>>> wrote:
>>>
>>> When we signed off on the Big Tent changes we said competition
 between projects was desirable, and that deployers and contributors
 would make choices based on the work being done in those competing
 projects. Basically, the market would decide on the "optimal"
 solution. It's a hard message to hear, but that seems to be what
 is happening.

>>> This.
>>>
>>> We got much better at adding new things to OpenStack. We need to get
>>> better at letting go of old things.
>>>
>>> -- Ed Leafe
>>>
>>>
>>>
>>>
>> I agree that the market will dictate what continues to survive, but if
>> you're not careful you may be speeding up the decline as the end user
>> (deployer/operator/cloud consumer) will switch completely to something
>> else because it becomes too difficult to continue to consume via what
>> used to be there and no longer is.  I thought the whole point was to
>> not have vendor lock-in.  Honestly I think the focus is too much on
>> the development and not enough on the consumption of the development
>> output.  What is the point of all these features if no one can
>> actually consume them?
>>
>>
> +1 to that.
>
> I've been in the boat of development and consumption of it for my *whole*
> journey in openstack land and I can say the product as a whole seems
> 'underbaked' with regards to the way people consume the development output.
> It seems we have focused on how to do the dev. stuff nicely and a nice
> process there, but sort of forgotten about all that being quite useless if
> no one can consume them (without going through much pain or paying a
> vendor).
>
> This has, IMHO, been a factor in why certain companies (and the
> people they support) are exiting openstack and just going elsewhere.
>
> I personally don't believe the fix for this is to 'let the market forces'
> figure it out for us (what a slow & horrible way to let this play out; I'd
> almost rather go pull my fingernails out). I do believe it will require
> making opinionated decisions, which we have never been very good at.
>
>
I understand Samuel's situation, and I understand that free-market
capitalism, as Doug mentioned, appears to be how OpenStack has operated until
today.  For most of my life I was an ardent free-market capitalist.  I have
heard many pundits on the news, in blog posts, in financial spam, etc. say
free-market capitalism is the best system humankind has found for managing
the flow of resources to people (in this thread's case, the flow of
contributors to Chef).  Unfortunately this form of capitalism has resulted in
all sorts of disparity in terms of education, income, freedom, and many other
aspects of our society (which, translated into technical components, might be
what we see in the diversion of resources to other tools such as Ansible).  I
would pick on soda manufacturers in the US for their use of HFCS rather
than pure cane sugar in soda; however, you can hear me rant about that at
the PTG.

OpenStack is an experiment in governance.  Part of that experiment was the
Big Tent, which, unlike a circus, was meant to encompass everyone's
political and technical viewpoints to arrive at harmonious working
relationships among the community.  This choice was excellent; however, it
has reinforced a capitalist approach to developing and delivering
OpenStack.

There is, however, always room for improvement in any system.  I'm not
suggesting we can live in some magical Star Trek universe where nobody
suffers and resources are endless via a replicator.

I am suggesting we can make improvements to our governance to solve some of
these problems by applying the approaches of "Conscious Capitalism", the
credo of which is outlined here [1].  I am unclear on how we would go about
applying these approaches to OpenStack's governance process.  The few
companies that have adopted this "movement" are clearly improving the human
experience for everyone involved, not just a limited subset of blessed
individuals.  I have only studied this subject for less than 20 hours, but it
seems like a big improvement on a free-market capitalist system and something
the entire OpenStack ecosystem should examine.

[1]  https://www.consciouscapitalism.org/about/credo

Warm Regards,
-steve



Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-17 Thread Thierry Carrez
Ed Leafe wrote:
> On Feb 16, 2017, at 10:07 AM, Doug Hellmann  wrote:
> 
>> When we signed off on the Big Tent changes we said competition
>> between projects was desirable, and that deployers and contributors
>> would make choices based on the work being done in those competing
>> projects. Basically, the market would decide on the "optimal"
>> solution. It's a hard message to hear, but that seems to be what
>> is happening.
> 
> This.
> 
> We got much better at adding new things to OpenStack. We need to get better 
> at letting go of old things.

Yes.

With the model we've built, it's difficult to move some project teams
from "official" to "unofficial": as long as there is the remnants of a
team working on a project, and this team is clearly made of OpenStack
community members following our principles, our governance model does
not leave many walls you can lean on.

But there is one: does the project help with the OpenStack mission, or
does it hurt it? Some projects do fall below the level of
maintenance/contribution necessary to present a satisfying experience,
and keeping those in our blessed, official "mix" hurts us more than it
helps us. Some other projects make us appear to be (badly) trying to
compete with other, more successful ecosystems, when we should just co-opt
those ecosystems -- this also hurts us more than it helps us in
achieving the OpenStack mission.

These will be difficult discussions, but at this precise stage in OpenStack's
life we need to have them. Come talk to me next week if you're interested.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Fox, Kevin M
Yeah, but if the deployment tools have to implement support for every project, 
it becomes combinatorial to support all the projects in all the deployment 
tools, let alone document it for each deployment project. If there were a 
foundational infrastructure that the other big tent projects could rely on 
always being there, then the big tent projects could themselves work on the 
deployment tooling and do it only once. The idea is not to deprecate the big 
tent projects, but to deprecate deploying them with so many different tools. 
Deploy the base openstack parts using chef, ansible, etc., and then use the 
common tooling to deploy the rest. Just a thought.

Thanks,
Kevin


From: Tim Bell [tim.b...@cern.ch]
Sent: Thursday, February 16, 2017 11:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [chef] Making the Kitchen Great Again: A 
Retrospective on OpenStack & Chef


On 16 Feb 2017, at 19:42, Fox, Kevin M <kevin@pnnl.gov> wrote:

+1. The assumption was market forces will cause the best OpenStack deployment 
tools to win. But the sad reality is, market forces are causing people to look 
for non OpenStack solutions instead as the pain is still too high.

While k8s has a few different deployment tools currently, they are focused on 
getting the small bit of underlying plumbing deployed. Then you use the common 
k8s itself to deploy the rest. Adding a dashboard, dns, ingress, sdn, other 
component is easy in that world.

IMO, OpenStack needs to do something similar. Standardize a small core and get 
that easily deployable, then make it easy to deploy/upgrade the rest of the big 
tent projects on top of that, not next to it as currently is being done.

Thanks,
Kevin

Unfortunately, the more operators and end users question the viability of a 
specific project, the less likely it is to be adopted.
It is a very very difficult discussion with an end user to explain that 
function X is no longer available because the latest OpenStack upgrade had to 
be done for security/functional/stability reasons and this project/function is 
not available.
The availability of a function may also have been one of the positives for the 
OpenStack selection so finding a release or two later that it is no longer in 
the portfolio is difficult.
The deprecation policy really helps so we can give a good notice but this 
assumes an equivalent function is available. For example, the move from the 
built-in Nova EC2 API to the EC2 project was one where we had enough notice to test the new 
solution in parallel and then move with minimum disruption.  Moving an entire 
data centre from Chef to Puppet or running a parallel toolchain, for example, 
has a high cost.
Given the massive functionality increase in other clouds, it will be tough to 
limit the OpenStack offering to the small core. However, expanding with 
unsustainable projects is also not attractive.
Tim


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Thursday, February 16, 2017 10:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [chef] Making the Kitchen Great Again: A 
Retrospective on OpenStack & Chef

Alex Schultz wrote:
On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe <e...@leafe.com> wrote:
On Feb 16, 2017, at 10:07 AM, Doug Hellmann <d...@doughellmann.com> wrote:

When we signed off on the Big Tent changes we said competition
between projects was desirable, and that deployers and contributors
would make choices based on the work being done in those competing
projects. Basically, the market would decide on the "optimal"
solution. It's a hard message to hear, but that seems to be what
is happening.
This.

We got much better at adding new things to OpenStack. We need to get better at 
letting go of old things.

-- Ed Leafe




I agree that the market will dictate what continues to survive, but if
you're not careful you may be speeding up the decline as the end user
(deployer/operator/cloud consumer) will switch completely to something
else because it becomes too difficult to continue to consume via what
used to be there and no longer is.  I thought the whole point was to
not have vendor lock-in.  Honestly I think the focus is too much on
the development and not enough on the consumption of the development
output.  What is the point of all these features if no one can
actually consume them?


+1 to that.

I've been in the boat of development and consumption of it for my
*whole* journey in openstack land and I can say the product as a whole
seems 'underbaked' with regards to the way people consume the
development output. It seems we have focused on how to do the dev. stuff
nicely and a nice process there, but sort of forgotten about all that
being quite useless if no one can consume them (without going 

Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Adam Heczko
Personally I'd prefer OpenStack to follow some of the k8s deployment patterns.
OpenStack has grown to an enormous size and it is really painful to operate
at scale. My suggestion would be to focus on improving the consumption
models. 'Dockerization' of the release artifacts would be very useful. Also,
the current approach to configuration management, relying on tens of *.conf
files distributed across hundreds of directories, is difficult to understand
and maintain in the longer term. Why not move all config to etcd or MySQL? Do
we need all these *.conf files? This is an operator pain point and leads
Puppet/Chef/Ansible/Saltstack folks to spend hundreds of hours in a
suboptimal way.
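
To make the idea concrete: instead of reading [DEFAULT]/debug from a local
nova.conf, a service could fetch its options from a central etcd keyspace.
A minimal sketch against etcd's v2 HTTP keys API, assuming a local etcd at
127.0.0.1:2379 and a hypothetical nova/DEFAULT/debug key -- not something any
OpenStack service actually does today:

    import json
    import urllib.error
    import urllib.request

    ETCD = "http://127.0.0.1:2379"  # assumed local etcd endpoint

    def get_option(path, default=None):
        # Fetch one option from etcd's v2 keys API,
        # e.g. path = "nova/DEFAULT/debug" (hypothetical key layout).
        try:
            with urllib.request.urlopen("%s/v2/keys/%s" % (ETCD, path)) as resp:
                return json.loads(resp.read().decode("utf-8"))["node"]["value"]
        except urllib.error.HTTPError:
            # Key not set centrally; fall back to a default instead of a *.conf file.
            return default

    debug = get_option("nova/DEFAULT/debug", default="false") == "true"
    print("debug enabled:", debug)

Whether the backing store were etcd or MySQL, the point is the same: one
authoritative place to look up configuration instead of files scattered
across hosts and directories.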

On Thu, Feb 16, 2017 at 8:28 PM, Tim Bell  wrote:

>
> On 16 Feb 2017, at 19:42, Fox, Kevin M  wrote:
>
> +1. The assumption was market forces will cause the best OpenStack
> deployment tools to win. But the sad reality is, market forces are causing
> people to look for non OpenStack solutions instead as the pain is still too
> high.
>
> While k8s has a few different deployment tools currently, they are focused
> on getting the small bit of underlying plumbing deployed. Then you use the
> common k8s itself to deploy the rest. Adding a dashboard, dns, ingress,
> sdn, other component is easy in that world.
>
> IMO, OpenStack needs to do something similar. Standardize a small core and
> get that easily deployable, then make it easy to deploy/upgrade the rest of
> the big tent projects on top of that, not next to it as currently is being
> done.
>
> Thanks,
> Kevin
>
>
> Unfortunately, the more operators and end users question the viability of
> a specific project, the less likely it is to be adopted.
>
> It is a very very difficult discussion with an end user to explain that
> function X is no longer available because the latest OpenStack upgrade had
> to be done for security/functional/stability reasons and this
> project/function is not available.
>
> The availability of a function may also have been one of the positives for
> the OpenStack selection so finding a release or two later that it is no
> longer in the portfolio is difficult.
>
> The deprecation policy really helps so we can give a good notice but this
> assumes an equivalent function is available. For example, the move from the
> built-in Nova EC2 API to the EC2 project was one where we had enough notice to test the
> new solution in parallel and then move with minimum disruption.  Moving an
> entire data centre from Chef to Puppet or running a parallel toolchain, for
> example, has a high cost.
>
> Given the massive functionality increase in other clouds, it will be tough
> to limit the OpenStack offering to the small core. However, expanding with
> unsustainable projects is also not attractive.
>
> Tim
>
>
> 
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: Thursday, February 16, 2017 10:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [chef] Making the Kitchen Great Again: A
> Retrospective on OpenStack & Chef
>
> Alex Schultz wrote:
>
> On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe  wrote:
>
> On Feb 16, 2017, at 10:07 AM, Doug Hellmann  wrote:
>
> When we signed off on the Big Tent changes we said competition
> between projects was desirable, and that deployers and contributors
> would make choices based on the work being done in those competing
> projects. Basically, the market would decide on the "optimal"
> solution. It's a hard message to hear, but that seems to be what
> is happening.
>
> This.
>
> We got much better at adding new things to OpenStack. We need to get
> better at letting go of old things.
>
> -- Ed Leafe
>
>
>
>
> I agree that the market will dictate what continues to survive, but if
> you're not careful you may be speeding up the decline as the end user
> (deployer/operator/cloud consumer) will switch completely to something
> else because it becomes too difficult to continue to consume via what
> used to be there and no longer is.  I thought the whole point was to
> not have vendor lock-in.  Honestly I think the focus is too much on
> the development and not enough on the consumption of the development
> output.  What is the point of all these features if no one can
> actually consume them?
>
>
> +1 to that.
>
> I've been in the boat of development and consumption of it for my
> *whole* journey in openstack land and I can say the product as a whole
> seems 'underbaked' with regards to the way people consume the
> development output. It seems we have focused on how to do the dev. stuff
> nicely and a nice process there, but sort of fo

Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Tim Bell

On 16 Feb 2017, at 19:42, Fox, Kevin M <kevin@pnnl.gov> wrote:

+1. The assumption was market forces will cause the best OpenStack deployment 
tools to win. But the sad reality is, market forces are causing people to look 
for non OpenStack solutions instead as the pain is still too high.

While k8s has a few different deployment tools currently, they are focused on 
getting the small bit of underlying plumbing deployed. Then you use the common 
k8s itself to deploy the rest. Adding a dashboard, dns, ingress, sdn, other 
component is easy in that world.

IMO, OpenStack needs to do something similar. Standardize a small core and get 
that easily deployable, then make it easy to deploy/upgrade the rest of the big 
tent projects on top of that, not next to it as currently is being done.

Thanks,
Kevin

Unfortunately, the more operators and end users question the viability of a 
specific project, the less likely it is to be adopted.
It is a very very difficult discussion with an end user to explain that 
function X is no longer available because the latest OpenStack upgrade had to 
be done for security/functional/stability reasons and this project/function is 
not available.
The availability of a function may also have been one of the positives for the 
OpenStack selection so finding a release or two later that it is no longer in 
the portfolio is difficult.
The deprecation policy really helps so we can give a good notice but this 
assumes an equivalent function is available. For example, the move from the 
built-in Nova EC2 API to the EC2 project was one where we had enough notice to test the new 
solution in parallel and then move with minimum disruption.  Moving an entire 
data centre from Chef to Puppet or running a parallel toolchain, for example, 
has a high cost.
Given the massive functionality increase in other clouds, it will be tough to 
limit the OpenStack offering to the small core. However, expanding with 
unsustainable projects is also not attractive.
Tim


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Thursday, February 16, 2017 10:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [chef] Making the Kitchen Great Again: A 
Retrospective on OpenStack & Chef

Alex Schultz wrote:
On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe <e...@leafe.com> wrote:
On Feb 16, 2017, at 10:07 AM, Doug Hellmann <d...@doughellmann.com> wrote:

When we signed off on the Big Tent changes we said competition
between projects was desirable, and that deployers and contributors
would make choices based on the work being done in those competing
projects. Basically, the market would decide on the "optimal"
solution. It's a hard message to hear, but that seems to be what
is happening.
This.

We got much better at adding new things to OpenStack. We need to get better at 
letting go of old things.

-- Ed Leafe




I agree that the market will dictate what continues to survive, but if
you're not careful you may be speeding up the decline as the end user
(deployer/operator/cloud consumer) will switch completely to something
else because it becomes too difficult to continue to consume via what
used to be there and no longer is.  I thought the whole point was to
not have vendor lock-in.  Honestly I think the focus is too much on
the development and not enough on the consumption of the development
output.  What is the point of all these features if no one can
actually consume them?


+1 to that.

I've been in the boat of development and consumption of it for my
*whole* journey in openstack land and I can say the product as a whole
seems 'underbaked' with regards to the way people consume the
development output. It seems we have focused on how to do the dev. stuff
nicely and a nice process there, but sort of forgotten about all that
being quite useless if no one can consume them (without going through
much pain or paying a vendor).

This has, IMHO, been a factor in why certain companies (and the
people they support) are exiting openstack and just going elsewhere.

I personally don't believe the fix for this is to 'let the market forces'
figure it out for us (what a slow & horrible way to let this play out; I'd
almost rather go pull my fingernails out). I do believe it will require
making opinionated decisions, which we have never been very good at.


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Sylvain Bauza


On 16/02/2017 18:42, Alex Schultz wrote:
> On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe  wrote:
>> On Feb 16, 2017, at 10:07 AM, Doug Hellmann  wrote:
>>
>>> When we signed off on the Big Tent changes we said competition
>>> between projects was desirable, and that deployers and contributors
>>> would make choices based on the work being done in those competing
>>> projects. Basically, the market would decide on the "optimal"
>>> solution. It's a hard message to hear, but that seems to be what
>>> is happening.
>>
>> This.
>>
>> We got much better at adding new things to OpenStack. We need to get better 
>> at letting go of old things.
>>
>> -- Ed Leafe
>>
>>
>>
> 
> I agree that the market will dictate what continues to survive, but if
> you're not careful you may be speeding up the decline as the end user
> (deployer/operator/cloud consumer) will switch completely to something
> else because it becomes too difficult to continue to consume via what
> used to be there and no longer is.  I thought the whole point was to
> not have vendor lock-in.  Honestly I think the focus is too much on
> the development and not enough on the consumption of the development
> output.  What is the point of all these features if no one can
> actually consume them?
> 

IMHO, the crux of the matter has already been discussed in this thread:
it's how to get collaboration between projects.

No one can become seasoned in everything by boiling the OpenStack ocean.
It's wide, and you need to build a boat.


That boat can be built by having liaisons between deployment and service
projects, or by having mutual influence within those projects.

Putting the burden on one side doesn't solve the problem. Rather, I'd
much prefer to see communication at the design stage (for example,
during the PTG).

-Sylvain


> Thanks,
> -Alex
> 
>>
>>
>>
> 
> 



Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Fox, Kevin M
+1. The assumption was market forces will cause the best OpenStack deployment 
tools to win. But the sad reality is, market forces are causing people to look 
for non OpenStack solutions instead as the pain is still too high.

While k8s has a few different deployment tools currently, they are focused on 
getting the small bit of underlying plumbing deployed. Then you use the common 
k8s itself to deploy the rest. Adding a dashboard, dns, ingress, sdn, other 
component is easy in that world.

IMO, OpenStack needs to do something similar. Standardize a small core and get 
that easily deployable, then make it easy to deploy/upgrade the rest of the big 
tent projects on top of that, not next to it as currently is being done.

Thanks,
Kevin

From: Joshua Harlow [harlo...@fastmail.com]
Sent: Thursday, February 16, 2017 10:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [chef] Making the Kitchen Great Again: A 
Retrospective on OpenStack & Chef

Alex Schultz wrote:
> On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe  wrote:
>> On Feb 16, 2017, at 10:07 AM, Doug Hellmann  wrote:
>>
>>> When we signed off on the Big Tent changes we said competition
>>> between projects was desirable, and that deployers and contributors
>>> would make choices based on the work being done in those competing
>>> projects. Basically, the market would decide on the "optimal"
>>> solution. It's a hard message to hear, but that seems to be what
>>> is happening.
>> This.
>>
>> We got much better at adding new things to OpenStack. We need to get better 
>> at letting go of old things.
>>
>> -- Ed Leafe
>>
>>
>>
>
> I agree that the market will dictate what continues to survive, but if
> you're not careful you may be speeding up the decline as the end user
> (deployer/operator/cloud consumer) will switch completely to something
> else because it becomes too difficult to continue to consume via what
> used to be there and no longer is.  I thought the whole point was to
> not have vendor lock-in.  Honestly I think the focus is too much on
> the development and not enough on the consumption of the development
> output.  What is the point of all these features if no one can
> actually consume them?
>

+1 to that.

I've been in the boat of development and consumption of it for my
*whole* journey in openstack land and I can say the product as a whole
seems 'underbaked' with regards to the way people consume the
development output. It seems we have focused on how to do the dev. stuff
nicely and a nice process there, but sort of forgotten about all that
being quite useless if no one can consume them (without going through
much pain or paying a vendor).

This has, IMHO, been a factor in why certain companies (and the
people they support) are exiting openstack and just going elsewhere.

I personally don't believe the fix for this is to 'let the market forces'
figure it out for us (what a slow & horrible way to let this play out; I'd
almost rather go pull my fingernails out). I do believe it will require
making opinionated decisions, which we have never been very good at.



Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Joshua Harlow

Alex Schultz wrote:

On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe  wrote:

On Feb 16, 2017, at 10:07 AM, Doug Hellmann  wrote:


When we signed off on the Big Tent changes we said competition
between projects was desirable, and that deployers and contributors
would make choices based on the work being done in those competing
projects. Basically, the market would decide on the "optimal"
solution. It's a hard message to hear, but that seems to be what
is happening.

This.

We got much better at adding new things to OpenStack. We need to get better at 
letting go of old things.

-- Ed Leafe





I agree that the market will dictate what continues to survive, but if
you're not careful you may be speeding up the decline as the end user
(deployer/operator/cloud consumer) will switch completely to something
else because it becomes too difficult to continue to consume via what
used to be there and no longer is.  I thought the whole point was to
not have vendor lock-in.  Honestly I think the focus is too much on
the development and not enough on the consumption of the development
output.  What is the point of all these features if no one can
actually consume them?



+1 to that.

I've been in the boat of development and consumption of it for my 
*whole* journey in openstack land and I can say the product as a whole 
seems 'underbaked' with regards to the way people consume the 
development output. It seems we have focused on how to do the dev. stuff 
nicely and a nice process there, but sort of forgotten about all that 
being quite useless if no one can consume them (without going through 
much pain or paying a vendor).


This has, IMHO, been a factor in why certain companies (and the
people they support) are exiting openstack and just going elsewhere.


I personally don't believe the fix for this is to 'let the market forces'
figure it out for us (what a slow & horrible way to let this play out; I'd
almost rather go pull my fingernails out). I do believe it will require
making opinionated decisions, which we have never been very good at.




Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Alex Schultz
On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe  wrote:
> On Feb 16, 2017, at 10:07 AM, Doug Hellmann  wrote:
>
>> When we signed off on the Big Tent changes we said competition
>> between projects was desirable, and that deployers and contributors
>> would make choices based on the work being done in those competing
>> projects. Basically, the market would decide on the "optimal"
>> solution. It's a hard message to hear, but that seems to be what
>> is happening.
>
> This.
>
> We got much better at adding new things to OpenStack. We need to get better 
> at letting go of old things.
>
> -- Ed Leafe
>
>
>

I agree that the market will dictate what continues to survive, but if
you're not careful you may be speeding up the decline as the end user
(deployer/operator/cloud consumer) will switch completely to something
else because it becomes too difficult to continue to consume via what
used to be there and no longer is.  I thought the whole point was to
not have vendor lock-in.  Honestly I think the focus is too much on
the development and not enough on the consumption of the development
output.  What is the point of all these features if no one can
actually consume them?

Thanks,
-Alex

>
>
>


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Doug Hellmann
Excerpts from Ian Cordasco's message of 2017-02-16 11:20:45 -0500:
> -Original Message-
> From: Ed Leafe 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: February 16, 2017 at 10:13:51
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject:  Re: [openstack-dev] [chef] Making the Kitchen Great Again: A
> Retrospective on OpenStack & Chef
> 
> > On Feb 16, 2017, at 10:07 AM, Doug Hellmann wrote:
> >
> > > When we signed off on the Big Tent changes we said competition
> > > between projects was desirable, and that deployers and contributors
> > > would make choices based on the work being done in those competing
> > > projects. Basically, the market would decide on the "optimal"
> > > solution. It's a hard message to hear, but that seems to be what
> > > is happening.
> >
> > This.
> >
> > We got much better at adding new things to OpenStack. We need to get better 
> > at letting go
> > of old things.
> 
> I agree with the idea of making it easier for new contributors from
> outside our walled garden of paid contributors. I won't further derail
> this thread with my opinions of how corporations will begin to exploit
> that and invest less directly in OpenStack's development though.

Influence comes with investment. The less someone contributes, the
less overall input they have into the direction. Entities that have
strong opinions about direction will still need to participate
consistently in order to convince other contributors to follow their
lead.

Doug

> 
> --
> Ian Cordasco
> 


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Monty Taylor
On 02/16/2017 10:23 AM, Ed Leafe wrote:
> On Feb 16, 2017, at 10:12 AM, Ed Leafe  wrote:
> 
>> We got much better at adding new things to OpenStack. We need to get better 
>> at letting go of old things.
> 
> On re-reading that, it doesn’t sound like what I intended. By “old” I mean 
> things that no longer have enough active support to keep them current. I have 
> nothing against old things, being one myself. :)

/me waves from the old-folks home




Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Ed Leafe
On Feb 16, 2017, at 10:12 AM, Ed Leafe  wrote:

> We got much better at adding new things to OpenStack. We need to get better 
> at letting go of old things.

On re-reading that, it doesn’t sound like what I intended. By “old” I mean 
things that no longer have enough active support to keep them current. I have 
nothing against old things, being one myself. :)


-- Ed Leafe




Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Ian Cordasco
-Original Message-
From: Ed Leafe 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: February 16, 2017 at 10:13:51
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [chef] Making the Kitchen Great Again: A
Retrospective on OpenStack & Chef

> On Feb 16, 2017, at 10:07 AM, Doug Hellmann wrote:
>
> > When we signed off on the Big Tent changes we said competition
> > between projects was desirable, and that deployers and contributors
> > would make choices based on the work being done in those competing
> > projects. Basically, the market would decide on the "optimal"
> > solution. It's a hard message to hear, but that seems to be what
> > is happening.
>
> This.
>
> We got much better at adding new things to OpenStack. We need to get better 
> at letting go
> of old things.

I agree with the idea of making it easier for new contributors from
outside our walled garden of paid contributors. I won't further derail
this thread with my opinions of how corporations will begin to exploit
that and invest less directly in OpenStack's development though.

--
Ian Cordasco



Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Ed Leafe
On Feb 16, 2017, at 10:07 AM, Doug Hellmann  wrote:

> When we signed off on the Big Tent changes we said competition
> between projects was desirable, and that deployers and contributors
> would make choices based on the work being done in those competing
> projects. Basically, the market would decide on the "optimal"
> solution. It's a hard message to hear, but that seems to be what
> is happening.

This.

We got much better at adding new things to OpenStack. We need to get better at 
letting go of old things.

-- Ed Leafe








Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Doug Hellmann
Excerpts from Dean Troyer's message of 2017-02-16 08:25:42 -0600:
> On Thu, Feb 16, 2017 at 7:54 AM, Ian Cordasco  wrote:
> > It seems like a lot of people view "developing OpenStack" as more
> > attractive than "making it easy to deploy OpenStack" (via the
> > deployment tools in the tent). Doing both is really something project
> > teams should do, but I don't think shoving all of the deployment
> > tooling in repo makes that any better frankly. If we put our tooling
> > in repo, we'll likely then start collecting RPM/Debian/etc. packaging
> > in repo as well.
> 
> Part of it _is_ the "deployment isn't shiny/new/features", but there
> is also the historical view lingering that we ship tarballs (and
> realistically, git repo tags) so as not to pick favorites and to keep
> deployment and packaging out of the project repos directly.
> 
> We have >5 different tools that are "mainstream" to consider; not even
> the large project teams have enough expertise to maintain those.  I
> had a hard enough time myself keeping both rpm and apt up to date in
> previous projects, much less recipes, playbooks and the odd shell
> script.
> 
> It is also hard to come into a community and demand that other
> projects suddenly have to support your project just because you showed
> up.  (This works both ways, when we add a "Big Tent" project, now all
> of the downstreams are expected to suddenly add it.)
> 
> The solution I see is to have a mechanism by which important things
> can be communicated from the project developers that can be consumed
> by the packager/deployer community.  Today, however sub-optimal it
> seems to be, that is release notes.  Maybe adding a specific section
> for packaging would be helpful, but in practice that adds one more
> thing to remember to write to a list that is already hard to get devs
> to do.
> 
> As Doug mentions elsewhere in this thread, the liaison approach has
> worked in other areas, it may be a useful approach here.  But I know
> as the PTL of a very small project I do not want 5 more hats to wear.
> Maybe one.
> 
> We have encouraged the packaging groups to work together in the past,
> with mixed results, but it seems to me that a lot of the things that
> need to be discovered and learned in new releases will be similar for
> all of the downstream consumers.  Maybe if that downstream community
> could reach a critical mass and suggest a single common way to
> communicate the deployment considerations in a release it would get
> more traction.  And a single project liaison for the collective rather
> than one for each deployment project.

I like the idea of at least standardizing the communication. I agree, we
don't want 5+ new liaison responsibilities, especially for small teams.

> > I think what would help improve willing bidirectional communication
> > between service and deployment/packaging teams would be an
> > understanding that without the deployment/packaging teams, the service
> > teams will likely be out of a job. People will/can not deploy
> > OpenStack without the tooling these other teams provide.
> 
> True in a literal sense.  This is also true for our users; without
> them nobody will deploy a cloud in the first place.  That has not
> changed anything unfortunately, we still regularly give our users
> miserable experiences because it is too much effort to participate
> with the (now comatose) UX team or prioritize making a common log
> format or reduce the number of knobs available to twiddle to make each
> cloud a unique snowflake.
> 
> We can not sustain being all things to all people.  Given the
> contraction of investment being considered and implemented by many of
> our foundation member companies, we will have to make some hard
> decisions about where to spend the resources we have left.  Some of
> those decisions are made for us by those member companies promoting
> their own priorities (your example of Ansible contributions is one).
> But as a community we have an opportunity to express our desires and
> potentially influence those decisions.

When we signed off on the Big Tent changes we said competition
between projects was desirable, and that deployers and contributors
would make choices based on the work being done in those competing
projects. Basically, the market would decide on the "optimal"
solution. It's a hard message to hear, but that seems to be what
is happening.

We also said that cross-project efforts needed to decide what
amount of work they could manage themselves and then empower project
teams to take on the responsibility for everything else. We need
to build tools and write documentation to make it as easy as possible
to contribute, and then be OK with the state of the world if we
don't have 100% coverage of all projects with all deployment tools
or install guides or whatever else falls into that cross-project
category.  We want to have perfect parity everywhere, but everything
is evolving continuously, so that hardly seems realistic.

Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Ian Cordasco
-Original Message-
From: Dean Troyer 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: February 16, 2017 at 08:27:12
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [chef] Making the Kitchen Great Again: A
Retrospective on OpenStack & Chef

> On Thu, Feb 16, 2017 at 7:54 AM, Ian Cordasco wrote:
> > It seems like a lot of people view "developing OpenStack" as more
> > attractive than "making it easy to deploy OpenStack" (via the
> > deployment tools in the tent). Doing both is really something project
> > teams should do, but I don't think shoving all of the deployment
> > tooling in repo makes that any better frankly. If we put our tooling
> > in repo, we'll likely then start collecting RPM/Debian/etc. packaging
> > in repo as well.
>
> Part of it _is_ the "deployment isn't shiny/new/features", but there
> is also the historical view lingering that we ship tarballs (and
> realistically, git repo tags) so as not to pick favorites and to keep
> deployment and packaging out of the project repos directly.

Right, I understand that.

> We have >5 different tools that are "mainstream" to consider; not even
> the large project teams have enough expertise to maintain those. I
> had a hard enough time myself keeping both rpm and apt up to date in
> previous projects, much less recipes, playbooks and the odd shell
> script.

Next week it'll be 7. ;)

> It is also hard to come into a community and demand that other
> projects suddenly have to support your project just because you showed
> up. (This works both ways, when we add a "Big Tent" project, now all
> of the downstreams are expected to suddenly add it.)

Right. And we've seen teams like the UCA team be selective in what
they add (rightfully so).

> The solution I see is to have a mechanism by which important things
> can be communicated from the project developers that can be consumed
> by the packager/deployer community. Today, however sub-optimal it
> seems to be, that is release notes. Maybe adding a specific section
> for packaging would be helpful, but in practice that adds one more
> thing to remember to write to a list that is already hard to get devs
> to do.

I would think the mailing list (with a certain tag) might be better.

> As Doug mentions elsewhere in this thread, the liaison approach has
> worked in other areas, it may be a useful approach here. But I know
> as the PTL of a very small project I do not want 5 more hats to wear.
> Maybe one.

Right. And teams like Glance are struggling with their review queue
given the contraction you mention later on.

> We have encouraged the packaging groups to work together in the past,
> with mixed results, but it seems to me that a lot of the things that
> need to be discovered and learned in new releases will be similar for
> all of the downstream consumers. Maybe if that downstream community
> could reach a critical mass and suggest a single common way to
> communicate the deployment considerations in a release it would get
> more traction. And a single project liaison for the collective rather
> than one for each deployment project.

We also have packaging teams that repeatedly denigrate others who
create derivatives of their work while discouraging and denigrating
others' participation. I think that coordination will be hard until
those packagers learn the hard lessons of their harmful actions.

> > I think what would help improve willing bidirectional communication
> > between service and deployment/packaging teams would be an
> > understanding that without the deployment/packaging teams, the service
> > teams will likely be out of a job. People will/can not deploy
> > OpenStack without the tooling these other teams provide.
>
> True in a literal sense. This is also true for our users; without
> them nobody will deploy a cloud in the first place. That has not
> changed anything unfortunately, we still regularly give our users
> miserable experiences because it is too much effort to participate
> with the (now comatose) UX team or prioritize making a common log
> format or reduce the number of knobs available to twiddle to make each
> cloud a unique snowflake.
>
> We can not sustain being all things to all people. Given the
> contraction of investment being considered and implemented by many of
> our foundation member companies, we will have to make some hard
> decisions about where to spend the resources we have left. Some of
> those decisions are made for us by those member companies promoting
> their own priorities (your example of Ansible contributions is one).
> But as a community we have an opportunity to 

Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Monty Taylor
On 02/16/2017 08:25 AM, Dean Troyer wrote:
> On Thu, Feb 16, 2017 at 7:54 AM, Ian Cordasco  wrote:
>> It seems like a lot of people view "developing OpenStack" as more
>> attractive than "making it easy to deploy OpenStack" (via the
>> deployment tools in the tent). Doing both is really something project
>> teams should do, but I don't think shoving all of the deployment
>> tooling in repo makes that any better frankly. If we put our tooling
>> in repo, we'll likely then start collecting RPM/Debian/etc. packaging
>> in repo as well.
> 
> Part of it _is_ the "deployment isn't shiny/new/features", but there
> is also the historical view lingering that we ship tarballs (and
> realistically, git repo tags) so as not to pick favorites and to keep
> deployment and packaging out of the project repos directly.
> 
> We have >5 different tools that are "mainstream" to consider; not even
> the large project teams have enough expertise to maintain those.  I
> had a hard enough time myself keeping both rpm and apt up to date in
> previous projects, much less recipes, playbooks and the odd shell
> script.

/me puts on old man pants and settles down by the fire to tell an old yarn

Back in the day - and by that, I mean before devstack existed and before
we were using git and before global-requirements - we did debian
packaging (targeting ubuntu) for all (4) of our projects. This is how
the gate worked. If you needed to express that nova had a new
dependency, you had to first go land a change in the packaging repo that
added the dependency (or bumped the version). The gate installed
dependencies on build hosts via "apt-get build-dep $project".

It worked ... to a degree. But then around the Diablo cycle, our friends
at Red Hat showed up. I remember quite clearly a discussion I had with
markmc at what I remember to be the Essex summit in Boston (but it's
possible I'm placing the memory in the wrong room - sorry, I'm getting
old) where he made the very reasonable argument "I've been using RPM for
ages and have never in my life made a debian package - why should I need
to go learn debian packaging to be able to do python development?"

That argument or some form of it has persisted in our DNA ever since. It
is one of the two reasons we switched from interacting with distro
packages to using virtualenvs and pip (the other is what is now a rather
funny story involving the old bzr vs. git argument that I'll happily
tell anyone over alcohol in person).

It's at the root of why TripleO originally came into existence - and
also why it originally did not use puppet or chef (ansible and salt
weren't reasonable options yet at the time). It's related to the reason
we use devstack in the gate and not puppet or chef or ansible or salt or
juju. We knew that an OpenStack deployment project that used Puppet
would alienate the folks who used Chef at work, and vice-versa.

Quick digression - the real reason we use devstack in the gate is that
it's literally the first thing that the infra team was handed that
actually WORKED. Before devstack, on different occasions we were handed
repos of cookbooks or puppet modules - but none of them came with any
instructions of how to use them. Then devstack arrived and was as simple
as "run stack.sh" - and it actually worked the first time we tried it -
thus the devstack-gate was ultimately born.

As Dean points out - far from having gotten better, this has actually
multiplied. Far from not alienating either the Puppet or the Chef
crowd, we REALLY pissed off both crowds when they thought that the
existence of TripleO as an official project disadvantaged people using
other deployment frameworks. (we totally solved the 2 competing standard
problem by adding a third standard) We now also have kolla and fuel and
TripleO in addition to puppet, chef, ansible, salt and juju. Luckily
there does seem to be collaboration on docker images via kolla - and
TripleO's and fuel's use of puppet is in collaboration with
openstack-puppet - so although we're not coalescing on one approach,
we're finding ways to collaborate on the pieces where we can.

We also have not just Ubuntu and Red Hat - but have gained (and recently
lost the maintainer of) Debian, and we have folks from Gentoo and SuSE
deeply involved. When you multiply
kolla+fuel+TripleO+ansible+salt+puppet+chef+juju by
Ubuntu+CentOS+RHEL+Fedora+SLES+OpenSUSE+Gentoo+Debian ... not to mention
the different reasonable versions of each of those ... it quickly
becomes an unworkable burden for each project developer to
know how to navigate all of them - as much as I wish that were not true.

I'll shut up now. I'm old - it's time for a nap.

> It is also hard to come into a community and demand that other
> projects suddenly have to support your project just because you showed
> up.  (This works both ways, when we add a "Big Tent" project, now all
> of the downstreams are expected to suddenly add it.)
> 
> The solution I see is to have a mechanism by which impo

Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Dean Troyer
On Thu, Feb 16, 2017 at 7:54 AM, Ian Cordasco  wrote:
> It seems like a lot of people view "developing OpenStack" as more
> attractive than "making it easy to deploy OpenStack" (via the
> deployment tools in the tent). Doing both is really something project
> teams should do, but I don't think shoving all of the deployment
> tooling in repo makes that any better frankly. If we put our tooling
> in repo, we'll likely then start collecting RPM/Debian/etc. packaging
> in repo as well.

Part of it _is_ the "deployment isn't shiny/new/features", but there
is also the historical view lingering that we ship tarballs (and
realistically, git repo tags) so as not to pick favorites and to keep
deployment and packaging out of the project repos directly.

We have >5 different tools that are "mainstream" to consider; not even
the large project teams have enough expertise to maintain those.  I
had a hard enough time myself keeping both rpm and apt up to date in
previous projects, much less recipes, playbooks and the odd shell
script.

It is also hard to come into a community and demand that other
projects suddenly have to support your project just because you showed
up.  (This works both ways, when we add a "Big Tent" project, now all
of the downstreams are expected to suddenly add it.)

The solution I see is to have a mechanism by which important things
can be communicated from the project developers that can be consumed
by the packager/deployer community.  Today, however sub-optimal it
seems to be, that is release notes.  Maybe adding a specific section
for packaging would be helpful, but in practice that adds one more
thing to remember to write to a list that is already hard to get devs
to do.

As Doug mentions elsewhere in this thread, the liaison approach has
worked in other areas, it may be a useful approach here.  But I know
as the PTL of a very small project I do not want 5 more hats to wear.
Maybe one.

We have encouraged the packaging groups to work together in the past,
with mixed results, but it seems to me that a lot of the things that
need to be discovered and learned in new releases will be similar for
all of the downstream consumers.  Maybe if that downstream community
could reach a critical mass and suggest a single common way to
communicate the deployment considerations in a release it would get
more traction.  And a single project liaison for the collective rather
than one for each deployment project.

> I think what would help improve willing bidirectional communication
> between service and deployment/packaging teams would be an
> understanding that without the deployment/packaging teams, the service
> teams will likely be out of a job. People will/can not deploy
> OpenStack without the tooling these other teams provide.

True in a literal sense.  This is also true for our users; without
them nobody will deploy a cloud in the first place.  That has not
changed anything unfortunately, we still regularly give our users
miserable experiences because it is too much effort to participate
with the (now comatose) UX team or prioritize making a common log
format or reduce the number of knobs available to twiddle to make each
cloud a unique snowflake.

We can not sustain being all things to all people.  Given the
contraction of investment being considered and implemented by many of
our foundation member companies, we will have to make some hard
decisions about where to spend the resources we have left.  Some of
those decisions are made for us by those member companies promoting
their own priorities (your example of Ansible contributions is one).
But as a community we have an opportunity to express our desires and
potentially influence those decisions.


dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Ian Cordasco
-Original Message-
From: Doug Hellmann 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: February 16, 2017 at 07:06:25
To: openstack-dev 
Subject:  Re: [openstack-dev] [chef] Making the Kitchen Great Again: A
Retrospective on OpenStack & Chef

> Excerpts from Sylvain Bauza's message of 2017-02-16 10:55:14 +0100:
> >
> > On 16/02/2017 10:17, Neil Jerram wrote:
> > > On Thu, Feb 16, 2017 at 5:26 AM Joshua Harlow wrote:
> > >
> > > Radical idea, have each project (not libraries) contain a dockerfile
> > > that builds the project into a deployable unit (or multiple dockerfiles
> > > for projects with multiple components) and then it becomes the project's
> > > responsibility for ensuring that the right code is in that dockerfile to
> > > move from release to release (whether that be a piece of code that does
> > > a configuration migration).
> > >
> > >
> > > I've wondered about that approach, but worried about having the Docker
> > > engine as a new dependency for each OpenStack node. Would that matter?
> > > (Or are there other reasons why OpenStack nodes commonly already have
> > > Docker on them?)
> > >
> >
> > And one could claim that each project should also maintain its Ansible
> > playbooks. And one could claim that each project should also maintain
> > its Chef cookbooks. And one could claim that each project should also
> > maintain its Puppet manifests.
> >
> > I surely understand the problem that is stated here and how difficult
> > it is for a deployment tool team to cope with the requirements that
> > every project makes every time it writes an upgrade impact.
> >
> > For better or worse, as a service project developer, the only way to
> > signal the change is to write a release note. I'm not at all seasoned in
> > all the quirks and specifics of a given deployment tool, and it's
> > always a hard time figuring out whether what I write can break other things.
> >
> > What could be the solution to that distributed-services problem? Well,
> > understanding each other's problems is certainly one of the solutions.
> > Getting more communication between teams can also certainly help. Having
> > consistent behaviours between heterogeneous deployment tools could also
> > be a thing.
>
> Right. The liaison program used by other cross-project teams is
> designed to deal with this communication gap by identifying someone
> to focus on ensuring the communication happens. Perhaps we need to
> apply that idea to some of the deployment projects as well.

I know the OpenStack-Ansible project went out of its way to try to
create a liaison program with the OpenStack services it works on. The
only engagement (that I'm aware of) has been from other Rackspace
employees whose management has told them to work on the Ansible roles
that ship the project they work on.

It seems like a lot of people view "developing OpenStack" as more
attractive than "making it easy to deploy OpenStack" (via the
deployment tools in the tent). Doing both is really something project
teams should do, but I don't think shoving all of the deployment
tooling in repo makes that any better frankly. If we put our tooling
in repo, we'll likely then start collecting RPM/Debian/etc. packaging
in repo as well.

I think what would help improve willing bidirectional communication
between service and deployment/packaging teams would be an
understanding that without the deployment/packaging teams, the service
teams will likely be out of a job. People will/can not deploy
OpenStack without the tooling these other teams provide.

Cheers,
--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Doug Hellmann
Excerpts from Sylvain Bauza's message of 2017-02-16 10:55:14 +0100:
> 
> On 16/02/2017 10:17, Neil Jerram wrote:
> > On Thu, Feb 16, 2017 at 5:26 AM Joshua Harlow wrote:
> > 
> > Radical idea, have each project (not libraries) contain a dockerfile
> > that builds the project into a deployable unit (or multiple dockerfiles
> > for projects with multiple components) and then it becomes the project's
> > responsibility for ensuring that the right code is in that dockerfile to
> > move from release to release (whether that be a piece of code that does
> > a configuration migration).
> > 
> > 
> > I've wondered about that approach, but worried about having the Docker
> > engine as a new dependency for each OpenStack node.  Would that matter?
> >  (Or are there other reasons why OpenStack nodes commonly already have
> > Docker on them?)
> > 
> 
> And one could claim that each project should also maintain its Ansible
> playbooks. And one could claim that each project should also maintain
> its Chef cookbooks. And one could claim that each project should also
> maintain its Puppet manifests.
> 
> I surely understand the problem that is stated here and how difficult
> it is for a deployment tool team to cope with the requirements that
> every project makes every time it writes an upgrade impact.
> 
> For better or worse, as a service project developer, the only way to
> signal the change is to write a release note. I'm not at all seasoned in
> all the quirks and specifics of a given deployment tool, and it's
> always a hard time figuring out whether what I write can break other things.
> 
> What could be the solution to that distributed-services problem? Well,
> understanding each other's problems is certainly one of the solutions.
> Getting more communication between teams can also certainly help. Having
> consistent behaviours between heterogeneous deployment tools could also
> be a thing.

Right. The liaison program used by other cross-project teams is
designed to deal with this communication gap by identifying someone
to focus on ensuring the communication happens. Perhaps we need to
apply that idea to some of the deployment projects as well.

Doug

> 
> That's an iterative approach, and that takes time. Sure, and that's
> frustrating. But, please, keep in mind we all go in the same direction.
> 
> -S
> 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Sylvain Bauza


On 16/02/2017 10:17, Neil Jerram wrote:
> On Thu, Feb 16, 2017 at 5:26 AM Joshua Harlow wrote:
> 
> Radical idea, have each project (not libraries) contain a dockerfile
> that builds the project into a deployable unit (or multiple dockerfiles
> for projects with multiple components) and then it becomes the project's
> responsibility for ensuring that the right code is in that dockerfile to
> move from release to release (whether that be a piece of code that does
> a configuration migration).
> 
> 
> I've wondered about that approach, but worried about having the Docker
> engine as a new dependency for each OpenStack node.  Would that matter?
>  (Or are there other reasons why OpenStack nodes commonly already have
> Docker on them?)
> 

And one could claim that each project should also maintain its Ansible
playbooks. And one could claim that each project should also maintain
its Chef cookbooks. And one could claim that each project should also
maintain its Puppet manifests.

I surely understand the problem that is stated here and how difficult
it is for a deployment tool team to cope with the requirements that
every project makes every time it writes an upgrade impact.

For better or worse, as a service project developer, the only way to
signal the change is to write a release note. I'm not at all seasoned in
all the quirks and specifics of a given deployment tool, and it's
always a hard time figuring out whether what I write can break other things.

What could be the solution to that distributed-services problem? Well,
understanding each other's problems is certainly one of the solutions.
Getting more communication between teams can also certainly help. Having
consistent behaviours between heterogeneous deployment tools could also
be a thing.

That's an iterative approach, and that takes time. Sure, and that's
frustrating. But, please, keep in mind we all go in the same direction.

-S

> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Neil Jerram
On Thu, Feb 16, 2017 at 5:26 AM Joshua Harlow  wrote:

> Radical idea, have each project (not libraries) contain a dockerfile
> that builds the project into a deployable unit (or multiple dockerfiles
> for projects with multiple components) and then it becomes the project's
> responsibility for ensuring that the right code is in that dockerfile to
> move from release to release (whether that be a piece of code that does
> a configuration migration).
>
>
I've wondered about that approach, but worried about having the Docker
engine as a new dependency for each OpenStack node.  Would that matter?
 (Or are there other reasons why OpenStack nodes commonly already have
Docker on them?)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-15 Thread Joshua Harlow

For the cookbooks, every core and non-core project that is supported has to
be tracked. In addition to that, each platform that is supported must be
tracked, for quirks and idiosyncrasies, because they always have them.

Then, there are the cross-project teams that do the packaging, as well as
the teams that do not necessarily ship releases that must be tracked, for
variances in testing methods, mirrors outside the scope of infra, external
dependencies, etc. It can be slightly overwhelming and overloading at times,
even to someone reasonably seasoned. Scale that process, for every ecosystem
in which one desires to exist, by an order of magnitude.

There’s definitely a general undercurrent to all of this, and it’s bigger
than any one person or team to solve. We definitely can’t “read the release
notes” for this.


Radical idea, have each project (not libraries) contain a dockerfile 
that builds the project into a deployable unit (or multiple dockerfiles 
for projects with multiple components) and then it becomes the project's 
responsibility for ensuring that the right code is in that dockerfile to 
move from release to release (whether that be a piece of code that does 
a configuration migration).
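
To make the idea concrete, here is a minimal, purely hypothetical sketch
of what such a per-project dockerfile could look like, assuming a
pip-installable Python service; the base image, paths, script and entry
point names are illustrative only, not any project's actual packaging:

    # Hypothetical per-project Dockerfile sketch; names are illustrative.
    FROM python:3-slim

    # The project team, not a downstream packager, decides what goes into
    # the deployable unit: install the service from its own source tree.
    COPY . /opt/service
    RUN pip install --no-cache-dir /opt/service

    # The project also ships its own release-to-release migration step, so
    # moving between tags stays the project's responsibility. The script is
    # assumed to run any config/schema migration and then exec "$@".
    COPY tools/upgrade.sh /usr/local/bin/service-upgrade
    RUN chmod +x /usr/local/bin/service-upgrade
    ENTRYPOINT ["/usr/local/bin/service-upgrade"]
    CMD ["service-api"]

The point is only where that responsibility lives, not the particular
base image or layout.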


This is basically what kolla is doing (except kolla itself contains all
the dockerfiles and deployment tooling as well), and though I won't
comment on the kolla perspective, if each project managed its own
dockerfiles that wouldn't seem like a bad thing... (it may have been
proposed before).


Such a thing could move the responsibility (of at least the packaging
components and dependencies) onto the projects themselves. I've been in
the boat of trying to do all the packaging and tracking variances and I
know it's some kind of hell, and shifting the responsibility onto the
projects themselves may be a better solution (or at least can be one
people discuss).



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-15 Thread Samuel Cassiba

> On Feb 15, 2017, at 08:49, Alex Schultz  wrote:
> 
> On Wed, Feb 15, 2017 at 9:02 AM, Samuel Cassiba  wrote:
>>> On Feb 15, 2017, at 02:07, Thierry Carrez  wrote:
>>> 
>>> Samuel Cassiba wrote:
>>>> [...]
>>>> *TL;DR* if you don't want to keep going -
>>>> OpenStack-Chef is not in a good place and is not sustainable.
>>>> [...]
>>> 
>>> Thanks for sharing, Sam.
>> 
>> Thanks for taking the time to read and respond. This was as hard to write as 
>> it was to read. As time went on, it became apparent that this retrospective 
>> needed to exist. It was not written lightly, and does not aim to point 
>> fingers.
>> 
>>> I think that part of the reason for the situation is that we grew the
>>> number of options for deploying OpenStack. We originally only had Puppet
>>> and Chef, but now there is Ansible, Juju, and the various
>>> Kolla-consuming container-oriented approaches. There is a gravitational
>>> attraction effect at play (more users -> more contributors) which
>>> currently benefits Puppet, Ansible and Kolla, at the expense of
>>> less-popular community-driven efforts like OpenStackChef and
>>> OpenStackSalt. I expect this effect to continue. I have mixed feelings
>>> about it: on one hand it reduces available technical options, but on the
>>> other it allows us to focus and raise quality…
>> 
>> You have a very valid point. One need only look at the trends over the 
>> cycles in the User Survey to see this shift in most places. Ansible wins due 
>> to sheer simplicity for new deployments, but there are also real business 
>> decisions that go behind automation flavors at certain business sizes. This 
>> leaves them effectively married to whichever flavor chosen. That shift 
>> impacts Puppet’s overall user base, as well, though they had and still have 
>> the luxury of maintaining sponsored support at higher numbers.
> 
> To chime in on the Puppet side, we've seen a decrease in contributors
> over the last several cycles and I have a feeling we'll be in the same
> boat in the near future.  The number of modules that we have to try
> and manage versus the number of folks that we have contributing is
> getting to an unmanageable state.  I believe the only way we've gotten
> to where we have been is due to the use within Fuel and TripleO.  As
> those projects evolve, it directly impacts the ability for the Puppet
> modules to remain relevant.  Some could argue that's just the way it
> goes and technologies evolve which is true.  But it's also a loss for
> many of the newer methods as they are losing all of the historical
> knowledge and understanding that went with it and why some patterns
> work better than others.  The software wheel, it's getting reinvented
> every day.

Thank you for your perspective from the Puppet side. The Survey data alone
paints a certain narrative, and not one I think people want. If OpenStack
deployment choice is down to a popularity contest, the direct result is
fewer avenues back into OpenStack.

Fewer people will think to pick OpenStack as a viable option if it simply
doesn’t support their design, which means less exposure for non-core
projects, less feedback for core projects, rinse, repeat. Developers can and
would coalesce around just a couple of the most popular options, which works
if that’s the way things are intended to go. With that, the OpenStack story
starts to read less like an ecosystem and more like a distro, bordering on
an echo chamber. I don’t think anyone signed up for that. On the other hand,
fewer deployment options allow for more singular focus. Without all that
choice clouding decision-making, one has no way to OpenStack but those few
methods that everyone uses.

>> Chef’s sponsored support has numbered far fewer. It casts an extremely 
>> negative image on OpenStack when someone looks for help at odd hours, or 
>> asks something somewhere that none of us have time to track. The answer to 
>> that is the point of making noise, to generate conversation about avenues 
>> and solutions. I could have kept my fingers aiming at LP, Gerrit and IRC in 
>> an attempt to bury my head in the sand. We’re way past the point of denial, 
>> perhaps too far, but as long as the results of the User Survey show Chef, 
>> there are still users to support, for now. Operators and deployers will be 
>> looking to the source of truth, wherever that is, and right now that source 
>> of truth is OpenStack.
>> 
>>> There is one question I wanted to ask you in terms of community. We
>>> maintain in OpenStack a number of efforts that bridge two communities,
>>> and where the project could set up its infrastructure / governance in
>>> one or the other. In the case of OpenStackChef, you could have set up
>>> shop on the Chef community side, rather than on the OpenStack community
>>> side. Would you say that living on the OpenStack community side helped
>>> you or hurt you ? Did you get enough help / visibility to balance the
>>> constraints ? Do you think you would have been more, less or equally
>>> successful if you had set up shop more on the Chef community side ?

Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-15 Thread Alex Schultz
On Wed, Feb 15, 2017 at 9:02 AM, Samuel Cassiba  wrote:
>
>> On Feb 15, 2017, at 02:07, Thierry Carrez  wrote:
>>
>> Samuel Cassiba wrote:
>>> [...]
>>> *TL;DR* if you don't want to keep going -
>>> OpenStack-Chef is not in a good place and is not sustainable.
>>> [...]
>>
>> Thanks for sharing, Sam.
>>
>
> Thanks for taking the time to read and respond. This was as hard to write as 
> it was to read. As time went on, it became apparent that this retrospective 
> needed to exist. It was not written lightly, and does not aim to point 
> fingers.
>
>> I think that part of the reason for the situation is that we grew the
>> number of options for deploying OpenStack. We originally only had Puppet
>> and Chef, but now there is Ansible, Juju, and the various
>> Kolla-consuming container-oriented approaches. There is a gravitational
>> attraction effect at play (more users -> more contributors) which
>> currently benefits Puppet, Ansible and Kolla, at the expense of
>> less-popular community-driven efforts like OpenStackChef and
>> OpenStackSalt. I expect this effect to continue. I have mixed feelings
>> about it: on one hand it reduces available technical options, but on the
>> other it allows us to focus and raise quality…
>
> You have a very valid point. One need only look at the trends over the cycles 
> in the User Survey to see this shift in most places. Ansible wins due to 
> sheer simplicity for new deployments, but there are also real business 
> decisions that go behind automation flavors at certain business sizes. This 
> leaves them effectively married to whichever flavor chosen. That shift 
> impacts Puppet’s overall user base, as well, though they had and still have 
> the luxury of maintaining sponsored support at higher numbers.
>

To chime in on the Puppet side, we've seen a decrease in contributors
over the last several cycles and I have a feeling we'll be in the same
boat in the near future.  The number of modules that we have to try
and manage versus the number of folks that we have contributing is
getting to an unmanageable state.  I believe the only way we've gotten
to where we have been is due to the use within Fuel and TripleO.  As
those projects evolve, it directly impacts the ability for the Puppet
modules to remain relevant.  Some could argue that's just the way it
goes and technologies evolve which is true.  But it's also a loss for
many of the newer methods as they are losing all of the historical
knowledge and understanding that went with it and why some patterns
work better than others.  The software wheel, it's getting reinvented
every day.

> Chef’s sponsored support has numbered far fewer. It casts an extremely 
> negative image on OpenStack when someone looks for help at odd hours, or asks 
> something somewhere that none of us have time to track. The answer to that is 
> the point of making noise, to generate conversation about avenues and 
> solutions. I could have kept my fingers aiming at LP, Gerrit and IRC in an 
> attempt to bury my head in the sand. We’re way past the point of denial, 
> perhaps too far, but as long as the results of the User Survey show Chef, 
> there are still users to support, for now. Operators and deployers will be 
> looking to the source of truth, wherever that is, and right now that source 
> of truth is OpenStack.
>
>>
>> There is one question I wanted to ask you in terms of community. We
>> maintain in OpenStack a number of efforts that bridge two communities,
>> and where the project could set up its infrastructure / governance in
>> one or the other. In the case of OpenStackChef, you could have set up
>> shop on the Chef community side, rather than on the OpenStack community
>> side. Would you say that living on the OpenStack community side helped
>> you or hurt you ? Did you get enough help / visibility to balance the
>> constraints ? Do you think you would have been more, less or equally
>> successful if you had set up shop more on the Chef community side ?
>>
>
> We set up under Stackforge, later OpenStack, because the cookbooks evolved 
> alongside OpenStack, as far back as 2011, before my time in the cookbooks. 
> The earliest commits on the now EOL Grizzly branch were quite enlightening, 
> if only Stackalytics had the visuals. Maybe I’m biased, but that’s worth 
> something.
>
> You’re absolutely correct that we could have pushed more to set up the Chef 
> side of things, and in fact we made several concerted efforts to integrate 
> into the Chef community, up to and including having sponsored contributors, 
> even a PTL. When exploring the Chef side, we found that we faced as much or 
> more friction with the ecosystem, requiring more fundamental changes than we 
> could influence. Chef (the ecosystem) has many great things, but Chef doesn’t 
> OpenStack. Maybe that was the writing on the wall.
>
> I keep one foot in both Chef and OpenStack, to keep myself as informed as 
> time allows me. It’s clear that even Chef’s long-term cookbook 

Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-15 Thread Samuel Cassiba

> On Feb 15, 2017, at 02:07, Thierry Carrez  wrote:
> 
> Samuel Cassiba wrote:
>> [...]
>> *TL;DR* if you don't want to keep going -
>> OpenStack-Chef is not in a good place and is not sustainable.
>> [...]
> 
> Thanks for sharing, Sam.
> 

Thanks for taking the time to read and respond. This was as hard to write as it 
was to read. As time went on, it became apparent that this retrospective needed 
to exist. It was not written lightly, and does not aim to point fingers.

> I think that part of the reason for the situation is that we grew the
> number of options for deploying OpenStack. We originally only had Puppet
> and Chef, but now there is Ansible, Juju, and the various
> Kolla-consuming container-oriented approaches. There is a gravitational
> attraction effect at play (more users -> more contributors) which
> currently benefits Puppet, Ansible and Kolla, at the expense of
> less-popular community-driven efforts like OpenStackChef and
> OpenStackSalt. I expect this effect to continue. I have mixed feelings
> about it: on one hand it reduces available technical options, but on the
> other it allows us to focus and raise quality…

You have a very valid point. One need only look at the trends over the cycles 
in the User Survey to see this shift in most places. Ansible wins due to sheer 
simplicity for new deployments, but there are also real business decisions that 
go behind automation flavors at certain business sizes. This leaves them 
effectively married to whichever flavor chosen. That shift impacts Puppet’s 
overall user base, as well, though they had and still have the luxury of 
maintaining sponsored support at higher numbers.

Chef’s sponsored support has numbered far fewer. It casts an extremely negative 
image on OpenStack when someone looks for help at odd hours, or asks something 
somewhere that none of us have time to track. The answer to that is the point 
of making noise, to generate conversation about avenues and solutions. I could 
have kept my fingers aiming at LP, Gerrit and IRC in an attempt to bury my head 
in the sand. We’re way past the point of denial, perhaps too far, but as long 
as the results of the User Survey show Chef, there are still users to support, 
for now. Operators and deployers will be looking to the source of truth, 
wherever that is, and right now that source of truth is OpenStack.

> 
> There is one question I wanted to ask you in terms of community. We
> maintain in OpenStack a number of efforts that bridge two communities,
> and where the project could set up its infrastructure / governance in
> one or the other. In the case of OpenStackChef, you could have set up
> shop on the Chef community side, rather than on the OpenStack community
> side. Would you say that living on the OpenStack community side helped
> you or hurt you ? Did you get enough help / visibility to balance the
> constraints ? Do you think you would have been more, less or equally
> successful if you had set up shop more on the Chef community side ?
> 

We set up under Stackforge, later OpenStack, because the cookbooks evolved 
alongside OpenStack, as far back as 2011, before my time in the cookbooks. The 
earliest commits on the now EOL Grizzly branch were quite enlightening, if only 
Stackalytics had the visuals. Maybe I’m biased, but that’s worth something.

You’re absolutely correct that we could have pushed more to set up the Chef 
side of things, and in fact we made several concerted efforts to integrate into 
the Chef community, up to and including having sponsored contributors, even a 
PTL. When exploring the Chef side, we found that we faced as much or more 
friction with the ecosystem, requiring more fundamental changes than we could 
influence. Chef (the ecosystem) has many great things, but Chef doesn’t 
OpenStack. Maybe that was the writing on the wall.

I keep one foot in both Chef and OpenStack, to keep myself as informed as time 
allows me. It’s clear that even Chef’s long-term cookbook support community is 
ill equipped to handle OpenStack. The problem? We’re too complex and too far 
integrated, and none of them know OpenStack. Where does that leave us?

--
Best,

Samuel Cassiba

> --
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-15 Thread Thierry Carrez
Samuel Cassiba wrote:
> [...]
> *TL;DR* if you don't want to keep going -
> OpenStack-Chef is not in a good place and is not sustainable.
> [...]

Thanks for sharing, Sam.

I think that part of the reason for the situation is that we grew the
number of options for deploying OpenStack. We originally only had Puppet
and Chef, but now there is Ansible, Juju, and the various
Kolla-consuming container-oriented approaches. There is a gravitational
attraction effect at play (more users -> more contributors) which
currently benefits Puppet, Ansible and Kolla, at the expense of
less-popular community-driven efforts like OpenStackChef and
OpenStackSalt. I expect this effect to continue. I have mixed feelings
about it: on one hand it reduces available technical options, but on the
other it allows us to focus and raise quality...

There is one question I wanted to ask you in terms of community. We
maintain in OpenStack a number of efforts that bridge two communities,
and where the project could set up its infrastructure / governance in
one or the other. In the case of OpenStackChef, you could have set up
shop on the Chef community side, rather than on the OpenStack community
side. Would you say that living on the OpenStack community side helped
you or hurt you ? Did you get enough help / visibility to balance the
constraints ? Do you think you would have been more, less or equally
successful if you had set up shop more on the Chef community side ?

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-14 Thread Samuel Cassiba
The HTML version is here:
https://s.cassiba.com/2017/02/14/making-the-kitchen-great-again-a-retrospective-on-openstack-chef
 


This was influenced by Graham Hayes' State of the Project for Designate:
http://graham.hayes.ie/posts/openstack-designate-where-we-are/ 


I have been asked recently "what is going on with the OpenStack-Chef project?",
"how is the state of the cookbooks?", and "hey sc, how are those integration
tests coming?". Having been the PTL for the Newton and Ocata cycles, yet
having not shipped a release, is the unthinkable, and deserves at least a
sentence or two.

It goes without saying, this is disheartening and depressing to me and
everybody that has devoted their time to making the cookbooks a solid
and viable method for deploying OpenStack. OpenStack-Chef is among the
oldest[1] and most mature solutions for deploying OpenStack, though it is
not the most feature-rich.


*TL;DR* if you don't want to keep going -
OpenStack-Chef is not in a good place and is not sustainable.


OpenStack-Chef has always been a small project with a big responsibility.
The Chef approach to OpenStack historically has required a level of
investment within the Chef ecosystem, which is a hard enough sell when you
started out with Puppet or Ansible. Despite the unicorns and rainbows of
being Chef cookbooks, OpenStack-Chef always asserted itself as an OpenStack
project first, up to and including joining the Big Tent, whatever it takes.
To beat that drum, we are OpenStack.

There is no *cool* factor from deploying and managing OpenStack using Chef,
unless you've been running Chef, because insert Xzibit meme here and jokes
about turtles. Unless you break something with automation, then it's
applause or facepalm. Usually both. At the same time.

As with any kitchen, it must be stocked and well maintained, and
OpenStack-Chef is no exception. Starting out, there was a vibrant community
producing organic, free-range code. Automation is invisible, assumed to be
there in the background. Once it's in place, it isn't touched again unless
it breaks. Upgrades in complex deployments can be fraught with error, even
in an automated fashion.

As has been seen in previous surveys[2], once an OpenStack release has been
chosen by an operator, some tend not to upgrade for the next cycle or three,
to let the immediate bugs get worked out. Though there are now multinode and upgrade
scenarios supported with the Puppet OpenStack and TripleO projects, they do
not use Chef, so Chef deployers do not directly benefit from any of this
testing.

Being a deployment project, we are responsible for not one aspect of
the OpenStack project but as many as can be reasonably supported.

We were very fortunate in the beginning, having support from public cloud
providers, as well as large private cloud providers. Stackalytics shows a
vibrant history, a veritable who's-who of OpenStack contributors, too many to
name. They've all moved on, working on other things.

As a previous PTL for the project once joked, the Chef approach to OpenStack
was the "other deployment tool that nobody uses". As time has gone by, that has
become more of a true statement.

There are a few of us still cooking away, creating new recipes and cookbooks.
The pilot lights are still lit and there's usually something simmering away on
the back burner, but there is no shouting of orders, and not every dish gets
tasted. We think there might be rats, too, but we’re too shorthanded to
maintain the traps.

We have yet to see many (meaningful) contributions from the community, however.
We have some amazing deployers that file bugs, and if they can, push up a patch.
It delights me when someone other than a core weighs in on a review. They are
highly appreciated and incredibly valuable, but they are very tactical
contributions. A project cannot live on such contributions.

October 2015

  https://s.cassiba.com/images/oct-2015-deployment-decisions.png 


Where does that leave OpenStack-Chef? Let's take a look at the numbers:

+----------+---------+
| Cycle    | Commits |
+----------+---------+
| Havana   |     557 |
+----------+---------+
| Icehouse |     692 |
+----------+---------+
| Juno     |     424 |
+----------+---------+
| Kilo     |     474 |
+----------+---------+
| Liberty  |     259 |
+----------+---------+
| Mitaka   |      85 |
+----------+---------+
| Newton   |     112 |
+----------+---------+
| Ocata    |      78 |
+----------+---------+

As of the time of this writing, Newton has not yet branched. Yes, you read
correctly. This means the Ocata cycle has gone to ensuring that Newton *just
funct