Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-19 Thread Fox, Kevin M
The primary issue, I think, is that the Nova folks think there is too much in 
Nova already.

So there are probably more features that could be added to make it more in line 
with vCenter, and more features to make it function more like AWS. And at 
this point, neither is likely to be easy to get in.

Until Nova changes this stance, they are kind of forcing an either/or (or 
neither), as Nova's position in the OpenStack community currently drives 
decisions in most of the other OpenStack projects.

I'm not laying blame on anyone. They have a hard job to do and not enough 
people to do it. That forces less than ideal solutions.

Not really sure how to resolve this.

Deciding "we will support both" is a good first step, but there are other big 
problems like this that need solving before it can be more than words on a page.

Thanks,
Kevin


From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, July 19, 2018 5:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Zane Bitter wrote:
> [...]
>> And I'm not convinced that's an either/or choice...
>
> I said specifically that it's an either/or/and choice.

I was speaking more about the "we need to pick between two approaches,
let's document them" that the technical vision exercise started as.
Basically I mean I'm missing clear examples of where pursuing AWS would
mean breaking vCenter.

> So it's not a binary choice but it's very much a ternary choice IMHO.
> The middle ground, where each project - or even each individual
> contributor within a project - picks an option independently and
> proceeds on the implicit assumption that everyone else chose the same
> option (although - spoiler alert - they didn't)... that's not a good
> place to be.

Right, so I think I'm leaning toward an "and" choice.

Basically OpenStack wants to be an AWS, but ended up being used a lot as
a vCenter (for multiple reasons, including the limited success of
US-based public cloud offerings in 2011-2016). IMHO we should continue
to target an AWS, while doing our best to not break those who use it as
a vCenter. Would explicitly acknowledging that (we still want to do an
AWS, but we need to care about our vCenter users) get us the alignment
you seek ?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-19 Thread Thierry Carrez

Zane Bitter wrote:

[...]

And I'm not convinced that's an either/or choice...


I said specifically that it's an either/or/and choice.


I was speaking more about the "we need to pick between two approaches, 
let's document them" that the technical vision exercise started as.
Basically I mean I'm missing clear examples of where pursuing AWS would 
mean breaking vCenter.


So it's not a binary choice but it's very much a ternary choice IMHO. 
The middle ground, where each project - or even each individual 
contributor within a project - picks an option independently and 
proceeds on the implicit assumption that everyone else chose the same 
option (although - spoiler alert - they didn't)... that's not a good 
place to be.


Right, so I think I'm leaning toward an "and" choice.

Basically OpenStack wants to be an AWS, but ended up being used a lot as 
a vCenter (for multiple reasons, including the limited success of 
US-based public cloud offerings in 2011-2016). IMHO we should continue 
to target an AWS, while doing our best to not break those who use it as 
a vCenter. Would explicitly acknowledging that (we still want to do an 
AWS, but we need to care about our vCenter users) get us the alignment 
you seek ?


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-17 Thread Zane Bitter

On 17/07/18 10:44, Thierry Carrez wrote:

Finally found the time to properly read this...


For anybody else who found the wall of text challenging, I distilled the 
longest part into a blog post:


https://www.zerobanana.com/archive/2018/07/17#openstack-layer-model-limitations


Zane Bitter wrote:

[...]
We chose to add features to Nova to compete with vCenter/oVirt, and 
not to add features that would have enabled OpenStack as a whole to 
compete with more than just the compute provisioning subset of 
EC2/Azure/GCP.


Could you give an example of an EC2 action that would be beyond the 
"compute provisioning subset" that you think we should have built into 
Nova ?


Automatic provision/rotation of application credentials.
Reliable, user-facing event notifications.
Collection of usage data suitable for autoscaling, billing, and whatever 
it is that Watcher does.
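
For a sense of scale on the first item: Keystone's application credentials 
cover the manual half of it today, and the missing piece is the automatic, 
per-instance provisioning and rotation. A rough sketch with the openstack 
CLI (names illustrative):

# create a credential an application can use instead of a user password
openstack application credential create my-app

# the returned secret then has to be copied into the instance by hand
# (user-data, config management, ...); nothing injects or rotates it for you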


Meanwhile, the other projects in OpenStack were working on building 
the other parts of an AWS/Azure/GCP competitor. And our vague 
one-sentence mission statement allowed us all to maintain the delusion 
that we were all working on the same thing and pulling in the same 
direction, when in truth we haven't been at all.


Do you think that organizing (tying) our APIs along [micro]services, 
rather than building a sanely-organized user API on top of a 
sanely-organized set of microservices, played a role in that divide ?


TBH, not really. If I were making a list of contributing factors I would 
probably put 'path dependence' at #1, #2 and #3.


At the start of this discussion, Jay posted on IRC a list of things that 
he thought shouldn't have been in the Nova API[1]:


- flavors
- shelve/unshelve
- instance groups
- boot from volume where nova creates the volume during boot
- create me a network on boot
- num_instances > 1 when launching
- evacuate
- host-evacuate-live
- resize where the user 'confirms' the operation
- force/ignore host
- security groups in the compute API
- force delete server
- restore soft deleted server
- lock server
- create backup

Some of those are trivially composable in higher-level services (e.g. 
boot from volume where nova creates the volume, get me a network, 
security groups). I agree with Jay that in retrospect it would have been 
cleaner to delegate those to some higher level than the Nova API (or, 
equivalently, for some lower-level API to exist within what is now 
Nova). And maybe if we'd had a top-level API like that we'd have been 
more aware of the ways that the lower-level ones lacked legibility for 
orchestration tools (oaktree is effectively an example of a top-level 
API like this, I'm sure Monty can give us a list of complaints ;)
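
Just to make "trivially composable" concrete, here's roughly what a 
higher-level service (or even a thick client) could have done on the user's 
behalf instead of Nova doing it internally (sketch only, names illustrative):

# "boot from volume where nova creates the volume" as two lower-level calls
openstack volume create --image cirros --size 10 boot-vol
openstack server create --volume boot-vol --flavor m1.small \
    --network private my-server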


But others on the list involve operations at a low level that don't 
appear to me to be composable out of simpler operations. (Maybe Jay has 
a shorter list of low-level APIs that could be combined to implement all 
of these, I don't know.) Once we decided to add those features, it was 
inevitable that they would reach right the way down through the stack to 
the lowest level.


There's nothing _organisational_ stopping Nova from creating an internal 
API (it need not even be a ReST API) for the 'plumbing' parts, with a 
separate layer that does orchestration-y stuff. That they're not doing 
so suggests to me that they don't think this is the silver bullet for 
managing complexity.


What would have been a silver bullet is saying 'no' to a bunch of those 
features, preferably starting with 'restore soft deleted server'(!!) and 
shelve/unshelve(?!). When AWS got feature requests like that they didn't 
say 'we'll have to add that in a higher-level API', they said 'if your 
application needs that then cloud is not for you'. We were never 
prepared to say that.


[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33


We can decide that we want to be one, or the other, or both. But if we 
don't all decide together then a lot of us are going to continue 
wasting our time working at cross-purposes.


If you are saying that we should choose between being vCenter or AWS, I 
would definitely say the latter.


Agreed.

But I'm still not sure I see this issue 
in such a binary manner.


I don't know that it's still a viable option to say 'AWS' now. Given our 
installed base of users and our commitment to not breaking them, our 
practical choices may well be between 'vCenter' or 'both'.


It's painful because had we chosen 'AWS' at the beginning then we could 
have avoided the complexity hit of many of those features listed above, 
and spent our complexity budget on cloud features instead. Now we are 
locked in to supporting that legacy complexity forever, and it has 
reportedly maxed out our complexity budget to the point where people are 
reluctant to implement any cloud features, and unable to refactor to 
make them easier.


Astute observers will note that this is a *textbook* case of the 
Innovator's Dilemma.


Imagine 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-17 Thread Fox, Kevin M
Inlining with KF> 

From: Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, July 17, 2018 7:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Finally found the time to properly read this...

Zane Bitter wrote:
> [...]
> We chose to add features to Nova to compete with vCenter/oVirt, and not
> to add features that would have enabled OpenStack as a whole to compete
> with more than just the compute provisioning subset of EC2/Azure/GCP.

Could you give an example of an EC2 action that would be beyond the
"compute provisioning subset" that you think we should have built into
Nova ?

KF> How about this one... 
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
 :/
KF> IMO, its lack really crippled the use case. I've been harping on this one 
for over 4 years now...
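
KF> For anyone who hasn't used it, the AWS-side flow looks roughly like this 
(standard aws CLI; names and the trust policy file are illustrative):

# create a role that EC2 instances are allowed to assume
aws iam create-role --role-name app-role \
    --assume-role-policy-document file://ec2-trust.json

# wrap it in an instance profile and attach it at boot; the instance then
# gets short-lived, auto-rotated credentials from the metadata service
aws iam create-instance-profile --instance-profile-name app-profile
aws iam add-role-to-instance-profile --instance-profile-name app-profile \
    --role-name app-role
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro \
    --iam-instance-profile Name=app-profile

KF> It's that last step that has no real equivalent on the OpenStack side.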

> Meanwhile, the other projects in OpenStack were working on building the
> other parts of an AWS/Azure/GCP competitor. And our vague one-sentence
> mission statement allowed us all to maintain the delusion that we were
> all working on the same thing and pulling in the same direction, when in
> truth we haven't been at all.

Do you think that organizing (tying) our APIs along [micro]services,
rather than building a sanely-organized user API on top of a
sanely-organized set of microservices, played a role in that divide ?

KF> Slightly off question, I think. A combination of microservice APIs + no API 
team to look at the APIs as a whole allowed use cases to slip by.
KF> Microservice APIs might have been OK with overall shepherds. Though maybe 
that is what you were implying with 'sanely'?

> We can decide that we want to be one, or the other, or both. But if we
> don't all decide together then a lot of us are going to continue wasting
> our time working at cross-purposes.

If you are saying that we should choose between being vCenter or AWS, I
would definitely say the latter. But I'm still not sure I see this issue
in such a binary manner.

KF> No, he said one, the other, or both. But the lack of a decision allowed some 
teams to prioritize one without realizing its effects on others.

KF> There are multiple vCenter replacements in the open source world. For example, 
oVirt. It's already way better at it than Nova.
KF> There is no replacement for AWS in the open source world. The hope was 
OpenStack would be that, but others in the community did not agree with that 
vision.
KF> Now that the community has changed drastically, what is the feeling now? We 
must decide.
KF> Kubernetes has provided a solid base for doing cloudy things. Which is 
great. But that organization does not care to replace other AWS/Azure/etc. 
services, because there are companies interested in selling k8s on top of 
AWS/Azure/etc. and integrating with the other services they already provide.
KF> So, there is still an opportunity in the open source community for someone 
to write an open source AWS alternative. VMs are just a very small part of it.

KF> Is that OpenStack, or some other project?

Imagine if (as suggested above) we refactored the compute node and gave 
it a user API, would that be one, the other, both ? Or just a sane
addition to improve what OpenStack really is today: a set of open
infrastructure components providing different services with each their
API, with slight gaps and overlaps between them ?

Personally, I'm not very interested in discussing what OpenStack could
have been if we started building it today. I'm much more interested in
discussing what to add or change in order to make it usable for more use
cases while continuing to serve the needs of our existing users. And I'm
not convinced that's an either/or choice...

KF> Sometimes it is time to hit the reset button, because either:
 a> you now know something really important that you didn't when you built it
 b> the world changed and you can no longer keep going on the path you were on
 c> the technical debt has grown so large that it is cheaper to start again

KF> OpenStack's current architectural implementation really feels 1.0-ish to me, 
and all of those reasons are relevant.
KF> I'm not saying we should just blindly hit the reset button, but I think it 
should be discussed/evaluated. Leaving it alone may have too much of a 
dragging effect on contribution.

KF> I'm also not saying we leave existing users without a migration path 
either. Maybe an OpenStack 2.0 with migration tools would be an option.

KF> OpenStack's architecture is really hamstringing it at this point. If it 
wants to make progress at chipping away at AWS, it can't be trying to build on 
top of the very narrow commons OpenStack provides at present and the boilerplate 
convention of: 1, start new project, 2, create SQL database, 3, create 
rabbit queues, 5, create API service, 6, create scheduler service, 7, create 
a

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-17 Thread Thierry Carrez

Finally found the time to properly read this...

Zane Bitter wrote:

[...]
We chose to add features to Nova to compete with vCenter/oVirt, and not 
to add features that would have enabled OpenStack as a whole to compete 
with more than just the compute provisioning subset of EC2/Azure/GCP.


Could you give an example of an EC2 action that would be beyond the 
"compute provisioning subset" that you think we should have built into 
Nova ?


Meanwhile, the other projects in OpenStack were working on building the 
other parts of an AWS/Azure/GCP competitor. And our vague one-sentence 
mission statement allowed us all to maintain the delusion that we were 
all working on the same thing and pulling in the same direction, when in 
truth we haven't been at all.


Do you think that organizing (tying) our APIs along [micro]services, 
rather than building a sanely-organized user API on top of a 
sanely-organized set of microservices, played a role in that divide ?


We can decide that we want to be one, or the other, or both. But if we 
don't all decide together then a lot of us are going to continue wasting 
our time working at cross-purposes.


If you are saying that we should choose between being vCenter or AWS, I 
would definitely say the latter. But I'm still not sure I see this issue 
in such a binary manner.


Imagine if (as suggested above) we refactored the compute node and gave 
it a user API, would that be one, the other, both ? Or just a sane 
addition to improve what OpenStack really is today: a set of open 
infrastructure components providing different services with each their 
API, with slight gaps and overlaps between them ?


Personally, I'm not very interested in discussing what OpenStack could 
have been if we started building it today. I'm much more interested in 
discussing what to add or change in order to make it usable for more use 
cases while continuing to serve the needs of our existing users. And I'm 
not convinced that's an either/or choice...


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Zane Bitter

I'm not Kevin but I think I can clarify some of these.

On 03/07/18 16:04, Jay Pipes wrote:
On 07/03/2018 02:37 PM, Fox, Kevin M wrote: 
 So these days containers are out-clouding VMs at this use case. So, does Nova continue to be the cloudy VM or does it go for the more production VM use case like oVirt and VMware?


"production VM" use case like oVirt or VMWare? I don't know what that means. You mean 
"a GUI-based VM management system"?


Read 'pets'.

While some people only ever consider running Kubernetes on top of a 
cloud, some of us realize that maintaining both a cloud and a Kubernetes is 
unnecessary, and that we can greatly simplify things by running k8s on 
bare metal. This does then make it a competitor to Nova as a platform 
for running workloads on.


What percentage of Kubernetes users deploy on baremetal (and continue to 
deploy on baremetal in production as opposed to just toying around with 
it)?


At Red Hat Summit there was a demo of deploying OpenShift alongside (not 
on top of) OpenStack on bare metal using Director (downstream of TripleO 
- so managed by Ironic in an OpenStack undercloud).


I don't know if people using Kubernetes directly on baremetal in 
production is widespread right now, but it's clear to me that it will be 
just around the corner.


As k8s gains more multitenancy features, this trend will continue to 
grow I think. OpenStack needs to be ready for when that becomes a thing.


OpenStack is already multi-tenant, being designed as such from day one. 
With the exception of Ironic, which uses Nova to enable multi-tenancy.


What specifically are you referring to "OpenStack needs to be ready"? 
Also, what specific parts of OpenStack are you referring to there?


I believe the point was:

* OpenStack supports multitenancy.
* Kubernetes does not support multitenancy.
* Applications that require multitenancy currently require separate 
per-tenant deployments of Kubernetes; deploying on top of a cloud (such 
as OpenStack) makes this easier, so there is demand for OpenStack from 
people who need multitenancy even if they are mainly interacting with 
Kubernetes. Effectively OpenStack is the multitenancy layer for k8s in a 
lot of deployments.

* One day Kubernetes will support multitenancy.
* Then what?


Think of OpenStack like a game console. The moment you make a component 
optional and make it take extra effort to obtain, few software developers 
target it and rarely does anyone buy the add-ons because there isn't 
software for them. Right now, just about everything in OpenStack is an add-on. 
That's a problem.


I don't have any game consoles nor do I develop software for them,


Me neither, but much like OpenStack it's a two-sided marketplace 
(developers and users in the console case, operators and end-users in 
the OpenStack case), where you succeed or fail based on how much value 
you can get flowing between the two sides. There's a positive feedback 
loop between supply on one side and demand on the other, so like all 
positive feedback loops it's unstable and you have to find some way to 
bootstrap it in the right direction, which is hard. One way to make it 
much, much harder is to segment your market in such a way that you give 
yourself a second feedback loop that you also have to bootstrap, that 
depends on the first one, and you only get to use a subset of your 
existing market participants to do it.


As an example from my other reply, we're probably going to try to use 
Barbican to help integrate Heat with external tools like k8s and 
Ansible, but for that to have any impact we'll have to convince users 
that they want to do this badly enough that they'll convince their 
operators to deploy Barbican - and we'll likely have to do so before 
they've even tried it. That's even after we've already convinced them to 
use OpenStack and deploy Heat. If Barbican (and Heat) were available as 
part of every OpenStack deployment, then it'd just be a matter of 
convincing people to use the feature, which would already be available 
and which they could try out at any time. That's a much lower bar.
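
The user-facing part is small, which is kind of the point; roughly (with the 
barbican plugin for the openstack CLI, values illustrative):

# stash a token that an external tool (k8s, Ansible, ...) will need later
openstack secret store --name k8s-join-token --payload "abcdef.0123456789abcdef"

# a tool holding appropriate credentials can fetch it back by its href
openstack secret get --payload <secret-href>

But none of that matters if the operator never deployed Barbican in the first 
place, which is the bootstrapping problem above.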


I'm not defending "make it a monolith" as a solution, but Kevin is 
identifying a real problem.


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Jay Pipes

On 07/06/2018 12:58 PM, Zane Bitter wrote:

On 02/07/18 19:13, Jay Pipes wrote:

Nova's primary competition is:

* Stand-alone Ironic
* oVirt and stand-alone virsh callers
* Parts of VMWare vCenter [3]
* MaaS in some respects


Do you see KubeVirt or Kata or Virtlet or RancherVM ending up on this 
list at any point? Because a lot of people* do.

>

* https://news.ycombinator.com/item?id=17013779


Please don't lose credibility by saying "a lot of people" see things 
like RancherVM as competitors to Nova [1] by pointing to a HackerNews 
[2] thread where two people discuss why RancherVM exists and where one 
of those people is Darren Shepherd, a co-founder of Rancher, previously 
at Citrix and GoDaddy with a long-known distaste for all things OpenStack.


I don't think that thread is particularly unbiased or helpful.

I'll respond to the rest of your (excellent) points a little later...

Best,
-jay

[1] Nova isn't actually mentioned there. "OpenStack" is.

[2] I've often wondered who has time to actually respond to anything on 
HackerNews. Same for when Slashdot was a thing. In fact, now that I 
think about it, I spend entirely too much time worrying about all of 
this stuff... ;)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Zane Bitter

On 02/07/18 19:13, Jay Pipes wrote:
Also note that when I've said that *OpenStack* should have a smaller 
mission and scope, that doesn't mean that higher-level services 
aren't necessary or wanted.


Thank you for saying this, and could I please ask you to repeat this 
disclaimer whenever you talk about a smaller scope for OpenStack.


Yes. I shall shout it from the highest mountains. [1]


Thanks. Appreciate it :)

[1] I live in Florida, though, which has no mountains. But, when I 
visit, say, North Carolina, I shall certainly shout it from their 
mountains.


That's where I live, so I'll keep an eye out for you if I hear shouting.

Because for those of us working on higher-level services it feels like 
there has been a non-stop chorus (both inside and outside the project) 
of people wanting to redefine OpenStack as something that doesn't 
include us.


I've said in the past (on Twitter, can't find the link right now, but 
it's out there somewhere) something to the effect of "at some point, 
someone just needs to come out and say that OpenStack is, at its core, 
Nova, Neutron, Keystone, Glance and Cinder".


https://twitter.com/jaypipes/status/875377520224460800 for anyone who 
was curious.


Interestingly, that and my equally off-the-cuff reply 
https://twitter.com/zerobanana/status/875559517731381249 are actually 
pretty close to the minimal descriptions of the two broad camps we were 
talking about in the technical vision etherpad. (Noting for the record 
that cdent disputes that views can be distilled into two camps.)


Perhaps this is what you were recollecting. I would use a different 
phrase nowadays to describe what I was thinking with the above.


I don't think I was recalling anything in particular that *you* had 
said. Complaining about the non-core projects (presumably on the logic 
that if we kicked them out of OpenStack all their developers would go 
to work on radically simplifying the remaining projects instead?) was 
a widespread and popular pastime for at least the four years from 2013 to 2016.


I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are 
a definitive lower level of an OpenStack deployment. They represent a 
set of required integrated services that supply the most basic 
infrastructure for datacenter resource management when deploying 
OpenStack."


Note the difference in wording. Instead of saying "OpenStack is X", I'm 
saying "These particular services represent a specific layer of an 
OpenStack deployment".


OK great. So this is wrong :) and I will attempt to explain why I think 
that in a second. But first I want to acknowledge what is attractive 
about this viewpoint (even to me). This is a genuinely useful 
observation that leads to a real insight.


The insight, I think, is the same one we all just agreed on in another 
part of the thread: OpenStack is the only open source project 
concentrating on the gap between a rack full of unconfigured equipment 
and somewhere that you could, say, install Kubernetes. We write the bit 
where the rubber meets the road, and if we don't get it done there's 
nobody else to do it! There's an almost infinite variety of different 
applications and they'll all need different parts of the higher layers, 
but ultimately they'll all need to be reified in a physical data center 
and when they do, we'll be there: that's the core of what we're building.


It's honestly only the tiniest of leaps from seeing that idea as 
attractive, useful, and genuinely insightful to seeing it as correct, 
and I don't really blame anybody who made that leap.


I'm going to gloss over the fact that we punted the actual process of 
setting up the data center to a bunch of what turned out to be 
vendor-specific installer projects that you suggest should be punted out 
of OpenStack altogether, because that isn't the biggest problem I have 
with this view.


Back in the '70s there was this idea about AI: even a 2 year old human 
can e.g. recognise images with a high degree of accuracy, but doing e.g. 
calculus is extremely hard in comparison and takes years of training. 
But computers can already do calculus! Ergo, we've solved the hardest 
part already and building the rest out of that will be trivial, AGI is 
just around the corner... (I believe I cribbed this explanation 
from an outdated memory of Marvin Minsky's 1982 paper "Why People Think 
Computers Can't" - specifically the section "Could a Computer Have 
Common Sense?" - so that's a better source if you actually want to learn 
something about AI.) The popularity of this idea arguably helped create 
the AI bubble, and the inevitable collision with the reality of its 
fundamental wrongness led to the AI Winter. Because in fact just because 
you can build logic out of many layers of heuristics (as human brains 
do), it absolutely does not follow that it's trivial to build other 
things that also require many layers of heuristics once you have some 
basic logic building blocks. 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-06 Thread Ben Nemec
Red Hat OpenStack is based on RDO.  It's not pretty far from it, it's 
very close.  It's basically productized RDO, and in the interest of 
everyone's sanity we try to keep the downstream patches to a minimum.


In general I would be careful trying to take the distro analogy too far 
though.  The release cycles of the Red Hat Linux distros are very 
different from that of the OpenStack distros.  RDO would be more akin to 
CentOS in terms of how closely related they are, but the relationship is 
inverted.  CentOS is taking the RHEL source (which is based on whatever 
the current Fedora release is when a new major RHEL version gets 
branched) and distributing packages based on it, while RHOS is taking 
the RDO bits and productizing them.  There's no point in having a 
CentOS-like distro that then repackages the RHOS source because you'd 
end up with essentially RDO again.  RDO and RHOS don't diverge the way 
Fedora and RHEL do after they are branched because they're on the same 
release cycle.


So essentially the flow with the Linux distros looks like:

Upstream->Fedora->RHEL->CentOS

Whereas the OpenStack distros are:

Upstream->RDO->RHOS

With RDO serving the purpose of both Fedora and CentOS.

As for TripleO, it's been integrated with RHOS/RDO since Kilo, and I 
believe it has been the recommended way to deploy in production since 
then as well.


-Ben

On 07/05/2018 03:17 PM, Fox, Kevin M wrote:
I use RDO in production. It's pretty far from Red Hat OpenStack, though 
it's been a while since I tried the TripleO part of RDO. Is it pretty 
well integrated now? Similar to Red Hat OpenStack? Or is it more Fedora-like 
than CentOS-like?


Thanks,
Kevin

*From:* Dmitry Tantsur [dtant...@redhat.com]
*Sent:* Thursday, July 05, 2018 11:17 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [tc] [all] TC Report 18-26




On Thu, Jul 5, 2018, 19:31 Fox, Kevin M <mailto:kevin@pnnl.gov> wrote:


We're pretty far into a tangent...

/me shrugs. I've done it. It can work.

Some things you're right about: deploying k8s is more work than deploying
ansible. But what I said depends on context. If your goal is to
deploy k8s/manage k8s then having to learn how to use k8s is not a
big ask. Adding a different tool such as ansible is an extra
cognitive dependency. Deploying k8s doesn't need a general solution
to deploying generic base OS's. Just enough OS to deploy K8s and
then deploy everything on top in containers. Deploying a seed k8s
with minikube is pretty trivial. I'm not suggesting a solution here
to provide generic provisioning to every use case in the datacenter.
But enough to get a k8s based cluster up and self hosted enough
where you could launch other provisioning/management tools in that
same cluster, if you need that. It provides a solid base for the
datacenter on which you can easily add the services you need for
dealing with everything.

All of the microservices I mentioned can be wrapped up in a single
helm chart and deployed with a single helm install command.

I don't have permission to release anything at the moment, so I
can't prove anything right now. So, take my advice with a grain of
salt. :)

Switching gears, you said why would users use lfs when they can use
a distro, so why use openstack without a distro. I'd say, today
unless you are paying a lot, there isn't really an equivalent distro
that isn't almost as much effort as lfs when you consider day2 ops.
To compare with Redhat again, we have a RHEL (redhat openstack), and
Rawhide (devstack) but no equivalent of CentOS. Though I think
TripleO has been making progress on this front...


It's RDO that you're looking for (the equivalent of CentOS). TripleO is an 
installer project, not a distribution.



Anyway. This thread is I think 2 tangents away from the original
topic now. If folks are interested in continuing this discussion,
let's open a new thread.

Thanks,
Kevin


From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Wednesday, July 04, 2018 4:24 AM
To: openstack-dev@lists.openstack.org
    Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
 > I don't dispute trivial, but a self hosting k8s on bare metal is
not incredibly hard. In fact, it is easier than you might think. k8s
is a platform for deploying/managing services. Guess what you need
to provision bare metal? Just a few microservices. A dhcp service.
dhcpd in a daemonset works well. some pxe infrastructure. pixiecore
with a simple htt

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Fox, Kevin M
Interesting. Thanks for the link. :)

There is a lot of stuff there, so not sure it covers the part I'm talking about 
without more review, but if it doesn't, it would be pretty easy to add by the 
looks of it.

Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Thursday, July 05, 2018 10:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 2018-07-05 17:30:23 + (+), Fox, Kevin M wrote:
[...]
> Deploying k8s doesn't need a general solution to deploying generic
> base OS's. Just enough OS to deploy K8s and then deploy everything
> on top in containers. Deploying a seed k8s with minikube is pretty
> trivial. I'm not suggesting a solution here to provide generic
> provisioning to every use case in the datacenter. But enough to
> get a k8s based cluster up and self hosted enough where you could
> launch other provisioning/management tools in that same cluster,
> if you need that. It provides a solid base for the datacenter on
> which you can easily add the services you need for dealing with
> everything.
>
> All of the microservices I mentioned can be wrapped up in a single
> helm chart and deployed with a single helm install command.
>
> I don't have permission to release anything at the moment, so I
> can't prove anything right now. So, take my advice with a grain of
> salt. :)
[...]

Anything like http://www.airshipit.org/ ?
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Fox, Kevin M
I use RDO in production. It's pretty far from Red Hat OpenStack, though it's been 
a while since I tried the TripleO part of RDO. Is it pretty well integrated 
now? Similar to Red Hat OpenStack? Or is it more Fedora-like than CentOS-like?

Thanks,
Kevin

From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Thursday, July 05, 2018 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26




On Thu, Jul 5, 2018, 19:31 Fox, Kevin M <mailto:kevin@pnnl.gov> wrote:
We're pretty far into a tangent...

/me shrugs. I've done it. It can work.

Some things you're right about: deploying k8s is more work than deploying ansible. But 
what I said depends on context. If your goal is to deploy k8s/manage k8s then 
having to learn how to use k8s is not a big ask. Adding a different tool such 
as ansible is an extra cognitive dependency. Deploying k8s doesn't need a 
general solution to deploying generic base OS's. Just enough OS to deploy K8s 
and then deploy everything on top in containers. Deploying a seed k8s with 
minikube is pretty trivial. I'm not suggesting a solution here to provide 
generic provisioning to every use case in the datacenter. But enough to get a 
k8s based cluster up and self hosted enough where you could launch other 
provisioning/management tools in that same cluster, if you need that. It 
provides a solid base for the datacenter on which you can easily add the 
services you need for dealing with everything.

All of the microservices I mentioned can be wrapped up in a single helm chart 
and deployed with a single helm install command.

I don't have permission to release anything at the moment, so I can't prove 
anything right now. So, take my advice with a grain of salt. :)

Switching gears, you said why would users use lfs when they can use a distro, 
so why use openstack without a distro. I'd say, today unless you are paying a 
lot, there isn't really an equivalent distro that isn't almost as much effort 
as lfs when you consider day2 ops. To compare with Redhat again, we have a RHEL 
(redhat openstack), and Rawhide (devstack) but no equivalent of CentOS. Though 
I think TripleO has been making progress on this front...

It's RDO that you're looking for (the equivalent of CentOS). TripleO is an 
installer project, not a distribution.


Anyway. This thread is I think 2 tangents away from the original topic now. If 
folks are interested in continuing this discussion, let's open a new thread.

Thanks,
Kevin


From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Wednesday, July 04, 2018 4:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> I don't dispute trivial, but a self hosting k8s on bare metal is not 
> incredibly hard. In fact, it is easier than you might think. k8s is a 
> platform for deploying/managing services. Guess what you need to provision 
> bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset 
> works well. some pxe infrastructure. pixiecore with a simple http backend 
> works pretty well in practice. a service to provide installation 
> instructions. nginx server handing out kickstart files for example. and a 
> place to fetch rpms from in case you don't have internet access or want to 
> ensure uniformity. nginx server with a mirror yum repo. It's even possible to 
> seed it on minikube and sluff it off to its own cluster.
>
> The main hard part about it is currently no one is shipping a reference 
> implementation of the above. That may change...
>
> It is certainly much much easier than deploying enough OpenStack to get a 
> self hosting ironic working.

Side note: no, it's not. What you describe is similarly hard to installing
standalone ironic from scratch and much harder than using bifrost for
everything. Especially when you try to do it in production. Especially with
unusual operating requirements ("no TFTP servers on my network").

Also, sorry, I cannot resist:
"Guess what you need to orchestrate containers? Just a few things. A container
runtime. Docker works well. some remote execution tooling. ansible works pretty
well in practice. It is certainly much much easier than deploying enough k8s to
get a self hosting containers orchestration working."

Such oversimplifications won't bring us anywhere. Sometimes things are hard
because they ARE hard. Where are people complaining that installing a full
GNU/Linux distribution from upstream tarballs is hard? How many operators here
use LFS as their distro? If we are okay with using a distro for GNU/Linux, why
using a distro for OpenStack causes so much contention?

>

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Dmitry Tantsur
On Thu, Jul 5, 2018, 19:31 Fox, Kevin M  wrote:

> We're pretty far into a tangent...
>
> /me shrugs. I've done it. It can work.
>
> Some things you're right about: deploying k8s is more work than deploying ansible.
> But what I said depends on context. If your goal is to deploy k8s/manage
> k8s then having to learn how to use k8s is not a big ask. Adding a
> different tool such as ansible is an extra cognitive dependency. Deploying
> k8s doesn't need a general solution to deploying generic base OS's. Just
> enough OS to deploy K8s and then deploy everything on top in containers.
> Deploying a seed k8s with minikube is pretty trivial. I'm not suggesting a
> solution here to provide generic provisioning to every use case in the
> datacenter. But enough to get a k8s based cluster up and self hosted enough
> where you could launch other provisioning/management tools in that same
> cluster, if you need that. It provides a solid base for the datacenter on
> which you can easily add the services you need for dealing with everything.
>
> All of the microservices I mentioned can be wrapped up in a single helm
> chart and deployed with a single helm install command.
>
> I don't have permission to release anything at the moment, so I can't
> prove anything right now. So, take my advice with a grain of salt. :)
>
> Switching gears, you said why would users use lfs when they can use a
> distro, so why use openstack without a distro. I'd say, today unless you
> are paying a lot, there isn't really an equivalent distro that isn't almost
> as much effort as lfs when you consider day2 ops. To compare with Redhat
> again, we have a RHEL (redhat openstack), and Rawhide (devstack) but no
> equivalent of CentOS. Though I think TripleO has been making progress on
> this front...
>

It's RDO that you're looking for (the equivalent of CentOS). TripleO is an
installer project, not a distribution.


> Anyway. This thread is I think 2 tangents away from the original topic
> now. If folks are interested in continuing this discussion, let's open a new
> thread.
>
> Thanks,
> Kevin
>
> 
> From: Dmitry Tantsur [dtant...@redhat.com]
> Sent: Wednesday, July 04, 2018 4:24 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
>
> Tried hard to avoid this thread, but this message is so much wrong..
>
> On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> > I don't dispute trivial, but a self hosting k8s on bare metal is not
> incredibly hard. In fact, it is easier than you might think. k8s is a
> platform for deploying/managing services. Guess what you need to provision
> bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset
> works well. some pxe infrastructure. pixiecore with a simple http backend
> works pretty well in practice. a service to provide installation
> instructions. nginx server handing out kickstart files for example. and a
> place to fetch rpms from in case you don't have internet access or want to
> ensure uniformity. nginx server with a mirror yum repo. It's even possible
> to seed it on minikube and sluff it off to its own cluster.
> >
> > The main hard part about it is currently no one is shipping a reference
> implementation of the above. That may change...
> >
> > It is certainly much much easier than deploying enough OpenStack to get
> a self hosting ironic working.
>
> Side note: no, it's not. What you describe is similarly hard to installing
> standalone ironic from scratch and much harder than using bifrost for
> everything. Especially when you try to do it in production. Especially with
> unusual operating requirements ("no TFTP servers on my network").
>
> Also, sorry, I cannot resist:
> "Guess what you need to orchestrate containers? Just a few things. A
> container
> runtime. Docker works well. some remote execution tooling. ansible works
> pretty
> well in practice. It is certainly much much easier than deploying enough
> k8s to
> get a self hosting containers orchestration working."
>
> Such oversimplifications won't bring us anywhere. Sometimes things are hard
> because they ARE hard. Where are people complaining that installing a full
> GNU/Linux distribution from upstream tarballs is hard? How many operators
> here
> use LFS as their distro? If we are okay with using a distro for GNU/Linux,
> why
> using a distro for OpenStack causes so much contention?
>
> >
> > Thanks,
> > Kevin
> >
> > 
> > From: Jay Pipes [jaypi...@gmail.com]
> > Sent: Tuesday, July 03, 2018 10:06 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [o

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Jeremy Stanley
On 2018-07-05 17:30:23 + (+), Fox, Kevin M wrote:
[...]
> Deploying k8s doesn't need a general solution to deploying generic
> base OS's. Just enough OS to deploy K8s and then deploy everything
> on top in containers. Deploying a seed k8s with minikube is pretty
> trivial. I'm not suggesting a solution here to provide generic
> provisioning to every use case in the datacenter. But enough to
> get a k8s based cluster up and self hosted enough where you could
> launch other provisioning/management tools in that same cluster,
> if you need that. It provides a solid base for the datacenter on
> which you can easily add the services you need for dealing with
> everything.
> 
> All of the microservices I mentioned can be wrapped up in a single
> helm chart and deployed with a single helm install command.
> 
> I don't have permission to release anything at the moment, so I
> can't prove anything right now. So, take my advice with a grain of
> salt. :)
[...]

Anything like http://www.airshipit.org/ ?
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Fox, Kevin M
We're pretty far into a tangent...

/me shrugs. I've done it. It can work.

Some things you're right about: deploying k8s is more work than deploying ansible. But 
what I said depends on context. If your goal is to deploy k8s/manage k8s then 
having to learn how to use k8s is not a big ask. Adding a different tool such 
as ansible is an extra cognitive dependency. Deploying k8s doesn't need a 
general solution to deploying generic base OS's. Just enough OS to deploy K8s 
and then deploy everything on top in containers. Deploying a seed k8s with 
minikube is pretty trivial. I'm not suggesting a solution here to provide 
generic provisioning to every use case in the datacenter. But enough to get a 
k8s based cluster up and self hosted enough where you could launch other 
provisioning/management tools in that same cluster, if you need that. It 
provides a solid base for the datacenter on which you can easily add the 
services you need for dealing with everything.

All of the microservices I mentioned can be wrapped up in a single helm chart 
and deployed with a single helm install command.
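
In rough terms it is no more than this (helm 2.x style; the chart bundling 
dhcpd/pixiecore/nginx is the piece I can't publish yet, so treat the name as 
hypothetical):

minikube start                 # seed cluster on a laptop/VM
helm init --wait               # install tiller (helm 2.x)
helm install --name baremetal ./baremetal-provisioner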

I don't have permission to release anything at the moment, so I can't prove 
anything right now. So, take my advice with a grain of salt. :)

Switching gears, you said why would users use lfs when they can use a distro, 
so why use openstack without a distro. I'd say, today unless you are paying a 
lot, there isn't really an equivalent distro that isn't almost as much effort 
as lfs when you consider day2 ops. To compare with Redhat again, we have a RHEL 
(redhat openstack), and Rawhide (devstack) but no equivalent of CentOS. Though 
I think TripleO has been making progress on this front...

Anyway. This thread is I think 2 tangents away from the original topic now. If 
folks are interested in continuing this discussion, let's open a new thread.

Thanks,
Kevin


From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Wednesday, July 04, 2018 4:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> I don't dispute trivial, but a self hosting k8s on bare metal is not 
> incredibly hard. In fact, it is easier than you might think. k8s is a 
> platform for deploying/managing services. Guess what you need to provision 
> bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset 
> works well. some pxe infrastructure. pixiecore with a simple http backend 
> works pretty well in practice. a service to provide installation 
> instructions. nginx server handing out kickstart files for example. and a 
> place to fetch rpms from in case you don't have internet access or want to 
> ensure uniformity. nginx server with a mirror yum repo. It's even possible to 
> seed it on minikube and sluff it off to its own cluster.
>
> The main hard part about it is currently no one is shipping a reference 
> implementation of the above. That may change...
>
> It is certainly much much easier than deploying enough OpenStack to get a 
> self hosting ironic working.

Side note: no, it's not. What you describe is similarly hard to installing
standalone ironic from scratch and much harder than using bifrost for
everything. Especially when you try to do it in production. Especially with
unusual operating requirements ("no TFTP servers on my network").

Also, sorry, I cannot resist:
"Guess what you need to orchestrate containers? Just a few things. A container
runtime. Docker works well. some remote execution tooling. ansible works pretty
well in practice. It is certainly much much easier than deploying enough k8s to
get a self hosting containers orchestration working."

Such oversimplifications won't bring us anywhere. Sometimes things are hard
because they ARE hard. Where are people complaining that installing a full
GNU/Linux distribution from upstream tarballs is hard? How many operators here
use LFS as their distro? If we are okay with using a distro for GNU/Linux, why
using a distro for OpenStack causes so much contention?

>
> Thanks,
> Kevin
>
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: Tuesday, July 03, 2018 10:06 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
>
> On 07/02/2018 03:31 PM, Zane Bitter wrote:
>> On 28/06/18 15:09, Fox, Kevin M wrote:
>>>* made the barrier to testing/development as low as 'curl
>>> http://..minikube; minikube start' (this spurs adoption and
>>> contribution)
>>
>> That's not so different from devstack though.
>>
>>>* not having large silo's in deployment projects allowed better
>>> communication on common tooling.
>>>* Operat

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-04 Thread Dmitry Tantsur

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:

I don't dispute trivial, but a self hosting k8s on bare metal is not incredibly 
hard. In fact, it is easier than you might think. k8s is a platform for 
deploying/managing services. Guess what you need to provision bare metal? Just 
a few microservices. A dhcp service. dhcpd in a daemonset works well. some pxe 
infrastructure. pixiecore with a simple http backend works pretty well in 
practice. a service to provide installation instructions. nginx server handing 
out kickstart files for example. and a place to fetch rpms from in case you 
don't have internet access or want to ensure uniformity. nginx server with a 
mirror yum repo. It's even possible to seed it on minikube and sluff it off to 
its own cluster.
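
To give a flavour of the first piece, a dhcp service on every node is about 
this much YAML (sketch only; the dhcpd image and its ConfigMap are assumed, 
not something I can point at publicly):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dhcpd
spec:
  selector:
    matchLabels: {app: dhcpd}
  template:
    metadata:
      labels: {app: dhcpd}
    spec:
      hostNetwork: true              # answer DHCP broadcasts on the host NIC
      containers:
      - name: dhcpd
        image: example/dhcpd:latest  # hypothetical image
        volumeMounts:
        - {name: conf, mountPath: /etc/dhcp}
      volumes:
      - name: conf
        configMap: {name: dhcpd-conf}   # dhcpd.conf supplied separately
EOF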

The main hard part about it is currently no one is shipping a reference 
implementation of the above. That may change...

It is certainly much much easier than deploying enough OpenStack to get a self 
hosting ironic working.


Side note: no, it's not. What you describe is similarly hard to installing 
standalone ironic from scratch and much harder than using bifrost for 
everything. Especially when you try to do it in production. Especially with 
unusual operating requirements ("no TFTP servers on my network").


Also, sorry, I cannot resist:
"Guess what you need to orchestrate containers? Just a few things. A container 
runtime. Docker works well. some remote execution tooling. ansible works pretty 
well in practice. It is certainly much much easier than deploying enough k8s to 
get a self hosting containers orchestration working."


Such oversimplifications won't bring us anywhere. Sometimes things are hard 
because they ARE hard. Where are people complaining that installing a full 
GNU/Linux distribution from upstream tarballs is hard? How many operators here 
use LFS as their distro? If we are okay with using a distro for GNU/Linux, why 
using a distro for OpenStack causes so much contention?




Thanks,
Kevin


From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, July 03, 2018 10:06 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 07/02/2018 03:31 PM, Zane Bitter wrote:

On 28/06/18 15:09, Fox, Kevin M wrote:

   * made the barrier to testing/development as low as 'curl
http://..minikube; minikube start' (this spurs adoption and
contribution)


That's not so different from devstack though.


   * not having large silo's in deployment projects allowed better
communication on common tooling.
   * Operator focused architecture, not project based architecture.
This simplifies the deployment situation greatly.
   * try whenever possible to focus on just the commons and push vendor
specific needs to plugins so vendors can deal with vendor issues
directly and not corrupt the core.


I agree with all of those, but to be fair to OpenStack, you're leaving
out arguably the most important one:

  * Installation instructions start with "assume a working datacenter"

They have that luxury; we do not. (To be clear, they are 100% right to
take full advantage of that luxury. Although if there are still folks
who go around saying that it's a trivial problem and OpenStackers must
all be idiots for making it look so difficult, they should really stop
embarrassing themselves.)


This.

There is nothing trivial about the creation of a working datacenter --
never mind a *well-running* datacenter. Comparing Kubernetes to
OpenStack -- particularly OpenStack's lower levels -- is missing this 
fundamental point and ends up comparing apples to oranges.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Fox, Kevin M
Replying inline in outlook. Sorry. :( Prefixing with KF>

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Tuesday, July 03, 2018 1:04 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

I'll answer inline, so that it's easier to understand what part of your 
message I'm responding to.

On 07/03/2018 02:37 PM, Fox, Kevin M wrote:
> Yes/no on the vendor distro thing. They do provide a lot of options, but they 
> also provide a fully k8s tested/provided route too: kubeadm. I can take a Linux 
> distro of choice, curl down kubeadm, and get a working Kubernetes in literally 
> a couple of minutes.

How is this different from devstack?

With both approaches:

* Download and run a single script
* Any sort of networking outside of super basic setup requires manual 
intervention
* Not recommended for "production"
* Require workarounds when running as not-root

Is it that you prefer the single Go binary approach of kubeadm which 
hides much of the details that devstack was designed to output (to help 
teach people what's going on under the hood)?

KF> so... go to https://docs.openstack.org/devstack/latest/ and one of the 
first things you see is a bright red Warning box. Don't run it on your laptop. 
It also targets git master rather than production releases, so it is more 
targeted at developing on openstack itself rather than at developers developing 
their software to run in openstack. My common use case was developing stuff to 
run in, not developing openstack itself. minikube makes this case first class. 
Also, it requires a linux box to deploy it. Minikube works on macos and windows 
as well. Yeah, not really an easy thing to do, but it does it pretty well. I 
did a presentation on Kubernetes once, put up a slide on minikube, and 5 slides 
later, one of the physicists in the room said, btw, I have it working on my mac 
(personal laptop). Not trying to slam devstack. It really is a good piece of 
software, but it still has a ways to go to get to that point. And lastly, 
minikube's default bootstrapper these days is kubeadm. So the kubernetes you 
get to develop against is REALLY close to one you could deploy yourself at 
scale in vms or on bare metal. The tools/containers it uses are byte identical. 
They will behave the same. Devstack is very different from most production 
deployments.
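
KF> Concretely, the two quickstarts side by side (values/URLs from memory, so 
treat them as illustrative):

# devstack: clone git master, write a minimal local.conf, run stack.sh (Linux only)
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF
./stack.sh

# minikube: one released binary, linux/macos/windows, kubeadm bootstrapper underneath
minikube start
kubectl get nodes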

 >
  No compiling anything or building containers. That is what I mean when 
I say they have a product.

What does devstack compile?

By "compile" are you referring to downloading code from git 
repositories? Or are you referring to the fact that with kubeadm you are 
downloading a Go binary that hides the downloading and installation of 
all the other Kubernetes images for you [1]?

KF> The Go binary orchestrates a bit, but for the most part you get one system
package installed (or one statically linked binary): kubelet. From there,
you switch to using prebuilt containers for all the other services. Those
binaries have been through a build/test/release pipeline and are guaranteed
to be the same on all the nodes you install them on. It is easy to run a
deployment on your test cluster and ensure it works the same way on your
production system. You can do the same with, say, RPMs, but then you need to
build up plumbing to mirror your RPMs and plumbing to promote from testing to
production, etc. Then you have to configure all the nodes to not accidentally
pull from a remote RPM mirror. Some of the system updates try really hard to
re-enable that. :/ K8s gives you easy testing/promotion by the way they tag
things and prebuild stuff for you. So you just tweak your k8s version and off
you go. You don't have to mirror if you don't want to. Lower barrier to entry there.
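To make the tag-based promotion concrete, a rough sketch (the version number
and image name are from around the 1.10 series and are assumptions for
illustration):

# Pin the control-plane version explicitly when bootstrapping:
kubeadm init --kubernetes-version v1.10.5
# The control plane runs from prebuilt, version-tagged images, e.g.:
docker pull k8s.gcr.io/kube-apiserver-amd64:v1.10.5
# "Promotion" from test to prod is re-running the same bootstrap or upgrade
# with the tag you already validated; no local rebuilds, no rpm mirror to sync.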

[1] 
https://github.com/kubernetes/kubernetes/blob/8d73473ce8118422c9e0c2ba8ea669ebbf8cee1c/cmd/kubeadm/app/cmd/init.go#L267
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/images/images.go#L63

 > Other vendors provide their own builds, release tooling, or config
management integration, which is why that list is so big. But it is up
to the Operators to decide the route, and because k8s has a very clean,
easy, low bar for entry, it sets the bar for the other products to be
even better.

I fail to see how devstack and kubeadm aren't very much in the same vein?

KF> You've switched from comparing devstack and minikube to devstack and
kubeadm. Kubeadm is plumbing to build dev, test, and production systems.
Devstack is very much only ever intended for the dev phase, and like I said
before, a little more focused on developing OpenStack itself than on
developing code that runs in it. Minikube is really intended to allow devs to
develop software to run inside k8s, and to behave as much as possible like a
full k8s cluster.

> The reason people started adopting clouds was because it was very quick to 
> request resou

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Jay Pipes
optional component, by there still not being a way to download secrets 
to a vm securely from the secret store, by the secret store also being 
completely optional, etc. An app developer can't rely on any of it. :/ Heat is 
hamstrung by the lack of blessing so many other OpenStack services are. You 
can't fix it until you fix that fundamental brokenness in OpenStack.


I guess I just fundamentally disagree that OpenStack should be a monolithic,
all-things-for-all-users application architecture and feature set.


There is a *reason* that Kubernetes jettisoned all the cloud provider
code from its core. The reason is that setting up that base stuff is
*hard* and that work isn't germane to what Kubernetes is (a container
orchestration system, not a datacenter resource management system).



Heat is also hamstrung, as an orchestrator of existing APIs, by there being
holes in those APIs.


I agree there are some holes in some of the APIs. Happy to work on 
plugging those holes as long as the holes are properly identified as 
belonging to the correct API and are not simply a feature request that
would expand the scope of lower-level plumbing services like Nova.



Think of OpenStack like a game console. The moment you make a component
optional and make it take extra effort to obtain, few software developers
target it and rarely does anyone buy the add-ons because there isn't
software for them. Right now, just about everything in OpenStack is an
add-on. That's a problem.


I don't have any game consoles nor do I develop software for them, so I 
don't really see the correlation here. That said, I'm 100% against a 
monolithic application approach, as I've mentioned before.


Best,
-jay


Thanks,
Kevin



From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, July 02, 2018 4:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 06/27/2018 07:23 PM, Zane Bitter wrote:

On 27/06/18 07:55, Jay Pipes wrote:

Above, I was saying that the scope of the *OpenStack* community is
already too broad (IMHO). An example of projects that have made the
*OpenStack* community too broad are purpose-built telco applications
like Tacker [1] and Service Function Chaining. [2]

I've also argued in the past that all distro- or vendor-specific
deployment tools (Fuel, Triple-O, etc [3]) should live outside of
OpenStack because these projects are more products and the relentless
drive of vendor product management (rightfully) pushes the scope of
these applications to gobble up more and more feature space that may
or may not have anything to do with the core OpenStack mission (and
have more to do with those companies' product roadmap).


I'm still sad that we've never managed to come up with a single way to
install OpenStack. The amount of duplicated effort expended on that
problem is mind-boggling. At least we tried though. Excluding those
projects from the community would have just meant giving up from the
beginning.


You have to have motivation from vendors in order to achieve said single
way of installing OpenStack. I gave up a long time ago on distros and
vendors to get behind such an effort.

Where vendors see $$$, they will attempt to carve out value
differentiation. And value differentiation leads to, well, differences,
naturally.

And, despite what some might misguidedly think, Kubernetes has no single
installation method. Their *official* setup/install page is here:

https://kubernetes.io/docs/setup/pick-right-solution/

It lists no fewer than *37* (!) different ways of installing Kubernetes,
and I'm not even including anything listed in the "Custom Solutions"
section.


I think Thierry's new map, that collects installer services in a
separate bucket (that may eventually come with a separate git namespace)
is a helpful way of communicating to users what's happening without
forcing those projects outside of the community.


Sure, I agree the separate bucket is useful, particularly when paired
with information that allows operators to know how stable and/or
bleeding edge the code is expected to be -- you know, those "tags" that
the TC spent time curating.


So to answer your question:

 zaneb: yeah... nobody I know who argues for a small stable
core (in Nova) has ever said there should be fewer higher layer
services.
 zaneb: I'm not entirely sure where you got that idea from.


Note the emphasis on *Nova* above?

Also note that when I've said that *OpenStack* should have a smaller
mission and scope, that doesn't mean that higher-level services aren't
necessary or wanted.


Thank you for saying this, and could I please ask you to repeat this
disclaimer whenever you talk about a smaller scope for OpenStack.


Yes. I shall shout it from the highest mountains. [1]


Because for those of us working on higher-level services it feels like
there has been a non-stop chorus (both inside and outs

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Fox, Kevin M
I don't dispute trivial, but a self-hosting k8s on bare metal is not incredibly
hard. In fact, it is easier than you might think. k8s is a platform for
deploying/managing services, and guess what you need to provision bare metal?
Just a few microservices. A DHCP service: dhcpd in a DaemonSet works well.
Some PXE infrastructure: pixiecore with a simple HTTP backend works pretty
well in practice. A service to provide installation instructions: an nginx
server handing out kickstart files, for example. And a place to fetch RPMs
from, in case you don't have internet access or want to ensure uniformity:
an nginx server with a mirrored yum repo. It's even possible to seed it on
minikube and slough it off to its own cluster.
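As a sketch of how small that could be (the image name and manifest below are
placeholders, not a tested reference implementation):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dhcpd
spec:
  selector:
    matchLabels: {app: dhcpd}
  template:
    metadata:
      labels: {app: dhcpd}
    spec:
      hostNetwork: true
      containers:
      - name: dhcpd
        image: example.org/dhcpd:latest   # placeholder image
EOF
# Similar small manifests cover pixiecore for PXE, an nginx handing out
# kickstart files, and an nginx fronting a mirrored yum repo.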

The main hard part about it is currently no one is shipping a reference 
implementation of the above. That may change...

It is certainly much, much easier than deploying enough OpenStack to get a
self-hosting Ironic working.

Thanks,
Kevin 


From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, July 03, 2018 10:06 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 07/02/2018 03:31 PM, Zane Bitter wrote:
> On 28/06/18 15:09, Fox, Kevin M wrote:
>>   * made the barrier to testing/development as low as 'curl
>> http://..minikube; minikube start' (this spurs adoption and
>> contribution)
>
> That's not so different from devstack though.
>
>>   * not having large silo's in deployment projects allowed better
>> communication on common tooling.
>>   * Operator focused architecture, not project based architecture.
>> This simplifies the deployment situation greatly.
>>   * try whenever possible to focus on just the commons and push vendor
>> specific needs to plugins so vendors can deal with vendor issues
>> directly and not corrupt the core.
>
> I agree with all of those, but to be fair to OpenStack, you're leaving
> out arguably the most important one:
>
>  * Installation instructions start with "assume a working datacenter"
>
> They have that luxury; we do not. (To be clear, they are 100% right to
> take full advantage of that luxury. Although if there are still folks
> who go around saying that it's a trivial problem and OpenStackers must
> all be idiots for making it look so difficult, they should really stop
> embarrassing themselves.)

This.

There is nothing trivial about the creation of a working datacenter --
never mind a *well-running* datacenter. Comparing Kubernetes to
OpenStack -- particularly OpenStack's lower levels -- is missing this
fundamental point and ends up comparing apples to oranges.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Fox, Kevin M
Heh. You're not going to like it. :)

The very fastest path I can think of, but also the most disruptive, is the
following (there are also less disruptive paths):

First, define what OpenStack will be. If you don't know, you easily run into
people working at cross purposes. Maybe there are other things that will be
sister projects; that's fine. But it needs to be a whole product/project, not
split on interests. Think k8s SIGs, not OpenStack projects. The final result
is a singular thing though: k8s x.y.z, openstack iaas 2.y.z, or something
like that.

Have a look at what KubeVirt is doing. I think they have the right approach.

Then, define K8s to be part of the commons. It provides a large amount of
functionality OpenStack needs in the commons. If it is there, you can reuse
it and not reinvent it.

Implement a new version of each OpenStack service's API on top of the K8s API
using CRDs. At the same time, since we have now defined what OpenStack will
be, ensure the API has all the base use cases covered.

Provide a REST-service-to-CRD adapter to enable backwards compatibility with
older OpenStack API versions.
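A hypothetical sketch of what that could look like (the group and kind names
below are invented for illustration; no such API exists today):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: servers.compute.openstack.example.org
spec:
  group: compute.openstack.example.org
  version: v1alpha1
  scope: Namespaced
  names:
    plural: servers
    singular: server
    kind: Server
EOF
# A "server" is then just another k8s object; a thin REST shim can translate
# legacy Nova API calls into creates/updates/deletes of these objects.
kubectl get servers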

This completely removes statefulness from OpenStack services.

Rather than having a dozen databases, you have just an etcd system under the
hood. It provides locking and events as well, so no oslo.locking backing
service, no message queue, no SQL databases. This GREATLY simplifies what the
operators need to do. It removes a lot of code too. Backups are simpler as
there is only one thing. Operators' lives are drastically simpler.
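Roughly what etcd gives you out of the box (etcd v3 CLI; the key names are
invented for illustration):

export ETCDCTL_API=3
# State: keys and values instead of a per-project SQL schema.
etcdctl put /openstack/compute/node1/state enabled
# Events: watchers see changes without an AMQP bus.
etcdctl watch --prefix /openstack/compute/ &
# Locking: a named lock held while a command runs.
etcdctl lock compute-upgrade-lock sleep 5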

Upgrade tools should be unified: you upgrade your OpenStack deployment, not
upgrade Nova, upgrade Glance, upgrade Neutron, and so on.

Config can be easier as you can ship config with the same mechanism. Currently
the operator tries to define cluster config and it gets twisted and split up
per project, per node, and per subcomponent.
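For example (hypothetical names; the point is that config rides the same API
as everything else):

# Assumes a local nova.conf to upload; the ConfigMap name is made up.
kubectl create configmap nova-compute-config --from-file=nova.conf
# Mount it into the relevant pods; cluster config then lives in one place
# instead of being scattered per project and per node.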

Service account stuff is handled by Kubernetes service accounts, so no
RPC-over-AMQP security layer, no shipping credentials around manually in
config files, no figuring out how to roll credentials, etc. Agent stuff is
much simpler. Less code.

Provide prebuilt containers for all of your components and some basic tooling
to deploy them on a k8s. K8s provides a lot of tooling here. We've been
building it over and over in deployment tools; we can get rid of most of it.

Use HTTP for everything. We have all acknowledged we have been torturing
rabbit for a while, but it's still a critical piece of infrastructure at the
core today. We need to stop.

Provide a way to have a k8s secret poked into a vm.
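The Kubernetes half of that already exists; the missing piece is getting it
delivered into a guest VM. For example:

kubectl create secret generic db-creds --from-literal=password=s3cret
# A pod can mount this as a file or env var today; a VM workload has no
# equivalent path, which is the gap being pointed at.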

I could go on, but I think there are enough discussion points here already.
And I wonder if anyone made it this far without their head exploding. :)

Thanks,
Kevin





From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, July 02, 2018 2:45 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 07/02/2018 03:12 PM, Fox, Kevin M wrote:
> I think a lot of the pushback around not adding more common/required services 
> is the extra load it puts on ops though. hence these:
>>   * Consider abolishing the project walls.
>>   * simplify the architecture for ops
>
> IMO, those need to change to break free from the pushback and make progress 
> on the commons again.

What *specifically* would you do, Kevin?

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Fox, Kevin M
Yes/no on the vendor distro thing. They do provide a lot of options, but they
also provide a fully k8s-tested/provided route too: kubeadm. I can take a
Linux distro of choice, curl down kubeadm, and get a working Kubernetes in
literally a couple of minutes. No compiling anything or building containers.
That is what I mean when I say they have a product. Other vendors provide
their own builds, release tooling, or config management integration, which is
why that list is so big. But it is up to the Operators to decide the route,
and because k8s has a very clean, easy, low bar for entry, it sets the bar
for the other products to be even better.

The reason people started adopting clouds was that it was very quick to
request resources. One of clouds' features (some say drawbacks) vs. VM farms
has been ephemeralness. You build applications on top of VMs to provide a
Service to your Users. Great. Things like containers, though, launch much
faster and generally have more functionality for plumbing them together than
VMs do. So these days containers are out-clouding VMs at this use case. So,
does Nova continue to be the cloudy VM, or does it go for the more production
VM use case like oVirt and VMware? Without strong orchestration of some kind
on top, the cloudy use case is also really hard with Nova. So we keep getting
into this tug of war between people wanting VMs as building blocks of
cloud-scale applications, and those that want Nova to be an oVirt/VMware
replacement. Honestly, it's not doing either use case great because it can't
decide what to focus on. oVirt is a better VMware alternative today than Nova
is, having used it. It focuses specifically on the same use cases. Nova is
better at being a cloud than oVirt and VMware, but lags behind Azure/AWS a
lot when it comes to having apps self-host on it. (Progress is finally being
made again, but it's slow.)

While some people only ever consider running Kubernetes on top of a cloud,
some of us realize that maintaining both a cloud and a Kubernetes is
unnecessary, and that we can greatly simplify things by running k8s on bare
metal. This does then make it a competitor to Nova as a platform for running
workloads on. As k8s gains more multitenancy features, this trend will
continue to grow, I think. OpenStack needs to be ready for when that becomes
a thing.

Heat is a good start for an orchestration system, but it is hamstrung by
being an optional component, by there still not being a way to download
secrets to a VM securely from the secret store, by the secret store also
being completely optional, etc. An app developer can't rely on any of it. :/
Heat is hamstrung by the same lack of blessing that so many other OpenStack
services are. You can't fix it until you fix that fundamental brokenness in
OpenStack.

Heat is also hamstrung, as an orchestrator of existing APIs, by there being
holes in those APIs.

Think of OpenStack like a game console. The moment you make a component
optional and make it take extra effort to obtain, few software developers
target it and rarely does anyone buy the add-ons because there isn't
software for them. Right now, just about everything in OpenStack is an
add-on. That's a problem.

Thanks,
Kevin



From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, July 02, 2018 4:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 06/27/2018 07:23 PM, Zane Bitter wrote:
> On 27/06/18 07:55, Jay Pipes wrote:
>> Above, I was saying that the scope of the *OpenStack* community is
>> already too broad (IMHO). An example of projects that have made the
>> *OpenStack* community too broad are purpose-built telco applications
>> like Tacker [1] and Service Function Chaining. [2]
>>
>> I've also argued in the past that all distro- or vendor-specific
>> deployment tools (Fuel, Triple-O, etc [3]) should live outside of
>> OpenStack because these projects are more products and the relentless
>> drive of vendor product management (rightfully) pushes the scope of
>> these applications to gobble up more and more feature space that may
>> or may not have anything to do with the core OpenStack mission (and
>> have more to do with those companies' product roadmap).
>
> I'm still sad that we've never managed to come up with a single way to
> install OpenStack. The amount of duplicated effort expended on that
> problem is mind-boggling. At least we tried though. Excluding those
> projects from the community would have just meant giving up from the
> beginning.

You have to have motivation from vendors in order to achieve said single
way of installing OpenStack. I gave up a long time ago on distros and
vendors to get behind such an effort.

Where vendors see $$$, they will attempt to carve out value
differentiation. And value differentiation leads to, well, diff

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Davanum Srinivas
On Tue, Jul 3, 2018 at 1:06 PM Jay Pipes  wrote:
>
> On 07/02/2018 03:31 PM, Zane Bitter wrote:
> > On 28/06/18 15:09, Fox, Kevin M wrote:
> >>   * made the barrier to testing/development as low as 'curl
> >> http://..minikube; minikube start' (this spurs adoption and
> >> contribution)
> >
> > That's not so different from devstack though.
> >
> >>   * not having large silo's in deployment projects allowed better
> >> communication on common tooling.
> >>   * Operator focused architecture, not project based architecture.
> >> This simplifies the deployment situation greatly.
> >>   * try whenever possible to focus on just the commons and push vendor
> >> specific needs to plugins so vendors can deal with vendor issues
> >> directly and not corrupt the core.
> >
> > I agree with all of those, but to be fair to OpenStack, you're leaving
> > out arguably the most important one:
> >
> >  * Installation instructions start with "assume a working datacenter"
> >
> > They have that luxury; we do not. (To be clear, they are 100% right to
> > take full advantage of that luxury. Although if there are still folks
> > who go around saying that it's a trivial problem and OpenStackers must
> > all be idiots for making it look so difficult, they should really stop
> > embarrassing themselves.)
>
> This.
>
> There is nothing trivial about the creation of a working datacenter --
> never mind a *well-running* datacenter. Comparing Kubernetes to
> OpenStack -- particular OpenStack's lower levels -- is missing this
> fundamental point and ends up comparing apples to oranges.

100%

> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Jay Pipes

On 07/02/2018 03:31 PM, Zane Bitter wrote:

On 28/06/18 15:09, Fox, Kevin M wrote:
  * made the barrier to testing/development as low as 'curl 
http://..minikube; minikube start' (this spurs adoption and 
contribution)


That's not so different from devstack though.

  * not having large silo's in deployment projects allowed better 
communication on common tooling.
  * Operator focused architecture, not project based architecture. 
This simplifies the deployment situation greatly.
  * try whenever possible to focus on just the commons and push vendor 
specific needs to plugins so vendors can deal with vendor issues 
directly and not corrupt the core.


I agree with all of those, but to be fair to OpenStack, you're leaving 
out arguably the most important one:


     * Installation instructions start with "assume a working datacenter"

They have that luxury; we do not. (To be clear, they are 100% right to 
take full advantage of that luxury. Although if there are still folks 
who go around saying that it's a trivial problem and OpenStackers must 
all be idiots for making it look so difficult, they should really stop 
embarrassing themselves.)


This.

There is nothing trivial about the creation of a working datacenter -- 
never mind a *well-running* datacenter. Comparing Kubernetes to 
OpenStack -- particularly OpenStack's lower levels -- is missing this
fundamental point and ends up comparing apples to oranges.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Thierry Carrez

Zane Bitter wrote:

[...]
I think if OpenStack wants to gain back some of the steam it had 
before, it needs to adjust to the new world it is living in. This means:
  * Consider abolishing the project walls. They are driving bad 
architecture (not intentionally but as a side affect of structure)


In the spirit of cdent's blog post about random ideas: one idea I keep 
coming back to (and it's been around for a while, I don't remember who 
it first came from) is to start treating the compute node as a single 
project (I guess the k8s equivalent would be a kubelet). Have a single 
API - commands go in, events come out.


Right, that's what SIG Node in Kubernetes is focused on: optimize what 
ends up running on the Kubernetes node. That's where their goal-oriented 
team structure shines, and why I'd like us to start organizing work 
along those lines as well (rather than along code repository ownership 
lines).



[...]
We probably actually need two groups: one to think about the 
architecture of the user experience of OpenStack, and one to think about 
the internal architecture as a whole.


I'd be very enthusiastic about the TC chartering some group to work on 
this. It has worried me for a long time that there is nobody designing 
OpenStack as an whole; design is done at the level of individual 
projects, and OpenStack is an ad-hoc collection of what they produce. 
Unfortunately we did have an Architecture Working Group for a while (in 
the sense of the second definition above), and it fizzled out because 
there weren't enough people with enough time to work on it. Until we can 
identify at least a theoretical reason why a new effort would be more 
successful, I don't think there is going to be any appetite for trying 
again.


I agree. As one of the very few people that showed up to try to drive 
this working group, I could see that the people calling for more 
architectural up-front design are generally not the people showing up to 
help drive it. Because the reality of that work is not about having good 
ideas -- "put me in charge and I'll fix everything". It's about taking 
the time to document it, advocate for it, and yes, drive it and 
implement it across project team boundaries. It's a lot more work than 
posting a good idea on an email thread wondering why nobody else is 
doing it.


Another thing we need to keep in mind is that OpenStack has a lot of 
successful users, and IMHO we can't afford to break them. Proposing 
incremental, backward-compatible change is therefore more productive 
than talking about how you would design OpenStack if you started today.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Jay Pipes

On 06/27/2018 07:23 PM, Zane Bitter wrote:

On 27/06/18 07:55, Jay Pipes wrote:
Above, I was saying that the scope of the *OpenStack* community is 
already too broad (IMHO). An example of projects that have made the 
*OpenStack* community too broad are purpose-built telco applications 
like Tacker [1] and Service Function Chaining. [2]


I've also argued in the past that all distro- or vendor-specific 
deployment tools (Fuel, Triple-O, etc [3]) should live outside of 
OpenStack because these projects are more products and the relentless 
drive of vendor product management (rightfully) pushes the scope of 
these applications to gobble up more and more feature space that may 
or may not have anything to do with the core OpenStack mission (and 
have more to do with those companies' product roadmap).


I'm still sad that we've never managed to come up with a single way to 
install OpenStack. The amount of duplicated effort expended on that 
problem is mind-boggling. At least we tried though. Excluding those 
projects from the community would have just meant giving up from the 
beginning.


You have to have motivation from vendors in order to achieve said single 
way of installing OpenStack. I gave up a long time ago on distros and 
vendors to get behind such an effort.


Where vendors see $$$, they will attempt to carve out value 
differentiation. And value differentiation leads to, well, differences, 
naturally.


And, despite what some might misguidedly think, Kubernetes has no single 
installation method. Their *official* setup/install page is here:


https://kubernetes.io/docs/setup/pick-right-solution/

It lists no fewer than *37* (!) different ways of installing Kubernetes, 
and I'm not even including anything listed in the "Custom Solutions" 
section.


I think Thierry's new map, that collects installer services in a 
separate bucket (that may eventually come with a separate git namespace) 
is a helpful way of communicating to users what's happening without 
forcing those projects outside of the community.


Sure, I agree the separate bucket is useful, particularly when paired 
with information that allows operators to know how stable and/or 
bleeding edge the code is expected to be -- you know, those "tags" that 
the TC spent time curating.



So to answer your question:

 zaneb: yeah... nobody I know who argues for a small stable 
core (in Nova) has ever said there should be fewer higher layer 
services.

 zaneb: I'm not entirely sure where you got that idea from.


Note the emphasis on *Nova* above?

Also note that when I've said that *OpenStack* should have a smaller 
mission and scope, that doesn't mean that higher-level services aren't 
necessary or wanted.


Thank you for saying this, and could I please ask you to repeat this 
disclaimer whenever you talk about a smaller scope for OpenStack.


Yes. I shall shout it from the highest mountains. [1]

Because for those of us working on higher-level services it feels like 
there has been a non-stop chorus (both inside and outside the project) 
of people wanting to redefine OpenStack as something that doesn't 
include us.


I've said in the past (on Twitter, can't find the link right now, but 
it's out there somewhere) something to the effect of "at some point, 
someone just needs to come out and say that OpenStack is, at its core, 
Nova, Neutron, Keystone, Glance and Cinder".


Perhaps this is what you were recollecting. I would use a different 
phrase nowadays to describe what I was thinking with the above.


I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are 
a definitive lower level of an OpenStack deployment. They represent a 
set of required integrated services that supply the most basic 
infrastructure for datacenter resource management when deploying OpenStack."


Note the difference in wording. Instead of saying "OpenStack is X", I'm 
saying "These particular services represent a specific layer of an 
OpenStack deployment".


Nowadays, I would further add something to the effect of "Depending on 
the particular use cases and workloads the OpenStack deployer wishes to 
promote, an additional layer of services provides workload orchestration 
and workflow management capabilities. This layer of services include 
Heat, Mistral, Tacker, Service Function Chaining, Murano, etc".


Does that provide you with some closure on this feeling of "non-stop 
chorus" of exclusion that you mentioned above?


The reason I haven't dropped this discussion is because I really want to 
know if _all_ of those people were actually talking about something else 
(e.g. a smaller scope for Nova), or if it's just you. Because you and I 
are in complete agreement that Nova has grown a lot of obscure 
capabilities that make it fiendishly difficult to maintain, and that in 
many cases might never have been requested if we'd had higher-level 
tools that could meet the same use cases by composing simpler operations.


IMHO some of the contributing factors 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Jay Pipes

On 07/02/2018 03:12 PM, Fox, Kevin M wrote:

I think a lot of the pushback around not adding more common/required services 
is the extra load it puts on ops though. hence these:

  * Consider abolishing the project walls.
  * simplify the architecture for ops


IMO, those need to change to break free from the pushback and make progress on 
the commons again.


What *specifically* would you do, Kevin?

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Zane Bitter
well. 
Some of those projects only exist at all because of boundaries between 
stuff on the compute node, while others are just unnecessarily 
complicated to add to a deployment because of those boundaries. (See 
https://julien.danjou.info/lessons-from-openstack-telemetry-incubation/ 
for some insightful observations on that topic - note that you don't 
have to agree with all of it to appreciate the point that the 
balkanisation of the compute node architecture leads to bad design 
decisions.)


In theory doing that should make it easier to build e.g. a cut-down 
compute API of the kind that Jay was talking about upthread.


I know that the short-term costs of making a change like this are going 
to be high - we aren't even yet at a point where making a stable API for 
compute drivers has been judged to meet a cost/benefit analysis. But 
maybe if we can do a comprehensive job of articulating the long-term 
benefits, we might find that it's still the right thing to do.



  * focus on the commons first.
  * simplify the architecture for ops:
* make as much as possible stateless and centralize remaining state.
* stop moving config options around with every release. Make it promote 
automatically and persist it somewhere.
* improve serial performance before sharding. k8s can do 5000 nodes on one 
control plane. No reason to do nova cells and make ops deal with it except for 
the most huge of clouds
  * consider a reference product (think Linux vanilla kernel. distro's can 
provide their own variants. thats ok)
  * come up with an architecture team for the whole, not the subsystem. The 
whole thing needs to work well.


We probably actually need two groups: one to think about the 
architecture of the user experience of OpenStack, and one to think about 
the internal architecture as a whole.


I'd be very enthusiastic about the TC chartering some group to work on 
this. It has worried me for a long time that there is nobody designing 
OpenStack as an whole; design is done at the level of individual 
projects, and OpenStack is an ad-hoc collection of what they produce. 
Unfortunately we did have an Architecture Working Group for a while (in 
the sense of the second definition above), and it fizzled out because 
there weren't enough people with enough time to work on it. Until we can 
identify at least a theoretical reason why a new effort would be more 
successful, I don't think there is going to be any appetite for trying 
again.


cheers,
Zane.


  * encourage current OpenStack devs to test/deploy Kubernetes. It has some 
very good ideas that OpenStack could benefit from. If you don't know what they 
are, you can't adopt them.

And I know its hard to talk about, but consider just adopting k8s as the 
commons and build on top of it. OpenStack's api's are good. The implementations 
right now are very very heavy for ops. You could tie in K8s's pod scheduler 
with vm stuff running in containers and get a vastly simpler architecture for 
operators to deal with. Yes, this would be a major disruptive change to 
OpenStack. But long term, I think it would make for a much healthier OpenStack.

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Wednesday, June 27, 2018 4:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 27/06/18 07:55, Jay Pipes wrote:

WARNING:

Danger, Will Robinson! Strong opinions ahead!


I'd have been disappointed with anything less :)


On 06/26/2018 10:00 PM, Zane Bitter wrote:

On 26/06/18 09:12, Jay Pipes wrote:

Is (one of) the problem(s) with our community that we have too small
of a scope/footprint? No. Not in the slightest.


Incidentally, this is an interesting/amusing example of what we talked
about this morning on IRC[1]: you say your concern is that the scope
of *Nova* is too big and that you'd be happy to have *more* services
in OpenStack if they took the orchestration load off Nova and left it
just to handle the 'plumbing' part (which I agree with, while noting
that nobody knows how to get there from here); but here you're
implying that Kata Containers (something that will clearly have no
effect either way on the simplicity or otherwise of Nova) shouldn't be
part of the Foundation because it will take focus away from
Nova/OpenStack.


Above, I was saying that the scope of the *OpenStack* community is
already too broad (IMHO). An example of projects that have made the
*OpenStack* community too broad are purpose-built telco applications
like Tacker [1] and Service Function Chaining. [2]

I've also argued in the past that all distro- or vendor-specific
deployment tools (Fuel, Triple-O, etc [3]) should live outside of
OpenStack because these projects are more products and the relentless
drive of vendor product management (rightfully) pushes the scope of
these applications to gobble up more and more feature space that may or
may not have anything to do with the core OpenStack mi

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Fox, Kevin M
I think Keystone is one of the exceptions currently, as it is the
quintessential common service in all of OpenStack: since the rule was made
that all things auth belong to Keystone, the other projects don't waver from
it. The same can not be said of, say, Barbican. Steps have been made recently
to get farther down that path, but it is still not there yet. Until it is
blessed as a common, required component, other silos are still
disincentivized to depend on it.

I think a lot of the pushback around not adding more common/required services
is the extra load it puts on ops, though. Hence these:
>  * Consider abolishing the project walls.
>  * simplify the architecture for ops

IMO, those need to change to break free from the pushback and make progress on 
the commons again.

Thanks,
Kevin

From: Lance Bragstad [lbrags...@gmail.com]
Sent: Monday, July 02, 2018 11:41 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 06/28/2018 02:09 PM, Fox, Kevin M wrote:
> I'll weigh in a bit with my operator hat on as recent experience it pertains 
> to the current conversation
>
> Kubernetes has largely succeeded in common distribution tools where OpenStack 
> has not been able to.
> kubeadm was created as a way to centralize deployment best practices, config, 
> and upgrade stuff into a common code based that other deployment tools can 
> build on.
>
> I think this has been successful for a few reasons:
>  * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. 
> (Eating its own dogfood)
>  * was willing to make their api robust enough to handle that self 
> enhancement. (secrets are a thing, orchestration is not optional, etc)
>  * they decided to produce a reference product (very important to adoption 
> IMO. You don't have to "build from source" to kick the tires.)
>  * made the barrier to testing/development as low as 'curl 
> http://..minikube; minikube start' (this spurs adoption and contribution)
>  * not having large silo's in deployment projects allowed better 
> communication on common tooling.
>  * Operator focused architecture, not project based architecture. This 
> simplifies the deployment situation greatly.
>  * try whenever possible to focus on just the commons and push vendor 
> specific needs to plugins so vendors can deal with vendor issues directly and 
> not corrupt the core.
>
> I've upgraded many OpenStacks since Essex and usually it is multiple weeks of 
> prep, and a 1-2 day outage to perform the deed. about 50% of the upgrades, 
> something breaks only on the production system and needs hot patching on the 
> spot. About 10% of the time, I've had to write the patch personally.
>
> I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For 
> comparison, what did I have to do? A couple hours of looking at release notes 
> and trying to dig up examples of where things broke for others. Nothing 
> popped up. Then:
>
> on the controller, I ran:
> yum install -y kubeadm #get the newest kubeadm
> kubeadm upgrade plan #check things out
>
> It told me I had 2 choices. I could:
>  * kubeadm upgrade v1.9.8
>  * kubeadm upgrade v1.10.5
>
> I ran:
> kubeadm upgrade v1.10.5
>
> The control plane was down for under 60 seconds and then the cluster was 
> upgraded. The rest of the services did a rolling upgrade live and took a few 
> more minutes.
>
> I can take my time to upgrade kubelets as mixed kubelet versions works well.
>
> Upgrading kubelet is about as easy.
>
> Done.
>
> There's a lot of things to learn from the governance / architecture of 
> Kubernetes..
>
> Fundamentally, there isn't huge differences in what Kubernetes and OpenStack 
> tries to provide users. Scheduling a VM or a Container via an api with some 
> kind of networking and storage is the same kind of thing in either case.
>
> The how to get the software (openstack or k8s) running is about as polar 
> opposite you can get though.
>
> I think if OpenStack wants to gain back some of the steam it had before, it 
> needs to adjust to the new world it is living in. This means:
>  * Consider abolishing the project walls. They are driving bad architecture 
> (not intentionally but as a side affect of structure)
>  * focus on the commons first.

Nearly all the work we've been doing from an identity perspective over
the last 18 months has enabled or directly improved the commons (or what
I would consider the commons). I agree that it's important, but we're
already focusing on it to the point where we're out of bandwidth.

Is the problem that it doesn't appear that way? Do we have different
ideas of what the "commons" are?

>  * simplify the architecture for ops:
>* make 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Lance Bragstad
ry very heavy for ops. You could tie in K8s's 
> pod scheduler with vm stuff running in containers and get a vastly simpler 
> architecture for operators to deal with. Yes, this would be a major 
> disruptive change to OpenStack. But long term, I think it would make for a 
> much healthier OpenStack.
>
> Thanks,
> Kevin
> 
> From: Zane Bitter [zbit...@redhat.com]
> Sent: Wednesday, June 27, 2018 4:23 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
>
> On 27/06/18 07:55, Jay Pipes wrote:
>> WARNING:
>>
>> Danger, Will Robinson! Strong opinions ahead!
> I'd have been disappointed with anything less :)
>
>> On 06/26/2018 10:00 PM, Zane Bitter wrote:
>>> On 26/06/18 09:12, Jay Pipes wrote:
>>>> Is (one of) the problem(s) with our community that we have too small
>>>> of a scope/footprint? No. Not in the slightest.
>>> Incidentally, this is an interesting/amusing example of what we talked
>>> about this morning on IRC[1]: you say your concern is that the scope
>>> of *Nova* is too big and that you'd be happy to have *more* services
>>> in OpenStack if they took the orchestration load off Nova and left it
>>> just to handle the 'plumbing' part (which I agree with, while noting
>>> that nobody knows how to get there from here); but here you're
>>> implying that Kata Containers (something that will clearly have no
>>> effect either way on the simplicity or otherwise of Nova) shouldn't be
>>> part of the Foundation because it will take focus away from
>>> Nova/OpenStack.
>> Above, I was saying that the scope of the *OpenStack* community is
>> already too broad (IMHO). An example of projects that have made the
>> *OpenStack* community too broad are purpose-built telco applications
>> like Tacker [1] and Service Function Chaining. [2]
>>
>> I've also argued in the past that all distro- or vendor-specific
>> deployment tools (Fuel, Triple-O, etc [3]) should live outside of
>> OpenStack because these projects are more products and the relentless
>> drive of vendor product management (rightfully) pushes the scope of
>> these applications to gobble up more and more feature space that may or
>> may not have anything to do with the core OpenStack mission (and have
>> more to do with those companies' product roadmap).
> I'm still sad that we've never managed to come up with a single way to
> install OpenStack. The amount of duplicated effort expended on that
> problem is mind-boggling. At least we tried though. Excluding those
> projects from the community would have just meant giving up from the
> beginning.
>
> I think Thierry's new map, that collects installer services in a
> separate bucket (that may eventually come with a separate git namespace)
> is a helpful way of communicating to users what's happening without
> forcing those projects outside of the community.
>
>> On the other hand, my statement that the OpenStack Foundation having 4
>> different focus areas leads to a lack of, well, focus, is a general
>> statement on the OpenStack *Foundation* simultaneously expanding its
>> sphere of influence while at the same time losing sight of OpenStack
>> itself -- and thus the push to create an Open Infrastructure Foundation
>> that would be able to compete with the larger mission of the Linux
>> Foundation.
>>
>> [1] This is nothing against Tacker itself. I just don't believe that
>> *applications* that are specially built for one particular industry
>> belong in the OpenStack set of projects. I had repeatedly stated this on
>> Tacker's application to become an OpenStack project, FWIW:
>>
>> https://review.openstack.org/#/c/276417/
>>
>> [2] There is also nothing wrong with service function chains. I just
>> don't believe they belong in *OpenStack*. They more appropriately belong
>> in the (Open)NFV community because they just are not applicable outside
>> of that community's scope and mission.
>>
>> [3] It's interesting to note that Airship was put into its own
>> playground outside the bounds of the OpenStack community (but inside the
>> bounds of the OpenStack Foundation).
> I wouldn't say it's inside the bounds of the Foundation, and in fact
> confusion about that is a large part of why I wrote the blog post. It is
> a 100% unofficial project that just happens to be hosted on our infra.
> Saying it's inside the bounds of the Foundation is like saying
> Kubernetes is inside the bounds of GitHub.
>
>> Airship is AT's specific
>> deployment tooling for "the edge!". I 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Chris Dent

On Thu, 28 Jun 2018, Fox, Kevin M wrote:


I think if OpenStack wants to gain back some of the steam it had before, it 
needs to adjust to the new world it is living in. This means:
* Consider abolishing the project walls. They are driving bad architecture (not 
intentionally but as a side affect of structure)
* focus on the commons first.
* simplify the architecture for ops:
  * make as much as possible stateless and centralize remaining state.
  * stop moving config options around with every release. Make it promote 
automatically and persist it somewhere.
  * improve serial performance before sharding. k8s can do 5000 nodes on one 
control plane. No reason to do nova cells and make ops deal with it except for 
the most huge of clouds
* consider a reference product (think Linux vanilla kernel. distro's can 
provide their own variants. thats ok)
* come up with an architecture team for the whole, not the subsystem. The whole 
thing needs to work well.
* encourage current OpenStack devs to test/deploy Kubernetes. It has some very 
good ideas that OpenStack could benefit from. If you don't know what they are, 
you can't adopt them.


These are ideas worth thinking about. We may not be able to do them
(unclear) but they are stimulating and interesting and we need to
keep the conversation going. Thank you.

I referenced this thread from a blog post I just made
https://anticdent.org/some-opinions-on-openstack.html
which is just a bunch of random ideas on tweaking OpenStack in the
face of growth and change. It's quite likely it's junk, but there
may be something useful to extract as we try to achieve some focus.


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-29 Thread Jean-Philippe Evrard
My two cents:

> I think if OpenStack wants to gain back some of the steam it had before, it 
> needs to adjust to the new world it is living in. This means:
>  * Consider abolishing the project walls. They are driving bad architecture 
> (not intentionally but as a side affect of structure)

As long as there is no walled garden, everything should be done in a
modular way. I don't think having separated Nova from Cinder prevented
contributions; quite the contrary. (Optionally, watch [1].)
I am not familiar with the modularity and ease of contribution in k8s,
so the modularity could be there in a different form.

[1]: https://www.youtube.com/watch?v=xYkh1sAu0UM

>  * focus on the commons first.

Good point.

>  * simplify the architecture for ops:

Good point, but I don't see how code, org structure, or project
classification changes things here.

>  * come up with an architecture team for the whole, not the subsystem. The 
> whole thing needs to work well.

Couldn't that be done with a TC-sponsored group?

>  * encourage current OpenStack devs to test/deploy Kubernetes. It has some 
> very good ideas that OpenStack could benefit from. If you don't know what 
> they are, you can't adopt them.

Good idea.

>
> And I know its hard to talk about, but consider just adopting k8s as the 
> commons and build on top of it. OpenStack's api's are good. The 
> implementations right now are very very heavy for ops. You could tie in K8s's 
> pod scheduler with vm stuff running in containers and get a vastly simpler 
> architecture for operators to deal with. Yes, this would be a major 
> disruptive change to OpenStack. But long term, I think it would make for a 
> much healthier OpenStack.

Well, I know operators that wouldn't like k8s and openstack components
on top. If you're talking about just a shim between k8s concepts and
openstack apis, that sounds like a good project : p

>> I've also argued in the past that all distro- or vendor-specific
>> deployment tools (Fuel, Triple-O, etc [3]) should live outside of
>> OpenStack because these projects are more products and the relentless
>> drive of vendor product management (rightfully) pushes the scope of
>> these applications to gobble up more and more feature space that may or
>> may not have anything to do with the core OpenStack mission (and have
>> more to do with those companies' product roadmap).
>
> I'm still sad that we've never managed to come up with a single way to
> install OpenStack. The amount of duplicated effort expended on that
> problem is mind-boggling. At least we tried though. Excluding those
> projects from the community would have just meant giving up from the
> beginning.

Well, I think it's a blessing and a curse.

Sometimes, I'd rather have only one tool, so that we all work on it, and
not dilute the community into small groups.

But when I started deploying OpenStack years ago, I was glad I could
find a community way to deploy it using ,
and not .
So for me, I am glad (what became) OpenStack-Ansible existed and I am
glad it still exists.

The effort you are talking about is not purely duplicated:
- Example: whether openstack-ansible existed or not, people used to
Ansible would still prefer deploying OpenStack with Ansible over Puppet
  or Chef (because of their experience) if not relying on a vendor.
  In that case, they would probably create their own series of playbooks
  (I've seen some). That's the real waste, IMO.
- Deployments projects talk to each other.

Talking about living outside OpenStack, where would, for you,
OpenStack-Ansible, the puppet modules, or OpenStack-Chef be?
For OSA, I consider our community now as NOT vendor specific, as many
actors are now playing with it.
We've spent considerable effort on outreach and on ensuring everyone
can get involved.
So we should be in openstack/ right? But what about 4 years ago? Every
project starts with a sponsor.

I am not sure a classification (is it outside, is it inside
openstack/?) matters in this case.

>
> I think Thierry's new map, that collects installer services in a
> separate bucket (that may eventually come with a separate git namespace)
> is a helpful way of communicating to users what's happening without
> forcing those projects outside of the community.

Side note: I'd be super happy if OpenStack-Ansible could be in that bucket!

Cheers,
JP (evrardjp)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-28 Thread Fox, Kevin M
I'll weigh in a bit with my operator hat on, as recent experience pertains to
the current conversation.

Kubernetes has largely succeeded at common distribution tooling where
OpenStack has not been able to.
kubeadm was created as a way to centralize deployment best practices, config,
and upgrade stuff into a common code base that other deployment tools can
build on.

I think this has been successful for a few reasons:
 * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. (Eating 
its own dogfood)
 * was willing to make its API robust enough to handle that self-enhancement.
(secrets are a thing, orchestration is not optional, etc)
 * they decided to produce a reference product (very important to adoption IMO. 
You don't have to "build from source" to kick the tires.)
 * made the barrier to testing/development as low as 'curl 
http://..minikube; minikube start' (this spurs adoption and contribution)
 * not having large silos in deployment projects allowed better communication
on common tooling.
 * Operator focused architecture, not project based architecture. This 
simplifies the deployment situation greatly.
 * try whenever possible to focus on just the commons and push vendor specific 
needs to plugins so vendors can deal with vendor issues directly and not 
corrupt the core.

I've upgraded many OpenStacks since Essex and usually it is multiple weeks of
prep, and a 1-2 day outage to perform the deed. In about 50% of the upgrades,
something breaks only on the production system and needs hot patching on the
spot. About 10% of the time, I've had to write the patch personally.

I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For comparison, 
what did I have to do? A couple hours of looking at release notes and trying to 
dig up examples of where things broke for others. Nothing popped up. Then:

on the controller, I ran:
yum install -y kubeadm #get the newest kubeadm
kubeadm upgrade plan #check things out

It told me I had 2 choices. I could:
 * kubeadm upgrade v1.9.8
 * kubeadm upgrade v1.10.5

I ran:
kubeadm upgrade v1.10.5

The control plane was down for under 60 seconds and then the cluster was 
upgraded. The rest of the services did a rolling upgrade live and took a few 
more minutes.

I can take my time to upgrade kubelets, as mixed kubelet versions work well.

Upgrading kubelet is about as easy.
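For reference, the per-node kubelet step is roughly the following (assuming a
yum-based host with the upstream Kubernetes package repo configured;
drain/uncordon steps omitted):

yum install -y kubelet kubectl
systemctl daemon-reload
systemctl restart kubelet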

Done.

There are a lot of things to learn from the governance and architecture of
Kubernetes.

Fundamentally, there aren't huge differences in what Kubernetes and OpenStack
try to provide users. Scheduling a VM or a container via an API with some
kind of networking and storage is the same kind of thing in either case.

How you get the software (OpenStack or k8s) running, though, is about as
polar opposite as you can get.

I think if OpenStack wants to gain back some of the steam it had before, it 
needs to adjust to the new world it is living in. This means:
 * Consider abolishing the project walls. They are driving bad architecture
(not intentionally, but as a side effect of structure)
 * focus on the commons first.
 * simplify the architecture for ops:
   * make as much as possible stateless and centralize remaining state.
   * stop moving config options around with every release. Make it promote 
automatically and persist it somewhere.
   * improve serial performance before sharding. k8s can do 5000 nodes on one
control plane. No reason to do Nova cells and make ops deal with it except
for the very largest of clouds
 * consider a reference product (think the vanilla Linux kernel; distros can
provide their own variants, and that's OK)
 * come up with an architecture team for the whole, not the subsystem. The 
whole thing needs to work well.
 * encourage current OpenStack devs to test/deploy Kubernetes. It has some very 
good ideas that OpenStack could benefit from. If you don't know what they are, 
you can't adopt them.

And I know it's hard to talk about, but consider just adopting k8s as the 
commons and building on top of it. OpenStack's APIs are good. The implementations 
right now are very, very heavy for ops. You could tie the k8s pod scheduler 
to VM workloads running in containers and get a vastly simpler architecture for 
operators to deal with. Yes, this would be a major disruptive change to 
OpenStack. But long term, I think it would make for a much healthier OpenStack.
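
For what it's worth, there is already work in this direction outside of 
OpenStack; KubeVirt, for example, wraps each VM in a pod so the normal pod 
scheduler places it. A rough sketch of trying it (the version and release URL 
are assumptions on my part, check the project's release page for the real ones):

export KUBEVIRT_VERSION=v0.7.0   # example/assumed version
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt.yaml
kubectl get crd | grep kubevirt.io               # VM types show up as ordinary API resources
kubectl get pods --all-namespaces | grep virt-   # the controllers are scheduled like any other pod

I'm not saying KubeVirt is the answer, just that the scheduling model already exists.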

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Wednesday, June 27, 2018 4:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 27/06/18 07:55, Jay Pipes wrote:
> WARNING:
>
> Danger, Will Robinson! Strong opinions ahead!

I'd have been disappointed with anything less :)

> On 06/26/2018 10:00 PM, Zane Bitter wrote:
>> On 26/06/18 09:12, Jay Pipes wrote:
>>> Is (one of) the problem(s) with our community that we have too small
>>> 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-27 Thread Zane Bitter

On 27/06/18 07:55, Jay Pipes wrote:

WARNING:

Danger, Will Robinson! Strong opinions ahead!


I'd have been disappointed with anything less :)


On 06/26/2018 10:00 PM, Zane Bitter wrote:

On 26/06/18 09:12, Jay Pipes wrote:
Is (one of) the problem(s) with our community that we have too small 
of a scope/footprint? No. Not in the slightest.


Incidentally, this is an interesting/amusing example of what we talked 
about this morning on IRC[1]: you say your concern is that the scope 
of *Nova* is too big and that you'd be happy to have *more* services 
in OpenStack if they took the orchestration load off Nova and left it 
just to handle the 'plumbing' part (which I agree with, while noting 
that nobody knows how to get there from here); but here you're 
implying that Kata Containers (something that will clearly have no 
effect either way on the simplicity or otherwise of Nova) shouldn't be 
part of the Foundation because it will take focus away from 
Nova/OpenStack.


Above, I was saying that the scope of the *OpenStack* community is 
already too broad (IMHO). Examples of projects that have made the 
*OpenStack* community too broad are purpose-built telco applications 
like Tacker [1] and Service Function Chaining [2].


I've also argued in the past that all distro- or vendor-specific 
deployment tools (Fuel, Triple-O, etc [3]) should live outside of 
OpenStack because these projects are more products and the relentless 
drive of vendor product management (rightfully) pushes the scope of 
these applications to gobble up more and more feature space that may or 
may not have anything to do with the core OpenStack mission (and have 
more to do with those companies' product roadmap).


I'm still sad that we've never managed to come up with a single way to 
install OpenStack. The amount of duplicated effort expended on that 
problem is mind-boggling. At least we tried though. Excluding those 
projects from the community would have just meant giving up from the 
beginning.


I think Thierry's new map, that collects installer services in a 
separate bucket (that may eventually come with a separate git namespace) 
is a helpful way of communicating to users what's happening without 
forcing those projects outside of the community.


On the other hand, my statement that the OpenStack Foundation having 4 
different focus areas leads to a lack of, well, focus, is a general 
statement on the OpenStack *Foundation* simultaneously expanding its 
sphere of influence while at the same time losing sight of OpenStack 
itself -- and thus the push to create an Open Infrastructure Foundation 
that would be able to compete with the larger mission of the Linux 
Foundation.


[1] This is nothing against Tacker itself. I just don't believe that 
*applications* that are specially built for one particular industry 
belong in the OpenStack set of projects. I had repeatedly stated this on 
Tacker's application to become an OpenStack project, FWIW:


https://review.openstack.org/#/c/276417/

[2] There is also nothing wrong with service function chains. I just 
don't believe they belong in *OpenStack*. They more appropriately belong 
in the (Open)NFV community because they just are not applicable outside 
of that community's scope and mission.


[3] It's interesting to note that Airship was put into its own 
playground outside the bounds of the OpenStack community (but inside the 
bounds of the OpenStack Foundation).


I wouldn't say it's inside the bounds of the Foundation, and in fact 
confusion about that is a large part of why I wrote the blog post. It is 
a 100% unofficial project that just happens to be hosted on our infra. 
Saying it's inside the bounds of the Foundation is like saying 
Kubernetes is inside the bounds of GitHub.


Airship is AT&T's specific 
deployment tooling for "the edge!". I actually think this was the 
correct move for this vendor-opinionated deployment tool.



So to answer your question:

 zaneb: yeah... nobody I know who argues for a small stable 
core (in Nova) has ever said there should be fewer higher layer services.

 zaneb: I'm not entirely sure where you got that idea from.


Note the emphasis on *Nova* above?

Also note that when I've said that *OpenStack* should have a smaller 
mission and scope, that doesn't mean that higher-level services aren't 
necessary or wanted.


Thank you for saying this, and could I please ask you to repeat this 
disclaimer whenever you talk about a smaller scope for OpenStack. 
Because for those of us working on higher-level services it feels like 
there has been a non-stop chorus (both inside and outside the project) 
of people wanting to redefine OpenStack as something that doesn't 
include us.


The reason I haven't dropped this discussion is because I really want to 
know if _all_ of those people were actually talking about something else 
(e.g. a smaller scope for Nova), or if it's just you. Because you and I 
are in complete agreement that Nova has grown a lot of 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-27 Thread Thierry Carrez

Jay Pipes wrote:

[...]
I've also argued in the past that all distro- or vendor-specific 
deployment tools (Fuel, Triple-O, etc [3]) should live outside of 
OpenStack because these projects are more products and the relentless 
drive of vendor product management (rightfully) pushes the scope of 
these applications to gobble up more and more feature space that may or 
may not have anything to do with the core OpenStack mission (and have 
more to do with those companies' product roadmap).


I totally agree on the need to distinguish between 
OpenStack-the-main-product (the set of user-facing API services that one 
assembles to build an infrastructure provider) and the tooling that 
helps deploy it. The map[1] that was produced last year draws that line 
by placing deployment and lifecycle management tooling into a separate 
bucket.


I'm not sure of the value of preventing those interested in openly 
collaborating around packaging solutions from doing it as a part of 
OpenStack-the-community. As long as there is potential for open 
collaboration I think we should encourage it, as long as we make it 
clear where the "main product" (the one that deployment tooling helps deploy) is.


On the other hand, my statement that the OpenStack Foundation having 4 
different focus areas leads to a lack of, well, focus, is a general 
statement on the OpenStack *Foundation* simultaneously expanding its 
sphere of influence while at the same time losing sight of OpenStack 
itself


I understand that fear -- however it's not really a zero-sum game. In 
all of those "focus areas", OpenStack is a piece of the puzzle, so it's 
still very central to everything we do.


-- and thus the push to create an Open Infrastructure Foundation 
that would be able to compete with the larger mission of the Linux 
Foundation.


As I explained in a talk in Vancouver[2], the strategic evolution of the 
Foundation is more the result of a number of parallel discussions 
happening in 2017 that pointed toward a similar need for a change: 
moving the discussions from being product-oriented to being 
goal-oriented, and no longer be stuck in an "everything we produce must 
be called OpenStack" box. It's more the result of our community's 
evolving needs than the need to "compete".


[1] http://openstack.org/openstack-map
[2] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20968/beyond-datacenter-cloud-the-future-of-the-openstack-foundation


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-27 Thread Jay Pipes

WARNING:

Danger, Will Robinson! Strong opinions ahead!

On 06/26/2018 10:00 PM, Zane Bitter wrote:

On 26/06/18 09:12, Jay Pipes wrote:
Is (one of) the problem(s) with our community that we have too small 
of a scope/footprint? No. Not in the slightest.


Incidentally, this is an interesting/amusing example of what we talked 
about this morning on IRC[1]: you say your concern is that the scope of 
*Nova* is too big and that you'd be happy to have *more* services in 
OpenStack if they took the orchestration load off Nova and left it just 
to handle the 'plumbing' part (which I agree with, while noting that 
nobody knows how to get there from here); but here you're implying that 
Kata Containers (something that will clearly have no effect either way 
on the simplicity or otherwise of Nova) shouldn't be part of the 
Foundation because it will take focus away from Nova/OpenStack.


Above, I was saying that the scope of the *OpenStack* community is 
already too broad (IMHO). Examples of projects that have made the 
*OpenStack* community too broad are purpose-built telco applications 
like Tacker [1] and Service Function Chaining [2].


I've also argued in the past that all distro- or vendor-specific 
deployment tools (Fuel, Triple-O, etc [3]) should live outside of 
OpenStack because these projects are more products and the relentless 
drive of vendor product management (rightfully) pushes the scope of 
these applications to gobble up more and more feature space that may or 
may not have anything to do with the core OpenStack mission (and have 
more to do with those companies' product roadmap).


On the other hand, my statement that the OpenStack Foundation having 4 
different focus areas leads to a lack of, well, focus, is a general 
statement on the OpenStack *Foundation* simultaneously expanding its 
sphere of influence while at the same time losing sight of OpenStack 
itself -- and thus the push to create an Open Infrastructure Foundation 
that would be able to compete with the larger mission of the Linux 
Foundation.


[1] This is nothing against Tacker itself. I just don't believe that 
*applications* that are specially built for one particular industry 
belong in the OpenStack set of projects. I had repeatedly stated this on 
Tacker's application to become an OpenStack project, FWIW:


https://review.openstack.org/#/c/276417/

[2] There is also nothing wrong with service function chains. I just 
don't believe they belong in *OpenStack*. They more appropriately belong 
in the (Open)NFV community because they just are not applicable outside 
of that community's scope and mission.


[3] It's interesting to note that Airship was put into its own 
playground outside the bounds of the OpenStack community (but inside the 
bounds of the OpenStack Foundation). Airship is AT&T's specific 
deployment tooling for "the edge!". I actually think this was the 
correct move for this vendor-opinionated deployment tool.



So to answer your question:

 zaneb: yeah... nobody I know who argues for a small stable 
core (in Nova) has ever said there should be fewer higher layer services.

 zaneb: I'm not entirely sure where you got that idea from.


Note the emphasis on *Nova* above?

Also note that when I've said that *OpenStack* should have a smaller 
mission and scope, that doesn't mean that higher-level services aren't 
necessary or wanted.


It's just that Nova has been a dumping ground over the past 7+ years for 
features that, looking back, should never have been added to Nova (or at 
least, never added to the Compute API) [4].


What we were discussing yesterday on IRC was this:

"Which parts of the Compute API should have been implemented in other 
services?"


What we are discussing here is this:

"Which projects in the OpenStack community expanded the scope of the 
OpenStack mission beyond infrastructure-as-a-service?"


and, following that:

"What should we do about projects that expanded the scope of the 
OpenStack mission beyond infrastructure-as-a-service?"


Note that, clearly, my opinion is that OpenStack's mission should be to 
provide infrastructure as a service projects (both plumbing and porcelain).


This is MHO only. The actual OpenStack mission statement [5] is 
sufficiently vague as to provide no meaningful filtering value for 
determining new entrants to the project ecosystem.


I *personally* believe that should change in order for the *OpenStack* 
community to have some meaningful definition and differentiation from 
the broader cloud computing, application development, and network 
orchestration ecosystems.


All the best,
-jay

[4] ... or never brought into the Compute API to begin with. You know, 
vestigial tail and all that.


[5] for reference: "The OpenStack Mission is to produce a ubiquitous 
Open Source Cloud Computing platform that is easy to use, simple to 
implement, interoperable between deployments, works well at all scales, 
and meets the needs of users and operators of both public 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-26 Thread Zane Bitter

On 26/06/18 09:12, Jay Pipes wrote:
Is (one of) the problem(s) with our community that we have too small of 
a scope/footprint? No. Not in the slightest.


Incidentally, this is an interesting/amusing example of what we talked 
about this morning on IRC[1]: you say your concern is that the scope of 
*Nova* is too big and that you'd be happy to have *more* services in 
OpenStack if they took the orchestration load off Nova and left it just 
to handle the 'plumbing' part (which I agree with, while noting that 
nobody knows how to get there from here); but here you're implying that 
Kata Containers (something that will clearly have no effect either way 
on the simplicity or otherwise of Nova) shouldn't be part of the 
Foundation because it will take focus away from Nova/OpenStack.


So to answer your question:

 zaneb: yeah... nobody I know who argues for a small stable 
core (in Nova) has ever said there should be fewer higher layer services.

 zaneb: I'm not entirely sure where you got that idea from.

I guess from all the people who keep saying it ;)

Apparently somebody was saying it a year ago too :D
https://twitter.com/zerobanana/status/883052105791156225

cheers,
Zane.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-26 Thread Zane Bitter

On 26/06/18 09:12, Jay Pipes wrote:

On 06/26/2018 08:41 AM, Chris Dent wrote:

Meanwhile, to continue [last week's theme](/tc-report-18-25.html),
the TC's role as listener, mediator, and influencer lacks
definition.

Zane wrote up a blog post explaining the various ways in which the
OpenStack Foundation is 
[expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion).


One has to wonder with 4 "focus areas" for the OpenStack Foundation [1] 
whether there is any actual expectation that there will be any focus at 
all any more.


Are CI/CD and secure containers important? [2] Yes, absolutely.

Is (one of) the problem(s) with our community that we have too small of 
a scope/footprint? No. Not in the slightest.


IMHO, what we need is focus. And having 4 different focus areas doesn't 
help focus things.


One of the upshots of this change is that when discussing stuff we now 
need to be more explicit about who 'we' are.


We, the OpenStack project, will have less stuff to focus on as a result 
of this change (no Zuul, for example, and if that doesn't make you happy 
then perhaps no 'edge' stuff will ;).


We, the OpenStack Foundation, will unquestionably have more stuff.

I keep waiting for people to say "no, that isn't part of our scope". But 
all I see is people saying "yes, we will expand our scope to these new 
sets of things


Arguably we're saying both of these things, but for different 
definitions of 'our'.


(otherwise *gasp* the Linux Foundation will gobble up all 
the hype)".


I could also speculate on what the board was hoping to achieve when it 
made this move, but it would be much better if they were to communicate 
that clearly to the membership themselves. One thing we did at the joint 
leadership meeting was essentially brainstorming for a new mission 
statement for the Foundation, and this very much seemed like a post-hoc 
exercise - we (the Foundation) are operating outside the current mission 
of record, but nobody has yet articulated what our new mission is.



Just my two cents and sorry for being opinionated,


Hey, feel free to complain to the TC on openstack-dev any time. But also 
be aware that if you actually want anything to happen, you also need to 
complain to your Individual Directors of the Foundation and/or on the 
foundation list.


cheers,
Zane.


-jay

[1] https://www.openstack.org/foundation/strategic-focus-areas/

[2] I don't include "edge" in my list of things that are important 
considering nobody even knows what "edge" is yet. I fail to see how 
people can possibly "focus" on something that isn't defined.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-26 Thread Fox, Kevin M
"What is OpenStack" 

From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, June 26, 2018 6:12 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 06/26/2018 08:41 AM, Chris Dent wrote:
> Meanwhile, to continue [last week's theme](/tc-report-18-25.html),
> the TC's role as listener, mediator, and influencer lacks
> definition.
>
> Zane wrote up a blog post explaining the various ways in which the
> OpenStack Foundation is
> [expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion).

One has to wonder with 4 "focus areas" for the OpenStack Foundation [1]
whether there is any actual expectation that there will be any focus at
all any more.

Are CI/CD and secure containers important? [2] Yes, absolutely.

Is (one of) the problem(s) with our community that we have too small of
a scope/footprint? No. Not in the slightest.

IMHO, what we need is focus. And having 4 different focus areas doesn't
help focus things.

I keep waiting for people to say "no, that isn't part of our scope". But
all I see is people saying "yes, we will expand our scope to these new
sets of things (otherwise *gasp* the Linux Foundation will gobble up all
the hype)".

Just my two cents and sorry for being opinionated,
-jay

[1] https://www.openstack.org/foundation/strategic-focus-areas/

[2] I don't include "edge" in my list of things that are important
considering nobody even knows what "edge" is yet. I fail to see how
people can possibly "focus" on something that isn't defined.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-26 Thread Jay Pipes

On 06/26/2018 08:41 AM, Chris Dent wrote:

Meanwhile, to continue [last week's theme](/tc-report-18-25.html),
the TC's role as listener, mediator, and influencer lacks
definition.

Zane wrote up a blog post explaining the various ways in which the
OpenStack Foundation is 
[expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion).


One has to wonder with 4 "focus areas" for the OpenStack Foundation [1] 
whether there is any actual expectation that there will be any focus at 
all any more.


Are CI/CD and secure containers important? [2] Yes, absolutely.

Is (one of) the problem(s) with our community that we have too small of 
a scope/footprint? No. Not in the slightest.


IMHO, what we need is focus. And having 4 different focus areas doesn't 
help focus things.


I keep waiting for people to say "no, that isn't part of our scope". But 
all I see is people saying "yes, we will expand our scope to these new 
sets of things (otherwise *gasp* the Linux Foundation will gobble up all 
the hype)".


Just my two cents and sorry for being opinionated,
-jay

[1] https://www.openstack.org/foundation/strategic-focus-areas/

[2] I don't include "edge" in my list of things that are important 
considering nobody even knows what "edge" is yet. I fail to see how 
people can possibly "focus" on something that isn't defined.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 18-26

2018-06-26 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-26.html

All the bits and pieces of OpenStack are interconnected and
interdependent across the many groupings of technology and people.
When we plan or make changes, wiggling something _here_ has
consequences over _there_. Some intended, some unintended.

This is such commonly accepted wisdom that to say it risks being a
cliche but acting accordingly remains hard.

This
[morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T09:09:57)
Thierry and I had a useful conversation about the [Tech Vision
2018 etherpad](https://etherpad.openstack.org/p/tech-vision-2018).
One of the issues there is agreeing on what we're even talking
about. How can we have a vision for a "cloud" if we don't agree what
that is? There's hope that clarifying the vision will help unify
and direct energy, but as the discussion and the etherpad show,
there's work to do.

The lack of clarity on the vision is one of the reasons why
Adjutant's [application to be
official](https://review.openstack.org/#/c/553643/) still has [no
clear 
outcome](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-19.log.html#t2018-06-19T18:59:43).

Meanwhile, to continue [last week's theme](/tc-report-18-25.html),
the TC's role as listener, mediator, and influencer lacks
definition.

Zane wrote up a blog post explaining the various ways in which the
OpenStack Foundation is 
[expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion).
But this raises
[questions](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-20.log.html#t2018-06-20T15:41:41)
about what, if any, role the TC has in that expansion. It appears
that the board has decided not to do a [joint leadership
meeting](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-21.log.html#t2018-06-21T16:32:17)
at the PTG, which means discussions about such things will need to
happen in other media, or be delayed until the next summit in
Berlin.

To make up for the gap, the TC is
[planning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-21.log.html#t2018-06-21T16:54:43)
to hold [a 
gathering](http://lists.openstack.org/pipermail/openstack-tc/2018-June/001510.html)
to work on some of the much needed big-picture and
shared-understanding building.

While that shared understanding is critical, we have to be sure that
it incorporates what we can hear from people who are not long-term
members of the community. In a long discussion asking if [our
tooling makes things harder for new
contributors](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-21.log.html#t2018-06-21T15:21:24)
several of us tried to make it clear that we have an incomplete
understanding about the barriers people experience, that we often
assume rather than verify, and that sometimes our interest in and
enthusiasm for making incremental progress (because if iterating in
code is good and just, perhaps it is in social groups too?) can mean
that we avoid the deeper analysis required for paradigm shifts.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev