Red Hat OpenStack is based on RDO. It's not pretty far from it; it's
very close. It's basically productized RDO, and in the interest of
everyone's sanity we try to keep the downstream patches to a minimum.
In general I would be careful trying to take the distro analogy too far
though. The release cycles of the Red Hat Linux distros are very
different from that of the OpenStack distros. RDO would be more akin to
CentOS in terms of how closely related they are, but the relationship is
inverted. CentOS is taking the RHEL source (which is based on whatever
the current Fedora release is when a new major RHEL version gets
branched) and distributing packages based on it, while RHOS is taking
the RDO bits and productizing them. There's no point in having a
CentOS-like distro that then repackages the RHOS source because you'd
end up with essentially RDO again. RDO and RHOS don't diverge the way
Fedora and RHEL do after they are branched because they're on the same
release cycle.
So essentially the flow with the Linux distros looks like:
Upstream->Fedora->RHEL->CentOS
Whereas the OpenStack distros are:
Upstream->RDO->RHOS
With RDO serving the purpose of both Fedora and CentOS.
As for TripleO, it's been integrated with RHOS/RDO since Kilo, and I
believe it has been the recommended way to deploy in production since
then as well.
-Ben
On 07/05/2018 03:17 PM, Fox, Kevin M wrote:
I use RDO in production. It's pretty far from Red Hat OpenStack,
though it's been a while since I tried the TripleO part of RDO. Is it
pretty well integrated now? Similar to Red Hat OpenStack? Or is it
more Fedora-like than CentOS-like?
Thanks,
Kevin
------------------------------------------------------------------------
*From:* Dmitry Tantsur [dtant...@redhat.com]
*Sent:* Thursday, July 05, 2018 11:17 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [tc] [all] TC Report 18-26
On Thu, Jul 5, 2018, 19:31 Fox, Kevin M <kevin....@pnnl.gov> wrote:
We're pretty far into a tangent...
/me shrugs. I've done it. It can work.
Some things you're right about: deploying k8s is more work than
deploying Ansible. But what I said depends on context. If your goal
is to deploy/manage k8s, then having to learn how to use k8s is not a
big ask, while adding a different tool such as Ansible is an extra
cognitive dependency. Deploying k8s doesn't need a general solution
for deploying generic base OSes: just enough OS to deploy k8s, then
deploy everything on top in containers. Deploying a seed k8s with
minikube is pretty trivial. I'm not suggesting a solution here to
provide generic provisioning for every use case in the datacenter,
but enough to get a k8s-based cluster up and self-hosted enough that
you could launch other provisioning/management tools in that same
cluster, if you need that. It provides a solid base for the
datacenter on which you can easily add the services you need for
dealing with everything.
All of the microservices I mentioned can be wrapped up in a single
helm chart and deployed with a single helm install command.
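For a flavor of what that single command could look like: the repo
URL, chart name, and values below are purely hypothetical (no such
chart has actually been published), but the shape would be roughly:

```shell
# Hypothetical example: the repo URL, chart name, and value names are
# made up for illustration; no such chart exists.
helm repo add metal https://charts.example.com/metal
helm repo update

# One release brings up the dhcp/pxe/http microservices together.
helm install metal-stack metal/metal-stack \
  --namespace provisioning --create-namespace \
  --set mirror.enabled=true   # also serve the local yum mirror
```

The point is only that the whole provisioning stack can ship as one
installable unit, not that this particular chart exists.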
I don't have permission to release anything at the moment, so I
can't prove anything right now. So, take my advice with a grain of
salt. :)
Switching gears: you asked why users would use LFS when they can use
a distro, so why use OpenStack without a distro. I'd say that today,
unless you are paying a lot, there isn't really an equivalent distro
that isn't almost as much effort as LFS once you consider day-2 ops.
To compare with Red Hat again, we have a RHEL (Red Hat OpenStack) and
a Rawhide (devstack), but no equivalent of CentOS. Though I think
TripleO has been making progress on this front...
RDO is what you're looking for (the equivalent of CentOS). TripleO is
an installer project, not a distribution.
Anyway, this thread is, I think, two tangents away from the original
topic now. If folks are interested in continuing this discussion,
let's open a new thread.
Thanks,
Kevin
________________________________________
From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Wednesday, July 04, 2018 4:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
I tried hard to avoid this thread, but this message is so wrong...
On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> I don't dispute trivial, but a self-hosting k8s on bare metal is
> not incredibly hard. In fact, it is easier than you might think. k8s
> is a platform for deploying/managing services. Guess what you need
> to provision bare metal? Just a few microservices. A DHCP service:
> dhcpd in a DaemonSet works well. Some PXE infrastructure: pixiecore
> with a simple HTTP backend works pretty well in practice. A service
> to provide installation instructions: an nginx server handing out
> kickstart files, for example. And a place to fetch RPMs from, in
> case you don't have internet access or want to ensure uniformity:
> an nginx server with a mirrored yum repo. It's even possible to
> seed it on minikube and slough it off to its own cluster.
>
> The main hard part about it is that currently no one is shipping a
> reference implementation of the above. That may change...
>
> It is certainly much, much easier than deploying enough OpenStack
> to get a self-hosting ironic working.
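To make the quoted sketch concrete, the dhcpd piece could look
roughly like the DaemonSet below. The image name (networkboot/dhcpd)
and the /data config path are assumptions for illustration, not a
vetted reference implementation:

```shell
# Illustrative only: image name and config path are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dhcpd
  namespace: provisioning
spec:
  selector:
    matchLabels:
      app: dhcpd
  template:
    metadata:
      labels:
        app: dhcpd
    spec:
      hostNetwork: true          # DHCP must see the real L2 network
      containers:
      - name: dhcpd
        image: networkboot/dhcpd
        volumeMounts:
        - name: config
          mountPath: /data       # image expects /data/dhcpd.conf here
      volumes:
      - name: config
        configMap:
          name: dhcpd-conf       # holds the dhcpd.conf for the subnet
EOF
```

The pixiecore and nginx pieces would be ordinary Deployments plus a
ConfigMap or hostPath for the kickstart files and mirror content.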
Side note: no, it's not. What you describe is about as hard as
installing standalone ironic from scratch, and much harder than using
bifrost for everything. Especially when you try to do it in
production. Especially with unusual operating requirements ("no TFTP
servers on my network").
Also, sorry, I cannot resist:

"Guess what you need to orchestrate containers? Just a few things. A
container runtime. Docker works well. Some remote execution tooling.
Ansible works pretty well in practice. It is certainly much, much
easier than deploying enough k8s to get self-hosting container
orchestration working."
Such oversimplifications won't bring us anywhere. Sometimes things
are hard because they ARE hard. Where are the people complaining that
installing a full GNU/Linux distribution from upstream tarballs is
hard? How many operators here use LFS as their distro? If we are okay
with using a distro for GNU/Linux, why does using a distro for
OpenStack cause so much contention?
>
> Thanks,
> Kevin
>
> ________________________________________
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: Tuesday, July 03, 2018 10:06 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
>
> On 07/02/2018 03:31 PM, Zane Bitter wrote:
>> On 28/06/18 15:09, Fox, Kevin M wrote:
>>> * made the barrier to testing/development as low as 'curl
>>> http://......minikube; minikube start' (this spurs adoption and
>>> contribution)
>>
>> That's not so different from devstack though.
>>
>> * not having large silos in deployment projects allowed better
>> communication on common tooling.
>> * Operator-focused architecture, not project-based architecture.
>> This simplifies the deployment situation greatly.
>> * try whenever possible to focus on just the commons and push
>> vendor-specific needs to plugins so vendors can deal with vendor
>> issues directly and not corrupt the core.
>>
>> I agree with all of those, but to be fair to OpenStack, you're
>> leaving out arguably the most important one:
>>
>> * Installation instructions start with "assume a working datacenter"
>>
>> They have that luxury; we do not. (To be clear, they are 100% right
>> to take full advantage of that luxury. Although if there are still
>> folks who go around saying that it's a trivial problem and
>> OpenStackers must all be idiots for making it look so difficult,
>> they should really stop embarrassing themselves.)
>
> This.
>
> There is nothing trivial about the creation of a working datacenter --
> never mind a *well-running* datacenter. Comparing Kubernetes to
> OpenStack -- particularly OpenStack's lower levels -- is missing this
> fundamental point and ends up comparing apples to oranges.
>
> Best,
> -jay
>
>
__________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev