Re: Follow up on OKD 4

2019-07-25 Thread Jason Brooks
On Thu, Jul 25, 2019, 5:39 AM Christian Glombek  wrote:

> I agree with the sentiment that supporting more OSes is a good thing.
> However, I believe it is in the community's best interest to get a working
> version of OKD4 sooner rather than later.
>

+1. An OKD that's close to OCP will come faster. Plus, the built-in host
management is a great feature that will benefit the community.

> Management of the underlying machines' OS by k8s/a set of concerted
> operators is a new (super exciting!) feature that is deeply integrated into
> OpenShift 4, but is therefore at the moment also tightly coupled to key
> OS/infra components like Ignition and RPM-OSTree.
> Going from here, I see making it work with Fedora CoreOS (which shares
> those components) as the path of least resistance for getting to a first
> usable OKD4 release.
> From there, we can see where else the community wants to go and how we can
> abstract away more of the underlying parts.
> I think it is safe to say Red Hat has never shied away from upstreaming
> its technologies, and I see no reason why it should be any different
> in this case.
> Making all of this more universally applicable (and therefore
> upstreamable) in the longer term is a thing I'd love to see done within the
> OKD community.
>
> On Thu, Jul 25, 2019 at 10:20 AM Aleksandar Lazic <
> openshift-li...@me2digital.com> wrote:
>
>> Hi.
>>
>> Am 25.07.2019 um 06:52 schrieb Michael Gugino:
>> > I think FCoS could be a mutable detail.  To start with, support for
>> > plain-old-fedora would be helpful to make the platform more portable,
>> > particularly the MCO and machine-api.  If I had to state a goal, it
>> > would be "Bring OKD to the largest possible range of linux distros to
>> > become the de facto implementation of kubernetes."
>>
>> I agree with Michael here. FCoS, or CoS in general, looks like a good idea
>> technically, but it limits the flexibility of possible solutions.
>>
>> For example, when you need to change some system settings, you will need
>> to create a new OS image; this is not very usable in some environments.
>>
>> It would be nice to have the good old option to use the ansible installer
>> to install OKD/OpenShift on other Linux distributions where Ansible is
>> able to run.
>>
>> > Also, it would be helpful (as previously stated) to build communities
>> > around some of our components that might not have a place in the
>> > official kubernetes, but are valuable downstream components
>> > nevertheless.
>> >
>> > Anyway, I'm just throwing some ideas out there; I wouldn't consider my
>> > statements as advocating strongly in any direction.  Surely FCoS is
>> > the natural fit, but I think considering other distros merits
>> > discussion.
>>
>> +1
>>
>> Regards
>> Aleks
>>
>>
>> > On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman 
>> wrote:
>> >>
>> >>> On Jul 24, 2019, at 9:14 PM, Michael Gugino 
>> wrote:
>> >>>
>> >>> I think what I'm looking for is more 'modular' rather than DIY.  CVO
>> >>> would need to be adapted to separate container payload from host
>> >>> software (or use something else), and maintaining cross-distro
>> >>> machine-configs might prove tedious, but for the most part,
>> >>> everything from the k8s bins up should be more or less the same.
>> >>>
>> >>> MCD is good software, but there's not really much going on there that
>> >>> can't be ported to any other OS.  MCD downloads a payload, extracts
>> >>> files, rebases ostree, reboots host.  You can do all of those steps
>> >>> except 'rebases ostree' on any distro.  And instead of 'rebases
>> >>> ostree', we could pull down a container that acts as a local repo that
>> >>> contains all the bits you need to upgrade your host across releases.
>> >>> Users could do things to break this workflow, but it should otherwise
>> >>> work if they aren't fiddling with the hosts.  The MCD payload happens
>> >>> to embed an ignition payload, but it doesn't actually run ignition,
>> >>> just utilizes the file format.
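
A minimal sketch of the generic update flow Michael describes above, in
Python for illustration. The payload URL and ostree ref are assumptions;
rpm-ostree rebase and systemctl reboot are the real commands on an ostree
host, and step 3 is the one piece that would be swapped per distro:

import subprocess
import tarfile
import urllib.request

# Assumed locations -- purely illustrative.
PAYLOAD_URL = "https://example.com/okd/host-payload.tar"
OSTREE_REF = "fedora/x86_64/coreos/stable"

def update_host():
    # 1. Download the payload (MCD ships it inside a container instead).
    local_path, _ = urllib.request.urlretrieve(PAYLOAD_URL)

    # 2. Extract files onto the host filesystem.
    with tarfile.open(local_path) as tar:
        tar.extractall(path="/")

    # 3. Rebase the OS -- the only ostree-specific step; on another distro
    #    this could instead pull a container acting as a local package repo.
    subprocess.run(["rpm-ostree", "rebase", OSTREE_REF], check=True)

    # 4. Reboot into the new deployment.
    subprocess.run(["systemctl", "reboot"], check=True)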
>> >>>
>> >>> From my viewpoint, there's nothing particularly special about ignition
>> >>> in our current process either.  I had the entire OCP 4 stack running
>> >>> on RHEL using the same exact ignition payload, a minimal amount of
>> >>> ansible (which could have been replaced by cloud-init userdata), and a
>> >>> small python library to parse the ignition files.  I was also building
>> >>> repo containers for 3.10 and 3.11 for Fedora.  Not to say the
>> >>> OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
>> >>> 4 came together quite nicely.
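
A sketch of the kind of "small python library" mentioned above, assuming
Ignition-style field names: it reads a config and materializes the
storage.files entries on disk (error handling, users, and links omitted):

import json
import os
import urllib.request

def apply_ignition_files(config_path):
    """Write out the storage.files entries of an Ignition config."""
    with open(config_path) as f:
        config = json.load(f)
    for entry in config.get("storage", {}).get("files", []):
        source = entry.get("contents", {}).get("source", "data:,")
        data = urllib.request.urlopen(source).read()  # handles data: URLs
        target_dir = os.path.dirname(entry["path"])
        if target_dir:
            os.makedirs(target_dir, exist_ok=True)
        with open(entry["path"], "wb") as out:
            out.write(data)
        # Ignition encodes mode as a decimal integer (e.g. 420 == 0o644).
        os.chmod(entry["path"], entry.get("mode", 0o644))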
>> >>>
>> >>> I'm all for 'not managing machines' but I'm not sure it has to look
>> >>> exactly like OCP.  Seems the OCP installer and CVO could be
>> >>> adapted/replaced with something else, MCD adapted, pretty much
>> >>> everything else works the same.
>> >>
>> >> Sure - why?  As in, what do you want to do?  What distro do you want
>> >> to use instead of fcos?  What goals / outcomes do you want out of 

Re: Follow up on OKD 4

2019-07-25 Thread Clayton Coleman
> On Jul 25, 2019, at 2:32 PM, Fox, Kevin M  wrote:
>
> While "just works" is a great goal, and it's relatively easy to accomplish in
> the nice, virtualized world of VMs, I've found it is often not the case in
> the dirty realm of real physical hardware. Sometimes you must rebuild/replace
> a kernel or add a kernel module to get things to actually work. If you don't
> support that, it's going to be a problem for many a site.

Ok, so this would be the “I want to be able to run my own kernel” use case.

That’s definitely something I would expect to be available with OKD in
the existing proposal, you would just be providing a different ostree
image at install time.
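
As a concrete aside: on an ostree host the kernel can also be swapped after
install, distinct from supplying a custom image at install time. A hedged
sketch, where the RPM paths are hypothetical but the rpm-ostree commands are
real:

import subprocess

def replace_kernel(rpm_paths):
    # Layer replacement kernel packages over the base ostree deployment.
    subprocess.run(["rpm-ostree", "override", "replace"] + rpm_paths,
                   check=True)
    # Boot into the new deployment.
    subprocess.run(["systemctl", "reboot"], check=True)

# e.g. replace_kernel(["kernel-5.2.2-custom.x86_64.rpm",
#                      "kernel-core-5.2.2-custom.x86_64.rpm"])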

How often does this happen with fedora today?  I don’t hear it brought
up often so I may just be oblivious to something folks deal with more.
Certainly fcos should work everywhere existing fedora works, but if a
substantial set of people want that flexibility it’s a great data
point.

>
> Thanks,
> Kevin
> 
> From: dev-boun...@lists.openshift.redhat.com 
> [dev-boun...@lists.openshift.redhat.com] on behalf of Josh Berkus 
> [jber...@redhat.com]
> Sent: Thursday, July 25, 2019 11:23 AM
> To: Clayton Coleman; Aleksandar Lazic
> Cc: users; dev
> Subject: Re: Follow up on OKD 4
>
>> On 7/25/19 6:51 AM, Clayton Coleman wrote:
>> 1. Openshift 4 isn’t flexible in the ways people want (i.e. you want to
>> add an rpm to the OS to get a kernel module, or you want to ship a
>> complex set of config and managing things with mcd looks too hard)
>> 2. You want to build and maintain these things yourself, so the “just
>> works” mindset doesn’t appeal.
>
> FWIW, 2.5 years ago when we were exploring having a specific
> Atomic+Openshift distro for Kubernetes, we did a straw poll of Fedora
> Cloud users.  We found that 2/3 of respondents wanted a complete package
> (that is, OKD+Atomic) that installed and "just worked" out of the box,
> and far fewer folks wanted to hack their own.  We never had such a
> release due to insufficient engineering resources (and getting stuck
> behind the complete rewrite of the Fedora build pipelines), but that was
> the original goal.
>
> Things may have changed in the interim, but I think that a broad user
> survey would still find a strong audience for a "just works" distro in
> Fedora.
>
> --
> --
> Josh Berkus
> Kubernetes Community
> Red Hat OSAS


RE: Follow up on OKD 4

2019-07-25 Thread Fox, Kevin M
While "just works" is a great goal, and it's relatively easy to accomplish in
the nice, virtualized world of VMs, I've found it is often not the case in the
dirty realm of real physical hardware. Sometimes you must rebuild/replace a
kernel or add a kernel module to get things to actually work. If you don't
support that, it's going to be a problem for many a site.

Thanks,
Kevin

From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Josh Berkus 
[jber...@redhat.com]
Sent: Thursday, July 25, 2019 11:23 AM
To: Clayton Coleman; Aleksandar Lazic
Cc: users; dev
Subject: Re: Follow up on OKD 4

On 7/25/19 6:51 AM, Clayton Coleman wrote:
> 1. Openshift 4 isn’t flexible in the ways people want (i.e. you want to
> add an rpm to the OS to get a kernel module, or you want to ship a
> complex set of config and managing things with mcd looks too hard)
> 2. You want to build and maintain these things yourself, so the “just
> works” mindset doesn’t appeal.

FWIW, 2.5 years ago when we were exploring having a specific
Atomic+Openshift distro for Kubernetes, we did a straw poll of Fedora
> Cloud users.  We found that 2/3 of respondents wanted a complete package
(that is, OKD+Atomic) that installed and "just worked" out of the box,
and far fewer folks wanted to hack their own.  We never had such a
release due to insufficient engineering resources (and getting stuck
behind the complete rewrite of the Fedora build pipelines), but that was
the original goal.

Things may have changed in the interim, but I think that a broad user
survey would still find a strong audience for a "just works" distro in
Fedora.

--
--
Josh Berkus
Kubernetes Community
Red Hat OSAS



Re: Follow up on OKD 4

2019-07-25 Thread Clayton Coleman
On Thu, Jul 25, 2019 at 11:58 AM Fox, Kevin M  wrote:

> Yeah, there is the question of what it is now, and the question of what it
> potentially should be. I'm asking more from a where-should-it-go standpoint.
>
> Right now, k8s distros are very much in the early linux distro days.
> Here's how to get a base os going. OK, now you're on your own to deploy
> anything on it. Download tarball, build it, install it, write init script,
> etc. If you look at the total package list in a modern linux distro, the
> os-level stuff is usually a very small percentage of the software in the
> distro.
>
> These days we've moved on so far from "the distro is a kernel" that folks
> even talk about running a redhat, a fedora, or a centos container. That's
> really #4-level stuff only.
>
> olm is like yum: a tool to install stuff. So kind of a #3 tool. It's the
> software packaging itself (mysql, apache, etc.) that is also part of the
> distro and is mostly missing, I think. A container is like an rpm. One way
> to define a linux distro is as a collection of prebuilt/tested/supported
> rpms for common software.
>
> In the linux os today, you can start from "I want to deploy a mysql
> server" and I trust redhat to provide good software, so you go and yum
> install mysql. I could similarly imagine OKD as a collection of software to
> deploy on top of a k8s, where there is an optional, self-hosting OS part
> (1-3), the same way Fedora/CentOS can be used purely at #4 with containers,
> or as a full-blown os+workloads.
>
> Sure, you can let the community build all their own stuff. That's possible
> in linux distros today too and shouldn't be blocked. But it misses the
> point of why folks deploy software from linux distros over getting it from
> the source. I prefer to run mysql from redhat as opposed to upstream
> because of all the extras the distro packagers provide.
>
> Not trying to shortchange all the hard work in getting a k8s going. OKD's
> doing an amazing job at that. That's really important too. But so is all
> the distro work around software packaging, and that's still much more in
> its infancy, I think. We're still mostly at the point where we're debating
> whether that's the end user's problem.
>
> The package management tools are coming around nicely, but not so much yet
> the distro packages. How do we get a k8s distro of this form going? Is that
> in the general scope of OKD, or should there be a whole new project just
> for that?
>

Honestly we've been thinking of the core components as just that - CVO,
MCO, release images, integrated OS with updates.  That's the core compose
(the minimum set of components you need for maintainable stuff).  I think
Mike's comment about MCO supporting multiple OSes is something like that,
and CVO + release tooling is that too.  I think from the perspective of the
project as a whole I had been thinking that "OKD is both the tools and
process for k8s-as-distro".  OLM is a small component of that, like yum.
CVO + installer + bootstrap is anaconda.  The machine-os and k8s are like
systemd.

That's why I'm asking about how much flexibility people want - after all,
Fedora doesn't let you choose which glibc you use.  If people want to
compose the open source projects differently, all the projects are there
today and should be easy to fork, and we could also consider how we make
them easier to fork in the future.

The discussion OKD4 started with was "what kind of distro do people want".  If
there's a group who want to get more involved in the "build a distro" part
of tools that exist, that definitely seems like a different use case.


>
> The redhat container catalog is a good start too, but we need to be
> thinking all the way up to the k8s level.
>
> Should it be "okd k8s distro" or "fedora k8s distro" or something else?
>
> Thanks,
> Kevin
>
> --
> *From:* Clayton Coleman [ccole...@redhat.com]
> *Sent:* Wednesday, July 24, 2019 10:31 AM
> *To:* Fox, Kevin M
> *Cc:* Michael Gugino; users; dev
> *Subject:* Re: Follow up on OKD 4
>
>
>
> On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M  wrote:
>
>> Ah, this raises an interesting discussion I've been wanting to have for a
>> while.
>>
>> There are potentially lots of things you could call a distro.
>>
>> Most linux distros are made up of several layers:
>> 1. boot loader - components to get the kernel running
>> 2. kernel - provides a place to run higher level software
>> 3. os level services - singletons needed to really call the os an os.
>> (dhcp, systemd, dbus, etc)
>> 4. prebuilt/tested, generic software/services - workload (mysql, apache,
>> firefox, gnome, etc)
>>
>> For the sake of discussion, let's map these layers a bit, and assume that
>> the openshift-specific components can be added to a vanilla kubernetes. We
>> then have:
>>
>> 1. linux distro (could be k8s specific and micro)
>> 2. kubernetes control plane & kubelets
>> 3. openshift components (auth, ingress, cicd/etc)
>> 4. ?  (operators + containers, helm + containers, etc)

Re: Follow up on OKD 4

2019-07-25 Thread Josh Berkus
On 7/25/19 6:51 AM, Clayton Coleman wrote:
> 1. Openshift 4 isn’t flexible in the ways people want (i.e. you want to
> add an rpm to the OS to get a kernel module, or you want to ship a
> complex set of config and managing things with mcd looks too hard)
> 2. You want to build and maintain these things yourself, so the “just
> works” mindset doesn’t appeal.

FWIW, 2.5 years ago when we were exploring having a specific
Atomic+Openshift distro for Kubernetes, we did a straw poll of Fedora
Cloud users.  We found that 2/3 of respondents wanted a complete package
(that is, OKD+Atomic) that installed and "just worked" out of the box,
and far fewer folks wanted to hack their own.  We never had such a
release due to insufficient engineering resources (and getting stuck
behind the complete rewrite of the Fedora build pipelines), but that was
the original goal.

Things may have changed in the interim, but I think that a broad user
survey would still find a strong audience for a "just works" distro in
Fedora.

-- 
--
Josh Berkus
Kubernetes Community
Red Hat OSAS



Re: Follow up on OKD 4

2019-07-25 Thread Daniel Comnea
On Thu, Jul 25, 2019 at 5:01 PM Michael Gugino  wrote:

> I don't really view the 'bucket of parts' and 'complete solution' as
> competing ideas.  It would be nice to build the 'complete solution'
> from the 'bucket of parts' in a reproducible, customizable manner.
> "How is this put together" should be easily followed, enough so that
> someone can 'put it together' on their own infrastructure without
> having to be an expert in designing and configuring the build system.
>
> IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
> the openshift-specific bits from source, I could point at any
> repository I wanted, I could point to any image registry I wanted, I
> could use any distro I wanted.  I could replace the parts I wanted to;
> or I could just run it as-is from the published sources and not worry
> about replacing things.  I even built Fedora Atomic host rpm-trees
> with all the kubelet bits pre-installed, similar to what we're doing
> with CoreOS now in 3.x.  It was a great experience; building my own
> system images and running updates was trivial.
>
> I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
> of flexibility and easy-to-use tooling.
>
> So maybe what we are asking here is:

   - opinionated OCP 4 philosophy => OKD 4 + FCOS (IPI and UPI) using
   ignition, CVO, etc.
   - DIY kube philosophy, reusing as many v4 components as possible but with
   your own preferred operating system


In terms of approach and priority, I think it is fair to adopt a baby-steps
approach where:

   - phase 1 = try to get OKD 4 + FCOS out ASAP so folks can start building
   up the knowledge around operating the new solution in a full production env
   - phase 2 = once experience/knowledge has been built up, we can crack on
   with reverse engineering and see what we can swap, etc.





> On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman 
> wrote:
> >
> > > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic <
> openshift-li...@me2digital.com> wrote:
> > >
> > > Hi.
> > >
> > >> Am 25.07.2019 um 06:52 schrieb Michael Gugino:
> > >> I think FCoS could be a mutable detail.  To start with, support for
> > >> plain-old-fedora would be helpful to make the platform more portable,
> > >> particularly the MCO and machine-api.  If I had to state a goal, it
> > >> would be "Bring OKD to the largest possible range of linux distros to
> > >> become the de facto implementation of kubernetes."
> > >
> > > I agree with Michael here. FCoS, or CoS in general, looks like a good
> > > idea technically, but it limits the flexibility of possible solutions.
> > >
> > > For example, when you need to change some system settings, you will
> > > need to create a new OS image; this is not very usable in some
> > > environments.
> >
> > I think something we haven’t emphasized enough is that openshift 4 is
> > very heavily structured around changing the cost and mental model
> > around this.  The goal was and is to make these sorts of things
> > unnecessary.  Changing machine settings by building golden images is
> > already the “wrong” (expensive and error prone) pattern - instead, it
> > should be easy to reconfigure machines or to launch new containers to
> > run software on those machines.  There may be two factors here at
> > work:
> >
> > 1. Openshift 4 isn’t flexible in the ways people want (i.e. you want to
> > add an rpm to the OS to get a kernel module, or you want to ship a
> > complex set of config and managing things with mcd looks too hard)
> > 2. You want to build and maintain these things yourself, so the “just
> > works” mindset doesn’t appeal.
> >
> > The initial doc alluded to the DIY / bucket of parts use case (I can
> > assemble this on my own but slightly differently) - maybe we can go
> > further now and describe the goal / use case as:
> >
> > I want to be able to compose my own Kubernetes distribution, and I’m
> > willing to give up continuous automatic updates to gain flexibility in
> > picking my own software
> >
> > Does that sound like it captures your request?
> >
> > Note that a key reason why the OS is integrated is so that we can keep
> > machines up to date and do rolling control plane upgrades with no
> > risk.  If you take the OS out of the equation the risk goes up
> > substantially, but if you’re willing to give that up then yes, you
> > could build an OKD that doesn’t tie to the OS.  This trade off is an
> > important one for folks to discuss.  I’d been assuming that people
> > *want* the automatic and safe upgrades, but maybe that’s a bad
> > assumption.
> >
> > What would you be willing to give up?
> >
> > >
> > > It would be nice to have the good old option to use the ansible
> > > installer to install OKD/OpenShift on other Linux distributions where
> > > Ansible is able to run.
> > >
> > >> Also, it would be helpful (as previously stated) to build communities
> > >> around some of our components that might not have a place in the
> > >> official kubernetes, but are valuable downstream components
> > >> 

Re: Follow up on OKD 4

2019-07-25 Thread Michael Gugino
I don't really view the 'bucket of parts' and 'complete solution' as
competing ideas.  It would be nice to build the 'complete solution'
from the 'bucket of parts' in a reproducible, customizable manner.
"How is this put together" should be easily followed, enough so that
someone can 'put it together' on their own infrastructure without
having to be an expert in designing and configuring the build system.

IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
the openshift-specific bits from source, I could point at any
repository I wanted, I could point to any image registry I wanted, I
could use any distro I wanted.  I could replace the parts I wanted to;
or I could just run it as-is from the published sources and not worry
about replacing things.  I even built Fedora Atomic host rpm-trees
with all the kubelet bits pre-installed, similar to what we're doing
with CoreOS now in 3.x.  It was a great experience; building my own
system images and running updates was trivial.
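
For context, those Atomic Host composes were driven by rpm-ostree treefiles.
A sketch of what one with the kubelet bits might look like; the ref, repo
ids, and package names here are assumptions, not a tested manifest:

import json

# Illustrative treefile: ref, repo ids, and package names are assumptions.
treefile = {
    "ref": "fedora/29/x86_64/kube-atomic",
    "repos": ["fedora", "fedora-updates"],
    "include": "fedora-atomic-host.json",   # inherit the stock Atomic base
    "packages": ["kubernetes-kubelet", "cri-o"],
}

with open("kube-atomic.json", "w") as f:
    json.dump(treefile, f, indent=2)

# Then compose with: rpm-ostree compose tree --repo=/srv/repo kube-atomic.json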

I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
of flexibility and easy-to-use tooling.

On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman  wrote:
>
> > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic 
> >  wrote:
> >
> > Hi.
> >
> >> Am 25.07.2019 um 06:52 schrieb Michael Gugino:
> >> I think FCoS could be a mutable detail.  To start with, support for
> >> plain-old-fedora would be helpful to make the platform more portable,
> >> particularly the MCO and machine-api.  If I had to state a goal, it
> >> would be "Bring OKD to the largest possible range of linux distros to
> >> become the de facto implementation of kubernetes."
> >
> > I agree with Michael here. FCoS, or CoS in general, looks like a good
> > idea technically, but it limits the flexibility of possible solutions.
> >
> > For example, when you need to change some system settings, you will need
> > to create a new OS image; this is not very usable in some environments.
>
> I think something we haven’t emphasized enough is that openshift 4 is
> very heavily structured around changing the cost and mental model
> around this.  The goal was and is to make these sorts of things
> unnecessary.  Changing machine settings by building golden images is
> already the “wrong” (expensive and error prone) pattern - instead, it
> should be easy to reconfigure machines or to launch new containers to
> run software on those machines.  There may be two factors here at
> work:
>
> 1. Openshift 4 isn’t flexible in the ways people want (i.e. you want to
> add an rpm to the OS to get a kernel module, or you want to ship a
> complex set of config and managing things with mcd looks too hard)
> 2. You want to build and maintain these things yourself, so the “just
> works” mindset doesn’t appeal.
>
> The initial doc alluded to the DIY / bucket of parts use case (I can
> assemble this on my own but slightly differently) - maybe we can go
> further now and describe the goal / use case as:
>
> I want to be able to compose my own Kubernetes distribution, and I’m
> willing to give up continuous automatic updates to gain flexibility in
> picking my own software
>
> Does that sound like it captures your request?
>
> Note that a key reason why the OS is integrated is so that we can keep
> machines up to date and do rolling control plane upgrades with no
> risk.  If you take the OS out of the equation the risk goes up
> substantially, but if you’re willing to give that up then yes, you
> could build an OKD that doesn’t tie to the OS.  This trade off is an
> important one for folks to discuss.  I’d been assuming that people
> *want* the automatic and safe upgrades, but maybe that’s a bad
> assumption.
>
> What would you be willing to give up?
>
> >
> > It would be nice to have the good old option to use the ansible installer
> > to install OKD/OpenShift on other Linux distributions where Ansible is
> > able to run.
> >
> >> Also, it would be helpful (as previously stated) to build communities
> >> around some of our components that might not have a place in the
> >> official kubernetes, but are valuable downstream components
> >> nevertheless.
> >>
> >> Anyway, I'm just throwing some ideas out there; I wouldn't consider my
> >> statements as advocating strongly in any direction.  Surely FCoS is
> >> the natural fit, but I think considering other distros merits
> >> discussion.
> >
> > +1
> >
> > Regards
> > Aleks
> >
> >
> >>> On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman  
> >>> wrote:
> >>>
>  On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:
> 
>  I think what I'm looking for is more 'modular' rather than DIY.  CVO
>  would need to be adapted to separate container payload from host
>  software (or use something else), and maintaining cross-distro
>  machine-configs might prove tedious, but for the most part,
>  everything from the k8s bins up should be more or less the same.
> 
>  MCD is good software, but there's not 

RE: Follow up on OKD 4

2019-07-25 Thread Fox, Kevin M
Yeah, there is the question of what it is now, and the question of what it
potentially should be. I'm asking more from a where-should-it-go standpoint.

Right now, k8s distros are very much in the early linux distro days. Here's
how to get a base os going. OK, now you're on your own to deploy anything on
it. Download tarball, build it, install it, write init script, etc. If you
look at the total package list in a modern linux distro, the os-level stuff
is usually a very small percentage of the software in the distro.

These days we've moved on so far from "the distro is a kernel" that folks even
talk about running a redhat, a fedora, or a centos container. That's really
#4-level stuff only.

olm is like yum: a tool to install stuff. So kind of a #3 tool. It's the
software packaging itself (mysql, apache, etc.) that is also part of the
distro and is mostly missing, I think. A container is like an rpm. One way to
define a linux distro is as a collection of prebuilt/tested/supported rpms
for common software.
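
To ground the analogy: the OLM counterpart of "yum install mysql" is creating
a Subscription object. A hedged sketch, with the operator name, channel, and
catalog source as illustrative assumptions rather than a tested manifest:

import json

# Sketch of the "olm is like yum" analogy: installing an operator via an
# OLM Subscription. The API group/kind are real OLM types; the operator
# name, channel, and catalog source are illustrative assumptions.
subscription = {
    "apiVersion": "operators.coreos.com/v1alpha1",
    "kind": "Subscription",
    "metadata": {"name": "mysql-operator", "namespace": "operators"},
    "spec": {
        "name": "mysql-operator",            # the "package" being installed
        "channel": "stable",                 # like picking a yum repo channel
        "source": "community-operators",     # like a configured repository
        "sourceNamespace": "openshift-marketplace",
    },
}

# Pipe into the cluster with: python subscription.py | oc apply -f -
print(json.dumps(subscription, indent=2))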

In the linux os today, you can start from "I want to deploy a mysql server,"
and I trust redhat to provide good software, so you go and yum install mysql.
I could similarly imagine OKD as a collection of software to deploy on top of
a k8s, where there is an optional, self-hosting OS part (1-3), the same way
Fedora/CentOS can be used purely at #4 with containers, or as a full-blown
os+workloads.

Sure, you can let the community build all their own stuff. That's possible in
linux distros today too and shouldn't be blocked. But it misses the point of
why folks deploy software from linux distros over getting it from the source.
I prefer to run mysql from redhat as opposed to upstream because of all the
extras the distro packagers provide.

Not trying to shortchange all the hard work in getting a k8s going. OKD's
doing an amazing job at that. That's really important too. But so is all the
distro work around software packaging, and that's still much more in its
infancy, I think. We're still mostly at the point where we're debating
whether that's the end user's problem.

The package management tools are coming around nicely, but not so much yet the 
distro packages. How do we get a k8s distro of this form going? Is that in the 
general scope of OKD, or should there be a whole new project just for that?

The redhat container catalog is a good start too, but we need to be thinking 
all the way up to the k8s level.

Should it be "okd k8s distro" or "fedora k8s distro" or something else?

Thanks,
Kevin


From: Clayton Coleman [ccole...@redhat.com]
Sent: Wednesday, July 24, 2019 10:31 AM
To: Fox, Kevin M
Cc: Michael Gugino; users; dev
Subject: Re: Follow up on OKD 4



On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M <kevin@pnnl.gov> wrote:
Ah, this raises an interesting discussion I've been wanting to have for a while.

There are potentially lots of things you could call a distro.

Most linux distros are made up of several layers:
1. boot loader - components to get the kernel running
2. kernel - provides a place to run higher level software
3. os level services - singletons needed to really call the os an os. (dhcp, 
systemd, dbus, etc)
4. prebuilt/tested, generic software/services - workload (mysql, apache, 
firefox, gnome, etc)

For the sake of discussion, let's map these layers a bit, and assume that the
openshift-specific components can be added to a vanilla kubernetes. We then have:

1. linux distro (could be k8s specific and micro)
2. kubernetes control plane & kubelets
3. openshift components (auth, ingress, cicd/etc)
4. ?  (operators + containers, helm + containers, etc)

openshift used to be defined as being 1-3.

As things like aks/eks/gke make it easy to deploy 1-2, maybe openshift should
really become modular so it focuses more on 3 and 4.

That's interesting that you'd say that.  I think kube today is like "install a 
kernel with bash and serial port magic", whereas OpenShift 4 is "here's a 
compose, an installer, a disk formatter, yum, yum repos, lifecycle, glibc, 
optional packages, and sys utils".  I don't know if you can extend the analogy 
there (if you want to use EKS, you're effectively running on someone's VPS, but 
you can only use their distro and you can't change anything), but definitely a 
good debate.


As for having something that provides a #1 that is super tiny/easy to maintain
so that you can do #2 on top easily, I'm for that as well, but it should be
decoupled from 3-4, I think. Should you be able to switch out your #1 for
someone else's #1 while keeping the rest? That's the question from earlier in
the thread.

I think the analogy I've been using is that openshift is a proper distro in the 
sense that you don't take someone's random kernel and use it with someone 
else's random glibc and a third party's random gcc, but you might not care 
about the stuff on top.  The things in 3 for kube feel more like glibc than 
"which version of Firefox do I 

Re: Follow up on OKD 4

2019-07-25 Thread Clayton Coleman
> On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic 
>  wrote:
>
> Hi.
>
>> Am 25.07.2019 um 06:52 schrieb Michael Gugino:
>> I think FCoS could be a mutable detail.  To start with, support for
>> plain-old-fedora would be helpful to make the platform more portable,
>> particularly the MCO and machine-api.  If I had to state a goal, it
>> would be "Bring OKD to the largest possible range of linux distros to
> >> become the de facto implementation of kubernetes."
>
> I agree with Michael here. FCoS, or CoS in general, looks like a good idea
> technically, but it limits the flexibility of possible solutions.
>
> For example, when you need to change some system settings, you will need to
> create a new OS image; this is not very usable in some environments.

I think something we haven’t emphasized enough is that openshift 4 is
very heavily structured around changing the cost and mental model
around this.  The goal was and is to make these sorts of things
unnecessary.  Changing machine settings by building golden images is
already the “wrong” (expensive and error prone) pattern - instead, it
should be easy to reconfigure machines or to launch new containers to
run software on those machines.  There may be two factors here at
work:

1. Openshift 4 isn’t flexible in the ways people want (i.e. you want to
add an rpm to the OS to get a kernel module, or you want to ship a
complex set of config and managing things with mcd looks too hard)
2. You want to build and maintain these things yourself, so the “just
works” mindset doesn’t appeal.

The initial doc alluded to the DIY / bucket of parts use case (I can
assemble this on my own but slightly differently) - maybe we can go
further now and describe the goal / use case as:

I want to be able to compose my own Kubernetes distribution, and I’m
willing to give up continuous automatic updates to gain flexibility in
picking my own software

Does that sound like it captures your request?

Note that a key reason why the OS is integrated is so that we can keep
machines up to date and do rolling control plane upgrades with no
risk.  If you take the OS out of the equation the risk goes up
substantially, but if you’re willing to give that up then yes, you
could build an OKD that doesn’t tie to the OS.  This trade off is an
important one for folks to discuss.  I’d been assuming that people
*want* the automatic and safe upgrades, but maybe that’s a bad
assumption.

What would you be willing to give up?
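
To make the "reconfigure machines instead of building golden images" model
above concrete, here is a hedged sketch of a MachineConfig that changes a
system setting on every worker through the MCO. Field names follow the
machineconfiguration.openshift.io/v1 API as I understand it; the name, the
sysctl content, and the Ignition spec version are illustrative assumptions:

import json
from urllib.parse import quote

SYSCTL = "vm.max_map_count = 262144\n"  # example setting only

# Sketch of reconfiguring machines via the MCO instead of building a new
# OS image: a MachineConfig laying down a sysctl file on all workers.
machine_config = {
    "apiVersion": "machineconfiguration.openshift.io/v1",
    "kind": "MachineConfig",
    "metadata": {
        "name": "99-worker-custom-sysctl",
        "labels": {"machineconfiguration.openshift.io/role": "worker"},
    },
    "spec": {
        # The embedded payload reuses the Ignition file format, as
        # discussed earlier in the thread.
        "config": {
            "ignition": {"version": "2.2.0"},
            "storage": {"files": [{
                "filesystem": "root",
                "path": "/etc/sysctl.d/99-custom.conf",
                "mode": 0o644,  # serialized as decimal, per Ignition
                "contents": {"source": "data:," + quote(SYSCTL)},
            }]},
        },
    },
}

# The MCO rolls this out and reboots nodes as needed: pipe to `oc apply -f -`
print(json.dumps(machine_config, indent=2))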

>
> It would be nice to have the good old option to use the ansible installer to
> install OKD/OpenShift on other Linux distributions where Ansible is able to
> run.
>
>> Also, it would be helpful (as previously stated) to build communities
>> around some of our components that might not have a place in the
>> official kubernetes, but are valuable downstream components
>> nevertheless.
>>
>> Anyway, I'm just throwing some ideas out there; I wouldn't consider my
>> statements as advocating strongly in any direction.  Surely FCoS is
>> the natural fit, but I think considering other distros merits
>> discussion.
>
> +1
>
> Regards
> Aleks
>
>
>>> On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman  wrote:
>>>
 On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:

 I think what I'm looking for is more 'modular' rather than DIY.  CVO
 would need to be adapted to separate container payload from host
 software (or use something else), and maintaining cross-distro
 machine-configs might prove tedious, but for the most part,
 everything from the k8s bins up should be more or less the same.

 MCD is good software, but there's not really much going on there that
 can't be ported to any other OS.  MCD downloads a payload, extracts
 files, rebases ostree, reboots host.  You can do all of those steps
 except 'rebases ostree' on any distro.  And instead of 'rebases
 ostree', we could pull down a container that acts as a local repo that
 contains all the bits you need to upgrade your host across releases.
 Users could do things to break this workflow, but it should otherwise
 work if they aren't fiddling with the hosts.  The MCD payload happens
 to embed an ignition payload, but it doesn't actually run ignition,
 just utilizes the file format.

 From my viewpoint, there's nothing particularly special about ignition
 in our current process either.  I had the entire OCP 4 stack running
 on RHEL using the same exact ignition payload, a minimal amount of
 ansible (which could have been replaced by cloud-init userdata), and a
 small python library to parse the ignition files.  I was also building
 repo containers for 3.10 and 3.11 for Fedora.  Not to say the
 OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
 4 came together quite nicely.

 I'm all for 'not managing machines' but I'm not sure it has to look
 exactly like OCP.  Seems the OCP installer and CVO could be