Re: Follow up on OKD 4

2019-08-01 Thread Colin Walters
On Wed, Jul 24, 2019, at 10:40 AM, Michael Gugino wrote:
> I tried FCoS prior to the release by using the assembler on github.
> Too much secret sauce in how to actually construct an image.  I
> thought atomic was much more polished, not really sure what the
> value-add of ignition is in this use case.

The OpenShift installer and Machine Config Operator are built around Ignition.

>  Just give me a way to build
> simple image pipelines and I don't need ignition.  To that end, there
> should be an okd 'spin' of FCoS IMO.

Yes, this is what https://hackmd.io/ZOQo-1VnRNOrgIcGosq29A proposes.

>  Atomic was dead simple to build
> ostree repos for.  I'd prefer to have ignition be opt-in. 

This is a bit of a side discussion, but rpm-ostree hasn't changed at all; we 
still support e.g. using it with Anaconda upstream.  But it's also true that 
the focus is now on the "CoreOS model", which pairs Ignition with ostree, and 
that's what coreos-assembler is for.
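
(For anyone who hasn't seen that path: an Anaconda kickstart can deploy an
ostree directly.  A minimal hedged sketch - `ostreesetup` is the real kickstart
command, but the URL and ref below are placeholders:)

```
$ cat >> atomic.ks <<'EOF'
# Deploy an ostree commit instead of individual RPMs (placeholder URL/ref):
ostreesetup --osname=fedora-atomic --nogpg \
  --url=https://example.com/repo \
  --ref=fedora/29/x86_64/atomic-host
EOF
```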

> Since we're
> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> ignition to actually install okd.  To me, it seems FCoS was created
> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> actually solves anyone's needs relative to atomic.  It feels like we
> jumped the shark on this one.
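
(For readers following along: `--once-from` is the machine-config-daemon mode
being referenced here; I'm sketching the invocation from memory, so treat the
exact arguments and the path as assumptions:)

```
# One-shot apply of an Ignition config on a traditional (non-CoreOS) host,
# outside the cluster-driven MCO flow (path is a placeholder):
$ machine-config-daemon start --once-from=/run/worker.ign
```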

One pitch for Ignition is that it unifies bare metal and cloud provisioning - 
in RHEL you have Kickstart vs. cloud-init.
Again, the OpenShift installer (and MCO) today is built on top of this - if 
you're doing a bare metal PXE setup it works almost exactly the same as an AWS 
AMI boot.
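
(To make that concrete: the Ignition config below could be served to a
PXE-booted metal machine or passed as AWS user data unchanged.  A minimal
sketch using spec 2.x fields; the hostname value is a placeholder:)

```
$ cat > worker.ign <<'EOF'
{
  "ignition": { "version": "2.2.0" },
  "storage": {
    "files": [{
      "filesystem": "root",
      "path": "/etc/hostname",
      "mode": 420,
      "contents": { "source": "data:,worker-0" }
    }]
  }
}
EOF
```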

> I'd like to see OKD be distro-independent.  

> MCD is good software, but there's not really much going on there that can't 
> be ported to any other OS.  MCD downloads a payload, extracts files, rebases 
> ostree, reboots host.  You can do all of those steps except 'rebases ostree' 
> on any distro.

First, you're missing some of the interesting bits around "extracts files": 
the MCO correctly handles *state transitions* between Ignition configs 
(e.g. ensuring that old files are deleted), and this relies on the host being 
provisioned into a known state that wasn't written by a mix of e.g. Kickstart 
and Puppet/Ansible or whatever.

And the OS updates are just the start.  Nowadays when I'm talking about 
RHCOS in the context of OpenShift 4, I usually start with the MCO - and in 
particular how the MCO combines with RHCOS to make a "Kubernetes-native OS".

Concrete example: we now support setting FIPS mode:
https://github.com/openshift/machine-config-operator/pull/889
(As well as generic kernel arguments)
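
(To illustrate the kernel-argument side, since it's the simpler of the two:
the API group and the `kernelArguments` field are real, while the object name,
role label value, and argument below are just examples:)

```
$ cat > 99-worker-kargs.yaml <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-kargs            # example name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - loglevel=7                   # example argument
EOF
$ oc apply -f 99-worker-kargs.yaml
# The MCO then drains, updates, and reboots the matching nodes one by one.
```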

But this is all still just the beginning for the MCO, next up may be SELinux:
https://github.com/openshift/machine-config-operator/issues/852
And you get the idea.

Obviously, we could support kernel arguments on e.g. Ubuntu too.  And FIPS, 
etc.  But...the engineering cost starts to add up.



Re: Follow up on OKD 4

2019-07-29 Thread Jonathan Lebon
On Wed, Jul 24, 2019 at 10:42 AM Michael Gugino  wrote:
>
> I tried FCoS prior to the release by using the assembler on github.
> Too much secret sauce in how to actually construct an image.

Hmm, that's odd. FCOS is definitely orders of magnitude easier to build
locally than FAH ever was. The steps are basically just:

$ cosa init https://github.com/coreos/fedora-coreos-config
$ cosa fetch
$ cosa build

The cosa README[1] has some steps on how to get set up. There's no
"secret sauce" really (you can see the full pipeline the official FCOS
releases use[2] -- lots of goop, but it's mostly centered around
simple `cosa` commands). We have people in freenode/#fedora-coreos who
would be happy to help you (or anyone) get set up!

[1] https://github.com/coreos/coreos-assembler
[2] https://github.com/coreos/fedora-coreos-pipeline/blob/8e1d80b7d289b7ab1939e1a0d47f1e39bda30351/Jenkinsfile
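
(In case it helps anyone reproduce this: cosa is normally run as a container
image.  A sketch along the lines of the README - the exact flags drift between
releases, so treat them as assumptions:)

```
$ mkdir fcos && cd fcos
$ podman run --rm -ti --privileged --device /dev/kvm \
    -v "$PWD":/srv --workdir /srv \
    quay.io/coreos-assembler/coreos-assembler \
    init https://github.com/coreos/fedora-coreos-config
```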



Re: Follow up on OKD 4

2019-07-27 Thread Aleksandar Lazic
On 25.07.2019 at 19:31, Daniel Comnea wrote:
> 
> 
> On Thu, Jul 25, 2019 at 5:01 PM Michael Gugino wrote:
> 
> I don't really view the 'bucket of parts' and 'complete solution' as
> competing ideas.  It would be nice to build the 'complete solution'
> from the 'bucket of parts' in a reproducible, customizable manner.
> "How is this put together" should be easily followed, enough so that
> someone can 'put it together' on their own infrastructure without
> having to be an expert in designing and configuring the build system.
> 
> IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
> the openshift-specific bits from source, I could point at any
> repository I wanted, I could point to any image registry I wanted, I
> could use any distro I wanted.  I could replace the parts I wanted to;
> or I could just run it as-is from the published sources and not worry
> about replacing things.  I even built Fedora Atomic host rpm-trees
> with all the kubelet bits pre-installed, similar to what we're doing
> with CoreOS now in 3.x.  It was a great experience, building my own
> system images and running updates was trivial.

+1

> I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
> of flexibility and easy to use tooling.
> 
> So maybe what we are asking here is:
> 
>   * opinionated OCP 4 philosophy => OKD 4 + FCOS (IPI and UPI) using ignition,
> CVO etc
>   * DIY kube philosophy reusing as many v4 components as possible but with
> your own preferred operating system

+1, and in addition a "preferred hosting provider", similar to 3.x.

It would be nice if the ansible installer were still available for 4, as it
makes it possible to run OKD 4 on the many distros where Ansible runs.

> In terms of approach and priority, I think it is fair to adopt a baby-steps
> approach where:
> 
>   * phase 1 = try to get OKD 4 + FCOS out asap so folks can start building up
> the knowledge around operating the new solution in a full production env
>   * phase 2 = once experience/knowledge has been built up, we can crack on
> with reverse engineering and see what we can swap etc.

I don't think reverse engineering is a good way to go. Since most of the parts
are OSS and the platform `none` already exists, it would be nice to get the
docs out for that option, so that the ansible installer can be used for OKD 4.

https://github.com/openshift/installer/blob/master/CHANGELOG.md#090---2019-01-05

```
Added

There is a new none platform for bring-your-own infrastructure users who
want to generate Ignition configurations. The new platform is mostly
undocumented; users will usually interact with it via OpenShift Ansible.

```
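
(For reference, platform `none` is selected in the install-config; a hedged
minimal sketch modeled on the UPI flow - the domain, cluster name, secret, and
key below are placeholders:)

```
$ cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com            # placeholder
metadata:
  name: okd                        # placeholder cluster name
compute:
- name: worker
  replicas: 0                      # workers join later on BYO infrastructure
controlPlane:
  name: master
  replicas: 3
platform:
  none: {}
pullSecret: '...'                  # elided
sshKey: '...'                      # elided
EOF
$ openshift-install create ignition-configs   # emits *.ign files for your own infra
```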
> On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman wrote:
> >
> > > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic wrote:
> > >
> > > HI.
> > >
> > >> On 25.07.2019 at 06:52, Michael Gugino wrote:
> > >> I think FCoS could be a mutable detail.  To start with, support for
> > >> plain-old-fedora would be helpful to make the platform more portable,
> > >> particularly the MCO and machine-api.  If I had to state a goal, it
> > >> would be "Bring OKD to the largest possible range of linux distros to
> > >> become the de facto implementation of kubernetes."
> > >
> > > I agree here with Michael. While FCoS, or CoS in general, looks like a
> > > good idea technically, it limits the flexibility of possible solutions.
> > >
> > > For example, when you need to change some system settings, you have to
> > > create a new OS image; this is not very usable in some environments.
> >
> > I think something we haven’t emphasized enough is that openshift 4 is
> > very heavily structured around changing the cost and mental model
> > around this.  The goal was and is to make these sorts of things
> > unnecessary.  Changing machine settings by building golden images is
> > already the “wrong” (expensive and error prone) pattern - instead, it
> > should be easy to reconfigure machines or to launch new containers to
> > run software on those machines.  There may be two factors here at
> > work:
> >
> > 1. OpenShift 4 isn’t flexible in the ways people want (i.e. you want to
> > add an rpm to the OS to get a kernel module, or you want to ship a
> > complex set of config and managing things with mcd looks too hard)
> > 2. You want to build and maintain these things yourself, so the “just
> > works” mindset doesn’t appeal.
> >
> > The initial doc alluded to the DIY / bucket of parts use case (I can
> > assemble this on my own but slightly differently) - maybe we can go
> > further now and describe the goal / use case as:
> >
> > I want to be able to compose my own Kubernetes distribution, and 

Re: Follow up on OKD 4

2019-07-27 Thread Aleksandar Lazic
On 25.07.2019 at 21:20, Clayton Coleman wrote:
>> On Jul 25, 2019, at 2:32 PM, Fox, Kevin M wrote:
>>
>> While "just works" is a great goal, and it's relatively easy to accomplish in
>> the nice, virtualized world of VMs, I've found it is often not the case in
>> the dirty realm of real physical hardware. Sometimes you must
>> rebuild/replace a kernel or add a kernel module to get things to actually
>> work. If you don't support that, it's going to be a problem for many a site.
> 
> Ok, so this would be the “I want to be able to run my own kernel” use case.

Well, it's more "I want to run my own distro on my preferred hosting provider",
which is not possible yet.

> That’s definitely something I would expect to be available with OKD in
> the existing proposal, you would just be providing a different ostree
> image at install time.
> 
> How often does this happen with fedora today?  I don’t hear it brought
> up often so I may just be oblivious to something folks deal with more.
> Certainly fcos should work everywhere existing fedora works, but if a
> substantial set of people want that flexibility it’s a great data
> point.
> 
>>
>> Thanks,
>> Kevin
>> 
>> From: dev-boun...@lists.openshift.redhat.com 
>> [dev-boun...@lists.openshift.redhat.com] on behalf of Josh Berkus 
>> [jber...@redhat.com]
>> Sent: Thursday, July 25, 2019 11:23 AM
>> To: Clayton Coleman; Aleksandar Lazic
>> Cc: users; dev
>> Subject: Re: Follow up on OKD 4
>>
>>> On 7/25/19 6:51 AM, Clayton Coleman wrote:
>>> 1. OpenShift 4 isn’t flexible in the ways people want (i.e. you want to
>>> add an rpm to the OS to get a kernel module, or you want to ship a
>>> complex set of config and managing things with mcd looks too hard)
>>> 2. You want to build and maintain these things yourself, so the “just
>>> works” mindset doesn’t appeal.
>>
>> FWIW, 2.5 years ago when we were exploring having a specific
>> Atomic+Openshift distro for Kubernetes, we did a straw poll of Fedora
>> Cloud users.  We found that 2/3 of respondents wanted a complete package
>> (that is, OKD+Atomic) that installed and "just worked" out of the box,
>> and far fewer folks wanted to hack their own.  We never had such a
>> release due to insufficient engineering resources (and getting stuck
>> behind the complete rewrite of the Fedora build pipelines), but that was
>> the original goal.
>>
>> Things may have changed in the interim, but I think that a broad user
>> survey would still find a strong audience for a "just works" distro in
>> Fedora.
>>
>> --
>> --
>> Josh Berkus
>> Kubernetes Community
>> Red Hat OSAS
>>



Re: Follow up on OKD 4

2019-07-25 Thread Jason Brooks
On Thu, Jul 25, 2019, 5:39 AM Christian Glombek  wrote:

> I agree with the sentiment that supporting more OSes is a good thing.
> However, I believe it is in the community's best interest to get a working
> version of OKD4 rather sooner than later.
>

+1. An OKD that's close to OCP will come faster. Plus, the built-in host
management is a great feature that will benefit the community.

> Management of the underlying machines' OS by k8s/a set of concerted
> operators is a new (super exciting!) feature that is deeply integrated into
> OpenShift 4, but is therefore at the moment also tightly coupled to key
> OS/infra components like Ignition and RPM-OSTree.
> Going from here, I see making it work with Fedora CoreOS (which shares
> those components) as the path of least resistance for getting to a first
> usable OKD4 release.
> From there, we can see where else the community wants to go and how we can
> abstract away more of the underlying parts.
> I think it is safe to say Red Hat has never shied away from upstreaming
> its technologies, and I see no indication of why it should be any different
> in this case.
> Making all of this more universally applicable (and therefore
> upstreamable) in the longer term is a thing I'd love to see done within the
> OKD community.
>
> On Thu, Jul 25, 2019 at 10:20 AM Aleksandar Lazic <
> openshift-li...@me2digital.com> wrote:
>
>> HI.
>>
>> On 25.07.2019 at 06:52, Michael Gugino wrote:
>> > I think FCoS could be a mutable detail.  To start with, support for
>> > plain-old-fedora would be helpful to make the platform more portable,
>> > particularly the MCO and machine-api.  If I had to state a goal, it
>> > would be "Bring OKD to the largest possible range of linux distros to
>> become the de facto implementation of kubernetes."
>>
>> I agree here with Michael. While FCoS, or CoS in general, looks like a good
>> idea technically, it limits the flexibility of possible solutions.
>>
>> For example, when you need to change some system settings, you have to
>> create a new OS image; this is not very usable in some environments.
>>
>> It would be nice to have the good old option to use the ansible installer
>> to
>> install OKD/OpenShift on other Linux distributions where Ansible is able
>> to run.
>>
>> > Also, it would be helpful (as previously stated) to build communities
>> > around some of our components that might not have a place in the
>> > official kubernetes, but are valuable downstream components
>> > nevertheless.
>> >
>> > Anyway, I'm just throwing some ideas out there, I wouldn't consider my
>> > statements as advocating strongly in any direction.  Surely FCoS is
>> > the natural fit, but I think considering other distros merits
>> > discussion.
>>
>> +1
>>
>> Regards
>> Aleks
>>
>>
>> > On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman wrote:
>> >>
>> >>> On Jul 24, 2019, at 9:14 PM, Michael Gugino wrote:
>> >>>
>> >>> I think what I'm looking for is more 'modular' rather than DIY.  CVO
>> >>> would need to be adapted to separate container payload from host
>> >>> software (or use something else), and maintaining cross-distro
>> >>> machine-configs might prove tedious, but for the most part, rest of
>> >>> everything from the k8s bins up, should be more or less the same.
>> >>>
>> >>> MCD is good software, but there's not really much going on there that
>> >>> can't be ported to any other OS.  MCD downloads a payload, extracts
>> >>> files, rebases ostree, reboots host.  You can do all of those steps
>> >>> except 'rebases ostree' on any distro.  And instead of 'rebases
>> >>> ostree', we could pull down a container that acts as a local repo that
>> >>> contains all the bits you need to upgrade your host across releases.
>> >>> Users could do things to break this workflow, but it should otherwise
>> >>> work if they aren't fiddling with the hosts.  The MCD payload happens
>> >>> to embed an ignition payload, but it doesn't actually run ignition,
>> >>> just utilizes the file format.
>> >>>
>> >>> From my viewpoint, there's nothing particularly special about ignition
>> >>> in our current process either.  I had the entire OCP 4 stack running
>> >>> on RHEL using the same exact ignition payload, a minimal amount of
>> >>> ansible (which could have been replaced by cloud-init userdata), and a
>> >>> small python library to parse the ignition files.  I was also building
>> >>> repo containers for 3.10 and 3.11 for Fedora.  Not to say the
>> >>> OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
>> >>> 4 came together quite nicely.
>> >>>
>> >>> I'm all for 'not managing machines' but I'm not sure it has to look
>> >>> exactly like OCP.  Seems the OCP installer and CVO could be
>> >>> adapted/replaced with something else, MCD adapted, pretty much
>> >>> everything else works the same.
>> >>
>> >> Sure - why?  As in, what do you want to do?  What distro do you want
>> >> to use instead of fcos?  What goals / outcomes do you want out of 

Re: Follow up on OKD 4

2019-07-25 Thread Clayton Coleman
> On Jul 25, 2019, at 2:32 PM, Fox, Kevin M  wrote:
>
> While "just works" is a great goal, and it's relatively easy to accomplish in
> the nice, virtualized world of VMs, I've found it is often not the case in
> the dirty realm of real physical hardware. Sometimes you must rebuild/replace
> a kernel or add a kernel module to get things to actually work. If you don't
> support that, it's going to be a problem for many a site.

Ok, so this would be the “I want to be able to run my own kernel” use case.

That’s definitely something I would expect to be available with OKD in
the existing proposal, you would just be providing a different ostree
image at install time.
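
(If I remember right, the installer already carries an unsupported override
hook for exactly this; sketched from memory, so treat the variable name as an
assumption and the image URL as a placeholder:)

```
# Point the installer at a custom (e.g. self-built) OS image:
$ export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="https://example.com/custom-fcos.qcow2"
$ openshift-install create cluster
```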

How often does this happen with fedora today?  I don’t hear it brought
up often so I may just be oblivious to something folks deal with more.
Certainly fcos should work everywhere existing fedora works, but if a
substantial set of people want that flexibility it’s a great data
point.

>
> Thanks,
> Kevin
> 
> From: dev-boun...@lists.openshift.redhat.com 
> [dev-boun...@lists.openshift.redhat.com] on behalf of Josh Berkus 
> [jber...@redhat.com]
> Sent: Thursday, July 25, 2019 11:23 AM
> To: Clayton Coleman; Aleksandar Lazic
> Cc: users; dev
> Subject: Re: Follow up on OKD 4
>
>> On 7/25/19 6:51 AM, Clayton Coleman wrote:
>> 1. OpenShift 4 isn’t flexible in the ways people want (i.e. you want to
>> add an rpm to the OS to get a kernel module, or you want to ship a
>> complex set of config and managing things with mcd looks too hard)
>> 2. You want to build and maintain these things yourself, so the “just
>> works” mindset doesn’t appeal.
>
> FWIW, 2.5 years ago when we were exploring having a specific
> Atomic+Openshift distro for Kubernetes, we did a straw poll of Fedora
> Cloud users.  We found that 2/3 of respondents wanted a complete package
> (that is, OKD+Atomic) that installed and "just worked" out of the box,
> and far fewer folks wanted to hack their own.  We never had such a
> release due to insufficient engineering resources (and getting stuck
> behind the complete rewrite of the Fedora build pipelines), but that was
> the original goal.
>
> Things may have changed in the interim, but I think that a broad user
> survey would still find a strong audience for a "just works" distro in
> Fedora.
>
> --
> --
> Josh Berkus
> Kubernetes Community
> Red Hat OSAS
>



RE: Follow up on OKD 4

2019-07-25 Thread Fox, Kevin M
While "just works" is a great goal, and it's relatively easy to accomplish in
the nice, virtualized world of VMs, I've found it is often not the case in the
dirty realm of real physical hardware. Sometimes you must rebuild/replace a
kernel or add a kernel module to get things to actually work. If you don't
support that, it's going to be a problem for many a site.

Thanks,
Kevin

From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Josh Berkus 
[jber...@redhat.com]
Sent: Thursday, July 25, 2019 11:23 AM
To: Clayton Coleman; Aleksandar Lazic
Cc: users; dev
Subject: Re: Follow up on OKD 4

On 7/25/19 6:51 AM, Clayton Coleman wrote:
> 1. OpenShift 4 isn’t flexible in the ways people want (i.e. you want to
> add an rpm to the OS to get a kernel module, or you want to ship a
> complex set of config and managing things with mcd looks too hard)
> 2. You want to build and maintain these things yourself, so the “just
> works” mindset doesn’t appeal.

FWIW, 2.5 years ago when we were exploring having a specific
Atomic+Openshift distro for Kubernetes, we did a straw poll of Fedora
Cloud users.  We found that 2/3 of respondents wanted a complete package
(that is, OKD+Atomic) that installed and "just worked" out of the box,
and far fewer folks wanted to hack their own.  We never had such a
release due to insufficient engineering resources (and getting stuck
behind the complete rewrite of the Fedora build pipelines), but that was
the original goal.

Things may have changed in the interim, but I think that a broad user
survey would still find a strong audience for a "just works" distro in
Fedora.

--
--
Josh Berkus
Kubernetes Community
Red Hat OSAS




Re: Follow up on OKD 4

2019-07-25 Thread Clayton Coleman
On Thu, Jul 25, 2019 at 11:58 AM Fox, Kevin M  wrote:

> Yeah, There is the question what it is now, and the question what it
> potentially should be. I'm asking more from a where should it go standpoint.
>
> Right now, k8s distros are very much in the early linux distro days. Here's
> how to get a base os going. Ok, now you're on your own to deploy anything on
> it. Download tarball, build it, install it, write init script, etc. If you
> look at the total package list in a modern linux distro, the os level stuff
> is usually a very small percentage of the software in the distro.
>
> These days we've moved on so far from "the distro is a kernel" that folks
> even talk about running a redhat, a fedora, or a centos container. That's
> really #4 level stuff only.
>
> OLM is like yum: a tool to install stuff. So kind of a #3 tool. It's the
> software packaging itself - mysql, apache, etc. - that is also part of the
> distro and is mostly missing, I think. A container is like an RPM; one way
> to define a linux distro is as a collection of prebuilt/tested/supported
> RPMs for common software.
>
> In the linux os today, you can start from "I want to deploy a mysql
> server", and since I trust Red Hat to provide good software, you go and yum
> install mysql. I could similarly imagine OKD as a collection of software to
> deploy on top of a k8s, where there is an optional, self-hosting OS part
> (1-3), the same way Fedora/CentOS can be used purely at level #4 with
> containers, or as a full-blown os+workloads.
>
> Sure, you can let the community build all their own stuff. That's possible
> in linux distros today too and shouldn't be blocked. But it misses the point
> of why folks deploy software from linux distros over getting it from the
> source. I prefer to run mysql from Red Hat as opposed to upstream because of
> all the extras the distro packagers provide.
>
> Not trying to shortchange all the hard work in getting a k8s going. OKD's
> doing an amazing job at that. That's really important too. But so is all the
> distro work around software packaging, and that's still much more in its
> infancy, I think. We're still mostly at the point where we're debating
> whether that's the end user's problem.
>
> The package management tools are coming around nicely, but the distro
> packages not so much yet. How do we get a k8s distro of this form going? Is
> that in the general scope of OKD, or should there be a whole new project
> just for that?
>

Honestly we've been thinking of the core components as just that - CVO,
MCO, release images, integrated OS with updates.  That's the core compose
(the minimum set of components you need for maintainable stuff).  I think
Mike's comment about MCO supporting multiple OS's is something like that,
and CVO + release tooling is that too.  I think from the perspective of the
project as a whole I had been thinking that "OKD is both the tools and
process for k8s-as-distro".  OLM is a small component of that, like yum.
CVO + installer + bootstrap is anaconda.  The machine-os and k8s are like
systemd.

That's why I'm asking about how much flexibility people want - after all,
Fedora doesn't let you choose which glibc you use.  If people want to
compose the open source projects differently, all the projects are there
today and should be easy to fork, and we could also consider how we make
them easier to fork in the future.

The discussion OKD4 started with was "what kind of distro do people want".  If
there's a group who wants to get more involved in the "build a distro" part
of the tools that exist, that definitely seems like a different use case.


>
> The redhat container catalog is a good start too, but we need to be
> thinking all the way up to the k8s level.
>
> Should it be "okd k8s distro" or "fedora k8s distro" or something else?
>
> Thanks,
> Kevin
>
> --
> *From:* Clayton Coleman [ccole...@redhat.com]
> *Sent:* Wednesday, July 24, 2019 10:31 AM
> *To:* Fox, Kevin M
> *Cc:* Michael Gugino; users; dev
> *Subject:* Re: Follow up on OKD 4
>
>
>
> On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M  wrote:
>
>> Ah, this raises an interesting discussion I've been wanting to have for a
>> while.
>>
>> There are potentially lots of things you could call a distro.
>>
>> Most linux distros are made up of several layers:
>> 1. boot loader - components to get the kernel running
>> 2. kernel - provides a place to run higher level software
>> 3. os level services - singletons needed to really call the os an os.
>> (dhcp, systemd, dbus, etc)
>> 4. prebuilt/tested, generic software/services - workload (mysql, apache,
>> firefox, gnome, etc)
>>
>> For sake of discussio

Re: Follow up on OKD 4

2019-07-25 Thread Josh Berkus
On 7/25/19 6:51 AM, Clayton Coleman wrote:
> 1. OpenShift 4 isn’t flexible in the ways people want (i.e. you want to
> add an rpm to the OS to get a kernel module, or you want to ship a
> complex set of config and managing things with mcd looks too hard)
> 2. You want to build and maintain these things yourself, so the “just
> works” mindset doesn’t appeal.

FWIW, 2.5 years ago when we were exploring having a specific
Atomic+Openshift distro for Kubernetes, we did a straw poll of Fedora
Cloud users.  We found that 2/3 of respondents wanted a complete package
(that is, OKD+Atomic) that installed and "just worked" out of the box,
and far fewer folks wanted to hack their own.  We never had such a
release due to insufficient engineering resources (and getting stuck
behind the complete rewrite of the Fedora build pipelines), but that was
the original goal.

Things may have changed in the interim, but I think that a broad user
survey would still find a strong audience for a "just works" distro in
Fedora.

-- 
--
Josh Berkus
Kubernetes Community
Red Hat OSAS



Re: Follow up on OKD 4

2019-07-25 Thread Daniel Comnea
On Thu, Jul 25, 2019 at 5:01 PM Michael Gugino  wrote:

> I don't really view the 'bucket of parts' and 'complete solution' as
> competing ideas.  It would be nice to build the 'complete solution'
> from the 'bucket of parts' in a reproducible, customizable manner.
> "How is this put together" should be easily followed, enough so that
> someone can 'put it together' on their own infrastructure without
> having to be an expert in designing and configuring the build system.
>
> IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
> the openshift-specific bits from source, I could point at any
> repository I wanted, I could point to any image registry I wanted, I
> could use any distro I wanted.  I could replace the parts I wanted to;
> or I could just run it as-is from the published sources and not worry
> about replacing things.  I even built Fedora Atomic host rpm-trees
> with all the kubelet bits pre-installed, similar to what we're doing
> with CoreOS now in 3.x.  It was a great experience, building my own
> system images and running updates was trivial.
>
> I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
> of flexibility and easy to use tooling.
>
> So maybe what we are asking here is:

   - opinionated OCP 4 philosophy => OKD 4 + FCOS (IPI and UPI) using
   ignition, CVO etc
   - DIY kube philosophy reusing as many v4 components as possible but with
   your own preferred operating system


In terms of approach and priority, I think it is fair to adopt a baby-steps
approach where:

   - phase 1 = try to get OKD 4 + FCOS out asap so folks can start building up
   the knowledge around operating the new solution in a full production env
   - phase 2 = once experience/knowledge has been built up, we can crack on
   with reverse engineering and see what we can swap etc.





> On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman 
> wrote:
> >
> > > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic <
> openshift-li...@me2digital.com> wrote:
> > >
> > > HI.
> > >
> > >> On 25.07.2019 at 06:52, Michael Gugino wrote:
> > >> I think FCoS could be a mutable detail.  To start with, support for
> > >> plain-old-fedora would be helpful to make the platform more portable,
> > >> particularly the MCO and machine-api.  If I had to state a goal, it
> > >> would be "Bring OKD to the largest possible range of linux distros to
> > >> become the de facto implementation of kubernetes."
> > >
> > > I agree here with Michael. While FCoS, or CoS in general, looks like a
> > > good idea technically, it limits the flexibility of possible solutions.
> > >
> > > For example, when you need to change some system settings, you have to
> > > create a new OS image; this is not very usable in some environments.
> >
> > I think something we haven’t emphasized enough is that openshift 4 is
> > very heavily structured around changing the cost and mental model
> > around this.  The goal was and is to make these sorts of things
> > unnecessary.  Changing machine settings by building golden images is
> > already the “wrong” (expensive and error prone) pattern - instead, it
> > should be easy to reconfigure machines or to launch new containers to
> > run software on those machines.  There may be two factors here at
> > work:
> >
> > 1. OpenShift 4 isn’t flexible in the ways people want (i.e. you want to
> > add an rpm to the OS to get a kernel module, or you want to ship a
> > complex set of config and managing things with mcd looks too hard)
> > 2. You want to build and maintain these things yourself, so the “just
> > works” mindset doesn’t appeal.
> >
> > The initial doc alluded to the DIY / bucket of parts use case (I can
> > assemble this on my own but slightly differently) - maybe we can go
> > further now and describe the goal / use case as:
> >
> > I want to be able to compose my own Kubernetes distribution, and I’m
> > willing to give up continuous automatic updates to gain flexibility in
> > picking my own software
> >
> > Does that sound like it captures your request?
> >
> > Note that a key reason why the OS is integrated is so that we can keep
> > machines up to date and do rolling control plane upgrades with no
> > risk.  If you take the OS out of the equation the risk goes up
> > substantially, but if you’re willing to give that up then yes, you
> > could build an OKD that doesn’t tie to the OS.  This trade off is an
> > important one for folks to discuss.  I’d been assuming that people
> > *want* the automatic and safe upgrades, but maybe that’s a bad
> > assumption.
> >
> > What would you be willing to give up?
> >
> > >
> > > It would be nice to have the good old option to use the ansible
> > > installer to install OKD/OpenShift on other Linux distributions where
> > > ansible is able to run.
> > >
> > >> Also, it would be helpful (as previously stated) to build communities
> > >> around some of our components that might not have a place in the
> > >> official kubernetes, but are valuable downstream components
> > >> 

Re: Follow up on OKD 4

2019-07-25 Thread Michael Gugino
I don't really view the 'bucket of parts' and 'complete solution' as
competing ideas.  It would be nice to build the 'complete solution'
from the 'bucket of parts' in a reproducible, customizable manner.
"How is this put together" should be easily followed, enough so that
someone can 'put it together' on their own infrastructure without
having to be an expert in designing and configuring the build system.

IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
the openshift-specific bits from source, I could point at any
repository I wanted, I could point to any image registry I wanted, I
could use any distro I wanted.  I could replace the parts I wanted to;
or I could just run it as-is from the published sources and not worry
about replacing things.  I even built Fedora Atomic host rpm-trees
with all the kubelet bits pre-installed, similar to what we're doing
with CoreOS now in 3.x.  It was a great experience, building my own
system images and running updates was trivial.

I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
of flexibility and easy to use tooling.

On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman  wrote:
>
> > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic wrote:
> >
> > HI.
> >
> >> On 25.07.2019 at 06:52, Michael Gugino wrote:
> >> I think FCoS could be a mutable detail.  To start with, support for
> >> plain-old-fedora would be helpful to make the platform more portable,
> >> particularly the MCO and machine-api.  If I had to state a goal, it
> >> would be "Bring OKD to the largest possible range of linux distros to
> >> become the de facto implementation of kubernetes."
> >
> > I agree here with Michael. While FCoS, or CoS in general, looks like a good
> > idea technically, it limits the flexibility of possible solutions.
> >
> > For example, when you need to change some system settings, you have to
> > create a new OS image; this is not very usable in some environments.
>
> I think something we haven’t emphasized enough is that openshift 4 is
> very heavily structured around changing the cost and mental model
> around this.  The goal was and is to make these sorts of things
> unnecessary.  Changing machine settings by building golden images is
> already the “wrong” (expensive and error prone) pattern - instead, it
> should be easy to reconfigure machines or to launch new containers to
> run software on those machines.  There may be two factors here at
> work:
>
> 1. OpenShift 4 isn’t flexible in the ways people want (i.e. you want to
> add an rpm to the OS to get a kernel module, or you want to ship a
> complex set of config and managing things with mcd looks too hard)
> 2. You want to build and maintain these things yourself, so the “just
> works” mindset doesn’t appeal.
>
> The initial doc alluded to the DIY / bucket of parts use case (I can
> assemble this on my own but slightly differently) - maybe we can go
> further now and describe the goal / use case as:
>
> I want to be able to compose my own Kubernetes distribution, and I’m
> willing to give up continuous automatic updates to gain flexibility in
> picking my own software
>
> Does that sound like it captures your request?
>
> Note that a key reason why the OS is integrated is so that we can keep
> machines up to date and do rolling control plane upgrades with no
> risk.  If you take the OS out of the equation the risk goes up
> substantially, but if you’re willing to give that up then yes, you
> could build an OKD that doesn’t tie to the OS.  This trade off is an
> important one for folks to discuss.  I’d been assuming that people
> *want* the automatic and safe upgrades, but maybe that’s a bad
> assumption.
>
> What would you be willing to give up?
>
> >
> > It would be nice to have the good old option to use the ansible installer to
> > install OKD/OpenShift on other Linux distributions where ansible is able
> > to run.
> >
> >> Also, it would be helpful (as previously stated) to build communities
> >> around some of our components that might not have a place in the
> >> official kubernetes, but are valuable downstream components
> >> nevertheless.
> >>
> >> Anyway, I'm just throwing some ideas out there, I wouldn't consider my
> >> statements as advocating strongly in any direction.  Surely FCoS is
> >> the natural fit, but I think considering other distros merits
> >> discussion.
> >
> > +1
> >
> > Regards
> > Aleks
> >
> >
> >>> On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman  
> >>> wrote:
> >>>
>  On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:
> 
>  I think what I'm looking for is more 'modular' rather than DIY.  CVO
>  would need to be adapted to separate container payload from host
>  software (or use something else), and maintaining cross-distro
>  machine-configs might prove tedious, but for the most part, rest of
>  everything from the k8s bins up, should be more or less the same.
> 
>  MCD is good software, but there's not 

RE: Follow up on OKD 4

2019-07-25 Thread Fox, Kevin M
Yeah, there is the question of what it is now, and the question of what it
potentially should be. I'm asking more from a "where should it go" standpoint.

Right now, k8s distros are very much in the early linux distro days. Here's
how to get a base os going. Ok, now you're on your own to deploy anything on
it. Download tarball, build it, install it, write init script, etc. If you
look at the total package list in a modern linux distro, the os level stuff
is usually a very small percentage of the software in the distro.

These days we've moved on so far from "the distro is a kernel" that folks even
talk about running a redhat, a fedora, or a centos container. That's really #4
level stuff only.

OLM is like yum: a tool to install stuff. So kind of a #3 tool. It's the
software packaging itself - mysql, apache, etc. - that is also part of the
distro and is mostly missing, I think. A container is like an RPM; one way to
define a linux distro is as a collection of prebuilt/tested/supported RPMs for
common software.

In the linux os today, you can start from "I want to deploy a mysql server",
and since I trust Red Hat to provide good software, you go and yum install
mysql. I could similarly imagine OKD as a collection of software to deploy on
top of a k8s, where there is an optional, self-hosting OS part (1-3), the same
way Fedora/CentOS can be used purely at level #4 with containers, or as a
full-blown os+workloads.

Sure, you can let the community build all their own stuff. That's possible in
linux distros today too and shouldn't be blocked. But it misses the point of
why folks deploy software from linux distros over getting it from the source.
I prefer to run mysql from Red Hat as opposed to upstream because of all the
extras the distro packagers provide.

Not trying to shortchange all the hard work in getting a k8s going. OKD's
doing an amazing job at that. That's really important too. But so is all the
distro work around software packaging, and that's still much more in its
infancy, I think. We're still mostly at the point where we're debating whether
that's the end user's problem.

The package management tools are coming around nicely, but the distro packages
not so much yet. How do we get a k8s distro of this form going? Is that in the
general scope of OKD, or should there be a whole new project just for that?

The redhat container catalog is a good start too, but we need to be thinking 
all the way up to the k8s level.

Should it be "okd k8s distro" or "fedora k8s distro" or something else?

Thanks,
Kevin


From: Clayton Coleman [ccole...@redhat.com]
Sent: Wednesday, July 24, 2019 10:31 AM
To: Fox, Kevin M
Cc: Michael Gugino; users; dev
Subject: Re: Follow up on OKD 4



On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M wrote:
Ah, this raises an interesting discussion I've been wanting to have for a while.

There are potentially lots of things you could call a distro.

Most linux distros are made up of several layers:
1. boot loader - components to get the kernel running
2. kernel - provides a place to run higher level software
3. os level services - singletons needed to really call the os an os. (dhcp, 
systemd, dbus, etc)
4. prebuilt/tested, generic software/services - workload (mysql, apache, 
firefox, gnome, etc)

For the sake of discussion, let's map these layers a bit, and assume that the
OpenShift-specific components can be added to a vanilla kubernetes. We then have:

1. linux distro (could be k8s specific and micro)
2. kubernetes control plane & kubelets
3. openshift components (auth, ingress, cicd/etc)
4. ?  (operators + containers, helm + containers, etc)

OpenShift used to be defined as being 1-3.

As things like AKS/EKS/GKE make it easy to deploy 1-2, maybe OpenShift should
really become modular so it focuses more on 3 and 4.

That's interesting that you'd say that.  I think kube today is like "install a 
kernel with bash and serial port magic", whereas OpenShift 4 is "here's a 
compose, an installer, a disk formatter, yum, yum repos, lifecycle, glibc, 
optional packages, and sys utils".  I don't know if you can extend the analogy 
there (if you want to use EKS, you're effectively running on someone's VPS, but 
you can only use their distro and you can't change anything), but definitely a 
good debate.


As for having something that provides a #1 that is super tiny/easy to maintain
so that you can do #2 on top easily, I'm for that as well, but it should be
decoupled from 3-4, I think. Should you be able to switch out your #1 for
someone else's #1 while keeping the rest? That's the question from earlier in
the thread.

I think the analogy I've been using is that openshift is a proper distro in the 
sense that you don't take someone's random kernel and use it with someone 
else's random glibc and a third party's random gcc, but you might not care 
about the stuff on top.  The things i

Re: Follow up on OKD 4

2019-07-25 Thread Clayton Coleman
> On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic wrote:
>
> HI.
>
>> On 25.07.2019 at 06:52, Michael Gugino wrote:
>> I think FCoS could be a mutable detail.  To start with, support for
>> plain-old-fedora would be helpful to make the platform more portable,
>> particularly the MCO and machine-api.  If I had to state a goal, it
>> would be "Bring OKD to the largest possible range of linux distros to
>> become the de facto implementation of kubernetes."
>
> I agree here with Michael. While FCoS, or CoS in general, looks like a good
> idea technically, it limits the flexibility of possible solutions.
>
> For example, when you need to change some system settings, you have to
> create a new OS image; this is not very usable in some environments.

I think something we haven’t emphasized enough is that openshift 4 is
very heavily structured around changing the cost and mental model
around this.  The goal was and is to make these sorts of things
unnecessary.  Changing machine settings by building golden images is
already the “wrong” (expensive and error prone) pattern - instead, it
should be easy to reconfigure machines or to launch new containers to
run software on those machines.  There may be two factors here at
work:

1. OpenShift 4 isn’t flexible in the ways people want (i.e. you want to
add an rpm to the OS to get a kernel module, or you want to ship a
complex set of config and managing things with mcd looks too hard)
2. You want to build and maintain these things yourself, so the “just
works” mindset doesn’t appeal.

The initial doc alluded to the DIY / bucket of parts use case (I can
assemble this on my own but slightly differently) - maybe we can go
further now and describe the goal / use case as:

I want to be able to compose my own Kubernetes distribution, and I’m
willing to give up continuous automatic updates to gain flexibility in
picking my own software

Does that sound like it captures your request?

Note that a key reason why the OS is integrated is so that we can keep
machines up to date and do rolling control plane upgrades with no
risk.  If you take the OS out of the equation the risk goes up
substantially, but if you’re willing to give that up then yes, you
could build an OKD that doesn’t tie to the OS.  This trade off is an
important one for folks to discuss.  I’d been assuming that people
*want* the automatic and safe upgrades, but maybe that’s a bad
assumption.

What would you be willing to give up?

>
> It would be nice to have the good old option to use the ansible installer to
> install OKD/OpenShift on other Linux distributions where ansible is able to
> run.
>
>> Also, it would be helpful (as previously stated) to build communities
>> around some of our components that might not have a place in the
>> official kubernetes, but are valuable downstream components
>> nevertheless.
>>
>> Anyway, I'm just throwing some ideas out there, I wouldn't consider my
>> statements as advocating strongly in any direction.  Surely FCoS is
>> the natural fit, but I think considering other distros merits
>> discussion.
>
> +1
>
> Regards
> Aleks
>
>
>>> On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman  wrote:
>>>
 On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:

 I think what I'm looking for is more 'modular' rather than DIY.  CVO
 would need to be adapted to separate container payload from host
 software (or use something else), and maintaining cross-distro
 machine-configs might prove tedious, but for the most part, rest of
 everything from the k8s bins up, should be more or less the same.

 MCD is good software, but there's not really much going on there that
 can't be ported to any other OS.  MCD downloads a payload, extracts
 files, rebases ostree, reboots host.  You can do all of those steps
 except 'rebases ostree' on any distro.  And instead of 'rebases
 ostree', we could pull down a container that acts as a local repo that
 contains all the bits you need to upgrade your host across releases.
 Users could do things to break this workflow, but it should otherwise
 work if they aren't fiddling with the hosts.  The MCD payload happens
 to embed an ignition payload, but it doesn't actually run ignition,
 just utilizes the file format.

 From my viewpoint, there's nothing particularly special about ignition
 in our current process either.  I had the entire OCP 4 stack running
 on RHEL using the same exact ignition payload, a minimal amount of
 ansible (which could have been replaced by cloud-init userdata), and a
 small python library to parse the ignition files.  I was also building
 repo containers for 3.10 and 3.11 for Fedora.  Not to say the
 OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
 4 came together quite nicely.

 I'm all for 'not managing machines' but I'm not sure it has to look
 exactly like OCP.  Seems the OCP installer and CVO could be

Re: Follow up on OKD 4

2019-07-24 Thread Michael Gugino
I think FCoS could be a mutable detail.  To start with, support for
plain-old-fedora would be helpful to make the platform more portable,
particularly the MCO and machine-api.  If I had to state a goal, it
would be "Bring OKD to the largest possible range of linux distros to
become the de facto implementation of kubernetes."

Also, it would be helpful (as previously stated) to build communities
around some of our components that might not have a place in the
official kubernetes, but are valuable downstream components
nevertheless.

Anyway, I'm just throwing some ideas out there, I wouldn't consider my
statements as advocating strongly in any direction.  Surely FCoS is
the natural fit, but I think considering other distros merits
discussion.

On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman  wrote:
>
> > On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:
> >
> > I think what I'm looking for is more 'modular' rather than DIY.  CVO
> > would need to be adapted to separate container payload from host
> > software (or use something else), and maintaining cross-distro
> > machine-configs might prove tedious, but for the most part, rest of
> > everything from the k8s bins up, should be more or less the same.
> >
> > MCD is good software, but there's not really much going on there that
> > can't be ported to any other OS.  MCD downloads a payload, extracts
> > files, rebases ostree, reboots host.  You can do all of those steps
> > except 'rebases ostree' on any distro.  And instead of 'rebases
> > ostree', we could pull down a container that acts as a local repo that
> > contains all the bits you need to upgrade your host across releases.
> > Users could do things to break this workflow, but it should otherwise
> > work if they aren't fiddling with the hosts.  The MCD payload happens
> > to embed an ignition payload, but it doesn't actually run ignition,
> > just utilizes the file format.
> >
> > From my viewpoint, there's nothing particularly special about ignition
> > in our current process either.  I had the entire OCP 4 stack running
> > on RHEL using the same exact ignition payload, a minimal amount of
> > ansible (which could have been replaced by cloud-init userdata), and a
> > small python library to parse the ignition files.  I was also building
> > repo containers for 3.10 and 3.11 for Fedora.  Not to say the
> > OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
> > 4 came together quite nicely.
> >
> > I'm all for 'not managing machines' but I'm not sure it has to look
> > exactly like OCP.  Seems the OCP installer and CVO could be
> > adapted/replaced with something else, MCD adapted, pretty much
> > everything else works the same.
>
> Sure - why?  As in, what do you want to do?  What distro do you want
> to use instead of fcos?  What goals / outcomes do you want out of the
> ability to do whatever?  Ie the previous suggestion (the auto updating
> kube distro) has the concrete goal of “don’t worry about security /
> updates / nodes and still be able to run containers”, and fcos is a
> detail, even if it’s an important one.  How would you pitch the
> alternative?
>
>
> >
> >> On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman  
> >> wrote:
> >>
> >>
> >>
> >>
> >>> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  
> >>> wrote:
> >>>
> >>> I tried FCoS prior to the release by using the assembler on github.
> >>> Too much secret sauce in how to actually construct an image.  I
> >>> thought atomic was much more polished, not really sure what the
> >>> value-add of ignition is in this use case.  Just give me a way to build
> >>> simple image pipelines and I don't need ignition.  To that end, there
> >>> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
> >>> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
> >>> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> >>> ignition to actually install okd.  To me, it seems FCoS was created
> >>> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> >>> actually solves anyone's needs relative to atomic.  It feels like we
> >>> jumped the shark on this one.
> >>
> >>
> >> That’s feedback that’s probably something you should share in the fcos 
> >> forums as well.  I will say that I find the OCP + RHEL experience 
> >> unsatisfying and doesn't truly live up to what RHCOS+OCP can do (since it 
> >> lacks the key features like ignition and immutable hosts).  Are you saying 
> >> you'd prefer to have more of a "DIY kube bistro" than the "highly 
> >> opinionated, totally integrated OKD" proposal?  I think that's a good 
> >> question the community should get a chance to weigh in on (in my original 
> >> email that was the implicit question - do you want something that looks 
> >> like OCP4, or something that is completely different).
> >>
> >>>
> >>>
> >>> I'd like to see OKD be distro-independent.  Obviously Fedora should be
> >>> our primary target (I'd argue Fedora over 

Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
> On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:
>
> I think what I'm looking for is more 'modular' rather than DIY.  CVO
> would need to be adapted to separate container payload from host
> software (or use something else), and maintaining cross-distro
> machine-configs might prove tedious, but for the most part, rest of
> everything from the k8s bins up, should be more or less the same.
>
> MCD is good software, but there's not really much going on there that
> can't be ported to any other OS.  MCD downloads a payload, extracts
> files, rebases ostree, reboots host.  You can do all of those steps
> except 'rebases ostree' on any distro.  And instead of 'rebases
> ostree', we could pull down a container that acts as a local repo that
> contains all the bits you need to upgrade your host across releases.
> Users could do things to break this workflow, but it should otherwise
> work if they aren't fiddling with the hosts.  The MCD payload happens
> to embed an ignition payload, but it doesn't actually run ignition,
> just utilizes the file format.
>
> From my viewpoint, there's nothing particularly special about ignition
> in our current process either.  I had the entire OCP 4 stack running
> on RHEL using the same exact ignition payload, a minimal amount of
> ansible (which could have been replaced by cloud-init userdata), and a
> small python library to parse the ignition files.  I was also building
> repo containers for 3.10 and 3.11 for Fedora.  Not to say the
> OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
> 4 came together quite nicely.
>
> I'm all for 'not managing machines' but I'm not sure it has to look
> exactly like OCP.  Seems the OCP installer and CVO could be
> adapted/replaced with something else, MCD adapted, pretty much
> everything else works the same.

Sure - why?  As in, what do you want to do?  What distro do you want
to use instead of fcos?  What goals / outcomes do you want out of the
ability to do whatever?  Ie the previous suggestion (the auto updating
kube distro) has the concrete goal of “don’t worry about security /
updates / nodes and still be able to run containers”, and fcos is a
detail, even if it’s an important one.  How would you pitch the
alternative?


>
>> On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman  wrote:
>>
>>
>>
>>
>>> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:
>>>
>>> I tried FCoS prior to the release by using the assembler on github.
>>> Too much secret sauce in how to actually construct an image.  I
>>> thought atomic was much more polished, not really sure what the
>>> value-add of ignition is in this use case.  Just give me a way to build
>>> simple image pipelines and I don't need ignition.  To that end, there
>>> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
>>> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
>>> supporting the mcd-once-from to parse ignition on RHEL, we don't need
>>> ignition to actually install okd.  To me, it seems FCoS was created
>>> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
>>> actually solves anyone's needs relative to atomic.  It feels like we
>>> jumped the shark on this one.
>>
>>
>> That’s feedback that’s probably something you should share in the fcos 
>> forums as well.  I will say that I find the OCP + RHEL experience 
>> unsatisfying and doesn't truly live up to what RHCOS+OCP can do (since it 
>> lacks the key features like ignition and immutable hosts).  Are you saying 
>> you'd prefer to have more of a "DIY kube bistro" than the "highly 
>> opinionated, totally integrated OKD" proposal?  I think that's a good 
>> question the community should get a chance to weigh in on (in my original 
>> email that was the implicit question - do you want something that looks like 
>> OCP4, or something that is completely different).
>>
>>>
>>>
>>> I'd like to see OKD be distro-independent.  Obviously Fedora should be
>>> our primary target (I'd argue Fedora over FCoS), but I think it should
>>> be true upstream software in the sense that apache2 http server is
>>> upstream and not distro specific.  To this end, perhaps it makes sense
>>> to consume k/k instead of openshift/origin for okd.  OKD should be
>>> free to do wild and crazy things independently of the enterprise
>>> product.  Perhaps there's a usecase for treating k/k vs
>>> openshift/origin as a swappable base layer.
>>
>>
>> That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be 
>> happy to see people excited about reusing cvo / mcd and be able to mix and 
>> match, but most of the things here would be a huge investment to build.  In 
>> my original email I might call this the “I want to build my own distro" - if 
>> that's what people want to build, I think we can do things to enable it.  
>> But it would probably not be "openshift" in the same way.
>>
>>>
>>>
>>> It would be nice to have a more native kubernetes place to develop our

Re: Follow up on OKD 4

2019-07-24 Thread Michael Gugino
I think what I'm looking for is more 'modular' rather than DIY.  CVO
would need to be adapted to separate container payload from host
software (or use something else), and maintaining cross-distro
machine-configs might prove tedious, but for the most part, the rest of
everything from the k8s bins up should be more or less the same.

MCD is good software, but there's not really much going on there that
can't be ported to any other OS.  MCD downloads a payload, extracts
files, rebases ostree, reboots host.  You can do all of those steps
except 'rebases ostree' on any distro.  And instead of 'rebases
ostree', we could pull down a container that acts as a local repo that
contains all the bits you need to upgrade your host across releases.
Users could do things to break this workflow, but it should otherwise
work if they aren't fiddling with the hosts.  The MCD payload happens
to embed an ignition payload, but it doesn't actually run ignition,
just utilizes the file format.
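
To make that concrete, the distro-agnostic flow is roughly the following
(a minimal sketch; the extract step and variable names are illustrative,
not the actual MCD code):

$ curl -sL "$PAYLOAD_URL" -o /run/payload.ign   # download the payload
$ extract-ign-files /run/payload.ign /          # hypothetical helper: write out embedded files
$ rpm-ostree rebase "$NEW_OSTREE_REF"           # the only ostree-specific step
$ systemctl reboot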

From my viewpoint, there's nothing particularly special about ignition
in our current process either.  I had the entire OCP 4 stack running
on RHEL using the same exact ignition payload, a minimal amount of
ansible (which could have been replaced by cloud-init userdata), and a
small python library to parse the ignition files.  I was also building
repo containers for 3.10 and 3.11 for Fedora.  Not to say the
OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
4 came together quite nicely.
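
As an aside, this is why a small parser is enough: an Ignition config is
plain JSON. A minimal sketch, assuming jq and a rendered config named
worker.ign:

$ jq -r '.storage.files[]?.path' worker.ign    # files the config would write
$ jq -r '.systemd.units[]?.name' worker.ign    # systemd units it would install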

I'm all for 'not managing machines' but I'm not sure it has to look
exactly like OCP.  Seems the OCP installer and CVO could be
adapted/replaced with something else, MCD adapted, pretty much
everything else works the same.

On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman  wrote:
>
>
>
>
> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:
>>
>> I tried FCoS prior to the release by using the assembler on github.
>> Too much secret sauce in how to actually construct an image.  I
>> thought atomic was much more polished, not really sure what the
>> value-add of ignition is in this usecase.  Just give me a way to build
>> simple image pipelines and I don't need ignition.  To that end, there
>> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
>> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
>> supporting the mcd-once-from to parse ignition on RHEL, we don't need
>> ignition to actually install okd.  To me, it seems FCoS was created
>> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
>> actually solves anyone's needs relative to atomic.  It feels like we
>> jumped the shark on this one.
>
>
> That’s feedback that’s probably something you should share in the fcos forums 
> as well.  I will say that I find the OCP + RHEL experience unsatisfying and 
> doesn't truly live up to what RHCOS+OCP can do (since it lacks the key 
> features like ignition and immutable hosts).  Are you saying you'd prefer to 
> have more of a "DIY kube bistro" than the "highly opinionated, totally 
> integrated OKD" proposal?  I think that's a good question the community 
> should get a chance to weigh in on (in my original email that was the 
> implicit question - do you want something that looks like OCP4, or something 
> that is completely different).
>
>>
>>
>> I'd like to see OKD be distro-independent.  Obviously Fedora should be
>> our primary target (I'd argue Fedora over FCoS), but I think it should
>> be true upstream software in the sense that apache2 http server is
>> upstream and not distro specific.  To this end, perhaps it makes sense
>> to consume k/k instead of openshift/origin for okd.  OKD should be
>> free to do wild and crazy things independently of the enterprise
>> product.  Perhaps there's a usecase for treating k/k vs
>> openshift/origin as a swappable base layer.
>
>
> That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be 
> happy to see people excited about reusing cvo / mcd and be able to mix and 
> match, but most of the things here would be a huge investment to build.  In 
> my original email I might call this the “I want to build my own distro" - if 
> that's what people want to build, I think we can do things to enable it.  But 
> it would probably not be "openshift" in the same way.
>
>>
>>
>> It would be nice to have a more native kubernetes place to develop our
>> components against so we can upstream them, or otherwise just build a
>> solid community around how we think kubernetes should be deployed and
>> consumed.  Similar to how Fedora has a package repository, we should
>> have a Kubernetes component repository (I realize operatorhub fulfills
>> some of this, but I'm talking about a place for OLM and things like
>> MCD to live).
>
>
> MCD is really tied to the OS.  The idea of a generic MCD seems like it loses 
> the value of MCD being specific to an OS.
>
> I do think there are two types of components we have - things designed to 
> work well with kube, and things designed to work well with "openshift the distro".

Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M  wrote:

> Ah, this raises an interesting discussion I've been wanting to have for a
> while.
>
> There are potentially lots of things you could call a distro.
>
> Most linux distros are made up of several layers:
> 1. boot loader - components to get the kernel running
> 2. kernel - provides a place to run higher level software
> 3. os level services - singletons needed to really call the os an os.
> (dhcp, systemd, dbus, etc)
> 4. prebuilt/tested, generic software/services - workload (mysql, apache,
> firefox, gnome, etc)
>
> For the sake of discussion, let's map these layers a bit, and assume that the
> openshift specific components can be added to a vanilla kubernetes. We then
> have
>
> 1. linux distro (could be k8s specific and micro)
> 2. kubernetes control plane & kubelets
> 3. openshift components (auth, ingress, cicd/etc)
> 4. ?  (operators + containers, helm + containers, etc)
>
> openshift used to be defined as being 1-3.
>

> As things like aks/eks/gke make it easy to deploy 1-2, maybe openshift
> should really become modular so it focuses more on 3 and 4.
>

That's interesting that you'd say that.  I think kube today is like
"install a kernel with bash and serial port magic", whereas OpenShift 4 is
"here's a compose, an installer, a disk formatter, yum, yum repos,
lifecycle, glibc, optional packages, and sys utils".  I don't know if you
can extend the analogy there (if you want to use EKS, you're effectively
running on someone's VPS, but you can only use their distro and you can't
change anything), but definitely a good debate.


>
> As for having something that provides a #1 that is super tiny/easy to
> maintain so that you can do #2 on top easily, I'm for that as well, but
> should be decoupled from 3-4 I think. Should you be able to switch out your
> #1 for someone else's #1 while keeping the rest? That's the question from
> previous in the thread.
>

I think the analogy I've been using is that openshift is a proper distro in
the sense that you don't take someone's random kernel and use it with
someone else's random glibc and a third party's random gcc, but you might
not care about the stuff on top.  The things in 3 for kube feel more like
glibc than "which version of Firefox do I install", since a cluster without
ingress isn't very useful.


>
> #4 I think is very important and while the operator framework is starting
> to make some inroads on it, there is still a lot of work to do to make an
> equivalent of the 'redhat' distro of software that runs on k8s.
>
> A lot of focus has been on making a distro out of k8s, but it's really
> mostly been at the level of "how do I get a kernel booted/upgraded". I think
> the more important distro thing #4 is how do you make a distribution of
> prebuilt, easy to install software to run on top of k8s. Redhat's distro is
> really 99% userspace and a bit of getting the thing booted.
>

Really?  I don't think that's true at all - I'd flip it around and say it's
85% booted, with 15% user space today.  I'd probably draw the line at auth,
ingress, and olm as being "the top of the bottom".   OpenShift 4 is 100%
about making that bottom layer just working, rather than being about OLM up.


> Its value is in having a suite of prebuilt, tested, stable, and easily
> installable/upgradable software  with a team of humans that can provide
> support for it. The kernel/bootloader part is really just a means to enable
> #4. No one installs a kernel/os just to get a kernel. This part is
> currently lacking. Where is the equivalent of Redhat/Centos/Fedora for #4?
>
> In the context of OKD, which of these layers is OKD focused on?
>

In the proposal before it was definitely 1-3 (which is the same as the OCP4
path).  If you only care about 4, you're talking more about OLM on top of
Kube, which is more like JBoss or something like Homebrew on Mac (you don't
upgrade your Mac via Homebrew, but you do consume lots of stuff out there).


>
> Thanks,
> Kevin
>


RE: Follow up on OKD 4

2019-07-24 Thread Fox, Kevin M
Ah, this raises an interesting discussion I've been wanting to have for a while.

There are potentially lots of things you could call a distro.

Most linux distros are made up of several layers:
1. boot loader - components to get the kernel running
2. kernel - provides a place to run higher level software
3. os level services - singletons needed to really call the os an os. (dhcp, 
systemd, dbus, etc)
4. prebuilt/tested, generic software/services - workload (mysql, apache, 
firefox, gnome, etc)

For the sake of discussion, let's map these layers a bit, and assume that the
openshift specific components can be added to a vanilla kubernetes. We then have

1. linux distro (could be k8s specific and micro)
2. kubernetes control plane & kubelets
3. openshift components (auth, ingress, cicd/etc)
4. ?  (operators + containers, helm + containers, etc)

openshift used to be defined as being 1-3.

As things like aks/eks/gke make it easy to deploy 1-2, maybe openshift should 
really become modular so it focuses more on 3 and 4.

As for having something that provides a #1 that is super tiny/easy to maintain 
so that you can do #2 on top easily, I'm for that as well, but should be 
decoupled from 3-4 I think. Should you be able to switch out your #1 for 
someone else's #1 while keeping the rest? That's the question from previous in 
the thread.

#4 I think is very important and while the operator framework is starting to 
make some inroads on it, there is still a lot of work to do to make an 
equivalent of the 'redhat' distro of software that runs on k8s.

A lot of focus has been on making a distro out of k8s, but it's really mostly 
been at the level of "how do I get a kernel booted/upgraded". I think the more 
important distro thing #4 is how do you make a distribution of prebuilt, easy 
to install software to run on top of k8s. Redhat's distro is really 99% 
userspace and a bit of getting the thing booted. Its value is in having a suite 
of prebuilt, tested, stable, and easily installable/upgradable software  with a 
team of humans that can provide support for it. The kernel/bootloader part is 
really just a means to enable #4. No one installs a kernel/os just to get a 
kernel. This part is currently lacking. Where is the equivalent of 
Redhat/Centos/Fedora for #4?

In the context of OKD, which of these layers is OKD focused on?

Thanks,
Kevin


From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Clayton Coleman 
[ccole...@redhat.com]
Sent: Wednesday, July 24, 2019 9:04 AM
To: Michael Gugino
Cc: users; dev
Subject: Re: Follow up on OKD 4




On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:
I tried FCoS prior to the release by using the assembler on github.
Too much secret sauce in how to actually construct an image.  I
thought atomic was much more polished, not really sure what the
value-add of ignition is in this usecase.  Just give me a way to build
simple image pipelines and I don't need ignition.  To that end, there
should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
supporting the mcd-once-from to parse ignition on RHEL, we don't need
ignition to actually install okd.  To me, it seems FCoS was created
just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
actually solves anyone's needs relative to atomic.  It feels like we
jumped the shark on this one.

That’s feedback that’s probably something you should share in the fcos forums 
as well.  I will say that I find the OCP + RHEL experience unsatisfying and 
doesn't truly live up to what RHCOS+OCP can do (since it lacks the key features 
like ignition and immutable hosts).  Are you saying you'd prefer to have more 
of a "DIY kube bistro" than the "highly opinionated, totally integrated OKD" 
proposal?  I think that's a good question the community should get a chance to 
weigh in on (in my original email that was the implicit question - do you want 
something that looks like OCP4, or something that is completely different).


I'd like to see OKD be distro-independent.  Obviously Fedora should be
our primary target (I'd argue Fedora over FCoS), but I think it should
be true upstream software in the sense that apache2 http server is
upstream and not distro specific.  To this end, perhaps it makes sense
to consume k/k instead of openshift/origin for okd.  OKD should be
free to do wild and crazy things independently of the enterprise
product.  Perhaps there's a usecase for treating k/k vs
openshift/origin as a swappable base layer.

That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be 
happy to see people excited about reusing cvo / mcd and be able to mix and 
match, but most of the things here would be a huge investment to build.  In my 
original email I might call this the “I want to build my own distro" - if 
that's what people want to build, I think we can do things to enable it.  But 
it would probably not be "openshift" in the same way.

Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
On Wed, Jul 24, 2019 at 12:18 PM Fox, Kevin M  wrote:

> So, last I heard OpenShift was starting to modularize, so it could load
> the OpenShift parts as extensions to the kube-apiserver? Has this been
> completed? Maybe the idea below of being able to deploy vanilla k8s is
> workable as the OpenShift parts could easily be added on top?
>

OpenShift in 3.x was a core control plane plus 3-5 components.  In 4.x it's
a kube control plane, a bunch of core wiring, the OS, the installer, and
then a good 30 other components.  OpenShift 4 is so far beyond "vanilla
kube + extensions" now that while it's probably technically possible to do
that, it's less of "openshift" and more of "a kube cluster with a few
APIs".  I.e. openshift is machine management, automated updates, integrated
monitoring, etc.


>
> Thanks,
> Kevin
> 
> From: dev-boun...@lists.openshift.redhat.com [
> dev-boun...@lists.openshift.redhat.com] on behalf of Michael Gugino [
> mgug...@redhat.com]
> Sent: Wednesday, July 24, 2019 7:40 AM
> To: Clayton Coleman
> Cc: users; dev
> Subject: Re: Follow up on OKD 4
>
> I tried FCoS prior to the release by using the assembler on github.
> Too much secret sauce in how to actually construct an image.  I
> thought atomic was much more polished, not really sure what the
> value-add of ignition is in this usecase.  Just give me a way to build
> simple image pipelines and I don't need ignition.  To that end, there
> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> ignition to actually install okd.  To me, it seems FCoS was created
> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> actually solves anyone's needs relative to atomic.  It feels like we
> jumped the shark on this one.
>
> I'd like to see OKD be distro-independent.  Obviously Fedora should be
> our primary target (I'd argue Fedora over FCoS), but I think it should
> be true upstream software in the sense that apache2 http server is
> upstream and not distro specific.  To this end, perhaps it makes sense
> to consume k/k instead of openshift/origin for okd.  OKD should be
> free to do wild and crazy things independently of the enterprise
> product.  Perhaps there's a usecase for treating k/k vs
> openshift/origin as a swappable base layer.
>
> It would be nice to have a more native kubernetes place to develop our
> components against so we can upstream them, or otherwise just build a
> solid community around how we think kubernetes should be deployed and
> consumed.  Similar to how Fedora has a package repository, we should
> have a Kubernetes component repository (I realize operatorhub fulfills
> some of this, but I'm talking about a place for OLM and things like
> MCD to live).
>
> I think we could integrate with existing package managers via a
> 'repo-in-a-container' type strategy for those not using ostree.
>
> As far as slack vs IRC, I vote IRC or any free software solution (but
> my preference is IRC because it's simple and I like it).
>
> On Sun, Jul 21, 2019 at 12:28 PM Clayton Coleman 
> wrote:
> >
> >
> >
> > On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
> >>
> >> Once upon a time Freenode #openshift-dev was vibrant with loads of
> activity and publicly available logs. I jumped in, asked questions, and Red
> Hatters came out of the woodwork and some amazing work was done.
> >>
> >> Perfect.
> >>
> >> Slack not so much. Since Monday there have been three comments with two
> reply threads. All this with 524 people. Crickets.
> >>
> >> Please explain how this is better. I’d really love to know why IRC
> ceased. It worked and worked brilliantly.
> >
> >
> > Is your concern about volume or location (irc vs slack)?
> >
> > Re volume: It should be relatively easy to move some common discussion
> types into the #openshift-dev slack channel (especially triage / general
> QA) that might be distributed to other various slack channels today (both
> private and public), and I can take the follow up to look into that.  Some
> of the volume that was previously in IRC moved to these slack channels, but
> they're not anything private (just convenient).
> >
> > Re location:  I don't know how many people want to go back to IRC from
> slack, but that's a fairly easy survey to do here if someone can volunteer
> to drive that, and I can run the same one internally.  Some of it is
> inertia - people have to be in slack sig-* channels - and some of it is
> preference (in that IRC is an inferior experience for long running communication).

RE: Follow up on OKD 4

2019-07-24 Thread Fox, Kevin M
So, last I heard OpenShift was starting to modularize, so it could load the 
OpenShift parts as extensions to the kube-apiserver? Has this been completed? 
Maybe the idea below of being able to deploy vanilla k8s is workable as the 
OpenShift parts could easily be added on top?

Thanks,
Kevin

From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Michael Gugino 
[mgug...@redhat.com]
Sent: Wednesday, July 24, 2019 7:40 AM
To: Clayton Coleman
Cc: users; dev
Subject: Re: Follow up on OKD 4

I tried FCoS prior to the release by using the assembler on github.
Too much secret sauce in how to actually construct an image.  I
thought atomic was much more polished, not really sure what the
value-add of ignition is in this usecase.  Just give me a way to build
simple image pipelines and I don't need ignition.  To that end, there
should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
supporting the mcd-once-from to parse ignition on RHEL, we don't need
ignition to actually install okd.  To me, it seems FCoS was created
just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
actually solves anyone's needs relative to atomic.  It feels like we
jumped the shark on this one.

I'd like to see OKD be distro-independent.  Obviously Fedora should be
our primary target (I'd argue Fedora over FCoS), but I think it should
be true upstream software in the sense that apache2 http server is
upstream and not distro specific.  To this end, perhaps it makes sense
to consume k/k instead of openshift/origin for okd.  OKD should be
free to do wild and crazy things independently of the enterprise
product.  Perhaps there's a usecase for treating k/k vs
openshift/origin as a swappable base layer.

It would be nice to have a more native kubernetes place to develop our
components against so we can upstream them, or otherwise just build a
solid community around how we think kubernetes should be deployed and
consumed.  Similar to how Fedora has a package repository, we should
have a Kubernetes component repository (I realize operatorhub fulfills
some of this, but I'm talking about a place for OLM and things like
MCD to live).

I think we could integrate with existing package managers via a
'repo-in-a-container' type strategy for those not using ostree.

As far as slack vs IRC, I vote IRC or any free software solution (but
my preference is IRC because it's simple and I like it).

On Sun, Jul 21, 2019 at 12:28 PM Clayton Coleman  wrote:
>
>
>
> On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
>>
>> Once upon a time Freenode #openshift-dev was vibrant with loads of activity 
>> and publicly available logs. I jumped in, asked questions, and Red Hatters 
>> came out of the woodwork and some amazing work was done.
>>
>> Perfect.
>>
>> Slack not so much. Since Monday there have been three comments with two 
>> reply threads. All this with 524 people. Crickets.
>>
>> Please explain how this is better. I’d really love to know why IRC ceased. 
>> It worked and worked brilliantly.
>
>
> Is your concern about volume or location (irc vs slack)?
>
> Re volume: It should be relatively easy to move some common discussion types 
> into the #openshift-dev slack channel (especially triage / general QA) that 
> might be distributed to other various slack channels today (both private and 
> public), and I can take the follow up to look into that.  Some of the volume 
> that was previously in IRC moved to these slack channels, but they're not 
> anything private (just convenient).
>
> Re location:  I don't know how many people want to go back to IRC from slack, 
> but that's a fairly easy survey to do here if someone can volunteer to drive 
> that, and I can run the same one internally.  Some of it is inertia - people 
> have to be in slack sig-* channels - and some of it is preference (in that 
> IRC is an inferior experience for long running communication).
>
>>
>>
>> There are mentions of sigs and bits and pieces, but absolutely no progress. 
>> I fail to see why anyone would want to regress. OCP4 may be brilliant, but as 
>> I said in a private email, without upstream there is no culture or insurance 
>> we’ve come to love from decades of heart and soul.
>>
>> Ladies and gentlemen, this is essentially getting to the point the community 
>> is being abandoned. Man years of work acknowledged with the roadmap pulled 
>> out from under us.
>
>
> I don't think that's a fair characterization, but I understand why you feel 
> that way and we are working to get the 4.x work moving.  The FCoS team as 
> mentioned just released their first preview last week, I've been working 
> with Diane and others to identify who on the team is going to take point on 
> the design work.

Re: Follow up on OKD 4

2019-07-24 Thread Justin Cook
On 24 Jul 2019, 04:57 -0500, Daniel Comnea , wrote:
> > > > >
> > > > >
> > > > > All i'm trying to say with the above is:
> > > > >
> > > > > Should we go with IRC as a form of communication we should then be 
> > > > > ready to have bodies lined up to:
> > > > >
> > > > > • look after and admin the IRC channels.
> > > > > • enable the IRC log channels and also filter out the noise to be 
> > > > > consumable (not just stream the logs somewhere and tick the box)
> > > > >
> > >
> > > Easy enough. It’s been done time and again. Let’s give it a whirl. Since 
> > > I’m the one complaining perhaps I can put my name in for consideration.
> > >
> > [DC]: I understood not everyone is okay with logging any activity due to 
> > GDPR, so I think this goes off the table

GDPR has very little to do with this and can be easily mitigated. Your 
communication on IRC is logged via handles and IP addresses, and the right to 
be forgotten only applies to personally identifiable information and is not 
absolute. As for the public domain, when someone enters a public channel, there 
can be an explicit statement that they consent to the information they provide 
entering the public domain. Their continued use of the channel implies they 
consent. If someone wants the handle (say, an email address) scrubbed, then so 
be it. Otherwise, there’s nothing of interest. But, yet again, if they used the 
channel with explicit notification, their use is consent.

Using GDPR as an excuse is fear-mongering by people who don’t understand it.

Justin Cook


Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:

> I tried FCoS prior to the release by using the assembler on github.
> Too much secret sauce in how to actually construct an image.  I
> thought atomic was much more polished, not really sure what the
> value-add of ignition is in this usecase.  Just give me a way to build
> simple image pipelines and I don't need ignition.  To that end, there
> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> ignition to actually install okd.  To me, it seems FCoS was created
> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> actually solves anyone's needs relative to atomic.  It feels like we
> jumped the shark on this one.
>

That’s feedback that’s probably something you should share in the fcos
forums as well.  I will say that I find the OCP + RHEL experience
unsatisfying and doesn't truly live up to what RHCOS+OCP can do (since it
lacks the key features like ignition and immutable hosts).  Are you saying
you'd prefer to have more of a "DIY kube bistro" than the "highly
opinionated, totally integrated OKD" proposal?  I think that's a good
question the community should get a chance to weigh in on (in my original
email that was the implicit question - do you want something that looks
like OCP4, or something that is completely different).


>
> I'd like to see OKD be distro-independent.  Obviously Fedora should be
> our primary target (I'd argue Fedora over FCoS), but I think it should
> be true upstream software in the sense that apache2 http server is
> upstream and not distro specific.  To this end, perhaps it makes sense
> to consume k/k instead of openshift/origin for okd.  OKD should be
> free to do wild and crazy things independently of the enterprise
> product.  Perhaps there's a usecase for treating k/k vs
> openshift/origin as a swappable base layer.
>

That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be
happy to see people excited about reusing cvo / mcd and be able to mix and
match, but most of the things here would be a huge investment to build.  In
my original email I might call this the “I want to build my own distro" -
if that's what people want to build, I think we can do things to enable
it.  But it would probably not be "openshift" in the same way.


>
> It would be nice to have a more native kubernetes place to develop our
> components against so we can upstream them, or otherwise just build a
> solid community around how we think kubernetes should be deployed and
> consumed.  Similar to how Fedora has a package repository, we should
> have a Kubernetes component repository (I realize operatorhub fulfills
> some of this, but I'm talking about a place for OLM and things like
> MCD to live).
>

MCD is really tied to the OS.  The idea of a generic MCD seems like it
loses the value of MCD being specific to an OS.

I do think there are two types of components we have - things designed to
work well with kube, and things designed to work well with "openshift the
distro".  The former can be developed against Kube (like operator hub /
olm) but the latter doesn't really make sense to develop against unless it
matches what is being built.  In that vein, OKD4 looking not like OCP4
wouldn't benefit you or the components.


>
> I think we could integrate with existing package managers via a
> 'repo-in-a-container' type strategy for those not using ostree.
>

A big part of openshift 4 is "we're tired of managing machines".  It sounds
like you are arguing for "let people do whatever", which is definitely
valid (but doesn't sound like openshift 4).


>
> As far as slack vs IRC, I vote IRC or any free software solution (but
> my preference is IRC because it's simple and I like it).
>


Re: Follow up on OKD 4

2019-07-24 Thread Michael Gugino
I tried FCoS prior to the release by using the assembler on github.
Too much secret sauce in how to actually construct an image.  I
thought atomic was much more polished, not really sure what the
value-add of ignition is in this usecase.  Just give me a way to build
simple image pipelines and I don't need ignition.  To that end, there
should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
supporting the mcd-once-from to parse ignition on RHEL, we don't need
ignition to actually install okd.  To me, it seems FCoS was created
just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
actually solves anyone's needs relative to atomic.  It feels like we
jumped the shark on this one.

I'd like to see OKD be distro-independent.  Obviously Fedora should be
our primary target (I'd argue Fedora over FCoS), but I think it should
be true upstream software in the sense that apache2 http server is
upstream and not distro specific.  To this end, perhaps it makes sense
to consume k/k instead of openshift/origin for okd.  OKD should be
free to do wild and crazy things independently of the enterprise
product.  Perhaps there's a usecase for treating k/k vs
openshift/origin as a swappable base layer.

It would be nice to have a more native kubernetes place to develop our
components against so we can upstream them, or otherwise just build a
solid community around how we think kubernetes should be deployed and
consumed.  Similar to how Fedora has a package repository, we should
have a Kubernetes component repository (I realize operatorhub fulfills
some of this, but I'm talking about a place for OLM and things like
MCD to live).

I think we could integrate with existing package managers via a
'repo-in-a-container' type strategy for those not using ostree.
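
For instance, a rough sketch of what I mean (image, paths, and repo names
here are illustrative only): serve the host RPMs over HTTP from a container
and point the package manager at it:

$ createrepo_c /srv/okd-host-repo              # generate repo metadata
$ podman run -d -p 8080:80 \
    -v /srv/okd-host-repo:/usr/share/nginx/html:ro,Z \
    docker.io/library/nginx
$ printf '[okd-host]\nname=OKD host packages\nbaseurl=http://localhost:8080/\ngpgcheck=0\n' \
    | sudo tee /etc/yum.repos.d/okd-host.repo
$ sudo yum upgrade                             # hosts upgrade from the containerized repo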

As far as slack vs IRC, I vote IRC or any free software solution (but
my preference is IRC because it's simple and I like it).

On Sun, Jul 21, 2019 at 12:28 PM Clayton Coleman  wrote:
>
>
>
> On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
>>
>> Once upon a time Freenode #openshift-dev was vibrant with loads of activity 
>> and publicly available logs. I jumped in, asked questions, and Red Hatters 
>> came out of the woodwork and some amazing work was done.
>>
>> Perfect.
>>
>> Slack not so much. Since Monday there have been three comments with two 
>> reply threads. All this with 524 people. Crickets.
>>
>> Please explain how this is better. I’d really love to know why IRC ceased. 
>> It worked and worked brilliantly.
>
>
> Is your concern about volume or location (irc vs slack)?
>
> Re volume: It should be relatively easy to move some common discussion types 
> into the #openshift-dev slack channel (especially triage / general QA) that 
> might be distributed to other various slack channels today (both private and 
> public), and I can take the follow up to look into that.  Some of the volume 
> that was previously in IRC moved to these slack channels, but they're not 
> anything private (just convenient).
>
> Re location:  I don't know how many people want to go back to IRC from slack, 
> but that's a fairly easy survey to do here if someone can volunteer to drive 
> that, and I can run the same one internally.  Some of it is inertia - people 
> have to be in slack sig-* channels - and some of it is preference (in that 
> IRC is an inferior experience for long running communication).
>
>>
>>
>> There are mentions of sigs and bits and pieces, but absolutely no progress. 
>> I fail to see why anyone would want to regress. OCP4 may be brilliant, but as 
>> I said in a private email, without upstream there is no culture or insurance 
>> we’ve come to love from decades of heart and soul.
>>
>> Ladies and gentlemen, this is essentially getting to the point the community 
>> is being abandoned. Man years of work acknowledged with the roadmap pulled 
>> out from under us.
>
>
> I don't think that's a fair characterization, but I understand why you feel 
> that way and we are working to get the 4.x work moving.  The FCoS team as 
> mentioned just released their first preview last week, I've been working with 
> Diane and others to identify who on the team is going to take point on the 
> design work, and there's a draft in flight that I saw yesterday.  Every 
> component of OKD4 *besides* the FCoS integration is public and has been 
> public for months.
>
> I do want to make sure we can get a basic preview up as quickly as possible - 
> one option I was working on with the legal side was whether we could offer a 
> short term preview of OKD4 based on top of RHCoS.  That is possible if folks 
> are willing to accept the terms on try.openshift.com in order to access it in 
> the very short term (and then once FCoS is available that would not be 
> necessary).  If that's an option you or anyone on this thread are interested 
> in please let me know, just as something we can do to speed up.

Re: Follow up on OKD 4

2019-07-22 Thread Daniel Comnea
On Mon, Jul 22, 2019 at 8:52 AM Justin Cook  wrote:

> On 22 Jul 2019, 00:07 +0100, Gleidson Nascimento , wrote:
>
> I'm with Daniel, I believe it is easier to attract help by using Slack
> instead of IRC.
>
>
> My experience over many years — especially with OCP3 — IRC with public
> logs smashes Slack. It’s not comparable. The proof is in the pudding.
> Compare the public IRC logs with the Slack channel.
>
> The way I see it is we should practice openness in everything. Slack is
> proprietary. Google does not index the logs. That’s a full stop for me. As
> a matter of fact, many others agree. Just search it. The most disappointing
> thing is that for over two decades *open* IRC has been used with *open*
> mailing lists and *open* documentation, and now there is a trend of using
> fancy (arguably not) things that own the data we produce; we have to pay
> them to index it for us, and in the end it’s not publicly available — see a
> theme emerging?
>
> So go ahead and have your Slack with three threads per week and we’ll see
> if your *belief* stays the same. The wide open searchable public IRC is
> the heavyweight champion that’s never let us down. As a matter of fact,
> being completely open helped build OCP3 and we all know how that worked
> out.
>
*Justin* - let me provide some info from my side; I'm not trying to get
into a personal religious debate here, however I think we need to
acknowledge a few things:

You are saying:


   - *IRC with public logs smashes Slack*
   - *Slack is proprietary. Google does not index the logs.*

My response:

I totally agree with that, but let's do a quick reality check taking some
IRC channels as examples, shall we?


   - the ansible IRC channel doesn't log the conversation - do the comments
   [1] and [2] resonate with you? They do for me, and that is a huge -1 from
   my side.
   - the centos-devel/centos channels don't log the conversation. That said,
   the centos meetings (i.e. PaaS SIG) do get logged, per SIG. That in itself
   is very useful; however, as a guy who consumed the output for the last year
   as PaaS SIG chair/member, I will say it is not appealing to go over the
   output if a meeting had high traffic (same way, if you have a 6 hour
   meeting recording, will you watch it from A to Z? ;) )
   - fedora-coreos does log [3], but if I tune in every morning to see what
   has been discussed, there is a lot of noise caused by joins/leaves
   - the openshift/openshift-dev channels had something at [4], but does it
   still work?


All I'm trying to say with the above is:

Should we go with IRC as a form of communication, we should then be ready to
have bodies lined up to:


   - look after and admin the IRC channels.
   - enable the IRC log channels and also filter out the noise to be
   consumable (not just stream the logs somewhere and tick the box)

In addition to the channel logs, my main requirement is to access the IRC
channels from any device and not lose track of what has been discussed.
A respected gentleman involved in various open source projects once wrote
[5], and so with that I'd say:

   - who will take on board the setup so everyone can benefit from it?


If you swing to slack, I'd say:

   - K8s slack is free in that neither you nor I nor others pay for it, and
   everyone can join there
   - the OpenShift Commons slack channel is also free, RH is paying the bill
   (another investment from their side); however, as said, Diane set up that
   place initially with a different scope.
   - once you've logged in you can scroll back many months in the past
   - you get the ability to share code snippets -> in IRC you don't. You
   could argue that folks can use a github gist or any pastebin service;
   however, the content can be deleted/expire and so we go back to square one


[1] https://github.com/ansible/community/issues/242#issuecomment-334239958
[2] https://github.com/ansible/community/issues/242#issuecomment-336890994
[3] https://echelog.com/logs/browse/fedora-coreos/1563746400
[4] https://botbot.me/freenode/openshift-dev/
[5]
https://doughellmann.com/blog/2015/03/12/deploying-nested-znc-services-with-ansible/

You are also saying:

   - *Slack with three threads per week*

How is the traffic on the fedora-coreos OR centos-devel channels going? Have
you seen high volume?

I think it is unfair to say that; in reality, even on the mentioned IRC
channels we don't see much traffic.  #ansible is an exception, but that is
because ansible core devs (no idea how many are RH employees vs the rest)
do hang around there.

In the end, I think we need to take a step back and ask ourselves:

   - who is involved in OKD?
  - who is contributing - with tests, integration, docs, logistics etc.
  (if I can use an analogy - *helping produce the wine*)
  - who is consuming it (the analogy - *consuming/drinking the wine*)
   - what is the scope of OKD based on the resources available?
  - does OKD afford/have capacity for an Infra team to look after the
  tools? any volunteers? :)

Re: Follow up on OKD 4

2019-07-22 Thread Justin Cook
On 22 Jul 2019, 00:07 +0100, Gleidson Nascimento , wrote:
> I'm with Daniel, I believe it is easier to attract help by using Slack 
> instead of IRC.

My experience over many years — especially with OCP3 — IRC with public logs 
smashes Slack. It’s not comparable. The proof is in the pudding. Compare the 
public IRC logs with the Slack channel.

The way I see it is we should practice openness in everything. Slack is 
proprietary. Google does not index the logs. That’s a full stop for me. As a 
matter of fact, many others agree. Just search it. The most disappointing thing 
is that for over two decades open IRC has been used with open mailing lists and 
open documentation, and now there is a trend of using fancy (arguably not) 
things that own the data we produce; we have to pay them to index it for us, 
and in the end it’s not publicly available — see a theme emerging?

So go ahead and have your Slack with three threads per week and we’ll see if 
your belief stays the same. The wide open searchable public IRC is the 
heavyweight champion that’s never let us down. As a matter of fact, being 
completely open helped build OCP3 and we all know how that worked out.

Justin


Re: Follow up on OKD 4

2019-07-21 Thread Gleidson Nascimento
I'm with Daniel, I believe it is easier to attract help by using Slack instead 
of IRC.

From: dev-boun...@lists.openshift.redhat.com 
 on behalf of Daniel Comnea 

Sent: 20 July 2019 1:02 PM
To: Christian Glombek 
Cc: users ; dev 

Subject: Re: Follow up on OKD 4

Hi Christian,

Welcome, and thanks for volunteering to kick off this effort.

My vote goes to #openshift-dev slack too; the OpenShift Commons Slack scope 
was/is a bit different, geared towards ISVs.

IRC - personally I have no problem with it; however, the chances to attract 
more folks (especially non-RH employees) who might be willing to help grow the 
OKD community are higher on slack.

On Fri, Jul 19, 2019 at 9:33 PM Christian Glombek  wrote:
+1 for using kubernetes #openshift-dev slack for the OKD WG meetings


On Fri, Jul 19, 2019 at 6:49 PM Clayton Coleman  wrote:
The kube #openshift-dev slack might also make sense, since we have 518 people 
there right now

On Fri, Jul 19, 2019 at 12:46 PM Christian Glombek  wrote:
Hi everyone,

first of all, I'd like to thank Clayton for kicking this off!

As I only just joined this ML, let me quickly introduce myself:

I am an Associate Software Engineer on the OpenShift machine-config-operator 
(mco) team and I'm based out of Berlin, Germany.
Last year, I participated in Google Summer of Code as a student with Fedora IoT 
and joined Red Hat shortly thereafter to work on the Fedora CoreOS (FCOS) team.
I joined the MCO team when it was established earlier this year.

Having been a Fedora/Atomic community member for some years, I'm a strong 
proponent of using FCOS as base OS for OKD and would like to see it enabled :)
As I work on the team that looks after the MCO, which is one of the parts of 
OpenShift that will need some adaptation in order to support another base OS, I 
am confident I can help with contributions there
(of course I don't want to shut the door for other OSes to be used as base if 
people are interested in that :).

Proposal: Create WG and hold regular meetings

I'd like to propose the creation of the OKD Working Group that will hold 
bi-weekly meetings.
(or should we call it a SIG? Also open to suggestions to find the right venue: 
IRC?, OpenShift Commons Slack?).

I'll survey some people in the coming days to find a suitable meeting time.

If you have any feedback or suggestions, please feel free to reach out, either 
via this list or personally!
I can be found as lorbus on IRC/Fedora, @lorbus42 on Twitter, or simply via 
email :)

I'll send out more info here ASAP. Stay tuned!


With kind regards

CHRISTIAN GLOMBEK
Associate Software Engineer

Red Hat GmbH, registered seat: Grasbrunn
Commercial register: Amtsgericht Muenchen, HRB 153243
Managing directors: Charles Cachera, Michael O'Neill, Thomas Savage, Eric 
Shander


On Wed, Jul 17, 2019 at 10:45 PM Clayton Coleman  wrote:
Thanks to everyone who provided feedback over the last few weeks.  There's 
been a lot of good feedback, including some things I'll try to capture here:

* More structured working groups would be good
* Better public roadmap
* Concrete schedule for OKD 4
* Concrete proposal for OKD 4

I've heard generally positive comments about the suggestions and philosophy in 
the last email, with a desire for more details around what the actual steps 
might look like, so I think it's safe to say that the idea of "continuously up 
to date Kubernetes distribution" resonated.  We'll continue to take feedback 
along this direction (private or public).

Since 4 was the kickoff for this discussion, and with the recent release of the 
Fedora CoreOS beta 
(https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/) figuring 
prominently in the discussions so far, I got some volunteers from that team to 
take point on setting up a working group (SIG?) around the initial level of 
integration and drafting a proposal.

Steve and Christian have both been working on Fedora CoreOS and graciously 
agreed to help drive the next steps on Fedora CoreOS and OKD potential 
integration into a proposal.  There's a rough level draft doc they plan to 
share - but for now I will turn this over to them and they'll help organize 
time / forum / process for kicking off this effort.  As that continues, we'll 
identify new SIGs to spawn off as necessary to cover other topics, including 
initial CI and release automation to deliver any necessary changes.

Thanks to everyone who gave feedback, and stay tuned here for more!

Re: Follow up on OKD 4

2019-07-21 Thread Clayton Coleman
On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:

> Once upon a time Freenode #openshift-dev was vibrant with loads of
> activity and publicly available logs. I jumped in, asked questions, and Red
> Hatters came out of the woodwork and some amazing work was done.
>
> Perfect.
>
> Slack not so much. Since Monday there have been three comments with two
> reply threads. All this with 524 people. Crickets.
>
> Please explain how this is better. I’d really love to know why IRC ceased.
> It worked and worked brilliantly.
>

Is your concern about volume or location (irc vs slack)?

Re volume: It should be relatively easy to move some common discussion
types into the #openshift-dev slack channel (especially triage / general
QA) that might be distributed to other various slack channels today (both
private and public), and I can take the follow up to look into that.  Some
of the volume that was previously in IRC moved to these slack channels, but
they're not anything private (just convenient).

Re location:  I don't know how many people want to go back to IRC from
slack, but that's a fairly easy survey to do here if someone can volunteer
to drive that, and I can run the same one internally.  Some of it is
inertia - people have to be in slack sig-* channels - and some of it is
preference (in that IRC is an inferior experience for long running
communication).


>
> There are mentions of sigs and bits and pieces, but absolutely no
> progress. I fail to see why anyone would want to regress. OCP4 maybe
> brilliant, but as I said in a private email, without upstream there is no
> culture or insurance we’ve come to love from decades of heart and soul.
>
> Ladies and gentlemen, this is essentially getting to the point the
> community is being abandoned. Man years of work acknowledged with the
> roadmap pulled out from under us.
>

I don't think that's a fair characterization, but I understand why you feel
that way and we are working to get the 4.x work moving.  The FCoS team as
mentioned just released their first preview last week, I've been working
with Diane and others to identify who on the team is going to take point on
the design work, and there's a draft in flight that I saw yesterday.  Every
component of OKD4 *besides* the FCoS integration is public and has been
public for months.

I do want to make sure we can get a basic preview up as quickly as possible
- one option I was working on with the legal side was whether we could
offer a short term preview of OKD4 based on top of RHCoS.  That is possible
if folks are willing to accept the terms on try.openshift.com in order to
access it in the very short term (and then once FCoS is available that
would not be necessary).  If that's an option you or anyone on this thread
are interested in please let me know, just as something we can do to speed
up.


>
> I completely understand the disruption caused by the acquisition. But,
> after kicking the tyres and our meeting a few weeks back, it’s been pretty
> quiet. The clock is ticking on corporate long-term strategies. Some of
> those corporates spent plenty of dosh on licensing OCP and hiring
> consultants to implement.
>

> Red Hat need to lead from the front. Get IRC revived, throw us a bone, and
> have us put our money where our mouth is — we’ll get involved. We’re
> begging for it.
>
> Until then we’re running out of patience via clientele and will need to
> start a community effort perhaps by forking OKD3 and integrating upstream.
> I am not interested in doing that. We shouldn’t have to.
>

In the spirit of full transparency, FCoS integrated into OKD is going to
take several months to get to the point where it meets the quality bar I'd
expect for OKD4.  If that timeframe doesn't work for folks, we can
definitely consider other options like having RHCoS availability behind a
terms agreement, a franken-OKD without host integration (which might take
just as long to get and not really be a step forward for folks vs 3), or
other, more dramatic options.  Have folks given FCoS a try this week?
https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/.
That's a great place to get started
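
For a quick local spin, here is a minimal sketch assuming you've downloaded
the QEMU image and written an Ignition config per that doc (the image file
name is illustrative; the download page has the current one):

$ xz -d fedora-coreos-qemu.x86_64.qcow2.xz
$ qemu-system-x86_64 -m 2048 -accel kvm -nographic \
    -drive if=virtio,file=fedora-coreos-qemu.x86_64.qcow2 \
    -fw_cfg name=opt/com.coreos/config,file=example.ign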

As always PRs and fixes to 3.x will continue to be welcomed and that effort
continues unabated.


Re: Follow up on OKD 4

2019-07-20 Thread Justin Cook
Once upon a time Freenode #openshift-dev was vibrant with loads of activity and 
publicly available logs. I jumped in, asked questions, and Red Hatters came 
out of the woodwork and some amazing work was done.

Perfect.

Slack not so much. Since Monday there have been three comments with two reply 
threads. All this with 524 people. Crickets.

Please explain how this is better. I’d really love to know why IRC ceased. It 
worked and worked brilliantly.

There are mentions of sigs and bits and pieces, but absolutely no progress. I 
fail to see why anyone would want to regress. OCP4 may be brilliant, but as I 
said in a private email, without upstream there is no culture or insurance 
we’ve come to love from decades of heart and soul.

Ladies and gentlemen, this is essentially getting to the point the community is 
being abandoned. Man years of work acknowledged with the roadmap pulled out 
from under us.

I completely understand the disruption caused by the acquisition. But, after 
kicking the tyres and our meeting a few weeks back, it’s been pretty quiet. The 
clock is ticking on corporate long-term strategies. Some of those corporates 
spent plenty of dosh on licensing OCP and hiring consultants to implement.

Red Hat need to lead from the front. Get IRC revived, throw us a bone, and have 
us put our money where our mouth is — we’ll get involved. We’re begging for it.

Until then we’re running out of patience via clientele and will need to start a 
community effort perhaps by forking OKD3 and integrating upstream. I am not 
interested in doing that. We shouldn’t have to.

Cheers,

Justin Cook
On 20 Jul 2019, 06:34 +0530, Daniel Comnea , wrote:
> Hi Christian,
>
> Welcome, and thanks for volunteering to kick off this effort.
>
> My vote goes to #openshift-dev slack too; the OpenShift Commons Slack scope 
> was/is a bit different, geared towards ISVs.
>
> IRC - personally I have no problem with it; however, the chances to attract 
> more folks (especially non-RH employees) who might be willing to help grow 
> the OKD community are higher on slack.
>
> > On Fri, Jul 19, 2019 at 9:33 PM Christian Glombek  
> > wrote:
> > > +1 for using kubernetes #openshift-dev slack for the OKD WG meetings
> > >
> > > >
> > > > On Fri, Jul 19, 2019 at 6:49 PM Clayton Coleman  
> > > > wrote:
> > > > > The kube #openshift-dev slack might also make sense, since we have 
> > > > > 518 people there right now
> > > > >
> > > > > > On Fri, Jul 19, 2019 at 12:46 PM Christian Glombek 
> > > > > >  wrote:
> > > > > > > Hi everyone,
> > > > > > >
> > > > > > > first of all, I'd like to thank Clayton for kicking this off!
> > > > > > >
> > > > > > > As I only just joined this ML, let me quickly introduce myself:
> > > > > > >
> > > > > > > I am an Associate Software Engineer on the OpenShift 
> > > > > > > machine-config-operator (mco) team and I'm based out of Berlin, 
> > > > > > > Germany.
> > > > > > > Last year, I participated in Google Summer of Code as a student 
> > > > > > > with Fedora IoT and joined Red Hat shortly thereafter to work on 
> > > > > > > the Fedora CoreOS (FCOS) team.
> > > > > > > I joined the MCO team when it was established earlier this year.
> > > > > > >
> > > > > > > Having been a Fedora/Atomic community member for some years, I'm 
> > > > > > > a strong proponent of using FCOS as base OS for OKD and would 
> > > > > > > like to see it enabled :)
> > > > > > > As I work on the team that looks after the MCO, which is one of 
> > > > > > > the parts of OpenShift that will need some adaptation in order to 
> > > > > > > support another base OS, I am confident I can help with 
> > > > > > > contributions there
> > > > > > > (of course I don't want to shut the door for other OSes to be 
> > > > > > > used as base if people are interested in that :).
> > > > > > >
> > > > > > > Proposal: Create WG and hold regular meetings
> > > > > > >
> > > > > > > I'd like to propose the creation of the OKD Working Group that 
> > > > > > > will hold bi-weekly meetings.
> > > > > > > (or should we call it a SIG? Also open to suggestions to find the 
> > > > > > > right venue: IRC?, OpenShift Commons Slack?).
> > > > > > >
> > > > > > > I'll survey some people in the coming days to find a suitable 
> > > > > > > meeting time.
> > > > > > >
> > > > > > > If you have any feedback or suggestions, please feel free to 
> > > > > > > reach out, either via this list or personally!
> > > > > > > I can be found as lorbus on IRC/Fedora, @lorbus42 on Twitter, or 
> > > > > > > simply via email :)
> > > > > > >
> > > > > > > I'll send out more info here ASAP. Stay tuned!
> > > > > > >
> > > > > > > With kind regards
> > > > > > >
> > > > > > > CHRISTIAN GLOMBEK
> > > > > > > Associate Software Engineer
> > > > > > >
> > > > > > > Red Hat GmbH, registered seat: Grasbrunn
> > > > > > > Commercial register: Amtsgericht Muenchen, HRB 153243
> > > > > > > Managing directors: Charles Cachera, Michael O'Neill, Thomas 
> > > > > > > Savage, Eric Shander

Re: Follow up on OKD 4

2019-07-19 Thread Daniel Comnea
Hi Christian,

Welcome, and thanks for volunteering to kick off this effort.

My vote goes to #openshift-dev slack too; the OpenShift Commons Slack scope
was/is a bit different, geared towards ISVs.

IRC - personally I have no problem with it; however, the chances to attract
more folks (especially non-RH employees) who might be willing to help grow
the OKD community are higher on slack.

On Fri, Jul 19, 2019 at 9:33 PM Christian Glombek 
wrote:

> +1 for using kubernetes #openshift-dev slack for the OKD WG meetings
>
>
> On Fri, Jul 19, 2019 at 6:49 PM Clayton Coleman 
> wrote:
>
>> The kube #openshift-dev slack might also make sense, since we have 518
>> people there right now
>>
>> On Fri, Jul 19, 2019 at 12:46 PM Christian Glombek 
>> wrote:
>>
>>> Hi everyone,
>>>
>>> first of all, I'd like to thank Clayton for kicking this off!
>>>
>>> As I only just joined this ML, let me quickly introduce myself:
>>>
>>> I am an Associate Software Engineer on the OpenShift
>>> machine-config-operator (mco) team and I'm based out of Berlin, Germany.
>>> Last year, I participated in Google Summer of Code as a student with
>>> Fedora IoT and joined Red Hat shortly thereafter to work on the Fedora
>>> CoreOS (FCOS) team.
>>> I joined the MCO team when it was established earlier this year.
>>>
>>> Having been a Fedora/Atomic community member for some years, I'm a
>>> strong proponent of using FCOS as the base OS for OKD and would like to see it
>>> enabled :)
>>> As I work on the team that looks after the MCO, which is one of the
>>> parts of OpenShift that will need some adaptation in order to support
>>> another base OS, I am confident I can help with contributions there
>>> (of course I don't want to shut the door on other OSes being used as
>>> base if people are interested in that :).
>>>
>>> Proposal: Create WG and hold regular meetings
>>>
>>> I'd like to propose the creation of the OKD Working Group that will hold
>>> bi-weekly meetings.
>>> (or should we call it a SIG? I'm also open to suggestions for the right
>>> venue: IRC? OpenShift Commons Slack?).
>>>
>>> I'll survey some people in the coming days to find a suitable meeting
>>> time.
>>>
>>> If you have any feedback or suggestions, please feel free to reach out,
>>> either via this list or personally!
>>> I can be found as lorbus on IRC/Fedora, @lorbus42 on Twitter, or simply
>>> via email :)
>>>
>>> I'll send out more info here ASAP. Stay tuned!
>>>
>>> With kind regards
>>>
>>> CHRISTIAN GLOMBEK
>>> Associate Software Engineer
>>>
>>> Red Hat GmbH, registered seat: Grasbrunn
>>> Commercial register: Amtsgericht Muenchen, HRB 153243
>>> Managing directors: Charles Cachera, Michael O'Neill, Thomas Savage, Eric 
>>> Shander
>>>
>>>
>>>
>>> On Wed, Jul 17, 2019 at 10:45 PM Clayton Coleman 
>>> wrote:
>>>
 Thanks to everyone who provided feedback over the last few weeks.
 There's been a lot of good feedback, including some things I'll try to
 capture here:

 * More structured working groups would be good
 * Better public roadmap
 * Concrete schedule for OKD 4
 * Concrete proposal for OKD 4

 I've heard generally positive comments about the suggestions and
 philosophy in the last email, with a desire for more details around what
 the actual steps might look like, so I think it's safe to say that the idea
 of "continuously up to date Kubernetes distribution" resonated.  We'll
 continue to take feedback along this direction (private or public).

 Since OKD 4 was the kickoff for this discussion, and with the recent
 release of the Fedora CoreOS beta (
 https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/) figuring
 prominently in the discussions so far, I got some volunteers from that team
 to take point on setting up a working group (SIG?) around the initial level
 of integration and drafting a proposal.

 Steve and Christian have both been working on Fedora CoreOS and
 graciously agreed to help drive the next steps on a proposal for potential
 Fedora CoreOS and OKD integration.  There's a rough draft doc
 they plan to share - but for now I will turn this over to them and they'll
 help organize time / forum / process for kicking off this effort.  As that
 continues, we'll identify new SIGs to spawn off as necessary to cover other
 topics, including initial CI and release automation to deliver any
 necessary changes.

 Thanks to everyone who gave feedback, and stay tuned here for more!

>>> ___
>>> users mailing list
>>> us...@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

Re: Follow up on OKD 4

2019-07-19 Thread Christian Glombek
+1 for using kubernetes #openshift-dev slack for the OKD WG meetings


On Fri, Jul 19, 2019 at 6:49 PM Clayton Coleman  wrote:

> The kube #openshift-dev slack might also make sense, since we have 518
> people there right now
>
> On Fri, Jul 19, 2019 at 12:46 PM Christian Glombek 
> wrote:
>
>> Hi everyone,
>>
>> first of all, I'd like to thank Clayton for kicking this off!
>>
>> As I only just joined this ML, let me quickly introduce myself:
>>
>> I am an Associate Software Engineer on the OpenShift
>> machine-config-operator (MCO) team and I'm based out of Berlin, Germany.
>> Last year, I participated in Google Summer of Code as a student with
>> Fedora IoT and joined Red Hat shortly thereafter to work on the Fedora
>> CoreOS (FCOS) team.
>> I joined the MCO team when it was established earlier this year.
>>
>> Having been a Fedora/Atomic community member for some years, I'm a strong
>> proponent of using FCOS as the base OS for OKD and would like to see it enabled
>> :)
>> As I work on the team that looks after the MCO, which is one of the parts
>> of OpenShift that will need some adaptation in order to support another
>> base OS, I am confident I can help with contributions there
>> (of course I don't want to shut the door on other OSes being used as
>> base if people are interested in that :).
>>
>> Proposal: Create WG and hold regular meetings
>>
>> I'd like to propose the creation of the OKD Working Group that will hold
>> bi-weekly meetings.
>> (or should we call it a SIG? I'm also open to suggestions for the right
>> venue: IRC? OpenShift Commons Slack?).
>>
>> I'll survey some people in the coming days to find a suitable meeting
>> time.
>>
>> If you have any feedback or suggestions, please feel free to reach out,
>> either via this list or personally!
>> I can be found as lorbus on IRC/Fedora, @lorbus42 on Twitter, or simply
>> via email :)
>>
>> I'll send out more info here ASAP. Stay tuned!
>>
>> With kind regards
>>
>> CHRISTIAN GLOMBEK
>> Associate Software Engineer
>>
>> Red Hat GmbH, registered seat: Grasbrunn
>> Commercial register: Amtsgericht Muenchen, HRB 153243
>> Managing directors: Charles Cachera, Michael O'Neill, Thomas Savage, Eric 
>> Shander
>>
>>
>>
>> On Wed, Jul 17, 2019 at 10:45 PM Clayton Coleman 
>> wrote:
>>
>>> Thanks to everyone who provided feedback over the last few weeks.
>>> There's been a lot of good feedback, including some things I'll try to
>>> capture here:
>>>
>>> * More structured working groups would be good
>>> * Better public roadmap
>>> * Concrete schedule for OKD 4
>>> * Concrete proposal for OKD 4
>>>
>>> I've heard generally positive comments about the suggestions and
>>> philosophy in the last email, with a desire for more details around what
>>> the actual steps might look like, so I think it's safe to say that the idea
>>> of "continuously up to date Kubernetes distribution" resonated.  We'll
>>> continue to take feedback along this direction (private or public).
>>>
>>> Since OKD 4 was the kickoff for this discussion, and with the recent release
>>> of the Fedora CoreOS beta (
>>> https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/) figuring
>>> prominently in the discussions so far, I got some volunteers from that team
>>> to take point on setting up a working group (SIG?) around the initial level
>>> of integration and drafting a proposal.
>>>
>>> Steve and Christian have both been working on Fedora CoreOS and
>>> graciously agreed to help drive the next steps on a proposal for potential
>>> Fedora CoreOS and OKD integration.  There's a rough draft doc
>>> they plan to share - but for now I will turn this over to them and they'll
>>> help organize time / forum / process for kicking off this effort.  As that
>>> continues, we'll identify new SIGs to spawn off as necessary to cover other
>>> topics, including initial CI and release automation to deliver any
>>> necessary changes.
>>>
>>> Thanks to everyone who gave feedback, and stay tuned here for more!
>>>
>> ___
>> users mailing list
>> us...@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Follow up on OKD 4

2019-07-19 Thread Clayton Coleman
The kube #openshift-dev slack might also make sense, since we have 518
people there right now

On Fri, Jul 19, 2019 at 12:46 PM Christian Glombek 
wrote:

> Hi everyone,
>
> first of all, I'd like to thank Clayton for kicking this off!
>
> As I only just joined this ML, let me quickly introduce myself:
>
> I am an Associate Software Engineer on the OpenShift
> machine-config-operator (MCO) team and I'm based out of Berlin, Germany.
> Last year, I participated in Google Summer of Code as a student with
> Fedora IoT and joined Red Hat shortly thereafter to work on the Fedora
> CoreOS (FCOS) team.
> I joined the MCO team when it was established earlier this year.
>
> Having been a Fedora/Atomic community member for some years, I'm a strong
> proponent of using FCOS as the base OS for OKD and would like to see it enabled
> :)
> As I work on the team that looks after the MCO, which is one of the parts
> of OpenShift that will need some adaptation in order to support another
> base OS, I am confident I can help with contributions there
> (of course I don't want to shut the door on other OSes being used as base
> if people are interested in that :).
>
> Proposal: Create WG and hold regular meetings
>
> I'd like to propose the creation of the OKD Working Group that will hold
> bi-weekly meetings.
> (or should we call it a SIG? I'm also open to suggestions for the right
> venue: IRC? OpenShift Commons Slack?).
>
> I'll survey some people in the coming days to find a suitable meeting time.
>
> If you have any feedback or suggestions, please feel free to reach out,
> either via this list or personally!
> I can be found as lorbus on IRC/Fedora, @lorbus42 on Twitter, or simply
> via email :)
>
> I'll send out more info here ASAP. Stay tuned!
>
> With kind regards
>
> CHRISTIAN GLOMBEK
> Associate Software Engineer
>
> Red Hat GmbH, registered seat: Grasbrunn
> Commercial register: Amtsgericht Muenchen, HRB 153243
> Managing directors: Charles Cachera, Michael O'Neill, Thomas Savage, Eric 
> Shander
>
>
>
> On Wed, Jul 17, 2019 at 10:45 PM Clayton Coleman 
> wrote:
>
>> Thanks to everyone who provided feedback over the last few weeks.
>> There's been a lot of good feedback, including some things I'll try to
>> capture here:
>>
>> * More structured working groups would be good
>> * Better public roadmap
>> * Concrete schedule for OKD 4
>> * Concrete proposal for OKD 4
>>
>> I've heard generally positive comments about the suggestions and
>> philosophy in the last email, with a desire for more details around what
>> the actual steps might look like, so I think it's safe to say that the idea
>> of "continuously up to date Kubernetes distribution" resonated.  We'll
>> continue to take feedback along this direction (private or public).
>>
>> Since OKD 4 was the kickoff for this discussion, and with the recent release
>> of the Fedora CoreOS beta (
>> https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/) figuring
>> prominently in the discussions so far, I got some volunteers from that team
>> to take point on setting up a working group (SIG?) around the initial level
>> of integration and drafting a proposal.
>>
>> Steve and Christian have both been working on Fedora CoreOS and
>> graciously agreed to help drive the next steps on a proposal for potential
>> Fedora CoreOS and OKD integration.  There's a rough draft doc
>> they plan to share - but for now I will turn this over to them and they'll
>> help organize time / forum / process for kicking off this effort.  As that
>> continues, we'll identify new SIGs to spawn off as necessary to cover other
>> topics, including initial CI and release automation to deliver any
>> necessary changes.
>>
>> Thanks to everyone who gave feedback, and stay tuned here for more!
>>
> ___
> users mailing list
> us...@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Follow up on OKD 4

2019-07-19 Thread Christian Glombek
Hi everyone,

first of all, I'd like to thank Clayton for kicking this off!

As I only just joined this ML, let me quickly introduce myself:

I am an Associate Software Engineer on the OpenShift
machine-config-operator (MCO) team and I'm based out of Berlin, Germany.
Last year, I participated in Google Summer of Code as a student with Fedora
IoT and joined Red Hat shortly thereafter to work on the Fedora CoreOS
(FCOS) team.
I joined the MCO team when it was established earlier this year.

Having been a Fedora/Atomic community member for some years, I'm a strong
proponent of using FCOS as the base OS for OKD and would like to see it enabled
:)
As I work on the team that looks after the MCO, which is one of the parts
of OpenShift that will need some adaptation in order to support another
base OS, I am confident I can help with contributions there
(of course I don't want to shut the door on other OSes being used as base
if people are interested in that :).

Proposal: Create WG and hold regular meetings

I'd like to propose the creation of the OKD Working Group that will hold
bi-weekly meetings.
(or should we call it a SIG? I'm also open to suggestions for the right
venue: IRC? OpenShift Commons Slack?).

I'll survey some people in the coming days to find a suitable meeting time.

If you have any feedback or suggestions, please feel free to reach out,
either via this list or personally!
I can be found as lorbus on IRC/Fedora, @lorbus42 on Twitter, or simply via
email :)

I'll send out more info here ASAP. Stay tuned!

With kind regards

CHRISTIAN GLOMBEK
Associate Software Engineer

Red Hat GmbH, registered seat: Grasbrunn
Commercial register: Amtsgericht Muenchen, HRB 153243
Managing directors: Charles Cachera, Michael O'Neill, Thomas Savage,
Eric Shander



On Wed, Jul 17, 2019 at 10:45 PM Clayton Coleman 
wrote:

> Thanks to everyone who provided feedback over the last few weeks.
> There's been a lot of good feedback, including some things I'll try to
> capture here:
>
> * More structured working groups would be good
> * Better public roadmap
> * Concrete schedule for OKD 4
> * Concrete proposal for OKD 4
>
> I've heard generally positive comments about the suggestions and
> philosophy in the last email, with a desire for more details around what
> the actual steps might look like, so I think it's safe to say that the idea
> of "continuously up to date Kubernetes distribution" resonated.  We'll
> continue to take feedback along this direction (private or public).
>
> Since OKD 4 was the kickoff for this discussion, and with the recent release
> of the Fedora CoreOS beta (
> https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/) figuring
> prominently in the discussions so far, I got some volunteers from that team
> to take point on setting up a working group (SIG?) around the initial level
> of integration and drafting a proposal.
>
> Steve and Christian have both been working on Fedora CoreOS and graciously
> agreed to help drive the next steps on a proposal for potential Fedora
> CoreOS and OKD integration.  There's a rough draft doc they plan to
> share - but for now I will turn this over to them and they'll help organize
> time / forum / process for kicking off this effort.  As that continues,
> we'll identify new SIGs to spawn off as necessary to cover other topics,
> including initial CI and release automation to deliver any necessary
> changes.
>
> Thanks to everyone who gave feedback, and stay tuned here for more!
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev