Re: Follow up on OKD 4

2019-07-24 Thread Daniel Comnea
On Mon, Jul 22, 2019 at 4:02 PM Justin Cook  wrote:

> On 22 Jul 2019, 12:24 +0100, Daniel Comnea , wrote:
>
> I totally agree with that, but let's do a quick reality check, taking
> some IRC channels as examples, shall we?
>
>- the ansible IRC channel doesn't log the conversation - do the comments
>[1] and [2] resonate with you? They do for me, and that is a huge -1 from
>my side.
>
>
> Yes that’s most unfortunate for #ansible.
>
>
>- the centos-devel/ centos channels don't log the conversation. That
>said, the centos meetings (i.e. PaaS SIG) do get logged, per SIG. That in
>itself is very useful; however, as someone who consumed that output for
>the last year as PaaS SIG chair/member, I will say it is not appealing to
>go over the output if a meeting had high traffic (same way, if you have a
>6-hour meeting recording, will you watch it from A to Z? ;) )
>- fedora-coreos does log [3], but if you tune in every morning to see
>what has been discussed, you see a lot of noise caused by folks who
>join/leave
>
>
>
> #centos and #fedora could most certainly do better. We’re getting on to
> three months after the RHEL8 release and still no hint of CentOS8.
>
> [DC]: I think it is a bit unfair to say that; the info is out - see [1],
[2] and [3]

[1] https://blog.centos.org/2019/05/centos-8-0-1905-build-status/
[2] https://blog.centos.org/2019/06/centos-8-status-17-june-2019/
[3] https://wiki.centos.org/About/Building_8


>- openshift/ openshift-dev channels had something on [4], but does it
>still work?
>
>
> This is one point of my complaint.
>
>
>
> All I'm trying to say with the above is:
>
> Should we go with IRC as a form of communication, we should then be ready
> to have bodies lined up to:
>
>- look after and admin the IRC channels.
>- enable logging of the IRC channels and also filter out the noise so the
>logs are consumable - not just stream the logs somewhere and tick the box
>(see the sketch below)
>
>
> Easy enough. It’s been done time and again. Let’s give it a whirl. Since
> I’m the one complaining, perhaps I can put my name in for consideration.
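A minimal sketch of what such a channel logger could look like, in Python with only the standard library (the nick, channel, and log path are illustrative); writing only PRIVMSG lines filters out the join/leave noise mentioned in the list above:

```python
import socket

SERVER, PORT = "irc.freenode.net", 6667         # assumed network details
CHANNEL, NICK = "#openshift-dev", "okd-logbot"  # illustrative names

sock = socket.create_connection((SERVER, PORT))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :channel logger\r\n".encode())
sock.sendall(f"JOIN {CHANNEL}\r\n".encode())

buf = b""
with open("openshift-dev.log", "a", encoding="utf-8") as log:
    while True:
        data = sock.recv(4096)
        if not data:
            break                               # server closed the connection
        buf += data
        while b"\r\n" in buf:
            raw, buf = buf.split(b"\r\n", 1)
            line = raw.decode("utf-8", errors="replace")
            if line.startswith("PING"):         # keep the connection alive
                sock.sendall(line.replace("PING", "PONG", 1).encode() + b"\r\n")
            elif " PRIVMSG " in line:           # log messages only; JOIN/PART
                log.write(line + "\n")          # noise never reaches the log
                log.flush()
```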
>
> [DC]: I understood not everyone is okay with logging any activity, due to
GDPR, so I think this goes off the table

>
>
> In addition to the channel logs, my main requirement is to access the IRC
> channels from any device and not lose track of what has been discussed.
> A respected gentleman involved in various open-source projects once wrote
> [5], and so with that I'd say:
>
>- who will take on board the setup so everyone can benefit from it?
>
>
> https://www.irccloud.com/irc/freenode
> https://matrix.org/faq
>
> Again some options here, but most certainly doable with a little effort.
> #openshift-dev is advertised all over the place.
> https://www.okd.io/#contribute
>
> If you swing to Slack, I'd say:
>
>- the K8s Slack is free in that neither you nor I/others pay for it, and
>everyone can join there
>- the OpenShift Commons Slack is also free; RH is paying the bill
>(another investment from their side), however, as said, Diane set up that
>place initially with a different scope.
>- once you've logged in, you can scroll back many months into the past
>- you get the ability to share code snippets -> in IRC you don't. You
>could argue that folks can use a GitHub gist or any pastebin service;
>however, that content can be deleted/expire, and so we go back to square one
>
>
> Slack logs are not indexed by search engines. This prevents me from
> supporting it in its entirety. People have been sharing code snippets for
> decades on IRC. And it’s worked fantastically. Just from my personal
> experience of Slack absorbing or repelling so much energy and collaboration
> from the community — which no one can really explain — I don’t see it as
> a viable option, given we have the numbers in front of us from this very
> project, which undeniably show it doesn’t work.
>
>
> [1] https://github.com/ansible/community/issues/242#issuecomment-334239958
> [2] https://github.com/ansible/community/issues/242#issuecomment-336890994
> [3] https://echelog.com/logs/browse/fedora-coreos/1563746400
> [4] https://botbot.me/freenode/openshift-dev/
> [5]
> https://doughellmann.com/blog/2015/03/12/deploying-nested-znc-services-with-ansible/
>
> You are also saying:
>
>- *Slack with three threads per week*
>
> How is the traffic going on the fedora-coreos OR centos-devel channels?
> Have you seen high volume?
>
>
> Why do you mention other projects and their traffic? #openshift-dev had
> incredible amounts of traffic, which helped make it a success. Different
> channels have different attendance depending on a tremendous number of
> factors. But just when OCP emerged as a champion of Kubernetes mindshare
> toward the end of 2017, things went blank.
>
> [DC]: I do mention them because I'm looking at other active channels
related to OpenShift (past/present). I'm not denying that there was a high
amount of traffic in the past; however, my previous point was:

the OKD 

Re: Follow up on OKD 4

2019-07-24 Thread Michael Gugino
I tried FCoS prior to the release by using the assembler on github.
Too much secret sauce in how to actually construct an image.  I
thought atomic was much more polished, not really sure what the
value-add of ignition is in this usecase.  Just give me a way to build
simple image pipelines and I don't need ignition.  To that end, there
should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
supporting the mcd-once-from to parse ignition on RHEL, we don't need
ignition to actually install okd.  To me, it seems FCoS was created
just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
actually solves anyone's needs relative to atomic.  It feels like we
jumped the shark on this one.

I'd like to see OKD be distro-independent.  Obviously Fedora should be
our primary target (I'd argue Fedora over FCoS), but I think it should
be true upstream software in the sense that apache2 http server is
upstream and not distro specific.  To this end, perhaps it makes sense
to consume k/k instead of openshift/origin for okd.  OKD should be
free to do wild and crazy things independently of the enterprise
product.  Perhaps there's a usecase for treating k/k vs
openshift/origin as a swappable base layer.

It would be nice to have a more native kubernetes place to develop our
components against so we can upstream them, or otherwise just build a
solid community around how we think kubernetes should be deployed and
consumed.  Similar to how Fedora has a package repository, we should
have a Kubernetes component repository (I realize operatorhub fulfills
some of this, but I'm talking about a place for OLM and things like
MCD to live).

I think we could integrate with existing package managers via a
'repo-in-a-container' type strategy for those not using ostree.
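For what it's worth, a hedged sketch of what 'repo-in-a-container' could mean on an RPM-based host: run a container that serves the release RPMs over HTTP, then point the package manager at it (the image name is hypothetical, and podman is assumed to be installed):

```python
import subprocess, textwrap

# Hypothetical image bundling the release RPMs plus a tiny web server.
IMAGE = "quay.io/example/okd-release-repo:4.1"

# Serve the repo on localhost.
subprocess.run(["podman", "run", "-d", "--name", "okd-repo",
                "-p", "8080:8080", IMAGE], check=True)

# Point dnf/yum at the containerized repo (run as root).
with open("/etc/yum.repos.d/okd-local.repo", "w") as f:
    f.write(textwrap.dedent("""\
        [okd-local]
        name=OKD release payload (local container)
        baseurl=http://127.0.0.1:8080/
        enabled=1
        gpgcheck=0
    """))
# From here, a plain `dnf upgrade` pulls host bits from the release container.
```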

As far as slack vs IRC, I vote IRC or any free software solution (but
my preference is IRC because it's simple and I like it).

On Sun, Jul 21, 2019 at 12:28 PM Clayton Coleman  wrote:
>
>
>
> On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
>>
>> Once upon a time Freenode #openshift-dev was vibrant with loads of activity 
>> and publicly available logs. I jumped in, asked questions, and Red Hatters 
>> came out of the woodwork, and some amazing work was done.
>>
>> Perfect.
>>
>> Slack not so much. Since Monday there have been three comments with two 
>> reply threads. All this with 524 people. Crickets.
>>
>> Please explain how this is better. I’d really love to know why IRC ceased. 
>> It worked and worked brilliantly.
>
>
> Is your concern about volume or location (irc vs slack)?
>
> Re volume: It should be relatively easy to move some common discussion types 
> into the #openshift-dev slack channel (especially triage / general QA) that 
> might be distributed to other various slack channels today (both private and 
> public), and I can take the follow up to look into that.  Some of the volume 
> that was previously in IRC moved to these slack channels, but they're not 
> anything private (just convenient).
>
> Re location:  I don't know how many people want to go back to IRC from slack, 
> but that's a fairly easy survey to do here if someone can volunteer to drive 
> that, and I can run the same one internally.  Some of it is inertia - people 
> have to be in slack sig-* channels - and some of it is preference (in that 
> IRC is an inferior experience for long running communication).
>
>>
>>
>> There are mentions of sigs and bits and pieces, but absolutely no progress. 
>> I fail to see why anyone would want to regress. OCP4 may be brilliant, but as 
>> I said in a private email, without upstream there is no culture or insurance 
>> we’ve come to love from decades of heart and soul.
>>
>> Ladies and gentlemen, this is essentially getting to the point that the 
>> community is being abandoned. Man-years of work acknowledged, with the 
>> roadmap pulled out from under us.
>
>
> I don't think that's a fair characterization, but I understand why you feel 
> that way and we are working to get the 4.x work moving.  The FCoS team, as 
> mentioned, just released their first preview last week; I've been working with 
> Diane and others to identify who on the team is going to take point on the 
> design work, and there's a draft in flight that I saw yesterday.  Every 
> component of OKD4 *besides* the FCoS integration is public and has been 
> public for months.
>
> I do want to make sure we can get a basic preview up as quickly as possible - 
> one option I was working on with the legal side was whether we could offer a 
> short term preview of OKD4 based on top of RHCoS.  That is possible if folks 
> are willing to accept the terms on try.openshift.com in order to access it in 
> the very short term (and then once FCoS is available that would not be 
> necessary).  If that's an option you or anyone on this thread are interested 
> in please let me know, just as something we can do to speed 

Re: Proposal: Deploy and switch to Discourse

2019-07-24 Thread Matthew Miller
On Fri, Jul 12, 2019 at 10:18:49AM -0400, Neal Gompa wrote:
> Fedora didn't shut down its users@ list when it deployed
> discussions.fp.o. And adoption of Discourse in Fedora hasn't been very
> high outside of the Silverblue/CoreOS bubble.

For what it's worth, the Fedora instance is hosted (by Discourse). We'd be
happy to have OKD have a category there, if you want to do that rather than
standing up a whole new one. (Either as an experiment or a permanent home.)

I think this is complementary to Stack Exchange, which is very focused on
specific question-and-answer exchanges and actively discourages discussion.

-- 
Matthew Miller  mat...@mattdm.org
Fedora Project Leader  mat...@fedoraproject.org



Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:

> I tried FCoS prior to the release by using the assembler on github.
> Too much secret sauce in how to actually construct an image.  I
> thought atomic was much more polished, not really sure what the
> value-add of ignition is in this usecase.  Just give me a way to build
> simple image pipelines and I don't need ignition.  To that end, there
> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> ignition to actually install okd.  To me, it seems FCoS was created
> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> actually solves anyone's needs relative to atomic.  It feels like we
> jumped the shark on this one.
>

That’s feedback that’s probably something you should share in the fcos
forums as well.  I will say that I find the OCP + RHEL experience
unsatisfying; it doesn't truly live up to what RHCOS+OCP can do (since it
lacks the key features like ignition and immutable hosts).  Are you saying
you'd prefer to have more of a "DIY kube bistro" than the "highly
opinionated, totally integrated OKD" proposal?  I think that's a good
question the community should get a chance to weigh in on (in my original
email that was the implicit question - do you want something that looks
like OCP4, or something that is completely different).


>
> I'd like to see OKD be distro-independent.  Obviously Fedora should be
> our primary target (I'd argue Fedora over FCoS), but I think it should
> be true upstream software in the sense that apache2 http server is
> upstream and not distro specific.  To this end, perhaps it makes sense
> to consume k/k instead of openshift/origin for okd.  OKD should be
> free to do wild and crazy things independently of the enterprise
> product.  Perhaps there's a usecase for treating k/k vs
> openshift/origin as a swappable base layer.
>

That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be
happy to see people excited about reusing cvo / mcd and be able to mix and
match, but most of the things here would be a huge investment to build.  In
my original email I might call this the “I want to build my own distro" -
if that's what people want to build, I think we can do things to enable
it.  But it would probably not be "openshift" in the same way.


>
> It would be nice to have a more native kubernetes place to develop our
> components against so we can upstream them, or otherwise just build a
> solid community around how we think kubernetes should be deployed and
> consumed.  Similar to how Fedora has a package repository, we should
> have a Kubernetes component repository (I realize operatorhub fulfills
> some of this, but I'm talking about a place for OLM and things like
> MCD to live).
>

MCD is really tied to the OS.  The idea of a generic MCD seems like it
loses the value of MCD being specific to an OS.

I do think there are two types of components we have - things designed to
work well with kube, and things designed to work well with "openshift the
distro".  The former can be developed against Kube (like operator hub /
olm) but the latter doesn't really make sense to develop against unless it
matches what is being built.  In that vein, an OKD4 that doesn't look like
OCP4 wouldn't benefit you or the components.


>
> I think we could integrate with existing package managers via a
> 'repo-in-a-container' type strategy for those not using ostree.
>

A big part of openshift 4 is "we're tired of managing machines".  It sounds
like you are arguing for "let people do whatever", which is definitely
valid (but doesn't sound like openshift 4).


>
> As far as slack vs IRC, I vote IRC or any free software solution (but
> my preference is IRC because it's simple and I like it).
>


Re: Follow up on OKD 4

2019-07-24 Thread Justin Cook
On 24 Jul 2019, 04:57 -0500, Daniel Comnea , wrote:
> > > > >
> > > > >
> > > > > All i'm trying to say with the above is:
> > > > >
> > > > > Should we go with IRC as a form of communication, we should then be 
> > > > > ready to have bodies lined up to:
> > > > >
> > > > > • look after and admin the IRC channels.
> > > > > • enable the IRC log channels and also filter out the noise to be 
> > > > > consumable (not just stream the logs somewhere and tick the box)
> > > > >
> > >
> > > Easy enough. It’s been done time and again. Let’s give it a whirl. Since 
> > > I’m the one complaining, perhaps I can put my name in for consideration.
> > >
> > [DC]: I understood not everyone is okay with logging any activity, due to 
> > GDPR, so I think this goes off the table

GDPR has very little to do with this and can be easily mitigated. Your 
communication on IRC is logged via handles and IP addresses, and the right to 
be forgotten only applies to personally identifiable information and is not 
absolute. As for the public domain: when someone enters a public channel, there 
can be an explicit statement that they consent to the information they have 
provided entering the public domain. Their continued use of the channel 
implies consent. If someone wants a handle, let's say an email address, 
scrubbed, then so be it. Otherwise, there’s nothing of interest. But, yet 
again, if they used the channel with explicit notification, their use is 
consent.

Using GDPR as an excuse is fear-mongering by people who don’t understand it.

Justin Cook


RE: Follow up on OKD 4

2019-07-24 Thread Fox, Kevin M
So, last I heard, OpenShift was starting to modularize so that it could load the 
OpenShift parts as extensions to the kube-apiserver? Has this been completed? 
Maybe the idea below of being able to deploy vanilla k8s is workable, as the 
OpenShift parts could easily be added on top?

Thanks,
Kevin

From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Michael Gugino 
[mgug...@redhat.com]
Sent: Wednesday, July 24, 2019 7:40 AM
To: Clayton Coleman
Cc: users; dev
Subject: Re: Follow up on OKD 4

I tried FCoS prior to the release by using the assembler on github.
Too much secret sauce in how to actually construct an image.  I
thought atomic was much more polished, not really sure what the
value-add of ignition is in this usecase.  Just give me a way to build
simple image pipelines and I don't need ignition.  To that end, there
should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
supporting the mcd-once-from to parse ignition on RHEL, we don't need
ignition to actually install okd.  To me, it seems FCoS was created
just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
actually solves anyone's needs relative to atomic.  It feels like we
jumped the shark on this one.

I'd like to see OKD be distro-independent.  Obviously Fedora should be
our primary target (I'd argue Fedora over FCoS), but I think it should
be true upstream software in the sense that apache2 http server is
upstream and not distro specific.  To this end, perhaps it makes sense
to consume k/k instead of openshift/origin for okd.  OKD should be
free to do wild and crazy things independently of the enterprise
product.  Perhaps there's a usecase for treating k/k vs
openshift/origin as a swappable base layer.

It would be nice to have a more native kubernetes place to develop our
components against so we can upstream them, or otherwise just build a
solid community around how we think kubernetes should be deployed and
consumed.  Similar to how Fedora has a package repository, we should
have a Kubernetes component repository (I realize operatorhub fulfills
some of this, but I'm talking about a place for OLM and things like
MCD to live).

I think we could integrate with existing package managers via a
'repo-in-a-container' type strategy for those not using ostree.

As far as slack vs IRC, I vote IRC or any free software solution (but
my preference is IRC because it's simple and I like it).

On Sun, Jul 21, 2019 at 12:28 PM Clayton Coleman  wrote:
>
>
>
> On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
>>
>> Once upon a time Freenode #openshift-dev was vibrant with loads of activity 
>> and publicly available logs. I jumped in, asked questions, and Red Hatters 
>> came out of the woodwork, and some amazing work was done.
>>
>> Perfect.
>>
>> Slack not so much. Since Monday there have been three comments with two 
>> reply threads. All this with 524 people. Crickets.
>>
>> Please explain how this is better. I’d really love to know why IRC ceased. 
>> It worked and worked brilliantly.
>
>
> Is your concern about volume or location (irc vs slack)?
>
> Re volume: It should be relatively easy to move some common discussion types 
> into the #openshift-dev slack channel (especially triage / general QA) that 
> might be distributed to other various slack channels today (both private and 
> public), and I can take the follow up to look into that.  Some of the volume 
> that was previously in IRC moved to these slack channels, but they're not 
> anything private (just convenient).
>
> Re location:  I don't know how many people want to go back to IRC from slack, 
> but that's a fairly easy survey to do here if someone can volunteer to drive 
> that, and I can run the same one internally.  Some of it is inertia - people 
> have to be in slack sig-* channels - and some of it is preference (in that 
> IRC is an inferior experience for long running communication).
>
>>
>>
>> There are mentions of sigs and bits and pieces, but absolutely no progress. 
>> I fail to see why anyone would want to regress. OCP4 may be brilliant, but as 
>> I said in a private email, without upstream there is no culture or insurance 
>> we’ve come to love from decades of heart and soul.
>>
>> Ladies and gentlemen, this is essentially getting to the point that the 
>> community is being abandoned. Man-years of work acknowledged, with the 
>> roadmap pulled out from under us.
>
>
> I don't think that's a fair characterization, but I understand why you feel 
> that way and we are working to get the 4.x work moving.  The FCoS team, as 
> mentioned, just released their first preview last week; I've been working with 
> Diane and others to identify who on the team is going to take point on the 
> design work, and there's a draft in flight that I saw yesterday.  Every 
> component of OKD4 *besides* the FCoS integration is 

Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
On Wed, Jul 24, 2019 at 12:18 PM Fox, Kevin M  wrote:

> So, last I heard, OpenShift was starting to modularize so that it could load
> the OpenShift parts as extensions to the kube-apiserver? Has this been
> completed? Maybe the idea below of being able to deploy vanilla k8s is
> workable, as the OpenShift parts could easily be added on top?
>

OpenShift in 3.x was a core control plane plus 3-5 components.  In 4.x it's
a kube control plane, a bunch of core wiring, the OS, the installer, and
then a good 30 other components.  OpenShift 4 is so far beyond "vanilla
kube + extensions" now that while it's probably technically possible to do
that, it's less of "openshift" and more of "a kube cluster with a few
APIs".  I.e. openshift is machine management, automated updates, integrated
monitoring, etc.
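For context, the extension mechanism Kevin refers to is the aggregation layer: OpenShift's extra API groups are served by aggregated API servers registered through APIService objects. A hedged sketch of such a registration using the Python kubernetes client (the service name/namespace are illustrative, and in a real cluster OpenShift's operators manage these objects themselves):

```python
from kubernetes import client, config

config.load_kube_config()

# Register an aggregated API server for an OpenShift API group.
api_service = {
    "apiVersion": "apiregistration.k8s.io/v1",
    "kind": "APIService",
    "metadata": {"name": "v1.apps.openshift.io"},
    "spec": {
        "group": "apps.openshift.io",
        "version": "v1",
        # Service fronting the aggregated apiserver (illustrative location).
        "service": {"name": "api", "namespace": "openshift-apiserver",
                    "port": 443},
        "groupPriorityMinimum": 9900,
        "versionPriority": 15,
        "insecureSkipTLSVerify": True,  # a real setup would pin a caBundle
    },
}
client.ApiregistrationV1Api().create_api_service(api_service)
```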


>
> Thanks,
> Kevin
> 
> From: dev-boun...@lists.openshift.redhat.com [
> dev-boun...@lists.openshift.redhat.com] on behalf of Michael Gugino [
> mgug...@redhat.com]
> Sent: Wednesday, July 24, 2019 7:40 AM
> To: Clayton Coleman
> Cc: users; dev
> Subject: Re: Follow up on OKD 4
>
> I tried FCoS prior to the release by using the assembler on github.
> Too much secret sauce in how to actually construct an image.  I
> thought atomic was much more polished, not really sure what the
> value-add of ignition is in this usecase.  Just give me a way to build
> simple image pipelines and I don't need ignition.  To that end, there
> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> ignition to actually install okd.  To me, it seems FCoS was created
> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> actually solves anyone's needs relative to atomic.  It feels like we
> jumped the shark on this one.
>
> I'd like to see OKD be distro-independent.  Obviously Fedora should be
> our primary target (I'd argue Fedora over FCoS), but I think it should
> be true upstream software in the sense that apache2 http server is
> upstream and not distro specific.  To this end, perhaps it makes sense
> to consume k/k instead of openshift/origin for okd.  OKD should be
> free to do wild and crazy things independently of the enterprise
> product.  Perhaps there's a usecase for treating k/k vs
> openshift/origin as a swappable base layer.
>
> It would be nice to have a more native kubernetes place to develop our
> components against so we can upstream them, or otherwise just build a
> solid community around how we think kubernetes should be deployed and
> consumed.  Similar to how Fedora has a package repository, we should
> have a Kubernetes component repository (I realize operatorhub fulfills
> some of this, but I'm talking about a place for OLM and things like
> MCD to live).
>
> I think we could integrate with existing package managers via a
> 'repo-in-a-container' type strategy for those not using ostree.
>
> As far as slack vs IRC, I vote IRC or any free software solution (but
> my preference is IRC because it's simple and I like it).
>
> On Sun, Jul 21, 2019 at 12:28 PM Clayton Coleman 
> wrote:
> >
> >
> >
> > On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
> >>
> >> Once upon a time Freenode #openshift-dev was vibrant with loads of
> activity and publicly available logs. I jumped in, asked questions, and Red
> Hatters came out of the woodwork, and some amazing work was done.
> >>
> >> Perfect.
> >>
> >> Slack not so much. Since Monday there have been three comments with two
> reply threads. All this with 524 people. Crickets.
> >>
> >> Please explain how this is better. I’d really love to know why IRC
> ceased. It worked and worked brilliantly.
> >
> >
> > Is your concern about volume or location (irc vs slack)?
> >
> > Re volume: It should be relatively easy to move some common discussion
> types into the #openshift-dev slack channel (especially triage / general
> QA) that might be distributed to other various slack channels today (both
> private and public), and I can take the follow up to look into that.  Some
> of the volume that was previously in IRC moved to these slack channels, but
> they're not anything private (just convenient).
> >
> > Re location:  I don't know how many people want to go back to IRC from
> slack, but that's a fairly easy survey to do here if someone can volunteer
> to drive that, and I can run the same one internally.  Some of it is
> inertia - people have to be in slack sig-* channels - and some of it is
> preference (in that IRC is an inferior experience for long running
> communication).
> >
> >>
> >>
> >> There are mentions of sigs and bits and pieces, but absolutely no
> progress. I fail to see why anyone would want to regress. OCP4 may be
> brilliant, but as I said in a private email, without upstream there is no
> culture or insurance we’ve come to love from decades of heart and 

RE: Follow up on OKD 4

2019-07-24 Thread Fox, Kevin M
Ah, this raises an interesting discussion I've been wanting to have for a while.

There are potentially lots of things you could call a distro.

Most Linux distros are made up of several layers:
1. boot loader - components to get the kernel running
2. kernel - provides a place to run higher level software
3. os level services - singletons needed to really call the os an os. (dhcp, 
systemd, dbus, etc)
4. prebuilt/tested, generic software/services - workload (mysql, apache, 
firefox, gnome, etc)

For the sake of discussion, let's map these layers a bit, and assume that the 
openshift-specific components can be added to a vanilla kubernetes. We then have:

1. linux distro (could be k8s specific and micro)
2. kubernetes control plane & kubelets
3. openshift components (auth, ingress, cicd/etc)
4. ?  (operators + containers, helm + containers, etc)

openshift used to be defined as being 1-3.

As things like aks/eks/gke make it easy to deploy 1-2, maybe openshift should 
really become modular so it focuses more on 3 and 4.

As for having something that provides a #1 that is super tiny/easy to maintain 
so that you can do #2 on top easily, I'm for that as well, but it should be 
decoupled from 3-4, I think. Should you be able to switch out your #1 for 
someone else's #1 while keeping the rest? That's the question from earlier in 
the thread.

#4, I think, is very important, and while the operator framework is starting to 
make some inroads on it, there is still a lot of work to do to make an 
equivalent of the 'redhat' distro of software that runs on k8s.

A lot of focus has been on making a distro out of k8s, but it's really mostly 
been at the level of "how do I get a kernel booted/upgraded". I think the more 
important distro thing, #4, is how you make a distribution of prebuilt, easy to 
install software to run on top of k8s. Redhat's distro is really 99% userspace 
and a bit of getting the thing booted. Its value is in having a suite of 
prebuilt, tested, stable, and easily installable/upgradable software with a 
team of humans that can provide support for it. The kernel/bootloader part is 
really just a means to enable #4. No one installs a kernel/os just to get a 
kernel. This part is currently lacking. Where is the equivalent of 
Redhat/Centos/Fedora for #4?

In the context of OKD, which of these layers is OKD focused on?

Thanks,
Kevin


From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Clayton Coleman 
[ccole...@redhat.com]
Sent: Wednesday, July 24, 2019 9:04 AM
To: Michael Gugino
Cc: users; dev
Subject: Re: Follow up on OKD 4




On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino 
mailto:mgug...@redhat.com>> wrote:
I tried FCoS prior to the release by using the assembler on github.
Too much secret sauce in how to actually construct an image.  I
thought atomic was much more polished, not really sure what the
value-add of ignition is in this usecase.  Just give me a way to build
simple image pipelines and I don't need ignition.  To that end, there
should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
supporting the mcd-once-from to parse ignition on RHEL, we don't need
ignition to actually install okd.  To me, it seems FCoS was created
just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
actually solves anyone's needs relative to atomic.  It feels like we
jumped the shark on this one.

That’s feedback that’s probably something you should share in the fcos forums 
as well.  I will say that I find the OCP + RHEL experience unsatisfying; it 
doesn't truly live up to what RHCOS+OCP can do (since it lacks the key features 
like ignition and immutable hosts).  Are you saying you'd prefer to have more 
of a "DIY kube bistro" than the "highly opinionated, totally integrated OKD" 
proposal?  I think that's a good question the community should get a chance to 
weigh in on (in my original email that was the implicit question - do you want 
something that looks like OCP4, or something that is completely different).


I'd like to see OKD be distro-independent.  Obviously Fedora should be
our primary target (I'd argue Fedora over FCoS), but I think it should
be true upstream software in the sense that apache2 http server is
upstream and not distro specific.  To this end, perhaps it makes sense
to consume k/k instead of openshift/origin for okd.  OKD should be
free to do wild and crazy things independently of the enterprise
product.  Perhaps there's a usecase for treating k/k vs
openshift/origin as a swappable base layer.

That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be 
happy to see people excited about reusing cvo / mcd and be able to mix and 
match, but most of the things here would be a huge investment to build.  In my 
original email I might call this the “I want to build my own distro" - if 
that's what people want 

Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M  wrote:

> Ah, this raises an interesting discussion I've been wanting to have for a
> while.
>
> There are potentially lots of things you could call a distro.
>
> Most Linux distros are made up of several layers:
> 1. boot loader - components to get the kernel running
> 2. kernel - provides a place to run higher level software
> 3. os level services - singletons needed to really call the os an os.
> (dhcp, systemd, dbus, etc)
> 4. prebuilt/tested, generic software/services - workload (mysql, apache,
> firefox, gnome, etc)
>
> For the sake of discussion, let's map these layers a bit, and assume that the
> openshift-specific components can be added to a vanilla kubernetes. We then
> have
>
> 1. linux distro (could be k8s specific and micro)
> 2. kubernetes control plane & kubelets
> 3. openshift components (auth, ingress, cicd/etc)
> 4. ?  (operators + containers, helm + containers, etc)
>
> openshift used to be defined as being 1-3.
>

> As things like aks/eks/gke make it easy to deploy 1-2, maybe openshift
> should really become modular so it focuses more on 3 and 4.
>

That's interesting that you'd say that.  I think kube today is like
"install a kernel with bash and serial port magic", whereas OpenShift 4 is
"here's a compose, an installer, a disk formatter, yum, yum repos,
lifecycle, glibc, optional packages, and sys utils".  I don't know if you
can extend the analogy there (if you want to use EKS, you're effectively
running on someone's VPS, but you can only use their distro and you can't
change anything), but definitely a good debate.


>
> As for having something that provides a #1 that is super tiny/easy to
> maintain so that you can do #2 on top easily, I'm for that as well, but it
> should be decoupled from 3-4, I think. Should you be able to switch out your
> #1 for someone else's #1 while keeping the rest? That's the question from
> earlier in the thread.
>

I think the analogy I've been using is that openshift is a proper distro in
the sense that you don't take someone's random kernel and use it with
someone else's random glibc and a third party's random gcc, but you might
not care about the stuff on top.  The things in 3 for kube feel more like
glibc than "which version of Firefox do I install", since a cluster without
ingress isn't very useful.


>
> #4 I think is very important and while the operator framework is starting
> to make some inroads on it, there is still a lot of work to do to make an
> equivalent of the 'redhat' distro of software that runs on k8s.
>
> A lot of focus has been on making a distro out of k8s, but it's really
> mostly been at the level of "how do I get a kernel booted/upgraded". I think
> the more important distro thing, #4, is how you make a distribution of
> prebuilt, easy to install software to run on top of k8s. Redhat's distro is
> really 99% userspace and a bit of getting the thing booted.
>

Really?  I don't think that's true at all - I'd flip it around and say it's
85% booted, with 15% user space today.  I'd probably draw the line at auth,
ingress, and olm as being "the top of the bottom".  OpenShift 4 is 100%
about making that bottom layer just work, rather than being about OLM up.


> Its value is in having a suite of prebuilt, tested, stable, and easily
> installable/upgradable software with a team of humans that can provide
> support for it. The kernel/bootloader part is really just a means to enable
> #4. No one installs a kernel/os just to get a kernel. This part is
> currently lacking. Where is the equivalent of Redhat/Centos/Fedora for #4?
>
> In the context of OKD, which of these layers is OKD focused on?
>

In the proposal before it was definitely 1-3 (which is the same as the OCP4
path).  If you only care about 4, you're talking more about OLM on top of
Kube, which is more like JBoss or something like Homebrew on Mac (you don't
upgrade your Mac via Homebrew, but you do consume lots of stuff out there).


>
> Thanks,
> Kevin
>


OKD Working Group Community Survey & Kick-off OKD WG Meeting Details

2019-07-24 Thread Diane Mueller-Klingspor
All,

Could you take a minute and do this short survey for the OKD Working Group
(mtg logistics, communication channels, interest levels, feedback)?

Survey link here: https://forms.gle/abEFZ6oey79jxGjJ7

We'll be holding the OKD Working Group kick-off meeting next week on July
31st at 9:00 am Pacific.  The OKD working group's purpose is to discuss, give
guidance & enable collaboration on current development efforts for OKD4,
Fedora CoreOS  (FCOS) and Kubernetes. The OKD WG will also include
discussion of shared community goals for OKD4 and beyond.

OKD WG Mtg details here:
https://commons.openshift.org/events.html#event|okd-working-group-kick-off-meeting|983

We are also hosting an OpenShift Commons Briefing tomorrow (July 25th at
9:00 am Pacific) with Ben Breard and Benjamin Gilbert on FCOS

Briefing details here:
https://commons.openshift.org/events.html#event|introduction-to-fedora-coreos-fcos-with-ben-breard-and-benjamin-gilbert-red-hat|982


Kind Regards,

Diane Mueller
Director, Community Development
Red Hat OpenShift
@openshiftcommons

We have more in Common than you know, learn more at
http://commons.openshift.org


Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
> On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:
>
> I think what I'm looking for is more 'modular' rather than DIY.  CVO
> would need to be adapted to separate container payload from host
> software (or use something else), and maintaining cross-distro
> machine-configs might prove tedious, but for the most part, rest of
> everything from the k8s bins up, should be more or less the same.
>
> MCD is good software, but there's not really much going on there that
> can't be ported to any other OS.  MCD downloads a payload, extracts
> files, rebases ostree, reboots host.  You can do all of those steps
> except 'rebases ostree' on any distro.  And instead of 'rebases
> ostree', we could pull down a container that acts as a local repo that
> contains all the bits you need to upgrade your host across releases.
> Users could do things to break this workflow, but it should otherwise
> work if they aren't fiddling with the hosts.  The MCD payload happens
> to embed an ignition payload, but it doesn't actually run ignition,
> just utilizes the file format.
>
> From my viewpoint, there's nothing particularly special about ignition
> in our current process either.  I had the entire OCP 4 stack running
> on RHEL using the same exact ignition payload, a minimal amount of
> ansible (which could have been replaced by cloud-init userdata), and a
> small python library to parse the ignition files.  I was also building
> repo containers for 3.10 and 3.11 for Fedora.  Not to say the
> OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
> 4 came together quite nicely.
>
> I'm all for 'not managing machines' but I'm not sure it has to look
> exactly like OCP.  Seems the OCP installer and CVO could be
> adapted/replaced with something else, MCD adapted, pretty much
> everything else works the same.

Sure - why?  As in, what do you want to do?  What distro do you want
to use instead of fcos?  What goals / outcomes do you want out of the
ability to do whatever?  I.e. the previous suggestion (the auto-updating
kube distro) has the concrete goal of “don’t worry about security /
updates / nodes and still be able to run containers”, and fcos is a
detail, even if it’s an important one.  How would you pitch the
alternative?


>
>> On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman  wrote:
>>
>>
>>
>>
>>> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:
>>>
>>> I tried FCoS prior to the release by using the assembler on github.
>>> Too much secret sauce in how to actually construct an image.  I
>>> thought atomic was much more polished, not really sure what the
>>> value-add of ignition is in this usecase.  Just give me a way to build
>>> simple image pipelines and I don't need ignition.  To that end, there
>>> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
>>> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
>>> supporting the mcd-once-from to parse ignition on RHEL, we don't need
>>> ignition to actually install okd.  To me, it seems FCoS was created
>>> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
>>> actually solves anyone's needs relative to atomic.  It feels like we
>>> jumped the shark on this one.
>>
>>
>> That’s feedback that’s probably something you should share in the fcos 
>> forums as well.  I will say that I find the OCP + RHEL experience 
>> unsatisfying; it doesn't truly live up to what RHCOS+OCP can do (since it 
>> lacks the key features like ignition and immutable hosts).  Are you saying 
>> you'd prefer to have more of a "DIY kube bistro" than the "highly 
>> opinionated, totally integrated OKD" proposal?  I think that's a good 
>> question the community should get a chance to weigh in on (in my original 
>> email that was the implicit question - do you want something that looks like 
>> OCP4, or something that is completely different).
>>
>>>
>>>
>>> I'd like to see OKD be distro-independent.  Obviously Fedora should be
>>> our primary target (I'd argue Fedora over FCoS), but I think it should
>>> be true upstream software in the sense that apache2 http server is
>>> upstream and not distro specific.  To this end, perhaps it makes sense
>>> to consume k/k instead of openshift/origin for okd.  OKD should be
>>> free to do wild and crazy things independently of the enterprise
>>> product.  Perhaps there's a usecase for treating k/k vs
>>> openshift/origin as a swappable base layer.
>>
>>
>> That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be 
>> happy to see people excited about reusing cvo / mcd and be able to mix and 
>> match, but most of the things here would be a huge investment to build.  In 
>> my original email I might call this the “I want to build my own distro" - if 
>> that's what people want to build, I think we can do things to enable it.  
>> But it would probably not be "openshift" in the same way.
>>
>>>
>>>
>>> It would be nice to have a more native kubernetes place to develop our

Re: Follow up on OKD 4

2019-07-24 Thread Michael Gugino
I think what I'm looking for is more 'modular' rather than DIY.  CVO
would need to be adapted to separate container payload from host
software (or use something else), and maintaining cross-distro
machine-configs might prove tedious, but for the most part, rest of
everything from the k8s bins up, should be more or less the same.

MCD is good software, but there's not really much going on there that
can't be ported to any other OS.  MCD downloads a payload, extracts
files, rebases ostree, reboots host.  You can do all of those steps
except 'rebases ostree' on any distro.  And instead of 'rebases
ostree', we could pull down a container that acts as a local repo that
contains all the bits you need to upgrade your host across releases.
Users could do things to break this workflow, but it should otherwise
work if they aren't fiddling with the hosts.  The MCD payload happens
to embed an ignition payload, but it doesn't actually run ignition,
just utilizes the file format.
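A rough sketch of that flow, ported off OSTree along the lines described above (the payload URL is illustrative, and the dnf transaction stands in for the 'rebases ostree' step on a traditional distro):

```python
import subprocess, tarfile, tempfile, urllib.request

# Illustrative payload location - not a real release endpoint.
PAYLOAD_URL = "https://example.com/okd/machine-os-content.tar"

def apply_update(url: str) -> None:
    # 1. Download the payload.
    with tempfile.NamedTemporaryFile(suffix=".tar") as tmp:
        with urllib.request.urlopen(url) as resp:
            tmp.write(resp.read())
        tmp.flush()
        # 2. Extract files onto the host (a sketch; a real tool would
        #    validate paths and checksums before touching /).
        with tarfile.open(tmp.name) as tar:
            tar.extractall("/")
    # 3. Where MCD would rebase OSTree, a generic distro could run a
    #    package transaction against a local release repo instead.
    subprocess.run(["dnf", "-y", "upgrade"], check=True)
    # 4. Reboot into the new content.
    subprocess.run(["systemctl", "reboot"], check=True)

if __name__ == "__main__":
    apply_update(PAYLOAD_URL)
```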

From my viewpoint, there's nothing particularly special about ignition
in our current process either.  I had the entire OCP 4 stack running
on RHEL using the same exact ignition payload, a minimal amount of
ansible (which could have been replaced by cloud-init userdata), and a
small python library to parse the ignition files.  I was also building
repo containers for 3.10 and 3.11 for Fedora.  Not to say the
OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
4 came together quite nicely.
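As a rough illustration of how small such a library can be, this sketch applies the `storage.files` section of an Ignition config, handling only inline `data:` URL contents (field names follow the Ignition file format; remote sources and error handling are omitted):

```python
import base64, json, os, urllib.parse

def write_ignition_files(ign_path: str, root: str = "/") -> None:
    with open(ign_path) as f:
        cfg = json.load(f)
    for entry in cfg.get("storage", {}).get("files", []):
        source = entry.get("contents", {}).get("source", "")
        if not source.startswith("data:"):
            continue                      # remote sources not handled here
        header, _, payload = source.partition(",")
        if header.endswith(";base64"):
            data = base64.b64decode(payload)
        else:
            data = urllib.parse.unquote(payload).encode()
        dest = os.path.join(root, entry["path"].lstrip("/"))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "wb") as out:
            out.write(data)
        os.chmod(dest, entry.get("mode", 0o644))  # Ignition modes are ints
```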

I'm all for 'not managing machines' but I'm not sure it has to look
exactly like OCP.  Seems the OCP installer and CVO could be
adapted/replaced with something else, MCD adapted, pretty much
everything else works the same.

On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman  wrote:
>
>
>
>
> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:
>>
>> I tried FCoS prior to the release by using the assembler on github.
>> Too much secret sauce in how to actually construct an image.  I
>> thought atomic was much more polished, not really sure what the
>> value-add of ignition is in this usecase.  Just give me a way to build
>> simple image pipelines and I don't need ignition.  To that end, there
>> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
>> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
>> supporting the mcd-once-from to parse ignition on RHEL, we don't need
>> ignition to actually install okd.  To me, it seems FCoS was created
>> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
>> actually solves anyone's needs relative to atomic.  It feels like we
>> jumped the shark on this one.
>
>
> That’s feedback that’s probably something you should share in the fcos forums 
> as well.  I will say that I find the OCP + RHEL experience unsatisfying; it 
> doesn't truly live up to what RHCOS+OCP can do (since it lacks the key 
> features like ignition and immutable hosts).  Are you saying you'd prefer to 
> have more of a "DIY kube bistro" than the "highly opinionated, totally 
> integrated OKD" proposal?  I think that's a good question the community 
> should get a chance to weigh in on (in my original email that was the 
> implicit question - do you want something that looks like OCP4, or something 
> that is completely different).
>
>>
>>
>> I'd like to see OKD be distro-independent.  Obviously Fedora should be
>> our primary target (I'd argue Fedora over FCoS), but I think it should
>> be true upstream software in the sense that apache2 http server is
>> upstream and not distro specific.  To this end, perhaps it makes sense
>> to consume k/k instead of openshift/origin for okd.  OKD should be
>> free to do wild and crazy things independently of the enterprise
>> product.  Perhaps there's a usecase for treating k/k vs
>> openshift/origin as a swappable base layer.
>
>
> That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be 
> happy to see people excited about reusing cvo / mcd and be able to mix and 
> match, but most of the things here would be a huge investment to build.  In 
> my original email I might call this the “I want to build my own distro" - if 
> that's what people want to build, I think we can do things to enable it.  But 
> it would probably not be "openshift" in the same way.
>
>>
>>
>> It would be nice to have a more native kubernetes place to develop our
>> components against so we can upstream them, or otherwise just build a
>> solid community around how we think kubernetes should be deployed and
>> consumed.  Similar to how Fedora has a package repository, we should
>> have a Kubernetes component repository (I realize operatorhub fulfills
>> some of this, but I'm talking about a place for OLM and things like
>> MCD to live).
>
>
> MCD is really tied to the OS.  The idea of a generic MCD seems like it loses 
> the value of MCD being specific to an OS.
>
> I do think there are two types of components we have - things designed to 
> work 

Re: Follow up on OKD 4

2019-07-24 Thread Michael Gugino
I think FCoS could be a mutable detail.  To start with, support for
plain-old-fedora would be helpful to make the platform more portable,
particularly the MCO and machine-api.  If I had to state a goal, it
would be "Bring OKD to the largest possible range of Linux distros to
become the de facto implementation of Kubernetes."

Also, it would be helpful (as previously stated) to build communities
around some of our components that might not have a place in the
official kubernetes, but are valuable downstream components
nevertheless.

Anyway, I'm just throwing some ideas out there; I wouldn't consider my
statements as advocating strongly in any direction.  Surely FCoS is
the natural fit, but I think considering other distros merits
discussion.

On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman  wrote:
>
> > On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:
> >
> > I think what I'm looking for is more 'modular' rather than DIY.  CVO
> > would need to be adapted to separate container payload from host
> > software (or use something else), and maintaining cross-distro
> > machine-configs might prove tedious, but for the most part, rest of
> > everything from the k8s bins up, should be more or less the same.
> >
> > MCD is good software, but there's not really much going on there that
> > can't be ported to any other OS.  MCD downloads a payload, extracts
> > files, rebases ostree, reboots host.  You can do all of those steps
> > except 'rebases ostree' on any distro.  And instead of 'rebases
> > ostree', we could pull down a container that acts as a local repo that
> > contains all the bits you need to upgrade your host across releases.
> > Users could do things to break this workflow, but it should otherwise
> > work if they aren't fiddling with the hosts.  The MCD payload happens
> > to embed an ignition payload, but it doesn't actually run ignition,
> > just utilizes the file format.
> >
> > From my viewpoint, there's nothing particularly special about ignition
> > in our current process either.  I had the entire OCP 4 stack running
> > on RHEL using the same exact ignition payload, a minimal amount of
> > ansible (which could have been replaced by cloud-init userdata), and a
> > small python library to parse the ignition files.  I was also building
> > repo containers for 3.10 and 3.11 for Fedora.  Not to say the
> > OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
> > 4 came together quite nicely.
> >
> > I'm all for 'not managing machines' but I'm not sure it has to look
> > exactly like OCP.  Seems the OCP installer and CVO could be
> > adapted/replaced with something else, MCD adapted, pretty much
> > everything else works the same.
>
> Sure - why?  As in, what do you want to do?  What distro do you want
> to use instead of fcos?  What goals / outcomes do you want out of the
> ability to do whatever?  I.e. the previous suggestion (the auto-updating
> kube distro) has the concrete goal of “don’t worry about security /
> updates / nodes and still be able to run containers”, and fcos is a
> detail, even if it’s an important one.  How would you pitch the
> alternative?
>
>
> >
> >> On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman  
> >> wrote:
> >>
> >>
> >>
> >>
> >>> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  
> >>> wrote:
> >>>
> >>> I tried FCoS prior to the release by using the assembler on github.
> >>> Too much secret sauce in how to actually construct an image.  I
> >>> thought atomic was much more polished, not really sure what the
> >>> value-add of ignition is in this usecase.  Just give me a way to build
> >>> simple image pipelines and I don't need ignition.  To that end, there
> >>> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
> >>> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
> >>> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> >>> ignition to actually install okd.  To me, it seems FCoS was created
> >>> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> >>> actually solves anyone's needs relative to atomic.  It feels like we
> >>> jumped the shark on this one.
> >>
> >>
> >> That’s feedback that’s probably something you should share in the fcos 
> >> forums as well.  I will say that I find the OCP + RHEL experience 
> >> unsatisfying and doesn't truly live up to what RHCOS+OCP can do (since it 
> >> lacks the key features like ignition and immutable hosts).  Are you saying 
> >> you'd prefer to have more of a "DIY kube bistro" than the "highly 
> >> opinionated, totally integrated OKD" proposal?  I think that's a good 
> >> question the community should get a chance to weigh in on (in my original 
> >> email that was the implicit question - do you want something that looks 
> >> like OCP4, or something that is completely different).
> >>
> >>>
> >>>
> >>> I'd like to see OKD be distro-independent.  Obviously Fedora should be
> >>> our primary target (I'd argue Fedora over