On 25.07.2019 at 19:31, Daniel Comnea wrote:
> 
> 
> On Thu, Jul 25, 2019 at 5:01 PM Michael Gugino <mgug...@redhat.com
> <mailto:mgug...@redhat.com>> wrote:
> 
>     I don't really view the 'bucket of parts' and 'complete solution' as
>     competing ideas.  It would be nice to build the 'complete solution'
>     from the 'bucket of parts' in a reproducible, customizable manner.
>     "How is this put together" should be easily followed, enough so that
>     someone can 'put it together' on their own infrastructure without
>     having to be an expert in designing and configuring the build system.
> 
>     IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
>     the openshift-specific bits from source, I could point at any
>     repository I wanted, I could point to any image registry I wanted, I
>     could use any distro I wanted.  I could replace the parts I wanted to;
>     or I could just run it as-is from the published sources and not worry
>     about replacing things.  I even built Fedora Atomic host rpm-trees
>     with all the kubelet bits pre-installed, similar to what we're doing
>     with CoreOS now in 4.x.  It was a great experience; building my own
>     system images and running updates was trivial.

+1

>     I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
>     of flexibility and easy to use tooling.
> 
> So maybe what we are asking here is:
> 
>   * opinionated OCP 4 philosophy => OKD 4 + FCOS (IPI and UPI) using ignition,
>     CVO etc
>   * DIY kube philosophy, reusing as many v4 components as possible but with
>     your own preferred operating system

+1, and in addition "preferred hosting provider", similar to 3.x.

It would be nice if the ansible installer were still available for 4, as it
makes it possible to run OKD 4 on the many distros where Ansible runs.

> In terms of approach/priority, I think it is fair to adopt a baby-steps
> approach where:
> 
>   * phase 1 = try to get OKD 4 + FCOS out asap so folks can start building up
>     the knowledge around operating the new solution in a full production env
>   * phase 2 = once experience/knowledge has been built up, we can crack on
>     with reverse eng and see what we can swap etc.

I don't think that reverse engineering is a good way to go. Most of the parts
are OSS, and the `none` platform already exists, so it would be nice to get the
docs for this option out so that the ansible installer can be used for OKD 4.

https://github.com/openshift/installer/blob/master/CHANGELOG.md#090---2019-01-05

```
Added

    There is a new none platform for bring-your-own infrastructure users who
    want to generate Ignition configurations. The new platform is mostly
    undocumented; users will usually interact with it via OpenShift Ansible.
```
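For what it's worth, the Ignition configs that `openshift-install create
ignition-configs` produces for the `none` platform are plain JSON, so they can
be inspected or post-processed with ordinary tooling. A minimal sketch (the
sample config below is a trimmed, hypothetical stand-in for a real
bootstrap.ign):

```python
import json
import pathlib
import tempfile

# Trimmed, hypothetical stand-in for a generated bootstrap.ign
# (the real files come from `openshift-install create ignition-configs`).
ign = {
    "ignition": {"version": "3.0.0"},
    "storage": {
        "files": [
            {"path": "/etc/hostname", "contents": {"source": "data:,node1"}},
        ]
    },
}
path = pathlib.Path(tempfile.mkdtemp()) / "bootstrap.ign"
path.write_text(json.dumps(ign))

# The configs are plain JSON, so any tool can read them back:
config = json.loads(path.read_text())
print(config["ignition"]["version"])   # 3.0.0
for f in config["storage"]["files"]:
    print(f["path"])                   # /etc/hostname
```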
>     On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman <ccole...@redhat.com
>     <mailto:ccole...@redhat.com>> wrote:
>     >
>     > > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic
>     > > <openshift-li...@me2digital.com <mailto:openshift-li...@me2digital.com>> wrote:
>     > >
>     > > HI.
>     > >
>     > >> On 25.07.2019 at 06:52, Michael Gugino wrote:
>     > >> I think FCoS could be a mutable detail.  To start with, support for
>     > >> plain-old-fedora would be helpful to make the platform more portable,
>     > >> particularly the MCO and machine-api.  If I had to state a goal, it
>     > >> would be "Bring OKD to the largest possible range of linux distros to
>     > >> become the defacto implementation of kubernetes."
>     > >
>     > > I agree here with Michael. FCoS, or CoS in general, looks like a good
>     > > technical idea, but it limits the flexibility of possible solutions.
>     > >
>     > > For example, when you need to change some system settings, you will
>     > > need to create a new OS image; this is not very usable in some
>     > > environments.
>     >
>     > I think something we haven’t emphasized enough is that openshift 4 is
>     > very heavily structured around changing the cost and mental model
>     > around this.  The goal was and is to make these sorts of things
>     > unnecessary.  Changing machine settings by building golden images is
>     > already the “wrong” (expensive and error prone) pattern - instead, it
>     > should be easy to reconfigure machines or to launch new containers to
>     > run software on those machines.  There may be two factors here at
>     > work:
>     >
>     > 1. Openshift 4 isn’t flexible in the ways people want (Ie you want to
>     > add an rpm to the OS to get a kernel module, or you want to ship a
>     > complex set of config and managing things with mcd looks too hard)
>     > 2. You want to build and maintain these things yourself, so the “just
>     > works” mindset doesn’t appeal.
>     >
>     > The initial doc alluded to the DIY / bucket of parts use case (I can
>     > assemble this on my own but slightly differently) - maybe we can go
>     > further now and describe the goal / use case as:
>     >
>     > I want to be able to compose my own Kubernetes distribution, and I’m
>     > willing to give up continuous automatic updates to gain flexibility in
>     > picking my own software
>     >
>     > Does that sound like it captures your request?
>     >
>     > Note that a key reason why the OS is integrated is so that we can keep
>     > machines up to date and do rolling control plane upgrades with no
>     > risk.  If you take the OS out of the equation the risk goes up
>     > substantially, but if you’re willing to give that up then yes, you
>     > could build an OKD that doesn’t tie to the OS.  This trade off is an
>     > important one for folks to discuss.  I’d been assuming that people
>     > *want* the automatic and safe upgrades, but maybe that’s a bad
>     > assumption.
>     >
>     > What would you be willing to give up?
>     >
>     > >
>     > > It would be nice to have the good old option to use the ansible
>     > > installer to install OKD/OpenShift on other Linux distributions
>     > > where Ansible is able to run.
>     > >
>     > >> Also, it would be helpful (as previously stated) to build communities
>     > >> around some of our components that might not have a place in the
>     > >> official kubernetes, but are valuable downstream components
>     > >> nevertheless.
>     > >>
>     > >> Anyway, I'm just throwing some ideas out there; I wouldn't consider my
>     > >> statements as advocating strongly in any direction.  Surely FCoS is
>     > >> the natural fit, but I think considering other distros merits
>     > >> discussion.
>     > >
>     > > +1
>     > >
>     > > Regards
>     > > Aleks
>     > >
>     > >
>     > >>> On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman <ccole...@redhat.com
>     <mailto:ccole...@redhat.com>> wrote:
>     > >>>
>     > >>>> On Jul 24, 2019, at 9:14 PM, Michael Gugino <mgug...@redhat.com
>     <mailto:mgug...@redhat.com>> wrote:
>     > >>>>
>     > >>>> I think what I'm looking for is more 'modular' rather than DIY.  CVO
>     > >>>> would need to be adapted to separate container payload from host
>     > >>>> software (or use something else), and maintaining cross-distro
>     > >>>> machine-configs might prove tedious, but for the most part, the rest
>     > >>>> of everything from the k8s bins up should be more or less the same.
>     > >>>>
>     > >>>> MCD is good software, but there's not really much going on there that
>     > >>>> can't be ported to any other OS.  MCD downloads a payload, extracts
>     > >>>> files, rebases ostree, reboots host.  You can do all of those steps
>     > >>>> except 'rebases ostree' on any distro.  And instead of 'rebases
>     > >>>> ostree', we could pull down a container that acts as a local repo that
>     > >>>> contains all the bits you need to upgrade your host across releases.
>     > >>>> Users could do things to break this workflow, but it should otherwise
>     > >>>> work if they aren't fiddling with the hosts.  The MCD payload happens
>     > >>>> to embed an ignition payload, but it doesn't actually run ignition,
>     > >>>> just utilizes the file format.
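Worth underlining how portable that extraction step really is: the Ignition
file format is just JSON, and applying its file entries needs nothing
OS-specific. A rough Python sketch (the payload and paths are made up; the real
MCD does considerably more):

```python
import json
import pathlib
import tempfile
import urllib.parse

# Made-up payload in the Ignition file format; the real MCD payload
# embeds entries like this but handles far more cases.
payload = json.loads("""
{
  "storage": {
    "files": [
      {"path": "/etc/kubernetes/kubelet.conf",
       "contents": {"source": "data:,maxPods%3A%20250"}}
    ]
  }
}
""")

root = pathlib.Path(tempfile.mkdtemp())  # stand-in for the host's /
for entry in payload["storage"]["files"]:
    target = root / entry["path"].lstrip("/")
    target.parent.mkdir(parents=True, exist_ok=True)
    # data: URLs carry the file contents inline, percent-encoded
    data = entry["contents"]["source"].split("data:,", 1)[1]
    target.write_text(urllib.parse.unquote(data))

print((root / "etc/kubernetes/kubelet.conf").read_text())  # maxPods: 250

# On FCOS/RHCOS the next steps would be an `rpm-ostree rebase` plus a
# reboot; on a traditional distro they could be a package transaction
# from a repo container plus a service restart.
```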
>     > >>>>
>     > >>>> From my viewpoint, there's nothing particularly special about ignition
>     > >>>> in our current process either.  I had the entire OCP 4 stack running
>     > >>>> on RHEL using the same exact ignition payload, a minimal amount of
>     > >>>> ansible (which could have been replaced by cloud-init userdata), and a
>     > >>>> small python library to parse the ignition files.  I was also building
>     > >>>> repo containers for 3.10 and 3.11 for Fedora.  Not to say the
>     > >>>> OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
>     > >>>> 4 came together quite nicely.
>     > >>>>
>     > >>>> I'm all for 'not managing machines' but I'm not sure it has to look
>     > >>>> exactly like OCP.  Seems the OCP installer and CVO could be
>     > >>>> adapted/replaced with something else, MCD adapted, pretty much
>     > >>>> everything else works the same.
>     > >>>
>     > >>> Sure - why?  As in, what do you want to do?  What distro do you want
>     > >>> to use instead of fcos?  What goals / outcomes do you want out of the
>     > >>> ability to do whatever?  Ie the previous suggestion (the auto updating
>     > >>> kube distro) has the concrete goal of “don’t worry about security /
>     > >>> updates / nodes and still be able to run containers”, and fcos is a
>     > >>> detail, even if it’s an important one.  How would you pitch the
>     > >>> alternative?
>     > >>>
>     > >>>
>     > >>>>
>     > >>>>> On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman
>     <ccole...@redhat.com <mailto:ccole...@redhat.com>> wrote:
>     > >>>>>
>     > >>>>>
>     > >>>>>
>     > >>>>>
>     > >>>>>> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino 
> <mgug...@redhat.com
>     <mailto:mgug...@redhat.com>> wrote:
>     > >>>>>>
>     > >>>>>> I tried FCoS prior to the release by using the assembler on github.
>     > >>>>>> Too much secret sauce in how to actually construct an image.  I
>     > >>>>>> thought atomic was much more polished, not really sure what the
>     > >>>>>> value-add of ignition is in this usecase.  Just give me a way to
>     > >>>>>> build simple image pipelines and I don't need ignition.  To that
>     > >>>>>> end, there should be an okd 'spin' of FCoS IMO.  Atomic was dead
>     > >>>>>> simple to build ostree repos for.  I'd prefer to have ignition be
>     > >>>>>> opt-in.  Since we're supporting the mcd-once-from to parse ignition
>     > >>>>>> on RHEL, we don't need ignition to actually install okd.  To me, it
>     > >>>>>> seems FCoS was created just to have a more open version of RHEL
>     > >>>>>> CoreOS, and I'm not sure FCoS actually solves anyone's needs
>     > >>>>>> relative to atomic.  It feels like we jumped the shark on this one.
>     > >>>>>
>     > >>>>>
>     > >>>>> That’s feedback that’s probably something you should share in the
>     > >>>>> fcos forums as well.  I will say that I find the OCP + RHEL
>     > >>>>> experience unsatisfying; it doesn't truly live up to what RHCOS+OCP
>     > >>>>> can do (since it lacks the key features like ignition and immutable
>     > >>>>> hosts).  Are you saying you'd prefer to have more of a "DIY kube
>     > >>>>> distro" than the "highly opinionated, totally integrated OKD"
>     > >>>>> proposal?  I think that's a good question the community should get a
>     > >>>>> chance to weigh in on (in my original email that was the implicit
>     > >>>>> question - do you want something that looks like OCP4, or something
>     > >>>>> that is completely different).
>     > >>>>>
>     > >>>>>>
>     > >>>>>>
>     > >>>>>> I'd like to see OKD be distro-independent.  Obviously Fedora should
>     > >>>>>> be our primary target (I'd argue Fedora over FCoS), but I think it
>     > >>>>>> should be true upstream software in the sense that apache2 http
>     > >>>>>> server is upstream and not distro specific.  To this end, perhaps it
>     > >>>>>> makes sense to consume k/k instead of openshift/origin for okd.  OKD
>     > >>>>>> should be free to do wild and crazy things independently of the
>     > >>>>>> enterprise product.  Perhaps there's a usecase for treating k/k vs
>     > >>>>>> openshift/origin as a swappable base layer.
>     > >>>>>
>     > >>>>>
>     > >>>>> That’s even more dramatic a change from OKD even as it was in 3.x.
>     > >>>>> I’d be happy to see people excited about reusing cvo / mcd and be
>     > >>>>> able to mix and match, but most of the things here would be a huge
>     > >>>>> investment to build.  In my original email I might call this the “I
>     > >>>>> want to build my own distro" - if that's what people want to build,
>     > >>>>> I think we can do things to enable it.  But it would probably not be
>     > >>>>> "openshift" in the same way.
>     > >>>>>
>     > >>>>>>
>     > >>>>>>
>     > >>>>>> It would be nice to have a more native kubernetes place to develop
>     > >>>>>> our components against so we can upstream them, or otherwise just
>     > >>>>>> build a solid community around how we think kubernetes should be
>     > >>>>>> deployed and consumed.  Similar to how Fedora has a package
>     > >>>>>> repository, we should have a Kubernetes component repository (I
>     > >>>>>> realize operatorhub fulfills some of this, but I'm talking about a
>     > >>>>>> place for OLM and things like MCD to live).
>     > >>>>>
>     > >>>>>
>     > >>>>> MCD is really tied to the OS.  The idea of a generic MCD seems like
>     > >>>>> it loses the value of MCD being specific to an OS.
>     > >>>>>
>     > >>>>> I do think there are two types of components we have - things
>     > >>>>> designed to work well with kube, and things designed to work well
>     > >>>>> with "openshift the distro".  The former can be developed against
>     > >>>>> Kube (like operator hub / olm) but the latter doesn't really make
>     > >>>>> sense to develop against unless it matches what is being built.  In
>     > >>>>> that vein, OKD4 looking not like OCP4 wouldn't benefit you or the
>     > >>>>> components.
>     > >>>>>
>     > >>>>>>
>     > >>>>>>
>     > >>>>>> I think we could integrate with existing package managers via a
>     > >>>>>> 'repo-in-a-container' type strategy for those not using ostree.
>     > >>>>>
>     > >>>>>
>     > >>>>> A big part of openshift 4 is "we're tired of managing machines".  It
>     > >>>>> sounds like you are arguing for "let people do whatever", which is
>     > >>>>> definitely valid (but doesn't sound like openshift 4).
>     > >>>>>
>     > >>>>>>
>     > >>>>>>
>     > >>>>>> As far as slack vs IRC, I vote IRC or any free software solution
>     > >>>>>> (but my preference is IRC because it's simple and I like it).
>     > >>>>
>     > >>>>
>     > >>>>
>     > >>>> --
>     > >>>> Michael Gugino
>     > >>>> Senior Software Engineer - OpenShift
>     > >>>> mgug...@redhat.com <mailto:mgug...@redhat.com>
>     > >>>> 540-846-0304
>     > >>
>     > >>
>     > >>
>     > >
> 
> 
> 
>     -- 
>     Michael Gugino
>     Senior Software Engineer - OpenShift
>     mgug...@redhat.com <mailto:mgug...@redhat.com>
>     540-846-0304
> 
>     _______________________________________________
>     dev mailing list
>     d...@lists.openshift.redhat.com <mailto:d...@lists.openshift.redhat.com>
>     http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> 
> 
> _______________________________________________
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> 
