Re: Annotation NodeSelector is missing

2018-04-12 Thread Michael Gugino
That will be the default infra_selector unless you modify it, but yes
that's the new format.  You'll want to apply that label to your infra
hosts as well.
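
A minimal sketch of applying that label, assuming an infra node named
infra-1.example.com (the host name is a placeholder):

oc label node infra-1.example.com node-role.kubernetes.io/infra=true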

On Thu, Apr 12, 2018 at 9:06 AM, Charles Moulliard <cmoul...@redhat.com> wrote:
> Hi Michael,
>
> Do I have to use this openshift-ansible var to define it for 3.9?
>
> openshift_hosted_infra_selector="node-role.kubernetes.io/infra=true"
>
> Regards,
>
> Charles
>
> On Thu, Apr 12, 2018 at 2:56 PM, Michael Gugino <mgug...@redhat.com> wrote:
>>
>> If you're installing an all-in-one, you need to set the label
>> node-role.kubernetes.io/compute=true on the node as that is the
>> default node selector in 3.9.
>>
>> On Thu, Apr 12, 2018 at 4:01 AM, Charles Moulliard <cmoul...@redhat.com>
>> wrote:
>> > Hi,
>> >
>> > I have installed openshift origin using the openshift-ansible
>> > "release-3.9"
>> > branch but
>> > when I create a new application such as MySQL using the openshift
>> > template
>> > within by example the project "demo", then the deployment fails and
>> > reports
>> > this error
>> >
>> > "0/1 nodes are available: 1 MatchNodeSelector."
>> >
>> > This problem is due to the fact that the project doesn't include the
>> > following annotation "openshift.io/node-selector="
>> >
>> > See -->
>> > oc describe project/demo
>> > Name: demo
>> > Created: About a minute ago
>> > Labels: 
>> > Annotations: openshift.io/description=
>> > openshift.io/display-name=
>> > openshift.io/requester=admin
>> > openshift.io/sa.scc.mcs=s0:c11,c5
>> > openshift.io/sa.scc.supplemental-groups=100012/1
>> > openshift.io/sa.scc.uid-range=100012/1
>> > ...
>> >
>> > Is there a way to tell OpenShift to add such an annotation for every
>> > project created?
>> > Do I have to report a bug to the origin or openshift-ansible project?
>> >
>> > Regards,
>> >
>> > Charles
>> >
>>
>>
>>
>> --
>> Michael Gugino
>> Senior Software Engineer - OpenShift
>> mgug...@redhat.com
>> 540-846-0304
>
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: Openshift Origin 3.9 rpms -> ansible playbooks

2018-03-16 Thread Michael Gugino
I typically use the head of each release branch for openshift-ansible.
Most of the time that's a good approach for anything that is already
released (such as 3.7).  Alternatively, you can install the
openshift-ansible origin rpm on a centos host.
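
A sketch of that rpm route, assuming the CentOS PaaS SIG package names:

yum install -y centos-release-openshift-origin
yum install -y openshift-ansible
ansible-playbook -i inventory /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml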

I don't recommend using openshift_repos_enable_testing, it's been
nothing but problems for me.  I don't recommend using any testing
versions of any packages either.

For testing new origin code, I have had the best luck with Fedora
Atomic 27.  However, I don't think 3.9 origin is in great shape at the
moment.

On Fri, Mar 16, 2018 at 3:49 PM, Charles Moulliard <cmoul...@redhat.com> wrote:
> Do you suggest using a tagged version of the openshift-ansible git repo
> combined with parameters such as these in the inventory?
>
> E.g
>
> git clone -b openshift-ansible-3.7.38-1 \
>   https://github.com/openshift/openshift-ansible.git
>
> inventory
> ===
> openshift_repos_enable_testing = true
> containerized = false
> openshift_release = v3.7
> openshift_pkg_version = "-3.7.0-1.0.7ed6862" # See ->
> https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin37/
> openshift_deployment_type = origin
> ...
>
> ansible-playbook -i inventory openshift-ansible/playbooks/byo/config.yml
>
> On Fri, Mar 16, 2018 at 8:44 PM, Michael Gugino <mgug...@redhat.com> wrote:
>>
>> I would start again with new hosts, probably the easiest way forward.
>> I recommend using the released versions if you are not comfortable
>> with troubleshooting openshift-ansible.
>>
>> On Fri, Mar 16, 2018 at 3:29 PM, Charles Moulliard <cmoul...@redhat.com>
>> wrote:
>> > Yep, I made a mistake, but I have cleaned my machine and did a new
>> > deployment, and we get a weird error again
>> >
>> > INSTALLER STATUS
>> >
>> > 
>> > Initialization : Complete
>> > Health Check   : Complete
>> > etcd Install   : Complete
>> > Master Install : Complete
>> > Master Additional Install  : Complete
>> > Node Install   : In Progress
>> > This phase can be restarted by running:
>> > playbooks/byo/openshift-node/config.yml
>> >
>> >
>> >
>> > Failure summary:
>> >
>> >
>> >   1. Hosts:    192.168.99.50
>> >  Play: Configure nodes
>> >  Task: Install sdn-ovs package
>> >  Message:  Error: Package: origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64
>> > (centos-openshift-origin37)
>> >   Requires: origin-node = 3.7.0-1.0.7ed6862
>> >   Installed:
>> > origin-node-3.7.1-1.el7.git.0.0a2d6a1.x86_64
>> > (@centos-openshift-origin37-testing)
>> >   origin-node = 3.7.1-1.el7.git.0.0a2d6a1
>> >   Available:
>> > origin-node-v3.7.1-0.3.7.1.el7.git.0.b113c29.x86_64
>> > (centos-openshift-origin37-testing)
>> >
>> > So it seems that the version calculated and downloaded from this repo
>> > https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin37/ is
>> > not
>> > consistent
>> >
>> > Is there a way to properly download / install all the required origin
>> > rpms, as the ansible playbook fails?
>> >
>> >
>> > On Fri, Mar 16, 2018 at 8:24 PM, Michael Gugino <mgug...@redhat.com>
>> > wrote:
>> >>
>> >> Charles,
>> >>
>> >>   You cannot install 3.9 from the openshift-ansible 3.7 branch.  If
>> >> you want to install 3.9, you need to use the openshift-ansible 3.9
>> >> branch, which is not officially released yet.
>> >>
>> >>   There is not a testing repo for each release of openshift; just the
>> >> latest unreleased version.  If you enable testing repos on the 3.7
>> >> branch, you'll end up getting 3.9 packages which is what's happening
>> >> here.
>> >>
>> >> On Fri, Mar 16, 2018 at 11:32 AM, Charles Moulliard
>> >> <cmoul...@redhat.com>
>> >> wrote:
>> >> > Hi,
>> >> >
>> >> > Are the OpenShift Origin v3.9 rpms available from a repo?
>> >> > How can we get them in order to install an OpenShift cluster using the
>> >> > ansible playbooks?
>> >> >
>> >> > Regards
>> >> >
>> >> > Charles
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Michael Gugino
>> >> Senior Software Engineer - OpenShift
>> >> mgug...@redhat.com
>> >> 540-846-0304
>> >
>> >
>>
>>
>>
>> --
>> Michael Gugino
>> Senior Software Engineer - OpenShift
>> mgug...@redhat.com
>> 540-846-0304
>
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: Openshift Origin 3.9 rpms -> ansible playbooks

2018-03-16 Thread Michael Gugino
I would start again with new hosts, probably the easiest way forward.
I recommend using the released versions if you are not comfortable
with troubleshooting openshift-ansible.

On Fri, Mar 16, 2018 at 3:29 PM, Charles Moulliard <cmoul...@redhat.com> wrote:
> Yep, I made a mistake, but I have cleaned my machine and did a new deployment,
> and we get a weird error again
>
> INSTALLER STATUS
> 
> Initialization : Complete
> Health Check   : Complete
> etcd Install   : Complete
> Master Install : Complete
> Master Additional Install  : Complete
> Node Install   : In Progress
> This phase can be restarted by running:
> playbooks/byo/openshift-node/config.yml
>
>
>
> Failure summary:
>
>
>   1. Hosts:    192.168.99.50
>  Play: Configure nodes
>  Task: Install sdn-ovs package
>  Message:  Error: Package: origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64
> (centos-openshift-origin37)
>   Requires: origin-node = 3.7.0-1.0.7ed6862
>   Installed:
> origin-node-3.7.1-1.el7.git.0.0a2d6a1.x86_64
> (@centos-openshift-origin37-testing)
>   origin-node = 3.7.1-1.el7.git.0.0a2d6a1
>   Available:
> origin-node-v3.7.1-0.3.7.1.el7.git.0.b113c29.x86_64
> (centos-openshift-origin37-testing)
>
> So it seems that the version calculated and downloaded from this repo
> https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin37/ is not
> consistent
>
> Is there a way to properly download / install all the required origin rpms,
> as the ansible playbook fails?
>
>
> On Fri, Mar 16, 2018 at 8:24 PM, Michael Gugino <mgug...@redhat.com> wrote:
>>
>> Charles,
>>
>>   You cannot install 3.9 from the openshift-ansible 3.7 branch.  If
>> you want to install 3.9, you need to use the openshift-ansible 3.9
>> branch, which is not officially released yet.
>>
>>   There is not a testing repo for each release of openshift; just the
>> latest unreleased version.  If you enable testing repos on the 3.7
>> branch, you'll end up getting 3.9 packages which is what's happening
>> here.
>>
>> On Fri, Mar 16, 2018 at 11:32 AM, Charles Moulliard <cmoul...@redhat.com>
>> wrote:
>> > Hi,
>> >
>> > Are the OpenShift Origin v3.9 rpms available from a repo?
>> > How can we get them in order to install an OpenShift cluster using the
>> > ansible playbooks?
>> >
>> > Regards
>> >
>> > Charles
>> >
>>
>>
>>
>> --
>> Michael Gugino
>> Senior Software Engineer - OpenShift
>> mgug...@redhat.com
>> 540-846-0304
>
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: Openshift Origin 3.9 rpms -> ansible playbooks

2018-03-16 Thread Michael Gugino
Charles,

  You cannot install 3.9 from the openshift-ansible 3.7 branch.  If
you want to install 3.9, you need to use the openshift-ansible 3.9
branch, which is not officially released yet.

  There is not a testing repo for each release of openshift; just the
latest unreleased version.  If you enable testing repos on the 3.7
branch, you'll end up getting 3.9 packages which is what's happening
here.

On Fri, Mar 16, 2018 at 11:32 AM, Charles Moulliard <cmoul...@redhat.com> wrote:
> Hi,
>
> Are the OpenShift Origin v3.9 rpms available from a repo?
> How can we get them in order to install an OpenShift cluster using the
> ansible playbooks?
>
> Regards
>
> Charles
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: Openshift Origin 3.9 rpms -> ansible playbooks

2018-03-16 Thread Michael Gugino
I use this basic inventory with fedora atomic:
https://github.com/michaelgugino/openshift-stuff/tree/master/fedora-atomic

I usually deploy to AWS for testing and development.  Wherever you
deploy, your instances' hostnames need to be resolvable.

I don't have any recommendations for using any specific tags.  I
usually use the head of a release branch; I very rarely use specific
tags or commits.
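
For example, a sketch of tracking the head of a release branch:

git clone -b release-3.7 https://github.com/openshift/openshift-ansible.git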

I don't have any general recommendations for deploying anything other
than released openshift code.  If you're trying to deploy testing
images / rpms, your setup is going to depend on what you're trying to
accomplish.  I highly recommend deploying released code.  That is the
only way it's going to 'just work.'

On Fri, Mar 16, 2018 at 4:41 PM, Charles Moulliard <cmoul...@redhat.com> wrote:
>
>
> On Fri, Mar 16, 2018 at 8:57 PM, Michael Gugino <mgug...@redhat.com> wrote:
>>
>> I typically use the head of each release branch for openshift-ansible.
>> Most of the time that's a good approach for anything that is already
>> released (such as 3.7).
>
>
>>> So using this tagged release is a good idea then:
>>> https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.7.38-1 ?
>
>> Alternatively, you can install the
>> openshift-ansible origin rpm on a centos host.
>
>
>>> How can I do that ?
>>
>>
>> I don't recommend using openshift_repos_enable_testing, it's been
>> nothing but problems for me.  I don't recommend using any testing
>> versions of any packages either.
>
>
>>> I agree, as the last run generated such a bad set of installed rpms.
>
> Beginning of executing the playbook
> origin.x86_64           3.7.0-1.0.7ed6862           @centos-openshift-origin37
> origin-clients.x86_64   3.7.0-1.0.7ed6862           @centos-openshift-origin37
> origin-master.x86_64    3.7.0-1.0.7ed6862           @centos-openshift-origin37
>
> Later ...
>
> origin.x86_64           3.7.1-1.el7.git.0.0a2d6a1   @centos-openshift-origin37-testing
> origin-clients.x86_64   3.7.1-1.el7.git.0.0a2d6a1   @centos-openshift-origin37-testing
> origin-master.x86_64    3.7.1-1.el7.git.0.0a2d6a1   @centos-openshift-origin37-testing
> origin-node.x86_64      3.7.1-1.el7.git.0.0a2d6a1   @centos-openshift-origin37-testing
>
>
>>
>>
>> For testing new origin code, I have had the best luck with Fedora
>> Atomic 27.
>
>
>>> Can you share what you did (info, doc, ...) ?
>
>>
>>   However, I don't think 3.9 origin is in great shape at the
>> moment.
>>
>> On Fri, Mar 16, 2018 at 3:49 PM, Charles Moulliard <cmoul...@redhat.com>
>> wrote:
>> > Do you suggest using a tagged version of the openshift-ansible git repo
>> > combined with parameters such as these in the inventory?
>> >
>> > E.g
>> >
>> > git clone -b openshift-ansible-3.7.38-1 \
>> >   https://github.com/openshift/openshift-ansible.git
>> >
>> > inventory
>> > ===
>> > openshift_repos_enable_testing = true
>> > containerized = false
>> > openshift_release = v3.7
>> > openshift_pkg_version = "-3.7.0-1.0.7ed6862" # See ->
>> > https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin37/
>> > openshift_deployment_type = origin
>> > ...
>> >
>> > ansible-playbook -i inventory openshift-ansible/playbooks/byo/config.yml
>> >
>> > On Fri, Mar 16, 2018 at 8:44 PM, Michael Gugino <mgug...@redhat.com>
>> > wrote:
>> >>
>> >> I would start again with new hosts, probably the easiest way forward.
>> >> I recommend using the released versions if you are not comfortable
>> >> with troubleshooting openshift-ansible.
>> >>
>> >> On Fri, Mar 16, 2018 at 3:29 PM, Charles Moulliard
>> >> <cmoul...@redhat.com>
>> >> wrote:
>> >> > Yep, I made a mistake, but I have cleaned my machine and did a new
>> >> > deployment, and we get a weird error again
>> >> >
>> >> > INSTALLER STATUS
>> >> >
>> >> >
>> >> > 
>> >> > Initialization : Complete
>> >> > Health Check   : Complete
>> >> > etcd Inst

Re: Openshift Origin 3.9 rpms -> ansible playbooks

2018-03-19 Thread Michael Gugino
3.9 hasn't shipped yet; we're still cleaning up the last couple of
items.  I'm not sure offhand who releases the work for the other
distros.

On Mon, Mar 19, 2018 at 4:15 AM, Charles Moulliard <cmoul...@redhat.com> wrote:
> Do we know when the official Origin rpms for OpenShift 3.9 will be released
> under "http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin39/"?
> Who manages such builds for RHEL, CentOS, Fedora?
>
> On Fri, Mar 16, 2018 at 4:42 PM, Troy Dawson <tdaw...@redhat.com> wrote:
>>
>> On Fri, Mar 16, 2018 at 8:32 AM, Charles Moulliard <cmoul...@redhat.com>
>> wrote:
>> > Hi,
>> >
>> > Are the OpenShift Origin v3.9 rpms available from a repo?
>> > How can we get them in order to install an OpenShift cluster using the
>> > ansible playbooks?
>> >
>> > Regards
>> >
>> > Charles
>> >
>>
>> https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin39/
>>
>>   I think it's
>> ansible-playbook -e openshift_repos_enable_testing=true
>
>
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: Openshift Origin 3.9 rpms -> ansible playbooks

2018-03-16 Thread Michael Gugino
For 3.7 and below, you need to do some manual preparation steps and
then the playbook you want to run is:
openshift-ansible/playbooks/byo/config.yml

Prerequisites: 
https://docs.openshift.org/latest/install_config/install/prerequisites.html

Host prep: 
https://docs.openshift.org/latest/install_config/install/host_preparation.html

I think some of the items in those pages are already done on atomic
host (such as installing docker).
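
As a rough sketch, the host prep from those docs boils down to something
like this on each node (exact package lists and versions are in the
linked pages):

yum install -y wget git net-tools bind-utils docker
systemctl enable docker
systemctl start docker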

On Fri, Mar 16, 2018 at 5:48 PM, Charles Moulliard <cmoul...@redhat.com> wrote:
> I see from your github repo that you use playbooks committed under the master
> branch, as you use these commands to install ocp
>
> ansible-playbook -i inventory/hosts.localhost playbooks/prerequisites.yml
> ansible-playbook -i inventory/hosts.localhost playbooks/deploy_cluster.yml
>
> Unfortunately that fails for me when installing OpenShift Origin v3.7
>
> On Fri, Mar 16, 2018 at 10:17 PM, Michael Gugino <mgug...@redhat.com> wrote:
>>
>> I use this basic inventory with fedora atomic:
>> https://github.com/michaelgugino/openshift-stuff/tree/master/fedora-atomic
>>
>> I usually deploy to AWS for testing and development.  Wherever you
>> deploy, your instances' hostnames need to be resolvable.
>>
>> I don't have any recommendations for using any specific tags.  I
>> usually use the head of a release branch; I very rarely use specific
>> tags or commits.
>>
>> I don't have any general recommendations for deploying anything other
>> than released openshift code.  If you're trying to deploy testing
>> images / rpms, your setup is going to depend on what you're trying to
>> accomplish.  I highly recommend deploying released code.  That is the
>> only way it's going to 'just work.'
>>
>> On Fri, Mar 16, 2018 at 4:41 PM, Charles Moulliard <cmoul...@redhat.com>
>> wrote:
>> >
>> >
>> > On Fri, Mar 16, 2018 at 8:57 PM, Michael Gugino <mgug...@redhat.com>
>> > wrote:
>> >>
>> >> I typically use the head of each release branch for openshift-ansible.
>> >> Most of the time that's a good approach for anything that is already
>> >> released (such as 3.7).
>> >
>> >
>> >>> So using this tagged release is a good idea then:
>> >>> https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.7.38-1 ?
>> >
>> >> Alternatively, you can install the
>> >> openshift-ansible origin rpm on a centos host.
>> >
>> >
>> >>> How can I do that ?
>> >>
>> >>
>> >> I don't recommend using openshift_repos_enable_testing, it's been
>> >> nothing but problems for me.  I don't recommend using any testing
>> >> versions of any packages either.
>> >
>> >
>> >>> I agree, as the last run generated such a bad set of installed rpms.
>> >
>> > Beginning of executing the playbook
>> > origin.x86_64           3.7.0-1.0.7ed6862           @centos-openshift-origin37
>> > origin-clients.x86_64   3.7.0-1.0.7ed6862           @centos-openshift-origin37
>> > origin-master.x86_64    3.7.0-1.0.7ed6862           @centos-openshift-origin37
>> >
>> > Later ...
>> >
>> > origin.x86_64           3.7.1-1.el7.git.0.0a2d6a1   @centos-openshift-origin37-testing
>> > origin-clients.x86_64   3.7.1-1.el7.git.0.0a2d6a1   @centos-openshift-origin37-testing
>> > origin-master.x86_64    3.7.1-1.el7.git.0.0a2d6a1   @centos-openshift-origin37-testing
>> > origin-node.x86_64      3.7.1-1.el7.git.0.0a2d6a1   @centos-openshift-origin37-testing
>> >
>> >
>> >>
>> >>
>> >> For testing new origin code, I have had the best luck with Fedora
>> >> Atomic 27.
>> >
>> >
>> >>> Can you share what you did (info, doc, ...) ?
>> >
>> >>
>> >>   However, I don't think 3.9 origin is in great shape at the
>> >> moment.
>> >>
>> >> On Fri, Mar 16, 2018 at 3:49 PM, Charles Moulliard
>> >> <cmoul...@redhat.com>
>> >> wrote:
>> >> > Do you suggest using a tagged version of the openshift-ansible git
>> >> > repo combined with parameters such as these in the inventory?
>> >> >
>> >> > E.g
>> >> >

Re: Custom certificate and the host associated with masterPublicURL

2018-08-30 Thread Michael Gugino
OpenShift components themselves call the masterURL.  We ensure that
the internal API endpoint is trusted by all OpenShift components.  I
strongly suggest following the documentation even if it appears to
work otherwise, changing this behavior might result in breaking during
an upgrade or other scenario where a custom certificate at the
masterURL wasn't accounted for.
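
A hedged sketch of the master-config.yaml stanza those docs describe
(the host name here is a placeholder):

servingInfo:
  namedCertificates:
  - certFile: custom.crt
    keyFile: custom.key
    names:
    - "master.public.example.com"

i.e. list only the host names tied to masterPublicURL /
oauthConfig.assetPublicURL, never the internal masterURL host.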

On Wed, Aug 29, 2018 at 9:06 AM, Daniel Comnea  wrote:
> Hi,
>
> I'm trying to understand from a technical point of view the hard requirement
> around namedCertificates and the hostname associated with the
> masterPublicURL vs masterURL.
>
> According to the docs [1] it says
>
> "
> The namedCertificates section should be configured only for the host name
> associated with the masterPublicURL and oauthConfig.assetPublicURL settings in
> the /etc/origin/master/master-config.yaml file. Using a custom serving
> certificate for the host name associated with the masterURL will result in
> TLS errors as infrastructure components will attempt to contact the master
> API using the internal masterURL host.
> "
>
> However, the above note/requirement doesn't apply to the self-signed
> certificates generated by the openshift-ansible installer, and as such the OP
> can have the same value defined for the below variables in his/her inventory
>
> openshift_master_cluster_public_hostname => map to masterPublicURL
> openshift_master_cluster_hostname => map to masterURL
>
>
> without having any side effects, i.e. TLS errors.
>
> Is there anything "special" about the self-signed certificates produced by
> the openshift-ansible installer which doesn't generate any TLS errors?
> If not, then I'd expect the same TLS errors as when the namedCertificates
> section is present.
>
>
> Dani
>
> [1]
> https://docs.openshift.com/container-platform/3.10/install_config/certificate_customization.html#configuring-custom-certificates
>
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: invalid reference format

2018-09-12 Thread Michael Gugino
Yes, we use regex to replace that value.

It's not valid to set oreg_url in the way you are attempting to set
it, but it may actually work for a large majority of the images.  You
can set the registry-console image directly as a workaround.

openshift_cockpit_deployer_image should contain the fully qualified image
name and desired version, for example:
myregistry.com/testing/cockpit:latest
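
Expressed as an inventory line (registry and repo are placeholders;
substitute your own):

openshift_cockpit_deployer_image=myregistry.com/testing/cockpit:latest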

On Wed, Sep 12, 2018 at 12:47 PM, Neale Ferguson  wrote:
> Hi,
>
> I built 3.10 from source and performed an ansible-playbook installation.
> Everything went fine except for registry-console. What have I failed
> to configure or what may be missing such that when registry-console is
> started it fails with:
>
>
>
> Warning  InspectFailed   5s (x8 over 27s)  kubelet,
> docker-test.sinenomine.net  Failed to apply default image tag
> "docker.io/clefos/origin-${component}:latest": couldn't parse image
> reference "docker.io/clefos/origin-${component}:latest": invalid reference
> format
>
>
>
> (clefos is the name of my repo where the built images are placed)
>
>
>
> I assume ${component} is supposed to be substituted by something during the
> playbook processing.
>
>
>
> Neale
>
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: OKD 4 - A Modest Proposal

2019-06-26 Thread Michael Gugino
In Fedora Atomic, it was trivial to build a custom ostree image.  I
would hope the same is available for the FCOS model as well.
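
For reference, the Atomic-era compose was roughly this (the treefile
name is illustrative):

rpm-ostree compose tree --repo=/srv/ostree/repo fedora-atomic-host.json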

On Wed, Jun 26, 2019 at 1:36 PM Colin Walters  wrote:
>
>
>
> On Wed, Jun 26, 2019, at 1:10 PM, Colin Walters wrote:
>
> > The tricky thing here is...if we want this to work the same as OpenShift 
> > 4/OCP
> > with RHEL CoreOS, then what we're really talking about here is a 
> > *derivative*
> > of FCOS that for example embeds the kubelet from OKD.  And short term
> > it will need to use Ignition spec 2.  There may be other things I'm 
> > forgetting.
>
> It was pointed out to me that there is also some relevant discussion in this 
> issue:
> https://github.com/coreos/fedora-coreos-tracker/issues/93
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: Proposal: Deploy and switch to Discourse

2019-07-12 Thread Michael Gugino
I propose we keep the mailing list, and get back on Freenode for
support instead of slack.  In fact, I think we should move all
openshift discussions that are not confidential to freenode.

On Fri, Jul 12, 2019 at 10:12 AM Colin Walters  wrote:
>
> Hi,
>
> I think the Commons' use of Slack is not a good match for "support".
> Requiring an invitation is also an impediment to quickly asking questions.  
> Further Slack is proprietary, and also any discussion there won't be easily 
> found by Google.
>
> On the other hand we have these mailing lists, which are fine but they're 
> traditional mailing lists with all the tradeoffs there.
>
> I propose we shut down the user@ and dev@ lists and deploy a Discourse 
> instance, which is what the cool kids ;) are doing:
> https://discussion.fedoraproject.org/
> http://internals.rust-lang.org/
> etc.
>
> Discourse is IMO really nice because for people who want a mailing list it 
> can act like that, but for people who both want a modern web UI and most 
> importantly just want to drop in occasionally and not be committed to 
> receiving a stream of email, it works a lot better.  Also importantly to me 
> it's FOSS.
>
> I would also personally lean towards not using Slack too but I see that as a 
> separate discussion - it's real time, and that's a distinct thing from 
> discourse.  If we get a lot of momentum in our Discourse though over Slack we 
> can consider what to do later.
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304





Re: RFC: OKD4 Roadmap Draft

2019-08-16 Thread Michael Gugino
I'm not sure what you mean by 'bootstrap machine is anything'.

Haven't seriously looked into single-host cluster yet, but from a high
level, creating the bootstrap node as normal, but not pivoting the API to
the 'real' cluster would seem to do some of what we want, if what we want
is 'just give me a working API so I can run containers' and not 'all of
openshift stuffed into a single host'.

On Fri, Aug 16, 2019 at 11:36 AM Clayton Coleman 
wrote:

>
>
> On Aug 16, 2019, at 11:29 AM, Michael Gugino  wrote:
>
> Pretty much already had all of this working here:
> https://github.com/openshift/openshift-ansible/pull/10898
>
> For single host cluster, I think path of least resistance would be to
> modify the bootstrap host to not pivot, make it clear it's 'not for
> production' and we can take lots of shortcuts for someone just looking for
> an easy, 1-VM openshift api.
>
>
> Does that assume bootstrap machine is “anything”?
>
>
> I'm most interested in running OKD 4.x on Fedora rather than CoreOS.  I
> might try to do something with that this weekend as a POC.
>
>
> Thanks
>
>
> On Fri, Aug 16, 2019 at 10:49 AM Clayton Coleman 
> wrote:
>
>>
>>
>> On Aug 16, 2019, at 10:39 AM, Michael McCune  wrote:
>>
>>
>>
>> On Wed, Aug 14, 2019 at 12:27 PM Christian Glombek 
>> wrote:
>>
>>> The OKD4 roadmap is currently being drafted here:
>>>
>>> https://hackmd.io/Py3RcpuyQE2psYEIBQIzfw
>>>
>>> There was an initial discussion on it in yesterday's WG meeting, with
>>> some feedback given already.
>>>
>>> I have updated the draft and am now calling for comments for a final
>>> time, before a formal
>>> Call for Agreement shall follow at the beginning of next week on the OKD
>>> WG Google group list.
>>>
>>> Please add your comments before Monday. Thank you.
>>>
>>>
>> i'm not sure if i should add this to the document, but is there any
>> consensus (one way or the other) about the notion of bringing forward the
>> all-in-one work that was done in openshift-ansible for version 3?
>>
>> i am aware of code ready containers, but i would really like to see us
>> provide the option for a single machine install.
>>
>>
>> It’s possible for someone to emulate much of the install, bootstrap, and
>> subsequent operations on a single machine (the installer isn’t that much
>> code, the bulk of the work is across the operators).  You’d end up copying
>> a fair bit of the installer, but it may be tractable.  You’d need to really
>> understand the config passed to bootstrap via ignition, how the bootstrap
>> script works, and how you would trick etcd to start on the bootstrap
>> machine.  When the etcd operator lands in 4.3, that last becomes easier
>> (the operator runs and configures a local etcd).
>>
>> Single master / single node configurations are possible, but they will be
>> hard.  Many of the core design decisions of 4 are there to ensure the
>> cluster can self host, and they also require that machines really be
>> members of the cluster.
>>
>> A simpler, less complex path might be to (once we have OKD proto working)
>> to create a custom payload that excludes the installer, the MCD, and to use
>> ansible to configure the prereqs on a single machine (etcd in a specific
>> config), then emulate parts of the bootstrap script and run a single
>> instance (which in theory should work today).  You might be able to update
>> it.  Someone exploring this would possibly be able to get openshift running
>> on a non coreos control plane, so worth exploring if someone has the time.
>>
>>
>> peace o/
>>
>>
>>> Christian Glombek
>>>
>>> Associate Software Engineer
>>>
>>> Red Hat GmbH <https://www.redhat.com/>
>>>
>>>
>>> cglom...@redhat.com 
>>>
>>>
>>> Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn, Germany,
>>> Handelsregister: Amtsgericht München, HRB 153243,
>>> Geschäftsführer: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander
>>>

Re: RFC: OKD4 Roadmap Draft

2019-08-28 Thread Michael Gugino
Just to follow up on this.  I did work on it the weekend before last
as intended.  I hoped to get to do more this past weekend, but time
did not avail itself.

Here's where we are:

There have been some changes to terraform, plugins, etc. since I was
working on this for RHEL, and there have been corresponding changes in
4.1 and 4.2 branches.  I hacked it together enough to deploy a
bootstrap host and a master, as well as parse the ignition files and
start services.  Everything looked like it should have worked, but
something is wrong with the certificates on the bootstrap host.  The
kubectl client is complaining that the server cert of the api is only
valid for  not localhost.  Manually editing the kubeconfig
that is on the bootstrap host then results in 'server cert signed by
unknown authority', despite the fact that a CA is embedded in the
kubeconfig.
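
One way to check which names the serving cert actually covers (assuming
the API is listening on the standard 6443):

echo | openssl s_client -connect localhost:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'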

I'm unsure if RHCOS is doing something to mutate those files after the
ignition payload.  I would suspect not, but I'm just not sure how to
recover from these issues.

After a while, things got pretty hacked up.  I'm going to go back to
the direction of hacking the installer to provision fedora+userdata vs
having my own terraform files and see if I can make more progress
there.

On Mon, Aug 19, 2019 at 3:11 AM Clayton Coleman  wrote:
>
> > On Aug 16, 2019, at 10:25 PM, Michael McCune  wrote:
> >
> >> On Fri, Aug 16, 2019 at 2:36 PM Kevin Lapagna <4...@gmx.ch> wrote:
> >>
> >>> On Fri, Aug 16, 2019 at 4:50 PM Clayton Coleman  
> >>> wrote:
> >>>
> >>> Single master / single node configurations are possible, but they will be 
> >>> hard.  Many of the core design decisions of 4 are there to ensure the 
> >>> cluster can self host, and they also require that machines really be 
> >>> members of the cluster.
> >>
> >>
> >> How about (as alternative) spinning up multiple virtual machines and 
> >> simulate "the real thing". Sure, that uses lots of memory, but it will 
> >> nicely show what 4.x is capable of.
> >
> > my understanding is that this is essentially similar to the current
> > installer with a libvirt backend as the deployment, at least that's
> > how it looks when i try to run an installation on a single physical
> > node with multiple virtual machines.
>
> Yes.  Although as mike notes it may be possible to get this running
> with less effort via his route, which is a good alternative for simple
> single machine.
>
> >
> > peace o/
>



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: Follow up on OKD 4

2019-07-24 Thread Michael Gugino
> ...order to access it in
> the very short term (and then once FCoS is available that would not be 
> necessary).  If that's an option you or anyone on this thread are interested 
> in please let me know, just as something we can do to speed up.
>
>>
>>
>> I completely understand the disruption caused by the acquisition. But, after 
>> kicking the tyres and our meeting a few weeks back, it’s been pretty quiet. 
>> The clock is ticking on corporate long-term strategies. Some of those 
>> corporates spent plenty of dosh on licensing OCP and hiring consultants to 
>> implement.
>>
>>
>> Red Hat need to lead from the front. Get IRC revived, throw us a bone, and 
>> have us put our money where our mouth is — we’ll get involved. We’re begging 
>> for it.
>>
>> Until then we’re running out of patience via clientele and will need to 
>> start a community effort perhaps by forking OKD3 and integrating upstream. I 
>> am not interested in doing that. We shouldn’t have to.
>
>
> In the spirit of full transparency, FCoS integrated into OKD is going to take 
> several months to get to the point where it meets the quality bar I'd expect 
> for OKD4.  If that timeframe doesn't work for folks, we can definitely 
> consider other options like having RHCoS availability behind a terms 
> agreement, a franken-OKD without host integration (which might take just as 
> long to get and not really be a step forward for folks vs 3), or other, more 
> dramatic options.  Have folks given FCoS a try this week?  
> https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/.  That's 
> a great place to get started
>
> As always PRs and fixes to 3.x will continue to be welcomed and that effort 
> continues unabated.



-- 
Michael Gugino
Senior Software Engineer - OpenShift
mgug...@redhat.com
540-846-0304



Re: Follow up on OKD 4

2019-07-24 Thread Michael Gugino
I think what I'm looking for is more 'modular' rather than DIY.  CVO
would need to be adapted to separate container payload from host
software (or use something else), and maintaining cross-distro
machine-configs might prove tedious, but for the most part, rest of
everything from the k8s bins up, should be more or less the same.

MCD is good software, but there's not really much going on there that
can't be ported to any other OS.  MCD downloads a payload, extracts
files, rebases ostree, reboots host.  You can do all of those steps
except 'rebases ostree' on any distro.  And instead of 'rebases
ostree', we could pull down a container that acts as a local repo that
contains all the bits you need to upgrade your host across releases.
Users could do things to break this workflow, but it should otherwise
work if they aren't fiddling with the hosts.  The MCD payload happens
to embed an ignition payload, but it doesn't actually run ignition,
just utilizes the file format.
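
A pseudo-shell sketch of those steps (illustrative only, not the MCD's
actual code; the image name and refspec are placeholders):

podman pull quay.io/example/os-content:latest   # payload container
# ...extract files and the new ostree commit from the payload, then:
rpm-ostree rebase <new-refspec>
systemctl reboot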

From my viewpoint, there's nothing particularly special about ignition
in our current process either.  I had the entire OCP 4 stack running
on RHEL using the same exact ignition payload, a minimal amount of
ansible (which could have been replaced by cloud-init userdata), and a
small python library to parse the ignition files.  I was also building
repo containers for 3.10 and 3.11 for Fedora.  Not to say the
OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
4 came together quite nicely.
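
As a flavor of how thin that parsing layer can be, listing the files a
spec-2 Ignition config would write takes one jq expression (field paths
per the Ignition v2 schema):

jq -r '.storage.files[].path' bootstrap.ign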

I'm all for 'not managing machines' but I'm not sure it has to look
exactly like OCP.  Seems the OCP installer and CVO could be
adapted/replaced with something else, MCD adapted, pretty much
everything else works the same.

On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman  wrote:
>
>
>
>
> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:
>>
>> I tried FCoS prior to the release by using the assembler on github.
>> Too much secret sauce in how to actually construct an image.  I
>> thought atomic was much more polished, not really sure what the
>> value-add of ignition is in this usecase.  Just give me a way to build
>> simple image pipelines and I don't need ignition.  To that end, there
>> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
>> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
>> supporting the mcd-once-from to parse ignition on RHEL, we don't need
>> ignition to actually install okd.  To me, it seems FCoS was created
>> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
>> actually solves anyone's needs relative to atomic.  It feels like we
>> jumped the shark on this one.
>
>
> That’s feedback that’s probably something you should share in the fcos forums 
> as well.  I will say that I find the OCP + RHEL experience unsatisfying and 
> doesn't truly live up to what RHCOS+OCP can do (since it lacks the key 
> features like ignition and immutable hosts).  Are you saying you'd prefer to 
> have more of a "DIY kube bistro" than the "highly opinionated, totally 
> integrated OKD" proposal?  I think that's a good question the community 
> should get a chance to weigh in on (in my original email that was the 
> implicit question - do you want something that looks like OCP4, or something 
> that is completely different).
>
>>
>>
>> I'd like to see OKD be distro-independent.  Obviously Fedora should be
>> our primary target (I'd argue Fedora over FCoS), but I think it should
>> be true upstream software in the sense that apache2 http server is
>> upstream and not distro specific.  To this end, perhaps it makes sense
>> to consume k/k instead of openshift/origin for okd.  OKD should be
>> free to do wild and crazy things independently of the enterprise
>> product.  Perhaps there's a usecase for treating k/k vs
>> openshift/origin as a swappable base layer.
>
>
> That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be 
> happy to see people excited about reusing cvo / mcd and be able to mix and 
> match, but most of the things here would be a huge investment to build.  In 
> my original email I might call this the “I want to build my own distro" - if 
> that's what people want to build, I think we can do things to enable it.  But 
> it would probably not be "openshift" in the same way.
>
>>
>>
>> It would be nice to have a more native kubernetes place to develop our
>> components against so we can upstream them, or otherwise just build a
>> solid community around how we think kubernetes should be deployed and
>> consumed.  Similar to how Fedora has a package repository, we should
>> have a Kubernetes component repository (I realize operatorhub fulfills

Re: Follow up on OKD 4

2019-07-24 Thread Michael Gugino
I think FCoS could be a mutable detail.  To start with, support for
plain-old-fedora would be helpful to make the platform more portable,
particularly the MCO and machine-api.  If I had to state a goal, it
would be "Bring OKD to the largest possible range of linux distros to
become the defacto implementation of kubernetes."

Also, it would be helpful (as previously stated) to build communities
around some of our components that might not have a place in the
official kubernetes, but are valuable downstream components
nevertheless.

Anyway, I'm just throwing some ideas out there, I wouldn't consider my
statements as advocating strongly in any direction.  Surely FCoS is
the natural fit, but I think considering other distros merits
discussion.

On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman  wrote:
>
> > On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:
> >
> > I think what I'm looking for is more 'modular' rather than DIY.  CVO
> > would need to be adapted to separate container payload from host
> > software (or use something else), and maintaining cross-distro
> > machine-configs might prove tedious, but for the most part, rest of
> > everything from the k8s bins up, should be more or less the same.
> >
> > MCD is good software, but there's not really much going on there that
> > can't be ported to any other OS.  MCD downloads a payload, extracts
> > files, rebases ostree, reboots host.  You can do all of those steps
> > except 'rebases ostree' on any distro.  And instead of 'rebases
> > ostree', we could pull down a container that acts as a local repo that
> > contains all the bits you need to upgrade your host across releases.
> > Users could do things to break this workflow, but it should otherwise
> > work if they aren't fiddling with the hosts.  The MCD payload happens
> > to embed an ignition payload, but it doesn't actually run ignition,
> > just utilizes the file format.
> >
> > From my viewpoint, there's nothing particularly special about ignition
> > in our current process either.  I had the entire OCP 4 stack running
> > on RHEL using the same exact ignition payload, a minimal amount of
> > ansible (which could have been replaced by cloud-init userdata), and a
> > small python library to parse the ignition files.  I was also building
> > repo containers for 3.10 and 3.11 for Fedora.  Not to say the
> > OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
> > 4 came together quite nicely.
> >
> > I'm all for 'not managing machines' but I'm not sure it has to look
> > exactly like OCP.  Seems the OCP installer and CVO could be
> > adapted/replaced with something else, MCD adapted, pretty much
> > everything else works the same.
>
> Sure - why?  As in, what do you want to do?  What distro do you want
> to use instead of fcos?  What goals / outcomes do you want out of the
> ability to do whatever?  Ie the previous suggestion (the auto updating
> kube distro) has the concrete goal of “don’t worry about security /
> updates / nodes and still be able to run containers”, and fcos is a
> detail, even if it’s an important one.  How would you pitch the
> alternative?
>
>
> >
> >> On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman  
> >> wrote:
> >>
> >>
> >>
> >>
> >>> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  
> >>> wrote:
> >>>
> >>> I tried FCoS prior to the release by using the assembler on github.
> >>> Too much secret sauce in how to actually construct an image.  I
> >>> thought atomic was much more polished, not really sure what the
> >>> value-add of ignition is in this usecase.  Just give me a way to build
> >>> simple image pipelines and I don't need ignition.  To that end, there
> >>> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
> >>> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
> >>> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> >>> ignition to actually install okd.  To me, it seems FCoS was created
> >>> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> >>> actually solves anyone's needs relative to atomic.  It feels like we
> >>> jumped the shark on this one.
> >>
> >>
> >> That’s feedback that’s probably something you should share in the fcos 
> >> forums as well.  I will say that I find the OCP + RHEL experience 
> >> unsatisfying and doesn't truly live up to what RHCOS+OCP can do (since it 
> >> lacks the key features like ignition and immutable hosts).

Re: Follow up on OKD 4

2019-07-25 Thread Michael Gugino
I don't really view the 'bucket of parts' and 'complete solution' as
competing ideas.  It would be nice to build the 'complete solution'
from the 'bucket of parts' in a reproducible, customizable manner.
"How is this put together" should be easily followed, enough so that
someone can 'put it together' on their own infrastructure without
having to be an expert in designing and configuring the build system.

IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
the openshift-specific bits from source, I could point at any
repository I wanted, I could point to any image registry I wanted, I
could use any distro I wanted.  I could replace the parts I wanted to;
or I could just run it as-is from the published sources and not worry
about replacing things.  I even built Fedora Atomic host rpm-trees
with all the kubelet bits pre-installed, similar to what we're doing
with CoreOS now in 3.x.  It was a great experience, building my own
system images and running updates was trivial.

I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
of flexibility and easy to use tooling.

On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman  wrote:
>
> > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic 
> >  wrote:
> >
> > Hi.
> >
> >> On 25.07.2019 at 06:52, Michael Gugino wrote:
> >> I think FCoS could be a mutable detail.  To start with, support for
> >> plain-old-fedora would be helpful to make the platform more portable,
> >> particularly the MCO and machine-api.  If I had to state a goal, it
> >> would be "Bring OKD to the largest possible range of linux distros to
> >> become the defacto implementation of kubernetes."
> >
> > I agree here with Michael. FCoS, or CoS in general, looks like a good idea
> > technically, but it limits the flexibility of possible solutions.
> >
> > For example, when you need to change some system settings, you will need to
> > create a new OS image; this is not very usable in some environments.
>
> I think something we haven’t emphasized enough is that openshift 4 is
> very heavily structured around changing the cost and mental model
> around this.  The goal was and is to make these sorts of things
> unnecessary.  Changing machine settings by building golden images is
> already the “wrong” (expensive and error prone) pattern - instead, it
> should be easy to reconfigure machines or to launch new containers to
> run software on those machines.  There may be two factors here at
> work:
>
> 1. Openshift 4 isn’t flexible in the ways people want (Ie you want to
> add an rpm to the OS to get a kernel module, or you want to ship a
> complex set of config and managing things with mcd looks too hard)
> 2. You want to build and maintain these things yourself, so the “just
> works” mindset doesn’t appeal.
>
> The initial doc alluded to the DIY / bucket of parts use case (I can
> assemble this on my own but slightly differently) - maybe we can go
> further now and describe the goal / use case as:
>
> I want to be able to compose my own Kubernetes distribution, and I’m
> willing to give up continuous automatic updates to gain flexibility in
> picking my own software
>
> Does that sound like it captures your request?
>
> Note that a key reason why the OS is integrated is so that we can keep
> machines up to date and do rolling control plane upgrades with no
> risk.  If you take the OS out of the equation the risk goes up
> substantially, but if you’re willing to give that up then yes, you
> could build an OKD that doesn’t tie to the OS.  This trade off is an
> important one for folks to discuss.  I’d been assuming that people
> *want* the automatic and safe upgrades, but maybe that’s a bad
> assumption.
>
> What would you be willing to give up?
>
> >
> > It would be nice to have the good old option to use the ansible installer to
> > install OKD/OpenShift on other Linux distributions where ansible is able to
> > run.
> >
> >> Also, it would be helpful (as previously stated) to build communities
> >> around some of our components that might not have a place in the
> >> official kubernetes, but are valuable downstream components
> >> nevertheless.
> >>
> >> Anyway, I'm just throwing some ideas out there, I wouldn't consider my
> >> statements as advocating strongly in any direction.  Surely FCoS is
> >> the natural fit, but I think considering other distros merits
> >> discussion.
> >
> > +1
> >
> > Regards
> > Aleks
> >
> >
> >>> On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman  
> >>> wrote:
> >>>
> >>>> On