OpenShift Origin incorporating CoreOS technologies ?

2018-05-16 Thread Daniel Comnea
Hi,

Following RH Summit and the news about CoreOS Tectonic features being
integrated into OCP, can we get any insights as to whether the Tectonic
features will make it into Origin too?


Thank you,
Dani
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Origin incorporating CoreOS technologies ?

2018-05-16 Thread Daniel Comnea
Thank you for the quick response! It wasn't clear from the podcast, but I'm
glad you took the time to clarify it.

It would be good if a design proposal were created and stored in Origin so we
can better understand how things are planned to work, in case we need to
help out by contributing to fill in the dots.



Dani

On Wed, May 16, 2018 at 1:53 PM, Clayton Coleman 
wrote:

> Many if not most of the features will be in Origin.  Probably the one
> exception is over the air cluster updates - the pieces of that will be
> open, but the mechanism for Origin updates may be more similar to the
> existing setup today than to what tectonic has.  We’re still sorting
> out how that will work.
>
> > On May 16, 2018, at 6:28 AM, Daniel Comnea 
> wrote:
> >
> > Hi,
> >
> > Following RH Summit and the news about CoreOS Tectonic features being
> integrated into OCP, can we get any insights as to whether the Tectonics
> features will make it into Origin too?
> >
> >
> > Thank you,
> > Dani
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Origin EOL policy and what triggers a new minor release

2018-05-16 Thread Daniel Comnea
Hi,

I'm sending out this email to understand what the Origin EOL policy is, and
also to start a conversation around what is considered a critical bug that
triggers a new Origin minor release.


The rationale started from [1]: I migrated all my internal prod
environments from 1.5.1 to 3.7.0, but due to bug [2], which was fixed in [3],
I had to move to 3.7.2 (I picked the latest minor due to CVEs too).

Now, after going through all that long and painful upgrade process (due to
an extensive maintenance window and a few disruptions at the apps level), I
then got hit by [1], and as it stands today I don't have many options on the
table except forking and trying to back-port the patch myself.


It would be naive to think that Origin will get all or the majority of the
OCP bug fixes; however, I do expect a gate or a transparent, known (public)
process which defines what a critical bug is (similar to what you might have
for OCP) such that a new Origin release can be triggered.


Cheers,
Dani

[1] https://github.com/openshift/origin/issues/19138
[2] https://github.com/openshift/origin/pull/17620
[3] https://github.com/openshift/origin/releases/tag/v3.7.1
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: CentOS PaaS SIG meeting (2018-05-16)

2018-05-16 Thread Daniel Comnea
Ricardo,

The email's subject is wrong ;) the meeting for today hasn't started yet. I
suspect the email should have been dated for May 2nd, but that one was
already sent out, so maybe this one went out too early ;)


Dani

On Wed, May 16, 2018 at 4:16 PM, Ricardo Martinelli de Oliveira <
rmart...@redhat.com> wrote:

> Hello,
> It's time for our weekly PaaS SIG sync-up meeting
>
> Time: 1700 UTC - Wednesdays (date -d "1700 UTC")
> Date: Today Wednesday, 02 May 2018
> Where: IRC- Freenode - #centos-devel
>
> Agenda:
> - OpenShift Current Status
> -- rpms
> -- automation
> - Open Floor
>
> Minutes from last meeting:
> https://www.centos.org/minutes/2018/May/centos-devel.2018-05-02-17.01.log.html
>
> --
> Ricardo Martinelli de Oliveira
> Senior Software Engineer
> T: +55 11 3524-6125 | M: +55 11 9 7069-6531
> Av. Brigadeiro Faria Lima 3900, 8° Andar. São Paulo, Brasil
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
>  Red Hat is recognized among the best companies to work for in Brazil
> by *Great Place to Work*.
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: CentOS PaaS SIG meeting (2018-05-16)

2018-05-17 Thread Daniel Comnea
No biggie Ricardo, thank you !

On Wed, May 16, 2018 at 7:19 PM, Ricardo Martinelli de Oliveira <
rmart...@redhat.com> wrote:

> You are right, Daniel. My apologies for that.
>
> The subject is correct, but the body is not. I will pay more attention to
> the next meetings.
>
> On Wed, May 16, 2018 at 12:38 PM, Daniel Comnea 
> wrote:
>
>> Ricardo,
>>
>> The email's subject is wrong ;) the meeting for today hasn't started yet.
>> I suspect the email should have been dated for May 2nd, but that one was
>> already sent out, so maybe this one went out too early ;)
>>
>>
>> Dani
>>
>> On Wed, May 16, 2018 at 4:16 PM, Ricardo Martinelli de Oliveira <
>> rmart...@redhat.com> wrote:
>>
>>> Hello,
>>> It's time for our weekly PaaS SIG sync-up meeting
>>>
>>> Time: 1700 UTC - Wednesdays (date -d "1700 UTC")
>>> Date: Today Wednesday, 02 May 2018
>>> Where: IRC- Freenode - #centos-devel
>>>
>>> Agenda:
>>> - OpenShift Current Status
>>> -- rpms
>>> -- automation
>>> - Open Floor
>>>
>>> Minutes from last meeting:
>>> https://www.centos.org/minutes/2018/May/centos-devel.2018-05-02-17.01.log.html
>>>
>>> --
>>> Ricardo Martinelli de Oliveira
>>> Senior Software Engineer
>>> T: +55 11 3524-6125 | M: +55 11 9 7069-6531
>>> Av. Brigadeiro Faria Lima 3900, 8° Andar. São Paulo, Brasil
>>> <https://maps.google.com/?q=Av.+Brigadeiro+Faria+Lima+3900,+8%C2%B0+Andar.+S%C3%A3o+Paulo,+Brasil&entry=gmail&source=g>
>>> .
>>> <https://red.ht/sig>
>>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>>>
>>>  Red Hat is recognized among the best companies to work for in
>>> Brazil by *Great Place to Work*.
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>
>
> --
> Ricardo Martinelli de Oliveira
> Senior Software Engineer
> T: +55 11 3524-6125 | M: +55 11 9 7069-6531
> Av. Brigadeiro Faria Lima 3900, 8° Andar. São Paulo, Brasil
> <https://maps.google.com/?q=Av.+Brigadeiro+Faria+Lima+3900,+8%C2%B0+Andar.+S%C3%A3o+Paulo,+Brasil&entry=gmail&source=g>
> .
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
>  Red Hat is recognized among the best companies to work for in Brazil
> by *Great Place to Work*.
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Origin EOL policy and what triggers a new minor release

2018-05-17 Thread Daniel Comnea
*PSB*

On Wed, May 16, 2018 at 5:11 PM, Clayton Coleman 
wrote:

> Currently the process is:
>
> 1. critical security vulnerabilities are back ported
> 2. anyone is free to backport a change that is justifiable if you can get
> review and meet the bar for review
>
> 3. anyone who helps backport a change is expected to help keep CI jobs
> working if you see something is broken - right now only a small pool of
> people are doing that so I've been asking folks to chip in and keep the
> jobs up to date if you're going to submit PRs
> 4. all changes should be in master first (we won't backport an issue that
> hasn't merged to upstream kube or to origin master)
>
*[DC]: Can you please be more specific around "merged to upstream kube"?
The reason I'm asking is that K8s is always ahead of Origin by one cycle, so
are you saying the upstream kube branch should match the Origin master code
base cycle? I.e., if Origin master is currently being worked on against the
K8s 1.10 code base, then the upstream kube branch to "watch" is the 1.10
branch?*

>
> I cut releases on critical issues and otherwise the tag is just rolling
> (if you merge to release-3.7 the change will show up).
>
>
> On Wed, May 16, 2018 at 11:07 AM, Daniel Comnea 
> wrote:
>
>> Hi,
>>
>> I'm sending out this email to understand what is the Origin EOL policy
>> and also understand / start a conversation around what is considered
>> critical bug which does trigger a new Origin minor release.
>>
>>
>> The rational started from [1] where after i migrated all my internal prod
>> environments from 1.5.1 to 3.7.0 but due to bug [2] was fixed in [3] i had
>> to move to 3.7.2 (picked latest minor due to CVEs too).
>>
>> Now after going to all that long/ painful (due to extensive maintenance
>> window and few disruptions at apps level) upgrade process, i then got hit
>> by [1] and as it stands today don't have many options on the table except
>> forking and trying to back port the patch myself.
>>
>>
>> It will be naive to think that Origin will get all/ majority of the OCP
>> bug fixes however i do expect to have a gate or a transparent/known
>> (public) process which defines what critical bug is (same in how you might
>> have for OCP) such that a new Origin can be triggered.
>>
>>
>> Cheers,
>> Dani
>>
>> [1] https://github.com/openshift/origin/issues/19138
>> [2] https://github.com/openshift/origin/pull/17620
>> [3] https://github.com/openshift/origin/releases/tag/v3.7.1
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Web Console - 3.9 - Pod / CrashLoopBackOff

2018-05-23 Thread Daniel Comnea
On Wed, May 23, 2018 at 5:20 PM, Vyacheslav Semushin 
wrote:

> 2018-05-17 17:18 GMT+02:00 Charles Moulliard :
>
>> The trick / solution described there doesn't work. I tried also using
>> the ansible playbook of Openshift to remove the project and recreate it and
>> the pod is always recreated with Openshift annotation = anyuid
>>
>
> The reason why the "anyuid" SCC is being applied is that it was granted
> to all authenticated users. And because anyuid has priority 10, it gets
> applied instead of "restricted" SCC.
>
[DC]: how do you know about anyuid and priority 10? In other words, how can
I find out what priority each SCC has?
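
For reference, a sketch of how to inspect this, assuming a cluster-admin
context; the priority field is part of the SCC object, and the output below
is abbreviated and from memory, so verify it on your version:

$ oc get scc
NAME         PRIV    CAPS   SELINUX     RUNASUSER        ...  PRIORITY
anyuid       false   []     MustRunAs   RunAsAny         ...  10
restricted   false   []     MustRunAs   MustRunAsRange   ...  <none>

$ oc get scc anyuid -o jsonpath='{.priority}'
10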

>
>
> --
> Slava Semushin | OpenShift
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Custom SCC assigned to wrong pods

2018-05-23 Thread Daniel Comnea
Hi,

I'm running Origin 3.7.0 and I've created a custom SCC [1] which is being
referenced by different Deployment objects using serviceAccountName:
foo-scc-restricted.

Now the odd thing which I cannot explain is why the glusterFS pods [2],
which don't reference the newly created serviceAccountName [3], do have the
new custom SCC applied [4]... is that normal, or is it a bug?



Cheers,
Dani

[1] https://gist.github.com/DanyC97/56070e3f1523e31c1ad96980df6d7fe5
[2] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e0918
[3] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e0918#file-glusterfs-deployment-yml-L65
[4] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e0918#file-glusterfs-deployment-yml-L11
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Custom SCC assigned to wrong pods

2018-05-23 Thread Daniel Comnea
I see the rationale; thank you for the quick response and the knowledge.

On Wed, May 23, 2018 at 10:59 PM, Jordan Liggitt 
wrote:

> By making your SCC available to all authenticated users, it gets added to
> the set considered for every pod run by every service account:
>
> users:
> - system:serviceaccount:foo:foo-sa
> groups:
> - system:authenticated
>
>
> If you want to limit it to just your foo-sa service account, you should
> remove the system:authenticated group from the SCC
>
>
>
> On Wed, May 23, 2018 at 5:54 PM, Daniel Comnea 
> wrote:
>
>> Hi,
>>
>> I'm running Origin 3.7.0 and i've created a custom SCC [1] which is
>> being referenced by different Deployments objects using
>> serviceAccountName: foo-scc-restricted.
>>
>> Now the odd thing which i cannot explain is why glusterFS pods [2] which
>> doesn't reference the new created serviceAccountName [3] do have the new
>> custom scc being used [4]...is that normal or is a bug?
>>
>>
>>
>> Cheers,
>> Dani
>>
>> [1] https://gist.github.com/DanyC97/56070e3f1523e31c1ad96980df6d7fe5
>> [2] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e0918
>> [3] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e
>> 0918#file-glusterfs-deployment-yml-L65
>> [4] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e
>> 0918#file-glusterfs-deployment-yml-L11
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Web Console - 3.9 - Pod / CrashLoopBackOff

2018-05-24 Thread Daniel Comnea
Fair point Slava, hats off.

Thanks for the info.

On Thu, May 24, 2018 at 11:16 AM, Vyacheslav Semushin 
wrote:

> 2018-05-24 10:10 GMT+02:00 Charles Moulliard :
>
>> +1 to document somewhere how SCC is working, priority defined,  and
>> what should be done to resolve such issues
>>
>
> Perhaps this info is hard to find but it's there:
> https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#scc-prioritization
>
>
> --
> Slava Semushin | OpenShift
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Any alternative to "oc adm policy add-scc-to-user" ?

2018-05-24 Thread Daniel Comnea
Hi,

Is there any alternative to the "oc adm policy add-scc-to-user" command, in
the same way there is one for "oc create serviceaccount foo", which can be
achieved by

apiVersion: v1
kind: ServiceAccount
metadata:
  name: foo-sa
  namespace: foo


I'd like to be able to put all the info in a file rather than run oc
commands sequentially.


Thanks
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Any alternative to "oc adm policy add-scc-to-user" ?

2018-05-24 Thread Daniel Comnea
Well, yeah, that is an option, but then it is more or less like "oc edit
scc", which is not what I want, since I'd need to know all the users and
that is tricky depending on when I run it (greenfield deployment, after an
upgrade, etc.).

On Thu, May 24, 2018 at 10:24 PM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> Hey, you could use oc's --loglevel=N to see the exact HTTP
> request/response flow with the api and adapt it to your need.
> I believe a level of 8 should be enough.
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
> 2018-05-24 18:16 GMT-03:00 Daniel Comnea :
>
>> Hi,
>>
>> Is any alternative to "oc adm policy add-scc-to-user" command in the
>> same way there is one for "oc create serviceaccount foo" which can
>> be achieved by
>>
>> apiVersion: v1
>>
>> kind: ServiceAccount
>>
>> metadata:
>>
>>   name: foo-sa
>>
>>   namespace: foo
>>
>>
>> I'd like to be able to put all the info in a file rather than run oc cmd
>> sequentially.
>>
>>
>> Thanks
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Any alternative to "oc adm policy add-scc-to-user" ?

2018-05-24 Thread Daniel Comnea
Not to mention that with the spec file I should at least be able to use
either the kubectl or the oc CLI, while with "oc adm" you can do it only
with the oc CLI.

On Thu, May 24, 2018 at 10:32 PM, Daniel Comnea 
wrote:

> Well yeah that is an option but then that is more or less like "oc edit
> scc" which is not what i want since i need to know all the users and that
> is tricky depending on the time when i run it (green field deployment,
> after upgrade etc)
>
> On Thu, May 24, 2018 at 10:24 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hey, you could use oc's --loglevel=N to see the exact HTTP
>> request/response flow with the api and adapt it to your need.
>> I believe a level of 8 should be enough.
>>
>> --
>> Mateus Caruccio / Master of Puppets
>> GetupCloud.com
>> We make the infrastructure invisible
>> Gartner Cool Vendor 2017
>>
>> 2018-05-24 18:16 GMT-03:00 Daniel Comnea :
>>
>>> Hi,
>>>
>>> Is any alternative to "oc adm policy add-scc-to-user" command in the
>>> same way there is one for "oc create serviceaccount foo" which can
>>> be achieved by
>>>
>>> apiVersion: v1
>>>
>>> kind: ServiceAccount
>>>
>>> metadata:
>>>
>>>   name: foo-sa
>>>
>>>   namespace: foo
>>>
>>>
>>> I'd like to be able to put all the info in a file rather than run oc cmd
>>> sequentially.
>>>
>>>
>>> Thanks
>>>
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Any alternative to "oc adm policy add-scc-to-user" ?

2018-05-25 Thread Daniel Comnea
Slava,

spot on !!!

I don't know why I was under the impression that in 3.7 RBAC had been fully
implemented and everything was on parity; guess I was wrong.
Thank you for sharing the PR, it has very useful info there... how on earth
did I miss it ;-(

Best,
Dani

On Fri, May 25, 2018 at 8:31 AM, Vyacheslav Semushin 
wrote:

> 2018-05-24 23:16 GMT+02:00 Daniel Comnea :
>
>> Hi,
>>
>> Is any alternative to "oc adm policy add-scc-to-user" command in the
>> same way there is one for "oc create serviceaccount foo" which can
>> be achieved by
>>
>> apiVersion: v1
>>
>> kind: ServiceAccount
>>
>> metadata:
>>
>>   name: foo-sa
>>
>>   namespace: foo
>>
>>
>> I'd like to be able to put all the info in a file rather than run oc cmd
>> sequentially.
>>
>
> No, there was no alternative except editing SCC via oc edit/oc patch/etc.
>
> Since 3.10 it became possible to use ClusterRole and ClusterRoleBindings
> for such things. See the related PR for details:
> https://github.com/openshift/origin/pull/19349
> It also has a link to a Trello card that contains a few
> pointers.
>
>
> --
> Slava Semushin | OpenShift
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Any alternative to "oc adm policy add-scc-to-user" ?

2018-05-25 Thread Daniel Comnea
Right you are again; my bad, as I've mixed things up.

SCC is the equivalent of the K8s PSP, and not everything from SCC has been
incorporated (yet) into PSP.
Now it's all clear in my head; thanks for taking the time to respond.




On Fri, May 25, 2018 at 9:31 AM, Vyacheslav Semushin 
wrote:

> 2018-05-25 10:23 GMT+02:00 Daniel Comnea :
>
>> Slava,
>>
>> spot on !!!
>>
>> I don't know why i was under the impression that in 3.7 RBAC been fully
>> implemented and everything on parity, guess i was wrong.
>>
>
> One doesn't exclude another: RBAC was fully implemented and replaced our
> previous mechanisms. But based on my understanding, RBAC is mostly about
> authentication/authorization so it has little relation to SCC. Also because
> SCC is our own API we didn't implement such integration before.
>
>
> --
> Slava Semushin | OpenShift
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Understanding which AWS code will be supported moving forward

2018-06-05 Thread Daniel Comnea
Hi,

Is anyone able to clarify what the path forward is regarding the AWS
deployment code?

Looking in the openshift-ansible repo I do see [1]; however, looking in
openshift-ansible-contrib I see a different code base for 3.9 [2] (which is
also different compared with [3], where CloudFormation was used).



Cheers,
Dani

[1] https://github.com/openshift/openshift-ansible/tree/master/playbooks/aws

[2] https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/3.9/playbooks
& https://access.redhat.com/documentation/en-us/reference_architectures/2018/html/deploying_and_managing_openshift_3.9_on_amazon_web_services/red_hat_openshift_container_platform_prerequisites

[3] https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/aws-ansible
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Understanding which AWS code will be supported moving forward

2018-06-06 Thread Daniel Comnea
I had a discussion with Ryan on [1] and he kindly answered most of my
questions; however, I still have one final set of questions:


   - do you have a list of todo items which are missing from the
   openshift-ansible AWS code and are required to make it GA?
   - which AWS code are you planning to support/improve moving forward: the
   stop-gap solution done by Ryan in openshift-ansible-contrib, or the
   existing/current code in the openshift-ansible repo (which is not GA yet)?


Hopefully the people cc'ed will be able to chime in and provide some
clarity.


Cheers,
Dani

[1] https://github.com/openshift/openshift-ansible-contrib/issues/1044

On Tue, Jun 5, 2018 at 11:25 AM, Daniel Comnea 
wrote:

> Hi,
>
> Anyone able to clarify what is the path forward regarding the AWS code
> deployment?
>
> Looking in openshift-ansible repo i do see [1] however looking in
> openshift-ansible-contrib i do see a different code base for 3.9 (which
> is also different compared with [3] where cfn was used).
>
>
>
> Cheers,
> Dani
>
> [1] https://github.com/openshift/openshift-ansible/tree/master/playbooks/
> aws
>
> [2] https://github.com/openshift/openshift-ansible-contrib/
> tree/master/reference-architecture/3.9/playbooks & https://access.redhat
> .com/documentation/en-us/reference_architectures/2018/html/
> deploying_and_managing_openshift_3.9_on_amazon_web_services/red_hat_
> openshift_container_platform_prerequisites
>
> [3] https://github.com/openshift/openshift-ansible-contrib/
> tree/master/reference-architecture/aws-ansible
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Understanding which AWS code will be supported moving forward

2018-06-06 Thread Daniel Comnea
Thank you for the insight Michael, it will help me make a decision on which
horse to ride.

Ryan - thank you too.

On Wed, Jun 6, 2018 at 1:59 PM, Michael Gugino  wrote:

> Most likely those aws roles and plays will go away in favor of the
> next-gen installer sometime later in the year.  I don't recommend
> using AWS plays and nobody seems to be maintaining them at this point.
>
> On Wed, Jun 6, 2018 at 5:03 AM, Daniel Comnea 
> wrote:
> > Had a discussion with Ryan on [1] and he kindly answered most of my
> > questions however i still have one final set of questions:
> >
> > do you have a list of todo items which are missing from openshift-ansible
> > aws code which is required to make it GA?
> > which aws code are you planning to support/ improve moving forward: the
> stop
> > gap solution done by Ryan in openshift-ansible-contrib or the existing/
> > current code in openshift-ansible repo (which is not GA yet)
> >
> >
> > Hopefully the people cc'ed will be able to chime in and provide some
> > clarity.
> >
> >
> > Cheers,
> > Dani
> >
> > [1] https://github.com/openshift/openshift-ansible-contrib/issues/1044
> >
> >
> > On Tue, Jun 5, 2018 at 11:25 AM, Daniel Comnea 
> > wrote:
> >>
> >> Hi,
> >>
> >> Anyone able to clarify what is the path forward regarding the AWS code
> >> deployment?
> >>
> >> Looking in openshift-ansible repo i do see [1] however looking in
> >> openshift-ansible-contrib i do see a different code base for 3.9 (which
> is
> >> also different compared with [3] where cfn was used).
> >>
> >>
> >>
> >> Cheers,
> >> Dani
> >>
> >> [1]
> >> https://github.com/openshift/openshift-ansible/tree/master/
> playbooks/aws
> >>
> >> [2]
> >> https://github.com/openshift/openshift-ansible-contrib/
> tree/master/reference-architecture/3.9/playbooks
> >> &
> >> https://access.redhat.com/documentation/en-us/reference_
> architectures/2018/html/deploying_and_managing_
> openshift_3.9_on_amazon_web_services/red_hat_openshift_container_platform_
> prerequisites
> >>
> >> [3]
> >> https://github.com/openshift/openshift-ansible-contrib/
> tree/master/reference-architecture/aws-ansible
> >
> >
>
>
>
> --
> Michael Gugino
> Senior Software Engineer - OpenShift
> mgug...@redhat.com
> 540-846-0304
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Origin 3.10 release

2018-06-14 Thread Daniel Comnea
Lala,

In case you want to give it a go, you can try [1], which I kicked off a few
days ago to get ourselves in a position to be ready to release the RPMs as
part of the PaaS SIG once Clayton & co cut a release.



HTH,
Dani

[1] https://cbs.centos.org/koji/taskinfo?taskID=449606

On Tue, Jun 12, 2018 at 3:58 PM, Clayton Coleman 
wrote:

> You should be using the current rolling tag.  We're not yet ready to cut
> an rc candidate.
>
> Please see my previous email to the list about accessing the latest RPMs
> or zips for the project.
>
> On Tue, Jun 12, 2018 at 8:10 AM, Lalatendu Mohanty 
> wrote:
>
>> Hi,
>>
>> We are working on code changes required for running  (cluster up)
>> Origin3.10 in Minishift. So wondering when can we expect v3.10 alpha (or
>> any) release?
>>
>> Thanks,
>> Lala
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Custom SCC assigned to wrong pods

2018-06-18 Thread Daniel Comnea
Hi Jordan,

Reviving the thread on the custom scc with another question if you don't
mind:

After I removed the

groups:
- system:authenticated

from my custom SCC, I went ahead and did the following:

1) Created Foo project
2) Created my custom scc (which i shared in my previous email)
3) Deployed the app pods
4) Upgraded Openshift to 3.6.1 – pods started to crash due to having the
default restricted scc instead of the custom scc previously assigned.

The docs say very clearly that only the default SCCs will be reset to their
initial state, so I was expecting the pods to pick up the custom SCC even if
they get bounced during the upgrade.

Any thoughts ?

Thanks !
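
P.S. For anyone debugging the same thing: the SCC that actually got applied
to a pod is recorded in its annotations (assuming the openshift.io/scc
annotation, which is worth double-checking on your version):

$ oc get pod <pod-name> -o jsonpath='{.metadata.annotations.openshift\.io/scc}'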

On Wed, May 23, 2018 at 11:18 PM, Daniel Comnea 
wrote:

> I see the rational, thank you for quick response and knowledge.
>
> On Wed, May 23, 2018 at 10:59 PM, Jordan Liggitt 
> wrote:
>
>> By making your SCC available to all authenticated users, it gets added to
>> the set considered for every pod run by every service account:
>>
>> users:
>> - system:serviceaccount:foo:foo-sa
>> groups:
>> - system:authenticated
>>
>>
>> If you want to limit it to just your foo-sa service account, you should
>> remove the system:authenticated group from the SCC
>>
>>
>>
>> On Wed, May 23, 2018 at 5:54 PM, Daniel Comnea 
>> wrote:
>>
>>> Hi,
>>>
>>> I'm running Origin 3.7.0 and i've created a custom SCC [1] which is
>>> being referenced by different Deployments objects using
>>> serviceAccountName: foo-scc-restricted.
>>>
>>> Now the odd thing which i cannot explain is why glusterFS pods [2]
>>> which doesn't reference the new created serviceAccountName [3] do have
>>> the new custom scc being used [4]...is that normal or is a bug?
>>>
>>>
>>>
>>> Cheers,
>>> Dani
>>>
>>> [1] https://gist.github.com/DanyC97/56070e3f1523e31c1ad96980df6d7fe5
>>> [2] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e0918
>>> [3] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e
>>> 0918#file-glusterfs-deployment-yml-L65
>>> [4] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e
>>> 0918#file-glusterfs-deployment-yml-L11
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Custom SCC assigned to wrong pods

2018-06-19 Thread Daniel Comnea
On Mon, Jun 18, 2018 at 11:19 PM, Jordan Liggitt 
wrote:

> Redeploying the application creates new pods.
>
> Since you removed the part of your custom scc that allowed it to apply to
> your pods, those new pods were once again subject to the restricted policy.
>
[DC]: that was not removed; it was added in step 2) and never removed.
However, during step 4) (the OpenShift upgrade) something happened which
made the new pods subject to the default restricted policy.

>
> On Jun 18, 2018, at 6:12 PM, Daniel Comnea  wrote:
>
> Hi Jordan,
>
> Reviving the thread on the custom scc with another question if you don't
> mind:
>
> After i removed the
>
> groups:
> - system:authenticated
>
> from my custom scc i went ahead and done the following:
>
> 1) Created Foo project
> 2) Created my custom scc (which i shared in my previous email)
> 3) Deployed the app pods
> 4) Upgraded Openshift to 3.6.1 – pods started to crash due to having the
> default restricted scc instead of the custom scc previously assigned.
>
> The docs says very clear that only the default scc will be reset to
> initial state and so i was expecting the POD to pick up the custom scc
> even if they get bounced during upgrade.
>
> Any thoughts ?
>
> Thanks !
>
> On Wed, May 23, 2018 at 11:18 PM, Daniel Comnea 
> wrote:
>
>> I see the rational, thank you for quick response and knowledge.
>>
>> On Wed, May 23, 2018 at 10:59 PM, Jordan Liggitt 
>> wrote:
>>
>>> By making your SCC available to all authenticated users, it gets added
>>> to the set considered for every pod run by every service account:
>>>
>>> users:
>>> - system:serviceaccount:foo:foo-sa
>>> groups:
>>> - system:authenticated
>>>
>>>
>>> If you want to limit it to just your foo-sa service account, you should
>>> remove the system:authenticated group from the SCC
>>>
>>>
>>>
>>> On Wed, May 23, 2018 at 5:54 PM, Daniel Comnea 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm running Origin 3.7.0 and i've created a custom SCC [1] which is
>>>> being referenced by different Deployments objects using
>>>> serviceAccountName: foo-scc-restricted.
>>>>
>>>> Now the odd thing which i cannot explain is why glusterFS pods [2]
>>>> which doesn't reference the new created serviceAccountName [3] do have
>>>> the new custom scc being used [4]...is that normal or is a bug?
>>>>
>>>>
>>>>
>>>> Cheers,
>>>> Dani
>>>>
>>>> [1] https://gist.github.com/DanyC97/56070e3f1523e31c1ad96980df6d7fe5
>>>> [2] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e0918
>>>> [3] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e
>>>> 0918#file-glusterfs-deployment-yml-L65
>>>> [4] https://gist.github.com/DanyC97/6b7a15ed8de87951cee6d038646e
>>>> 0918#file-glusterfs-deployment-yml-L11
>>>>
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>>>
>>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Custom SCC assigned to wrong pods

2018-06-19 Thread Daniel Comnea
Thanks Slava for the reply.

For everyone's benefit (in case others come across the same issue): it was
all down to my custom SCC *priority*, which was *null*. Once I set it to a
value higher than 0 (the default 'restricted' SCC has 0), everything works
as expected.
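
For reference, the one-liner equivalent, assuming a cluster-admin context
and the SCC name from my earlier gist:

$ oc patch scc foo-scc-restricted --type=merge -p '{"priority": 1}'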

Thanks guys !

On Tue, Jun 19, 2018 at 11:26 AM, Vyacheslav Semushin 
wrote:

> 2018-06-19 10:31 GMT+02:00 Daniel Comnea :
>
>>
>>
>> On Mon, Jun 18, 2018 at 11:19 PM, Jordan Liggitt 
>> wrote:
>>
>>> Redeploying the application creates new pods.
>>>
>>> Since you removed the part of your custom scc that allowed it to apply
>>> to your pods, those new pods were once again subject to the restricted
>>> policy.
>>>
>> [DC]: that was not removed, it was added in step 2) and never removed
>> however during step 4) (open shift upgrade) something happened which made
>> the new pods subject to default restricted policy.
>>
>
> If "pods started to crash", it means that they were re-created (or new
> ones were added).
>
> Could you show us a pod definition (oc get pod  -o yaml)?
>
>
> --
> Slava Semushin | OpenShift
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Custom SCC assigned to wrong pods

2018-06-21 Thread Daniel Comnea
Valid points, thank you. I'll reconsider my approach.

On Wed, Jun 20, 2018 at 9:17 AM, Vyacheslav Semushin 
wrote:

> 2018-06-20 8:22 GMT+02:00 Daniel Comnea :
>
>> Thanks Slava for reply.
>>
>> For everyone benefit (in case others come across the same issue) it was
>> all down to my custom scc *priority* which was *null*. Once i set it to
>> a value higher than 0 ( default 'restricted' scc has 0) then everything
>> works as expected.
>>
>
> If it's possible, it's better to modify a pod manifest to explicitly
> request everything that it expects to have. If your custom SCC was beaten
> by the "restricted" SCC, it means that for the system these SCCs were
> recognized as covering everything a pod needs to have. If a pod needs
> something that the "restricted" SCC doesn't provide, this pod should
> request for it and "restricted" SCC won't be selected at all because it
> doesn't fulfill the request.
>
> While an approach with priority field works, it could stop working when a
> user will be granted access to yet another SCC with a higher priority (for
> example, "anyuid").
>
> HTH
>
> --
> Slava Semushin | OpenShift
>
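
To make the suggestion above concrete, a rough sketch of explicitly
requesting something that only the custom SCC provides (illustrative,
untested values):

spec:
  serviceAccountName: foo-scc-restricted
  containers:
  - name: app
    image: example/app
    securityContext:
      runAsUser: 0  # a fixed uid outside the project's allocated range;
                    # "restricted" (MustRunAsRange) cannot satisfy it

With such an explicit request the SCC admission logic has to pick the custom
SCC (or reject the pod) instead of silently falling back to "restricted".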
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


[CentOS PaaS SIG]: new rpms available

2018-07-31 Thread Daniel Comnea
Hi,

We would like to announce that new openshift-ansible RPMs have been made
*available:*


   1. *openshift v3.6* => openshift-ansible-3.6.173.0.128-1.git.1.a18588a.el7
   which can be found at [1]
   2. *openshift v3.7* => openshift-ansible-3.7.61-1.git.1.3624530.el7
   which can be found at [2]
   3. *openshift v3.9* => openshift-ansible-3.9.40-1.git.1.b3380d7.el7
   which can be found at [3]


For openshift v3.10, we are working on getting origin v3.10.0-rc.0 out for
testing very soon, once we manage to fix an issue in our code automation.



Thank you,
PaaS SiG team

[1] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin36/
[2] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin37/
[3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin39/
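
To consume these on CentOS 7, something along these lines should work (the
release package name is from memory, so please double-check it):

$ yum install -y centos-release-openshift-origin39
$ yum install -y openshift-ansible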
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


[CentOS PaaS SIG]: Origin v3.10 rpms available for testing

2018-08-03 Thread Daniel Comnea
Hi,

We would like to announce that Origin v3.10 rpms are available for testing
at [1].

As such, we are calling for help from the community to start testing and
let us know if there are issues with the RPMs and their dependencies.

And in the spirit of transparency, see below the plan to promote the RPMs
to the mirror.centos.org repo:


   1. in the next 24/72 hours the packages should be promoted to the test
   repo [2] (currently it does not exist; we are waiting for it to be
   synced in the background)
   2. in a week's time, if we haven't heard of any issues/blockers, we are
   going to promote to the [3] repo (currently it doesn't exist; it will
   once the RPMs are promoted and signed)



Thank you,
PaaS SiG team

[1] https://cbs.centos.org/repos/paas7-openshift-origin310-testing/x86_64/os/Packages

[2] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin310/
[3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin310/
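
If you want to help with testing before the promotion happens, a .repo file
along these lines should work (a sketch; adjust as needed):

[paas7-openshift-origin310-testing]
name=CentOS PaaS SIG Origin 3.10 (testing)
baseurl=https://cbs.centos.org/repos/paas7-openshift-origin310-testing/x86_64/os/
gpgcheck=0
enabled=1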
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [CentOS PaaS SIG]: Origin v3.10 rpms available for testing

2018-08-07 Thread Daniel Comnea
Hi,

With a bit of delay, caused by the fact that key members of the CentOS
infra team were traveling, I'm happy to announce that the packages have been
promoted to [1].

In addition, I'm going to start the process of promoting the packages to
[2], due to the positive feedback received from the community around
testing.



Thank you,
PaaS SIG team


[1] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin310/
[2] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin310/



On Fri, Aug 3, 2018 at 9:46 AM, Daniel Comnea  wrote:

> Hi,
>
> We would like to announce that Origin v3.10 rpms are available for
> testing at [1].
>
> As such we are calling for help from community to start testing and let us
> know if there are issues with the rpms and its dependencies.
>
> And in the spirit of transparency see below the plan to promote the rpms
> to mirror.centos.org repo:
>
>
>1. in the next 24/72 hours the packages should be promoted to the test
>repo [2] (currently it does not exist, we are waiting to be sync'ed in
>the background)
>2. in a week time if we haven't heard any issues/ blockers we are
>going to promote to [3] repo (currently it doesn't exist, it will once
>the rpm will be promoted and signed)
>
>
>
> Thank you,
> PaaS SiG team
>
> [1] https://cbs.centos.org/repos/paas7-openshift-origin310-testing/x86_64/
> os/Packages
> [2] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin310/
> [3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin310/
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Why rpms are still required with ansible openshift installation 3.10 ?

2018-08-08 Thread Daniel Comnea
My understanding is that this shouldn't happen; as such, I'd suggest you
open an issue against the openshift-ansible repo.

On Wed, Aug 8, 2018 at 1:19 PM, Charles Moulliard 
wrote:

> Hi
>
> Is there a reason why these rpms "origin-node-3.10.0" and
> "origin-clients-3.10.0" are needed to install origin 3.10 on Centos7 even
> if the option "containerized=true" is defined AND where only docker images
> should be installed ?
>
> TASK [openshift_node : Install node, clients, and conntrack packages]
> 
> 
> ***
> FAILED - RETRYING: Install node, clients, and conntrack packages (3
> retries left).
> ...
> failed: [192.168.99.50] (item={u'name': u'origin-node-3.10.0'}) =>
> {"attempts": 3, "changed": false, "item": {"name": "origin-node-3.10.0"},
> "msg": "No package matching 'origin-node-3.10.0' found available, installed
> or updated", "rc": 126, "results": ["No package matching
> 'origin-node-3.10.0' found available, installed or updated"]}
> FAILED - RETRYING: Install node, clients, and conntrack packages (3
> retries left).
> ...
> failed: [192.168.99.50] (item={u'name': u'origin-clients-3.10.0'}) =>
> {"attempts": 3, "changed": false, "item": {"name":
> "origin-clients-3.10.0"}, "msg": "No package matching
> 'origin-clients-3.10.0' found available, installed or updated", "rc": 126,
> "results": ["No package matching 'origin-clients-3.10.0' found available,
> installed or updated"]}
> c
>
> Config of the inventory
> containerized=true
> openshift_deployment_type=origin
> openshift_enable_excluders=false
> openshift_release="3.10"
> openshift_image_tag=v3.10.0
> openshift_pkg_version=-3.10.0
>
> Regards
>
> Charles
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


[CentOS PaaS SIG]: Origin v3.10 rpms released to mirror.centos.org repo

2018-08-08 Thread Daniel Comnea
Hi,

Following my previous notification [1], I'm happy to announce the official
release of the Origin v3.10 RPMs, which can be found at [2].



Thank you,
PaaS SIG team

[1] http://lists.openshift.redhat.com/openshift-archives/dev/2018-August/msg1.html
[2] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin310/
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [CentOS PaaS SIG]: Origin v3.10 rpms released to mirror.centos.org repo

2018-08-09 Thread Daniel Comnea
Cheers Patrick, we're trying our best ;)

On Wed, Aug 8, 2018 at 8:46 PM, Patrick Tescher 
wrote:

> Wow, this is so much faster than any recent release. Good job team!
>
>
> On Aug 8, 2018, at 11:46 AM, Daniel Comnea  wrote:
>
> Hi,
>
> Following my previous [1] notification, happy to announce the official
> release of Origin v3.10 rpms which can be found at [2]
>
>
>
> Thank you,
> PaaS SIG team
>
> [1] http://lists.openshift.redhat.com/openshift-archives/dev/
> 2018-August/msg1.html
> [2] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin310/
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Removed "openshift start node" from origin master

2018-08-14 Thread Daniel Comnea
Hi Clayton,

Great progress!

So am I right to say that the end result of *"splitting OpenShift up to
make it be able to run on top of kubernetes"* will be OpenShift's distinct
features turning into something more like add-ons, rather than what we have
today?



On Tue, Aug 14, 2018 at 6:17 PM, Clayton Coleman 
wrote:

> As part of the continuation of splitting OpenShift up to make it be able
> to run on top of kubernetes, we just merged https://github.com/
> openshift/origin/pull/20344 which removes "openshift start node" and the
> "openshift start" commands.  This means that the openshift binary will no
> longer include the kubelet code and if you want an "all-in-one" openshift
> experience you'll want to use "oc cluster up".
>
> There should be no impact to end users - starting in 3.10 we already only
> used the kubelet (part of hyperkube binary) and use the
> "openshift-node-config" binary to translate the node-config.yaml into
> kubelet arguments.  oc cluster up has been running in this configuration
> for a while.
>
> integration tests have been changed to only start the master components
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Removed "openshift start node" from origin master

2018-08-15 Thread Daniel Comnea
Sounds good, thank you for the feedback.

On Wed, Aug 15, 2018 at 2:47 AM, Clayton Coleman 
wrote:

> That’s the long term direction, now that many extension points are
> maturing enough to be useful.  But I’ll caution and say the primary goal is
> to reduce maintenance costs, improve upgrade isolation, and maintain the
> appropriate level of security, so some of the more nuanced splits might
> take much longer.
>
> On Aug 14, 2018, at 6:51 PM, Daniel Comnea  wrote:
>
> Hi Clayton,
>
> Great progress!
>
> So am i right to say that *"**splitting OpenShift up to make it be able
> to run on top of kubernetes"* end result will be more like openshift
> distinct features turning more like add-ons rather than what we have today?
>
>
>
> On Tue, Aug 14, 2018 at 6:17 PM, Clayton Coleman 
> wrote:
>
>> As part of the continuation of splitting OpenShift up to make it be able
>> to run on top of kubernetes, we just merged https://github.com/open
>> shift/origin/pull/20344 which removes "openshift start node" and the
>> "openshift start" commands.  This means that the openshift binary will no
>> longer include the kubelet code and if you want an "all-in-one" openshift
>> experience you'll want to use "oc cluster up".
>>
>> There should be no impact to end users - starting in 3.10 we already only
>> used the kubelet (part of hyperkube binary) and use the
>> "openshift-node-config" binary to translate the node-config.yaml into
>> kubelet arguments.  oc cluster up has been running in this configuration
>> for a while.
>>
>> integration tests have been changed to only start the master components
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Custom certificate and the host associated with masterPublicURL

2018-08-29 Thread Daniel Comnea
Hi,

I'm trying to understand from a technical point of view the hard
requirement around namedCertificates and the hostname associated with the
masterPublicURL vs masterURL.

According to the docs [1] it says

"
The namedCertificates section should be configured only for the host name
associated with the masterPublicURL and oauthConfig.assetPublicURL settings
in the */etc/origin/master/master-config.yaml* file. Using a custom serving
certificate for the host name associated with the masterURL will result in
TLS errors as infrastructure components will attempt to contact the master
API using the internal masterURL host.
"

However, the above note/requirement doesn't apply to the self-signed
certificates generated by the openshift-ansible installer, and as such the
OP can have the same value defined for the below variables in his/her
inventory

openshift_master_cluster_public_hostname => maps to *masterPublicURL*
openshift_master_cluster_hostname => maps to *masterURL*


without having any side effect, i.e. TLS errors.
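
For concreteness, the section in question in
/etc/origin/master/master-config.yaml looks roughly like this (illustrative
host and file names):

servingInfo:
  namedCertificates:
  - certFile: /etc/origin/master/named_certificates/custom.crt
    keyFile: /etc/origin/master/named_certificates/custom.key
    names:
    - master.public.example.com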

Is there anything "special" about the self-signed certificates produced by
the openshift-ansible installer which means they don't generate any TLS
errors? If not, then I'd expect the same TLS errors as when the
namedCertificates section is present.


Dani

[1]
https://docs.openshift.com/container-platform/3.10/install_config/certificate_customization.html#configuring-custom-certificates
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Custom certificate and the host associated with masterPublicURL

2018-08-31 Thread Daniel Comnea
Okay Michael, I understand; thank you for the feedback.

In this case I think it would be reasonable to have a sanity check that
fails in case the values are the same, i.e. enforce it in the code.

On Thu, Aug 30, 2018 at 3:40 PM Michael Gugino  wrote:

> OpenShift components themselves call the masterURL.  We ensure that
> the internal API endpoint is trusted by all OpenShift components.  I
> strongly suggest following the documentation even if it appears to
> work otherwise, changing this behavior might result in breaking during
> an upgrade or other scenario where a custom certificate at the
> masterURL wasn't accounted for.
>
> On Wed, Aug 29, 2018 at 9:06 AM, Daniel Comnea 
> wrote:
> > Hi,
> >
> > I'm trying to understand from a technical point of view the hard
> requirement
> > around namedCertificates and the hostname associated with the
> > masterPublicURL vs masterURL.
> >
> > According to the docs [1] it says
> >
> > "
> > The namedCertificates section should be configured only for the host name
> > associated with the masterPublicURLand oauthConfig.assetPublicURL
> settings n
> > the /etc/origin/master/master-config.yaml file. Using a custom serving
> > certificate for the host name associated with the masterURL will result
> in
> > TLS errors as infrastructure components will attempt to contact the
> master
> > API using the internal masterURL host.
> > "
> >
> > However the above note/ requirement doesn't applies to the self-signed
> > certificated generated by the openshift-ansible installer and as such
> the OP
> > can have the same value defined to the below variables in his/her
> inventory
> >
> > openshift_master_cluster_public_hostname => map to masterPublicURL
> > openshift_master_cluster_hostname => map to masterURL
> >
> >
> > without having any side effect - ie TLS errors.
> >
> > Is there anything "special" around the self-signed certificates produced
> by
> > the openshift-ansible installer which doesn't generate any TLS errors ?
> > If not then i'd expect same TLS errors as for when the namedCertificates
> > section is present.
> >
> >
> > Dani
> >
> > [1]
> >
> https://docs.openshift.com/container-platform/3.10/install_config/certificate_customization.html#configuring-custom-certificates
> >
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >
>
>
>
> --
> Michael Gugino
> Senior Software Engineer - OpenShift
> mgug...@redhat.com
> 540-846-0304
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: CI automation location for RPMs is moving

2018-09-10 Thread Daniel Comnea
Clayton,

Is the url https://rpms.svc.ci.openshift.org meant to be publicly available,
or is it only available internally for your own deployments?

In addition, is the plan for everyone deploying OCP/OKD on RHEL/CentOS to
use the above common repo (assuming it is going to be publicly accessible)?


Dani


On Sun, Sep 9, 2018 at 3:26 AM Clayton Coleman  wrote:

> Previously, all RPMs used by PR and the test automation or Origin were
> located in GCS.  Starting with 3.11 and continuing forward, RPMs will be
> served from the api.ci cluster at:
>
> https://rpms.svc.ci.openshift.org
>
> You can get an rpm repo file for a release by clicking on one of the links
> on the page above or via curling the name directly:
>
> $ curl https://rpms.svc.ci.openshift.org/openshift-origin-v3.11.repo
> > /etc/yum.repos.d/openshift-origin-3.11.repo
>
> The contents of this repo will be the same as the contents of the image:
>
> docker.io/openshift/origin-artifacts:v3.11
>
> in the /srv/repo dir.
>
> PR jobs for 3.11 and onwards will now use this URL to fetch content.  The
> old location on GCS will no longer be updated as we are sunsetting the jobs
> that generated and used that content
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Service affinity predicate on regions based on label - why needed ?

2018-09-14 Thread Daniel Comnea
Hi,

I'm trying to understand what the reasons were for adding the K8s service
affinity predicate based on the region label [1].

If I remove it to overcome the problem described below, what use case will
I lose?

This predicate attempts to place pods with specific labels in its node
selector on nodes that have the same label.

If the pod does not specify the labels in its node selector, then the first
pod is placed on any node based on availability and all subsequent pods of
the service are scheduled on nodes that have the same label values as that
node.

In our case we deploy RabbitMQ pods that use hostPort, and in the event of
an infra node failure the new RabbitMQ pod will fail to start (it will
remain in Pending) since it cannot find another node with region=infra with
a free hostPort.


Dani

[1] https://github.com/openshift/openshift-ansible/blob/release-3.10/roles/openshift_control_plane/vars/main.yml#L27-L31
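
For reference, the predicate in [1] is defined along these lines
(paraphrased from the vars file; the exact defaults vary by release):

- name: Region
  argument:
    serviceAffinity:
      labels:
      - region

My understanding is that the default predicate list can be overridden via
openshift_master_scheduler_predicates in the inventory, which would be the
way to drop it if that turns out to be safe.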
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Service affinity predicate on regions based on label - why needed ?

2018-09-18 Thread Daniel Comnea
Hi,

Anyone able to help me find out the answer to my previous question?

On Fri, Sep 14, 2018 at 4:39 PM Daniel Comnea  wrote:

> Hi,
>
> I'm trying to understand what were the reasons for adding the K8 service
> affinity based on region label [1] ?
>
> If i remove it to overcome the problem described below, what use case will
> i lose?
>
> This predicate attempts to place pods with specific labels in its node
> selector on nodes that have the same label.
>
> If the pod does not specify the labels in its node selector, then the
> first pod is placed on any node based on availability and all subsequent
> pods of the service are scheduled on nodes that have the same label values
> as that node.
>
> In our case we deploy rabbitmq pods that use hostPort and in an event of
> a infra node failure the new rabbitmq pod will fail to start (will remain
> in pending) since it can not find another node with region=infra with free
> hostport.
>
>
> Dani
>
> [1] https://github.com/openshift/openshift-ansible
> /blob/release-3.10/roles/openshift_control_plane/vars/main.yml#L27-L31
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Plans on cutting Origin 3.11 / 4.0 ?

2018-10-10 Thread Daniel Comnea
Hi,

What are the plans for cutting a new Origin release? I see a _release-3.11_
branch on the Origin as well as the openshift-ansible git repos; however, I
don't see any Origin 3.11 release being out.

And then on BZ I see people have already raised issues against 3.11, hence
my confusion.

Thanks,
Dani
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Plans on cutting Origin 3.11 / 4.0 ?

2018-10-10 Thread Daniel Comnea
Sounds good, thanks for the update

On Wed, Oct 10, 2018 at 3:52 PM Clayton Coleman  wrote:

> I was waiting for some last minute settling of the branch, and I will cut
> an rc
>
> On Wed, Oct 10, 2018 at 10:49 AM Daniel Comnea 
> wrote:
>
>> Hi,
>>
>> What are the plans on cutting a new Origin release ? I see on
>> _release-3.11_  branch on Origin as well as openshift-ansible git repos
>> however i don't see any Origin 3.11 release being out.
>>
>> And then on BZ i see people already raised issues against 3.11 hence my
>> confusion.
>>
>> Thanks,
>> Dani
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: CI automation location for RPMs is moving

2018-10-15 Thread Daniel Comnea
Don't you need to build your repo file by setting the baseurl to
https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/ and then give it
a run?
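
I.e. a repo file along these lines (a sketch, untested):

[centos-okd-ci]
name=centos-okd-ci
baseurl=https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/
gpgcheck=0
enabled=1

Also note that the ansible "command" module doesn't process shell
redirection, so the ">" in your task below is passed to curl as an argument
(which is exactly what the error output shows); you'd need the "shell"
module, or better, the get_url / yum_repository modules.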



On Mon, Oct 15, 2018 at 7:08 AM Anton Hughes 
wrote:

> I'm trying to add the repo using ansible, like so:
>
> name: update yum repos
> command: curl -O https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/
> > /etc/yum.repos.d/openshift-origin-3.11.repo && yum update
> But I am getting the following error:
>
> FAILED! => {"changed": true, "cmd": ["curl", "-O", "
> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/";, ">",
> "/etc/yum.repos.d/openshift-origin-3.11.repo", "&&", "yum", "update"],
> "delta": "0:00:00.108763", "end": "2018-10-15 08:02:53.835727", "msg":
> "non-zero return code", "rc": 7, "start": "2018-10-15 08:02:53.726964",
> "stderr": "curl: Remote file name has no length!\ncurl: try 'curl --help'
> or 'curl --manual' for more information\n  % Total% Received % Xferd
> Average Speed   TimeTime Time  Current\n
>  Dload  Upload   Total   SpentLeft  Speed\n\r  0 00 0
>   0 0  0  0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not
> resolve host: >; Unknown error\ncurl: (3)  malformed\ncurl: (6) Could
> not resolve host: &&; Unknown error\ncurl: (7) Failed connect to yum:80;
> Connection refused\ncurl: (7) Failed connect to update:80; Connection
> refused", "stderr_lines": ["curl: Remote file name has no length!", "curl:
> try 'curl --help' or 'curl --manual' for more information", "  % Total%
> Received % Xferd  Average Speed   TimeTime Time  Current", "
>  Dload  Upload   Total   SpentLeft  Speed", "",
> "  0 00 00 0  0  0 --:--:-- --:--:-- --:--:--
>0curl: (6) Could not resolve host: >; Unknown error", "curl: (3) 
> malformed", "curl: (6) Could not resolve host: &&; Unknown error", "curl:
> (7) Failed connect to yum:80; Connection refused", "curl: (7) Failed
> connect to update:80; Connection refused"], "stdout": "", "stdout_lines":
> []}
>
>
> Can someone tell me what I am doing wrong?
>
> Thanks and kind regards
>
> On Wed, 10 Oct 2018 at 16:03, Rich Megginson  wrote:
>
>> On 10/9/18 8:14 PM, Clayton Coleman wrote:
>> > What website?
>>
>> curl -Lvsk https://rpms.svc.ci.openshift.org/openshift-origin-v3.11 just
>> hangs
>>
>> > Just use a slash at the end - all the CI jobs look like
>> > their working
>>
>>
>> Yep - curl -Lvsk
>> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/ works fine.
>>
>>
>> Thanks!
>>
>>
>> >> On Oct 9, 2018, at 10:10 PM, Rich Megginson 
>> wrote:
>> >>
>> >> Was this ever fixed?  Is this the cause of the website being currently
>> unresponsive?
>> >>
>> >>
>> >>> On 9/10/18 2:33 PM, Clayton Coleman wrote:
>> >>> Interesting, might be an HAProxy router bug.  Can you file one?
>> >>>
>> >>> On Mon, Sep 10, 2018 at 3:08 PM Seth Jennings > > wrote:
>> >>>
>> >>> There is a bug in the webserver configuration.  Main page links
>> to https://rpms.svc.ci.openshift.org/openshift-origin-v3.11 which gets
>> redirected to
>> >>> http://rpms.svc.ci.openshift.org:8080/openshift-origin-v3.11/
>> (drops https and adds port number).
>> >>>
>> >>> On Sat, Sep 8, 2018 at 9:27 PM Clayton Coleman <
>> ccole...@redhat.com > wrote:
>> >>>
>> >>> Previously, all RPMs used by PR and the test automation or
>> Origin were located in GCS.  Starting with 3.11 and continuing forward,
>> RPMs will be served from the api.ci 
>> >>> cluster at:
>> >>>
>> >>> https://rpms.svc.ci.openshift.org
>> >>>
>> >>> You can get an rpm repo file for a release by clicking on one
>> of the links on the page above or via curling the name directly:
>> >>>
>> >>> $ curl
>> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11.repo >
>> /etc/yum.repos.d/openshift-origin-3.11.repo
>> >>>
>> >>> The contents of this repo will be the same as the contents of
>> the image:
>> >>>
>> >>> docker.io/openshift/origin-artifacts:v3.11 <
>> http://docker.io/openshift/origin-artifacts:v3.11>
>> >>>
>> >>> in the /srv/repo dir.
>> >>>
>> >>> PR jobs for 3.11 and onwards will now use this URL to fetch
>> content.  The old location on GCS will no longer be updated as we are
>> sunsetting the jobs that generated and used that content
>> >>> ___
>> >>> dev mailing list
>> >>> dev@lists.openshift.redhat.com > dev@lists.openshift.redhat.com>
>> >>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>> >>>
>> >>>
>> >>> ___
>> >>> dev mailing list
>> >>> dev@lists.openshift.redhat.com
>> >>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>> >>
>> >> ___
>> >> dev mailing list
>> >> dev@lists.openshift.redhat.com
>> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

Re: CI automation location for RPMs is moving

2018-10-16 Thread Daniel Comnea
Anton,

if you set your inventory like below it should get you going.

[OSEv3:vars]
(...)
openshift_additional_repos=[{'id': 'centos-okd-ci', 'name':
'centos-okd-ci', 'baseurl'
:'https://rpms.svc.ci.openshift.org/openshift-origin-v3.11',
'gpgcheck' :'0', 'enabled' :'1'}]

On a different note, the OKD v3.11 rpms on CentOS will hopefully become
available for testing this week, at which point I'll rely on you and
others in the community to help out with testing.

Thanks.
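
As a quick sanity check before re-running the playbook, something like the
below should list the 3.11 packages (sketch; the repo id matches the
inventory snippet above):

# Verify the repo actually resolves the packages openshift-ansible needs.
yum clean all
yum --disablerepo='*' --enablerepo=centos-okd-ci list available 'origin*'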


On Tue, Oct 16, 2018 at 7:31 AM Anton Hughes 
wrote:

> 1. Are you on ansible 2.6 or earlier?
>>
> Im using ansible 2.6.5
>
>> 2. If you access that machine and run 'yum install origin-node-3.11*' do
>> you get a result?
>>
>  I get
>
> yum install origin-node-3.11
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
>  * base: mirror.ratiokontakt.de
>  * epel: mirror.wiuwiu.de
>  * extras: mirror.ratiokontakt.de
>  * updates: mirror.checkdomain.de
> No package origin-node-3.11 available.
> Error: Nothing to do
>
> 3. If you run yum clean on the machine, and then run, do you get the right
>> outcome?
>>
>
> No
>
>
>> 4. Did you add the repo to all nodes correctly (verify 2-3 on each)?
>>
>
> I'm trying to install on a single node (master and worker on same host)
> until I can get it to install correctly.
>
> On Tue, 16 Oct 2018 at 03:46, Clayton Coleman  wrote:
>
>> A couple of things to check.
>>
>> 1. Are you on ansible 2.6 or earlier?
>> 2. If you access that machine and run 'yum install origin-node-3.11*' do
>> you get a result?
>> 3. If you run yum clean on the machine, and then run, do you get the
>> right outcome?
>> 4. Did you add the repo to all nodes correctly (verify 2-3 on each)?
>>
>> On Mon, Oct 15, 2018 at 6:04 AM Anton Hughes 
>> wrote:
>>
>>> Ive tried both of the following - and get the same error:
>>> yum-config-manager --add-repo
>>> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11.repo && yum
>>> update
>>> yum-config-manager --add-repo
>>> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/ && yum update
>>>
>>> Error
>>>  Play: Configure nodes
>>>  Task: Install node, clients, and conntrack packages
>>>  Message:  No package matching 'origin-node-3.11' found available,
>>> installed or update
>>>
>>> On Mon, 15 Oct 2018 at 21:24, Daniel Comnea 
>>> wrote:
>>>
>>>> Don't you need to build your repo file by setting the baseUrl to
>>>> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/ and then
>>>> give it a run ?
>>>>
>>>>
>>>>
>>>> On Mon, Oct 15, 2018 at 7:08 AM Anton Hughes 
>>>> wrote:
>>>>
>>>>> I'm trying to repo using ansible, like so:
>>>>>
>>>>> name: update yum repos
>>>>> command: curl -O
>>>>> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/ >
>>>>> /etc/yum.repos.d/openshift-origin-3.11.repo && yum update
>>>>> But I am getting the following error:
>>>>>
>>>>> FAILED! => {"changed": true, "cmd": ["curl", "-O", "
>>>>> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/";, ">",
>>>>> "/etc/yum.repos.d/openshift-origin-3.11.repo", "&&", "yum", "update"],
>>>>> "delta": "0:00:00.108763", "end": "2018-10-15 08:02:53.835727", "msg":
>>>>> "non-zero return code", "rc": 7, "start": "2018-10-15 08:02:53.726964",
>>>>> "stderr": "curl: Remote file name has no length!\ncurl: try 'curl --help'
>>>>> or 'curl --manual' for more information\n  % Total% Received % Xferd
>>>>> Average Speed   TimeTime Time  Current\n
>>>>>  Dload  Upload   Total   SpentLeft  Speed\n\r  0 00 0
>>>>>   0 0  0  0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could 
>>>>> not
>>>>> resolve host: >; Unknown error\ncurl: (3)  malformed\ncurl: (6) Could
>>>>> not resolve host: &&; Unknown error\ncurl: (7) Failed connect to yum:80;
>>>>> Connection refused ...

Re: CI automation location for RPMs is moving

2018-10-16 Thread Daniel Comnea
That's expected, since I missed a / at the end.

If you use the below, it should all work for you:

[OSEv3:vars]
(...)
openshift_additional_repos=[{'id': 'centos-okd-ci', 'name':
'centos-okd-ci', 'baseurl'
:'https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/',
'gpgcheck' :'0', 'enabled' :'1'}]

Sorry for the typo.
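
A quick way to see why the trailing slash matters, tying back to the
redirect issue Seth reported earlier in the thread (sketch):

# Without the slash the server redirects to http://...:8080/... (drops
# https, adds a port); with the slash it serves the repo content directly.
curl -sI https://rpms.svc.ci.openshift.org/openshift-origin-v3.11 | head -n 3
curl -sI https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/ | head -n 3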


On Tue, Oct 16, 2018 at 7:13 PM Anton Hughes 
wrote:

> Thanks Daniel
>
> I tried
>
> openshift_additional_repos=[{'id': 'centos-okd-ci', 'name': 'centos-okd-ci', 
> 'baseurl' :'https://rpms.svc.ci.openshift.org/openshift-origin-v3.11', 
> 'gpgcheck' :'0', 'enabled' :'1'}]
>
>
> that but it fails with the following:
>
> TASK [openshift_node : Install node, clients, and conntrack packages]
> **
> Wednesday 17 October 2018  07:03:19 +1300 (0:00:01.532)   0:02:03.230
> *
> FAILED - RETRYING: Install node, clients, and conntrack packages (3
> retries left).
> FAILED - RETRYING: Install node, clients, and conntrack packages (2
> retries left).
> FAILED - RETRYING: Install node, clients, and conntrack packages (1
> retries left).
> failed: [xxx.xxx.xx.xxx] (item={u'name': u'origin-node-3.11'}) =>
> {"attempts": 3, "changed": false, "item": {"name": "origin-node-3.11"},
> "msg": "No package matching 'origin-node-3.11' found available, installed
> or updated", "rc": 126, "results": ["No package matching 'origin-node-3.11'
> found available, installed or updated"]}
> FAILED - RETRYING: Install node, clients, and conntrack packages (3
> retries left).
> FAILED - RETRYING: Install node, clients, and conntrack packages (2
> retries left).
> FAILED - RETRYING: Install node, clients, and conntrack packages (1
> retries left).
> failed: [xxx.xxx.xx.xxx] (item={u'name': u'origin-clients-3.11'}) =>
> {"attempts": 3, "changed": false, "item": {"name": "origin-clients-3.11"},
> "msg": "No package matching 'origin-clients-3.11' found available,
> installed or updated", "rc": 126, "results": ["No package matching
> 'origin-clients-3.11' found available, installed or updated"]}
>
>
> On Wed, 17 Oct 2018 at 01:41, Daniel Comnea  wrote:
>
>> Anton,
>>
>> if you set your inventory like below it should get you going.
>>
>> [OSEv3:vars]
>> (...)
>> openshift_additional_repos=[{'id': 'centos-okd-ci', 'name': 'centos-okd-ci', 
>> 'baseurl' :'https://rpms.svc.ci.openshift.org/openshift-origin-v3.11', 
>> 'gpgcheck' :'0', 'enabled' :'1'}]
>>
>> On a different note the OKD v3.11 rpms on CentOS will become available 
>> hopefully this week for testing at least at which point i rely on you and 
>> others in the community to help out with testing.
>>
>> Thanks.
>>
>>
>> On Tue, Oct 16, 2018 at 7:31 AM Anton Hughes 
>> wrote:
>>
>>> 1. Are you on ansible 2.6 or earlier?
>>>>
>>> Im using ansible 2.6.5
>>>
>>>> 2. If you access that machine and run 'yum install origin-node-3.11*'
>>>> do you get a result?
>>>>
>>>  I get
>>>
>>> yum install origin-node-3.11
>>> Loaded plugins: fastestmirror
>>> Loading mirror speeds from cached hostfile
>>>  * base: mirror.ratiokontakt.de
>>>  * epel: mirror.wiuwiu.de
>>>  * extras: mirror.ratiokontakt.de
>>>  * updates: mirror.checkdomain.de
>>> No package origin-node-3.11 available.
>>> Error: Nothing to do
>>>
>>> 3. If you run yum clean on the machine, and then run, do you get the
>>>> right outcome?
>>>>
>>>
>>> No
>>>
>>>
>>>> 4. Did you add the repo to all nodes correctly (verify 2-3 on each)?
>>>>
>>>
>>> I'm trying to install on a single node (master and worker on same host)
>>> until I can get it to install correctly.
>>>
>>> On Tue, 16 Oct 2018 at 03:46, Clayton Coleman 
>>> wrote:
>>>
>>>> A couple of things to check.
>>>>
>>>> 1. Are you on ansible 2.6 or earlier?

[CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-10-17 Thread Daniel Comnea
Hi,

We would like to announce that the OKD v3.11 rpms are available for testing
at [1].

As such we are calling for help from the community to start testing and to
let us know if there are issues with the rpms and their dependencies.

And in the spirit of transparency, see below the plan to promote the rpms to
the mirror.centos.org repo:


   1. in the next few days the packages should be promoted to the test repo
   [2] (it does not exist yet; we are waiting for the sync to happen in the
   background)
   2. in one to two weeks' time, if we haven't heard of any issues/blockers,
   we will promote to repo [3] (it doesn't exist yet; it will once the rpms
   are promoted and signed)


Please note the ansible version used (and supported) *must be* 2.6.x and not
2.7; if you opt to ignore this warning you will run into issues.
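
If your hosts already picked up a newer ansible from EPEL, here is a hedged
sketch of pinning back to the supported series (exact package versions vary
by repo):

# Drop a too-new ansible and pin to the 2.6 series before running the
# openshift-ansible playbooks.
yum remove -y ansible
yum install -y 'ansible-2.6*'
ansible --version   # expect 2.6.x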

On a different note, the CentOS Infra team are working hard (thanks!) to
package and release a centos-ansible rpm which we'll promote in our PaaS
repos.

The rationale is to bring more control over the ansible version used/
required by the openshift-ansible installer, and not to rely on the latest
ansible version pushed to the EPEL repo, which recently caused friction
(reflected in our CI as well as in issues reported by users).


Thank you,
PaaS SiG team

[1] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/
[2] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
[3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Is Docker enterprise version subscription required for Openshift 3.7

2018-10-17 Thread Daniel Comnea
Just a quick heads-up: I doubt you can deploy any OpenShift version (3.7+)
with a Docker version higher than 1.13.
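
A quick way to confirm what you are actually running (sketch):

# On RHEL/CentOS 7 with the Extras docker package this reports 1.13.x.
docker version --format '{{.Server.Version}}'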

On Wed, Oct 17, 2018 at 6:07 PM Santosh Kumar30 
wrote:

>
>
> Are you saying that we require Docker 17 or later for Hyperledger Fabric
> image deployment?
>
> If yes, we would definitely require an additional Docker Enterprise Edition
> subscription to deploy it on OpenShift 3.7; is this assumption correct?
>
>
>
> Regards,
>
> Santosh Kumar
>
>
>
> *From:* Mark Wagner 
> *Sent:* Wednesday, October 17, 2018 8:18 PM
> *To:* Jeremy Eder 
> *Cc:* dev@lists.openshift.redhat.com; Santosh Kumar30
> 
> *Subject:* Re: Is Docker enterprise version subscription required for
> Openshift 3.7
>
>
>
> From the upstream Fabric list.
>
>
>
> Technically, at runtime right now Docker 1.13 or later will work for pure
> Docker and/or Kubernetes.
> The samples and example network rely on later versions of docker-compose
> which I believe require some features of Docker 17.06 and later (I think in
> the area of networks and volumes but don't recall explicitly and we
> definitely use docker exec commands in some of the samples which require
> 17.06).
>
> With 1.3 and earlier, you should still be able to build with Docker 1.13,
> but with the current master we've moved to multistage builds which require
> 17.06 and later to build.
>
> Hope this helps.
>
> FWIW, I was able to use the Docker 1.13 version which ships with Redhat
> 7.x to build and run Redhat-based Fabric images.
>
> -- G
>
>
>
> On Wed, Oct 17, 2018 at 9:09 AM, Jeremy Eder  wrote:
>
> Mark, do you know where the version requirement in the hyperledger docs
> comes from?
>
>
>
> On Wed, Oct 17, 2018 at 7:50 AM Santosh Kumar30 <
> sk00546...@techmahindra.com> wrote:
>
> Hi,
>
>
>
> I am recently started exploring openshift. I am a Hyperledger blockcahin
> developer.
>
> I am trying to create a blockchain network containing which will contain
> Hyperledger – peer, orderer, cli… and there images have been provided by
> Hyperledger.
>
>
>
> As per the Hyperledger doc, these images only compatable with Docker
> version 17.06.2-ce or greater .
>
>
> https://hyperledger-fabric.readthedocs.io/en/release-1.3/prereqs.html#docker-and-docker-compose
>
>
>
> But as Openshift 3.7 release note:
> https://docs.openshift.com/container-platform/3.7/release_notes/ocp_3_7_release_notes.html#ocp-37-about-this-release
>
>
>
> OpenShift Container Platform 3.7 is supported on RHEL 7.3, 7.4.2, 7.5, and
> Atomic Host 7.4.2 and newer with the latest packages from Extras, including
> Docker 1.12.
>
>
>
> So *my query here is: if I need Docker 17 or a later version for this
> OpenShift 3.7, would I require a Docker Enterprise subscription*, since
> on a RHEL Linux system the Docker CE version will not work?
>
>
>
>
>
> Thanks in advance.
>
>
>
> Regards,
>
> Santosh Kumar
>
>
> 
>
> Disclaimer:  This message and the information contained herein is
> proprietary and confidential and subject to the Tech Mahindra policy
> statement, you may review the policy at
> http://www.techmahindra.com/Disclaimer.html externally
> http://tim.techmahindra.com/tim/disclaimer.html internally within
> TechMahindra.
>
>
> 
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
>
>
> --
>
>
>
> -- Jeremy Eder
>
>
>
>
> --
>
> Mark Wagner
>
> Senior Principal Software Engineer
>
> Performance and Scalability
>
> Red Hat, Inc
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [CentOS-devel] [CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-10-18 Thread Daniel Comnea
PSB

On Thu, Oct 18, 2018 at 6:17 PM Rich Megginson  wrote:

> On 10/17/18 3:38 AM, Daniel Comnea wrote:
> > Hi,
> >
> > We would like to announce that OKD v3.11 rpms are available for testing
> at [1].
> >
> > As such we are calling for help from community to start testing and let
> us know if there are issues with the rpms and its dependencies.
> >
> > And in the spirit of transparency see below the plan to promote the rpms
> to mirror.centos.org repo:
> >
> >  1. in the next few days the packages should be promoted to the test
> repo [2] (currently it does not exist, we are waiting to be sync'ed in the
> background)
> >  2. in one/two weeks time if we haven't heard any issues/ blockers we
> are going to promote to [3] repo (currently it doesn't exist, it will once
> the rpm will be promoted and signed)
> >
> >
> > Please note the ansible version used (and supported) /*must be*/ 2.6.x
> and not 2.7; if you opt to ignore this warning you will run into issues.
> >
> > On a different note the CentOS Infra team are working hard (thanks !) to
> package and release a centos-ansible rpm which we'll promote in our PaaS
> repos.
>
>
> So does that mean we cannot test OKD v3.11 yet, unless we build our own
> version of ansible 2.6.x?
> [DC]: So I've been waiting for the Infra guys to build the rpm, but they
> are traveling, and as such I went ahead and tagged ansible 2.6; it should
> appear at [1] in the next 15-20 min. That should unblock you all from
> testing it.
>
> What will happen if we attempt to use ansible 2.7?  In my testing, I get
> stuck at deploying the control plane pods - it seems the virtual networking
> was not set up by openshift-ansible.
> [DC]: There have been a few issues reported on this topic, and since they
> were already known we made it clear which ansible version is supported
> (read: it works) and which is not.
>
> >
> > The rationale is to bring more control over the ansible version used/
> required by the openshift-ansible installer and not rely on the latest
> ansible version pushed to the EPEL repo, which recently caused
> > friction (reflected in our CI as well as in issues reported by users)
> >
> >
> > Thank you,
> > PaaS SiG team
> >
> > [1] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/
> > [2]
> https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
> > [3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
> >
> > ___
> > CentOS-devel mailing list
> > centos-de...@centos.org
> > https://lists.centos.org/mailman/listinfo/centos-devel
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-10-19 Thread Daniel Comnea
Hi all,

First of all, sorry for the late reply, as well as for any confusion I may
have caused with my previous email.
I was very pleased to see the vibe and excitement around testing OKD v3.11;
very much appreciated.

Here is the latest info:

   - everyone who wants to help us with testing should use repo [1], which
   can be consumed:
      - in the inventory, as in [2], or
      - by deploying your own repo file [3]
   - nobody should use the repo I mentioned in my previous email [4]
   (the CentOS Infra team corrected me on the confusion I caused; once
   again, apologies for that)


Regarding the ansible version, here is the info following my sync-up with
the CentOS Infra team:

   - very likely on Monday, or Tuesday at the latest, a new rpm called
   centos-release-ansible26 will appear in CentOS Extras
   - the above rpm will become a dependency of the
   *centos-release-openshift-origin311* rpm, which will be created and land
   in the CentOS Extras repo at the same time OKD v3.11 is promoted to
   mirror.centos.org
      - note this is the same flow as for all versions prior to v3.11
      (the rpm provides the CentOS repo location for the OKD rpms).

*Note*:

   1. if your flow up until now was to never use the
   *centos-release-openshift-originXXX* rpm and you were creating your own
   repo files, then you will need to make sure you pull in the ansible 2.6.x
   rpm (together with its own dependencies). It is up to you where you pull
   the ansible rpm from: EPEL, CentOS Extras, etc.
   2. with the above we are trying to have a single way of solving the
   ansible dependency problem


Hopefully this brings more clarity around this topic.



Thank you,
PaaS SiG team

[1] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
[2]

[OSEv3:vars]
(...)
openshift_additional_repos=[{'id': 'centos-okd-ci', 'name':
'centos-okd-ci', 'baseurl'
:'http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/',
'gpgcheck' :'0', 'enabled' :'1'}]


[3]
[centos-openshift-origin311-testing]
name=CentOS OpenShift Origin Testing
baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
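
(If you deploy the repo file [3] as-is, note enabled=0 means you enable it
explicitly per transaction; a sketch, with the package name taken from
earlier in this thread:)

# Enable the testing repo only for this install.
yum --enablerepo=centos-openshift-origin311-testing install origin-node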

[4] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/



On Wed, Oct 17, 2018 at 10:38 AM Daniel Comnea 
wrote:

> Hi,
>
> We would like to announce that OKD v3.11 rpms are available for testing
> at [1].
>
> As such we are calling for help from community to start testing and let us
> know if there are issues with the rpms and its dependencies.
>
> And in the spirit of transparency see below the plan to promote the rpms
> to mirror.centos.org repo:
>
>
>1. in the next few days the packages should be promoted to the test
>repo [2] (currently it does not exist, we are waiting to be sync'ed in
>the background)
>2. in one/two weeks time if we haven't heard any issues/ blockers we
>are going to promote to [3] repo (currently it doesn't exist, it will
>once the rpm will be promoted and signed)
>
>
> Please note the ansible version used (and supported) *must be* 2.6.x and
> not 2.7; if you opt to ignore this warning you will run into issues.
>
> On a different note the CentOS Infra team are working hard (thanks !) to
> package and release a centos-ansible rpm which we'll promote in our PaaS
> repos.
>
> The rationale is to bring more control over the ansible version used/
> required by the openshift-ansible installer and not rely on the latest
> ansible version pushed to the EPEL repo, which recently caused friction
> (reflected in our CI as well as in issues reported by users)
>
>
> Thank you,
> PaaS SiG team
>
> [1] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/
> [2] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
> [3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Dropping oadm binary entirely

2018-11-08 Thread Daniel Comnea
Is that going into 3.11 (assuming a new minor release) or straight into
4.0?
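
For anyone grepping scripts for the deprecated binary, the switch Maciej
describes below is mechanical; a sketch (the role/user/project names are
made up):

# Old spelling, deprecated since 3.9 and now being removed:
oadm policy add-role-to-user admin alice -n myproject
# Replacement:
oc adm policy add-role-to-user admin alice -n myproject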

On Thu, Nov 8, 2018 at 3:28 PM Maciej Szulik  wrote:

> Hey,
> We've deprecated oadm binary back in 3.9 in favor of oc adm. [1] removes
> the binary entirely.
> If you find yourself using oadm, please switch to oc adm ASAP.
>
> Cheers,
> Maciej Szulik
>
>
> [1] https://github.com/openshift/origin/pull/21452
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Dropping oadm binary entirely

2018-11-09 Thread Daniel Comnea
Okay, thanks for the information.

So which version will it be if it is not 4.0? I guess it's going to be
4.x+?

On Fri, Nov 9, 2018 at 1:08 PM Maciej Szulik  wrote:

> 4.0 and newer. 3.11 is not affected by this change.
>
> Maciej
>
> On Thu, Nov 8, 2018 at 5:27 PM Daniel Comnea 
> wrote:
>
>> Is that going into 3.11 (assuming a new minor release) or straight into
>> 4.0 ?
>>
>> On Thu, Nov 8, 2018 at 3:28 PM Maciej Szulik  wrote:
>>
>>> Hey,
>>> We've deprecated oadm binary back in 3.9 in favor of oc adm. [1] removes
>>> the binary entirely.
>>> If you find yourself using oadm, please switch to oc adm ASAP.
>>>
>>> Cheers,
>>> Maciej Szulik
>>>
>>>
>>> [1] https://github.com/openshift/origin/pull/21452
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


[CentOS PaaS SIG]: Origin v3.11 rpms available officially released

2018-11-09 Thread Daniel Comnea
Hi,

We would like to announce that the OKD v3.11 rpms have been officially
released and are available at [1].

In order to use the released repo [1] we have created and published the rpm
(containing the yum configuration file) [2], which is in the main CentOS
Extras repository. The rpm itself has a dependency on
*centos-release-ansible26* [3], which is the ansible 2.6 rpm built by the
CentOS Infra team.

Should you decide not to use the *centos-release-openshift-origin3** rpm,
then it will be your responsibility to get the ansible 2.6 required by the
openshift-ansible installer.
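
For completeness, a sketch of the happy path on a stock CentOS 7 host
(assumes the Extras repo is enabled, which it is by default):

# The release rpm lays down the repo config and pulls in
# centos-release-ansible26 as a dependency.
yum install -y centos-release-openshift-origin311
yum install -y openshift-ansible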

Please note that due to ongoing work on releasing CentOS 7.6, the
mirror.centos.org repo is in freeze mode (see [4]), and as such we have not
published the rpms to [5]. Once the freeze ends, we'll publish the rpms.

Kudos to the CentOS Infra team for being very kind in giving us a waiver
to make the current release possible.


Thank you,
PaaS SiG team

[1] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
[2] http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-openshift-origin311-1-2.el7.centos.noarch.rpm
[3] http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-ansible26-1-3.el7.centos.noarch.rpm
[4] https://lists.centos.org/pipermail/centos-devel/2018-November/017033.html

[5] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [CentOS-devel] [CentOS PaaS SIG]: Origin v3.11 rpms available officially released

2018-11-13 Thread Daniel Comnea
Hi Leo,

The rpms are already in the official CentOS repository [1]. As communicated
earlier, once CentOS 7.6 is out, we (as a SIG) will be allowed to promote
the rpms.

As soon as the CentOS Infra team informs us, we will action it immediately,
followed by an announcement here.


Dani

[1] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/

On Tue, Nov 13, 2018 at 4:55 AM leo David  wrote:

> Hi,
> First of all, thank you very much for all the work to get this version
> released !
> Any news about having the rpms in the Centos official repos ?
> Thank you !
> Leo
>
> On Mon, Nov 12, 2018, 15:13 Scott Dodson 
>> We're aware of some issues in 2.7.0, some tasks were skipped preventing
>> proper etcd certificate generation, that appears to be fixed in 2.7.1.
>> Our OpenShift QE teams do not currently test with 2.7 so the community
>> may be the first to encounter problems but we'll try to fix them if you
>> open a github issue.
>>
>> On Mon, Nov 12, 2018 at 7:24 AM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Fri, Nov 9, 2018 at 18:15 Daniel Comnea <
>>> comnea.d...@gmail.com> wrote:
>>>
>>>>
>>>> Hi,
>>>>
>>>> We would like to announce that the OKD v3.11 rpms have been officially
>>>> released and are available at [1].
>>>>
>>>> In order to use the released repo [1] we have created and published
>>>> the rpm (containing the yum configuration file) [2], which is in the
>>>> main CentOS Extras repository. The rpm itself has a dependency on
>>>> *centos-release-ansible26* [3], which is the ansible 2.6 rpm built by
>>>> the CentOS Infra team.
>>>>
>>>
>>> Is there any known issue with ansible 2.7 with regards to this Origin
>>> release?
>>> I'm asking because in several other places within oVirt we are using 2.7
>>> modules and we are working on role/playbook for deploying Origin on oVirt.
>>>
>>>
>>>
>>>
>>>>
>>>> Should you decide not to use the *centos-release-openshift-origin3**
>>>> rpm, then it will be your responsibility to get the ansible 2.6
>>>> required by the openshift-ansible installer.
>>>>
>>>> Please note that due to ongoing work on releasing CentOS 7.6, the
>>>> mirror.centos.org repo is in freeze mode - see [4] and as such we have
>>>> not published the rpms to [5]. Once the freeze ends, we'll publish the
>>>> rpms.
>>>>
>>>> Kudos to the CentOS Infra team for being very kind in giving us a
>>>> waiver to make the current release possible.
>>>>
>>>>
>>>> Thank you,
>>>> PaaS SiG team
>>>>
>>>> [1] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
>>>> [2] http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-openshift-origin311-1-2.el7.centos.noarch.rpm
>>>> [3] http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-ansible26-1-3.el7.centos.noarch.rpm
>>>> [4] 
>>>> https://lists.centos.org/pipermail/centos-devel/2018-November/017033.html
>>>>
>>>> [5] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
>>>> ___
>>>> CentOS-devel mailing list
>>>> centos-de...@centos.org
>>>> https://lists.centos.org/mailman/listinfo/centos-devel
>>>>
>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>>
>>> sbona...@redhat.com
>>> <https://red.ht/sig>
>>> ___
>>> CentOS-devel mailing list
>>> centos-de...@centos.org
>>> https://lists.centos.org/mailman/listinfo/centos-devel
>>>
>> ___
>> users mailing list
>> us...@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
> ___
> users mailing list
> us...@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Openshift Origin builds for CVE-2018-1002105

2018-12-06 Thread Daniel Comnea
I'll chime in to get some clarity.

The CentOS rpms are built by the PaaS SIG and are based on the Origin tag
release.
As such, in order to have new Origin rpms built and pushed into the CentOS
repos we will need:


   - the fix to make it into the 3.11/3.10 Origin branches => done [1];
   however, I am just guessing those are the right PRs, so someone from RH
   will need to confirm/refute
   - a new Origin release to be cut for 3.11/3.10
   - then I can start the PaaS SIG work

You can also see some details in [2], but again I have not validated them
myself.

Hope this brings some clarity.


Dani

[1]
https://github.com/openshift/origin/pull/21600 (3.11)
https://github.com/openshift/origin/pull/21601 (3.10)

[2] https://github.com/openshift/origin/issues/21606

On Thu, Dec 6, 2018 at 10:07 AM Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> On top of that is anyone here building publicly accessible rpms/srpms?
>
>
> On Thu, Dec 6, 2018, 07:36, Gowtham Sundara <
> gowtham.sund...@rapyuta-robotics.com wrote:
>
>> Hello,
>> The RPMs for Openshift origin need to be updated because of the recent
>> vulnerability. Is there a release schedule for this?
>>
>> --
>> Gowtham Sundara
>> Site Reliability Engineer
>>
>> Rapyuta Robotics “empowering lives with connected machines”
>> rapyuta-robotics.com 
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Openshift Origin builds for CVE-2018-1002105

2018-12-06 Thread Daniel Comnea
Cheers for chiming in, Clayton.

In this case, do you fancy cutting a new minor release for 3.10/3.11, and
then I'll take it over?

Dani

On Thu, Dec 6, 2018 at 3:18 PM Clayton Coleman  wrote:

> These are the correct PRs
>
> On Dec 6, 2018, at 10:14 AM, Daniel Comnea  wrote:
>
> I'll chime in to get some clarity.
>
> The CentOS rpms are built by the PaaS SIG and is based on the Origin tag
> release.
> As such in order to have new origin rpms built/ pushed into CentOS repos
> we will need:
>
>
>- the fix to make it into 3.11/3.10 Origin branches => done [1]
>however i am just guessing those are the right PRs, someone from RH
>will need to confirm/ refute
>- a new Origin release to be cut for 3.11/3.10
>- then i can start with the PaaS Sig work
>
> You can also see some details on [2] but again i have not validated myself
>
> Hope this brings some clarity
>
>
> Dani
>
> [1]
> https://github.com/openshift/origin/pull/21600 (3.11)
> https://github.com/openshift/origin/pull/21601 (3.10)
>
> [2] https://github.com/openshift/origin/issues/21606
>
> On Thu, Dec 6, 2018 at 10:07 AM Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> On top of that is anyone here building publicly accessible rpms/srpms?
>>
>>
>> On Thu, Dec 6, 2018, 07:36, Gowtham Sundara <
>> gowtham.sund...@rapyuta-robotics.com wrote:
>>
>>> Hello,
>>> The RPMs for Openshift origin need to be updated because of the recent
>>> vulnerability. Is there a release schedule for this?
>>>
>>> --
>>> Gowtham Sundara
>>> Site Reliability Engineer
>>>
>>> Rapyuta Robotics “empowering lives with connected machines”
>>> rapyuta-robotics.com <https://www.rapyuta-robotics.com/>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Openshift Origin builds for CVE-2018-1002105

2018-12-06 Thread Daniel Comnea
On Thu, Dec 6, 2018 at 3:25 PM Gowtham Sundara <
gowtham.sund...@rapyuta-robotics.com> wrote:

> Hello,
> Is there a ci build for version 3.9? (can't seem to find one, so I am
> assuming not). Could you please cut a minor release for 3.9 too as Daniel
> suggested.
>
> [DC]: The K8s fix was backported down to 1.10, and our RH fellows did the
same. I doubt there will be anything for < 3.10 (not in OKD, I suspect).


> Thanks
>
> On Thu, Dec 6, 2018 at 8:50 PM Daniel Comnea 
> wrote:
>
>> Cheers for chime in Clayton.
>>
>> In this case you fancy cutting new minor release for 3.10/ 3.11 and then
>> i'll take it over?
>>
>> Dani
>>
>> On Thu, Dec 6, 2018 at 3:18 PM Clayton Coleman 
>> wrote:
>>
>>> These are the correct PRs
>>>
>>> On Dec 6, 2018, at 10:14 AM, Daniel Comnea 
>>> wrote:
>>>
>>> I'll chime in to get some clarity.
>>>
>>> The CentOS rpms are built by the PaaS SIG and is based on the Origin
>>> tag release.
>>> As such in order to have new origin rpms built/ pushed into CentOS repos
>>> we will need:
>>>
>>>
>>>- the fix to make it into 3.11/3.10 Origin branches => done [1]
>>>however i am just guessing those are the right PRs, someone from RH
>>>will need to confirm/ refute
>>>- a new Origin release to be cut for 3.11/3.10
>>>- then i can start with the PaaS Sig work
>>>
>>> You can also see some details on [2] but again i have not validated
>>> myself
>>>
>>> Hope this brings some clarity
>>>
>>>
>>> Dani
>>>
>>> [1]
>>> https://github.com/openshift/origin/pull/21600 (3.11)
>>> https://github.com/openshift/origin/pull/21601 (3.10)
>>>
>>> [2] https://github.com/openshift/origin/issues/21606
>>>
>>> On Thu, Dec 6, 2018 at 10:07 AM Mateus Caruccio <
>>> mateus.caruc...@getupcloud.com> wrote:
>>>
>>>> On top of that is anyone here building publicly accessible rpms/srpms?
>>>>
>>>>
>>>> On Thu, Dec 6, 2018, 07:36, Gowtham Sundara <
>>>> gowtham.sund...@rapyuta-robotics.com wrote:
>>>>
>>>>> Hello,
>>>>> The RPMs for Openshift origin need to be updated because of the recent
>>>>> vulnerability. Is there a release schedule for this?
>>>>>
>>>>> --
>>>>> Gowtham Sundara
>>>>> Site Reliability Engineer
>>>>>
>>>>> Rapyuta Robotics “empowering lives with connected machines”
>>>>> rapyuta-robotics.com <https://www.rapyuta-robotics.com/>
>>>>> ___
>>>>> dev mailing list
>>>>> dev@lists.openshift.redhat.com
>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>>
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
>
> --
> Gowtham Sundara
> Site Reliability Engineer
>
> Rapyuta Robotics “empowering lives with connected machines”
> rapyuta-robotics.com <https://www.rapyuta-robotics.com/>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


[4.x]: any future plans for proxy-mode: ipvs ?

2019-06-08 Thread Daniel Comnea
Hi,

Are there any future plans in the 4.x lifecycle to decouple kube-proxy from
OVN and allow setting up / running the upstream K8s kube-proxy in ipvs
mode?

Cheers,
Dani
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [4.x]: any future plans for proxy-mode: ipvs ?

2019-06-10 Thread Daniel Comnea
Hi Clayton, Dan,

thanks for taking the time to respond, much appreciated.

On Mon, Jun 10, 2019 at 5:46 PM Dan Williams  wrote:

> On Sat, 2019-06-08 at 14:52 -0700, Clayton Coleman wrote:
> > OVN implements kube services on nodes without kube-proxy - is there a
> > specific feature gap between ipvs and ovn services you see that needs
> > to be filled?
>
> I'd love to hear the answer to that question too :)
>
> [DC]: Without knowing the details of OVN's LB implementation, I doubt I
can call it a gap ;) That said, let me give our use case, which we ran back
on 1.5 and are still running on 3.7.
Being in the video processing/encoding space, we have some app pods which
need to talk to hardware storage data-plane IPs, over which various video
segments (different chunk sizes: 2/6 seconds, and different bitrates) are
written/pulled.
Now the pods talk to a K8s service (2 ports) which is mapped to a big
endpoint list (300-600 endpoint IPs). As such (if I remember correctly) we
ended up with # of iptables rules = # of pods (2000) x 2 (K8s service
ports) x # of endpoints; with 300-600 endpoints that works out to roughly
1.2-2.4 million rules.

What we've seen in the past was that load-balancing traffic distribution
was not hitting all endpoints (some were getting hit harder than others).
As such we thought that, with ipvs having gone stable in K8s 1.12+, we
should try and see (a sketch of the relevant kube-proxy knobs follows the
list):

   - if the various ipvs load-balancing algorithms provide better
   alternatives
   - if ipvs DSR makes any improvements
   - if refreshing the rules is faster than with iptables

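For context, this is the kind of upstream kube-proxy configuration we would
like to be able to run (sketch; the flags are upstream kube-proxy ones, not
OpenShift-specific, and the scheduler choice is illustrative):

# Run kube-proxy in ipvs mode with a least-connection scheduler; other
# IPVS schedulers include rr, sh, dh, sed and nq.
kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lc
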

The proxy implementation is usually tightly coupled to the network
> plugin implementation. Some network plugins use kube-proxy while others
> have their own internal load balancing implementation, like ovn-
> kubernetes.
>
> The largest issue we've seen with the iptables-based kube-proxy (as
> opposed to IPVS-based kube-proxy) is iptables contention, and since
> OVN's load-balancing/proxy implementation does not use iptables this is
> not a concern for OVN.
>
[DC]: @Dan - would you mind pointing me to the code which deals with OVN's
LB logic? I looked in [1] but I guess I'm missing something else (maybe
looking in the wrong repo)?

[1]
https://github.com/openshift/origin/blob/master/pkg/cmd/openshift-sdn/proxy.go

Independently of that, we are planning to have a standalone kube-proxy
> daemonset that 3rd party plugins (like Calico) can use which could be
> run in IPVS mode if needed:
>
> https://github.com/openshift/release/pull/3711
>
> [DC]: I guess this is based on [2], and if so, would you mind (for my own
curiosity) helping me understand the difference between the OpenShiftSDN
and OVNKubernetes networkTypes? What new problems does the new
OVNKubernetes type solve?

[2] https://github.com/ovn-org/ovn-kubernetes

That's waiting on Clayton for an LGTM for the mirroring bits (hint hint
> :)
>
> Dan
>
> > > On Jun 8, 2019, at 4:08 PM, Daniel Comnea 
> > > wrote:
> > >
> > > Hi,
> > >
> > > Are there any future plans in 4.x lifecycle to decouple kube-proxy
> > > from OVN and allow setting/ running K8s upstream kube-proxy in ipvs
> > > mode ?
> > >
> > > Cheers,
> > > Dani
> > > ___
> > > dev mailing list
> > > dev@lists.openshift.redhat.com
> > > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >
> >
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


[4.x]: thoughts on how folks should triage and open issues on the right repos?

2019-06-17 Thread Daniel Comnea
Hi,

In 3.x folks used to open issues on Origin/ openshift-ansible repos or BZ
if it was related to OCP.

In 4.x the game has changed a bit, in that we now have many repos, and so
my question is:

do you have any suggestion/preference on where folks should open issues,
and how will they know / be able to triage which issue goes into which git
repo?

Sometimes the installer repo is used as the main place to open issues;
that is not efficient, but then again I can understand why folks do it,
since it is the only interaction point they are aware of.

One suggestion I have would be to provide a mapping between the features in
v4 and the operators, as well as a dependency graph of all the operators.
Having that inside a GitHub issue template should help folks understand
which repo to open the issue on (it could be that not everyone will be
comfortable with it, but it's a start, I think).

Dani
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


[4.x]: understand the role/ scope of image registry operator

2019-06-17 Thread Daniel Comnea
Hi,

Initially, when I read the docs [1], I assumed that the image registry
operator's role is similar to what we used to have in 3.x: a simple
registry, should the user want to use it for images built with [2].

While I was playing with 4.1 I followed the steps mentioned in [3], because
without them the openshift-installer will not report the installation as
complete. Also, the CVO will not be in a healthy state, ready to pick up
new updates.
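
For reference, the step in [3] essentially boils down to giving the
operator somewhere to store images; on a cluster without usable storage the
non-production shortcut is (sketch, from memory of those docs):

# Non-production only: back the registry with ephemeral storage so the
# registry operator, and hence the CVO, can report Available.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'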

As such it seems that the image registry's scope is different than I
thought (and not documented yet; happy to follow up on the docs repo once I
figure it out with your help ;) ), and so my questions are:

   - are all the operator images bundled inside the release payload being
   stored on the image registry storage?
      - if not, is it only the CVO which needs to store its own release
      image?
   - any particular reason why there is no option to customize the size, so
   that it must be 100GB (as per the docs and the code base)?


Thank you,
Dani


[1]
https://docs.openshift.com/container-platform/4.1/registry/architecture-component-imageregistry.html

[2]
https://docs.openshift.com/container-platform/4.1/builds/understanding-image-builds.html
[3]
https://docs.openshift.com/container-platform/4.1/installing/installing_vsphere/installing-vsphere.html#installation-registry-storage-config_installing-vsphere
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [4.x]: understand the role/ scope of image registry operator

2019-06-17 Thread Daniel Comnea
Hi Ben,

Thanks for taking the time to respond; please see below.



On Mon, Jun 17, 2019 at 3:50 PM Ben Parees  wrote:

>
>
> On Mon, Jun 17, 2019 at 6:44 AM Daniel Comnea 
> wrote:
>
>> Hi,
>>
>> Initially when i read the docs [1] i assumed that image registry
>> operator's role is similar to what we used to have in 3.x - a simple
>> registry should the user want to use it for images built with [2]
>>
>
> The registry in 3.x and the registry in 4.x serve the same purpose. The
> registry itself is the same.  The difference is that in 3.x the registry
> was deployed/managed by the ansible installer + the admin making direct
> edits to the registry deploymentconfig and using the "oc adm registry"
> command.
>
> In 4.x, the registry is deployed/managed by the registry operator and the
> admin asserts desired config by editing the registry operator's config
> resource.
>
> In your case the registry was not initially available because on vsphere
> there is no valid storage available, so the operator cannot default the
> storage configuration.  Thus is reports unavailable until the admin takes
> action to configure the storage properly.
>
>
>
>>
>> While i was playing with 4.1 i've followed the steps mentioned in [3]
>> because w/o it the openshift-installer will not report as installation
>> complete. Also the CVO will not be in a healthy state ready to pick up new
>> updates.
>>
>> As such it seems that the image registry scope is different (and not
>> documented yet, happy to follow up on docs repo once i figure out with your
>> help ;) ) than i thought and so my questions are:
>>
>>- are all the operator images bundled inside the release payload
>>being stored on the image registry storage?
>>
>>
> No.
>
 [DC]: Great; I guess having the valuable info from [1] in the docs would
very much help admin folks. My $0.002.

[1] https://github.com/openshift/cluster-version-operator/pull/201

>
>
>>- if not then is it only CVO which needs to store its own release
>>   image ?
>>
>>
> The registry doesn't store any images needed by the openshift.  The reason
> the CVO is complaining is because one of the operators (in this case the
> registry operator) is not reporting available.  You'd experience the same
> thing if any other platform operator was reporting unavailable, it's not
> specific to a dependency on the registry.
>
>
>
>>
>>-
>>- any particular reason why there is no option to customize the size
>>and so it must be 100GB size (as per the docs and the code base) ?
>>
>>
> The docs are a bit unclear but what it is saying is that you must define a
> 100gig PV because that is the size of volume that the PVC created by the
> registry operator will require.  So if you don't have a 100gig PV, the PVC
> will not be able to find a matching volume.  (Adam/Oleg we should probably
> clarify and/or explain that prereq)
>
> That is simply a default that we chose for the PVC the registry operator
> automatically creates.  If you want to use a different sized volume, then
> you simply need to create your own PVC (and PV) and point the registry
> operator to the PVC you want to use, instead of letting the registry
> operator create its own PVC.
>
> [DC]: Ah right! It definitely helps if you know each operator's scope ;)
Looking at [1], I can see that a PVC name or a StorageClass created outside
does the work. Thanks again, Ben.

[1]
https://github.com/openshift/cluster-image-registry-operator/blob/master/pkg/storage/pvc/pvc.go
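
Putting Ben's suggestion together, a hedged sketch (the claim name, size
and access mode are illustrative; only the spec.storage.pvc.claim field is
taken from the operator's config API):

# Pre-create a right-sized claim in the registry namespace...
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
  namespace: openshift-image-registry
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 40Gi
EOF
# ...then point the registry operator at it instead of letting it create
# its own 100Gi claim.
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"storage":{"pvc":{"claim":"registry-storage"}}}}'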

>
>
>>
>> Thank you,
>> Dani
>>
>>
>> [1]
>> https://docs.openshift.com/container-platform/4.1/registry/architecture-component-imageregistry.html
>>
>> [2]
>> https://docs.openshift.com/container-platform/4.1/builds/understanding-image-builds.html
>> [3]
>> https://docs.openshift.com/container-platform/4.1/installing/installing_vsphere/installing-vsphere.html#installation-registry-storage-config_installing-vsphere
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
>
> --
> Ben Parees | OpenShift
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OKD 4 - A Modest Proposal

2019-06-26 Thread Daniel Comnea
Sorry for dropping the mailer by mistake; not intentional.
PSB in blue.

On Wed, Jun 26, 2019 at 10:14 PM Daniel Comnea 
wrote:

>
>
> On Wed, Jun 26, 2019 at 6:09 PM Colin Walters  wrote:
>
>>
>>
>> On Thu, Jun 20, 2019, at 5:20 PM, Clayton Coleman wrote:
>>
>>
>> > Because the operating system integration is so critical, we need to
>> > make sure that the major components (ostree, ignition, and the kubelet)
>> > are tied together in a CoreOS distribution that can be quickly
>> > refreshed with OKD - the Fedora CoreOS project is close to being ready
>> > for integration in our CI, so that’s a natural place to start. That
>> > represents the biggest technical obstacle that I’m aware of to get our
>> > first versions of OKD4 out (the CI systems are currently testing on top
>> > of RHEL CoreOS but we have PoCs of how to take an arbitrary ostree
>> > based distro and slot it in).
>>
>> The tricky thing here is...if we want this to work the same as OpenShift
>> 4/OCP
>> with RHEL CoreOS, then what we're really talking about here is a
>> *derivative*
>> of FCOS that for example embeds the kubelet from OKD.  And short term
>> it will need to use Ignition spec 2.  There may be other things I'm
>> forgetting.
>>
> [DC]: In addition to that, I think you need changes to the installer/MCO
> too, or am I wrong?
>
>
>> Concretely for example, OKDFCOS (to use the obvious if unwieldy acronym)
>> would need to have its own uploaded "bootimages" (i.e. AMIs, PXE media
>> etc)
>> that are own its own version number/lifecycle distinct from (but derived
>> from)
>> FCOS (and OKD).
>>
[DC]: Curious to understand why it can't be one single FCOS. What other
> avenues that FCOS is chasing would break by having OKD components baked
> in? If we are talking about a derivative, then I'd challenge that maybe a
> CentOS CoreOS based on RHCOS is the best bet, and that it could deprecate
> Project Atomic. Doing so, IMO (I could be missing some context here), will
> reduce any tension on the FCOS charter and will rapidly (hopefully) allow
> OKD to become a thing.
>
>>
>> This is completely possible (anything is in software) but the current
>> team is
>> working on a lot of things and introducing a 3rd stream for us to
>> maintain would
>> be a not at all small cost.  On the other hand, the benefit of doing so
>> (e.g.
>> early upstream kernel/selinux-policy/systemd/podman integration testing
>> with kubernetes/OKD) might be worth it alone.
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OKD 4 - A Modest Proposal

2019-06-28 Thread Daniel Comnea
On Fri, Jun 28, 2019 at 4:58 AM Clayton Coleman  wrote:

> > On Jun 26, 2019, at 1:08 PM, Colin Walters  wrote:
> >
> >
> >
> > On Thu, Jun 20, 2019, at 5:20 PM, Clayton Coleman wrote:
> >
> >
> >> Because the operating system integration is so critical, we need to
> >> make sure that the major components (ostree, ignition, and the kubelet)
> >> are tied together in a CoreOS distribution that can be quickly
> >> refreshed with OKD - the Fedora CoreOS project is close to being ready
> >> for integration in our CI, so that’s a natural place to start. That
> >> represents the biggest technical obstacle that I’m aware of to get our
> >> first versions of OKD4 out (the CI systems are currently testing on top
> >> of RHEL CoreOS but we have PoCs of how to take an arbitrary ostree
> >> based distro and slot it in).
> >
> > The tricky thing here is...if we want this to work the same as OpenShift
> 4/OCP
> > with RHEL CoreOS, then what we're really talking about here is a
> *derivative*
> > of FCOS that for example embeds the kubelet from OKD.  And short term
> > it will need to use Ignition spec 2.  There may be other things I'm
> forgetting.
>
> Or we have a branch of mcd that works with ignition 3 before the main
> branch switches.
>

[DC]: Wouldn't this be more than just the MCD? E.g. a change in the
installer too [1], to import the v3 spec and work with it.

[1]
https://github.com/openshift/installer/blob/master/pkg/asset/ignition/machine/node.go#L7

>
> I don’t know that it has to work exactly the same, but obviously the
> closer the better.
>
> >
> > Concretely for example, OKDFCOS (to use the obvious if unwieldy acronym)
> > would need to have its own uploaded "bootimages" (i.e. AMIs, PXE media
> etc)
> > that are own its own version number/lifecycle distinct from (but derived
> from)
> > FCOS (and OKD).
>
> Or it just pivots.  Pivots aren’t bad.
>
> >
> > This is completely possible (anything is in software) but the current
> team is
> > working on a lot of things and introducing a 3rd stream for us to
> maintain would
> > be a not at all small cost.  On the other hand, the benefit of doing so
> (e.g.
> > early upstream kernel/selinux-policy/systemd/podman integration testing
> > with kubernetes/OKD) might be worth it alone.
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


[v4]: v4.1.4 is using internal registry, is this a bug?

2019-07-18 Thread Daniel Comnea
Hi,

Trying a fresh deployment, downloading a new pull secret together with the
installer from try.openshift.com, I end up with the bootstrap node in a
failure state caused by:

*error pulling image
> "registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
> ":
> unable to pull
> registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
> :
> unable to pull image: Error determining manifest MIME type for
> docker://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
> :
> Error reading manifest
> sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a in
> registry.svc.ci.openshift.org/ocp/release
> : unauthorized:
> authentication required*
>

Manually retrying from within the bootstrap node to check if it works, I
get the same negative result:

*skopeo inspect --authfile /root/.docker/config.json
> docker://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
> *
>

Switching to installer 4.1.0/4.1.3/4.1.6 with the same pull secret, I am
able to get the bootstrap node up, with its MCO and the other pods running.

One interesting bit is that 4.1.4 points to a release image hosted
internally

>
>
>
> *./openshift-install version./openshift-install
> v4.1.4-201906271212-dirtybuilt from commit
> bf47826c077d16798c556b1bd143a5bbfac14271release image
> registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
> *
>

but 4.1.6 for example points to quay.io (as expected).
Is this a bug?
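
A sketch of how to compare what each installer binary pins, without running
a full install (assuming the usual public quay.io pullspec):

# What does this binary pull?
./openshift-install version | grep 'release image'
# Inspect a release image directly.
oc adm release info quay.io/openshift-release-dev/ocp-release:4.1.4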

On a slightly different note, it would be nice to update try.openshift.com
to the latest stable 4.1 release, which is .6 and not .4.

Cheers,
Dani
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [v4]: v4.1.4 is using internal registry, is this a bug?

2019-07-18 Thread Daniel Comnea
On Thu, Jul 18, 2019 at 10:08 PM Clayton Coleman 
wrote:

> We generally bump "latest" symlink once it's in the stable channel, which
> 4.1.6 is not in.  4.1.6 is still considered pre-release.
>
[DC]: I looked at [1] and so I assumed it is stable, since it is part of
the 4-stable section.

[1]
https://openshift-release.svc.ci.openshift.org/releasestream/4-stable/release/4.1.6

>
> For your first error message, which installer binary were you using?  Can
> you link to it directly?
>
[DC]: Sure thing; I downloaded from try.openshift.com, which took me to
https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.1.4/openshift-install-mac-4.1.4.tar.gz

>
> On Thu, Jul 18, 2019 at 3:55 PM Daniel Comnea 
> wrote:
>
>> Hi,
>>
>> Trying a new fresh deployment by downloading a new secret together with
>> the installer from try.openshift.com i end up in a failure state with
>> bootstrap node caused by
>>
>> *error pulling image
>>> "registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
>>> <http://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a>":
>>> unable to pull
>>> registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
>>> <http://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a>:
>>> unable to pull image: Error determining manifest MIME type for
>>> docker://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
>>> <http://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a>:
>>> Error reading manifest
>>> sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a in
>>> registry.svc.ci.openshift.org/ocp/release
>>> <http://registry.svc.ci.openshift.org/ocp/release>: unauthorized:
>>> authentication required*
>>>
>>
>> Manually trying from withing bootstrap node to check if it works .. i get
>> same negative result
>>
>> *skopeo inspect --authfile /root/.docker/config.json
>>> docker://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
>>> <http://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a>*
>>>
>>
>> Switching to installer 4.1.0/ 4.1.3/ 4.1.6 with the same pull secret am
>> able to get bootstrap node up with is MCO and the other pods up.
>>
>> One interesting bit is that 4.1.4 points to a release image hosted
>> internally
>>
>>>
>>>
>>>
>>> ./openshift-install version
>>> ./openshift-install v4.1.4-201906271212-dirty
>>> built from commit bf47826c077d16798c556b1bd143a5bbfac14271
>>> release image registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
>>>
>>
>> but 4.1.6 for example points to quay.io (as expected).
>> Is this a bug?
>>
>> On a slightly different note, would be nice to update try.openshift.com
>> to latest stable 4.1 release which is .6 and not .4.
>>
>> Cheers,
>> Dani
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [v4]: v4.1.4 is using internal registry, is this a bug?

2019-07-18 Thread Daniel Comnea
On Thu, Jul 18, 2019 at 11:35 PM Clayton Coleman 
wrote:

>
>
> On Jul 18, 2019, at 6:24 PM, Daniel Comnea  wrote:
>
>
>
> On Thu, Jul 18, 2019 at 10:08 PM Clayton Coleman 
> wrote:
>
>> We generally bump "latest" symlink once it's in the stable channel, which
>> 4.1.6 is not in.  4.1.6 is still considered pre-release.
>>
> [DC]: i looked [1] and so i assumed is stable since is part of 4-stable
> section.
>
> [1]
> https://openshift-release.svc.ci.openshift.org/releasestream/4-stable/release/4.1.6
>
>
> That page has nothing to do with officially going into stable channels.
> If the cluster doesn’t show the update or “latest” doesn’t point to it it’s
> not stable yet.
>
[DC]: thank you for the info, learnt something new. (one day I'll learn how
this promotion works and how it is triggered)


>
>
>>
>> For your first error message, which installer binary were you using?  Can
>> you link to it directly?
>>
> [DC]: sure thing, i've downloaded from try.openshift.com which took me to
> https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.1.4/openshift-install-mac-4.1.4.tar.gz
>
>
> Are you positive you were installing from that binary?  I just double
> checked locally and that listed a quay.io release image.
>
[DC]: I'm no longer positive, sorry for the noise and for wasting your
time. :facepalm

[dani-dev in ~/Downloads]# wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.1.4/openshift-install-mac-4.1.4.tar.gz
--2019-07-18 23:41:46--  https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.1.4/openshift-install-mac-4.1.4.tar.gz
Resolving mirror.openshift.com (mirror.openshift.com)... 54.172.163.83
Connecting to mirror.openshift.com (mirror.openshift.com)|54.172.163.83|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 56179054 (54M) [application/x-gzip]
Saving to: 'openshift-install-mac-4.1.4.tar.gz'
openshift-install-mac-4.1.4.tar.gz  100%[==>]  53.58M  7.67MB/s  in 10s
2019-07-18 23:41:57 (5.21 MB/s) - 'openshift-install-mac-4.1.4.tar.gz' saved [56179054/56179054]
[dani-dev in ~/Downloads]# md5sum openshift-install-mac-4.1.4.tar.gz
71fd99d94dc1062a52d43b2ca83900fa  openshift-install-mac-4.1.4.tar.gz
[dcomnea@DCOMNEA-M-M29F in ~/Downloads]# tar xzvf openshift-install-mac-4.1.4.tar.gz
x README.md
x openshift-install
[dani-dev in ~/Downloads]# md5sum openshift-install
dedb9c8d0c66861ebb3e620e32cef438  openshift-install
[dani-dev in ~/Downloads]# ./openshift-install version
./openshift-install v4.1.4-201906271212-dirty
built from commit ca5d270bc8d554496f0aa1073bdaa67adf01a25a
release image quay.io/openshift-release-dev/ocp-release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
[dani-dev in ~/Downloads]#


>> On Thu, Jul 18, 2019 at 3:55 PM Daniel Comnea 
>> wrote:
>>
>>> Hi,
>>>
>>> Trying a new fresh deployment by downloading a new secret together with
>>> the installer from try.openshift.com i end up in a failure state with
>>> bootstrap node caused by
>>>
>>> *error pulling image
>>>> "registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
>>>> <http://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a>":
>>>> unable to pull
>>>> registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
>>>> <http://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a>:
>>>> unable to pull image: Error determining manifest MIME type for
>>>> docker://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a
>>>> <http://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a>:
>>>> Error reading manifest
>>>> sha256:a6c177

Re: Follow up on OKD 4

2019-07-19 Thread Daniel Comnea
Hi Christian,

Welcome and thanks for volunteering on kicking off this effort.

My vote goes to the #openshift-dev slack too; the OpenShift Commons Slack scope
was/is a bit different, geared towards ISVs.

IRC - personally I have no problem with it, however the chances to attract more
folks (especially non-RH employees) who might be willing to help grow the
OKD community are higher on slack.

On Fri, Jul 19, 2019 at 9:33 PM Christian Glombek 
wrote:

> +1 for using kubernetes #openshift-dev slack for the OKD WG meetings
>
>
> On Fri, Jul 19, 2019 at 6:49 PM Clayton Coleman 
> wrote:
>
>> The kube #openshift-dev slack might also make sense, since we have 518
>> people there right now
>>
>> On Fri, Jul 19, 2019 at 12:46 PM Christian Glombek 
>> wrote:
>>
>>> Hi everyone,
>>>
>>> first of all, I'd like to thank Clayton for kicking this off!
>>>
>>> As I only just joined this ML, let me quickly introduce myself:
>>>
>>> I am an Associate Software Engineer on the OpenShift
>>> machine-config-operator (mco) team and I'm based out of Berlin, Germany.
>>> Last year, I participated in Google Summer of Code as a student with
>>> Fedora IoT and joined Red Hat shortly thereafter to work on the Fedora
>>> CoreOS (FCOS) team.
>>> I joined the MCO team when it was established earlier this year.
>>>
>>> Having been a Fedora/Atomic community member for some years, I'm a
>>> strong proponent of using FCOS as base OS for OKD and would like to see it
>>> enabled :)
>>> As I work on the team that looks after the MCO, which is one of the
>>> parts of OpenShift that will need some adaptation in order to support
>>> another base OS, I am confident I can help with contributions there
>>> (of course I don't want to shut the door for other OSes to be used as
>>> base if people are interested in that :).
>>>
>>> Proposal: Create WG and hold regular meetings
>>>
>>> I'd like to propose the creation of the OKD Working Group that will hold
>>> bi-weekly meetings.
>>> (or should we call it a SIG? Also open to suggestions to find the right
>>> venue: IRC?, OpenShift Commons Slack?).
>>>
>>> I'll survey some people in the coming days to find a suitable meeting
>>> time.
>>>
>>> If you have any feedback or suggestions, please feel free to reach out,
>>> either via this list or personally!
>>> I can be found as lorbus on IRC/Fedora, @lorbus42 on Twitter, or simply
>>> via email :)
>>>
>>> I'll send out more info here ASAP. Stay tuned!
>>>
>>> With kind regards
>>>
>>> CHRISTIAN GLOMBEK
>>> Associate Software Engineer
>>>
>>> Red Hat GmbH, registred seat: Grassbrunn
>>> Commercial register: Amtsgericht Muenchen, HRB 153243
>>> Managing directors: Charles Cachera, Michael O'Neill, Thomas Savage, Eric 
>>> Shander
>>>
>>>
>>>
>>> On Wed, Jul 17, 2019 at 10:45 PM Clayton Coleman 
>>> wrote:
>>>
 Thanks for everyone who provided feedback over the last few weeks.
 There's been a lot of good feedback, including some things I'll try to
 capture here:

 * More structured working groups would be good
 * Better public roadmap
 * Concrete schedule for OKD 4
 * Concrete proposal for OKD 4

 I've heard generally positive comments about the suggestions and
 philosophy in the last email, with a desire for more details around what
 the actual steps might look like, so I think it's safe to say that the idea
 of "continuously up to date Kubernetes distribution" resonated.  We'll
 continue to take feedback along this direction (private or public).

 Since 4 was the kickoff for this discussion, and with the recent
 release of the Fedora CoreOS beta (
 https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/) 
 figuring
 prominently in the discussions so far, I got some volunteers from that team
 to take point on setting up a working group (SIG?) around the initial level
 of integration and drafting a proposal.

 Steve and Christian have both been working on Fedora CoreOS and
 graciously agreed to help drive the next steps on Fedora CoreOS and OKD
 potential integration into a proposal.  There's a rough level draft doc
 they plan to share - but for now I will turn this over to them and they'll
 help organize time / forum / process for kicking off this effort.  As that
 continues, we'll identify new SIGs to spawn off as necessary to cover other
 topics, including initial CI and release automation to deliver any
 necessary changes.

 Thanks to everyone who gave feedback, and stay tuned here for more!

>>> ___
>>> users mailing list
>>> us...@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

Re: Follow up on OKD 4

2019-07-21 Thread Daniel Comnea
On Sun, Jul 21, 2019 at 5:27 PM Clayton Coleman  wrote:

>
>
> On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
>
>> Once upon a time Freenode #openshift-dev was vibrant with loads of
>> activity and publicly available logs. I jumped in asked questions and Red
>> Hatters came from the woodwork and some amazing work was done.
>>
>> Perfect.
>>
>> Slack not so much. Since Monday there have been three comments with two
>> reply threads. All this with 524 people. Crickets.
>>
>> Please explain how this is better. I’d really love to know why IRC
>> ceased. It worked and worked brilliantly.
>>
>
> Is your concern about volume or location (irc vs slack)?
>
> Re volume: It should be relatively easy to move some common discussion
> types into the #openshift-dev slack channel (especially triage / general
> QA) that might be distributed to other various slack channels today (both
> private and public), and I can take the follow up to look into that.  Some
> of the volume that was previously in IRC moved to these slack channels, but
> they're not anything private (just convenient).
>
> Re location:  I don't know how many people want to go back to IRC from
> slack, but that's a fairly easy survey to do here if someone can volunteer
> to drive that, and I can run the same one internally.  Some of it is
> inertia - people have to be in slack sig-* channels - and some of it is
> preference (in that IRC is an inferior experience for long running
> communication).
>
[DC]: I've already reached out to Christian over the weekend and we are
going to have a 1:1 early next week to sort out some logistics; hopefully
we'll have more to share mid next week in terms of survey comms and the
process moving forward.


>
>>
>> There are mentions of sigs and bits and pieces, but absolutely no
>> progress. I fail to see why anyone would want to regress. OCP4 maybe
>> brilliant, but as I said in a private email, without upstream there is no
>> culture or insurance we’ve come to love from decades of heart and soul.
>>
>> Ladies and gentlemen, this is essentially getting to the point the
>> community is being abandoned. Man years of work acknowledged with the
>> roadmap pulled out from under us.
>>
>
> I don't think that's a fair characterization, but I understand why you
> feel that way and we are working to get the 4.x work moving.  The FCoS team
> as mentioned just released their first preview last week, I've been working
> with Diane and others to identify who on the team is going to take point on
> the design work, and there's a draft in flight that I saw yesterday.  Every
> component of OKD4 *besides* the FCoS integration is public and has been
> public for months.
>
[DC]: Clayton, was that draft you mentioned circulated internally or is it
publicly available?


> I do want to make sure we can get a basic preview up as quickly as
> possible - one option I was working on with the legal side was whether we
> could offer a short term preview of OKD4 based on top of RHCoS.  That is
> possible if folks are willing to accept the terms on try.openshift.com in
> order to access it in the very short term (and then once FCoS is available
> that would not be necessary).  If that's an option you or anyone on this
> thread are interested in please let me know, just as something we can do to
> speed up.
>
>
[DC]: my suggestion is that we should hold off on this at least until we get
the SIG and the meetings going, so we can have an open debate with the
folks who are willing to stick around and help out. Once we get a quorum
we can then ask for a waiver on OKD v4 with RHCoS.



>> I completely understand the disruption caused by the acquisition. But,
>> after kicking the tyres and our meeting a few weeks back, it’s been pretty
>> quiet. The clock is ticking on corporate long-term strategies. Some of
>> those corporates spent plenty of dosh on licensing OCP and hiring
>> consultants to implement.
>>
>
>> Red Hat need to lead from the front. Get IRC revived, throw us a bone,
>> and have us put our money where our mouth is — we’ll get involved. We’re
>> begging for it.
>>
>> Until then we’re running out of patience via clientele and will need to
>> start a community effort perhaps by forking OKD3 and integrating upstream.
>> I am not interested in doing that. We shouldn’t have to.
>>
>
> In the spirit of full transparency, FCoS integrated into OKD is going to
> take several months to get to the point where it meets the quality bar I'd
> expect for OKD4.  If that timeframe doesn't work for folks, we can
> definitely consider other options like having RHCoS availability behind a
> terms agreement, a franken-OKD without host integration (which might take
> just as long to get and not really be a step forward for folks vs 3), or
> other, more dramatic options.  Have folks given FCoS a try this week?
> https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/.
> That's a great place to get started
>
> As always PRs and fixes to 3.x will contin

Re: Follow up on OKD 4

2019-07-22 Thread Daniel Comnea
On Mon, Jul 22, 2019 at 8:52 AM Justin Cook  wrote:

> On 22 Jul 2019, 00:07 +0100, Gleidson Nascimento , wrote:
>
> I'm with Daniel, I believe it is easier to attract help by using Slack
> instead of IRC.
>
>
> My experience over many years — especially with OCP3 — IRC with public
> logs smashes Slack. It’s not comparable. The proof is in the pudding.
> Compare the public IRC logs with the Slack channel.
>
> The way I see it is we should practice openness in everything. Slack is
> proprietary. Google does not index the logs. That’s a full stop for me. As
> a matter of fact, many others agree. Just search it. The most disappointing
> thing is for over two decades *open *IRC has been used with *open*
> mailing lists and *open* documentation with a new trend of using fancy
> (arguably not) things that own the data we produce and we have to pay them
> to index it for us and in the end it’s not publicly available — see a theme
> emerging?
>
> So go ahead and have your Slack with three threads per week and we’ll see
> if your *belief* stays the same. The wide open searchable public IRC is
> the heavyweight champion that’s never let us down. As a matter of fact,
> being completely open helped build OCP3 and we all know how that worked
> out.
>
*Justin* - let me provide some info from my side; I'm not trying to get
into a personal religious debate here, however I think we need to
acknowledge a few things:

you saying:


   - *IRC with public logs smashes Slack*
   - *Slack is proprietary. Google does not index the logs.*

My response:

I totally agree with that, but let's do a quick reality check taking some IRC
channels as examples, shall we?


   - the ansible IRC channel doesn't log the conversation - do the comments
   [1] and [2] resonate with you? They do for me, and that is a huge -1 from my
   side.
   - the centos-devel/ centos channels don't log the conversation. That said,
   the centos meetings (i.e. PaaS SIG) do get logged, per SIG.
   That in itself is very useful, however as a guy who consumed the output
   for the last year as PaaS SIG chair/ member I will say it is not appealing to
   go over the output if a meeting had high traffic (same way as with a 6
   hour meeting recording - will you watch it from A to Z? ;) )
   - fedora-coreos does log [3], but if I turn up every morning to see what
   has been discussed I see a lot of noise caused by join/leave events
   - the openshift/ openshift-dev channels had something on [4], but does it
   still work?


All I'm trying to say with the above is:

Should we go with IRC as a form of communication, we should then be ready to
have bodies lined up to:


   - look after and admin the IRC channels.
   - enable the IRC channel logs and also filter out the noise so the output
   is consumable (not just stream the logs somewhere and tick the box)

In addition to the channel logs, my main requirement is to access the IRC
channels from any device and not lose track of what has been discussed.
A respected gentleman involved in various open source projects once wrote
[5], and so with that I'd say:

   - who will take on board the setup so everyone can benefit from it?


If you swing to slack, I'd say:

   - K8s slack is free in that neither you nor I/ others pay for it, and
   everyone can join there
   - the OpenShift Commons slack is also free, RH is paying the bill
   (another investment from their side), however as said, Diane set up that
   place initially with a different scope.
   - once you are logged in you can scroll back many months in the past
   - you get the ability to share code snippets -> in IRC you don't. You could
   argue that folks can use github gist or any pastebin service, however the
   content can be deleted/ expire and so we are back to square one


[1] https://github.com/ansible/community/issues/242#issuecomment-334239958
[2] https://github.com/ansible/community/issues/242#issuecomment-336890994
[3] https://echelog.com/logs/browse/fedora-coreos/1563746400
[4] https://botbot.me/freenode/openshift-dev/
[5]
https://doughellmann.com/blog/2015/03/12/deploying-nested-znc-services-with-ansible/

you are also saying:

   - *Slack with three threads per week*

How is the traffic on the fedora-coreos OR centos-devel channels? Have
you seen high volume?

I think it is unfair to say that; in reality even on the mentioned IRC
channels we don't see much traffic.  #ansible is an exception, but that is
because the ansible core devs (no idea how many are RH employees vs the rest)
do hang around there.

In the end I think we need to take a step back and ask ourselves:

   - who is involved in OKD?
  - who is contributing - with tests, integration, docs, logistics etc
  etc (if I can use an analogy - *helping produce the wine*)
  - who is consuming it (the analogy - *consuming/ drinking the wine*)
   - what is the scope of OKD based on the resources available?
  - does OKD afford/ have capacity for an infra team to look after the
  tools? any volunteers? :)
  - 

Re: Follow up on OKD 4

2019-07-24 Thread Daniel Comnea
On Mon, Jul 22, 2019 at 4:02 PM Justin Cook  wrote:

> On 22 Jul 2019, 12:24 +0100, Daniel Comnea , wrote:
>
> I totally agree with that but let's do a quick reality check taking
> example some IRC channels, shall we?
>
>- ansible IRC channel doesn't log the conversation - does the comments
>[1] and [2] resonate with you? It does for me and that is a huge -1 from my
>side.
>
>
> Yes that’s most unfortunate for #ansible.
>
>
>- centos-devel/ centos channels doesn't log the conversation. Saying
>that for the centos meetings (i.e PaaS SIG) it get logged and is per SIG.
>That in itself is very useful however as a guy who was consuming the output
>for the last year as PaaS SIG chair/ member i will say is not appealing to
>go over the output if a meeting had high traffic (same way if you have a 6
>hour meeting recording, will you watch it from A to Z ? ;) )
>- fedora-coreos it does log [3] but if i'll turn every morning and see
>what has been discussed you see a lot of noise caused by who join/leave
>
>
>
> #centos and #fedora could most certainly do better. We’re getting on to
> three months after RHEL8 release and no hint of CentOS8.
>
[DC]: I think it is a bit unfair to say that; the info is out - see [1],
[2] and [3]

[1] https://blog.centos.org/2019/05/centos-8-0-1905-build-status/
[2] https://blog.centos.org/2019/06/centos-8-status-17-june-2019/
[3] https://wiki.centos.org/About/Building_8


>- openshift/ openshift-dev channels had something on [4] but does it
>still works ?
>
>
> This is one point of my complaint.
>
>
>
> All i'm trying to say with the above is:
>
> Should we go with IRC as a form of communication we should then be ready
> to have bodies lined up to:
>
>- look after and admin the IRC channels.
>- enable the IRC log channels and also filter out the noise to be
>consumable (not just stream the logs somewhere and tick the box)
>
>
> Easy enough. It’s been done time and again. Let’s give it a whirl. Since
> I’m the one complaining perhaps I can put my name in for consideration.
>
[DC]: I understood not everyone is okay with logging any activity due to
GDPR, so I think this goes off the table

>
>
> In addition to the channel logs, my main requirement is to access the IRC
> channels from any device and not lose track of what has been discussed.
> A respected gentlemen involved in various opensource projects once wrote
> [5] and so with that i'd say:
>
>- who will take on board the setup so everyone can benefit from it?
>
>
> https://www.irccloud.com/irc/freenode
> https://matrix.org/faq
>
> Again some options here, but most certainly doable with a little effort.
> #openshift-dev is advertised all over the place.
> https://www.okd.io/#contribute
>
> If you swing to slack, i'd say:
>
>- K8s slack is free in that neither you nor i/ others pay for it and
>everyone can join there
>- OpenShift Common slack channel is also free, RH is paying the bill
>(another investment from their side) however as said Diane setup up that
>place initially with a different scope.
>- once you logged in you can scroll back many months in the past
>- you get ability to share code snippet -> in IRC you don't. You could
>argue that folks can use github gist or any pastebin service however the
>content can be deleted/ expired and so we go back to square one
>
>
> Slack logs are not indexed by search engines. This prevents me from
> supporting it in its entirety. People have been sharing code snippets for
> decades on IRC. And, it’s worked fantastic. Just from my personal
> experience of Slack absorbing or repelling so much energy and collaboration
> from the community — of which no one can explain really — I don’t see it as
> a viable option given we have the numbers in front of us from this very
> project which undeniably shows it doesn’t work.
>
>
> [1] https://github.com/ansible/community/issues/242#issuecomment-334239958
> [2] https://github.com/ansible/community/issues/242#issuecomment-336890994
> [3] https://echelog.com/logs/browse/fedora-coreos/1563746400
> [4] https://botbot.me/freenode/openshift-dev/
> [5]
> https://doughellmann.com/blog/2015/03/12/deploying-nested-znc-services-with-ansible/
>
> you also saying
>
>- *Slack with three threads per week*
>
> How is the traffic on fedora-coreos OR centos-devel channels going? Have
> you seen high volume ?
>
>
> Why do you mention other projects and their traffic? #openshift-dev had
> incredible amounts of traffic which helped make it a success. Different
> channels have different attendance depe

Re: Follow up on OKD 4

2019-07-25 Thread Daniel Comnea
On Thu, Jul 25, 2019 at 5:01 PM Michael Gugino  wrote:

> I don't really view the 'bucket of parts' and 'complete solution' as
> competing ideas.  It would be nice to build the 'complete solution'
> from the 'bucket of parts' in a reproducible, customizable manner.
> "How is this put together" should be easily followed, enough so that
> someone can 'put it together' on their own infrastructure without
> having to be an expert in designing and configuring the build system.
>
> IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
> the openshift-specific bits from source, I could point at any
> repository I wanted, I could point to any image registry I wanted, I
> could use any distro I wanted.  I could replace the parts I wanted to;
> or I could just run it as-is from the published sources and not worry
> about replacing things.  I even built Fedora Atomic host rpm-trees
> with all the kublet bits pre-installed, similar to what we're doing
> with CoreOS now in 3.x.  It was a great experience, building my own
> system images and running updates was trivial.
>
> I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
> of flexibility and easy to use tooling.
>
So maybe what we are asking here is:

   - opinionated OCP 4 philosophy => OKD 4 + FCOS (IPI and UPI) using
   ignition, CVO etc
   - DIY kube philosophy, reusing as many v4 components as possible but with
   your own preferred operating system


In terms of approach and priority, I think it is fair to adopt a baby-steps
approach where:

   - phase 1 = try to get OKD 4 + FCOS out asap so folks can start building up
   the knowledge around operating the new solution in a full production env
   - phase 2 = once the experience/ knowledge has been built up, we can crack
   on with reverse engineering and see what we can swap etc.





> On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman 
> wrote:
> >
> > > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic <
> openshift-li...@me2digital.com> wrote:
> > >
> > > HI.
> > >
> > >> Am 25.07.2019 um 06:52 schrieb Michael Gugino:
> > >> I think FCoS could be a mutable detail.  To start with, support for
> > >> plain-old-fedora would be helpful to make the platform more portable,
> > >> particularly the MCO and machine-api.  If I had to state a goal, it
> > >> would be "Bring OKD to the largest possible range of linux distros to
> > >> become the defacto implementation of kubernetes."
> > >
> > > I agree here with Michael. As FCoS or in general CoS looks technical a
> good idea
> > > but it limits the flexibility of possible solutions.
> > >
> > > For example when you need to change some system settings then you will
> need to
> > > create a new OS Image, this is not very usable in some environments.
> >
> > I think something we haven’t emphasized enough is that openshift 4 is
> > very heavily structured around changing the cost and mental model
> > around this.  The goal was and is to make these sorts of things
> > unnecessary.  Changing machine settings by building golden images is
> > already the “wrong” (expensive and error prone) pattern - instead, it
> > should be easy to reconfigure machines or to launch new containers to
> > run software on those machines.  There may be two factors here at
> > work:
> >
> > 1. Openshift 4 isn’t flexible in the ways people want (Ie you want to
> > add an rpm to the OS to get a kernel module, or you want to ship a
> > complex set of config and managing things with mcd looks too hard)
> > 2. You want to build and maintain these things yourself, so the “just
> > works” mindset doesn’t appeal.
> >
> > The initial doc alluded to the DIY / bucket of parts use case (I can
> > assemble this on my own but slightly differently) - maybe we can go
> > further now and describe the goal / use case as:
> >
> > I want to be able to compose my own Kubernetes distribution, and I’m
> > willing to give up continuous automatic updates to gain flexibility in
> > picking my own software
> >
> > Does that sound like it captures your request?
> >
> > Note that a key reason why the OS is integrated is so that we can keep
> > machines up to date and do rolling control plane upgrades with no
> > risk.  If you take the OS out of the equation the risk goes up
> > substantially, but if you’re willing to give that up then yes, you
> > could build an OKD that doesn’t tie to the OS.  This trade off is an
> > important one for folks to discuss.  I’d been assuming that people
> > *want* the automatic and safe upgrades, but maybe that’s a bad
> > assumption.
> >
> > What would you be willing to give up?
> >
> > >
> > > It would be nice to have the good old option to use the ansible
> installer to
> > > install OKD/Openshift on other Linux distribution where ansible is
> able to run.
> > >
> > >> Also, it would be helpful (as previously stated) to build communities
> > >> around some of our components that might not have a place in the
> > >> official kubernetes, but are valuable downstream components
> > >> nevertheless

[3.x]: openshift router and its own metrics

2019-08-15 Thread Daniel Comnea
Hi,

I would appreciate it if anyone could confirm that my understanding is
correct w.r.t. the way the router haproxy image [1] is built.
Am I right to assume that the image [1] is built as it's seen, without
any other layer being added to include [2]?
Also, am I right to say the haproxy metrics [2] are part of the origin
package?


A bit of background/ context:

a while back on OKD 3.7 we had to swap the openshift 3.7.2 router image
with 3.10 because we were seeing some problems with the reload, and so we
wanted to take advantage of the native haproxy 1.8 reload feature to stop
affecting the traffic.

While everything was nice and working okay, we've noticed recently that the
haproxy stats do slowly increase, and we wonder if this is an
accumulation caused (maybe?) by the reloads. Now I'm aware of a change
made [3], however I suspect that it is not part of the 3.10 image, hence
my question to double check whether my understanding is wrong or not.
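
For reference, this is roughly how we've been sampling the stats between
reloads - a minimal sketch, assuming the router's default stats port (1936)
and the STATS_USERNAME/ STATS_PASSWORD values from the router deployment env:

    curl -s -u "$STATS_USERNAME:$STATS_PASSWORD" \
        http://<router-host>:1936/metrics | grep haproxy_server_http

Grepping for one specific series and watching it across a few reloads makes
the slow accumulation easy to spot.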


Cheers,
Dani

[1]
https://github.com/openshift/origin/tree/release-3.10/images/router/haproxy
[2] https://github.com/openshift/origin/tree/release-3.10/pkg/router/metrics
[3]
https://github.com/openshift/origin/commit/8f0119bdd9c3b679cdfdf2962143435a95e08eae#diff-58216897083787e1c87c90955aabceff
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [3.x]: openshift router and its own metrics

2019-08-15 Thread Daniel Comnea
On Thu, Aug 15, 2019 at 3:30 PM Dan Mace  wrote:

>
>
> On Thu, Aug 15, 2019 at 10:03 AM Daniel Comnea 
> wrote:
>
>> Hi,
>>
>> Would appreciate if anyone can please confirm that my understanding is
>> correct w.r.t the way the router haproxy image [1] is built.
>> Am i right to assume that the image [1] is is built as it's seen without
>> any other layer being added to include [2] ?
>> Also am i right to say the haproxy metrics [2] is part of the origin
>> package ?
>>
>>
>> A bit of background/ context:
>>
>> a while back on OKD 3.7 we had to swap the openshift 3.7.2 router image
>> with 3.10 because we were seeing some problems with the reload and so we
>> wanted to take the benefit of the native haproxy 1.8 reload feature to stop
>> affecting the traffic.
>>
>> While everything was nice and working okay we've noticed recently that
>> the haproxy stats do slowly increase and we do wonder if this is an
>> accumulation or not cause (maybe?) by the reloads. Now i'm aware of a
>> change made [3] however i suspect that is not part of the 3.10 image hence
>> my question to double check if my understanding is wrong or not.
>>
>>
>> Cheers,
>> Dani
>>
>> [1]
>> https://github.com/openshift/origin/tree/release-3.10/images/router/haproxy
>> [2]
>> https://github.com/openshift/origin/tree/release-3.10/pkg/router/metrics
>> [3]
>> https://github.com/openshift/origin/commit/8f0119bdd9c3b679cdfdf2962143435a95e08eae#diff-58216897083787e1c87c90955aabceff
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
> I think Clayton (copied) has the history here, but the nature of the
> metrics commit you referenced is that many of the exposed metrics points
> are counters which were being reset across reloads. The patch was (I think)
> to enable counter metrics to correctly accumulate across reloads.
>
> As to how the image itself is built, the pkg directory is part of the
> router controller code included with the image. Not sure if that answers
> your question.
>
[DC]: thank you Dan, it does answer the question. Out of curiosity, any
chance you can point me to the CI job which builds the image? Looking at
the Dockerfile [1] itself I couldn't work it out, hence my curiosity about
the missing part ;)

[1]
https://github.com/openshift/origin/blob/release-3.10/images/router/haproxy/Dockerfile

-- 
>
> Dan Mace
>
> Principal Software Engineer, OpenShift
>
> Red Hat
>
> dm...@redhat.com
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [3.x]: openshift router and its own metrics

2019-08-15 Thread Daniel Comnea
Hi Clayton,

Certainly some of the metrics should be preserved across reloads, e.g.
metrics like *haproxy_server_http_responses_total* should be preserved
across reloads (though to an extent, Prometheus can handle resets correctly
with its native support).

However, the metric
*haproxy_server_http_average_response_latency_milliseconds* appears also to
be accumulating when we wouldn't expect it to. (According to the haproxy
stats, I think that's a rolling average over the last 1024 calls -- so it
should go up and down.)
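
(For the counter side, Prometheus's rate()/increase() functions already
compensate for counter resets, so reload-induced resets shouldn't skew the
totals - a sketch against the Prometheus HTTP API, host name illustrative:

    curl -s 'http://prometheus.example.com:9090/api/v1/query' \
        --data-urlencode 'query=rate(haproxy_server_http_responses_total[5m])'

That trick doesn't apply here though: the rolling-average latency metric is a
gauge, which is exactly why its steady accumulation looks wrong.)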

Thoughts?


Cheers,
Dani


On Thu, Aug 15, 2019 at 3:59 PM Clayton Coleman  wrote:

> Metrics memory use in the router should be proportional to number of
> services, endpoints, and routes.  I doubt it's leaking there and if it were
> it'd be really slow since we don't restart the router monitor process
> ever.  Stats should definitely be preserved across reloads, but will not be
> preserved across the pod being restarted.
>
> On Thu, Aug 15, 2019 at 10:30 AM Dan Mace  wrote:
>
>>
>>
>> On Thu, Aug 15, 2019 at 10:03 AM Daniel Comnea 
>> wrote:
>>
>>> Hi,
>>>
>>> Would appreciate if anyone can please confirm that my understanding is
>>> correct w.r.t the way the router haproxy image [1] is built.
>>> Am i right to assume that the image [1] is is built as it's seen without
>>> any other layer being added to include [2] ?
>>> Also am i right to say the haproxy metrics [2] is part of the origin
>>> package ?
>>>
>>>
>>> A bit of background/ context:
>>>
>>> a while back on OKD 3.7 we had to swap the openshift 3.7.2 router image
>>> with 3.10 because we were seeing some problems with the reload and so we
>>> wanted to take the benefit of the native haproxy 1.8 reload feature to stop
>>> affecting the traffic.
>>>
>>> While everything was nice and working okay we've noticed recently that
>>> the haproxy stats do slowly increase and we do wonder if this is an
>>> accumulation or not cause (maybe?) by the reloads. Now i'm aware of a
>>> change made [3] however i suspect that is not part of the 3.10 image hence
>>> my question to double check if my understanding is wrong or not.
>>>
>>>
>>> Cheers,
>>> Dani
>>>
>>> [1]
>>> https://github.com/openshift/origin/tree/release-3.10/images/router/haproxy
>>> [2]
>>> https://github.com/openshift/origin/tree/release-3.10/pkg/router/metrics
>>> [3]
>>> https://github.com/openshift/origin/commit/8f0119bdd9c3b679cdfdf2962143435a95e08eae#diff-58216897083787e1c87c90955aabceff
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>
>> I think Clayton (copied) has the history here, but the nature of the
>> metrics commit you referenced is that many of the exposed metrics points
>> are counters which were being reset across reloads. The patch was (I think)
>> to enable counter metrics to correctly aaccumulate across reloads.
>>
>> As to how the image itself is built, the pkg directly is part of the
>> router controller code included with the image. Not sure if that answers
>> your question.
>>
>> --
>>
>> Dan Mace
>>
>> Principal Software Engineer, OpenShift
>>
>> Red Hat
>>
>> dm...@redhat.com
>>
>>
>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [3.x]: openshift router and its own metrics

2019-08-16 Thread Daniel Comnea
On Thu, Aug 15, 2019 at 7:46 PM Clayton Coleman  wrote:

>
>
> On Aug 15, 2019, at 12:25 PM, Daniel Comnea  wrote:
>
> Hi Clayton,
>
> Certainly some of the metrics should be preserved across reloads, e.g.
> metrics like *haproxy_server_http_responses_total *should be preserved
> across reload (though to an extent, Prometheus can handle resets correctly
> with its native support).
>
> However, the metric
> *haproxy_server_http_average_response_latency_milliseconds* appears also
> to be accumulating when we wouldn't expect it to. (According the the
> haproxy stats, I think that's a rolling average over the last 1024 calls --
> so it goes up and down, or should.)
>
>
> File a bug with more details, can’t say off the top of my head
> [DC]: thank you, do you have a preference/ suggestion where I should open
> it for OKD? I guess BZ is not suitable for OKD, or am I wrong?
>
>
> Thoughts?
>
>
> Cheers,
> Dani
>
>
> On Thu, Aug 15, 2019 at 3:59 PM Clayton Coleman 
> wrote:
>
>> Metrics memory use in the router should be proportional to number of
>> services, endpoints, and routes.  I doubt it's leaking there and if it were
>> it'd be really slow since we don't restart the router monitor process
>> ever.  Stats should definitely be preserved across reloads, but will not be
>> preserved across the pod being restarted.
>>
>> On Thu, Aug 15, 2019 at 10:30 AM Dan Mace  wrote:
>>
>>>
>>>
>>> On Thu, Aug 15, 2019 at 10:03 AM Daniel Comnea 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Would appreciate if anyone can please confirm that my understanding is
>>>> correct w.r.t the way the router haproxy image [1] is built.
>>>> Am i right to assume that the image [1] is is built as it's seen
>>>> without any other layer being added to include [2] ?
>>>> Also am i right to say the haproxy metrics [2] is part of the origin
>>>> package ?
>>>>
>>>>
>>>> A bit of background/ context:
>>>>
>>>> a while back on OKD 3.7 we had to swap the openshift 3.7.2 router image
>>>> with 3.10 because we were seeing some problems with the reload and so we
>>>> wanted to take the benefit of the native haproxy 1.8 reload feature to stop
>>>> affecting the traffic.
>>>>
>>>> While everything was nice and working okay we've noticed recently that
>>>> the haproxy stats do slowly increase and we do wonder if this is an
>>>> accumulation or not cause (maybe?) by the reloads. Now i'm aware of a
>>>> change made [3] however i suspect that is not part of the 3.10 image hence
>>>> my question to double check if my understanding is wrong or not.
>>>>
>>>>
>>>> Cheers,
>>>> Dani
>>>>
>>>> [1]
>>>> https://github.com/openshift/origin/tree/release-3.10/images/router/haproxy
>>>> [2]
>>>> https://github.com/openshift/origin/tree/release-3.10/pkg/router/metrics
>>>> [3]
>>>> https://github.com/openshift/origin/commit/8f0119bdd9c3b679cdfdf2962143435a95e08eae#diff-58216897083787e1c87c90955aabceff
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>>
>>> I think Clayton (copied) has the history here, but the nature of the
>>> metrics commit you referenced is that many of the exposed metrics points
>>> are counters which were being reset across reloads. The patch was (I think)
>>> to enable counter metrics to correctly aaccumulate across reloads.
>>>
>>> As to how the image itself is built, the pkg directly is part of the
>>> router controller code included with the image. Not sure if that answers
>>> your question.
>>>
>>> --
>>>
>>> Dan Mace
>>>
>>> Principal Software Engineer, OpenShift
>>>
>>> Red Hat
>>>
>>> dm...@redhat.com
>>>
>>>
>>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: ANNOUNCEMENT! OKD Community Discussions Moved to okd...@googlegroups.com

2019-09-05 Thread Daniel Comnea
James, Serge,

Back in July an email was sent to the dev mailing list [1] and it was also
posted on the OpenShift Commons agenda announcing the kick-off of the working
group.
In addition, we also sent out a survey trying to understand whether folks
would be against the google group - the majority was not against.

Then, if you look over the past months, a lot of the discussion was around
communication channels - moving off the mailing list, not using slack, moving
to discourse, how to post messages to the google group w/o a google account,
etc.

For now we decided to carry on with the working group, however any
communication like a call for agreement on various topics will be posted on
the dev/ users mailing lists as well as the google working group.

We had our WG call yesterday and the next one will be on 17th Sept.

Should you want to listen to all the working group meetings, please check out
[2]


Cheers,
Dani


[1]
http://lists.openshift.redhat.com/openshift-archives/dev/2019-July/msg00041.html
[2] https://www.youtube.com/user/rhopenshift/videos

On Thu, Sep 5, 2019 at 1:57 PM Serge van Ginderachter <
se...@vanginderachter.be> wrote:

>
>
> On Thu, 5 Sep 2019 at 06:53, James Cassell 
> wrote:
>
>> On Thu, Aug 1, 2019, at 3:32 PM, Diane Mueller-Klingspor wrote:
>> >
>> > All,
>> >
>> > Here are the meeting recording and notes from yesterday's OKD Working
>> > Group meeting. If you are interested in participating in this group,
>> > please join the google group for future meeting announcements and
>> > discussion.
>> >
>> > https://groups.google.com/forum/#!forum/okd-wg
>> >
>>
>> Why the new mailing list?
>>
>> This email is the only hint that all discussion was going to vacate dev@...
>> it wasn't even hinted to in the subject line.
>>
>> I joined dev and users a while back so I'd be able to follow the
>> community and contribute where possible. What's wrong with dev@? At the
>> very least, dev@ should have received all mail from okd-wg@ for some
>> transition period while folks figure out how to subscribe without a Google
>> account. I now realize a month later that there's been important
>> discussions I've missed, and that are also missing from my e-mail archive,
>> so won't come up in my mailbox searches.
>>
>> I may be the only one making the complaint, but it's hard to cultivate
>> community when you leave behind those who'd tried to join it more than a
>> week ago.
>>
>
> I've been lurking this list since quite some months, and seemed to have
> missed that announcement, too. Never received that original mail you quoted
> even.
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "okd-wg" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to okd-wg+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/okd-wg/CAEhzMJB0MhWM1u8HZrErmnCH%2BcXO%2B9rg0%3DSZ41fBN22u216NoQ%40mail.gmail.com
> 
> .
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Call for Agreement: OKD WG Charter (Revised Proposal)

2019-09-05 Thread Daniel Comnea
Thanks Christian for doing this.
I've cc'ed users@ mailing list too to avoid situations where users might
miss the information and what we are trying to do.


Cheers,
Dani

On Thu, Sep 5, 2019 at 4:52 PM Christian Glombek 
wrote:

> Dear OKD Community,
>
>
> this is the second Call for Agreement on the topic of the OKD WG Charter,
> per the terms of the proposed Charter.
>
> Question:
> Do you accept or reject the proposed charter as the OKD Working Group's
> main governing document?
>
> You can find the charter proposal at:
>
> https://github.com/openshift/community/pull/3
>
> Please comment on the pull request to indicate your vote ("accept" or
> "reject").
> Make sure you apply for membership per the terms of the proposed charter
> before you vote.
>
> The voting period will conclude on September 13, 2019, at 00:00 UTC.
>
> The outcome of the Call for Agreement will be published on the Pull
> Request as well as on the Mailing List.
>
>
> Thank you for your participation.
>
> Christian Glombek
>
> --
> You received this message because you are subscribed to the Google Groups
> "okd-wg" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to okd-wg+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/okd-wg/CAABn9-8uMm0EZHz32AgnXUa_WbFqtyReM-WNGwWR_Wi%3DSxgxOg%40mail.gmail.com
> 
> .
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: ANNOUNCEMENT! OKD Community Discussions Moved to okd...@googlegroups.com

2019-09-05 Thread Daniel Comnea
On Thu, Sep 5, 2019 at 5:30 PM James Cassell 
wrote:

>
> On Thu, Sep 5, 2019, at 9:39 AM, Daniel Comnea wrote:
> > James, Serge,
> >
> > Back in July an email was sent to dev mailing list [1] and it was also
> > posted on OpenShift Common agenda announcing the kick off of the
> > working group.
> > In addition we've also sent out a survey trying to understand if folks
> > will be against the google group - the majority was not against.
> >
> > Then if you look over past months a lot of discussion was around
> > communication channels - move out of mailing, not using slack, moving
> > to discourse, how to post messages to the google group w/o a google
> > account etc.
> >
>
> Yes, but those discussions didn't include dev@ list, so your existing ML
> community who didn't make it to those meetings don't know about the new
> list. Even the email you referenced does not mention a new list.  If the
> results of that survey indicated folks wanted to new list, that email
> should have been replied to with such an announcement.
>
[DC]: in the email I referenced there was a survey which was sent out to be
filled in, and it included a link to the google group. That said, I took note
of your message and I'll make sure either I or the other co-chairs send out
notifications to the mailing lists.

Once we get the charter approved, we'll learn as we go whether we need to
review/ improve/ change the process so folks don't feel excluded or miss any
information. It is not our intention to do so - quite the opposite; just bear
with us and we'll keep trying to improve things, we are listening!


>
> > For now we decided to carry on with the working however any
> > communication like call for agreement on various topics will be posted
> > on dev/ users mailing list as well as the google working group.
> >
> > We had yesterday our wg call and the next one will be on 17th Sept.
> >
>
> I found the recordings for the end of July meeting and the meeting earlier
> this week. Is there a recording for the mid-august meeting? I couldn't find
> it.
>
[DC]: I think we skipped it due to holidays, if my memory serves well.
Either way, we are aiming for each meeting to be recorded and uploaded so
folks who missed it can go back and listen.

>
> > Should you want to list to all the working group meetings please check
> out [2]
> >
> >
> >  Cheers,
> > Dani
> >
>
> V/r,
> James Cassell
>
>
> >
> > [1]
> >
> http://lists.openshift.redhat.com/openshift-archives/dev/2019-July/msg00041.html
> > [2] https://www.youtube.com/user/rhopenshift/videos
> >
> > On Thu, Sep 5, 2019 at 1:57 PM Serge van Ginderachter
> >  wrote:
> > >
> > >
> > > On Thu, 5 Sep 2019 at 06:53, James Cassell <
> fedoraproj...@cyberpear.com> wrote:
> > >> On Thu, Aug 1, 2019, at 3:32 PM, Diane Mueller-Klingspor wrote:
> > >>  >
> > >>  > All,
> > >>  >
> > >>  > Here are the meeting recording and notes from yesterday's OKD
> Working
> > >>  > Group meeting. If you are interested in participating in this
> group,
> > >>  > please join the google group for future meeting announcements and
> > >>  > discussion.
> > >>  >
> > >>  > https://groups.google.com/forum/#!forum/okd-wg
> > >>  >
> > >>
> > >>  Why the new mailing list?
> > >>
> > >>  This email is the only hint that all discussion was going to vacate
> dev@... it wasn't even hinted to in the subject line.
> > >>
> > >>  I joined dev and users a while back so I'd be able to follow the
> community and contribute where possible. What's wrong with dev@? At the
> very least, dev@ should have received all mail from okd-wg@ for some
> transition period while folks figure out how to subscribe without a Google
> account. I now realize a month later that there's been important
> discussions I've missed, and that are also missing from my e-mail archive,
> so won't come up in my mailbox searches.
> > >>
> > >>  I may be the only one making the complaint, but it's hard to
> cultivate community when you leave behind those who'd tried to join it more
> than a week ago.
> > >
> > > I've been lurking this list since quite some months, and seemed to
> have missed that announcement, too. Never received that original mail you
> quoted even.
> > >
> > >
> >
> > >  --
> > >  Y

[OKD/OCP v4]: deployment on a single node using CodeReady Container

2019-09-13 Thread Daniel Comnea
Recently folks were asking what the minishift alternative for v4 is, and in
case you've missed the news, see [1].

Hopefully that will also work for OKD v4 once the MVP is out.
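
For anyone who wants a quick feel for it, the flow is roughly the following
(a sketch from memory - double check the CRC docs for the exact flags of
your release):

    crc setup      # prepares the host (virtualization, networking)
    crc start      # boots the single-node cluster; asks for your pull secret
    crc console --credentials   # prints the web console login details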


Dani

[1]
https://developers.redhat.com/blog/2019/09/05/red-hat-openshift-4-on-your-laptop-introducing-red-hat-codeready-containers/
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


[OKD 4.x]: first preview drop - please help out with testing and feedback

2019-11-20 Thread Daniel Comnea
Hi folks,

For those who are not following the OKD working group updates or not
hanging out on the openshift-dev/ users K8s slack channels, please be aware
of the announcement sent out [1] by Clayton.

We would very much appreciate it if folks helped out with testing and
provided feedback.
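
If you want to give it a spin, the rough flow is to extract the installer
from the preview release image - a sketch, with <release-image> standing in
for whatever pullspec the announcement points at:

    oc adm release extract --tools <release-image>
    # unpacks the openshift-install and oc client tarballs locally
    tar xzvf openshift-install-linux-*.tar.gz
    ./openshift-install create cluster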

Note we haven't finalized the process for where folks should raise issues;
in the last OKD WG meeting a few suggestions were made but there is no
conclusion yet. Hopefully a decision will be made soon and will be
circulated around.


Cheers

[1] https://mobile.twitter.com/smarterclayton/status/1196477646885965824
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [CentOS-devel] Update from CentOS PaaS SIG

2020-01-09 Thread Daniel Comnea
Hi Sandro,

Let me clarify a few things, as I fear there might be some confusion. Even
though a lot of history is going to be provided, hopefully it will help
everyone understand the *"why & where"* questions.

The message we tried to send out with the blog post was to a) update on the
current state of OKD and maybe bring a bit more clarity around OKD 3.x/ 4.x
and b) explain the PaaS SIG responsibility as well as answer the main
question around OKD 4.x + CentOS base OS.

In terms of responsibilities, I hope it was clear when we mentioned that the
PaaS SIG was dealing with building and publishing the 3.x rpms into the
CentOS repos; that charter has not changed.
As such, the release process for building the 3.x rpms followed the below
pattern:


   - RH would cut a new Origin [1] release, identified by a new release tag
   - Using the release tag - e.g. the latest 3.11 [2] - we were amending the
   spec file and building the rpm from that tag (a minimal sketch of this
   flow follows below)
   - The same process was followed for the openshift-ansible rpms, based on
   the [3] project

Now, since Oct 2018 no new release tag has been created on [1]; however, the
CI (the same one which builds OCP 3.11, managed by RH) evolved and switched
to a different method where it no longer cuts "OKD 3.11 tag releases" but
instead creates artifacts on a rolling basis - e.g. when new commits were/
are merged into the 3.11 OKD branch [1].
With that said, OKD 3.11 is formed by the Docker images + the rpms generated
by the RH CI (the same used by OCP 3.11), which are available at [4]

As you can see from the above, the PaaS SIG was filling a gap by creating the
rpms for OKD 3.x until the CI caught up and started to build the OKD 3.11 rpms.

With that said, we - the PaaS SIG - have not released newer 3.11 OKD rpms
into the CentOS repo after the switch-over, mainly because:

   - the community or folks asking for newer OKD 3.11 rpms in CentOS went
   all quiet; in other words, nobody was asking for anything new
   - the confusion about OKD 3.x vs OKD 4.x, who does what, etc. => that
   was clarified with the formation of the OKD Working Group, where things
   became clearer

Regarding your question around "how frequently the PaaS SIG is going to
release and from which OKD branches", my answer is:

Currently I haven't planned for any newer OKD 3.11 rpms to be built from
the [5] branch (not from the release tag but from the latest commit in the
branch) and published into the CentOS repos; however, IF there is demand,
I'm happy to resurrect and change the automation to get newer rpms built.

However, I want to make it clear that if the above work is done, it will
only continue for as long as RH keeps supporting 3.11; when OKD 4.x goes GA,
then I think OKD 3.11 will no longer get updates.

I hope my long answer clears up any confusion.

Regarding *"when OKD 4.x will be release, the cadence etc"*, as you all
know substantial progress has been made (kudos goes to Vadim, Christian and
Clayton + other RH folks on MCO team) and so everyone can already give it a
try [6] and help providing feedback by raising issues on same repo [6] (in
the near future i think the desire is to do it via BZ).
There are some hicups which Vadim is trying to fix it (many kudos to him)
and so once that is done then we can share the news (with the OKD WG hat
on).


Hope that helps,
Dani

[1] https://github.com/openshift/origin
[2] https://github.com/openshift/origin/releases/tag/v3.11.0
[3] https://github.com/openshift/openshift-ansible
[4] https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/
[5] https://github.com/openshift/origin/tree/release-3.11
[6] https://github.com/openshift/okd




On Thu, Jan 9, 2020 at 7:35 AM Sandro Bonazzola  wrote:

>
>
> Il giorno mer 8 gen 2020 alle ore 16:47 Gleidson Nascimento <
> slat...@live.com> ha scritto:
>
>> Hello from CentOS PaaS SIG!
>>
>> We have recently published an update on CentOS blog [1] about OKD v4 and
>> its current state on CentOS.
>>
>> [1] https://blog.centos.org/2020/01/centos-paas-sig-quarterly-report-2/
>>
>> Thanks,
>> Gleidson
>>
>
> Hi,
> can you give some information about how frequently the PaaS SIG is going
> to release and from which OKD branches?
>
>
>
>
>> ___
>> CentOS-devel mailing list
>> centos-de...@centos.org
>> https://lists.centos.org/mailman/listinfo/centos-devel
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*
> ___
> CentOS-devel mailing list
> centos-de...@centos.org
> https://lists.centos.org/mailman/listinfo/centos-devel
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev