Re: RFC: OKD4 Roadmap Draft

2019-08-19 Thread Clayton Coleman
> On Aug 16, 2019, at 10:25 PM, Michael McCune  wrote:
>
>> On Fri, Aug 16, 2019 at 2:36 PM Kevin Lapagna <4...@gmx.ch> wrote:
>>
>>> On Fri, Aug 16, 2019 at 4:50 PM Clayton Coleman  wrote:
>>>
>>> Single master / single node configurations are possible, but they will be 
>>> hard.  Many of the core design decisions of 4 are there to ensure the 
>>> cluster can self host, and they also require that machines really be 
>>> members of the cluster.
>>
>>
>> How about (as an alternative) spinning up multiple virtual machines and 
>> simulating "the real thing"? Sure, that uses lots of memory, but it will 
>> nicely show what 4.x is capable of.
>
> my understanding is that this is essentially similar to the current
> installer with a libvirt backend as the deployment, at least that's
> how it looks when i try to run an installation on a single physical
> node with multiple virtual machines.

Yes.  Although as mike notes it may be possible to get this running
with less effort via his route, which is a good alternative for a simple
single-machine setup.

>
> peace o/

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [4.x] POC on OpenStack without admin rights.

2019-08-17 Thread Clayton Coleman
User provisioned infra?  You can host the ignition file on any HTTP server
the machines can reach at boot.
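
As a rough sketch of that approach (hypothetical URL and file names, and
assuming the Ignition spec 2.x format used by 4.1-era RHCOS), the user_data
passed to each machine can be a tiny "pointer" config that just tells
Ignition to fetch the full bootstrap.ign over HTTP at first boot, which keeps
user_data well under the size limit:

    import json

    # Hypothetical: wherever the full bootstrap.ign is served from, e.g. any
    # web server the new machines can reach at boot time.
    FULL_IGNITION_URL = "http://192.0.2.10:8080/bootstrap.ign"

    # Minimal "pointer" config (Ignition spec 2.x); this is the user_data.
    pointer = {
        "ignition": {
            "version": "2.2.0",
            "config": {"replace": {"source": FULL_IGNITION_URL}},
        }
    }

    with open("bootstrap-pointer.ign", "w") as f:
        json.dump(pointer, f)

The machine then pulls the real config over HTTP during provisioning, so only
this stub has to fit in user_data.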

On Aug 17, 2019, at 11:45 AM, Gilles Le Bris  wrote:

I have already installed OpenShift 4.x on KVM.
Now, I would like to install it for a POC on OpenStack (Rocky).
But, I haven't got any OpenStack sysadmin rights (swift …).
Is there a workaround in this case to deal with the user_data
length/ignition files problem?

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [3.x]: openshift router and its own metrics

2019-08-16 Thread Clayton Coleman
On Aug 16, 2019, at 4:55 AM, Daniel Comnea  wrote:



On Thu, Aug 15, 2019 at 7:46 PM Clayton Coleman  wrote:

>
>
> On Aug 15, 2019, at 12:25 PM, Daniel Comnea  wrote:
>
> Hi Clayton,
>
> Certainly some of the metrics should be preserved across reloads, e.g.
> metrics like *haproxy_server_http_responses_total* should be preserved
> across reloads (though to an extent, Prometheus can handle resets correctly
> with its native support).
>
> However, the metric
> *haproxy_server_http_average_response_latency_milliseconds* appears also
> to be accumulating when we wouldn't expect it to. (According to the
> haproxy stats, I think that's a rolling average over the last 1024 calls --
> so it goes up and down, or should.)
>
>
> File a bug with more details, can’t say off the top of my head
> [DC]: thank you, do you have a preference/suggestion where i should open
> it for OKD ? i guess BZ is not suitable for OKD, or am i wrong ?
>

There should be BZ components for origin


> Thoughts?
>
>
> Cheers,
> Dani
>
>
> On Thu, Aug 15, 2019 at 3:59 PM Clayton Coleman 
> wrote:
>
>> Metrics memory use in the router should be proportional to number of
>> services, endpoints, and routes.  I doubt it's leaking there and if it were
>> it'd be really slow since we don't restart the router monitor process
>> ever.  Stats should definitely be preserved across reloads, but will not be
>> preserved across the pod being restarted.
>>
>> On Thu, Aug 15, 2019 at 10:30 AM Dan Mace  wrote:
>>
>>>
>>>
>>> On Thu, Aug 15, 2019 at 10:03 AM Daniel Comnea 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Would appreciate if anyone can please confirm that my understanding is
>>>> correct w.r.t the way the router haproxy image [1] is built.
>>>> Am i right to assume that the image [1] is built as it's seen
>>>> without any other layer being added to include [2] ?
>>>> Also am i right to say the haproxy metrics [2] is part of the origin
>>>> package ?
>>>>
>>>>
>>>> A bit of background/ context:
>>>>
>>>> a while back on OKD 3.7 we had to swap the openshift 3.7.2 router image
>>>> with 3.10 because we were seeing some problems with the reload and so we
>>>> wanted to take the benefit of the native haproxy 1.8 reload feature to stop
>>>> affecting the traffic.
>>>>
>>>> While everything was nice and working okay we've noticed recently that
>>>> the haproxy stats do slowly increase and we do wonder if this is an
>>>> accumulation or not caused (maybe?) by the reloads. Now i'm aware of a
>>>> change made [3] however i suspect that is not part of the 3.10 image hence
>>>> my question to double check if my understanding is wrong or not.
>>>>
>>>>
>>>> Cheers,
>>>> Dani
>>>>
>>>> [1]
>>>> https://github.com/openshift/origin/tree/release-3.10/images/router/haproxy
>>>> [2]
>>>> https://github.com/openshift/origin/tree/release-3.10/pkg/router/metrics
>>>> [3]
>>>> https://github.com/openshift/origin/commit/8f0119bdd9c3b679cdfdf2962143435a95e08eae#diff-58216897083787e1c87c90955aabceff
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>>
>>> I think Clayton (copied) has the history here, but the nature of the
>>> metrics commit you referenced is that many of the exposed metrics points
>>> are counters which were being reset across reloads. The patch was (I think)
>>> to enable counter metrics to correctly accumulate across reloads.
>>>
>>> As to how the image itself is built, the pkg directory is part of the
>>> router controller code included with the image. Not sure if that answers
>>> your question.
>>>
>>> --
>>>
>>> Dan Mace
>>>
>>> Principal Software Engineer, OpenShift
>>>
>>> Red Hat
>>>
>>> dm...@redhat.com
>>>
>>>
>>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: RFC: OKD4 Roadmap Draft

2019-08-16 Thread Clayton Coleman
On Aug 16, 2019, at 11:29 AM, Michael Gugino  wrote:

Pretty much already had all of this working here:
https://github.com/openshift/openshift-ansible/pull/10898

For a single-host cluster, I think the path of least resistance would be to
modify the bootstrap host to not pivot, make it clear it's 'not for
production', and we can take lots of shortcuts for someone just looking for
an easy, 1-VM openshift api.


Does that assume the bootstrap machine is “anything”?


I'm most interested in running OKD 4.x on Fedora rather than CoreOS.  I
might try to do something with that this weekend as a POC.


Thanks


On Fri, Aug 16, 2019 at 10:49 AM Clayton Coleman 
wrote:

>
>
> On Aug 16, 2019, at 10:39 AM, Michael McCune  wrote:
>
>
>
> On Wed, Aug 14, 2019 at 12:27 PM Christian Glombek 
> wrote:
>
>> The OKD4 roadmap is currently being drafted here:
>>
>> https://hackmd.io/Py3RcpuyQE2psYEIBQIzfw
>>
>> There was an initial discussion on it in yesterday's WG meeting, with
>> some feedback given already.
>>
>> I have updated the draft and am now calling for comments for a final
>> time, before a formal
>> Call for Agreement shall follow at the beginning of next week on the OKD
>> WG Google group list.
>>
>> Please add your comments before Monday. Thank you.
>>
>>
> i'm not sure if i should add this to the document, but is there any
> consensus (one way or the other) about the notion of bringing forward the
> all-in-one work that was done in openshift-ansible for version 3?
>
> i am aware of code ready containers, but i would really like to see us
> provide the option for a single machine install.
>
>
> It’s possible for someone to emulate much of the install, bootstrap, and
> subsequent operations on a single machine (the installer isn’t that much
> code, the bulk of the work is across the operators).  You’d end up copying
> a fair bit of the installer, but it may be tractable.  You’d need to really
> understand the config passed to bootstrap via ignition, how the bootstrap
> script works, and how you would trick etcd to start on the bootstrap
> machine.  When the etcd operator lands in 4.3, that last becomes easier
> (the operator runs and configures a local etcd).
>
> Single master / single node configurations are possible, but they will be
> hard.  Many of the core design decisions of 4 are there to ensure the
> cluster can self host, and they also require that machines really be
> members of the cluster.
>
> A simpler, less complex path might be (once we have OKD proto working)
> to create a custom payload that excludes the installer, the MCD, and to use
> ansible to configure the prereqs on a single machine (etcd in a specific
> config), then emulate parts of the bootstrap script and run a single
> instance (which in theory should work today).  You might be able to update
> it.  Someone exploring this would possibly be able to get openshift running
> on a non coreos control plane, so worth exploring if someone has the time.
>
>
> peace o/
>
>
>> Christian Glombek
>>
>> Associate Software Engineer
>>
>> Red Hat GmbH <https://www.redhat.com/>
>>
>>
>> cglom...@redhat.com 
>> <https://red.ht/sig>
>>
>>
>> Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn, Germany,
>> Handelsregister: Amtsgericht München, HRB 153243,
>> Geschäftsführer: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "okd-wg" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to okd-wg+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/okd-wg/CAABn9-8khhZ4VmHpKXXogo_kH-50QnR6vZgc525FROA41mxboA%40mail.gmail.com
>> <https://groups.google.com/d/msgid/okd-wg/CAABn9-8khhZ4VmHpKXXogo_kH-50QnR6vZgc525FROA41mxboA%40mail.gmail.com?utm_medium=email_source=footer>
>> .
>>
> --
> You received this message because you are subscribed to the Google Groups
> "okd-wg" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to okd-wg+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/okd-wg/CADE%2BktTuSSUaZZ-vPYkOLHKiojM9%2B5qgQVtV1pS7z7eTcu7jdw%40mail.gmail.com
> <https://groups.google.com/d/msgid/okd-wg/CADE%2BktTuSSUaZZ-vPYkOLHKiojM9%2B5qgQVtV1pS7z7eTcu

Re: RFC: OKD4 Roadmap Draft

2019-08-16 Thread Clayton Coleman
On Aug 16, 2019, at 10:39 AM, Michael McCune  wrote:



On Wed, Aug 14, 2019 at 12:27 PM Christian Glombek 
wrote:

> The OKD4 roadmap is currently being drafted here:
>
> https://hackmd.io/Py3RcpuyQE2psYEIBQIzfw
>
> There was an initial discussion on it in yesterday's WG meeting, with some
> feedback given already.
>
> I have updated the draft and am now calling for comments for a final time,
> before a formal
> Call for Agreement shall follow at the beginning of next week on the OKD
> WG Google group list.
>
> Please add your comments before Monday. Thank you.
>
>
i'm not sure if i should add this to the document, but is there any
consensus (one way or the other) about the notion of bringing forward the
all-in-one work that was done in openshift-ansible for version 3?

i am aware of code ready containers, but i would really like to see us
provide the option for a single machine install.


It’s possible for someone to emulate much of the install, bootstrap, and
subsequent operations on a single machine (the installer isn’t that much
code, the bulk of the work is across the operators).  You’d end up copying
a fair bit of the installer, but it may be tractable.  You’d need to really
understand the config passed to bootstrap via ignition, how the bootstrap
script works, and how you would trick etcd to start on the bootstrap
machine.  When the etcd operator lands in 4.3, that last becomes easier
(the operator runs and configures a local etcd).

Single master / single node configurations are possible, but they will be
hard.  Many of the core design decisions of 4 are there to ensure the
cluster can self host, and they also require that machines really be
members of the cluster.

A simpler, less complex path might be (once we have OKD proto working)
to create a custom payload that excludes the installer, the MCD, and to use
ansible to configure the prereqs on a single machine (etcd in a specific
config), then emulate parts of the bootstrap script and run a single
instance (which in theory should work today).  You might be able to update
it.  Someone exploring this would possibly be able to get openshift running
on a non coreos control plane, so worth exploring if someone has the time.


peace o/


> Christian Glombek
>
> Associate Software Engineer
>
> Red Hat GmbH 
>
> 
>
> cglom...@redhat.com 
> 
>
>
> Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn, Germany,
> Handelsregister: Amtsgericht München, HRB 153243,
> Geschäftsführer: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander
>
> --
> You received this message because you are subscribed to the Google Groups
> "okd-wg" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to okd-wg+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/okd-wg/CAABn9-8khhZ4VmHpKXXogo_kH-50QnR6vZgc525FROA41mxboA%40mail.gmail.com
> 
> .
>
-- 
You received this message because you are subscribed to the Google Groups
"okd-wg" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to okd-wg+unsubscr...@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/okd-wg/CADE%2BktTuSSUaZZ-vPYkOLHKiojM9%2B5qgQVtV1pS7z7eTcu7jdw%40mail.gmail.com

.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [3.x]: openshift router and its own metrics

2019-08-15 Thread Clayton Coleman
On Aug 15, 2019, at 12:25 PM, Daniel Comnea  wrote:

Hi Clayton,

Certainly some of the metrics should be preserved across reloads, e.g.
metrics like *haproxy_server_http_responses_total* should be preserved
across reloads (though to an extent, Prometheus can handle resets correctly
with its native support).
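
A minimal sketch of what that native support means in practice (assuming a
reachable Prometheus endpoint; the URL below is a placeholder, and most
clusters will need auth/TLS on top of this): wrap the counter in rate(),
which treats a drop in the raw value as a counter reset rather than a
negative delta, and query it via the Prometheus HTTP API.

    import json
    import urllib.parse
    import urllib.request

    # Placeholder endpoint; adjust host and authentication for your cluster.
    PROM = "http://prometheus.example.com:9090"

    # rate() absorbs counter resets caused by router reloads or pod restarts.
    query = 'sum(rate(haproxy_server_http_responses_total[5m]))'

    url = PROM + "/api/v1/query?" + urllib.parse.urlencode({"query": query})
    with urllib.request.urlopen(url) as resp:
        print(json.load(resp)["data"]["result"])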

However, the metric
*haproxy_server_http_average_response_latency_milliseconds* appears also to
be accumulating when we wouldn't expect it to. (According to the haproxy
stats, I think that's a rolling average over the last 1024 calls -- so it
goes up and down, or should.)


File a bug with more details, can’t say off the top of my head


Thoughts?


Cheers,
Dani


On Thu, Aug 15, 2019 at 3:59 PM Clayton Coleman  wrote:

> Metrics memory use in the router should be proportional to number of
> services, endpoints, and routes.  I doubt it's leaking there and if it were
> it'd be really slow since we don't restart the router monitor process
> ever.  Stats should definitely be preserved across reloads, but will not be
> preserved across the pod being restarted.
>
> On Thu, Aug 15, 2019 at 10:30 AM Dan Mace  wrote:
>
>>
>>
>> On Thu, Aug 15, 2019 at 10:03 AM Daniel Comnea 
>> wrote:
>>
>>> Hi,
>>>
>>> Would appreciate if anyone can please confirm that my understanding is
>>> correct w.r.t the way the router haproxy image [1] is built.
>>> Am i right to assume that the image [1] is built as it's seen without
>>> any other layer being added to include [2] ?
>>> Also am i right to say the haproxy metrics [2] is part of the origin
>>> package ?
>>>
>>>
>>> A bit of background/ context:
>>>
>>> a while back on OKD 3.7 we had to swap the openshift 3.7.2 router image
>>> with 3.10 because we were seeing some problems with the reload and so we
>>> wanted to take the benefit of the native haproxy 1.8 reload feature to stop
>>> affecting the traffic.
>>>
>>> While everything was nice and working okay we've noticed recently that
>>> the haproxy stats do slowly increase and we do wonder if this is an
>>> accumulation or not caused (maybe?) by the reloads. Now i'm aware of a
>>> change made [3] however i suspect that is not part of the 3.10 image hence
>>> my question to double check if my understanding is wrong or not.
>>>
>>>
>>> Cheers,
>>> Dani
>>>
>>> [1]
>>> https://github.com/openshift/origin/tree/release-3.10/images/router/haproxy
>>> [2]
>>> https://github.com/openshift/origin/tree/release-3.10/pkg/router/metrics
>>> [3]
>>> https://github.com/openshift/origin/commit/8f0119bdd9c3b679cdfdf2962143435a95e08eae#diff-58216897083787e1c87c90955aabceff
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>
>> I think Clayton (copied) has the history here, but the nature of the
>> metrics commit you referenced is that many of the exposed metrics points
>> are counters which were being reset across reloads. The patch was (I think)
>> to enable counter metrics to correctly accumulate across reloads.
>>
>> As to how the image itself is built, the pkg directory is part of the
>> router controller code included with the image. Not sure if that answers
>> your question.
>>
>> --
>>
>> Dan Mace
>>
>> Principal Software Engineer, OpenShift
>>
>> Red Hat
>>
>> dm...@redhat.com
>>
>>
>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [3.x]: openshift router and its own metrics

2019-08-15 Thread Clayton Coleman
Metrics memory use in the router should be proportional to number of
services, endpoints, and routes.  I doubt it's leaking there and if it were
it'd be really slow since we don't restart the router monitor process
ever.  Stats should definitely be preserved across reloads, but will not be
preserved across the pod being restarted.

On Thu, Aug 15, 2019 at 10:30 AM Dan Mace  wrote:

>
>
> On Thu, Aug 15, 2019 at 10:03 AM Daniel Comnea 
> wrote:
>
>> Hi,
>>
>> Would appreciate if anyone can please confirm that my understanding is
>> correct w.r.t the way the router haproxy image [1] is built.
>> Am i right to assume that the image [1] is built as it's seen without
>> any other layer being added to include [2] ?
>> Also am i right to say the haproxy metrics [2] is part of the origin
>> package ?
>>
>>
>> A bit of background/ context:
>>
>> a while back on OKD 3.7 we had to swap the openshift 3.7.2 router image
>> with 3.10 because we were seeing some problems with the reload and so we
>> wanted to take the benefit of the native haproxy 1.8 reload feature to stop
>> affecting the traffic.
>>
>> While everything was nice and working okay we've noticed recently that
>> the haproxy stats do slowly increase and we do wonder if this is an
>> accumulation or not caused (maybe?) by the reloads. Now i'm aware of a
>> change made [3] however i suspect that is not part of the 3.10 image hence
>> my question to double check if my understanding is wrong or not.
>>
>>
>> Cheers,
>> Dani
>>
>> [1]
>> https://github.com/openshift/origin/tree/release-3.10/images/router/haproxy
>> [2]
>> https://github.com/openshift/origin/tree/release-3.10/pkg/router/metrics
>> [3]
>> https://github.com/openshift/origin/commit/8f0119bdd9c3b679cdfdf2962143435a95e08eae#diff-58216897083787e1c87c90955aabceff
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
> I think Clayton (copied) has the history here, but the nature of the
> metrics commit you referenced is that many of the exposed metrics points
> are counters which were being reset across reloads. The patch was (I think)
> to enable counter metrics to correctly accumulate across reloads.
>
> As to how the image itself is built, the pkg directory is part of the
> router controller code included with the image. Not sure if that answers
> your question.
>
> --
>
> Dan Mace
>
> Principal Software Engineer, OpenShift
>
> Red Hat
>
> dm...@redhat.com
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Follow up on OKD 4

2019-07-25 Thread Clayton Coleman
> On Jul 25, 2019, at 2:32 PM, Fox, Kevin M  wrote:
>
> While "just works" is a great goal, and its relatively easy to accomplish in 
> the nice, virtualized world of vm's, I've found it is often not the case in 
> the dirty realm of real physical hardware. Sometimes you must rebuild/replace 
> a kernel or add a kernel module to get things to actually work. If you don't 
> support that, it's going to be a problem for many a site.

Ok, so this would be the “I want to be able to run my own kernel” use case.

That’s definitely something I would expect to be available with OKD in
the existing proposal, you would just be providing a different ostree
image at install time.

How often does this happen with fedora today?  I don’t hear it brought
up often so I may just be oblivious to something folks deal with more.
Certainly fcos should work everywhere existing fedora works, but if a
substantial set of people want that flexibility it’s a great data
point.

>
> Thanks,
> Kevin
> 
> From: dev-boun...@lists.openshift.redhat.com 
> [dev-boun...@lists.openshift.redhat.com] on behalf of Josh Berkus 
> [jber...@redhat.com]
> Sent: Thursday, July 25, 2019 11:23 AM
> To: Clayton Coleman; Aleksandar Lazic
> Cc: users; dev
> Subject: Re: Follow up on OKD 4
>
>> On 7/25/19 6:51 AM, Clayton Coleman wrote:
>> 1. Openshift 4 isn’t flexible in the ways people want (Ie you want to
>> add an rpm to the OS to get a kernel module, or you want to ship a
>> complex set of config and managing things with mcd looks too hard)
>> 2. You want to build and maintain these things yourself, so the “just
>> works” mindset doesn’t appeal.
>
> FWIW, 2.5 years ago when we were exploring having a specific
> Atomic+Openshift distro for Kubernetes, we did a straw poll of Fedora
> Cloud users.  We found that 2/3 of respondents wanted a complete package
> (that is, OKD+Atomic) that installed and "just worked" out of the box,
> and far fewer folks wanted to hack their own.  We never had such a
> release due to insufficient engineering resources (and getting stuck
> behind the complete rewrite of the Fedora build pipelines), but that was
> the original goal.
>
> Things may have changed in the interim, but I think that a broad user
> survey would still find a strong audience for a "just works" distro in
> Fedora.
>
> --
> --
> Josh Berkus
> Kubernetes Community
> Red Hat OSAS
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Follow up on OKD 4

2019-07-25 Thread Clayton Coleman
On Thu, Jul 25, 2019 at 11:58 AM Fox, Kevin M  wrote:

> Yeah, There is the question what it is now, and the question what it
> potentially should be. I'm asking more from a where should it go standpoint.
>
> Right now, k8s distro's are very much in the early linux distro days.
> Here's how to get a base os going. Ok, now your on your own to deploy
> anything on it. Download tarball, build it, install it, write init script,
> etc. If you look a the total package list in the modern linux distro, the
> os level stuff is usually a very small percentage of the software in the
> distro.
>
> These days we've moved on so far from "the distro is a kernel" that folks
> even talk about running a redhat, a fedora, or a centos container. That's
> really #4 level stuff only.
>
> olm is like yum, a tool to install stuff. So kind of a #3 tool. It's the
> software packaging itself, mysql, apache, etc. that is also part of the
> distro that is mostly missing I think. a container is like an rpm. one way
> to define a linux distro is a collection of prebuilt/tested/supported rpms
> for common software.
>
> In the linux os today, you can start from "I want to deploy a mysql
> server" and I trust redhat to provide good software, so you go and yum
> install mysql. I could imagine similarly OKD as a collection of software to
> deploy on top of a k8s, where there is an optional, self hosting OS part
> (1-3) the same way Fedora/Centos can be used purely #4 with containers, or
> as a full blown os+workloads.
>
> Sure, you can let the community build all their own stuff. Thats possible
> in linux distro's today too and shouldn't be blocked. But it misses the
> point why folks deploy software from linux distro's over getting it from
> the source. I prefer to run mysql from redhat as opposed to upstream
> because of all the extras the distro packagers provide.
>
> Not trying to short change all the hard work in getting a k8s going. okd's
> doing an amazing job at that. That's really important too. But so is all
> the distro work around software packaging, and thats still much more in its
> infancy I think. We're still mostly at the point where we're debating if
> that's the end user's problem.
>
> The package management tools are coming around nicely, but not so much yet
> the distro packages. How do we get a k8s distro of this form going? Is that
> in the general scope of OKD, or should there be a whole new project just
> for that?
>

Honestly we've been thinking of the core components as just that - CVO,
MCO, release images, integrated OS with updates.  That's the core compose
(the minimum set of components you need for maintainable stuff).  I think
Mike's comment about MCO supporting multiple OS's is something like that,
and CVO + release tooling is that too.  I think from the perspective of the
project as a whole I had been thinking that "OKD is both the tools and
process for k8s-as-distro".  OLM is a small component of that, like yum.
CVO + installer + bootstrap is anaconda.  The machine-os and k8s are like
systemd.

That's why I'm asking about how much flexibility people want - after all,
Fedora doesn't let you choose which glibc you use.  If people want to
compose the open source projects differently, all the projects are there
today and should be easy to fork, and we could also consider how we make
them easier to fork in the future.

The discuss OKD4 started with was "what kind of distro do people want".  If
there's a group who want to get more involved in the "build a distro" part
of tools that exist, that definitely seems like a different use case.


>
> The redhat container catalog is a good start too, but we need to be
> thinking all the way up to the k8s level.
>
> Should it be "okd k8s distro" or "fedora k8s distro" or something else?
>
> Thanks,
> Kevin
>
> --
> *From:* Clayton Coleman [ccole...@redhat.com]
> *Sent:* Wednesday, July 24, 2019 10:31 AM
> *To:* Fox, Kevin M
> *Cc:* Michael Gugino; users; dev
> *Subject:* Re: Follow up on OKD 4
>
>
>
> On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M  wrote:
>
>> Ah, this raises an interesting discussion I've been wanting to have for a
>> while.
>>
>> There are potentially lots of things you could call a distro.
>>
>> Most linux distro's are made up of several layers:
>> 1. boot loader - components to get the kernel running
>> 2. kernel - provides a place to run higher level software
>> 3. os level services - singletons needed to really call the os an os.
>> (dhcp, systemd, dbus, etc)
>> 4. prebuilt/tested, generic software/services - workload (mysql, apache,
>> firefox, gnome, etc)
>>
>> For sake of discussio

Re: Follow up on OKD 4

2019-07-25 Thread Clayton Coleman
> On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic 
>  wrote:
>
> HI.
>
>> Am 25.07.2019 um 06:52 schrieb Michael Gugino:
>> I think FCoS could be a mutable detail.  To start with, support for
>> plain-old-fedora would be helpful to make the platform more portable,
>> particularly the MCO and machine-api.  If I had to state a goal, it
>> would be "Bring OKD to the largest possible range of linux distros to
>> become the defacto implementation of kubernetes."
>
> I agree here with Michael. FCoS, or CoS in general, looks like a good idea
> technically, but it limits the flexibility of possible solutions.
>
> For example, when you need to change some system settings you will need to
> create a new OS image; this is not very usable in some environments.

I think something we haven’t emphasized enough is that openshift 4 is
very heavily structured around changing the cost and mental model
around this.  The goal was and is to make these sorts of things
unnecessary.  Changing machine settings by building golden images is
already the “wrong” (expensive and error prone) pattern - instead, it
should be easy to reconfigure machines or to launch new containers to
run software on those machines.  There may be two factors here at
work:

1. Openshift 4 isn’t flexible in the ways people want (Ie you want to
add an rpm to the OS to get a kernel module, or you want to ship a
complex set of config and managing things with mcd looks too hard)
2. You want to build and maintain these things yourself, so the “just
works” mindset doesn’t appeal.

The initial doc alluded to the DIY / bucket of parts use case (I can
assemble this on my own but slightly differently) - maybe we can go
further now and describe the goal / use case as:

I want to be able to compose my own Kubernetes distribution, and I’m
willing to give up continuous automatic updates to gain flexibility in
picking my own software

Does that sound like it captures your request?

Note that a key reason why the OS is integrated is so that we can keep
machines up to date and do rolling control plane upgrades with no
risk.  If you take the OS out of the equation the risk goes up
substantially, but if you’re willing to give that up then yes, you
could build an OKD that doesn’t tie to the OS.  This trade off is an
important one for folks to discuss.  I’d been assuming that people
*want* the automatic and safe upgrades, but maybe that’s a bad
assumption.

What would you be willing to give up?

>
> It would be nice to have the good old option to use the ansible installer to
> install OKD/Openshift on other Linux distributions where ansible is able to 
> run.
>
>> Also, it would be helpful (as previously stated) to build communities
>> around some of our components that might not have a place in the
>> official kubernetes, but are valuable downstream components
>> nevertheless.
>>
>> Anyway, I'm just throwing some ideas out there, I wouldn't consider my
>> statements as advocating strongly in any direction.  Surely FCoS is
>> the natural fit, but I think considering other distros merits
>> discussion.
>
> +1
>
> Regards
> Aleks
>
>
>>> On Wed, Jul 24, 2019 at 9:23 PM Clayton Coleman  wrote:
>>>
>>>> On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:
>>>>
>>>> I think what I'm looking for is more 'modular' rather than DIY.  CVO
>>>> would need to be adapted to separate container payload from host
>>>> software (or use something else), and maintaining cross-distro
>>>> machine-configs might prove tedious, but for the most part, rest of
>>>> everything from the k8s bins up, should be more or less the same.
>>>>
>>>> MCD is good software, but there's not really much going on there that
>>>> can't be ported to any other OS.  MCD downloads a payload, extracts
>>>> files, rebases ostree, reboots host.  You can do all of those steps
>>>> except 'rebases ostree' on any distro.  And instead of 'rebases
>>>> ostree', we could pull down a container that acts as a local repo that
>>>> contains all the bits you need to upgrade your host across releases.
>>>> Users could do things to break this workflow, but it should otherwise
>>>> work if they aren't fiddling with the hosts.  The MCD payload happens
>>>> to embed an ignition payload, but it doesn't actually run ignition,
>>>> just utilizes the file format.
>>>>
>>>> From my viewpoint, there's nothing particularly special about ignition
>>>> in our current process either.  I had the entire OCP 4 stack running
>>>> on RHEL using the same exact ignition payload, a minimal amount of
>>>

Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
> On Jul 24, 2019, at 9:14 PM, Michael Gugino  wrote:
>
> I think what I'm looking for is more 'modular' rather than DIY.  CVO
> would need to be adapted to separate container payload from host
> software (or use something else), and maintaining cross-distro
> machine-configs might prove tedious, but for the most part, rest of
> everything from the k8s bins up, should be more or less the same.
>
> MCD is good software, but there's not really much going on there that
> can't be ported to any other OS.  MCD downloads a payload, extracts
> files, rebases ostree, reboots host.  You can do all of those steps
> except 'rebases ostree' on any distro.  And instead of 'rebases
> ostree', we could pull down a container that acts as a local repo that
> contains all the bits you need to upgrade your host across releases.
> Users could do things to break this workflow, but it should otherwise
> work if they aren't fiddling with the hosts.  The MCD payload happens
> to embed an ignition payload, but it doesn't actually run ignition,
> just utilizes the file format.
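
A rough sketch of that flow on an ostree-based host (not the actual MCD code;
the OS image reference below is made up, and on a non-ostree distro the
rebase step would become a package-manager transaction instead):

    import subprocess

    # Hypothetical reference to the OS content shipped in the release payload.
    OS_IMAGE = "example.com/okd/machine-os-content:4.x"

    def apply_files(files):
        # "extracts files": write out the rendered configs from the payload
        # (kubelet config, kubeconfigs, systemd units, ...).
        for path, contents in files.items():
            with open(path, "w") as f:
                f.write(contents)

    def update_host(files):
        apply_files(files)
        # "rebases ostree": point the host at the new OS content.
        subprocess.run(["rpm-ostree", "rebase", OS_IMAGE], check=True)
        # "reboots host": the new deployment takes effect on the next boot.
        subprocess.run(["systemctl", "reboot"], check=True)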
>
> From my viewpoint, there's nothing particularly special about ignition
> in our current process either.  I had the entire OCP 4 stack running
> on RHEL using the same exact ignition payload, a minimal amount of
> ansible (which could have been replaced by cloud-init userdata), and a
> small python library to parse the ignition files.  I was also building
> repo containers for 3.10 and 3.11 for Fedora.  Not to say the
> OpenShift 4 experience isn't great, because it is.  RHEL CoreOS + OCP
> 4 came together quite nicely.
>
> I'm all for 'not managing machines' but I'm not sure it has to look
> exactly like OCP.  Seems the OCP installer and CVO could be
> adapted/replaced with something else, MCD adapted, pretty much
> everything else works the same.

Sure - why?  As in, what do you want to do?  What distro do you want
to use instead of fcos?  What goals / outcomes do you want out of the
ability to do whatever?  Ie the previous suggestion (the auto updating
kube distro) has the concrete goal of “don’t worry about security /
updates / nodes and still be able to run containers”, and fcos is a
detail, even if it’s an important one.  How would you pitch the
alternative?


>
>> On Wed, Jul 24, 2019 at 12:05 PM Clayton Coleman  wrote:
>>
>>
>>
>>
>>> On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:
>>>
>>> I tried FCoS prior to the release by using the assembler on github.
>>> Too much secret sauce in how to actually construct an image.  I
>>> thought atomic was much more polished, not really sure what the
>>> value-add of ignition is in this usecase.  Just give me a way to build
>>> simple image pipelines and I don't need ignition.  To that end, there
>>> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
>>> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
>>> supporting the mcd-once-from to parse ignition on RHEL, we don't need
>>> ignition to actually install okd.  To me, it seems FCoS was created
>>> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
>>> actually solves anyone's needs relative to atomic.  It feels like we
>>> jumped the shark on this one.
>>
>>
>> That’s feedback that’s probably something you should share in the fcos 
>> forums as well.  I will say that I find the OCP + RHEL experience 
>> unsatisfying and that it doesn't truly live up to what RHCOS+OCP can do (since it 
>> lacks the key features like ignition and immutable hosts).  Are you saying 
>> you'd prefer to have more of a "DIY kube bistro" than the "highly 
>> opinionated, totally integrated OKD" proposal?  I think that's a good 
>> question the community should get a chance to weigh in on (in my original 
>> email that was the implicit question - do you want something that looks like 
>> OCP4, or something that is completely different).
>>
>>>
>>>
>>> I'd like to see OKD be distro-independent.  Obviously Fedora should be
>>> our primary target (I'd argue Fedora over FCoS), but I think it should
>>> be true upstream software in the sense that apache2 http server is
>>> upstream and not distro specific.  To this end, perhaps it makes sense
>>> to consume k/k instead of openshift/origin for okd.  OKD should be
>>> free to do wild and crazy things independently of the enterprise
>>> product.  Perhaps there's a usecase for treating k/k vs
>>> openshift/origin as a swappable base layer.
>>
>>
>> That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be 
>> 

Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M  wrote:

> Ah, this raises an interesting discussion I've been wanting to have for a
> while.
>
> There are potentially lots of things you could call a distro.
>
> Most linux distro's are made up of several layers:
> 1. boot loader - components to get the kernel running
> 2. kernel - provides a place to run higher level software
> 3. os level services - singletons needed to really call the os an os.
> (dhcp, systemd, dbus, etc)
> 4. prebuilt/tested, generic software/services - workload (mysql, apache,
> firefox, gnome, etc)
>
> For sake of discussion, lets map these layers a bit, and assume that the
> openshift specific components can be added to a vanilla kubernetes. We then
> have
>
> 1. linux distro (could be k8s specific and micro)
> 2. kubernetes control plane & kubelets
> 3. openshift components (auth, ingress, cicd/etc)
> 4. ?  (operators + containers, helm + containers, etc)
>
> openshift use to be defined as being 1-3.
>

> As things like aks/eks/gke make it easy to deploy 1-2, maybe openshift
> should really become modular so it focuses more on 3 and 4.
>

That's interesting that you'd say that.  I think kube today is like
"install a kernel with bash and serial port magic", whereas OpenShift 4 is
"here's a compose, an installer, a disk formatter, yum, yum repos,
lifecycle, glibc, optional packages, and sys utils".  I don't know if you
can extend the analogy there (if you want to use EKS, you're effectively
running on someone's VPS, but you can only use their distro and you can't
change anything), but definitely a good debate.


>
> As for having something that provides a #1 that is super tiny/easy to
> maintain so that you can do #2 on top easily, I'm for that as well, but
> should be decoupled from 3-4 I think. Should you be able to switch out your
> #1 for someone elses #1 while keeping the rest? That's the question from
> previous in the thread.
>

I think the analogy I've been using is that openshift is a proper distro in
the sense that you don't take someone's random kernel and use it with
someone else's random glibc and a third party's random gcc, but you might
not care about the stuff on top.  The things in 3 for kube feel more like
glibc than "which version of Firefox do I install", since a cluster without
ingress isn't very useful.


>
> #4 I think is very important and while the operator framework is starting
> to make some inroads on it, there is still a lot of work to do to make an
> equivalent of the 'redhat' distro of software that runs on k8s.
>
> A lot of focus has been on making a distro out of k8s, but it's really
> mostly been at the level of, how do I get a kernel booted/upgraded. I think
> the more important distro thing #4 is how do you make a distribution of
> prebuilt, easy to install software to run on top of k8s. Redhat's distro is
> really 99% userspace and a bit of getting the thing booted.
>

Really?  I don't think that's true at all - I'd flip it around and say it's
85% booted, with 15% user space today.  I'd probably draw the line at auth,
ingress, and olm as being "the top of the bottom".   OpenShift 4 is 100%
about making that bottom layer just working, rather than being about OLM up.


> Its value is in having a suite of prebuilt, tested, stable, and easily
> installable/upgradable software  with a team of humans that can provide
> support for it. The kernel/bootloader part is really just a means to enable
> #4. No one installs a kernel/os just to get a kernel. This part is
> currently lacking. Where is the equivalent of Redhat/Centos/Fedora for #4.
>
> In the context of OKD, which of these layers is OKD focused on?
>

In the proposal before it was definitely 1-3 (which is the same as the OCP4
path).  If you only care about 4, you're talking more about OLM on top of
Kube, which is more like JBoss or something like homebrew on Mac (you don't
upgrade your Mac via home-brew, but you do consume lots of stuff out there).


>
> Thanks,
> Kevin
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
On Wed, Jul 24, 2019 at 12:18 PM Fox, Kevin M  wrote:

> So, last I heard OpenShift was starting to modularize, so it could load
> the OpenShift parts as extensions to the kube-apiserver? Has this been
> completed? Maybe the idea below of being able to deploy vanilla k8s is
> workable as the OpenShift parts could easily be added on top?
>

OpenShift in 3.x was a core control plane plus 3-5 components.  In 4.x it's
a kube control plane, a bunch of core wiring, the OS, the installer, and
then a good 30 other components.  OpenShift 4 is so far beyond "vanilla
kube + extensions" now that while it's probably technically possible to do
that, it's less of "openshift" and more of "a kube cluster with a few
APIs".  I.e. openshift is machine management, automated updates, integrated
monitoring, etc.


>
> Thanks,
> Kevin
> 
> From: dev-boun...@lists.openshift.redhat.com [
> dev-boun...@lists.openshift.redhat.com] on behalf of Michael Gugino [
> mgug...@redhat.com]
> Sent: Wednesday, July 24, 2019 7:40 AM
> To: Clayton Coleman
> Cc: users; dev
> Subject: Re: Follow up on OKD 4
>
> I tried FCoS prior to the release by using the assembler on github.
> Too much secret sauce in how to actually construct an image.  I
> thought atomic was much more polished, not really sure what the
> value-add of ignition is in this usecase.  Just give me a way to build
> simple image pipelines and I don't need ignition.  To that end, there
> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> ignition to actually install okd.  To me, it seems FCoS was created
> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> actually solves anyone's needs relative to atomic.  It feels like we
> jumped the shark on this one.
>
> I'd like to see OKD be distro-independent.  Obviously Fedora should be
> our primary target (I'd argue Fedora over FCoS), but I think it should
> be true upstream software in the sense that apache2 http server is
> upstream and not distro specific.  To this end, perhaps it makes sense
> to consume k/k instead of openshift/origin for okd.  OKD should be
> free to do wild and crazy things independently of the enterprise
> product.  Perhaps there's a usecase for treating k/k vs
> openshift/origin as a swappable base layer.
>
> It would be nice to have a more native kubernetes place to develop our
> components against so we can upstream them, or otherwise just build a
> solid community around how we think kubernetes should be deployed and
> consumed.  Similar to how Fedora has a package repository, we should
> have a Kubernetes component repository (I realize operatorhub fulfills
> some of this, but I'm talking about a place for OLM and things like
> MCD to live).
>
> I think we could integrate with existing package managers via a
> 'repo-in-a-container' type strategy for those not using ostree.
>
> As far as slack vs IRC, I vote IRC or any free software solution (but
> my preference is IRC because it's simple and I like it).
>
> On Sun, Jul 21, 2019 at 12:28 PM Clayton Coleman 
> wrote:
> >
> >
> >
> > On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
> >>
> >> Once upon a time Freenode #openshift-dev was vibrant with loads of
> activity and publicly available logs. I jumped in asked questions and Red
> Hatters came from the woodwork and some amazing work was done.
> >>
> >> Perfect.
> >>
> >> Slack not so much. Since Monday there have been three comments with two
> reply threads. All this with 524 people. Crickets.
> >>
> >> Please explain how this is better. I’d really love to know why IRC
> ceased. It worked and worked brilliantly.
> >
> >
> > Is your concern about volume or location (irc vs slack)?
> >
> > Re volume: It should be relatively easy to move some common discussion
> types into the #openshift-dev slack channel (especially triage / general
> QA) that might be distributed to other various slack channels today (both
> private and public), and I can take the follow up to look into that.  Some
> of the volume that was previously in IRC moved to these slack channels, but
> they're not anything private (just convenient).
> >
> > Re location:  I don't know how many people want to go back to IRC from
> slack, but that's a fairly easy survey to do here if someone can volunteer
> to drive that, and I can run the same one internally.  Some of it is
> inertia - people have to be in slack sig-* channels - and some of it is
> preferen

Re: Follow up on OKD 4

2019-07-24 Thread Clayton Coleman
On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino  wrote:

> I tried FCoS prior to the release by using the assembler on github.
> Too much secret sauce in how to actually construct an image.  I
> thought atomic was much more polished, not really sure what the
> value-add of ignition is in this usecase.  Just give me a way to build
> simple image pipelines and I don't need ignition.  To that end, there
> should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
> ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
> supporting the mcd-once-from to parse ignition on RHEL, we don't need
> ignition to actually install okd.  To me, it seems FCoS was created
> just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
> actually solves anyone's needs relative to atomic.  It feels like we
> jumped the shark on this one.
>

That’s feedback that’s probably something you should share in the fcos
forums as well.  I will say that I find the OCP + RHEL experience
unsatisfying and that it doesn't truly live up to what RHCOS+OCP can do (since it
lacks the key features like ignition and immutable hosts).  Are you saying
you'd prefer to have more of a "DIY kube bistro" than the "highly
opinionated, totally integrated OKD" proposal?  I think that's a good
question the community should get a chance to weigh in on (in my original
email that was the implicit question - do you want something that looks
like OCP4, or something that is completely different).


>
> I'd like to see OKD be distro-independent.  Obviously Fedora should be
> our primary target (I'd argue Fedora over FCoS), but I think it should
> be true upstream software in the sense that apache2 http server is
> upstream and not distro specific.  To this end, perhaps it makes sense
> to consume k/k instead of openshift/origin for okd.  OKD should be
> free to do wild and crazy things independently of the enterprise
> product.  Perhaps there's a usecase for treating k/k vs
> openshift/origin as a swappable base layer.
>

That’s even more dramatic a change from OKD even as it was in 3.x.  I’d be
happy to see people excited about reusing cvo / mcd and be able to mix and
match, but most of the things here would be a huge investment to build.  In
my original email I might call this the “I want to build my own distro" -
if that's what people want to build, I think we can do things to enable
it.  But it would probably not be "openshift" in the same way.


>
> It would be nice to have a more native kubernetes place to develop our
> components against so we can upstream them, or otherwise just build a
> solid community around how we think kubernetes should be deployed and
> consumed.  Similar to how Fedora has a package repository, we should
> have a Kubernetes component repository (I realize operatorhub fulfills
> some of this, but I'm talking about a place for OLM and things like
> MCD to live).
>

MCD is really tied to the OS.  The idea of a generic MCD seems like it
loses the value of MCD being specific to an OS.

I do think there are two types of components we have - things designed to
work well with kube, and things designed to work well with "openshift the
distro".  The former can be developed against Kube (like operator hub /
olm) but the latter doesn't really make sense to develop against unless it
matches what is being built.  In that vein, OKD4 looking not like OCP4
wouldn't benefit you or the components.


>
> I think we could integrate with existing package managers via a
> 'repo-in-a-container' type strategy for those not using ostree.
>

A big part of openshift 4 is "we're tired of managing machines".  It sounds
like you are arguing for "let people do whatever", which is definitely
valid (but doesn't sound like openshift 4).


>
> As far as slack vs IRC, I vote IRC or any free software solution (but
> my preference is IRC because it's simple and I like it).
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Follow up on OKD 4

2019-07-21 Thread Clayton Coleman
On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:

> Once upon a time Freenode #openshift-dev was vibrant with loads of
> activity and publicly available logs. I jumped in asked questions and Red
> Hatters came from the woodwork and some amazing work was done.
>
> Perfect.
>
> Slack not so much. Since Monday there have been three comments with two
> reply threads. All this with 524 people. Crickets.
>
> Please explain how this is better. I’d really love to know why IRC ceased.
> It worked and worked brilliantly.
>

Is your concern about volume or location (irc vs slack)?

Re volume: It should be relatively easy to move some common discussion
types into the #openshift-dev slack channel (especially triage / general
QA) that might be distributed to other various slack channels today (both
private and public), and I can take the follow up to look into that.  Some
of the volume that was previously in IRC moved to these slack channels, but
they're not anything private (just convenient).

Re location:  I don't know how many people want to go back to IRC from
slack, but that's a fairly easy survey to do here if someone can volunteer
to drive that, and I can run the same one internally.  Some of it is
inertia - people have to be in slack sig-* channels - and some of it is
preference (in that IRC is an inferior experience for long running
communication).


>
> There are mentions of sigs and bits and pieces, but absolutely no
> progress. I fail to see why anyone would want to regress. OCP4 maybe
> brilliant, but as I said in a private email, without upstream there is no
> culture or insurance we’ve come to love from decades of heart and soul.
>
> Ladies and gentlemen, this is essentially getting to the point the
> community is being abandoned. Man years of work acknowledged with the
> roadmap pulled out from under us.
>

I don't think that's a fair characterization, but I understand why you feel
that way and we are working to get the 4.x work moving.  The FCoS team as
mentioned just released their first preview last week, I've been working
with Diane and others to identify who on the team is going to take point on
the design work, and there's a draft in flight that I saw yesterday.  Every
component of OKD4 *besides* the FCoS integration is public and has been
public for months.

I do want to make sure we can get a basic preview up as quickly as possible
- one option I was working on with the legal side was whether we could
offer a short term preview of OKD4 based on top of RHCoS.  That is possible
if folks are willing to accept the terms on try.openshift.com in order to
access it in the very short term (and then once FCoS is available that
would not be necessary).  If that's an option you or anyone on this thread
are interested in please let me know, just as something we can do to speed
up.


>
> I completely understand the disruption caused by the acquisition. But,
> after kicking the tyres and our meeting a few weeks back, it’s been pretty
> quiet. The clock is ticking on corporate long-term strategies. Some of
> those corporates spent plenty of dosh on licensing OCP and hiring
> consultants to implement.
>

> Red Hat need to lead from the front. Get IRC revived, throw us a bone, and
> have us put our money where our mouth is — we’ll get involved. We’re
> begging for it.
>
> Until then we’re running out of patience via clientele and will need to
> start a community effort perhaps by forking OKD3 and integrating upstream.
> I am not interested in doing that. We shouldn’t have to.
>

In the spirit of full transparency, FCoS integrated into OKD is going to
take several months to get to the point where it meets the quality bar I'd
expect for OKD4.  If that timeframe doesn't work for folks, we can
definitely consider other options like having RHCoS availability behind a
terms agreement, a franken-OKD without host integration (which might take
just as long to get and not really be a step forward for folks vs 3), or
other, more dramatic options.  Have folks given FCoS a try this week?
https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/.
That's a great place to get started

As always PRs and fixes to 3.x will continue to be welcomed and that effort
continues unabated.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Follow up on OKD 4

2019-07-19 Thread Clayton Coleman
The kube #openshift-dev slack might also make sense, since we have 518
people there right now

On Fri, Jul 19, 2019 at 12:46 PM Christian Glombek 
wrote:

> Hi everyone,
>
> first of all, I'd like to thank Clayton for kicking this off!
>
> As I only just joined this ML, let me quickly introduce myself:
>
> I am an Associate Software Engineer on the OpenShift
> machine-config-operator (mco) team and I'm based out of Berlin, Germany.
> Last year, I participated in Google Summer of Code as a student with
> Fedora IoT and joined Red Hat shortly thereafter to work on the Fedora
> CoreOS (FCOS) team.
> I joined the MCO team when it was established earlier this year.
>
> Having been a Fedora/Atomic community member for some years, I'm a strong
> proponent of using FCOS as base OS for OKD and would like to see it enabled
> :)
> As I work on the team that looks after the MCO, which is one of the parts
> of OpenShift that will need some adaptation in order to support another
> base OS, I am confident I can help with contributions there
> (of course I don't want to shut the door for other OSes to be used as base
> if people are interested in that :).
>
> Proposal: Create WG and hold regular meetings
>
> I'd like to propose the creation of the OKD Working Group that will hold
> bi-weekly meetings.
> (or should we call it a SIG? Also open to suggestions to find the right
> venue: IRC?, OpenShift Commons Slack?).
>
> I'll survey some people in the coming days to find a suitable meeting time.
>
> If you have any feedback or suggestions, please feel free to reach out,
> either via this list or personally!
> I can be found as lorbus on IRC/Fedora, @lorbus42 on Twitter, or simply
> via email :)
>
> I'll send out more info here ASAP. Stay tuned!
>
> With kind regards
>
> CHRISTIAN GLOMBEK
> Associate Software Engineer
>
> Red Hat GmbH, registered seat: Grasbrunn
> Commercial register: Amtsgericht Muenchen, HRB 153243
> Managing directors: Charles Cachera, Michael O'Neill, Thomas Savage, Eric 
> Shander
>
>
>
> On Wed, Jul 17, 2019 at 10:45 PM Clayton Coleman 
> wrote:
>
>> Thanks for everyone who provided feedback over the last few weeks.
>> There's been a lot of good feedback, including some things I'll try to
>> capture here:
>>
>> * More structured working groups would be good
>> * Better public roadmap
>> * Concrete schedule for OKD 4
>> * Concrete proposal for OKD 4
>>
>> I've heard generally positive comments about the suggestions and
>> philosophy in the last email, with a desire for more details around what
>> the actual steps might look like, so I think it's safe to say that the idea
>> of "continuously up to date Kubernetes distribution" resonated.  We'll
>> continue to take feedback along this direction (private or public).
>>
>> Since 4 was the kickoff for this discussion, and with the recent release
>> of the Fedora CoreOS beta (
>> https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/) figuring
>> prominently in the discussions so far, I got some volunteers from that team
>> to take point on setting up a working group (SIG?) around the initial level
>> of integration and drafting a proposal.
>>
>> Steve and Christian have both been working on Fedora CoreOS and
>> graciously agreed to help drive the next steps on Fedora CoreOS and OKD
>> potential integration into a proposal.  There's a rough level draft doc
>> they plan to share - but for now I will turn this over to them and they'll
>> help organize time / forum / process for kicking off this effort.  As that
>> continues, we'll identify new SIGs to spawn off as necessary to cover other
>> topics, including initial CI and release automation to deliver any
>> necessary changes.
>>
>> Thanks to everyone who gave feedback, and stay tuned here for more!
>>
> ___
> users mailing list
> us...@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [v4]: v4.1.4 is using internal registry, is this a bug?

2019-07-18 Thread Clayton Coleman
On Jul 18, 2019, at 6:24 PM, Daniel Comnea  wrote:



On Thu, Jul 18, 2019 at 10:08 PM Clayton Coleman 
wrote:

> We generally bump "latest" symlink once it's in the stable channel, which
> 4.1.6 is not in.  4.1.6 is still considered pre-release.
>
[DC]: I looked at [1] and so I assumed it is stable since it is part of the
4-stable section.

[1]
https://openshift-release.svc.ci.openshift.org/releasestream/4-stable/release/4.1.6


That page has nothing to do with officially going into stable channels.  If
the cluster doesn’t show the update or “latest” doesn’t point to it, it’s
not stable yet.
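
For reference, the two places I usually check from the command line (the
channel name and the hosted update service endpoint below are assumptions
for a 4.1 cluster; adjust them to whatever you are running):

# what the cluster itself is currently being offered
oc adm upgrade

# what the public update service advertises for the stable-4.1 channel
curl -sH 'Accept: application/json' \
  'https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.1' \
  | python -c 'import json,sys; print([n["version"] for n in json.load(sys.stdin)["nodes"]])'

If 4.1.6 doesn't show up in either of those, it hasn't been promoted yet.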


>
> For your first error message, which installer binary were you using?  Can
> you link to it directly?
>
[DC]: sure thing, I downloaded it from try.openshift.com, which took me to
https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.1.4/openshift-install-mac-4.1.4.tar.gz


Are you positive you were installing from that binary?  I just double
checked locally and that listed a quay.io release image.


> On Thu, Jul 18, 2019 at 3:55 PM Daniel Comnea 
> wrote:
>
>> Hi,
>>
>> Trying a fresh deployment by downloading a new secret together with
>> the installer from try.openshift.com, I end up in a failure state with
>> the bootstrap node, caused by
>>
>> *error pulling image
>>> "registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a":
>>> unable to pull
>>> registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a:
>>> unable to pull image: Error determining manifest MIME type for
>>> docker://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a:
>>> Error reading manifest
>>> sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a in
>>> registry.svc.ci.openshift.org/ocp/release: unauthorized:
>>> authentication required*
>>>
>>
>> Manually trying from within the bootstrap node to check if it works... I get
>> the same negative result
>>
>> *skopeo inspect --authfile /root/.docker/config.json
>>> docker://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a*
>>>
>>
>> Switching to installer 4.1.0/ 4.1.3/ 4.1.6 with the same pull secret I am
>> able to get the bootstrap node up with its MCO and the other pods up.
>>
>> One interesting bit is that 4.1.4 points to a release image hosted
>> internally
>>
>>>
>>>
>>>
>>> *./openshift-install version
>>> ./openshift-install v4.1.4-201906271212-dirty
>>> built from commit bf47826c077d16798c556b1bd143a5bbfac14271
>>> release image registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a*
>>>
>>
>> but 4.1.6 for example points to quay.io (as expected).
>> Is this a bug?
>>
>> On a slightly different note, it would be nice to update try.openshift.com
>> to the latest stable 4.1 release, which is .6 and not .4.
>>
>> Cheers,
>> Dani
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [v4]: v4.1.4 is using internal registry, is this a bug?

2019-07-18 Thread Clayton Coleman
We generally bump "latest" symlink once it's in the stable channel, which
4.1.6 is not in.  4.1.6 is still considered pre-release.

For your first error message, which installer binary were you using?  Can
you link to it directly?

On Thu, Jul 18, 2019 at 3:55 PM Daniel Comnea  wrote:

> Hi,
>
> Trying a fresh deployment by downloading a new secret together with
> the installer from try.openshift.com, I end up in a failure state with
> the bootstrap node, caused by
>
> *error pulling image
>> "registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a":
>> unable to pull
>> registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a:
>> unable to pull image: Error determining manifest MIME type for
>> docker://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a:
>> Error reading manifest
>> sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a in
>> registry.svc.ci.openshift.org/ocp/release: unauthorized:
>> authentication required*
>>
>
> Manually trying from within the bootstrap node to check if it works... I get
> the same negative result
>
> *skopeo inspect --authfile /root/.docker/config.json
>> docker://registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a*
>>
>
> Switching to installer 4.1.0/ 4.1.3/ 4.1.6 with the same pull secret I am
> able to get the bootstrap node up with its MCO and the other pods up.
>
> One interesting bit is that 4.1.4 points to a release image hosted
> internally
>
>>
>>
>>
>> *./openshift-install version
>> ./openshift-install v4.1.4-201906271212-dirty
>> built from commit bf47826c077d16798c556b1bd143a5bbfac14271
>> release image registry.svc.ci.openshift.org/ocp/release@sha256:a6c177eb007d20bb00bfd8f829e99bd40137167480112bd5ae1c25e40a4a163a*
>>
>
> but 4.1.6 for example points to quay.io (as expected).
> Is this a bug?
>
> On a slightly different note, it would be nice to update try.openshift.com
> to the latest stable 4.1 release, which is .6 and not .4.
>
> Cheers,
> Dani
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Follow up on OKD 4

2019-07-17 Thread Clayton Coleman
Thanks for everyone who provided feedback over the last few weeks.  There's
been a lot of good feedback, including some things I'll try to capture here:

* More structured working groups would be good
* Better public roadmap
* Concrete schedule for OKD 4
* Concrete proposal for OKD 4

I've heard generally positive comments about the suggestions and philosophy
in the last email, with a desire for more details around what the actual
steps might look like, so I think it's safe to say that the idea of
"continuously up to date Kubernetes distribution" resonated.  We'll
continue to take feedback along this direction (private or public).

Since 4 was the kickoff for this discussion, and with the recent release of
the Fedora CoreOS beta (
https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/) figuring
prominently in the discussions so far, I got some volunteers from that team
to take point on setting up a working group (SIG?) around the initial level
of integration and drafting a proposal.

Steve and Christian have both been working on Fedora CoreOS and graciously
agreed to help drive the next steps on Fedora CoreOS and OKD potential
integration into a proposal.  There's a rough level draft doc they plan to
share - but for now I will turn this over to them and they'll help organize
time / forum / process for kicking off this effort.  As that continues,
we'll identify new SIGs to spawn off as necessary to cover other topics,
including initial CI and release automation to deliver any necessary
changes.

Thanks to everyone who gave feedback, and stay tuned here for more!
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Proposal: Deploy and switch to Discourse

2019-07-16 Thread Clayton Coleman
It would probably be good to solicit feedback via a survey - gather
suggestions, assess how many people prefer the existing communication
mechanisms we have, etc.

On Tue, Jul 16, 2019 at 12:31 PM Jason Brooks  wrote:

> On Fri, Jul 12, 2019 at 7:20 AM Neal Gompa  wrote:
> >
> > On Fri, Jul 12, 2019 at 10:11 AM Colin Walters 
> wrote:
> > >
> > > Hi,
> > >
> > > I think the Common's use of Slack is not a good match for "support".
> Requiring an invitation is also an impediment to quickly asking questions.
> Further Slack is proprietary, and also any discussion there won't be easily
> found by Google.
> > >
> >
> > I agree here. I deeply dislike that we use Slack for that. And Slack
> > is terrible for a11y, too.
> >
> > > On the other hand we have these mailing lists, which are fine but
> they're traditional mailing lists with all the tradeoffs there.
> > >
> > > I propose we shut down the user@ and dev@ lists and deploy a
> Discourse instance, which is what the cool kids ;) are doing:
> > > https://discussion.fedoraproject.org/
> > > http://internals.rust-lang.org/
> > > etc.
> > >
> > > Discourse is IMO really nice because for people who want a mailing
> list it can act like that, but for people who both want a modern web UI and
> most importantly just want to drop in occasionally and not be committed to
> receiving a stream of email, it works a lot better.  Also importantly to me
> it's FOSS.
> > >
> > > I would also personally lean towards not using Slack too but I see
> that as a separate discussion - it's real time, and that's a distinct thing
> from discourse.  If we get a lot of momentum in our Discourse though over
> Slack we can consider what to do later.
> > >
> >
> > I would rather not see us move to Discourse for the mailing list
> > experience. I'd propose we upgrade to Mailman 3 with HyperKitty, as
> > other communities around us have done. The oVirt, Ceph, and Podman
> > communities already use it.
>
> My team in Red Hat's Open Source Program Office hosts Mailman 3 for
> oVirt, Podman and other projects. We'd be happy to help with hosting
> and migration if the OpenShift community wants to move to Mailman 3
> and Hyperkitty.
>
> Jason
>
> >
> > Fedora didn't shut down its users@ list when it deployed
> > discussions.fp.o. And adoption of Discourse in Fedora hasn't been very
> > high outside of the Silverblue/CoreOS bubble.
> >
> > I'm not opposed to the idea of having an additional channel for user
> > support with Discourse on okd.io, though.
> >
> >
> >
> > --
> > 真実はいつも一つ!/ Always, there's only one truth!
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Proposal: Deploy and switch to Discourse

2019-07-12 Thread Clayton Coleman
Another note - we reuse the Kubernetes slack channel, and we would
have no plans to remove that channel because we get a lot of joint
overlap with kube development.  Adding more channels to discuss means
people just have to log into more places.

> On Jul 12, 2019, at 11:48 AM, Colin Walters  wrote:
>
>
>
> On Fri, Jul 12, 2019, at 10:19 AM, Neal Gompa wrote:
>
>
>> Fedora didn't shut down its users@ list when it deployed
>> discussions.fp.o. And adoption of Discourse in Fedora hasn't been very
>> high outside of the Silverblue/CoreOS bubble.
>
> Eh, but the traditional Fedora community has a pretty high level of, hmm..how 
> to describe it...people likely to be "power email" users, the types of people 
> who know how to set up email filtering, may even run their own email servers 
> in 2019, etc.   That said  I think Discourse is "okay enough" for those 
> people, even though it's more "web page" than "email list" (where as mailman 
> 3 is more the other way around).
>
>> I'm not opposed to the idea of having an additional channel for user
>> support with Discourse on okd.io, though.
>
> We already have too many channels, adding more isn't going to help.   The 
> existing lists aren't high traffic, and my high level impression is the 
> people posting here are going to be fine with Discourse.
>
> The reason I mentioned commons Slack first is because it's the primary 
> entrypoint, and I think Discourse is a better *primary* entrypoint.
>
> (And not responding directly to Neil here) - Let's please keep proposals for 
> any changes to the *real time* stuff like Slack/IRC out of this discussion 
> because I'd like to focus on a clear and "direct" goal (Discourse replacing 
> mailman) - anything else scope creeps this a lot.
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Proposal: Deploy and switch to Discourse

2019-07-12 Thread Clayton Coleman
We’ve historically used StackOverflow for threaded human questions /
response problems.  Discourse feels like it would overlap a lot with
that, especially since SO is still usually better for search engines

> On Jul 12, 2019, at 10:18 AM, Neal Gompa  wrote:
>
>> On Fri, Jul 12, 2019 at 10:11 AM Colin Walters  wrote:
>>
>> Hi,
>>
>> I think the Common's use of Slack is not a good match for "support".  
>> Requiring an invitation is also an impediment to quickly asking questions.  
>> Further Slack is proprietary, and also any discussion there won't be easily 
>> found by Google.
>>
>
> I agree here. I deeply dislike that we use Slack for that. And Slack
> is terrible for a11y, too.
>
>> On the other hand we have these mailing lists, which are fine but they're 
>> traditional mailing lists with all the tradeoffs there.
>>
>> I propose we shut down the user@ and dev@ lists and deploy a Discourse 
>> instance, which is what the cool kids ;) are doing:
>> https://discussion.fedoraproject.org/
>> http://internals.rust-lang.org/
>> etc.
>>
>> Discourse is IMO really nice because for people who want a mailing list it 
>> can act like that, but for people who both want a modern web UI and most 
>> importantly just want to drop in occasionally and not be committed to 
>> receiving a stream of email, it works a lot better.  Also importantly to me 
>> it's FOSS.
>>
>> I would also personally lean towards not using Slack too but I see that as a 
>> separate discussion - it's real time, and that's a distinct thing from 
>> discourse.  If we get a lot of momentum in our Discourse though over Slack 
>> we can consider what to do later.
>>
>
> I would rather not see us move to Discourse for the mailing list
> experience. I'd propose we upgrade to Mailman 3 with HyperKitty, as
> other communities around us have done. The oVirt, Ceph, and Podman
> communities already use it.
>
> Fedora didn't shut down its users@ list when it deployed
> discussions.fp.o. And adoption of Discourse in Fedora hasn't been very
> high outside of the Silverblue/CoreOS bubble.
>
> I'm not opposed to the idea of having an additional channel for user
> support with Discourse on okd.io, though.
>
>
>
> --
> 真実はいつも一つ!/ Always, there's only one truth!
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OKD 4 - A Modest Proposal

2019-06-28 Thread Clayton Coleman
On Jun 28, 2019, at 3:38 AM, Daniel Comnea  wrote:



On Fri, Jun 28, 2019 at 4:58 AM Clayton Coleman  wrote:

> > On Jun 26, 2019, at 1:08 PM, Colin Walters  wrote:
> >
> >
> >
> > On Thu, Jun 20, 2019, at 5:20 PM, Clayton Coleman wrote:
> >
> >
> >> Because the operating system integration is so critical, we need to
> >> make sure that the major components (ostree, ignition, and the kubelet)
> >> are tied together in a CoreOS distribution that can be quickly
> >> refreshed with OKD - the Fedora CoreOS project is close to being ready
> >> for integration in our CI, so that’s a natural place to start. That
> >> represents the biggest technical obstacle that I’m aware of to get our
> >> first versions of OKD4 out (the CI systems are currently testing on top
> >> of RHEL CoreOS but we have PoCs of how to take an arbitrary ostree
> >> based distro and slot it in).
> >
> > The tricky thing here is...if we want this to work the same as OpenShift
> 4/OCP
> > with RHEL CoreOS, then what we're really talking about here is a
> *derivative*
> > of FCOS that for example embeds the kubelet from OKD.  And short term
> > it will need to use Ignition spec 2.  There may be other things I'm
> forgetting.
>
> Or we have a branch of mcd that works with ignition 3 before the main
> branch switches.
>

[DC]: wouldn't this be more than just MCD ?e.g - change in installer too
[1] to import the v3 spec and work with it

[1]
https://github.com/openshift/installer/blob/master/pkg/asset/ignition/machine/node.go#L7


Yes, but possibly not a large change or one that is a “use ignition3” flag
or similar.


> I don’t know that it has to work exactly the same, but obviously the
> closer the better.
>
> >
> > Concretely for example, OKDFCOS (to use the obvious if unwieldy acronym)
> > would need to have its own uploaded "bootimages" (i.e. AMIs, PXE media
> etc)
> > that are own its own version number/lifecycle distinct from (but derived
> from)
> > FCOS (and OKD).
>
> Or it just pivots.  Pivots aren’t bad.
>
> >
> > This is completely possible (anything is in software) but the current
> team is
> > working on a lot of things and introducing a 3rd stream for us to
> maintain would
> > be a not at all small cost.  On the other hand, the benefit of doing so
> (e.g.
> > early upstream kernel/selinux-policy/systemd/podman integration testing
> > with kubernetes/OKD) might be worth it alone.
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OKD 4 - A Modest Proposal

2019-06-27 Thread Clayton Coleman
> On Jun 26, 2019, at 1:08 PM, Colin Walters  wrote:
>
>
>
> On Thu, Jun 20, 2019, at 5:20 PM, Clayton Coleman wrote:
>
>
>> Because the operating system integration is so critical, we need to
>> make sure that the major components (ostree, ignition, and the kubelet)
>> are tied together in a CoreOS distribution that can be quickly
>> refreshed with OKD - the Fedora CoreOS project is close to being ready
>> for integration in our CI, so that’s a natural place to start. That
>> represents the biggest technical obstacle that I’m aware of to get our
>> first versions of OKD4 out (the CI systems are currently testing on top
>> of RHEL CoreOS but we have PoCs of how to take an arbitrary ostree
>> based distro and slot it in).
>
> The tricky thing here is...if we want this to work the same as OpenShift 4/OCP
> with RHEL CoreOS, then what we're really talking about here is a *derivative*
> of FCOS that for example embeds the kubelet from OKD.  And short term
> it will need to use Ignition spec 2.  There may be other things I'm 
> forgetting.

Or we have a branch of mcd that works with ignition 3 before the main
branch switches.

I don’t know that it has to work exactly the same, but obviously the
closer the better.

>
> Concretely for example, OKDFCOS (to use the obvious if unwieldy acronym)
> would need to have its own uploaded "bootimages" (i.e. AMIs, PXE media etc)
> that are own its own version number/lifecycle distinct from (but derived from)
> FCOS (and OKD).

Or it just pivots.  Pivots aren’t bad.

>
> This is completely possible (anything is in software) but the current team is
> working on a lot of things and introducing a 3rd stream for us to maintain 
> would
> be a not at all small cost.  On the other hand, the benefit of doing so (e.g.
> early upstream kernel/selinux-policy/systemd/podman integration testing
> with kubernetes/OKD) might be worth it alone.
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


OKD 4 - A Modest Proposal

2019-06-20 Thread Clayton Coleman
https://commons.openshift.org/events.html#event%7Cokd4-road-map-release-update-with-clayton-coleman-red-hat%7C960
to further explore these topics with the wider community. I hope you’ll
join the conversation and look forward to hearing from the others across
the community.  Meeting details here: http://bit.ly/OKD4ReleaseUpdate

Thank you for your continued support and for being a part of the OKD
community,

Clayton Coleman

Kubernetes and OKD contributor
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [4.x]: thoughts on how folks should triage and open issues on the right repos?

2019-06-17 Thread Clayton Coleman
Historically we also had a bugzilla Origin component that we used.  One
challenge with GitHub issues is that they lack some of the tools that
bugzilla has for triage and management, and so they always ended up being
somewhat neglected (despite the best efforts of many people).

I definitely agree it's hard to pick one component repo to represent the
new model for sure.

On Mon, Jun 17, 2019 at 6:22 AM Daniel Comnea  wrote:

> Hi,
>
> In 3.x folks used to open issues on Origin/ openshift-ansible repos or BZ
> if it was related to OCP.
>
> In 4.x the game changed a bit where we have many repos and so my question
> is:
>
> do you have any suggestion/ preference on where folks should open issues
> and how will they know / be able to triage which issue goes into which git
> repo ?
>
> Sometimes the installer repo is used as the main place to open issues;
> however, that is not efficient, but then again I can understand why folks do
> it since it is the only interaction they are aware of.
>
> One suggestion I have would be to somehow have a mapping between the
> features in v4 and the operators, as well as a dependency graph of all the
> operators. Having that inside a GitHub issue template should help folks
> understand which repo to open the GitHub issue on (it could be that not
> everyone will be comfortable with it, but it is a start, I think).
>
> Dani
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [4.x]: any future plans for proxy-mode: ipvs ?

2019-06-11 Thread Clayton Coleman
DSR is definitely desirable for many use cases.

Re load balancer algorithms I had forgotten about those - you might get
unpredictable balancing across servers, but I could definitely see wanting
to bias to local endpoints (for example)

On Jun 10, 2019, at 7:42 PM, Daniel Comnea  wrote:

Hi Clayton, Dan,

thanks for taking the time to respond, much appreciated.

On Mon, Jun 10, 2019 at 5:46 PM Dan Williams  wrote:

> On Sat, 2019-06-08 at 14:52 -0700, Clayton Coleman wrote:
> > OVN implements kube services on nodes without kube-proxy - is there a
> > specific feature gap between ipvs and ovn services you see that needs
> > to be filled?
>
> I'd love to hear the answer to that question too :)
>
> [DC]: without knowing the OVN lb implementation in detail, I doubt I can
call it a gap ;) That said, let me give our use case, which we used back in
1.5 and are still using in 3.7.
Being in the video processing/encoding space, we have some app pods which
need to talk to hardware storage data plane IPs over which various
video segments (different chunk sizes: 2/6 seconds, and different
bitrates) are written/pulled.
The pods talk to a K8s service (2 ports) which is mapped to a big
endpoint list (300-600 endpoint IPs). As such (if I remember correctly) we
ended up with # of iptables rules = # of pods (2000) x 2 (K8s service
ports) x # of endpoints.

Now what we've seen in the past was that the load-balancing traffic
distribution was not hitting all endpoints (some were getting hit harder
than others).
As such we thought that maybe, with ipvs getting stable in K8s 1.12+, we
should try and see:

   - if various ipvs load balancing algorithm will provide better
   alternatives
   - if ipvs DSR will make any improvements
   - refreshing the iptables rules will be faster
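
To make the first two bullets concrete, this is roughly the knob we'd want to
turn - a minimal sketch of the upstream kube-proxy configuration (not
something the current openshift-sdn node consumes directly, so treat it
purely as an illustration):

# kube-proxy in IPVS mode with a least-connection scheduler
cat > kube-proxy-config.yaml <<'EOF'
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  scheduler: lc    # least-connection; rr, wrr and sh (source hash) are other options
EOF

# and on a node, to see how IPVS is actually spreading traffic across the endpoints:
ipvsadm -Ln --stats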


The proxy implementation is usually tightly coupled to the network
> plugin implementation. Some network plugins use kube-proxy while others
> have their own internal load balancing implementation, like ovn-
> kubernetes.
>
> The largest issue we've seen with the iptables-based kube-proxy (as
> opposed to IPVS-based kube-proxy) is iptables contention, and since
> OVN's load-balancing/proxy implementation does not use iptables this is
> not a concern for OVN.
>
[DC]:  @Dan - would you mind pointing me to the code which deals with the
OVN lb logic? I looked in [1] but I guess I'm missing something else
(maybe looking in the wrong repo)?

[1]
https://github.com/openshift/origin/blob/master/pkg/cmd/openshift-sdn/proxy.go

Independently of that, we are planning to have a standalone kube-proxy
> daemonset that 3rd party plugins (like Calico) can use which could be
> run in IPVS mode if needed:
>
> https://github.com/openshift/release/pull/3711
>
[DC]: I guess this is based on [2] and if so, would you mind (for my own
curiosity) helping me understand the difference between the OpenShiftSDN and
OVNKubernetes networkTypes? What new problems does the new OVNKubernetes
type solve?

[2] https://github.com/ovn-org/ovn-kubernetes

That's waiting on Clayton for an LGTM for the mirroring bits (hint hint
> :)
>
> Dan
>
> > > On Jun 8, 2019, at 4:08 PM, Daniel Comnea 
> > > wrote:
> > >
> > > Hi,
> > >
> > > Are there any future plans in 4.x lifecycle to decouple kube-proxy
> > > from OVN and allow setting/ running K8s upstream kube-proxy in ipvs
> > > mode ?
> > >
> > > Cheers,
> > > Dani
> > > ___
> > > dev mailing list
> > > dev@lists.openshift.redhat.com
> > > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >
> >
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OKD 4.x

2019-06-06 Thread Clayton Coleman
The cluster bootstrap process uses ignition to ensure masters join the
cluster, and there are a number of aspects of how upgrades of control plane
nodes work that assume the ability to use ostree to update and pivot the
nodes during boot to the correct content set.

I’m sure it’s possible to get a cluster working without a coreos-style OS
(ignition, pivot, ostree), but it would lack the ability to update itself,
so it wouldn’t really be a coherent whole.  The work in fcos is probably
the place to start (we still have to spec out the details, which is on me)
just because it works together with the cluster rather than being distinct.

On Jun 6, 2019, at 6:00 AM, Thode Jocelyn < .th...@elca.ch> wrote:

Hi Clayton,



Will there still be support for CentOS 8 without the CoreOS spin? Or will
OKD/OpenShift 4 only work with a CoreOS-based system?



Cheers

Jocelyn Thode



*From:* dev-boun...@lists.openshift.redhat.com <
dev-boun...@lists.openshift.redhat.com> *On Behalf Of *Clayton Coleman
*Sent:* jeudi, 6 juin 2019 12:15
*To:* Alix ander 
*Cc:* OpenShift Users List ;
dev@lists.openshift.redhat.com
*Subject:* Re: OKD 4.x



We’re currently working on how Fedora CoreOS will integrate into OKD.
There’s a fair chunk of work that needs to be done and FCoS has a broader
mission than RHCoS does, so its a bit further behind (since OpenShift 4 /
OKD 4 require an OS with ignition and ostree).  Stay tuned, I was going to
write out a work plan for this and share it here.  There’s no current plan
for a centos version, since there’s a lot of interest in FCoS for a newer
kernel.



Until that happens, try.openshift.com makes it easy to get an evaluation of
OCP4 for test and development even if you don’t have a subscription.






On Jun 6, 2019, at 6:04 AM, Alix ander  wrote:

hey guys,



> OKD is upstream code base upon which Red Hat OpenShift Online and Red Hat
OpenShift Container Platform are built



[red hat openshift 4 is now available](
https://blog.openshift.com/red-hat-openshift-4-is-now-available/)



Does anybody know if there is any plan for OKD 4.x? I am unable to find
any information regarding OKD, especially for bare metal.

Here
<https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html>
is documentation for installing the container platform on bare metal, which
uses Red Hat Enterprise Linux CoreOS (RHCOS) and presupposes a Red Hat
subscription. It will not be possible, though, to use RHCOS on OKD - is there
something like CentOS for CoreOS?



Cheers,
Alix

___
users mailing list
us...@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OKD 4.x

2019-06-06 Thread Clayton Coleman
We’re currently working on how Fedora CoreOS will integrate into OKD.
There’s a fair chunk of work that needs to be done and FCoS has a broader
mission than RHCoS does, so its a bit further behind (since OpenShift 4 /
OKD 4 require an OS with ignition and ostree).  Stay tuned, I was going to
write out a work plan for this and share it here.  There’s no current plan
for a centos version, since there’s a lot of interest in FCoS for a newer
kernel.

Until that happens, try.openshift.com makes it easy to get an evaluation of
OCP4 for test and development even if you don’t have a subscription.



On Jun 6, 2019, at 6:04 AM, Alix ander  wrote:

hey guys,


> OKD is upstream code base upon which Red Hat OpenShift Online and Red Hat
OpenShift Container Platform are built


[red hat openshift 4 is now available](
https://blog.openshift.com/red-hat-openshift-4-is-now-available/)


Does anybody know if there is any plan for OKD 4.x? I am unable to find
any information regarding OKD, especially for bare metal.

Here
is documentation for installing the container platform on bare metal, which
uses Red Hat Enterprise Linux CoreOS (RHCOS) and presupposes a Red Hat
subscription. It will not be possible, though, to use RHCOS on OKD - is there
something like CentOS for CoreOS?



Cheers,
Alix

___
users mailing list
us...@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Openshift Origin Haproxy metrics exporter source

2019-02-08 Thread Clayton Coleman
https://github.com/openshift/router/blob/master/pkg/router/metrics/haproxy/haproxy.go

Is what converts haproxy stats from the stats socket to prometheus metrics
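
If you want to see the raw numbers that code parses, you can query the stats
socket directly inside a router pod.  A rough sketch - the socket path, the
presence of socat in the image, and the dc/router name are all assumptions
that may differ in your cluster:

oc rsh dc/router bash -c \
  "echo 'show stat' | socat stdio /var/lib/haproxy/run/haproxy.sock" \
  | cut -d, -f1,2,9,10    # pxname, svname, bin (bytes in), bout (bytes out)

As far as I recall, haproxy_server_bytes_out_total is essentially the
per-server bout column, with the exporter carrying the counter forward across
haproxy reloads.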

On Feb 8, 2019, at 8:48 AM, Gowtham Sundara <
gowtham.sund...@rapyuta-robotics.com> wrote:

Hello,
May I know where the source code for the OpenShift haproxy router metrics
Prometheus exporter binary lives?
I would like to know more about how the `haproxy_server_bytes_out_total` is
calculated.
-- 
Gowtham Sundara
Site Reliability Engineer

Rapyuta Robotics “empowering lives with connected machines”
rapyuta-robotics.com 

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


FYI: As of 4.0, all OKD images are being pushed to quay

2019-01-22 Thread Clayton Coleman
As we've grown ever more images for 4.0 we are now publishing images
exclusively to quay for OKD builds.  A subset is still being mirrored to
docker but the 4.0 versions will be discontinued over time, so please don't
rely on them.

All images published by CI for 4.0 onward are at:

quay.io/openshift/origin-COMPONENT:v4.0

I'll be decommissioning mirrors as teams move their example configs over to
quay.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: origin-console

2018-12-18 Thread Clayton Coleman
https://github.com/openshift/console

On Dec 18, 2018, at 3:54 PM, Neale Ferguson  wrote:

From what sources is the origin-console:v3.11.0 image built?



Neale

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Openshift Origin builds for CVE-2018-1002105

2018-12-06 Thread Clayton Coleman
At this point I didn’t have a plan to backport to 3.9.  The CI infra has
atrophied somewhat and so I can’t be sure it can be done.

On Dec 6, 2018, at 10:28 AM, Daniel Comnea  wrote:



On Thu, Dec 6, 2018 at 3:25 PM Gowtham Sundara <
gowtham.sund...@rapyuta-robotics.com> wrote:

> Hello,
> Is there a ci build for version 3.9? (can't seem to find one, so I am
> assuming not). Could you please cut a minor release for 3.9 too as Daniel
> suggested.
>
> [DC]: the K8s fix was backported down to 1.10 and so our RH fellows did the
same. I doubt there will be anything for < 3.10 (not in OKD, I suspect)


> Thanks
>
> On Thu, Dec 6, 2018 at 8:50 PM Daniel Comnea 
> wrote:
>
>> Cheers for chime in Clayton.
>>
>> In this case you fancy cutting new minor release for 3.10/ 3.11 and then
>> i'll take it over?
>>
>> Dani
>>
>> On Thu, Dec 6, 2018 at 3:18 PM Clayton Coleman 
>> wrote:
>>
These are the correct PRs
>>>
>>> On Dec 6, 2018, at 10:14 AM, Daniel Comnea 
>>> wrote:
>>>
>>> I'll chime in to get some clarity 
>>>
>>> The CentOS rpms are built by the PaaS SIG and are based on the Origin
>>> tag release.
>>> As such in order to have new origin rpms built/ pushed into CentOS repos
>>> we will need:
>>>
>>>
>>>- the fix to make it into 3.11/3.10 Origin branches => done [1]
>>>however i am just guessing those are the right PRs, someone from RH
>>>will need to confirm/ refute
>>>- a new Origin release to be cut for 3.11/3.10
>>>- then i can start with the PaaS Sig work
>>>
>>> You can also see some details on [2] but again i have not validated
>>> myself
>>>
>>> Hope this get some clarity
>>>
>>>
>>> Dani
>>>
>>> [1]
>>> https://github.com/openshift/origin/pull/21600 (3.11)
>>> https://github.com/openshift/origin/pull/21601 (3.10)
>>>
>>> [2] https://github.com/openshift/origin/issues/21606
>>>
>>> On Thu, Dec 6, 2018 at 10:07 AM Mateus Caruccio <
>>> mateus.caruc...@getupcloud.com> wrote:
>>>
>>>> On top of that is anyone here building publicly accessible rpms/srpms?
>>>>
>>>>
>>>> Em Qui, 6 de dez de 2018 07:36, Gowtham Sundara <
>>>> gowtham.sund...@rapyuta-robotics.com escreveu:
>>>>
>>>>> Hello,
>>>>> The RPMs for Openshift origin need to be updated because of the recent
>>>>> vulnerability. Is there a release schedule for this?
>>>>>
>>>>> --
>>>>> Gowtham Sundara
>>>>> Site Reliability Engineer
>>>>>
>>>>> Rapyuta Robotics “empowering lives with connected machines”
>>>>> rapyuta-robotics.com <https://www.rapyuta-robotics.com/>
>>>>> ___
>>>>> dev mailing list
>>>>> dev@lists.openshift.redhat.com
>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>>
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
>
> --
> Gowtham Sundara
> Site Reliability Engineer
>
> Rapyuta Robotics “empowering lives with connected machines”
> rapyuta-robotics.com <https://www.rapyuta-robotics.com/>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Openshift Origin builds for CVE-2018-1002105

2018-12-06 Thread Clayton Coleman
These are the correct PRs

On Dec 6, 2018, at 10:14 AM, Daniel Comnea  wrote:

I'll chime in to get some clarity 

The CentOS rpms are built by the PaaS SIG and are based on the Origin tag
release.
As such in order to have new origin rpms built/ pushed into CentOS repos we
will need:


   - the fix to make it into 3.11/3.10 Origin branches => done [1] however
   i am just guessing those are the right PRs, someone from RH will need to
   confirm/ refute
   - a new Origin release to be cut for 3.11/3.10
   - then i can start with the PaaS Sig work

You can also see some details on [2] but again i have not validated myself

Hope this get some clarity


Dani

[1]
https://github.com/openshift/origin/pull/21600 (3.11)
https://github.com/openshift/origin/pull/21601 (3.10)

[2] https://github.com/openshift/origin/issues/21606

On Thu, Dec 6, 2018 at 10:07 AM Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> On top of that is anyone here building publicly accessible rpms/srpms?
>
>
> Em Qui, 6 de dez de 2018 07:36, Gowtham Sundara <
> gowtham.sund...@rapyuta-robotics.com escreveu:
>
>> Hello,
>> The RPMs for Openshift origin need to be updated because of the recent
>> vulnerability. Is there a release schedule for this?
>>
>> --
>> Gowtham Sundara
>> Site Reliability Engineer
>>
>> Rapyuta Robotics “empowering lives with connected machines”
>> rapyuta-robotics.com 
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Openshift Origin builds for CVE-2018-1002105

2018-12-06 Thread Clayton Coleman
Rpms from CI are here

https://artifacts-openshift-release-3-11.svc.ci.openshift.org/repo/

I forgot srpms aren’t created via this process, there’s not an easy way to
add them due to the size increase.

The CentOS PaaS SIG was also creating them (and they have srpms) and would
be where I would recommend pulling from.

On Dec 6, 2018, at 5:07 AM, Mateus Caruccio 
wrote:

On top of that is anyone here building publicly accessible rpms/srpms?


Em Qui, 6 de dez de 2018 07:36, Gowtham Sundara <
gowtham.sund...@rapyuta-robotics.com escreveu:

> Hello,
> The RPMs for Openshift origin need to be updated because of the recent
> vulnerability. Is there a release schedule for this?
>
> --
> Gowtham Sundara
> Site Reliability Engineer
>
> Rapyuta Robotics “empowering lives with connected machines”
> rapyuta-robotics.com 
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: tags v3.10 or v3.11?

2018-10-12 Thread Clayton Coleman
That image is not updated after 3.10.  It was removed in favor of new
images that split its responsibilities.

On Fri, Oct 12, 2018 at 3:45 PM Omer Faruk SEN  wrote:

> Hello ,
>
> I have just checked https://hub.docker.com/r/openshift/origin/tags/ and
> it says tag v3.10 was just released 2 hours ago while v3.11 4 months ago ?
>
> [image: image.png]
>
> According to
> http://lists.openshift.redhat.com/openshift-archives/dev/2018-October/msg00010.html
> it must be v3.11 just released right? Or am I missing something
>
> Also, I am just installing OpenShift Enterprise Container Platform for RHEL;
> the 3.11 repo contains images for 3.10:
>
> openshift-ansible-roles-3.11.16-1.git.0.4ac6f81.el7.noarch
> openshift-ansible-docs-3.11.16-1.git.0.4ac6f81.el7.noarch
> openshift-ansible-3.11.16-1.git.0.4ac6f81.el7.noarch
> openshift-ansible-playbooks-3.11.16-1.git.0.4ac6f81.el7.noarch
>
>
> [root@master ~]# docker images
> REPOSITORYTAG IMAGE ID
> CREATED SIZE
> docker.io/openshift/origin-node   v3.10   cf75e4641dbb
> 2 hours ago 1.27 GB
> docker.io/openshift/origin-podv3.10   bc7edd16cb1a
> 2 hours ago 224 MB
>
>
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


OKD v3.11.0 has been tagged and pushed to GitHub

2018-10-11 Thread Clayton Coleman
https://github.com/openshift/origin/releases/tag/v3.11.0 contains the
release notes and latest binaries.

The v3.11.0 tag on docker.io is up to date and will be a rolling tag (new
fixes will be delivered there).

Thanks to everyone for their hard work!
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Plans on cutting Origin 3.11 / 4.0 ?

2018-10-10 Thread Clayton Coleman
I was waiting for some last minute settling of the branch, and I will cut
an rc

On Wed, Oct 10, 2018 at 10:49 AM Daniel Comnea 
wrote:

> Hi,
>
> What are the plans on cutting a new Origin release? I see a
> _release-3.11_ branch on Origin as well as the openshift-ansible git repos;
> however, I don't see any Origin 3.11 release out yet.
>
> And then on BZ I see people have already raised issues against 3.11, hence my
> confusion.
>
> Thanks,
> Dani
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: CI automation location for RPMs is moving

2018-10-09 Thread Clayton Coleman
What website?  Just use a slash at the end - all the CI jobs look like
they're working

> On Oct 9, 2018, at 10:10 PM, Rich Megginson  wrote:
>
> Was this ever fixed?  Is this the cause of the website being currently 
> unresponsive?
>
>
>> On 9/10/18 2:33 PM, Clayton Coleman wrote:
>> Interesting, might be an HAProxy router bug.  Can you file one?
>>
>> On Mon, Sep 10, 2018 at 3:08 PM Seth Jennings  wrote:
>>
>>There is a bug in the webserver configuration.  Main page links to 
>> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11 which gets 
>> redirected to
>>http://rpms.svc.ci.openshift.org:8080/openshift-origin-v3.11/ (drops 
>> https and adds port number).
>>
>>On Sat, Sep 8, 2018 at 9:27 PM Clayton Coleman  wrote:
>>
>>Previously, all RPMs used by PR and the test automation or Origin 
>> were located in GCS.  Starting with 3.11 and continuing forward, RPMs will 
>> be served from the api.ci
>>cluster at:
>>
>>https://rpms.svc.ci.openshift.org
>>
>>You can get an rpm repo file for a release by clicking on one of the 
>> links on the page above or via curling the name directly:
>>
>>$ curl 
>> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11.repo > 
>> /etc/yum.repos.d/openshift-origin-3.11.repo
>>
>>The contents of this repo will be the same as the contents of the 
>> image:
>>
>>docker.io/openshift/origin-artifacts:v3.11 
>>
>>in the /srv/repo dir.
>>
>>PR jobs for 3.11 and onwards will now use this URL to fetch content.  
>> The old location on GCS will no longer be updated as we are sunsetting the 
>> jobs that generated and used that content
>>___
>>dev mailing list
>>dev@lists.openshift.redhat.com
>>http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Future Installer - Ansible vs Terraform

2018-09-18 Thread Clayton Coleman
> On Sep 18, 2018, at 8:06 AM, Tobias Brunner  wrote:
>
> Hi,
>
> It seems like the future OpenShift installer
> (https://github.com/openshift/installer) will be based on the Tectonic
> installer which uses Terraform in it's heart. What does this mean for
> the Ansible based installer (openshift-ansible)? Will it be completely
> replaced or will there be an Ansible based installer in the future? The
> reason I'm asking for is because we're currently re-architecting the way
> we're rolling out OpenShift clusters and before we invest a lot of time
> in the Ansible setup we'd like to know in which direction it goes.

Terraform is an implementation detail.  A more important part of the
installer is that 80-90% of the cluster configuration and installation
is moving on cluster itself (the installer sets up a top level
operator that runs to finish the install) and management of nodes is
moving to the cluster via the machine api (installer won’t provision
nodes).  The installer will be the preferred method for installing
openshift onto clouds (private or public) where you have IaaS APIs
(over time more targets will be suppprted).

openshift-ansible scope will be dramatically reduced as a consequence
of that - mostly to setting up rhel nodes and assisting in other
non-cloud configuration tasks.  It will continue to be the “get it
working if you already have configured machines” approach.

>
> Thanks for any insights!
>
> Best,
> Tobias
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: CI automation location for RPMs is moving

2018-09-10 Thread Clayton Coleman
Interesting, might be an HAProxy router bug.  Can you file one?
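
For the report, the behavior should be easy to capture with curl (same URL as
below):

curl -sI https://rpms.svc.ci.openshift.org/openshift-origin-v3.11 \
  | grep -iE '^(HTTP|Location)'
# expected output: a 30x status plus
# Location: http://rpms.svc.ci.openshift.org:8080/openshift-origin-v3.11/
# i.e. the redirect drops https and adds the backend port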

On Mon, Sep 10, 2018 at 3:08 PM Seth Jennings  wrote:

> There is a bug in the webserver configuration.  Main page links to
> https://rpms.svc.ci.openshift.org/openshift-origin-v3.11 which gets
> redirected to
> http://rpms.svc.ci.openshift.org:8080/openshift-origin-v3.11/ (drops
> https and adds port number).
>
> On Sat, Sep 8, 2018 at 9:27 PM Clayton Coleman 
> wrote:
>
>> Previously, all RPMs used by PR and the test automation or Origin were
>> located in GCS.  Starting with 3.11 and continuing forward, RPMs will be
>> served from the api.ci cluster at:
>>
>> https://rpms.svc.ci.openshift.org
>>
>> You can get an rpm repo file for a release by clicking on one of the
>> links on the page above or via curling the name directly:
>>
>> $ curl https://rpms.svc.ci.openshift.org/openshift-origin-v3.11.repo
>> > /etc/yum.repos.d/openshift-origin-3.11.repo
>>
>> The contents of this repo will be the same as the contents of the image:
>>
>> docker.io/openshift/origin-artifacts:v3.11
>>
>> in the /srv/repo dir.
>>
>> PR jobs for 3.11 and onwards will now use this URL to fetch content.  The
>> old location on GCS will no longer be updated as we are sunsetting the jobs
>> that generated and used that content
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: CI automation location for RPMs is moving

2018-09-10 Thread Clayton Coleman
It’s used for all Origin CI.  Post release, any hotfixes will be in that
location.  These are not signed RPMs nor will they be preserved - they’re
just a convenient place to get the build output.

On Sep 10, 2018, at 8:49 AM, Daniel Comnea  wrote:

Clayton,

Is the URL https://rpms.svc.ci.openshift.org meant to be publicly available,
or is it only available internally for your own deployments?

In addition, is the plan for everyone deploying OCP/ OKD on RHEL/ CentOS
to use the above common repo (assuming it is going to be publicly accessible)?


Dani


On Sun, Sep 9, 2018 at 3:26 AM Clayton Coleman  wrote:

> Previously, all RPMs used by PR and the test automation or Origin were
> located in GCS.  Starting with 3.11 and continuing forward, RPMs will be
> served from the api.ci cluster at:
>
> https://rpms.svc.ci.openshift.org
>
> You can get an rpm repo file for a release by clicking on one of the links
> on the page above or via curling the name directly:
>
> $ curl https://rpms.svc.ci.openshift.org/openshift-origin-v3.11.repo
> > /etc/yum.repos.d/openshift-origin-3.11.repo
>
> The contents of this repo will be the same as the contents of the image:
>
> docker.io/openshift/origin-artifacts:v3.11
>
> in the /srv/repo dir.
>
> PR jobs for 3.11 and onwards will now use this URL to fetch content.  The
> old location on GCS will no longer be updated as we are sunsetting the jobs
> that generated and used that content
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


CI automation location for RPMs is moving

2018-09-08 Thread Clayton Coleman
Previously, all RPMs used by PR and the test automation or Origin were
located in GCS.  Starting with 3.11 and continuing forward, RPMs will be
served from the api.ci cluster at:

https://rpms.svc.ci.openshift.org

You can get an rpm repo file for a release by clicking on one of the links
on the page above or via curling the name directly:

$ curl https://rpms.svc.ci.openshift.org/openshift-origin-v3.11.repo >
/etc/yum.repos.d/openshift-origin-3.11.repo

The contents of this repo will be the same as the contents of the image:

docker.io/openshift/origin-artifacts:v3.11

in the /srv/repo dir.

PR jobs for 3.11 and onwards will now use this URL to fetch content.  The
old location on GCS will no longer be updated as we are sunsetting the jobs
that generated and used that content
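
If you are consuming this from a yum-based host, a quick sanity check after
dropping the repo file in place looks something like this (remember these are
unsigned CI RPMs):

curl -s https://rpms.svc.ci.openshift.org/openshift-origin-v3.11.repo \
  -o /etc/yum.repos.d/openshift-origin-v3.11.repo
yum repolist                          # the new repo should appear here
yum --showduplicates list 'origin*'   # confirm the versions being served

# the same content can also be pulled out of the artifacts image:
docker create --name origin-rpms docker.io/openshift/origin-artifacts:v3.11
docker cp origin-rpms:/srv/repo ./origin-repo && docker rm origin-rpms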
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


PSA for openshift users - dockerhub will be down for scheduled outage on August 25th

2018-08-16 Thread Clayton Coleman
Please see https://status.docker.com/ for times.

Remember, if you have autoscaling nodes that need to pull new apps, or have
pods that run with PullAlways, or push builds to the docker hub, while the
hub is down those operations will fail.

Mitigations could include:

1. Disable autoscaling for the duration
2. Use the image mirroring and transparent proxying feature of the
openshift integrated registry (switch the resolutionPolicy for your image
streams to Local on 3.9 or later) to automatically mirror remote images and
serve them from the local registry (see the sketch after this list)
3. Disable PullAlways from any deployed workloads so you can leverage
cached local images (if a pod tries to restart while the registry is down
and pull always is set, the new container won’t be started).
4. Push to a different registry than dockerhub, like the integrated
registry or quay.io
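
To make mitigation 2 concrete, here is a rough sketch - I'm reading
"resolutionPolicy" as the tag reference policy that enables pull-through
mirroring, the flag names are as I remember them from a 3.9-era oc, and the
image/project names are only examples:

# import the upstream image into an image stream with a local reference policy
oc import-image mysql:5.7 --from=docker.io/library/mysql:5.7 \
  --confirm --reference-policy=local

# or flip an existing tag the same way
oc tag docker.io/library/mysql:5.7 myproject/mysql:5.7 --reference-policy=local

Workloads that reference the image stream should then be served the mirrored
layers from the integrated registry even while the hub is unreachable.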
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Removed "openshift start node" from origin master

2018-08-14 Thread Clayton Coleman
That’s the long term direction, now that many extension points are maturing
enough to be useful.  But I’ll caution and say the primary goal is to
reduce maintenance costs, improve upgrade isolation, and maintain the
appropriate level of security, so some of the more nuanced splits might
take much longer.

On Aug 14, 2018, at 6:51 PM, Daniel Comnea  wrote:

Hi Clayton,

Great progress!

So am I right to say that the end result of *"splitting OpenShift up to make
it be able to run on top of kubernetes"* will be more like OpenShift's
distinct features turning into add-ons, rather than what we have today?



On Tue, Aug 14, 2018 at 6:17 PM, Clayton Coleman 
wrote:

> As part of the continuation of splitting OpenShift up to make it be able
> to run on top of kubernetes, we just merged https://github.com/
> openshift/origin/pull/20344 which removes "openshift start node" and the
> "openshift start" commands.  This means that the openshift binary will no
> longer include the kubelet code and if you want an "all-in-one" openshift
> experience you'll want to use "oc cluster up".
>
> There should be no impact to end users - starting in 3.10 we already only
> used the kubelet (part of hyperkube binary) and use the
> "openshift-node-config" binary to translate the node-config.yaml into
> kubelet arguments.  oc cluster up has been running in this configuration
> for a while.
>
> integration tests have been changed to only start the master components
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Removed "openshift start node" from origin master

2018-08-14 Thread Clayton Coleman
As part of the continuation of splitting OpenShift up to make it be able to
run on top of kubernetes, we just merged
https://github.com/openshift/origin/pull/20344 which removes "openshift
start node" and the "openshift start" commands.  This means that the
openshift binary will no longer include the kubelet code and if you want an
"all-in-one" openshift experience you'll want to use "oc cluster up".

There should be no impact to end users - starting in 3.10 we already only
used the kubelet (part of hyperkube binary) and use the
"openshift-node-config" binary to translate the node-config.yaml into
kubelet arguments.  oc cluster up has been running in this configuration
for a while.
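
If you haven't tried the all-in-one flow recently, it is just (assuming a
recent oc build on a host with docker running):

oc cluster up       # pulls the origin images and starts an all-in-one cluster
oc status           # quick sanity check against the new cluster
oc cluster down     # tear everything back down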

integration tests have been changed to only start the master components
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Kubelet/node nice level

2018-07-01 Thread Clayton Coleman
That’s the one the installer lays down.  Ansible has never used the one in
the RPMs (and the one in the RPMs is being removed in 3.10 to prevent
confusion).

On Jul 1, 2018, at 10:03 AM, Mateus Caruccio 
wrote:

Yep, I copied/pasted from an old buffer. It's /bin/nice already. Same results.

Anyway, I believe I have found the reason. It looks like
/etc/systemd/system/multi-user.target.wants is linking to the wrong unit
file (or am I editing the wrong one?).

# ls -l /etc/systemd/system/multi-user.target.wants/origin-node.service
lrwxrwxrwx.  1 root root   39 Jul  1 13:42
/etc/systemd/system/multi-user.target.wants/origin-node.service ->
*/etc/systemd/system/origin-node.service*

# cat /etc/systemd/system/origin-node.service
[Unit]
Description=OpenShift Node
After=docker.service
After=chronyd.service
After=ntpd.service
Wants=openvswitch.service
After=ovsdb-server.service
After=ovs-vswitchd.service
Wants=docker.service
Documentation=https://github.com/openshift/origin
Wants=dnsmasq.service
After=dnsmasq.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/origin-node
Environment=GOTRACEBACK=crash
ExecStartPre=/usr/bin/cp /etc/origin/node/node-dnsmasq.conf /etc/dnsmasq.d/
ExecStartPre=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq
/uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers
array:string:/in-addr.arpa/127.0.0.1,/cluster.local/127.0.0.1
ExecStopPost=/usr/bin/rm /etc/dnsmasq.d/node-dnsmasq.conf
ExecStopPost=/usr/bin/dbus-send --system --dest=uk.org.thekelleys.dnsmasq
/uk/org/thekelleys/dnsmasq uk.org.thekelleys.SetDomainServers array:string:
ExecStart=/usr/bin/openshift start node  --config=${CONFIG_FILE} $OPTIONS
LimitNOFILE=65536
LimitCORE=infinity
WorkingDirectory=/var/lib/origin/
SyslogIdentifier=origin-node
Restart=always
RestartSec=5s
TimeoutStartSec=300
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target


After removing it, the link points to what I expect is the right unit
file:


# rm -f /etc/systemd/system/origin-node.service

# systemctl disable origin-node
Removed symlink
/etc/systemd/system/multi-user.target.wants/origin-node.service.

# systemctl enable origin-node
Created symlink from
/etc/systemd/system/multi-user.target.wants/origin-node.service to
/usr/lib/systemd/system/origin-node.service.

*# systemctl restart origin-node*

# ps ax -o pid,nice,comm|grep openshift
  4994   0 openshift
  5036  -5 openshift


The question now is: where /etc/systemd/system/origin-node.service comes
from?

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-07-01 8:06 GMT-03:00 Tobias Florek :

> Hi!
>
> > ExecStart=nice -n -5 /usr/bin/openshift start node [...]
>
> That won't work. You need the full path to the executable in systemd
> units.
>
> Cheers,
>  Tobias Florek
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Kubelet/node nice level

2018-06-30 Thread Clayton Coleman
Maybe double check that systemd sees the Nice parameter with systemctl cat
origin-node

On Jun 30, 2018, at 3:10 PM, Mateus Caruccio 
wrote:

[centos@ip-10-0-53-142 ~]$ oc version
oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://api.engine.caruccio.com
openshift v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657

--

[centos@ip-10-0-53-142 ~]$ sudo grep -v '^#' /etc/sysconfig/origin-node
/etc/sysconfig/origin-node:OPTIONS=--loglevel=1
/etc/sysconfig/origin-node:CONFIG_FILE=/etc/origin/node/node-config.yaml
/etc/sysconfig/origin-node:
/etc/sysconfig/origin-node:IMAGE_VERSION=v3.9.0
/etc/sysconfig/origin-node:AWS_ACCESS_KEY_ID=[REDACTED]
/etc/sysconfig/origin-node:AWS_SECRET_ACCESS_KEY=[REDACTED]

--

[centos@ip-10-0-53-142 ~]$ sudo cat /etc/origin/node/node-config.yaml
allowDisabledDocker: false
apiVersion: v1
dnsBindAddress: 127.0.0.1:53
dnsRecursiveResolvConf: /etc/origin/node/resolv.conf
dnsDomain: cluster.local
dnsIP: 10.0.53.142
dockerConfig:
  execHandlerName: ""
iptablesSyncPeriod: "30s"
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: False
kind: NodeConfig
kubeletArguments:
  cloud-config:
  - /etc/origin/cloudprovider/aws.conf
  cloud-provider:
  - aws
  image-gc-high-threshold:
  - '80'
  image-gc-low-threshold:
  - '50'
  image-pull-progress-deadline:
  - 20m
  max-pods:
  - '200'
  maximum-dead-containers:
  - '10'
  maximum-dead-containers-per-container:
  - '1'
  minimum-container-ttl-duration:
  - 30s
  node-labels:
  - region=primary
  - role=master
  - zone=default
  - server_name=mateus-master-0
  pods-per-core:
  - '20'
masterClientConnectionOverrides:
  acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
  contentType: application/vnd.kubernetes.protobuf
  burst: 200
  qps: 100
masterKubeConfig: system:node:ip-10-0-53-142.us
-west-2.compute.internal.kubeconfig
networkPluginName: redhat/openshift-ovs-multitenant
# networkConfig struct introduced in origin 1.0.6 and OSE 3.0.2 which
# deprecates networkPluginName above. The two should match.
networkConfig:
   mtu: 8951
   networkPluginName: redhat/openshift-ovs-multitenant
nodeName: ip-10-0-53-142.us-west-2.compute.internal
podManifestConfig:
servingInfo:
  bindAddress: 0.0.0.0:10250
  certFile: server.crt
  clientCA: ca.crt
  keyFile: server.key
  minTLSVersion: VersionTLS12
volumeDirectory: /var/lib/origin/openshift.local.volumes
proxyArguments:
  proxy-mode:
 - iptables
volumeConfig:
  localQuota:
perFSGroup:

--

[centos@ip-10-0-53-142 ~]$ sudo cat
/usr/lib/systemd/system/origin-node.service
[Unit]
Description=Origin Node
After=docker.service
Wants=docker.service
Documentation=https://github.com/openshift/origin

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/origin-node
Environment=GOTRACEBACK=crash
ExecStart=/usr/bin/openshift start node --config=${CONFIG_FILE} $OPTIONS
LimitNOFILE=65536
LimitCORE=infinity
WorkingDirectory=/var/lib/origin/
SyslogIdentifier=origin-node
Restart=always
RestartSec=5s
OOMScoreAdjust=-999
Nice = -5

[Install]
WantedBy=multi-user.target

--

This is the ansible task I'm using:

https://github.com/caruccio/getup-engine-installer/commit/49c28e4cc350856e11b8160f7a315e5fdda0dcce

---
- name: Set origin-node niceness
  ini_file:
path: /usr/lib/systemd/system/origin-node.service
section: Service
option: Nice
value: -5
backup: yes
  tags:
  - post-install


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-06-30 15:03 GMT-03:00 Clayton Coleman :

> Which version of openshift and what are your node start settings?
>
> On Jun 29, 2018, at 11:10 PM, Mateus Caruccio  com> wrote:
>
> Hi. I'm trying to run openshift kubelet with nice set to -5 but with no
> success.
>
> I've noticed that kubelet is started using syscall.Exec[1], which calls
> execve. The man page of execve[2] states that the new process shall
> inherit the nice value from the caller.
>
> After adding `Nice=-5` to origin-node unit and reloading both daemon and
> unit, openshift node process still runs with nice=0.
>
> What am I missing?
>
> [1]: https://github.com/openshift/origin/blob/
> 83ac5ae6a7d635ae67b1be438d85c339500fd65b/pkg/cmd/server/
> start/start_node.go#L433
> [2]: https://linux.die.net/man/3/execve
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


OpenShift v3.10.0-rc.0 has been published

2018-06-20 Thread Clayton Coleman
Clients and binaries have been pushed to GitHub
https://github.com/openshift/origin/releases/tag/v3.10.0-rc.0 and images
are available on the DockerHub.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Openshift and Power loss

2018-06-20 Thread Clayton Coleman
On Jun 20, 2018, at 7:32 AM, Hetz Ben Hamo  wrote:

With all redundant hardware these days, with UPS - power loss can happen.

I just got a power loss and upon powering on those machines, most of the
services weren't working. Checking the pods showed almost all of them in an
error state. I deleted the pods and they were automatically recreated, so
most of the services were running, but the images inside this OpenShift
system went dead (redeploying my stuff gave an error that images cannot be
pulled).


If you get “images cannot be pulled” it usually means those images don’t
exist anymore.  If you try to docker pull those images, what happens?
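
A minimal way to check, assuming the failing image reference is still visible
on the pod (all names below are placeholders):

# find the exact image reference the kubelet failed to pull
oc describe pod <pod-name> -n <project> | grep -i -A2 failed

# try pulling it by hand on the node
docker pull <registry>/<project>/<image>:<tag>

# if it came from the integrated registry, check the image stream tag still exists
oc get istag -n <project>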


I looked at the documents, both the commercial and the origin version, but
there is nothing which talks about this issue, nor any script that will fix
this issue after powering on this system.

Is there such a document or any script that fixes such an issue?

Thanks,
Hetz

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Origin 3.10 release

2018-06-12 Thread Clayton Coleman
You should be using the current rolling tag.  We're not yet ready to cut a
release candidate.

Please see my previous email to the list about accessing the latest RPMs or
zips for the project.

On Tue, Jun 12, 2018 at 8:10 AM, Lalatendu Mohanty 
wrote:

> Hi,
>
> We are working on code changes required for running  (cluster up)
> Origin3.10 in Minishift. So wondering when can we expect v3.10 alpha (or
> any) release?
>
> Thanks,
> Lala
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Web Console - 3.9 - Pod / CrashLoopBackOff

2018-05-17 Thread Clayton Coleman
anyuid is less restrictive than restricted, unless you customized
restricted.  Did you customize restricted?
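
A couple of hedged checks that may help narrow it down (3.x-era commands;
double-check against oc adm policy -h on your version):

# list SCCs that drifted from the shipped defaults
oc adm policy reconcile-sccs -o yaml

# see who is allowed to use anyuid vs restricted
oc describe scc anyuid | grep -iE 'users|groups'
oc describe scc restricted | grep -iE 'users|groups'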

On May 17, 2018, at 8:56 AM, Charles Moulliard  wrote:

Hi,

If we scale down/up the Replication Set of the OpenShift Web Console, then
the new pod created will crash and report

"Error: unable to load server certificate: open /var/serving-cert/tls.crt:
permission denied"

This problem comes from the fact that when the pod is recreated, the scc
annotation is set to anyuid instead of restricted, and then the pod
can't access the cert.

apiVersion: v1
kind: Pod
metadata:
  annotations:
openshift.io/scc: anyuid

Has this bug been fixed for OpenShift 3.9? Is there a workaround to resolve
it? Otherwise we can't access the Web Console anymore.

Regards

CHARLES MOULLIARD

SOFTWARE ENGINEER MANAGER SPRING(BOOT)

Red Hat 

cmoulli...@redhat.comM: +32-473-604014

@cmoulliard 

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Origin incorporating CoreOS technologies ?

2018-05-16 Thread Clayton Coleman
Many if not most of the features will be in Origin.  Probably the one
exception is over the air cluster updates - the pieces of that will be
open, but the mechanism for Origin updates may be more similar to the
existing setup today than to what tectonic has.  We’re still sorting
out how that will work.

> On May 16, 2018, at 6:28 AM, Daniel Comnea  wrote:
>
> Hi,
>
> Following RH Summit and the news about CoreOS Tectonic features being 
> integrated into OCP, can we get any insights as to whether the Tectonics 
> features will make it into Origin too?
>
>
> Thank you,
> Dani
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: How to build RPMs

2018-05-04 Thread Clayton Coleman
It looks like it is trying to push tags, maybe that is failing.  You may
need to add -x to hack/build-rpms.sh
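
For example, something like this (invocation taken from the log below; the tee
path is arbitrary):

# trace the wrapper to see exactly which tito/git command fails
OS_ONLY_BUILD_PLATFORMS='linux/amd64' bash -x hack/build-rpm-release.sh 2>&1 | tee /tmp/build-rpms.trace

# tito --offline needs the tag it just created to be visible locally
git tag -l 'v3.7.*'
git fetch --tags origin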

On May 4, 2018, at 10:30 AM, Mateus Caruccio 
wrote:

Hi there.

I'm having a hard time trying to build RPMs for 3.7.x
What am I missing here?
Thanks.

[centos ~]$ cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)

[centos ~]$ rpm -q tito
tito-0.6.11-1.el7.noarch

[centos ~]$ git clone https://github.com/openshift/origin

[centos ~]$ cd origin

[centos origin]$ git checkout -b fix-3.7 remotes/origin/release-3.7

[centos origin]$ patch -p1 < <(curl -L
https://patch-diff.githubusercontent.com/raw/openshift/origin/pull/19437.patch
)

[centos origin]$ git cherry-pick 3cb1da1445729ffb0b008dbb50c9a59ddb0b1746

[centosorigin]$ git tag v3.7.3-fix

[centosorigin]$ sudo make build-rpms
OS_ONLY_BUILD_PLATFORMS='linux/amd64' hack/build-rpm-release.sh
[INFO] [14:21:46+] Building Origin release RPMs with tito...
Creating output directory: /tmp/tito
OS_GIT_MINOR::7+
OS_GIT_MAJOR::3
OS_GIT_VERSION::v3.7.3-fix+325ae5d
OS_GIT_TREE_STATE::clean
OS_GIT_CATALOG_VERSION::v0.1.2
OS_GIT_COMMIT::325ae5d
Tagging new version of origin: 0.0.1 -> 3.7.3-0.fix.0.325ae5d

Created tag: v3.7.3
   View: git show HEAD
   Undo: tito tag -u
   Push: git push --follow-tags origin
Creating output directory: /tmp/openshift/build-rpm-release/tito
ERROR: Tag does not exist locally: [v3.7.3-0.fix.0.325ae5d]
[ERROR] [14:21:52+] PID 1619: hack/build-rpm-release.sh:35: `tito build
--offline --srpm --rpmbuild-options="--define 'dist .el7'"
--output="${tito_tmp_dir}"` exited with status 1.
[INFO] [14:21:52+] Stack Trace:
[INFO] [14:21:52+]   1: hack/build-rpm-release.sh:35: `tito build
--offline --srpm --rpmbuild-options="--define 'dist .el7'"
--output="${tito_tmp_dir}"`
[INFO] [14:21:52+]   Exiting with code 1.
[ERROR] [14:21:52+] hack/build-rpm-release.sh exited with code 1 after
00h 00m 06s
make: *** [build-rpms] Error 1


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Changes to origin and OCP images in 3.10

2018-05-02 Thread Clayton Coleman
https://github.com/openshift/origin/pull/19509 has been merged and does two
things:

First, and most important, it puts our images and binaries on a diet:

1. oc is now 110M instead of 220M
2. most origin images that were 1.26GB uncompressed (roughly 300MB on the wire)
are now half that size (about 150MB on the wire)

The size of the binary was mostly due to people accidentally or
intentionally taking import dependencies from 'oc' into the kubelet, the
controllers, or the apiservers.  Those links are cut and we'll be putting
in protections to prevent accidental growth.

Changes to our images and how they are split out:

1. openshift/origin and openshift/node are now dead (in both OCP and Origin)
1a. We stopped publishing or using origin-sti-build - both docker and s2i
builds now use the origin-docker-build image
2. openshift/origin's closest equivalent is openshift/origin-control-plane
which has the openshift and oc binaries in it
3. openshift/node is now called openshift/origin-node, and has some other
changes described below
4. the hypershift binary (launches the openshift api and controllers) is in
openshift/origin-hypershift
5. the hyperkube binary (launches the kube api, controllers, and kubelet)
is in openshift/origin-hyperkube
6. two new images openshift/origin-test and openshift/origin-cli have been
created that contain the e2e binary and the cli binary

Two new RPMs have been created that contain just hyperkube and just
hypershift

The node image in 3.10 will continue to have the openshift binary and
hyperkube.  Starting in 3.11 we will likely rename the node image to
openshift/origin-sdn and remove the kubelet and openshift logic from it.

As part of removing our direct go dependency on the kubelet, a new binary
openshift-node-config has been created that does exactly one thing - reads
node-config.yaml and converts it to kubelet arguments.  In 3.11 we will
switch ansible over from calling openshift start node to calling
openshift-node-config.

The new image separation will be mirrored into OCP, and oc cluster up now
reacts to the changes.  No ansible changes are necessary in 3.10.
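
As a rough illustration, once the 3.10 images are published the split looks
roughly like this (tags shown are the rolling pre-release tags and purely
illustrative):

docker pull openshift/origin-control-plane:v3.10
docker pull openshift/origin-node:v3.10
docker pull openshift/origin-hypershift:v3.10
docker pull openshift/origin-hyperkube:v3.10
docker pull openshift/origin-cli:v3.10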

If you see oddities because you depended on a particular binary being in
one of our images, please file a bug to me.

Thanks!
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Images renamed in origin

2018-04-18 Thread Clayton Coleman
The following images are being renamed in 3.10:

openshift/origin -> openshift/origin-control-plane
openshift/node -> openshift/origin-node

The following images are being removed:

openshift/openvswitch: the RPMs and content here is now part of
openshift/origin-node

The following images won't be updated anymore:

openshift/hello-world: this is a from scratch image and doesn't change much

We will build openshift/node and openshift/origin for a short period of
time while we transition ansible, and then we will remove those images from
being built (the old versions will remain on the docker hub).

As part of this, oc cluster up --image has been changed to be consistent
with openshift-ansible's "oreg_url" field which maps to the platform
"imageFormat" - the value that controls which images the platform uses.
This will be used by the new CI infrastructure which will more efficiently
build and publish these images.  You can see a preview of that from:

docker pull
registry.svc.ci.openshift.org/openshift/origin-v3.10:control-plane
docker pull registry.svc.ci.openshift.org/openshift/origin-v3.10:node
docker pull registry.svc.ci.openshift.org/openshift/origin-v3.10:pod
...

etc.  In the near future all PRs will have a registry of this form, so
you'll be able to oc cluster up from a PR with:

oc cluster up --image '
registry.svc.ci.openshift.org/openshift/origin-v3.10:${component}'
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Install OpenShift Origin 3.9 failed on single node

2018-04-10 Thread Clayton Coleman
You can try rerunning the install with -vv to get additional debug
information.

What OS and version on Ansible are you using?
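
Something along these lines usually surfaces the offending value (the inventory
path is whatever you used for the install; this particular error often comes
down to an Ansible version mismatch or an unquoted inventory variable, which is
why the version matters):

ansible --version
ansible-playbook -vv -i <your-inventory> playbooks/deploy_cluster.yml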

On Apr 10, 2018, at 3:24 AM, Yu Wei  wrote:

Hi,
I tried to install openshift origin 3.9 on a single machine and encountered
problems as below:

TASK [openshift_node : Install Node package, sdn-ovs, conntrack packages] *****
fatal: [host-10-1-241-74]: FAILED! => {"msg": "|failed expects a string or unicode"}
        to retry, use: --limit @/root/jared/openshift-ansible/playbooks/deploy_cluster.retry

PLAY RECAP *********************************************************************
host-10-1-241-74   : ok=326  changed=41   unreachable=0  failed=1
localhost          : ok=13   changed=0    unreachable=0  failed=0

INSTALLER STATUS ***************************************************************
Initialization             : Complete (0:00:43)
Health Check               : Complete (0:00:05)
etcd Install               : Complete (0:00:58)
Master Install             : Complete (0:05:03)
Master Additional Install  : Complete (0:00:48)
Node Install               : In Progress (0:00:38)
        This phase can be restarted by running: playbooks/openshift-node/config.yml

Failure summary:
  1. Hosts:    host-10-1-241-74
     Play:     Configure containerized nodes
     Task:     Install Node package, sdn-ovs, conntrack packages
     Message:  |failed expects a string or unicode

I didn't find useful information in docker / journal logs.
How could I fix this problem further?

Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Origin v3.9.0 is now available

2018-04-01 Thread Clayton Coleman
Yum does not require the directory of the baseurl to exist as an http
location.  That’s just a convention from people hosting with older web
servers like Apache.

On Apr 1, 2018, at 9:39 AM, Nakayama Kenjiro <nakayamakenj...@gmail.com>
wrote:

Although curl access gets NoSuchKey as you mentioned, dnf/yum can find
and download RPMs without any problem.

```
$ sudo dnf repoquery --disablerepo=*  --repofrompath=origin,
https://storage.googleapis.com/origin-ci-test/logs/test_branch_origin_extended_conformance_gce_39/23/artifacts/rpms
Added origin repo from
https://storage.googleapis.com/origin-ci-test/logs/test_branch_origin_extended_conformance_gce_39/23/artifacts/rpms
Last metadata expiration check: 0:06:37 ago on Sun 01 Apr 2018 10:16:20 PM
JST.
origin-0:3.9.0-1.0.191fece.x86_64
origin-clients-0:3.9.0-1.0.191fece.x86_64
origin-cluster-capacity-0:3.9.0-1.0.191fece.x86_64
origin-docker-excluder-0:3.9.0-1.0.191fece.noarch
origin-excluder-0:3.9.0-1.0.191fece.noarch
origin-federation-services-0:3.9.0-1.0.191fece.x86_64
origin-master-0:3.9.0-1.0.191fece.x86_64
origin-node-0:3.9.0-1.0.191fece.x86_64
origin-pod-0:3.9.0-1.0.191fece.x86_64
origin-sdn-ovs-0:3.9.0-1.0.191fece.x86_64
origin-service-catalog-0:3.9.0-1.0.191fece.x86_64
origin-template-service-broker-0:3.9.0-1.0.191fece.x86_64
origin-tests-0:3.9.0-1.0.191fece.x86_64
```

So, if you put the repo file under /etc/yum.repos.d/*.repo, you
can install origin v3.9. I guess that storage.googleapis.com returns
"No such object" unless we access the exact file name. For example, the
following URL returns repo data correctly.


https://storage.googleapis.com/origin-ci-test/logs/test_branch_origin_extended_conformance_gce_39/23/artifacts/rpms/repodata/repomd.xml

Regards,
Kenjiro


On Sun, Apr 1, 2018 at 9:33 PM, Aleksandar Lazic <
openshift-...@me2digital.com> wrote:

> Hi.
>
> looks like the file does not point to a valid repo.
>
> ```
> curl -sSL $(curl -sSL
> https://storage.googleapis.com/origin-ci-test/releases/
> openshift/origin/v3.9.0/origin.repo
> |egrep baseurl|cut -d= -f2)
>
> <?xml version='1.0' encoding='UTF-8'?><Error><Code>NoSuchKey</Code><Message>The specified
> key does not exist.</Message><Details>No such object:
> origin-ci-test/logs/test_branch_origin_extended_conformance_gce_39/23/artifacts/rpms</Details></Error>
> ```
>
> Regards
> Aleks
> Am 31.03.2018 um 01:33 schrieb Clayton Coleman:
> > The v3.9.0 OpenShift release has been tagged
> > https://github.com/openshift/origin/releases/tag/v3.9.0 and images
> > have been pushed to the Docker Hub.
> >
> > RPMs are available at:
> >
> >
> > https://storage.googleapis.com/origin-ci-test/releases/
> openshift/origin/v3.9.0/origin.repo
> >
> > Starting in v3.9.0, the Origin release images published to the Docker
> > Hub are now rolling tags - v3.9 and v3.9.0 will be updated whenever
> > changes are merged to the release-3.9 branch.
> >
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>



-- 
Kenjiro NAKAYAMA <nakayamakenj...@gmail.com>
GPG Key fingerprint = ED8F 049D E67A 727D 9A44  8E25 F44B E208 C946 5EB9
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


OpenShift Origin v3.9.0 is now available

2018-03-30 Thread Clayton Coleman
The v3.9.0 OpenShift release has been tagged
https://github.com/openshift/origin/releases/tag/v3.9.0 and images have
been pushed to the Docker Hub.

RPMs are available at:


https://storage.googleapis.com/origin-ci-test/releases/openshift/origin/v3.9.0/origin.repo

Starting in v3.9.0, the Origin release images published to the Docker Hub
are now rolling tags - v3.9 and v3.9.0 will be updated whenever changes are
merged to the release-3.9 branch.
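
For example, consuming it on a CentOS 7 host might look like this (package
names as built by the origin spec file):

curl -o /etc/yum.repos.d/origin-v3.9.repo \
  https://storage.googleapis.com/origin-ci-test/releases/openshift/origin/v3.9.0/origin.repo
yum install -y origin origin-clients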
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [CentOS-devel] CentOS PaaS SIG meeting (2018-03-21)

2018-03-27 Thread Clayton Coleman
I’m going to retag when the commits merge.

On Mar 27, 2018, at 7:43 PM, Brigman, Larry <larry.brig...@arris.com> wrote:

Shouldn’t that last tag have been an -rc0 instead of v3.9.0?



*From:* users-boun...@lists.openshift.redhat.com [
mailto:users-boun...@lists.openshift.redhat.com
<users-boun...@lists.openshift.redhat.com>] *On Behalf Of *Clayton Coleman
*Sent:* Tuesday, March 27, 2018 3:44 PM
*To:* Troy Dawson <tdaw...@redhat.com>
*Cc:* users <us...@redhat.com>; The CentOS developers mailing list. <
centos-de...@centos.org>; dev <dev@lists.openshift.redhat.com>
*Subject:* Re: [CentOS-devel] CentOS PaaS SIG meeting (2018-03-21)



Still waiting for a last couple of regressions to be fixed.  Sorry
everyone, I know you're excited about this.



On Tue, Mar 27, 2018 at 6:04 PM, Troy Dawson <tdaw...@redhat.com> wrote:

I didn't see anything saying that 3.9 was released yet.  Last I heard
they were working on some regressions.
If I missed it, can someone point me at it?  Maybe I just need a
better place to look than the mailing lists.


On Thu, Mar 22, 2018 at 11:01 PM, Jeffrey Zhang <zhang.lei@gmail.com>
wrote:
> hi, openshift origin 3.9 is released already,
> when will the centos-release-openshift-origin39 repo be GA? it is still using
> the alpha tag now.
>
> https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin39/

>

___
users mailing list
us...@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [CentOS-devel] CentOS PaaS SIG meeting (2018-03-21)

2018-03-27 Thread Clayton Coleman
Still waiting for a last couple of regressions to be fixed.  Sorry
everyone, I know you're excited about this.

On Tue, Mar 27, 2018 at 6:04 PM, Troy Dawson  wrote:

> I didn't see anything saying that 3.9 was released yet.  Last I heard
> they were working on some regressions.
> If I missed it, can someone point me at it?  Maybe I just need a
> better place to look than the mailing lists.
>
>
> On Thu, Mar 22, 2018 at 11:01 PM, Jeffrey Zhang 
> wrote:
> > hi, openshift origin 3.9 is released already,
> > when will the centos-release-openshift-origin39 repo be GA? it is still
> > using
> > the alpha tag now.
> >
> > https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin39/
> >
>
> ___
> users mailing list
> us...@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Stop docker processes created by "oc cluster up"

2018-03-26 Thread Clayton Coleman
oc cluster down?

On Mon, Mar 26, 2018 at 12:58 PM, Charles Moulliard 
wrote:

> Hi,
>
> When we use "oc cluster up" command, then openshift is started as docker
> containers (origin, router, registry + console) that we could stop/start
> using command "docker stop|start origin"
>
> The command "docker start origin" works very well as 7 docker ps are
> created to bootstrap the origin cluster
> - k8s_registry_docker-registry-1-w7c5r_default_c265ed3e-30c8-
> 11e8-8a68-080027b2fb11_3
> - k8s_POD_docker-registry-1-w7c5r_default_c265ed3e-30c8-
> 11e8-8a68-080027b2fb11_2
> - k8s_router_router-1-dxzkh_default_c23e1d4d-30c8-11e8-8a68-080027b2fb11_3
> - k8s_webconsole_webconsole-74df8f7c9-s6wxx_openshift-web-
> console_c11f9613-30c8-11e8-8a68-080027b2fb11_2
> - k8s_POD_router-1-dxzkh_default_c23e1d4d-30c8-11e8-8a68-080027b2fb11_2
> - k8s_POD_webconsole-74df8f7c9-s6wxx_openshift-web-console_
> c11f9613-30c8-11e8-8a68-080027b2fb11_2
> - origin
>
> But when we use "docker stop origin", only the "origin" process is stopped
>
> Is there a trick to stop all the docker processes and not only "origin"?
>
> Regards
>
> Charles
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Configuring Private DNS Zones and Upstream Nameservers in Openshift

2018-02-22 Thread Clayton Coleman
Probably have to wait until 3.9.  We also want to move to coredns, but that
could take longer.

On Feb 15, 2018, at 6:26 PM, Srinivas Naga Kotaru (skotaru) <
skot...@cisco.com> wrote:



Is it possible like described in kubernetes?



http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html



We have a few clients who configured their own consul-based DNS server
and are not using the service discovery provided by OpenShift. We built a
custom solution by adding these external zones and IP addresses to
dnsmasq.conf. This approach is working but has a few failures since it has to
first check the cluster zones by honoring the order specified in
/etc/resolv.conf and then forward to these external DNS servers.



I saw a solution in Kubernetes 1.9, an alpha feature that lets clients
configure their own DNS settings in pod definitions.



https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-dns-config
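
For reference, the 1.9 alpha feature looks roughly like this (if I recall
correctly it sits behind the CustomPodDNS feature gate in 1.9, and the
addresses and search domains below are made up):

oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 10.1.2.3            # e.g. the consul DNS VIP
    searches:
    - service.consul
    options:
    - name: ndots
      value: "2"
EOF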



Do OpenShift clients need to wait till 3.9? Or is there any way currently to
solve this problem?



-- 

*Srinivas Kotaru*
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Leader election in Kubernetes control plane

2018-02-20 Thread Clayton Coleman
In 3.6 you can look at which master is using the most memory (the
passive controllers won’t use much at all).

You can also do a GET /controllers against :8444 which returns 200 if
this is the active controller, or 404 (iirc) otherwise.  That’s been
removed though in newer versions in favor of the election annotations.
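
Concretely, something like this (the master hostname is a placeholder; the 3.6
endpoint is the one described above):

# 3.7+: the holder identity is recorded on the config map annotation
oc get cm openshift-master-controllers -n kube-system -o yaml | grep -i leader

# 3.6: 200 from the active controllers process, non-200 from the passive ones
curl -k -s -o /dev/null -w '%{http_code}\n' https://master-1.example.com:8444/controllers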

> On Feb 21, 2018, at 12:49 AM, Srinivas Naga Kotaru (skotaru) 
> <skot...@cisco.com> wrote:
>
> Thanks, that makes sense. We are using 3.6 currently
>
> --
>
> Srinivas Kotaru
> On 2/20/18, 9:46 PM, "Takayoshi Kimura" <tkim...@redhat.com> wrote:
>
>In 3.7+ "oc get cm openshift-master-controllers -n kube-system -o yaml" 
> you can see the annotation described in that article.
>
>Regards,
>Takayoshi
>
>On Wed, 21 Feb 2018 14:37:32 +0900,
>"Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> wrote:
>>
>> It has just client-ca-file. We have 3 masters in each cluster. Not sure how
>> to identify which controller manager is active? I usually find which one is
>> writing logs by using journalctl
>> atomic-openshift-master-controllers.service. Passive ones don't write or
>> generate
>> any logs.
>>
>> --
>>
>> Srinivas Kotaru
>>
>> From: Clayton Coleman <ccole...@redhat.com>
>> Date: Tuesday, February 20, 2018 at 9:29 PM
>> To: Srinivas Naga Kotaru <skot...@cisco.com>
>> Cc: dev <dev@lists.openshift.redhat.com>
>> Subject: Re: Leader election in Kubernetes control plane
>>
>> We use config maps - check in kube-system for that.
>>
>> On Feb 15, 2018, at 2:48 PM, Srinivas Naga Kotaru (skotaru) 
>> <skot...@cisco.com> wrote:
>>
>>while I was reading below article, I tried to do the same to find out 
>> which one is active control plane in Openshift. I could see zero end points 
>> in kube-system name space. Am I missing something or not implemented in 
>> Openshift?
>>
>>
>> https://blog.heptio.com/leader-election-in-kubernetes-control-plane-heptioprotip-1ed9fb0f3e6d
>>
>>$oc project
>>
>>Using project "kube-system" on server
>>
>>$ oc get ep
>>
>>No resources found.
>>
>>$oc get all
>>
>>No resources found.
>>
>>--
>>
>>Srinivas Kotaru
>>
>>___
>>dev mailing list
>>dev@lists.openshift.redhat.com
>>http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Leader election in Kubernetes control plane

2018-02-20 Thread Clayton Coleman
We use config maps - check in kube-system for that.

On Feb 15, 2018, at 2:48 PM, Srinivas Naga Kotaru (skotaru) <
skot...@cisco.com> wrote:

while I was reading below article, I tried to do the same to find out which
one is active control plane in Openshift. I could see zero end points in
kube-system name space. Am I missing something or not implemented in
Openshift?



https://blog.heptio.com/leader-election-in-kubernetes-control-plane-heptioprotip-1ed9fb0f3e6d



$oc project

Using project "kube-system" on server

$ oc get ep

No resources found.

$oc get all

No resources found.



-- 

*Srinivas Kotaru*

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


New rolling pre-release tags for Origin v3.9

2018-02-01 Thread Clayton Coleman
Due to popular demand and a desire to use this for rolling updates of
cluster components, we have started publishing vX.Y and vX.Y.Z tags for
origin, the registry, and logging and metrics.

So by end of day tomorrow you should see v3.9 and v3.9.0 tags for all
OpenShift components.  These tags are for pre-release code and are updated
on every merge to master - we will roll them the entire release, and when
we cut for v3.9.1 we'll immediately start tagging from master to the v3.9.1
tag.  3.10 will start immediately after the release branch is cut for 3.9.

As a user, you should only switch to a rolling tag once it's been
"released" (a git tag exists) if you want to have a fully stable experience.

oc cluster up and the image behavior will not be updated until we've had
time to assess the impact of changing, although if you run "oc cluster up
--version=v3.9" I would hope it would work.

Stay tuned for more on autoupdating of cluster components.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Development nightly / PR artifacts

2018-01-29 Thread Clayton Coleman
We recently updated the tracking artifacts for releases - for any tag,
branch, or PR that is built you can get at the RPMs

For branches and tags, this URL gives you the link to the latest RPMs (the
content of this file is a URL you can use in an RPM repo)

curl -q "
https://storage.googleapis.com/origin-ci-test/releases/openshift/origin/master/.latest-rpms
"
curl -q "
https://storage.googleapis.com/origin-ci-test/releases/openshift/origin/release-3.8/.latest-rpms
"
curl -q "
https://storage.googleapis.com/origin-ci-test/releases/openshift/origin/v3.9.0-alpha.4/.latest-rpms
"

For PRs, you'll need to look at the logs for your job (search for
https://storage.googleapis.com in your GCP logs, you'll see a link that
looks similar to the previous).

You can get at zips for branches and tags with:

curl -q "
https://storage.googleapis.com/origin-ci-test/releases/openshift/origin/master/.latest-zips
"
curl -q "
https://storage.googleapis.com/origin-ci-test/releases/openshift/origin/release-3.8/.latest-zips
"
curl -q "
https://storage.googleapis.com/origin-ci-test/releases/openshift/origin/v3.9.0-alpha.4/.latest-zips
"

... but you'll need to then access the returned URL via gcsweb by taking
the job number and replacing it into the following style of URL:

https://gcsweb-ci.svc.ci.openshift.org/gcs/origin-ci-test/logs/test_branch_origin_cross/1349/artifacts/zips/

I may update the zips in the future, this is really more for my convenience
when I'm throwing up origin releases.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: cbs-paas7-openshift-multiarch-el7-build

2018-01-23 Thread Clayton Coleman
Jason is probably the best contact for this

On Mon, Jan 22, 2018 at 11:17 AM, Neale Ferguson 
wrote:

> Hi,
>  I have been building CE for the s390x platform and notice in the recent
> release there is a new repo used in the source image. At the moment most of
> the packages referenced there are in the s390x version of EPEL that I
> maintain but I’d like to be consistent with all architectures. I would like
> to build the s390x version of this repo. I’d like to automate this process
> rather than just trying to manually build each package. I assume the
> non-x86_64 arches get their stuff for building from the x86_64 repo so
> wonder if there is some tool/procedure I could adapt to build for s390x?
>
> Neale
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: service discovery if node service down

2018-01-18 Thread Clayton Coleman
It could be configured that way.  You should be able to statically test
that with the on disk config for dnsmasq to see if it works as expected
(you'll want the node ip first)
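
For a static check, something like the following (the dnsmasq.d file names are
the ones openshift-ansible typically lays down; adjust to whatever is on your
nodes):

# what dnsmasq forwards for the cluster zones, and where everything else goes
cat /etc/dnsmasq.d/origin-dns.conf /etc/dnsmasq.d/origin-upstream-dns.conf

# query the node's dnsmasq directly, then a master, and compare
dig +short @<node-ip> kubernetes.default.svc.cluster.local
dig +short @<master-ip> kubernetes.default.svc.cluster.local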

On Thu, Jan 18, 2018 at 11:55 AM, Srinivas Naga Kotaru (skotaru) <
skot...@cisco.com> wrote:

> Will service discovery works during node service restart or node service
> down for some reason? Before 3.6, all service discover happens at master’s
> level, but 3.6 onwards, service discovery offloaded to nodes via dnsmasq à
> node service.
>
>
>
> During our test, service discovery not working during node restart. Why
> dnsmasq not forwarding to masters in case node service down or not
> available?
>
>
>
> --
>
> *Srinivas Kotaru*
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Problem with tito when building RPMs

2018-01-18 Thread Clayton Coleman
Have you pulled tags into your git repo?
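
For example (assuming the upstream remote is named origin):

git fetch --tags origin
git tag -l 'v3.7.*'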

On Jan 18, 2018, at 5:44 PM, Neale Ferguson  wrote:

I saw a similar problem reported in October last year but there was no
report of a resolution. I have done a build since then and it worked but
today when I went to build 3.7.1 I got the following:

OS_ONLY_BUILD_PLATFORMS='linux/amd64' hack/build-rpm-release.sh

[INFO] Building Origin release RPMs with tito...

Version: 3.7.1

Release: 1.1

Creating output directory: /tmp/tito

OS_GIT_MINOR::7+

OS_GIT_MAJOR::3

OS_GIT_VERSION::v3.7.1+8e67101-1

OS_GIT_TREE_STATE::clean

OS_GIT_CATALOG_VERSION::v0.1.2

OS_GIT_COMMIT::8e67101

Tagging new version of origin: 0.0.1 -> 3.7.1-1.1

version_and_rel:  3.7.1-1.1

suffixed_version: 3.7.1

release:  1.1

Traceback (most recent call last):

  File "/usr/bin/tito", line 23, in 

CLI().main(sys.argv[1:])

  File "/usr/lib/python2.7/site-packages/tito/cli.py", line 203, in main

return module.main(argv)

  File "/usr/lib/python2.7/site-packages/tito/cli.py", line 671, in main

return tagger.run(self.options)

  File "/usr/lib/python2.7/site-packages/tito/tagger/main.py", line 114, in
run

self._tag_release()

  File "/root/origin-3.7.1/go/src/
github.com/openshift/origin/.tito/lib/origin/tagger/__init__.py", line 40,
in _tag_release

super(OriginTagger, self)._tag_release()

  File "/usr/lib/python2.7/site-packages/tito/tagger/main.py", line 136, in
_tag_release

self._check_tag_does_not_exist(self._get_new_tag(new_version))

  File "/usr/lib/python2.7/site-packages/tito/tagger/main.py", line 558, in
_get_new_tag

return self._get_tag_for_version(suffixed_version, release)

TypeError: _get_tag_for_version() takes exactly 2 arguments (3 given)

[ERROR] PID 55920: hack/build-rpm-release.sh:32: `tito tag
--use-version="${OS_RPM_VERSION}" --use-release="${OS_RPM_RELEASE}"
--no-auto-changelog --offline` exited with status 1.

[INFO] Stack Trace:

[INFO]   1: hack/build-rpm-release.sh:32: `tito tag
--use-version="${OS_RPM_VERSION}" --use-release="${OS_RPM_RELEASE}"
--no-auto-changelog --offline`

[INFO]   Exiting with code 1.

[ERROR] hack/build-rpm-release.sh exited with code 1 after 00h 00m 05s

make: *** [build-rpms] Error 1


Neale

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


OpenShift v3.7.1 released

2018-01-16 Thread Clayton Coleman
This is a patch release containing fixes for some high severity issues: v3.7.1


Images have been pushed to the docker hub.  RPMs will likely trail by a few
days.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: origin v3.7.0 images at docker.io

2017-12-19 Thread Clayton Coleman
Did I forget to send the email?  We cut a few weeks ago.

On Dec 19, 2017, at 8:46 AM, Mateus Caruccio <mateus.caruc...@getupcloud.com>
wrote:

Hey, is there any estimate for when 3.7.0 will be released?
tnx

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-11-21 16:54 GMT-02:00 Clayton Coleman <ccole...@redhat.com>:

> We haven't cut 3.7.0 yet in origin.  Still waiting for final soak
> determination - there are a few outstanding issues being chased.  I'd urge
> everyone who wants 3.7.0 to verify that 3.7.0.rc0 works for them.
>
> On Tue, Nov 21, 2017 at 1:40 PM, Jason Brooks <jbro...@redhat.com> wrote:
>
>> Can we get v3.7.0 images for openshift/origin?
>>
>> See https://hub.docker.com/r/openshift/origin/tags/
>>
>> Regards, Jason
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [aos-devel] optimizing go guru

2017-12-05 Thread Clayton Coleman
Openshift and Kubernetes are massive go projects - over 3 million lines of
code (last I checked).  Initial compile can take a few minutes for these
tools.  Things to check:

1. Go 1.9 uses less memory when compiling
2. Be sure you are reusing your go compiled artifacts dir between multiple
tools (sometimes that is GOPATH/pkg, but openshift explicitly only compiles
temp packages into _output/local/pkgdir for reasons)
3. Get faster laptop :)

On Dec 5, 2017, at 9:44 AM, Luke Meyer  wrote:

In the context of the vim-go plugin. However behavior seems much the same
if I run the same command at the command line (I pulled it out of ps -ef).

On Tue, Dec 5, 2017 at 10:40 AM, Sebastian Jug  wrote:

> Are you using guru in some sort of editor/IDE or just standalone?
>
> On Dec 5, 2017 9:40 AM, "Luke Meyer"  wrote:
>
>>
>>
>> On Tue, Dec 5, 2017 at 9:36 AM, Sebastian Jug  wrote:
>>
>>> Sounds like you have got auto compile still on?
>>>
>>>
>> What does this mean in the context of go guru? Is there an env var to
>> set, an option to add, a config file to change to control this behavior?
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Webhook token auth

2017-12-01 Thread Clayton Coleman
At the current time authenticator web hooks aren't supported (3.6).  It's
being discussed for 3.9, but more realistically 3.10.

This is for IAM integration with AWS?

On Fri, Dec 1, 2017 at 3:48 PM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> Hi.
> Is it possible to use external webhook auth on openshift?
>
> I've edited origin-master with this fragment:
>
> kubernetesMasterConfig:
>   apiServerArguments:
> authentication-token-webhook-config-file:
> /etc/kubernetes/heptio-authenticator-aws/kubeconfig.yaml
>
> However it looks like apiserver is not even hitting the webhook service at
> 127.0.0.1
> No log messages even when loglevel=10
>
>
> $ oc version
> oc v3.6.1+008f2d5
> kubernetes v1.6.1+5115d708d7
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://XXX.XXX.getupcloud.com:443
> openshift v3.6.1+008f2d5
> kubernetes v1.6.1+5115d708d7
>
>
> Thanks
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


release-3.8 branch created, v3.8.0-alpha.1 and v3.9.0-alpha.0 tags created

2017-12-01 Thread Clayton Coleman
We've branched master for release-3.8 and created a v3.9.0-alpha.0 tag.
This is because 3.8 is a "skip" release where we'll only do an internal
data upgrade and then go from 3.7 to 3.9 directly.

Expect the next Kubernetes rebase for 1.9 to begin soon. Commits to master
are still allowed.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Unpredictable oc binary path for Origin version v3.7.0

2017-11-30 Thread Clayton Coleman
Binaries are published.

On Nov 30, 2017, at 10:11 AM, Clayton Coleman <ccole...@redhat.com> wrote:

I can publish them to github for 3.7.  We’re looking to reduce the amount
of effort required to publish binaries so we can do it more often.  Expect
binaries to be published to GCS in the future.

On Nov 30, 2017, at 2:05 AM, Lalatendu Mohanty <lmoha...@redhat.com> wrote:

Hi,

Till v3.7.0-rc.0 release oc binaries were published in github release page
[1] but with 3.7.0 the location has changed and causing minishift start
--openshift-version v3.7.0 to fail.

From the Minishift side we can change the code to point it to the new location,
but the new location is difficult to guess programmatically because it
seems to contain a build id "1032", which makes it hard for third parties to
determine.

If we can not predict the location then Minishift users can not use a new
version of Origin without code changes in Minishift which is not desirable.

Filed an github issue on the same :
https://github.com/openshift/origin/issues/17527

Is this going to be the norm going forward? Is there a way to predict the
oc binary path from Origin version? Do you have plan to setup a mirror some
place with predictable URL?

[1] https://github.com/openshift/origin/releases/tag/v3.7.0-rc.0

[2]
https://gcsweb-ci.svc.ci.openshift.org/gcs/origin-ci-test/branch-logs/origin/v3.7.0/builds/test_branch_origin_cross/1032/artifacts/zips/

Thanks,

Lala

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Unpredictable oc binary path for Origin version v3.7.0

2017-11-30 Thread Clayton Coleman
Yes, that is one of the considerations.  I can’t promise anything at the
moment.

On Nov 30, 2017, at 10:29 AM, Marek Jelen <mje...@redhat.com> wrote:

Hi Clayton,

when uploaded to GCS, could the binaries be as well uploaded to some well
known paths? Something like

https://storage.googleapis.com/origin-ci-test/branch-logs/origin/openshift-origin-client-tools-v3.7.0-latest-linux-64bit.tar.gz
https://storage.googleapis.com/origin-ci-test/branch-logs/origin/openshift-origin-client-tools-v3.7.0-release-linux-64bit.tar.gz

where “latest” would track development builds and “stable” would track
tagged releases.

It would be pretty convenient for scripting, not having to employ logic to
figure out the latest build ids or release hashes.

--

MAREK JELEN

DEVELOPER ADVOCATE, OPENSHIFT

https://www.redhat.com/

mje...@redhat.comM: +420724255807

On 30 November 2017 at 16:13:08, Clayton Coleman (ccole...@redhat.com)
wrote:

I can publish them to github for 3.7.  We’re looking to reduce the amount
of effort required to publish binaries so we can do it more often.  Expect
binaries to be published to GCS in the future.

On Nov 30, 2017, at 2:05 AM, Lalatendu Mohanty <lmoha...@redhat.com> wrote:

Hi,

Till v3.7.0-rc.0 release oc binaries were published in github release page
[1] but with 3.7.0 the location has changed and causing minishift start
--openshift-version v3.7.0 to fail.

From the Minishift side we can change the code to point it to the new location,
but the new location is difficult to guess programmatically because it
seems to contain a build id "1032", which makes it hard for third parties to
determine.

If we can not predict the location then Minishift users can not use a new
version of Origin without code changes in Minishift which is not desirable.

Filed an github issue on the same :
https://github.com/openshift/origin/issues/17527

Is this going to be the norm going forward? Is there a way to predict the
oc binary path from Origin version? Do you have plan to setup a mirror some
place with predictable URL?

[1] https://github.com/openshift/origin/releases/tag/v3.7.0-rc.0

[2]
https://gcsweb-ci.svc.ci.openshift.org/gcs/origin-ci-test/branch-logs/origin/v3.7.0/builds/test_branch_origin_cross/1032/artifacts/zips/

Thanks,

Lala

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Unpredictable oc binary path for Origin version v3.7.0

2017-11-30 Thread Clayton Coleman
I can publish them to github for 3.7.  We’re looking to reduce the amount
of effort required to publish binaries so we can do it more often.  Expect
binaries to be published to GCS in the future.

On Nov 30, 2017, at 2:05 AM, Lalatendu Mohanty  wrote:

Hi,

Till v3.7.0-rc.0 release oc binaries were published in github release page
[1] but with 3.7.0 the location has changed and causing minishift start
--openshift-version v3.7.0 to fail.

From the Minishift side we can change the code to point it to the new location,
but the new location is difficult to guess programmatically because it
seems to contain a build id "1032", which makes it hard for third parties to
determine.

If we can not predict the location then Minishift users can not use a new
version of Origin without code changes in Minishift which is not desirable.

Filed an github issue on the same :
https://github.com/openshift/origin/issues/17527

Is this going to be the norm going forward? Is there a way to predict the
oc binary path from Origin version? Do you have plan to setup a mirror some
place with predictable URL?

[1] https://github.com/openshift/origin/releases/tag/v3.7.0-rc.0

[2]
https://gcsweb-ci.svc.ci.openshift.org/gcs/origin-ci-test/branch-logs/origin/v3.7.0/builds/test_branch_origin_cross/1032/artifacts/zips/

Thanks,

Lala

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: registry deletion

2017-11-28 Thread Clayton Coleman
Note that any command that can generate objects can be an input to delete:

oadm registry -o json | oc delete -f -

It won’t delete objects you personally created, but is a good way to rewind.

On Nov 28, 2017, at 6:13 PM, Ben Parees  wrote:



On Tue, Nov 28, 2017 at 6:06 PM, Brian Keyes  wrote:

> I want to delete this docker registry and start over, this is the command
> that I think was run to create it
>
> oadm registry --config=/etc/origin/master/admin.kubeconfig
> --service-account=registry
>
>
>
> oadm registry delete or something like that ???
>

sorry, there's no command to delete it.  You should be able to just delete
the deploymentconfig (oc delete dc docker-registry -n default) and then run
oadm registry again to recreate it, however.  You'll get some errors
because of resources that already exist, but it'll get you a new registry
pod.

But I might also ask why you feel you need to delete the registry to get
back to a clean state.




>
> thanks 
> --
> Brian Keyes
> Systems Engineer, Vizuri
> 703-855-9074 <(703)%20855-9074>(Mobile)
> 703-464-7030 x8239 <(703)%20464-7030> (Office)
>
> FOR OFFICIAL USE ONLY: This email and any attachments may contain
> information that is privacy and business sensitive.  Inappropriate or
> unauthorized disclosure of business and privacy sensitive information may
> result in civil and/or criminal penalties as detailed in as amended Privacy
> Act of 1974 and DoD 5400.11-R.
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


-- 
Ben Parees | OpenShift

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: origin v3.7.0 images at docker.io

2017-11-21 Thread Clayton Coleman
We haven't cut 3.7.0 yet in origin.  Still waiting for final soak
determination - there are a few outstanding issues being chased.  I'd urge
everyone who wants 3.7.0 to verify that 3.7.0.rc0 works for them.

On Tue, Nov 21, 2017 at 1:40 PM, Jason Brooks  wrote:

> Can we get v3.7.0 images for openshift/origin?
>
> See https://hub.docker.com/r/openshift/origin/tags/
>
> Regards, Jason
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: question about openshift origin deployer

2017-11-19 Thread Clayton Coleman
The deploy command is a sub command in the openshift binary (openshift
infra deploy --help) and makes API calls back to openshift to launch the
pod.  The deployment service account is used by the pod and is granted the
permission to launch hook pods and also to scale the replica set for each
revision of a deployment config.


On Nov 19, 2017, at 12:49 PM, Yu Wei  wrote:

Hi,

How does openshift origin deployer start another container?

I checked the Dockerfile for the deployer and found it runs
"/usr/bin/openshift-deploy".


How is /usr/bin/openshift-deploy implemented? Does it call docker api?

Is "/usr/bin/openshift-deploy" also open sourced? Where could I find it?


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux

___
users mailing list
us...@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: v3.6.1 and v3.7.0-rc.0 released

2017-10-27 Thread Clayton Coleman
v3.6.0 images were carried forward, :latest was used for v3.7.0-rc.0

On Fri, Oct 27, 2017 at 7:15 PM, Rich Megginson <rmegg...@redhat.com> wrote:

> What about logging and metrics?
>
> On 10/27/2017 11:09 AM, Clayton Coleman wrote:
>
>> v3.6.1 and v3.7.0-rc.0 have been released on GitHub and the Docker Hub:
>>
>> https://github.com/openshift/origin/releases/tag/v3.7.0-rc.0 <
>> https://github.com/openshift/origin/releases/tag/v3.7.0-rc.0>
>> https://github.com/openshift/origin/releases/tag/v3.6.1 <
>> https://github.com/openshift/origin/releases/tag/v3.6.1>
>>
>> Thanks to everyone for their hard work so far!
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


v3.6.1 and v3.7.0-rc.0 released

2017-10-27 Thread Clayton Coleman
v3.6.1 and v3.7.0-rc.0 have been released on GitHub and the Docker Hub:

https://github.com/openshift/origin/releases/tag/v3.7.0-rc.0
https://github.com/openshift/origin/releases/tag/v3.6.1

Thanks to everyone for their hard work so far!
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: PTR record for POD

2017-10-05 Thread Clayton Coleman
No, we only report pod IPs via PTR records if they are part of a stateful
set and you have set the serviceName field on the set.
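
To illustrate, a pod only gets a PTR record when it belongs to a stateful set
wired to a headless service, roughly like this (names are made up and the
apiVersion depends on your cluster version):

oc create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None          # headless, required for the per-pod records
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web         # this field is what enables the per-pod DNS/PTR records
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:latest
EOF

# a reverse lookup of a pod IP then resolves to web-0.web.<namespace>.svc.cluster.local
dig -x <pod-ip> +short @master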

On Thu, Oct 5, 2017 at 6:35 PM, Srinivas Naga Kotaru (skotaru) <
skot...@cisco.com> wrote:

> HI
>
>
>
> Is it possible to get POD name given POD IP address by querying master DNS
> server?
>
>
>
>
>
> Service lookup working:
>
>
>
> dig +short @master kubernetes.default.svc.cluster.local
>
> 172.24.0.1
>
>
>
> PTR lookup not working:
>
>
>
> $ dig -x @master 172.24.0.1 +short
>
> 172.24.0.1
>
>
>
> --
>
> *Srinivas Kotaru*
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Scale up / down based on user traffic

2017-09-13 Thread Clayton Coleman
Custom metrics autoscaling is coming soon. It will be based on prometheus
metrics exposed by your app.
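
Until then, CPU-based autoscaling is what is available out of the box; a
minimal sketch (deployment config name and thresholds are hypothetical):

  # scale the worker deployment between 1 and 6 replicas based on CPU usage
  oc autoscale dc/worker --min=1 --max=6 --cpu-percent=80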

On Sep 13, 2017, at 7:33 AM, Josef Karasek  wrote:

At Promcon [0] I talked with a guy who exposes some metrics (https
connections, sql queries) in the prometheus format [1].
People also use JMX (for you that would probably be Jolokia?) and expose
metrics that way[2]. That's how you should be
able to reach the queue size etc. But I have to say, I haven't tried it
myself.

But if you want automatic scaling, there would have to be support for that
in openshift. AFAIK cpu/memory is supported at this time.

[0] https://promcon.io/2017-munich/schedule/
[1] https://www.youtube.com/watch?v=ImxglcUIHZU
https://github.com/fstab/promagent/
[2] https://github.com/prometheus/jmx_exporter

On Sun, Sep 10, 2017 at 7:19 AM, Srikrishna Paparaju 
wrote:

> Hi,
>
>We are Fabric8 Analytics team, trying to scale workers (OSD containers)
> up/down based on incoming user traffic. Based on research done so far,
> there is no easy way to scale up / down (OpenShift) based on some metric
> 'like number of messages in a queue'.
>
>Here is the issue being tracked: https://github.com/
> openshiftio/openshift.io/issues/668
>
>Would like to your thoughts...
>
> Thanks,
> SriKrishna
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Pods Not Terminating

2017-09-05 Thread Clayton Coleman
Please open a bug in openshift/origin and we'll triage it there.

On Tue, Sep 5, 2017 at 5:14 PM, Patrick Tescher <patr...@outtherelabs.com>
wrote:

> The pods are still “terminating” and have been stuck in that state. New
> pods have come and gone since then but the stuck ones are still stuck.
>
>
> On Sep 5, 2017, at 2:13 PM, Clayton Coleman <ccole...@redhat.com> wrote:
>
> So the errors recur continuously for a given pod once they start happening?
>
> On Tue, Sep 5, 2017 at 5:07 PM, Patrick Tescher <patr...@outtherelabs.com>
> wrote:
>
>> No patches have been applied since we upgraded to 3.6.0 over a week ago.
>> The errors just popped up for a few different pods in different namespaces.
>> The only thing we did today was launch a stateful set in a new namespace.
>> Those pods were not the ones throwing this error.
>>
>>
>> On Sep 5, 2017, at 1:19 PM, Clayton Coleman <ccole...@redhat.com> wrote:
>>
>> Were any patches applied to the system?  Some of these are normal if they
>> happen for a brief period of time.  Are you seeing these errors
>> continuously for the same pod over and over?
>>
>> On Tue, Sep 5, 2017 at 3:23 PM, Patrick Tescher <patr...@outtherelabs.com
>> > wrote:
>>
>>> This morning our cluster started experiencing an odd error on multiple
>>> nodes. Pods are stuck in the terminating phase. In our node log I see the
>>> following:
>>>
>>> Sep  5 19:17:22 ip-10-0-1-184 origin-node: E0905 19:17:22.043257  112306
>>> nestedpendingoperations.go:262] Operation for "\"
>>> kubernetes.io/secret/182285ee-9267-11e7-b7be-06415eb17bbf
>> -default-token-f18hx\"
>>> (\"182285ee-9267-11e7-b7be-06415eb17bbf\")" failed. No retries
>>> permitted until 2017-09-05 19:17:22.543230782 + UTC
>>> (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for
>>> volume "kubernetes.io/secret/182285ee-9267-11e7-b7be-06415eb17bbf-d
>> efault-token-f18hx" (volume.spec.Name:
>>> "default-token-f18hx") pod "182285ee-9267-11e7-b7be-06415eb17bbf" (UID:
>>> "182285ee-9267-11e7-b7be-06415eb17bbf") with: remove
>>> /var/lib/origin/openshift.local.volumes/pods/182285ee-9267-1
>>> 1e7-b7be-06415eb17bbf/volumes/kubernetes.io~secret/default-token-f18hx:
>>> device or resource busy
>>>
>>> That path is not mounted (running mount does not list it) and running
>>> fuser -v on that directory does not show anything. Trying to rmdir results
>>> in a similar error:
>>>
>>> sudo rmdir var/lib/origin/openshift.local.volumes/pods/182285ee-9267-11
>>> e7-b7be-06415eb17bbf/volumes/kubernetes.io~secret/default-token-f18hx
>>> rmdir: failed to remove ‘var/lib/origin/openshift.loca
>>> l.volumes/pods/182285ee-9267-11e7-b7be-06415eb17bbf/volumes/
>>> kubernetes.io~secret/default-token-f18hx’: No such file or directory
>>>
>>> Is anyone else getting this error?
>>>
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Pods Not Terminating

2017-09-05 Thread Clayton Coleman
So the errors recur continuously for a given pod once they start happening?

On Tue, Sep 5, 2017 at 5:07 PM, Patrick Tescher <patr...@outtherelabs.com>
wrote:

> No patches have been applied since we upgraded to 3.6.0 over a week ago.
> The errors just popped up for a few different pods in different namespaces.
> The only thing we did today was launch a stateful set in a new namespace.
> Those pods were not the ones throwing this error.
>
>
> On Sep 5, 2017, at 1:19 PM, Clayton Coleman <ccole...@redhat.com> wrote:
>
> Were any patches applied to the system?  Some of these are normal if they
> happen for a brief period of time.  Are you seeing these errors
> continuously for the same pod over and over?
>
> On Tue, Sep 5, 2017 at 3:23 PM, Patrick Tescher <patr...@outtherelabs.com>
> wrote:
>
>> This morning our cluster started experiencing an odd error on multiple
>> nodes. Pods are stuck in the terminating phase. In our node log I see the
>> following:
>>
>> Sep  5 19:17:22 ip-10-0-1-184 origin-node: E0905 19:17:22.043257  112306
>> nestedpendingoperations.go:262] Operation for "\"
>> kubernetes.io/secret/182285ee-9267-11e7-b7be-06415eb17bbf
>> -default-token-f18hx\"
>> (\"182285ee-9267-11e7-b7be-06415eb17bbf\")" failed. No retries permitted
>> until 2017-09-05 19:17:22.543230782 + UTC (durationBeforeRetry 500ms).
>> Error: UnmountVolume.TearDown failed for volume "
>> kubernetes.io/secret/182285ee-9267-11e7-b7be-06415eb17bbf-
>> default-token-f18hx" (volume.spec.Name:
>> "default-token-f18hx") pod "182285ee-9267-11e7-b7be-06415eb17bbf" (UID:
>> "182285ee-9267-11e7-b7be-06415eb17bbf") with: remove
>> /var/lib/origin/openshift.local.volumes/pods/182285ee-9267-
>> 11e7-b7be-06415eb17bbf/volumes/kubernetes.io~secret/default-token-f18hx:
>> device or resource busy
>>
>> That path is not mounted (running mount does not list it) and running
>> fuser -v on that directory does not show anything. Trying to rmdir results
>> in a similar error:
>>
>> sudo rmdir var/lib/origin/openshift.local.volumes/pods/182285ee-9267-
>> 11e7-b7be-06415eb17bbf/volumes/kubernetes.io~secret/default-token-f18hx
>> rmdir: failed to remove ‘var/lib/origin/openshift.loca
>> l.volumes/pods/182285ee-9267-11e7-b7be-06415eb17bbf/volumes/kubernetes.io
>> ~secret/default-token-f18hx’: No such file or directory
>>
>> Is anyone else getting this error?
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Pods Not Terminating

2017-09-05 Thread Clayton Coleman
Were any patches applied to the system?  Some of these are normal if they
happen for a brief period of time.  Are you seeing these errors
continuously for the same pod over and over?

On Tue, Sep 5, 2017 at 3:23 PM, Patrick Tescher 
wrote:

> This morning our cluster started experiencing an odd error on multiple
> nodes. Pods are stuck in the terminating phase. In our node log I see the
> following:
>
> Sep  5 19:17:22 ip-10-0-1-184 origin-node: E0905 19:17:22.043257  112306
> nestedpendingoperations.go:262] Operation for "\"kubernetes.io/secret/
> 182285ee-9267-11e7-b7be-06415eb17bbf-default-token-f18hx\"
> (\"182285ee-9267-11e7-b7be-06415eb17bbf\")" failed. No retries permitted
> until 2017-09-05 19:17:22.543230782 + UTC (durationBeforeRetry 500ms).
> Error: UnmountVolume.TearDown failed for volume "kubernetes.io/secret/
> 182285ee-9267-11e7-b7be-06415eb17bbf-default-token-f18hx" (
> volume.spec.Name: "default-token-f18hx") pod 
> "182285ee-9267-11e7-b7be-06415eb17bbf"
> (UID: "182285ee-9267-11e7-b7be-06415eb17bbf") with: remove
> /var/lib/origin/openshift.local.volumes/pods/182285ee-
> 9267-11e7-b7be-06415eb17bbf/volumes/kubernetes.io~secret/default-token-f18hx:
> device or resource busy
>
> That path is not mounted (running mount does not list it) and running
> fuser -v on that directory does not show anything. Trying to rmdir results
> in a similar error:
>
> sudo rmdir var/lib/origin/openshift.local.volumes/pods/182285ee-
> 9267-11e7-b7be-06415eb17bbf/volumes/kubernetes.io~secret/
> default-token-f18hx
> rmdir: failed to remove ‘var/lib/origin/openshift.
> local.volumes/pods/182285ee-9267-11e7-b7be-06415eb17bbf/volumes/
> kubernetes.io~secret/default-token-f18hx’: No such file or directory
>
> Is anyone else getting this error?
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


OpenShift Origin v3.7.0-alpha.1 released

2017-08-30 Thread Clayton Coleman
Release notes are up on GitHub (v3.7.0-alpha.1) and the new images have been
pushed.

Of major note - in 3.7 OpenShift RBAC will shift to being a layer on top of
Kubernetes RBAC (preserving your existing roles).  Existing APIs will
continue to be supported, but unlike 3.6 where Kubernetes RBAC was
automatically kept in sync with OpenShift RBAC, starting with 3.7 the
canonical source of authorization will be via the Kubernetes RBAC APIs.
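
To see the canonical objects directly, you can query the Kubernetes RBAC API
group (a sketch; 'admin' is just one of the stock roles):

  oc get clusterroles.rbac.authorization.k8s.io
  oc describe clusterrole.rbac.authorization.k8s.io admin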
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Is that possible to deploy openshift on existing k8s cluster?

2017-08-22 Thread Clayton Coleman
On Tue, Aug 22, 2017 at 4:38 PM, Sanjeev Rampal (srampal) <sram...@cisco.com
> wrote:

> Hi,
>
>
>
> Two related (but slightly different) questions …
>
>
>
> 1)  Is it possible to setup Openshift RBAC such that some specific
> tenants can only use standard kubernetes APIs/ CLIs and not Openshift
> specific api/ clis ? This way, a service provider can provide some tenants
> a pure native kubernetes only service (if some specific tenants prefer this
> and want to ensure their applications are portable to pure kubernetes
> environments at all times) and some other tenants can get the full
> OPenshift API/ CLI access within another project.
>

Yes, you could take the existing 'admin' and 'edit' cluster roles and copy
them to 'kube-admin' and 'kube-edit' roles, then remove the 'create' and
'update' verbs on the OpenShift-specific resources.  That should be
sufficient.
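
A rough sketch of that flow (role, user and project names are hypothetical):

  # clone the built-in role, then edit the copy: rename it to kube-admin and
  # drop (or reduce to read-only) the rules covering OpenShift-specific
  # resources such as routes, buildconfigs, deploymentconfigs, imagestreams
  # and templates
  oc get clusterrole admin -o yaml > kube-admin.yaml
  oc create -f kube-admin.yaml
  oc adm policy add-role-to-user kube-admin alice -n tenant-a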


> 2)  Any document/ guidelines on what one has to do in order to create
> a private build in which Openshift Origin 3.6 is built with Kubernetes 1.7
> (or similar future combinations). This may be something someone may want to
> do to pick up a new k8s feature that only exists in a future upstream
> release but is otherwise completely independent of Openshift Origin. Of
> course this would not be community supported (private image/ fork  or
> Origin only) but useful if some tenant/ project is using pure kubernetes
> only functionality and needs the latest upstream kubernetes.
>

Unfortunately for the next few releases this is fairly expensive - we call
this a "rebase" and it's a lot of refactoring to match upstream Kube.  Some
of the folks on the team specialize in reducing this cost (what I alluded
to as being something that may be possible in the future) so that future
versions of OpenShift may run directly on top of a Kube version.  Today I
would say it's probably very difficult and not recommended without a lot of
expertise in both the OpenShift and Kube codebases.


>
>
>
>
> Rgds,
> Sanjeev
>
>
>
>
>
> *From: *<dev-boun...@lists.openshift.redhat.com> on behalf of Clayton
> Coleman <ccole...@redhat.com>
> *Date: *Tuesday, August 22, 2017 at 9:36 AM
> *To: *Yu Wei <yu20...@hotmail.com>
> *Cc: *"us...@lists.openshift.redhat.com" <us...@lists.openshift.redhat.com>,
> "dev@lists.openshift.redhat.com" <dev@lists.openshift.redhat.com>
> *Subject: *Re: Is that possible to deploy openshift on existing k8s
> cluster?
>
>
>
Not today.  We hope to support that at some point in the future, but today
OpenShift requires additional compiled-in control points that only work
when installing Origin directly from the binaries we build.
>
>
> On Aug 22, 2017, at 6:36 AM, Yu Wei <yu20...@hotmail.com> wrote:
>
> Hi,
>
> Now we have existing k8s cluster running workloads.
>
> We also want to make use of features provided by Openshift Origin, for
> example DevOps etc.
>
> Is that possible to integrate openshift origin with our existing k8s?
>
>
>
> Any advice?
>
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Changes in imagestream does not trigger build

2017-08-10 Thread Clayton Coleman
Is this an upgrade of a cluster, or a brand new cluster?  If it's an
upgrade, did you run cluster role reconcile after updating your cluster?
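
If it was an upgrade and the roles were never reconciled, that is the first
thing to try (run as a cluster admin; a sketch):

  oc adm policy reconcile-cluster-roles --confirm
  oc adm policy reconcile-cluster-role-bindings --confirm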

On Thu, Aug 10, 2017 at 12:48 PM, Bamacharan Kundu 
wrote:

> Hi,
>   I am having one build chain with openshift/origin:v3.6.0. where
>
> 1. First build is triggered using custom build strategy manually. pushes
> newly created image to an imagestream.
> 2. Second build is triggered based on image change in image stream tag.
>
> Now, the first build is working normally and pushing the image to the
> specific image stream. But the second build is not getting triggered.
>
> This works fine with v1.2.1. Also I have enabled
> "system:build-strategy-custom", even tried adding cluster-admin cluster
> role to the user, but no luck.
>
> Any suggestion for what should I be looking at, or if I am missing
> something.
>
> pasting the build trigger template below for reference.
>
> {
>   "kind": "BuildConfig",
>   "apiVersion": "v1",
>   "metadata": {
>   "name": "test"
>   },
>   "spec": {
> "triggers": [
>   {
> "type": "Generic",
> "generic": {
>   "secret": "${BUILD_TRIGGER_SECRET}"
> }
>   },
>   {
> "type": "ImageChange",
> "imageChange": {
>   "from": {
> "kind": "ImageStreamTag",
> "name": "${JOBID}:test"
>   }
> }
>   }
> ],
> "strategy": {
>   "type": "Custom",
>   "customStrategy": {
> "exposeDockerSocket": true,
> "from": {
>   "kind": "DockerImage",
>   "name": "cccp-test"
> }
>
> Thanks & Regards
> Bamacharan Kundu
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Seeking Advice On Exporting Image Streams

2017-08-09 Thread Clayton Coleman
I don't know what that us to do with templates though - what "parameter"
applies?  The registry external name?

On Aug 9, 2017, at 1:22 PM, Cesar Wong <cew...@redhat.com> wrote:

The purpose of the endpoint is to parameterize an aspect of the resources
passed to it. We are initially implementing image references, but the idea
is to allow parameterizing other things such as environment variables,
names, etc. This is orthogonal to transforming the resources for export
which ‘GET’ with the export parameter needs to handle.

On Aug 9, 2017, at 1:20 PM, Clayton Coleman <ccole...@redhat.com> wrote:

Why do we need parameters? Which parameters are we adding?

On Aug 9, 2017, at 12:21 PM, Cesar Wong <cew...@redhat.com> wrote:

Hi Devan,

You can see my branch here:
https://github.com/csrwng/origin/tree/parameterize_template
(last 5 commits)

Hopefully should be a PR soon. The REST endpoint should be functional, the
CLI still needs work, but basically the idea is to have the reverse of the
‘oc process’ command, where the input is a list of resources and out comes
a template with parameters.

On Aug 9, 2017, at 11:40 AM, Devan Goodwin <dgood...@redhat.com> wrote:

On Wed, Aug 9, 2017 at 11:44 AM, Cesar Wong <cew...@redhat.com> wrote:

Hi Devan,

This past iteration I started work on this same problem [1]

https://trello.com/c/I2ZJxS94/998-5-improve-oc-export-to-parameterize-containerapppromotion

The problem is broad and the way I decided to break it up is to consider the
export and parameterize operations independently. The export should be
handled by the resource’s strategy as you mentioned in the Kube issue you
opened. The parameterization part can be a follow up to the export. Here’s
an initial document describing it:

https://docs.google.com/a/redhat.com/document/d/15SLkhXRovY1dLbxpWFy_Wfq3I6xMznsOAnopTYrXw_A/edit?usp=sharing


Thanks that was a good read, will keep an eye on this document.

Does anything exist yet for your parameterization code? Curious what
it looks like and if it's something we could re-use yet, what the
inputs and outputs are, etc.


On the export side, I think we need to decide whether there is different
“types” of export that can happen which should affect the logic of the
resource strategy. For example, does a deployment config look different if
you’re exporting it for use in a different namespace vs a different cluster.
If this is the case, then right now is probably a good time to drive that
change to the upstream API as David suggested.


Is anyone working on a proposal for this export logic upstream? I am
wondering if I should try to put one together if I can find the time.
The general idea (as I understand it) would be to migrate the
currently quite broken export=true param to something strategy based,
and interpret "true" to mean a strategy that matches what we do today.
The references in code I've seen indicate that the current intention
is to strip anything the user cannot specify themselves.




On Aug 9, 2017, at 10:27 AM, Ben Parees <bpar...@redhat.com> wrote:



On Wed, Aug 9, 2017 at 10:00 AM, Devan Goodwin <dgood...@redhat.com> wrote:


On Wed, Aug 9, 2017 at 9:58 AM, Ben Parees <bpar...@redhat.com> wrote:



On Wed, Aug 9, 2017 at 8:49 AM, Devan Goodwin <dgood...@redhat.com>
wrote:


We are working on a more robust project export/import process (into a
new namespace, possibly a new cluster, etc) and have a question on how
to handle image streams.

Our first test was with "oc new-app
>> https://github.com/openshift/ruby-hello-world.git", this results in an
image stream like the following:

$ oc get is ruby-hello-world -o yaml
apiVersion: v1
kind: ImageStream
metadata:
 annotations:
   openshift.io/generated-by: OpenShiftNewApp
 creationTimestamp: 2017-08-08T12:01:22Z
 generation: 1
 labels:
   app: ruby-hello-world
 name: ruby-hello-world
 namespace: project1
 resourceVersion: "183991"
 selfLink: /oapi/v1/namespaces/project1/imagestreams/ruby-hello-world
 uid: 4bd229be-7c31-11e7-badf-989096de63cb
spec:
 lookupPolicy:
   local: false
status:
 dockerImageRepository: 172.30.1.1:5000/project1/ruby-hello-world
 tags:
 - items:
   - created: 2017-08-08T12:02:04Z
 dockerImageReference:


172.30.1.1:5000/project1/ruby-hello-world@sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
 generation: 1
 image:
sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
   tag: latest


If we link up with the kubernetes resource exporting by adding
--export:

$ oc get is ruby-hello-world -o yaml --export
apiVersion: v1
kind: ImageStream
metadata:
 annotations:
   openshift.io/generated-by: OpenShiftNewApp
 creationTimestamp: null
 generation: 1
 labels:
   app: ruby-hello-world
 name: ruby-hello-world
 namespace: default
 selfLink: /oapi/v1/namespaces/default/imagestreams/ruby-hello-world
spec:
 lookupPolicy:
   local: false
status:
 dockerImageRepository: 17

Re: Seeking Advice On Exporting Image Streams

2017-08-09 Thread Clayton Coleman
Why do we need parameters? Which parameters are we adding?

On Aug 9, 2017, at 12:21 PM, Cesar Wong  wrote:

Hi Devan,

You can see my branch here:
https://github.com/csrwng/origin/tree/parameterize_template
(last 5 commits)

Hopefully should be a PR soon. The REST endpoint should be functional, the
CLI still needs work, but basically the idea is to have the reverse of the
‘oc process’ command, where the input is a list of resources and out comes
a template with parameters.

On Aug 9, 2017, at 11:40 AM, Devan Goodwin  wrote:

On Wed, Aug 9, 2017 at 11:44 AM, Cesar Wong  wrote:

Hi Devan,

This past iteration I started work on this same problem [1]

https://trello.com/c/I2ZJxS94/998-5-improve-oc-export-to-parameterize-containerapppromotion

The problem is broad and the way I decided to break it up is to consider the
export and parameterize operations independently. The export should be
handled by the resource’s strategy as you mentioned in the Kube issue you
opened. The parameterization part can be a follow up to the export. Here’s
an initial document describing it:

https://docs.google.com/a/redhat.com/document/d/15SLkhXRovY1dLbxpWFy_Wfq3I6xMznsOAnopTYrXw_A/edit?usp=sharing


Thanks that was a good read, will keep an eye on this document.

Does anything exist yet for your parameterization code? Curious what
it looks like and if it's something we could re-use yet, what the
inputs and outputs are, etc.


On the export side, I think we need to decide whether there is different
“types” of export that can happen which should affect the logic of the
resource strategy. For example, does a deployment config look different if
you’re exporting it for use in a different namespace vs a different cluster.
If this is the case, then right now is probably a good time to drive that
change to the upstream API as David suggested.


Is anyone working on a proposal for this export logic upstream? I am
wondering if I should try to put one together if I can find the time.
The general idea (as I understand it) would be to migrate the
currently quite broken export=true param to something strategy based,
and interpret "true" to mean a strategy that matches what we do today.
The references in code I've seen indicate that the current intention
is to strip anything the user cannot specify themselves.




On Aug 9, 2017, at 10:27 AM, Ben Parees  wrote:



On Wed, Aug 9, 2017 at 10:00 AM, Devan Goodwin  wrote:


On Wed, Aug 9, 2017 at 9:58 AM, Ben Parees  wrote:



On Wed, Aug 9, 2017 at 8:49 AM, Devan Goodwin 
wrote:


We are working on a more robust project export/import process (into a
new namespace, possibly a new cluster, etc) and have a question on how
to handle image streams.

Our first test was with "oc new-app
https://github.com/openshift/ruby-hello-world.git", this results in an
image stream like the following:

$ oc get is ruby-hello-world -o yaml
apiVersion: v1
kind: ImageStream
metadata:
 annotations:
   openshift.io/generated-by: OpenShiftNewApp
 creationTimestamp: 2017-08-08T12:01:22Z
 generation: 1
 labels:
   app: ruby-hello-world
 name: ruby-hello-world
 namespace: project1
 resourceVersion: "183991"
 selfLink: /oapi/v1/namespaces/project1/imagestreams/ruby-hello-world
 uid: 4bd229be-7c31-11e7-badf-989096de63cb
spec:
 lookupPolicy:
   local: false
status:
 dockerImageRepository: 172.30.1.1:5000/project1/ruby-hello-world
 tags:
 - items:
   - created: 2017-08-08T12:02:04Z
 dockerImageReference:


172.30.1.1:5000/project1/ruby-hello-world@sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
 generation: 1
 image:
sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
   tag: latest


If we link up with the kubernetes resource exporting by adding
--export:

$ oc get is ruby-hello-world -o yaml --export
apiVersion: v1
kind: ImageStream
metadata:
 annotations:
   openshift.io/generated-by: OpenShiftNewApp
 creationTimestamp: null
 generation: 1
 labels:
   app: ruby-hello-world
 name: ruby-hello-world
 namespace: default
 selfLink: /oapi/v1/namespaces/default/imagestreams/ruby-hello-world
spec:
 lookupPolicy:
   local: false
status:
 dockerImageRepository: 172.30.1.1:5000/default/ruby-hello-world


This leads to an initial question, what stripped the status tags? I
would have expected this code to live in the image stream strategy:


https://github.com/openshift/origin/blob/master/pkg/image/registry/imagestream/strategy.go
but this does not satisfy RESTExportStrategy, I wasn't able to
determine where this is happening.

The dockerImageRepository in status remains, but weirdly flips from
"project1" to "default" when doing an export. Should this remain in an
exported IS at all? And if so is there any reason why it would flip
from project1 to default?

Our real problem however picks up in the deployment config after
import, in here we end 

Re: OpenShift Origin v3.6.0 is released

2017-07-31 Thread Clayton Coleman
Yes, that'll probably have to be a point change.

On Mon, Jul 31, 2017 at 11:02 PM, Andrew Lau <and...@andrewklau.com> wrote:

> I think the node images are still missing the sdn-ovs package.
>
> On Tue, 1 Aug 2017 at 07:45 Clayton Coleman <ccole...@redhat.com> wrote:
>
>> This has been fixed and images were repushed.
>>
>> On Mon, Jul 31, 2017 at 12:13 PM, Clayton Coleman <ccole...@redhat.com>
>> wrote:
>>
>>> Hrm, so they do.  Looking.
>>>
>>> On Mon, Jul 31, 2017 at 11:57 AM, Andrew Lau <and...@andrewklau.com>
>>> wrote:
>>>
>>>> The images still seem to be using rc0 packages.
>>>>
>>>> rpm -qa | grep origin
>>>> origin-clients-3.6.0-0.rc.0.359.de23676.x86_64
>>>> origin-3.6.0-0.rc.0.359.de23676.x86_64
>>>>
>>>> I also had a PR which just got merged for a missing package in the node
>>>> image https://github.com/openshift/origin/pull/15542
>>>>
>>>> On Tue, 1 Aug 2017 at 01:34 Clayton Coleman <ccole...@redhat.com>
>>>> wrote:
>>>>
>>>>> https://github.com/openshift/origin/releases/tag/v3.6.0
>>>>>
>>>>> Images are pushed to the hub.  Thanks to everyone for their hard work
>>>>> this release.  Expect official RPMs in a few days.  Remember to use the
>>>>> Ansible release-3.6 branch for your installs.
>>>>> ___
>>>>> dev mailing list
>>>>> dev@lists.openshift.redhat.com
>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>>
>>>>
>>>
>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Origin v3.6.0 is released

2017-07-31 Thread Clayton Coleman
This has been fixed and images were repushed.

On Mon, Jul 31, 2017 at 12:13 PM, Clayton Coleman <ccole...@redhat.com>
wrote:

> Hrm, so they do.  Looking.
>
> On Mon, Jul 31, 2017 at 11:57 AM, Andrew Lau <and...@andrewklau.com>
> wrote:
>
>> The images still seem to be using rc0 packages.
>>
>> rpm -qa | grep origin
>> origin-clients-3.6.0-0.rc.0.359.de23676.x86_64
>> origin-3.6.0-0.rc.0.359.de23676.x86_64
>>
>> I also had a PR which just got merged for a missing package in the node
>> image https://github.com/openshift/origin/pull/15542
>>
>> On Tue, 1 Aug 2017 at 01:34 Clayton Coleman <ccole...@redhat.com> wrote:
>>
>>> https://github.com/openshift/origin/releases/tag/v3.6.0
>>>
>>> Images are pushed to the hub.  Thanks to everyone for their hard work
>>> this release.  Expect official RPMs in a few days.  Remember to use the
>>> Ansible release-3.6 branch for your installs.
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Origin v3.6.0 is released

2017-07-31 Thread Clayton Coleman
Hrm, so they do.  Looking.

On Mon, Jul 31, 2017 at 11:57 AM, Andrew Lau <and...@andrewklau.com> wrote:

> The images still seem to be using rc0 packages.
>
> rpm -qa | grep origin
> origin-clients-3.6.0-0.rc.0.359.de23676.x86_64
> origin-3.6.0-0.rc.0.359.de23676.x86_64
>
> I also had a PR which just got merged for a missing package in the node
> image https://github.com/openshift/origin/pull/15542
>
> On Tue, 1 Aug 2017 at 01:34 Clayton Coleman <ccole...@redhat.com> wrote:
>
>> https://github.com/openshift/origin/releases/tag/v3.6.0
>>
>> Images are pushed to the hub.  Thanks to everyone for their hard work
>> this release.  Expect official RPMs in a few days.  Remember to use the
>> Ansible release-3.6 branch for your installs.
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


OpenShift Origin v3.6.0 is released

2017-07-31 Thread Clayton Coleman
https://github.com/openshift/origin/releases/tag/v3.6.0

Images are pushed to the hub.  Thanks to everyone for their hard work this
release.  Expect official RPMs in a few days.  Remember to use the Ansible
release-3.6 branch for your installs.
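
For reference, picking up the matching installer branch looks roughly like this
(the inventory path is hypothetical):

  git clone -b release-3.6 https://github.com/openshift/openshift-ansible.git
  ansible-playbook -i ~/inventory openshift-ansible/playbooks/byo/config.yml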
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Selecting specific openshift/origin-deployer image version

2017-07-28 Thread Clayton Coleman
Define a custom deployment strategy struct (customParams) on the deployment
config and set the image in there.  If you have the rolling strategy set, the
deployment will still use that custom image (custom params are not exclusive
to the other strategies).
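
A minimal sketch of that (deployment config name and image tag are
hypothetical):

  # keep the Rolling strategy but pin the deployer image via customParams
  oc patch dc/myapp --type=merge -p \
    '{"spec":{"strategy":{"customParams":{"image":"openshift/origin-deployer:v3.6.0"}}}}'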

On Jul 28, 2017, at 11:55 AM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

How could I select a specific version of openshift/origin-deployer to run
my deployment strategy?


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Default Service Account Names

2017-07-17 Thread Clayton Coleman
Asked another way - why not export them?
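
That said, if the goal is only to tell them apart, filtering on the well-known
names is the pragmatic route today (a sketch; the project name is
hypothetical):

  oc get sa -o name -n myproject | grep -vE '/(builder|deployer|default)$'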

On Mon, Jul 17, 2017 at 7:31 AM, Devan Goodwin  wrote:

> I've been working on project archival for online, with regard to
> service accounts we may need to export those created manually by the
> user, and skip those created automatically by OpenShift when we
> created the project.
>
> There does not appear to be any information on those service accounts
> to identify that it was automatically created by OpenShift:
>
> - apiVersion: v1
>   imagePullSecrets:
>   - name: deployer-dockercfg-t2ckf
>   kind: ServiceAccount
>   metadata:
> creationTimestamp: 2017-07-12T14:48:19Z
> name: deployer
> namespace: myproject
>
>
> Is assuming the service accounts with names "builder", "deployer", and
> "default" a stable set we could count on for skipping during an
> export?
>
> Would it be acceptable to start adding an annotation to these service
> accounts similar to what we do for secrets that are attached to those
> SAs?
>
>   kind: Secret
>   metadata:
> annotations:
>   kubernetes.io/created-by: openshift.io/create-dockercfg-secrets
>
> Perhaps in this case "openshift.io/default-service-accounts"?
> (suggestions welcome)
>
> If so, is there any established precedent for migrating pre-existing
> builder/deployer/default SAs to add the annotation during an upgrade?
>
> Thanks!
>
> Devan
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Why openshift requires DNS server

2017-07-13 Thread Clayton Coleman
We've discussed it, there are other near term priorities.

On Jul 13, 2017, at 10:36 AM, Fox, Kevin M  wrote:

Is there any intention to contribute it to k8s?

Thanks,
Kevin
--
*From:* dev-boun...@lists.openshift.redhat.com [
dev-boun...@lists.openshift.redhat.com] on behalf of Jordan Liggitt [
jligg...@redhat.com]
*Sent:* Thursday, July 13, 2017 5:57 AM
*To:* Haoran Wang
*Cc:* us...@lists.openshift.redhat.com; dev@lists.openshift.redhat.com
*Subject:* Re: Why openshift requires DNS server

> Why is a separate DNS server needed? Could kube-dns be used?

kube-dns is actually a separate dns server as well. Openshift's DNS
implementation resolves some of the scalability issues kube-dns has and is
preferred


On Jul 13, 2017, at 6:20 AM, Haoran Wang  wrote:

Hi,
1. When you set up your cluster, you will have some nodes running a router
pod, and you need a subdomain for your cluster.  For example, if your cluster
uses product.example.com, you need the DNS server to forward requests for
names under product.example.com to the router nodes.

2. Assume you create a project named test and deploy an app to the OpenShift
cluster.  You need to create a route [1] for external access; after that, the
URL you can access it at is test.product.example.com.  When you access your
app from outside, the general flow is:

1. The public DNS hierarchy delegates the lookup to your DNS server.
2. Your DNS server forwards the request to one of the nodes that has an
OpenShift router pod deployed.
3. The router pod (running haproxy by default) forwards the request to the
running pods.
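
Concretely, once the wildcard record for the subdomain is in place, any route
host resolves straight to a router node (the IP below is hypothetical):

  dig +short test.product.example.com
  192.0.2.10

From there haproxy on the router matches the Host header and proxies the
request on to the app's pods.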

Hope this can help you. :)


[1]https://docs.openshift.org/latest/dev_guide/routes.html

Regards,
Haoran Wang

OpenShift QE
IRC: haowang
Raycom office - Red Hat China
Direct: +86 10 65339430 (ext-8389430)


On Thu, Jul 13, 2017 at 5:47 PM, bmeng  wrote:

> AFAIK, the kube-dns resolve the service-names inside the cluster.
>
> The dns server required by OpenShift gives the applications external
> accessibility.
>
> With the dns resolution and the route you created for your application,
> you can access your web application anywhere.
>
> On 07/13/2017 05:09 PM, Yu Wei wrote:
>
> Hi,
>
> I'm learning OpenShift by setting up cluster with VMs.
>
> From the document, some content about DNS is as below,
>
> *OpenShift Origin requires a fully functional DNS server in the
> environment. This is ideally a separate host running DNS software and can
> provide name resolution to hosts and containers running on the platform.*
>
>
> Why is a separate DNS server needed? Could kube-dns be used?
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

