Re: [CentOS-devel] Update from CentOS PaaS SIG

2020-01-09 Thread Daniel Comnea
Hi Sandro,

Let me clarify a few things, as I fear there might be some confusion. Even
though a lot of history will be provided, it will hopefully help
everyone understand the *"why & where"* questions.

The message we tried to send out with the blog post was to a) give an update
on the current state of OKD and bring a bit more clarity around OKD 3.x/4.x,
and b) explain the PaaS SIG's responsibility as well as answer the main
question around OKD 4.x + CentOS as the base OS.

In terms of responsibilities, I hope it was clear when we mentioned that the
PaaS SIG was dealing with building and publishing the 3.x rpms into the CentOS
repos; that charter has not changed.
As such, the release process for building the 3.x rpms followed the pattern
below (a rough sketch follows the list):


   - RH would cut a new Origin [1] release, identified by a new release tag
   - Using that release tag - e.g. the latest 3.11 [2] - we amended the spec
   file and built the rpm from that tag
   - The same process was followed for the openshift-ansible rpms, based on
   the [3] project
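
For illustration, a rough sketch of that flow (hedged: the tag, spec file
name and build tooling are placeholders for what the SIG pipeline actually
did via the CentOS build system):

  # fetch the source at a given release tag, e.g. v3.11.0
  git clone https://github.com/openshift/origin && cd origin
  git checkout v3.11.0

  # amend the version/release fields in the spec file to match the tag,
  # then build the rpm (the SIG used the CBS/Koji equivalent of this)
  sed -i 's/^Version:.*/Version:        3.11.0/' origin.spec
  rpmbuild -ba origin.spec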


Since Oct 2018, however, no new release tag has been created on [1]. The CI
(the same one that builds the RH-managed OCP 3.11) evolved and switched to a
different method: it no longer cuts "OKD 3.11 tag releases" but instead
creates artifacts in a rolling mode, i.e. whenever new commits are merged
into the OKD 3.11 branch [1].
With that said, OKD 3.11 is formed by Docker images plus the rpms generated
by the RH CI (the same one used by OCP 3.11), which are available at [4] (a
sample repo file is sketched below).
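
A minimal repo file for consuming those CI-built rpms directly (hedged:
assuming the [4] URL layout; gpgcheck is off because these are unsigned CI
artifacts):

  # /etc/yum.repos.d/okd-311-ci.repo
  [okd-311-ci]
  name=OKD 3.11 rolling CI rpms
  baseurl=https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/
  gpgcheck=0
  enabled=1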

As you can see from the above, the PaaS SIG was filling a gap by creating the
rpms for OKD 3.x until the CI caught up and started to build the OKD 3.11
rpms itself.

With that said, we - the PaaS SIG - have not released newer OKD 3.11 rpms
into the CentOS repo after the switch-over, mainly because:

   - the community and the folks asking for newer OKD 3.11 rpms in CentOS
   went all quiet; in other words, nobody was asking for anything new
   - the confusion around OKD 3.x vs OKD 4.x and who does what => that
   was resolved with the formation of the OKD Working Group, where things
   became clearer

Regarding your question around "how frequently the PaaS SIG is going to
release and from which OKD branches", my answer is:

Currently I haven't planned for any newer OKD 3.11 rpms to be built from the
[5] branch (taking not the release tag but the latest commit in the branch)
and published into the CentOS repos; however, IF there is demand, I'm happy
to resurrect and change the automation to get newer rpms built.

However, I want to make it clear that even if the above work is done, it will
continue only as long as RH keeps supporting 3.11; once OKD 4.x goes GA, I
think OKD 3.11 will no longer get updates.

I hope my long answer clears up any confusion.

Regarding *"when OKD 4.x will be released, the cadence etc."*, as you all
know substantial progress has been made (kudos goes to Vadim, Christian and
Clayton + other RH folks on the MCO team), so everyone can already give it a
try [6] and help by providing feedback and raising issues on the same repo
[6] (in the near future I think the desire is to do it via BZ).
There are some hiccups which Vadim is trying to fix (many kudos to him), and
once that is done we can share the news (with the OKD WG hat on).


Hope that helps,
Dani

[1] https://github.com/openshift/origin
[2] https://github.com/openshift/origin/releases/tag/v3.11.0
[3] https://github.com/openshift/openshift-ansible
[4] https://rpms.svc.ci.openshift.org/openshift-origin-v3.11/
[5] https://github.com/openshift/origin/tree/release-3.11
[6] https://github.com/openshift/okd




On Thu, Jan 9, 2020 at 7:35 AM Sandro Bonazzola  wrote:

>
>
> On Wed, Jan 8, 2020 at 16:47 Gleidson Nascimento <
> slat...@live.com> wrote:
>
>> Hello from CentOS PaaS SIG!
>>
>> We have recently published an update on CentOS blog [1] about OKD v4 and
>> its current state on CentOS.
>>
>> [1] https://blog.centos.org/2020/01/centos-paas-sig-quarterly-report-2/
>>
>> Thanks,
>> Gleidson
>>
>
> Hi,
> can you give some information about how frequently the PaaS SIG is going
> to release and from which OKD branches?
>
>
>
>
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*
>


Re: Customize images RHCOS with ignition file

2019-12-11 Thread Daniel Comnea
I suspect this is in the context of OCP 4.x? If so, my understanding is
that on an OCP 4.x deployment this won't be possible, as RHCOS is *just*
another component and so comes bundled with the whole release payload.
I guess there might be a dev way to inject different images, but I'm not
sure that is a good idea for running in a prod environment.
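
For the FCOS/CoreOS side of the question, the usual pattern is the
coreos-installer ISO customization - a sketch only, as the subcommand layout
has changed between coreos-installer releases and RHCOS media support depends
on the OCP release:

  # embed an Ignition config into a (F)CoreOS install ISO
  coreos-installer iso ignition embed -i config.ign -o custom.iso original.iso

  # inspect what is embedded
  coreos-installer iso ignition show custom.iso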

On Wed, Dec 11, 2019 at 2:57 PM Sérgio Cascão  wrote:

> Hi.
> I would like to know if it is possible to customize an RHCOS ISO with
> ignition files inside. This is to install on bare metal and avoid manual
> steps. I know about coreos-installer and I have already inserted an
> ignition file inside a CoreOS image.
> My question: is there some way to insert an ignition file inside RHCOS
> images?
> If it is possible, how?
> Best regards.
> Sérgio
>


[OKD 4.x]: first preview drop - please help out with testing and feedback

2019-11-20 Thread Daniel Comnea
Hi folks,

For those who are not following the OKD working group updates or not
hanging out on the openshift-dev/users K8s Slack channels, please be aware
of the announcement sent out [1] by Clayton.

We would very much appreciate it if folks helped out with testing and
provided feedback.

Note we haven't finalized the process for where folks should raise issues;
in the last OKD WG meeting there were a few suggestions made but no
conclusion yet. Hopefully a decision will be made soon and will be
circulated.


Cheers

[1] https://mobile.twitter.com/smarterclayton/status/1196477646885965824


[OKD/OCP v4]: deployment on a single node using CodeReady Container

2019-09-13 Thread Daniel Comnea
Recently folks were asking what the minishift alternative for v4 is; in case
you've missed the news, see [1].

Hopefully that will also work for OKD v4 once the MVP is out.


Dani

[1]
https://developers.redhat.com/blog/2019/09/05/red-hat-openshift-4-on-your-laptop-introducing-red-hat-codeready-containers/
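
For reference, the CodeReady Containers workflow boils down to two commands
(a sketch - flags and defaults may differ per release, and the pull secret is
obtained from cloud.redhat.com):

  crc setup   # prepares the host (hypervisor driver, networking, etc.)
  crc start   # boots the single-node cluster; prompts for the pull secret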


Re: Follow up on OKD 4

2019-07-25 Thread Daniel Comnea
On Thu, Jul 25, 2019 at 5:01 PM Michael Gugino  wrote:

> I don't really view the 'bucket of parts' and 'complete solution' as
> competing ideas.  It would be nice to build the 'complete solution'
> from the 'bucket of parts' in a reproducible, customizable manner.
> "How is this put together" should be easily followed, enough so that
> someone can 'put it together' on their own infrastructure without
> having to be an expert in designing and configuring the build system.
>
> IMO, if I can't build it, I don't own it.  In 3.x, I could compile all
> the openshift-specific bits from source, I could point at any
> repository I wanted, I could point to any image registry I wanted, I
> could use any distro I wanted.  I could replace the parts I wanted to;
> or I could just run it as-is from the published sources and not worry
> about replacing things.  I even built Fedora Atomic host rpm-trees
> with all the kublet bits pre-installed, similar to what we're doing
> with CoreOS now in 3.x.  It was a great experience, building my own
> system images and running updates was trivial.
>
> I wish we weren't EOL'ing the Atomic Host in Fedora.  It offered a lot
> of flexibility and easy to use tooling.
>
> So maybe what we are asking here is:

   - opinionated OCP 4 philosophy => OKD 4 + FCOS (IPI and UPI) using
   ignition, CVO etc
   - DIY kube philosophy, reusing as many v4 components as possible but with
   your own preferred operating system


In terms of approach and priority, I think it is fair to adopt a baby-steps
approach where:

   - phase 1 = try to get OKD 4 + FCOS out ASAP so folks can start building
   up the knowledge around operating the new solution in a full production env
   - phase 2 = once experience/knowledge has been built up, we can crack on
   with reverse engineering and see what we can swap etc.





> On Thu, Jul 25, 2019 at 9:51 AM Clayton Coleman 
> wrote:
> >
> > > On Jul 25, 2019, at 4:19 AM, Aleksandar Lazic <
> openshift-li...@me2digital.com> wrote:
> > >
> > > HI.
> > >
> > >> Am 25.07.2019 um 06:52 schrieb Michael Gugino:
> > >> I think FCoS could be a mutable detail.  To start with, support for
> > >> plain-old-fedora would be helpful to make the platform more portable,
> > >> particularly the MCO and machine-api.  If I had to state a goal, it
> > >> would be "Bring OKD to the largest possible range of linux distros to
> > >> become the defacto implementation of kubernetes."
> > >
> > > I agree here with Michael. As FCoS or in general CoS looks technical a
> good idea
> > > but it limits the flexibility of possible solutions.
> > >
> > > For example when you need to change some system settings then you will
> need to
> > > create a new OS Image, this is not very usable in some environments.
> >
> > I think something we haven’t emphasized enough is that openshift 4 is
> > very heavily structured around changing the cost and mental model
> > around this.  The goal was and is to make these sorts of things
> > unnecessary.  Changing machine settings by building golden images is
> > already the “wrong” (expensive and error prone) pattern - instead, it
> > should be easy to reconfigure machines or to launch new containers to
> > run software on those machines.  There may be two factors here at
> > work:
> >
> > 1. Openshift 4 isn’t flexible in the ways people want (Ie you want to
> > add an rpm to the OS to get a kernel module, or you want to ship a
> > complex set of config and managing things with mcd looks too hard)
> > 2. You want to build and maintain these things yourself, so the “just
> > works” mindset doesn’t appeal.
> >
> > The initial doc alluded to the DIY / bucket of parts use case (I can
> > assemble this on my own but slightly differently) - maybe we can go
> > further now and describe the goal / use case as:
> >
> > I want to be able to compose my own Kubernetes distribution, and I’m
> > willing to give up continuous automatic updates to gain flexibility in
> > picking my own software
> >
> > Does that sound like it captures your request?
> >
> > Note that a key reason why the OS is integrated is so that we can keep
> > machines up to date and do rolling control plane upgrades with no
> > risk.  If you take the OS out of the equation the risk goes up
> > substantially, but if you’re willing to give that up then yes, you
> > could build an OKD that doesn’t tie to the OS.  This trade off is an
> > important one for folks to discuss.  I’d been assuming that people
> > *want* the automatic and safe upgrades, but maybe that’s a bad
> > assumption.
> >
> > What would you be willing to give up?
> >
> > >
> > > It would be nice to have the good old option to use the ansible
> installer to
> > > install OKD/Openshift on other Linux distribution where ansible is
> able to run.
> > >
> > >> Also, it would be helpful (as previously stated) to build communities
> > >> around some of our components that might not have a place in the
> > >> official kubernetes, but are valuable downstream components
> > >> 

Re: Follow up on OKD 4

2019-07-24 Thread Daniel Comnea
On Mon, Jul 22, 2019 at 4:02 PM Justin Cook  wrote:

> On 22 Jul 2019, 12:24 +0100, Daniel Comnea , wrote:
>
> I totally agree with that, but let's do a quick reality check taking as an
> example some IRC channels, shall we?
>
>- the ansible IRC channel doesn't log the conversation - do the comments
>[1] and [2] resonate with you? They do for me, and that is a huge -1 from my
>side.
>
>
> Yes that’s most unfortunate for #ansible.
>
>
>- the centos-devel/centos channels don't log the conversation. That said,
>the centos meetings (i.e. PaaS SIG) do get logged, per SIG. That in itself
>is very useful; however, as a guy who was consuming the output for the last
>year as PaaS SIG chair/member, I will say it is not appealing to go over the
>output if a meeting had high traffic (same way as with a 6-hour meeting
>recording - will you watch it from A to Z? ;) )
>- fedora-coreos does log [3], but if I turn up every morning to see what
>has been discussed, I see a lot of noise caused by join/leave messages
>
>
>
> #centos and #fedora could most certainly do better. We’re getting on to
> three months after RHEL8 release and no hint of CentOS8.
>
> [DC]: I think it is a bit unfair to say that; the info is out - see [1],
[2] and [3]

[1] https://blog.centos.org/2019/05/centos-8-0-1905-build-status/
[2] https://blog.centos.org/2019/06/centos-8-status-17-june-2019/
[3] https://wiki.centos.org/About/Building_8


>- the openshift/openshift-dev channels had something at [4], but does it
>still work?
>
>
> This is one point of my complaint.
>
>
>
> All I'm trying to say with the above is:
>
> Should we go with IRC as a form of communication, we should then be ready
> to have bodies lined up to:
>
>- look after and admin the IRC channels
>- enable the IRC channel logs and also filter out the noise so it is
>consumable (not just stream the logs somewhere and tick the box)
>
>
> Easy enough. It’s been done time and again. Let’s give it a whirl. Since
> I’m the one complaining perhaps I can put my name in for consideration.
>
> [DC]: I understood not everyone is okay with logging any activity due to
GDPR, so I think this goes off the table

>
>
> In addition to the channel logs, my main requirement is to access the IRC
> channels from any device and not lose track of what has been discussed.
> A respected gentleman involved in various open-source projects once wrote
> [5], and so with that I'd say:
>
>- who will take on board the setup so everyone can benefit from it?
>
>
> https://www.irccloud.com/irc/freenode
> https://matrix.org/faq
>
> Again some options here, but most certainly doable with a little effort.
> #openshift-dev is advertised all over the place.
> https://www.okd.io/#contribute
>
> If you swing to Slack, I'd say:
>
>- the K8s Slack is free, in that neither you nor I nor others pay for it,
>and everyone can join there
>- the OpenShift Commons Slack is also free; RH is paying the bill
>(another investment from their side); however, as said, Diane set up that
>place initially with a different scope
>- once you've logged in, you can scroll back many months into the past
>- you get the ability to share code snippets -> in IRC you don't. You could
>argue that folks can use a GitHub gist or any pastebin service; however, the
>content can be deleted/expire and then we are back to square one
>
>
> Slack logs are not indexed by search engines. This prevents me from
> supporting it in its entirety. People have been sharing code snippets for
> decades on IRC. And, it’s worked fantastic. Just from my personal
> experience of Slack absorbing or repelling so much energy and collaboration
> from the community — of which no one can explain really — I don’t see it as
> a viable option given we have the numbers in front of us from this very
> project which undeniably shows it doesn’t work.
>
>
> [1] https://github.com/ansible/community/issues/242#issuecomment-334239958
> [2] https://github.com/ansible/community/issues/242#issuecomment-336890994
> [3] https://echelog.com/logs/browse/fedora-coreos/1563746400
> [4] https://botbot.me/freenode/openshift-dev/
> [5]
> https://doughellmann.com/blog/2015/03/12/deploying-nested-znc-services-with-ansible/
>
> You also say:
>
>- *Slack with three threads per week*
>
> How is the traffic on the fedora-coreos OR centos-devel channels going?
> Have you seen high volume?
>
>
> Why do you mention other projects and their traffic? #openshift-dev had
> incredible amounts of traffic which helped make it a success. Different
> channels have different attendance depending on a tremendous amount of
> fac

Re: Follow up on OKD 4

2019-07-21 Thread Daniel Comnea
On Sun, Jul 21, 2019 at 5:27 PM Clayton Coleman  wrote:

>
>
> On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
>
>> Once upon a time Freenode #openshift-dev was vibrant with loads of
>> activity and publicly available logs. I jumped in asked questions and Red
>> Hatters came from the woodwork and some amazing work was done.
>>
>> Perfect.
>>
>> Slack not so much. Since Monday there have been three comments with two
>> reply threads. All this with 524 people. Crickets.
>>
>> Please explain how this is better. I’d really love to know why IRC
>> ceased. It worked and worked brilliantly.
>>
>
> Is your concern about volume or location (irc vs slack)?
>
> Re volume: It should be relatively easy to move some common discussion
> types into the #openshift-dev slack channel (especially triage / general
> QA) that might be distributed to other various slack channels today (both
> private and public), and I can take the follow up to look into that.  Some
> of the volume that was previously in IRC moved to these slack channels, but
> they're not anything private (just convenient).
>
> Re location:  I don't know how many people want to go back to IRC from
> slack, but that's a fairly easy survey to do here if someone can volunteer
> to drive that, and I can run the same one internally.  Some of it is
> inertia - people have to be in slack sig-* channels - and some of it is
> preference (in that IRC is an inferior experience for long running
> communication).
>
[DC]: I've already reached out to Christian over the weekend and we are
going to have a 1:1 early next week to sort out some logistics; hopefully
we'll have more to share around mid next week in terms of the survey comms
and the process moving forward.


>
>>
>> There are mentions of sigs and bits and pieces, but absolutely no
>> progress. I fail to see why anyone would want to regress. OCP4 maybe
>> brilliant, but as I said in a private email, without upstream there is no
>> culture or insurance we’ve come to love from decades of heart and soul.
>>
>> Ladies and gentlemen, this is essentially getting to the point the
>> community is being abandoned. Man years of work acknowledged with the
>> roadmap pulled out from under us.
>>
>
> I don't think that's a fair characterization, but I understand why you
> feel that way and we are working to get the 4.x work moving.  The FCoS team
> as mentioned just released their first preview last week, I've been working
> with Diane and others to identify who on the team is going to take point on
> the design work, and there's a draft in flight that I saw yesterday.  Every
> component of OKD4 *besides* the FCoS integration is public and has been
> public for months.
>
> [DC]: Clayton, was that draft you mentioned circulated internally, or is
it publicly available?


> I do want to make sure we can get a basic preview up as quickly as
> possible - one option I was working on with the legal side was whether we
> could offer a short term preview of OKD4 based on top of RHCoS.  That is
> possible if folks are willing to accept the terms on try.openshift.com in
> order to access it in the very short term (and then once FCoS is available
> that would not be necessary).  If that's an option you or anyone on this
> thread are interested in please let me know, just as something we can do to
> speed up.
>
>
[DC]: my suggestion is that we should hold off on this at least until we get
the SIG and the meeting going, so as to have an open debate with the folks
who are willing to stick around and help out. Once we've got a quorum we can
then ask for a waiver on OKD v4 with RHCoS.



>> I completely understand the disruption caused by the acquisition. But,
>> after kicking the tyres and our meeting a few weeks back, it’s been pretty
>> quiet. The clock is ticking on corporate long-term strategies. Some of
>> those corporates spent plenty of dosh on licensing OCP and hiring
>> consultants to implement.
>>
>
>> Red Hat need to lead from the front. Get IRC revived, throw us a bone,
>> and have us put our money where our mouth is — we’ll get involved. We’re
>> begging for it.
>>
>> Until then we’re running out of patience via clientele and will need to
>> start a community effort perhaps by forking OKD3 and integrating upstream.
>> I am not interested in doing that. We shouldn’t have to.
>>
>
> In the spirit of full transparency, FCoS integrated into OKD is going to
> take several months to get to the point where it meets the quality bar I'd
> expect for OKD4.  If that timeframe doesn't work for folks, we can
> definitely consider other options like having RHCoS availability behind a
> terms agreement, a franken-OKD without host integration (which might take
> just as long to get and not really be a step forward for folks vs 3), or
> other, more dramatic options.  Have folks given FCoS a try this week?
> https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/.
> That's a great place to get started
>
> As always PRs and fixes to 3.x will 

Re: Follow up on OKD 4

2019-07-19 Thread Daniel Comnea
Hi Christian,

Welcome and thanks for volunteering on kicking off this effort.

My vote goes to the #openshift-dev Slack too; the OpenShift Commons Slack
scope was/is a bit different, geared towards ISVs.

IRC - personally I have no problem with it; however, the chances of
attracting more folks (especially non-RH employees) who might be willing to
help grow the OKD community are higher on Slack.

On Fri, Jul 19, 2019 at 9:33 PM Christian Glombek 
wrote:

> +1 for using kubernetes #openshift-dev slack for the OKD WG meetings
>
>
> On Fri, Jul 19, 2019 at 6:49 PM Clayton Coleman 
> wrote:
>
>> The kube #openshift-dev slack might also make sense, since we have 518
>> people there right now
>>
>> On Fri, Jul 19, 2019 at 12:46 PM Christian Glombek 
>> wrote:
>>
>>> Hi everyone,
>>>
>>> first of all, I'd like to thank Clayton for kicking this off!
>>>
>>> As I only just joined this ML, let me quickly introduce myself:
>>>
>>> I am an Associate Software Engineer on the OpenShift
>>> machine-config-operator (mco) team and I'm based out of Berlin, Germany.
>>> Last year, I participated in Google Summer of Code as a student with
>>> Fedora IoT and joined Red Hat shortly thereafter to work on the Fedora
>>> CoreOS (FCOS) team.
>>> I joined the MCO team when it was established earlier this year.
>>>
>>> Having been a Fedora/Atomic community member for some years, I'm a
>>> strong proponent of using FCOS as base OS for OKD and would like to see it
>>> enabled :)
>>> As I work on the team that looks after the MCO, which is one of the
>>> parts of OpenShift that will need some adaptation in order to support
>>> another base OS, I am confident I can help with contributions there
>>> (of course I don't want to shut the door for other OSes to be used as
>>> base if people are interested in that :).
>>>
>>> Proposal: Create WG and hold regular meetings
>>>
>>> I'd like to propose the creation of the OKD Working Group that will hold
>>> bi-weekly meetings.
>>> (or should we call it a SIG? Also open to suggestions to find the right
>>> venue: IRC?, OpenShift Commons Slack?).
>>>
>>> I'll survey some people in the coming days to find a suitable meeting
>>> time.
>>>
>>> If you have any feedback or suggestions, please feel free to reach out,
>>> either via this list or personally!
>>> I can be found as lorbus on IRC/Fedora, @lorbus42 on Twitter, or simply
>>> via email :)
>>>
>>> I'll send out more info here ASAP. Stay tuned!
>>>
>>> With kind regards
>>>
>>> CHRISTIAN GLOMBEK
>>> Associate Software Engineer
>>>
>>> Red Hat GmbH, registered seat: Grasbrunn
>>> Commercial register: Amtsgericht Muenchen, HRB 153243
>>> Managing directors: Charles Cachera, Michael O'Neill, Thomas Savage, Eric 
>>> Shander
>>>
>>>
>>>
>>> On Wed, Jul 17, 2019 at 10:45 PM Clayton Coleman 
>>> wrote:
>>>
 Thanks for everyone who provided feedback over the last few weeks.
 There's been a lot of good feedback, including some things I'll try to
 capture here:

 * More structured working groups would be good
 * Better public roadmap
 * Concrete schedule for OKD 4
 * Concrete proposal for OKD 4

 I've heard generally positive comments about the suggestions and
 philosophy in the last email, with a desire for more details around what
 the actual steps might look like, so I think it's safe to say that the idea
 of "continuously up to date Kubernetes distribution" resonated.  We'll
 continue to take feedback along this direction (private or public).

 Since 4 was the kickoff for this discussion, and with the recent
 release of the Fedora CoreOS beta (
 https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/) 
 figuring
 prominently in the discussions so far, I got some volunteers from that team
 to take point on setting up a working group (SIG?) around the initial level
 of integration and drafting a proposal.

 Steve and Christian have both been working on Fedora CoreOS and
 graciously agreed to help drive the next steps on Fedora CoreOS and OKD
 potential integration into a proposal.  There's a rough level draft doc
 they plan to share - but for now I will turn this over to them and they'll
 help organize time / forum / process for kicking off this effort.  As that
 continues, we'll identify new SIGs to spawn off as necessary to cover other
 topics, including initial CI and release automation to deliver any
 necessary changes.

 Thanks to everyone who gave feedback, and stay tuned here for more!


Re: How to do a flexvolume with ocp 4?

2019-06-13 Thread Daniel Comnea
On Thu, Jun 13, 2019 at 7:00 PM Hemant Kumar  wrote:

> Yes they are. The only catch is - getting them to work in control-plane is
> more difficult, but since your flexvolume plugin worked in 3.11 where
> controller-manager is already conainerized, it may not be so for your
> particular use case.
>
> [DC]: if you don't mind, curious to understand why you think it is harder
in v4 to get it working with the control plane?
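
(For context, a hedged sketch of where flexvolume drivers live - the default
kubelet plugin directory; the vendor~driver name below is a placeholder. On
4.x every node, masters included, runs RHCOS managed by the MCO, which is
part of why getting a driver onto the control plane is more involved.)

  # a flexvolume driver is an executable dropped under the kubelet plugin
  # dir, one <vendor>~<driver> directory per driver, e.g. for a CIFS driver:
  /usr/libexec/kubernetes/kubelet-plugins/volume/exec/example.com~cifs/cifs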

>
>
> On Thu, Jun 13, 2019 at 12:31 PM Marc Boorshtein 
> wrote:
>
>> I've got a flexvolume driver for CIFS working in 3.11.  How does that
>> work on 4.x with RHCOS?  Are flexvolumes still possible?
>>
>> Thanks
>> Marc


Re: OKD3.11 install blocked - Could not find csr for nodes

2019-06-04 Thread Daniel Comnea
Hi Dan,

Which openshift-ansible release tag have you used ?


Cheers,
Dani
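
(Side note on the dnsmasq issue described below: a common workaround - a
sketch only, assuming the bogus upstream nameservers arrive via DHCP - is to
pin DNS on the interface so NetworkManager stops overriding it:)

  # /etc/sysconfig/network-scripts/ifcfg-eth0  (interface name is a placeholder)
  PEERDNS=no        # ignore the DHCP-supplied nameservers
  DNS1=10.0.0.2     # your internal DNS (placeholder address)

  # then restart NetworkManager so 99-origin-dns.sh regenerates
  # /etc/dnsmasq.d/origin-upstream-dns.conf
  systemctl restart NetworkManager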

On Mon, Jun 3, 2019 at 4:18 PM Punga Dan  wrote:

> Thank you very much for the extensive response, Samuel!
>
> I've found that I do have a DNS misconfiguration so I receive the CSR
> error from the title not because of something related to Openshift
> installer procedure.
>
> Somehow (and I haven't yet found the reason, but still looking for it)
> dnsmasq fills the upstream DNS configuration with some public nameservers
> and not my "internal" DNS.
> So after the openshift-ansible playbook related to this installs dnsmasq
> and calls the /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
> script (restarting NetworkManager), all nodes end up with "bad" upstream
> nameservers (in the /etc/dnsmasq.d/origin-upstream-dns.conf and
> /etc/origin/node/resolv.conf files).
> Even if the /etc/resolv.conf file for each host has the right nameserver
> and search domain, dnsmasq populates the OKD-related conf files above with
> a different nameserver.
>
> I think this is related to dnsmasq/NetworkManager-specific
> configuration... I will have to look into it and figure out what's not
> going as expected and why. I believe these are served by the DHCP server,
> but I'm still looking for a way to address this.
>
> Anyway thanks again for the input, it put me on the right track! :)
>
> Dan
>
> On Sun, Jun 2, 2019 at 22:04, Samuel Martín Moro  a
> wrote:
>
>> Hi,
>>
>>
>> This is quite puzzling... could you share your inventory with us? Make
>> sure to obfuscate any sensitive data (ldap/htpasswd credentials among
>> others, ...).
>> Mostly interested in potential openshift_node_groups edits, although
>> something else might come up (?)
>>
>>
>> At first glance, you are right, it sounds like a firewalling issue.
>> Yet from your description, you did open all required ports.
>> I could suggest you check back on these, make sure your data is accurate
>> - although I would assume it is.
>> Also: if using Cri-O as a runtime, note that you would be missing port
>> 10010, that should be opened on all nodes. Yet I don't think that one would
>> be related to nodes registrations against your master API.
>>
>> Another explanation could be related to DNS (can your infra/compute nodes
>> properly resolve your masters name? the contrary would be unusual, still
>> could explain what's going on).
>>
>> As a general rule, at that stage, I would restart the origin-node service
>> on those hosts that fail to register, keeping an eye on /var/log/messages
>> (or journalctl -f).
>> If that doesn't help, I might raise log levels in
>> /etc/sysconfig/origin-node (there's a variable which defaults to 2, you can
>> change it to 99, beware it would give you a lots of logs/could saturate
>> your disks at some point, don't keep it like this over a long period)
>>
>> Dealing with large volumes of logs, note that openshift services tend to
>> store messages with a prefix based on severity: you might be able to "|
>> grep -E 'E[0-9][0-9]'" to focus on error messages, or W[0-9][0-9] for
>> warnings, ...
>>
>> Your issue being potentially related to firewalling, I might also use
>> tcpdump looking into what's being exchanged between nodes.
>> Look for any packets with a SYN flag ("[S]") that would not be followed
>> by an SYN-ACK ("[S.]").
>>
>>
>> Let us know how that goes,
>>
>>
>> Good luck.
>> Failing during the "Approve node certificate" steps is relatively common,
>> and could have several causes, from node groups configuration, to DNS,
>> firewalls, broken TCP handshake, MTU not allowing for certificates to go
>> through, ... we'll want to dig deeper, to elucidate that issue.
>>
>>
>> Regards.
>>
>> On Sat, Jun 1, 2019 at 12:19 PM Punga Dan  wrote:
>>
>>> Hello all!
>>>
>>> I'm hitting a problem when trying to install OKD 3.11 on one master, 2
>>> infra and 2 compute nodes. The hosts are VMs that run CentOS 7.
>>> I've gone through the issues related to this subject:
>>> https://access.redhat.com/solutions/3680401 which suggests naming the
>>> hosts as FQDNs. Tried it, with the same problem appearing for the same
>>> set of hosts (all except the master).
>>>
>>> In my case the error is only for the 2 infra nodes and 2 compute nodes,
>>> so not for the master as well.
>>>
>>> oc get nodes gives me just the master node, but I guess this is the case
>>> as the other OKD-nodes stand to be created by the process that fails. Am I
>>> wrong?
>>>
>>> oc get csr gives me a result of 3 csrs:
>>> [root@master ~]# oc get csr
>>> NAMEAGE   REQUESTORCONDITION
>>> csr-4xjjb   24m   system:admin Approved,Issued
>>> csr-b6x45   24m   system:admin Approved,Issued
>>> csr-hgmpf   20m   system:node:master   Approved,Issued
>>>
>>> Here I believe I have 2 csrs for system:admin because I ran
>>> the playbooks/openshift-node/join.yml a second time.
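
(For anyone landing here with CSRs that do show up but sit in Pending, the
usual manual approval one-liner - a sketch, run on the master - is:)

  oc get csr -o name | xargs oc adm certificate approve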
>>>
>>> The bootstrapping certificates on the master look fine(??)
>>> [root@master ~]# ll 

Re: install questions (on AWS)

2019-03-07 Thread Daniel Comnea
On Thu, Mar 7, 2019 at 12:07 AM Just Marvin <
marvin.the.cynical.ro...@gmail.com> wrote:

> Hi,
>
> I looked at [4], and it isn't clear about which host / hosts the EIPs
> will get mapped to. This makes a difference in the cost of my solution. If
> I have 10 EIPs and 10 hosts, and I map one per host, the ip addresses are
> free. But if OpenShift needs me to map them all to one host, I need to pay
> for 9 (because the first one is free). How does this work? To be clear, I'm
> asking about v3.
>
[DC]: FYI, all the answers Trevor provided were for v4, not v3


> I'll ask this and other v4 questions in the forum you pointed me to.
>
> Thanks,
> Marvin
>
> On Wed, Mar 6, 2019 at 3:53 PM W. Trevor King  wrote:
>
>> On Wed, Mar 6, 2019 at 12:38 PM Just Marvin wrote:
>> > Firstly - is this the right place to ask questions pertaining to
>> the v4 developer preview? If not, would appreciate a pointer to the right
>> place please.
>>
>> [1] suggests [2] for the v4 developer preview.
>>
>> > I have questions pertaining to how one would install on AWS (v4 or
>> v3). AWS charges for Elastic IP's that are not mapped to a running EC2
>> instance, or additional elastic IP's (more than one) mapped to a EC2
>> instance. I know how many elastic IPs I need for routes, but I'm not sure
>> how these IPs need to be assigned. Do I say that all those IP addresses are
>> going to be assigned to the host running the SDN (openvswitch?) components?
>> Or do I distribute them across the worker nodes? Do I need an elastic ip
>> address for kube-dns (and if so, which host is that on - the master)?
>>
>> The v4 installer [3] will handle all of this for you.  We have
>> documentation for user-supplied infrastructure on AWS in the pipe, but
>> nothing I can link yet.  Docs for the current EIP allocation are in
>> [4].
>>
>> > If I need to enable https end-user traffic, will I need to install
>> CA (but private CA) cert generating components, or does Openshift have the
>> capability to dynamically generate the certificates for my routes?
>>
>> Looks like there is some discussion of this over in [5].
>>
>> Cheers,
>> Trevor
>>
>> [1]: https://cloud.openshift.com/clusters/install
>> [2]: https://groups.google.com/forum/#!forum/openshift-4-dev-preview
>> [3]: https://github.com/openshift/installer/
>> [4]:
>> https://github.com/openshift/installer/blob/v0.14.0/docs/user/aws/limits.md#elastic-ip-eip
>> [5]: https://groups.google.com/d/msg/openshift-4-dev-preview/l3qckiBnhkA
>>


Re: Openshift-router.

2019-02-11 Thread Daniel Comnea
Yahor,

As you might know, the images which result from the code base you mentioned
are meant to be managed by an operator in 4.x.
That said, if you look at [1] you can see how the image is built using the
[2] library (a rough local-build sketch follows the links).

Dani

[1] https://github.com/openshift/router/blob/master/Makefile#L8
[2] https://github.com/openshift/imagebuilder
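
If you want to reproduce that locally, a rough sketch (assuming a Go
toolchain; the Dockerfile path inside the repo is taken from the Makefile and
may change):

  # install the imagebuilder CLI
  go get -u github.com/openshift/imagebuilder/cmd/imagebuilder

  # from the root of an openshift/router checkout, build the haproxy image
  imagebuilder -t oc-router:latest -f images/router/haproxy/Dockerfile .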


On Mon, Feb 11, 2019 at 1:27 PM Scott Dodson  wrote:

> Yahor,
>
> I'm sorry, I'm not familiar with that. I've CC'd the users list in
> case someone else does know.
>
> --
> Scott
>
> On Mon, Feb 11, 2019 at 5:24 AM Egor Chyzhevskiy 
> wrote:
> >
> > Hello, Scott.
> > Nice to meet you!
> > This conversation is about openshift-router, which is here (
> https://github.com/openshift/router)
> >
> > My team and I use OpenShift in a huge project, where we used the
> oc-router. I wanted to know how it works separately. On GitHub I found a
> repository with openshift-router and wanted to deploy it by myself to get
> a better understanding of how the router controls routes.
> > But when I tried to deploy it via Docker I got a negative result because
> of a private docker registry. Could you please explain why I can't build
> the docker image from the Dockerfile, or how I can do this?
> >
> > I executed next command:
> > docker build -t oc-router ~/oc-router/router/images/router/haproxy
> >
> >
> >
> >
> >
> > Best regards,
> > Yahor
>


Re: OKD Openshift-origin Control plane pods didn't come up Centos 7.6

2019-02-11 Thread Daniel Comnea
Is master.example.com resolvable? You could also try to jump on the master
node and see if the pod is up; if so, start from that end.
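
A few hedged starting points on the master itself (3.11 runs the control
plane as static pods under the node service; paths/commands assume a default
install):

  # is the API container actually running?
  docker ps | grep -E 'api|controllers'

  # the node service logs often show why the static pods never started
  journalctl -u origin-node --no-pager | tail -50

  # the static pod definitions live here
  ls /etc/origin/node/pods/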

On Mon, Feb 11, 2019 at 4:28 AM Dhanushka Parakrama 
wrote:

> Hi  Team
>
> I'm trying to install openshift origin 3.11 on CentOS 7.6 and I got the
> "Pods didn't come up" error. Can you guys please help me fix the
> error below?
>
>
>
> Error
> =
>
> failed: [master.example.com] (item=controllers) => {"attempts": 60,
> "changed": false, "item": "controllers", "msg": {"cmd": "/usr/bin/oc get
> pod master-controllers-master.example.com -o json -n kube-system",
> "results": [{}], "returncode": 1, "stderr": "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?\n", "stdout": ""}}
> ...ignoring
>
> TASK [openshift_control_plane : Check status in the kube-system namespace]
> 
> fatal: [master.example.com]: FAILED! => {"changed": true, "cmd": ["oc",
> "status", "--config=/etc/origin/master/admin.kubeconfig", "-n",
> "kube-system"], "delta": "0:00:00.199493", "end": "2019-02-11
> 08:52:36.021191", "msg": "non-zero return code", "rc": 1, "start":
> "2019-02-11 08:52:35.821698", "stderr": "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?\nThe connection to the server master.example.com:8443 was refused -
> did you specify the right host or port?\nThe connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?\nThe connection to the server master.example.com:8443 was refused -
> did you specify the right host or port?\nThe connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?\nThe connection to the server master.example.com:8443 was refused -
> did you specify the right host or port?\nThe connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?\nThe connection to the server master.example.com:8443 was refused -
> did you specify the right host or port?\nThe connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?\nThe connection to the server master.example.com:8443 was refused -
> did you specify the right host or port?\nThe connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?\nThe connection to the server master.example.com:8443 was refused -
> did you specify the right host or port?\nThe connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?\nThe connection to the server master.example.com:8443 was refused -
> did you specify the right host or port?\nThe connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?\nThe connection to the server master.example.com:8443 was refused -
> did you specify the right host or port?", "stderr_lines": ["The connection
> to the server master.example.com:8443 was refused - did you specify the
> right host or port?", "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?", "The connection to the server master.example.com:8443 was refused
> - did you specify the right host or port?", "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?", "The connection to the server master.example.com:8443 was refused
> - did you specify the right host or port?", "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?", "The connection to the server master.example.com:8443 was refused
> - did you specify the right host or port?", "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?", "The connection to the server master.example.com:8443 was refused
> - did you specify the right host or port?", "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?", "The connection to the server master.example.com:8443 was refused
> - did you specify the right host or port?", "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?", "The connection to the server master.example.com:8443 was refused
> - did you specify the right host or port?", "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?", "The connection to the server master.example.com:8443 was refused
> - did you specify the right host or port?", "The connection to the server
> master.example.com:8443 was refused - did you specify the right host or
> port?"], "stdout": "", "stdout_lines": []}
> 

Re: RPMs for 3.11 still missing from the official OpenShift Origin CentOS repo

2019-01-06 Thread Daniel Comnea
Joel & all,

On the CVE subject you are correct; however, if you read [1] you will better
understand a) the PaaS SIG process for how the Origin rpm gets built (based
on the Origin release tag) and b) what is holding up getting a new Origin
v3.11 rpm out.

Hope that helps a bit
Dani

[1]
http://lists.openshift.redhat.com/openshift-archives/dev/2018-December/msg00015.html


On Sun, Jan 6, 2019 at 11:29 AM Joel Pearson 
wrote:

> I think it's worth mentioning here that the RPMs at
> http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/ have a
> critical security vulnerability; I think it's unsafe to use the RPMs if
> you're planning on having your cluster available on the internet.
>
> https://access.redhat.com/security/cve/cve-2018-1002105
>
> Unless you're going to be using the Red Hat-supported version of OpenShift,
> i.e. OCP, I think the only safe option is to install OKD with CentOS
> Atomic Host and the containerised version of OpenShift, i.e. not use the
> RPMs at all.
>
> The problem with the RPMs, is that you get no patches, only the version of
> OpenShift 3.11.0 as it was when it was released, however, the containerized
> version of OKD (only supported on Atomic Host) has a rolling tag (see
> https://lists.openshift.redhat.com/openshift-archives/users/2018-October/msg00049.html)
> and you'll notice that the containers were just rebuilt a few minutes ago:
> https://hub.docker.com/r/openshift/origin-node/tags
>
> It looks like the OKD images are rebuilt from the release-3.11 branch:
> https://github.com/openshift/origin/commits/release-3.11
>
> You can see the CVE critical vulnerability was fixed in commits on
> December 4, however, the RPMs were built on the 5th of November so they
> certainly do not contain the critical vulnerability fixes.
>
> I am running OKD 3.11 on Centos Atomic Host on an OpenStack cluster and it
> works fine, and I can confirm from the OKD About page that I'm running a
> version of OpenShift that is patched: OpenShift Master: v3.11.0+d0a16e1-79
> (which lines up with commits on December 31)
>
> However, the bad news for you is that an upgrade from RPMs to
> containerised would not be simple, and you couldn't reuse your nodes
> because you'd need to switch from Centos regular to Centos Atomic Host.  It
> would probably be technically possible but not simple.  I guess you'd
> upgrade your 3.10 cluster to the vulnerable version of 3.11 via RPMs, and
> then migrate your cluster to another cluster running on Atomic Host, I'm
> guessing there is probably some way to replicate the etcd data from one
> cluster to another. But it sounds like it'd be a lot of work, and you'd
> need some pretty deep skills in etcd and openshift.
>
> On Sun, 6 Jan 2019 at 07:03, mabi  wrote:
>
>> ‐‐‐ Original Message ‐‐‐
>> On Saturday, January 5, 2019 3:57 PM, Daniel Comnea <
>> comnea.d...@gmail.com> wrote:
>>
>> [DC]: i think you are a bit confused: there are 2 ways to get the rpms
>> from CentOS yum repo: using the generic repo [1] which will always have the
>> latest origin release OR [2] where i've mentioned that you can install
>> *centos-release-openshift-origin3** rpm which will give you [3] yum repo
>>
>>
>> Thank you for the clarifications, and yes, I am confused because, first of
>> all, the upgrade documentation on the okd.io website does not mention
>> anything about having to manually change the yum repos.d file to match a
>> new directory for a new version of openshift.
>>
>> Then second, this mail (
>> https://lists.openshift.redhat.com/openshift-archives/users/2018-November/msg7.html)
>> has the following sentence, I quote:
>>
>> "Please note that due to ongoing work on releasing CentOS 7.6, the
>> mirror.centos.org repo is in freeze mode - see [4] and as such we have
>> not published the rpms to [5]. Once the freeze mode will end, we'll publish
>> the rpms."
>>
>> So when is the freeze mode over for this repo? I read this should have
>> happened after the CentOS 7.6 release but that was already one month ago
>> and still no version 3.11 RPMs in the
>> http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/ repo...
>>
>> Finally, all I want to do is to upgrade my current okd version 3.10 to
>> version 3.11 but I can't find any complete instructions documented
>> correctly. The best I can find is
>> https://docs.okd.io/3.11/upgrading/automated_upgrades.html which simply
>> mentions running the following upgrade playbook:
>>
>> ansible-playbook \
>> -i <inventory_file> \
>> playbooks/byo/openshift-cluster/upgrades/<version>/upgrade.yml
>>
>> Again here there is no mention of having to m

Re: RPMs for 3.11 still missing from the official OpenShift Origin CentOS repo

2019-01-05 Thread Daniel Comnea
On Sat, Jan 5, 2019 at 10:03 AM mabi  wrote:

> ‐‐‐ Original Message ‐‐‐
> On Saturday, January 5, 2019 10:57 AM, Daniel Comnea <
> comnea.d...@gmail.com> wrote:
>
> The specific openshift release directory has been present for a long time.
> That said, I'll work next week on pushing the v3.11 rpms to
> http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/ too
>
>
> Aha, so it is indeed still missing from this specific repo which is in use
> by the openshift-ansible playbook...
>
> That would be great if you can push the missing RPMs to that directory too
> because the openshift-ansible playbooks do rely on this specific directory
> having the right version available as far as I know.
>
[DC]: I think you are a bit confused: there are 2 ways to get the rpms from
the CentOS yum repo: using the generic repo [1], which will always have the
latest origin release, OR [2], where I've mentioned that you can install the
*centos-release-openshift-origin3** rpm, which will give you the [3] yum repo
(a minimal sketch follows the links).

[1] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
[2]
http://lists.openshift.redhat.com/openshift-archives/users/2018-November/msg7.html
[3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
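
In other words, the second path boils down to (a minimal sketch, assuming
CentOS 7 with the Extras repo enabled, as it is by default):

  # pulls in the yum repo definition for [3] (plus centos-release-ansible26)
  yum install centos-release-openshift-origin311

  # the 3.11 packages then resolve from mirror.centos.org
  yum install origin origin-clients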


Re: RPMs for 3.11 still missing from the official OpenShift Origin CentOS repo

2019-01-05 Thread Daniel Comnea
The specific openshift release directory has been present for a long time.
That said, I'll work next week on pushing the v3.11 rpms to
http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/ too.


Dani

On Fri, Jan 4, 2019 at 10:41 PM mabi  wrote:

> ‐‐‐ Original Message ‐‐‐
> On Friday, January 4, 2019 11:15 PM, Erik McCormick <
> emccorm...@cirrusseven.com> wrote:
>
> Change it to use:
> http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
>
>
> I see, so now there is one directory per version released and not all
> versions in the same openshift-origin directory like in the past...
>
> As I will be using the openshift-ansible upgrade playbook, do I need to
> manually change my yum repos.d file for the new 311 repo directory, or does
> the upgrade playbook take care of that?


Re: [CentOS-devel] [CentOS PaaS SIG]: Origin v3.11 rpms available officially released

2018-11-13 Thread Daniel Comnea
Hi Leo,

The rpms are already in the official CentOS repository [1]. As communicated
earlier, once CentOS 7.6 is out we (as a SIG) will be allowed to promote the
rpms.

As soon as the CentOS Infra team informs us, we will action it immediately,
followed by an announcement here.

Dani

[1] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/

On Tue, Nov 13, 2018 at 4:55 AM leo David  wrote:

> Hi,
> First of all, thank you very much for all the work to get this version
> released!
> Any news about having the rpms in the CentOS official repos?
> Thank you!
> Leo
>
> On Mon, Nov 12, 2018, 15:13 Scott Dodson 
>> We're aware of some issues in 2.7.0 - some tasks were skipped, preventing
>> proper etcd certificate generation - which appear to be fixed in 2.7.1.
>> Our OpenShift QE teams do not currently test with 2.7, so the community
>> may be the first to encounter problems, but we'll try to fix them if you
>> open a GitHub issue.
>>
>> On Mon, Nov 12, 2018 at 7:24 AM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Fri, Nov 9, 2018 at 18:15 Daniel Comnea <
>>> comnea.d...@gmail.com> wrote:
>>>
>>>>
>>>> Hi,
>>>>
>>>> We would like to announce that the OKD v3.11 rpms have been officially
>>>> released and are available at [1].
>>>>
>>>> In order to use the released repo [1], we have created and published
>>>> the rpm (containing the yum configuration file) [2], which is in the main
>>>> CentOS Extras repository. The rpm itself has a dependency on
>>>> *centos-release-ansible26* [3], which is the ansible 2.6 rpm
>>>> built by the CentOS Infra team.
>>>>
>>>
>>> Is there any known issue with ansible 2.7 with regards to this Origin
>>> release?
>>> I'm asking because in several other places within oVirt we are using 2.7
>>> modules and we are working on a role/playbook for deploying Origin on oVirt.
>>>
>>>
>>>
>>>
>>>>
>>>> Should you decide not to use the *centos-release-openshift-origin3**
>>>> rpm, then it will be your responsibility to get the ansible 2.6 required
>>>> by the openshift-ansible installer.
>>>>
>>>> Please note that due to ongoing work on releasing CentOS 7.6, the
>>>> mirror.centos.org repo is in freeze mode - see [4] - and as such we have
>>>> not published the rpms to [5]. Once the freeze ends, we'll
>>>> publish the rpms.
>>>>
>>>> Kudos goes to the CentOS Infra team for being very kind in giving us a
>>>> waiver to make the current release possible.
>>>>
>>>>
>>>> Thank you,
>>>> PaaS SIG team
>>>>
>>>> [1] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
>>>> [2] http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-openshift-origin311-1-2.el7.centos.noarch.rpm
>>>> [3] http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-ansible26-1-3.el7.centos.noarch.rpm
>>>> [4] 
>>>> https://lists.centos.org/pipermail/centos-devel/2018-November/017033.html
>>>>
>>>> [5] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
>>>>
>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>>
>>> sbona...@redhat.com
>>> <https://red.ht/sig>


[CentOS PaaS SIG]: Origin v3.11 rpms available officially released

2018-11-09 Thread Daniel Comnea
Hi,

We would like to announce that the OKD v3.11 rpms have been officially
released and are available at [1].

In order to use the released repo [1], we have created and published the rpm
(containing the yum configuration file) [2], which is in the main CentOS
Extras repository. The rpm itself has a dependency on
*centos-release-ansible26* [3], which is the ansible 2.6 rpm built by the
CentOS Infra team.

Should you decide not to use the *centos-release-openshift-origin3** rpm,
then it will be your responsibility to get the ansible 2.6 required by the
openshift-ansible installer (a minimal sketch follows).
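
Either way, getting ansible 2.6 via the SIG packaging is a two-liner (a
minimal sketch, assuming the CentOS Extras repo is enabled, as it is by
default on CentOS 7):

  yum install centos-release-ansible26   # repo definition from Extras
  yum install ansible                    # resolves to the 2.6.x build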

Please note that due to ongoing work on releasing CentOS 7.6, the
mirror.centos.org repo is in freeze mode - see [4] - and as such we have not
published the rpms to [5]. Once the freeze ends, we'll publish the
rpms.

Kudos goes to the CentOS Infra team for being very kind in giving us a waiver
to make the current release possible.


Thank you,
PaaS SIG team

[1] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
[2] http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-openshift-origin311-1-2.el7.centos.noarch.rpm
[3] http://mirror.centos.org/centos/7/extras/x86_64/Packages/centos-release-ansible26-1-3.el7.centos.noarch.rpm
[4] https://lists.centos.org/pipermail/centos-devel/2018-November/017033.html

[5] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/


Re: [CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-10-31 Thread Daniel Comnea
Thank you all for the confirmation.

I shall then proceed with promoting the packages to mirror.centos.org and
will update when progress is made.

On Wed, Oct 31, 2018 at 5:25 PM Carlos M. Cornejo <
carlos.cornejo.cre...@gmail.com> wrote:

> Hi,
>
> I’ve successfully deployed 3.11 OKD using centos-okd-ci repo
>
> Regards,
> Carlos M.
>
> Sent from my iPhone
>
> On 31 Oct 2018, at 16:42, Ricardo Martinelli de Oliveira <
> rmart...@redhat.com> wrote:
>
> I'd like to ask anyone who deployed OKD 3.11 successfully if you could
> reply to this thread with your ack or nack. We need this feedback in order
> to promote to -candidate and then the official CentOS repos.
>
> On Fri, Oct 19, 2018 at 5:42 PM Anton Hughes 
> wrote:
>
>> Thanks Phil
>>
>> I was using
>>
>> openshift_release="v3.11"
>> openshift_image_tag="v3.11"
>> openshift_pkg_version="-3.11"
>>
>> But should have been using
>>
>> openshift_release="v3.11.0"
>> openshift_image_tag="v3.11.0"
>> openshift_pkg_version="-3.11.0"
>>
>>
>>
>> On Sat, 20 Oct 2018 at 09:23, Phil Cameron  wrote:
>>
>>> Go to
>>> http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
>>> in your web browser and you can see the names of all available rpms. It
>>> appears the 3.11 rpms are 3.11.0
>>>
>>> cd /etc/yum.repos.d
>>> create a file, centos-okd-ci.repo
>>> [centos-okd-ci]
>>> name=centos-okd-ci
>>> baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
>>> gpgcheck=0
>>> enabled=1
>>>
>>> yum search origin-node
>>> will list the available rpms
>>>
>>> On 10/19/2018 04:09 PM, Anton Hughes wrote:
>>>
>>> Hi Daniel
>>>
>> Unfortunately this is still not working for me. I'm trying the method of
>> adding the repo using the inventory file, e.g.,
>>>
>>> openshift_additional_repos=[{'id': 'centos-okd-ci', 'name':
>>> 'centos-okd-ci', 'baseurl' :'
>>> http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/',
>>> 'gpgcheck' :'0', 'enabled' :'1'}]
>>>
>>> but I am getting the below error.
>>>
>>> TASK [openshift_node : Install node, clients, and conntrack packages]
>>> **
>>> Saturday 20 October 2018  09:04:53 +1300 (0:00:02.255)   0:03:34.602
>>> **
>>> FAILED - RETRYING: Install node, clients, and conntrack packages (3
>>> retries left).
>>> FAILED - RETRYING: Install node, clients, and conntrack packages (2
>>> retries left).
>>> FAILED - RETRYING: Install node, clients, and conntrack packages (1
>>> retries left).
>>> failed: [xxx.xxx.xxx.xxx] (item={u'name': u'origin-node-3.11'}) =>
>>> {"attempts": 3, "changed": false, "item": {"name": "origin-node-3.11"},
>>> "msg": "No package matching 'origin-node-3.11' found available, installed
>>> or updated", "rc": 126, "results": ["No package matching 'origin-node-3.11'
>>> found available, installed or updated"]}
>>> FAILED - RETRYING: Install node, clients, and conntrack packages (3
>>> retries left).
>>> FAILED - RETRYING: Install node, clients, and conntrack packages (2
>>> retries left).
>>> FAILED - RETRYING: Install node, clients, and conntrack packages (1
>>> retries left).
>>> failed: [xxx.xxx.xxx.xxx] (item={u'name': u'origin-clients-3.11'}) =>
>>> {"attempts": 3, "changed": false, "item": {"name": "origin-clients-3.11"},
>>> "msg": "No package matching 'origin-clients-3.11' found available,
>>> installed or updated", "rc": 126, "results": ["No package matching
>>> 'origin-clients-3.11' found available, installed or updated"]}
>>>
>>>
>>> On Sat, 20 Oct 2018 at 03:27, Daniel Comnea 
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> First of all, sorry for the late reply, as well as for any confusion I
>>>> may have caused with my previous email.
>>>> I was very pleased to see the vibe and excitement around testing OKD
>>>> v3.11, very much appreciated.
>>>>
>>>> Here is the latest info:

Re: [CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-10-19 Thread Daniel Comnea
Hi all,

First of all, sorry for the late reply, as well as for any confusion I may
have caused with my previous email.
I was very pleased to see the vibe and excitement around testing OKD v3.11,
very much appreciated.

Here is the latest info:

   - everyone who wants to help us with testing should use the [1] repo,
   which can be consumed:
      - in the inventory, as in [2], or
      - by deploying your own repo file [3]
   - nobody should use the repo I mentioned in my previous email [4]
   (the CentOS Infra team corrected me on the confusion I made; once again,
   apologies for that)


Regarding the Ansible version, here is the info following my sync-up with
the CentOS Infra team:

   - very likely on Monday, or Tuesday at the latest, a new rpm called
   centos-release-ansible26 will appear in CentOS Extras
   - the above rpm will become a dependency of the
   *centos-release-openshift-origin311* rpm, which will be created and land
   in the CentOS Extras repo at the same time OKD v3.11 is promoted to
   mirror.centos.org
  - note this is the same flow as for all versions prior to v3.11
  (the rpm provides the CentOS repo location for the OKD rpms).

*Note*:

   1. if your flow up until now was to never use the
   *centos-release-openshift-originXXX* rpm and you were creating your own
   repo files, then you will need to make sure you pull in the Ansible
   2.6.x rpm (together with its own dependencies) yourself. It is up to you
   where you pull the Ansible rpm from: EPEL, CentOS Extras, etc. (see the
   sketch after this list)
   2. with the above we are trying to have a single way of solving the
   Ansible dependency problem
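
A minimal sketch of option 1, assuming you go the CentOS Extras route
(package name as announced above; EPEL would work too):

# the release rpm configures the repo that carries Ansible 2.6.x
yum install -y centos-release-ansible26
yum install -y ansible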


Hopefully this brings more clarity around this topic.



Thank you,
PaaS SIG team

[1] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
[2]

[OSEv3:vars]
(...)
openshift_additional_repos=[{'id': 'centos-okd-ci', 'name': 'centos-okd-ci',
'baseurl': 'http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/',
'gpgcheck': '0', 'enabled': '1'}]


[3]
[centos-openshift-origin311-testing]
name=CentOS OpenShift Origin Testing
baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[4] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/



On Wed, Oct 17, 2018 at 10:38 AM Daniel Comnea 
wrote:

> Hi,
>
> We would like to announce that OKD v3.11 rpms are available for testing
> at [1].
>
> As such we are calling for help from the community to start testing and
> let us know if there are issues with the rpms and their dependencies.
>
> And in the spirit of transparency, see below the plan to promote the rpms
> to the mirror.centos.org repo:
>
>
>1. in the next few days the packages should be promoted to the test
>repo [2] (currently it does not exist; we are waiting for it to be
>sync'ed in the background)
>2. in one or two weeks' time, if we haven't heard of any issues or
>blockers, we are going to promote to the [3] repo (currently it doesn't
>exist; it will once the rpms are promoted and signed)
>
>
> Please note the Ansible version used (and supported) *must be* 2.6.x and
> not 2.7; if you opt to ignore this warning you will run into issues.
>
> On a different note, the CentOS Infra team are working hard (thanks!) to
> package and release a centos-ansible rpm which we'll promote in our PaaS
> repos.
>
> The rationale is to bring more control around the Ansible version used/
> required by the openshift-ansible installer, and not rely on the latest
> Ansible version pushed to the EPEL repo, which recently caused friction
> (reflected in our CI as well as in users reporting issues).
>
>
> Thank you,
> PaaS SIG team
>
> [1] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/
> [2] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
> [3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-10-18 Thread Daniel Comnea
Hi Marc,

Thank you for volunteering to help us with the tests.
Assuming I understood your question, you can set your repo baseurl to
https://cbs.centos.org/repos/paas7-openshift-origin311-candidate/x86_64/os/Packages/
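
For example, a minimal repo-file sketch (the repo id and filename are
arbitrary; the baseurl is assumed to point at the directory containing
repodata/, one level above Packages/, and gpgcheck is disabled on the
assumption that these candidate builds are not yet signed):

cat > /etc/yum.repos.d/centos-okd-311-candidate.repo <<'EOF'
[centos-okd-311-candidate]
name=centos-okd-311-candidate
baseurl=https://cbs.centos.org/repos/paas7-openshift-origin311-candidate/x86_64/os/
gpgcheck=0
enabled=1
EOF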




On Wed, Oct 17, 2018 at 9:16 PM Marc Schlegel  wrote:

> I would like to participate in the testing.
>
> How can I set the rpm-repo-url for the ansible-installer? I couldn't find
> any inventory-param in the docs.
>
> Am Mittwoch, 17. Oktober 2018, 11:38:48 CEST schrieb Daniel Comnea:
> > Hi,
> >
> > We would like to announce that OKD v3.11 rpms are available for testing
> > at [1].
> >
> > As such we are calling for help from the community to start testing and
> > let us know if there are issues with the rpms and their dependencies.
> >
> > And in the spirit of transparency, see below the plan to promote the
> > rpms to the mirror.centos.org repo:
> >
> >
> >1. in the next few days the packages should be promoted to the test
> >repo [2] (currently it does not exist; we are waiting for it to be
> >sync'ed in the background)
> >2. in one or two weeks' time, if we haven't heard of any issues or
> >blockers, we are going to promote to the [3] repo (currently it doesn't
> >exist; it will once the rpms are promoted and signed)
> >
> >
> > Please note the Ansible version used (and supported) *must be* 2.6.x and
> > not 2.7; if you opt to ignore this warning you will run into issues.
> >
> > On a different note, the CentOS Infra team are working hard (thanks!) to
> > package and release a centos-ansible rpm which we'll promote in our PaaS
> > repos.
> >
> > The rationale is to bring more control around the Ansible version used/
> > required by the openshift-ansible installer, and not rely on the latest
> > Ansible version pushed to the EPEL repo, which recently caused friction
> > (reflected in our CI as well as in users reporting issues).
> >
> >
> > Thank you,
> > PaaS SIG team
> >
> > [1] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/
> > [2]
> https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
> > [3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
> >
>
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: [CentOS-devel] [CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-10-18 Thread Daniel Comnea
PSB (please see below).

On Thu, Oct 18, 2018 at 6:17 PM Rich Megginson  wrote:

> On 10/17/18 3:38 AM, Daniel Comnea wrote:
> > Hi,
> >
> > We would like to announce that OKD v3.11 rpms are available for testing
> > at [1].
> >
> > As such we are calling for help from the community to start testing and
> > let us know if there are issues with the rpms and their dependencies.
> >
> > And in the spirit of transparency, see below the plan to promote the
> > rpms to the mirror.centos.org repo:
> >
> >  1. in the next few days the packages should be promoted to the test
> >  repo [2] (currently it does not exist; we are waiting for it to be
> >  sync'ed in the background)
> >  2. in one or two weeks' time, if we haven't heard of any issues or
> >  blockers, we are going to promote to the [3] repo (currently it doesn't
> >  exist; it will once the rpms are promoted and signed)
> >
> >
> > Please note the Ansible version used (and supported) /*must be*/ 2.6.x
> > and not 2.7; if you opt to ignore this warning you will run into issues.
> >
> > On a different note, the CentOS Infra team are working hard (thanks!) to
> > package and release a centos-ansible rpm which we'll promote in our PaaS
> > repos.
>
>
> So does that mean we cannot test OKD v3.11 yet, unless we build our own
> version of ansible 2.6.x?
> [DC]: I've been waiting for the Infra guys to build the rpm, but they are
> traveling, so I went ahead and tagged ansible 2.6; it should appear at [1]
> in the next 15-20 minutes. That should unblock you all from testing it.
>
> What will happen if we attempt to use ansible 2.7? In my testing, I get
> stuck at deploying the control plane pods - it seems the virtual networking
> was not set up by openshift-ansible.
> [DC]: there have been a few issues reported on this topic, and since they
> were already known we made it clear which Ansible version is supported
> (read: it works) and which is not.
>
> >
> > The rationale is to bring more control around the Ansible version used/
> > required by the openshift-ansible installer, and not rely on the latest
> > Ansible version pushed to the EPEL repo, which recently caused friction
> > (reflected in our CI as well as in users reporting issues).
> >
> >
> > Thank you,
> > PaaS SiG team
> >
> > [1] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/
> > [2]
> https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
> > [3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
> >
> > ___
> > CentOS-devel mailing list
> > centos-de...@centos.org
> > https://lists.centos.org/mailman/listinfo/centos-devel
>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[CentOS PaaS SIG]: Origin v3.11 rpms available for testing

2018-10-17 Thread Daniel Comnea
Hi,

We would like to announce that OKD v3.11 rpms are available for testing at
[1].

As such we are calling for help from the community to start testing and let
us know if there are issues with the rpms and their dependencies.

And in the spirit of transparency, see below the plan to promote the rpms
to the mirror.centos.org repo:


   1. in the next few days the packages should be promoted to the test repo
   [2] (currently it does not exist; we are waiting for it to be sync'ed in
   the background)
   2. in one or two weeks' time, if we haven't heard of any issues or
   blockers, we are going to promote to the [3] repo (currently it doesn't
   exist; it will once the rpms are promoted and signed)


Please note the Ansible version used (and supported) *must be* 2.6.x and
not 2.7; if you opt to ignore this warning you will run into issues.

On a different note, the CentOS Infra team are working hard (thanks!) to
package and release a centos-ansible rpm which we'll promote in our PaaS
repos.

The rationale is to bring more control around the Ansible version used/
required by the openshift-ansible installer, and not rely on the latest
Ansible version pushed to the EPEL repo, which recently caused friction
(reflected in our CI as well as in users reporting issues).


Thank you,
PaaS SIG team

[1] https://cbs.centos.org/repos/paas7-openshift-origin311-testing/
[2] https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311/
[3] http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: v3.11 - One or more checks failed

2018-10-12 Thread Daniel Comnea
I suspect this is OKD on CentOS?

On Fri, Oct 12, 2018 at 9:50 PM Anton Hughes 
wrote:

> Hello
>
> I'm trying to install 3.11, but am getting the below error.
>
> I'm using
> https://github.com/openshift/openshift-ansible/releases/tag/v3.11.0
>
> Failure summary:
>
>
>   1. Hosts:xxx.xxx.xxx.xxx
>  Play: OpenShift Health Checks
>  Task: Run health checks (install) - EL
>  Message:  One or more checks failed
>  Details:  check "package_version":
>Not all of the required packages are available at their
> requested version
>origin:3.11
>origin-node:3.11
>origin-master:3.11
>Please check your subscriptions and enabled repositories.
>
>
> The relevant section of my inventory file is:
>
> [OSEv3:vars]
> ansible_ssh_user=root
> enable_excluders=False
> enable_docker_excluder=False
> ansible_service_broker_install=False
>
> containerized=True
> os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
>
> openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
>
> #openshift_node_kubelet_args={'pods-per-core': ['10']}
>
> deployment_type=origin
> openshift_deployment_type=origin
>
> openshift_release=v3.11
> openshift_pkg_version=-3.11.0
> openshift_image_tag=v3.11
> openshift_disable_check=package_version
> openshift_disable_check=docker_storage
>
>
> template_service_broker_selector={"region":"infra"}
> openshift_metrics_image_version="v3.11"
> openshift_logging_image_version="v3.11"
> openshift_logging_elasticsearch_proxy_image_version="v1.0.0"
> logging_elasticsearch_rollout_override=false
> osm_use_cockpit=true
>
>
>
> Any help is appreciated.
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: how to disable the ansible service broker?

2018-10-10 Thread Daniel Comnea
Which release is this one?

On Wed, Oct 10, 2018 at 1:55 PM Marc Boorshtein 
wrote:

> I added the following to my inventory:
>
> ansible_service_broker_install=false
> ansible_service_broker_remove=true
>
> and then ran the api-server playbook but its still there.  Is there a
> different playbook I'm supposed to use?
>
> Thanks
> Marc
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OKD 3.10 keeps switching between the certificates

2018-10-01 Thread Daniel Comnea
I suggest you open a GitHub issue too.

On Mon, Oct 1, 2018 at 10:05 AM Gaurav Ojha  wrote:

> Basically facing two different issues.
>
>1. OpenShift Origin 3.10 keeps switching between the custom named
>certificate deployed and the internal certificate being used. The web
>console randomly reports Server Connection Interrupted, and then switches
>to the internal certificate, but a fresh loading of the page serves the
>custom certificate.
>2. Even though the publicMasterURL is configured, the browser still
>redirects to the masterURL
>
> oc v3.10.0+0c4577e-1
> kubernetes v1.10.0+b81c8f8
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://lb.okd.cloud.rnoc.gatech.edu:8443
> openshift v3.10.0+fd501dd-48
> kubernetes v1.10.0+b81c8f8
>
> Steps To Reproduce
>
>1. Configure a publicMasterURL and a masterURL. In my case they are
>publicMasterURL=okd-cluster.cloud.mydomain.com and masterURL=
>lb.cloud.mydomain.com. Note that here lb refers to the load balancer
>of my multi-master cluster.
>2. Deploy the certificates generated when installing through ansible.
>This works fine, I can see in my master-config.yml the correct values. The
>value for publicMasterURL points to okd-cluster.cloud.mydomain.com:8443
>and masterURL to lb.cloud.mydomain.com:8443. In the servingInfo, the
>correct certificates are pointed to. The generated certificate has a common
>name of lb.cloud.mydomain.com and an alternative name of
>okd-cluster.cloud.mydomain.com.
>3. Access the web console. The certificate served is valid.
>
> Here, okd-cluster.cloud.mydomain.com is a CNAME to lb.cloud.mydomain.com
> Current Result
>
>1. Even though I enter okd-cluster.cloud.mydomain.com:8443, the
>browser redirects to lb.cloud.mydomain.com:8443. I have checked and
>nowhere does the publicMasterURL points to lb.cloud.mydomain.com
>2. When logged in, the console randomly throws an error saying Server
>Connection Interrupted, and at times, refreshes and now reverts to the
>internal certificate and serves it. This goes away if I close the browser
>and reload the page. The correct certificate is again served, but again
>randomly reverts to the internal certificate.
>
> My expectation is that once deployed, accessing
> okd-cluster.cloud.mydomain.com should always use that address, and the
> certificate should be served correctly always.
>
> Is it because the common name is the same as the masterURL and the
> alternative name holds the same value as the publicMasterURL? I am not
> sure if this is the case, but it would be great to get some perspective
> on this problem I am seeing.
>
>
> Regards
>
> Gaurav
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Atomic Host support on OpenShift 3.11 and up

2018-09-06 Thread Daniel Comnea
Clayton,

Is 4.0 going to be 3.12 rebranded (if we follow the current release
cycle), or 3.13?



On Thu, Sep 6, 2018 at 2:34 PM Clayton Coleman  wrote:

> The successor to atomic host will be RH CoreOS and the community
> variants.  That is slated for 4.0.
>
> > On Sep 6, 2018, at 9:25 AM, Marc Ledent  wrote:
> >
> > Hi all,
> >
> > I have read in the 3.10 release notes that Atomic Host is deprecated and
> > will not be supported starting with release 3.11.
> >
> > What does this mean? Is it advisable to migrate all Atomic Host VMs to
> > "standard" RHEL servers?
> >
> > Kind regards,
> > Marc
> >
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Service and route in front of api pods in OpenShift 3.10

2018-09-06 Thread Daniel Comnea
Very nice Mickael !

Just a minor note (although I'm sure you know already) in case others bump
into this thread: this method works for public domains, but it won't work
if your domain is an internal/dev one (e.g. .local).
Dani

On Wed, Sep 5, 2018 at 4:11 PM Mickaël Canévet 
wrote:

> Thanks a lot Tobias,
>
> That helped a lot, it's working fine.
> Now I have a Let's Encrypt certificate for my web console without using an
> external reverse proxy \o/
>
> Kind regards,
> Mickaël
>
> On Wed, 5 Sep 2018 at 13:17, Tobias Florek  wrote:
>
>> Hi!
>>
>> It is certainly possible.
>>
>> You already have a "kubernetes" service in the default namespace. You
>> only need to expose that service's https port with Reencrypt TLS-Policy
>> and set the kubernetes.io/tls-acme=true annotation.
>>
>> Your unsuccessful try was missing the reencrypt tls policy.
>>
>> Cheers,
>>  Tobias Florek
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
> --
>   « Any society that would give up a little liberty to gain a little
> security will deserve neither and lose both. »
>   (Benjamin Franklin)
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openshift-ansible release-3.10 - Install fails with control plane pods

2018-08-30 Thread Daniel Comnea
Marc,

could you please look over the issue [1], pull the master pod logs, and
see if you bumped into the same issue mentioned by the other folks?
Also make sure the openshift-ansible release is the latest one.
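
A quick sketch for pulling those logs on the master (the grep pattern is
illustrative, and <container> stands for whichever ID or name the first
command shows):

# on the master node, find the control plane containers
sudo docker ps -a | grep -E 'master-(api|etcd|controllers)'
# then dump the logs of the suspect container
sudo docker logs <container>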

Dani

[1] https://github.com/openshift/openshift-ansible/issues/9575

On Wed, Aug 29, 2018 at 7:36 PM Marc Schlegel  wrote:

> Hello everyone
>
> I am having trouble getting a working Origin 3.10 installation using the
> openshift-ansible installer. My install always fails because the control
> plane pods are not available. I've checked out the release-3.10 branch from
> openshift-ansible and configured the inventory accordingly.
>
>
> TASK [openshift_control_plane : Start and enable self-hosting node]
> **
> changed: [master]
> TASK [openshift_control_plane : Get node logs]
> ***
> skipping: [master]
> TASK [openshift_control_plane : debug]
> **
> skipping: [master]
> TASK [openshift_control_plane : fail]
> *
> skipping: [master]
> TASK [openshift_control_plane : Wait for control plane pods to appear]
> ***
>
> failed: [master] (item=etcd) => {"attempts": 60, "changed": false, "item":
> "etcd", "msg": {"cmd": "/bin/oc get pod master-etcd-master.vnet.de -o
> json -n kube-system", "results": [{}], "returncode": 1, "stderr": "The
> connection to the server master.vnet.de:8443 was refused - did you
> specify the right host or port?\n", "stdout": ""}}
>
> TASK [openshift_control_plane : Report control plane errors]
> *
> fatal: [master]: FAILED! => {"changed": false, "msg": "Control plane pods
> didn't come up"}
>
>
> I am using Vagrant to set up a local domain (vnet.de) which also includes
> a dnsmasq-node to have full control over the DNS. The following VMs are
> running, and DNS and SSH work as expected:
>
> Hostname          IP
> domain.vnet.de    192.168.60.100
> master.vnet.de    192.168.60.150 (DNS also works for openshift.vnet.de,
>                   which is configured as
>                   openshift_master_cluster_public_hostname; also runs etcd)
> infra.vnet.de     192.168.60.151 (the openshift_master_default_subdomain
>                   wildcard points to this node)
> app1.vnet.de      192.168.60.152
> app2.vnet.de      192.168.60.153
>
>
> When connecting to the master-node I can see that several docker-instances
> are up and running
>
> [vagrant@master ~]$ sudo docker ps
> CONTAINER IDIMAGECOMMAND
> CREATED STATUS  PORTS
>  NAMES
>
> 9a0844123909ff5dd2137a4f "/bin/sh -c
> '#!/bi..."   19 minutes ago  Up 19 minutes
>  
> k8s_etcd_master-etcd-master.vnet.de_kube-system_a2c858fccd481c334a9af7413728e203_0
>
> 41d803023b72f216d84cdf54 "/bin/bash -c
> '#!/..."   19 minutes ago  Up 19 minutes
>  
> k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_0
>
> 044c9d12588cdocker.io/openshift/origin-pod:v3.10.0
>  "/usr/bin/pod"   19 minutes ago  Up 19 minutes
>
>  
> k8s_POD_master-api-master.vnet.de_kube-system_86017803919d833e39cb3d694c249997_0
>
> 10a197e394b3docker.io/openshift/origin-pod:v3.10.0
>  "/usr/bin/pod"   19 minutes ago  Up 19 minutes
>
>  
> k8s_POD_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_0
>
> 20f4f86bdd07docker.io/openshift/origin-pod:v3.10.0
>  "/usr/bin/pod"   19 minutes ago  Up 19 minutes
>
>  
> k8s_POD_master-etcd-master.vnet.de_kube-system_a2c858fccd481c334a9af7413728e203_0
>
>
> However, there is no port 8443 open on the master-node. No wonder the
> ansible-installer complains.
>
> The machines are using a plain Centos 7.5 and I've run the
> openshift-ansible/playbooks/prerequisites.yml first and then
> openshift-ansible/playbooks/deploy_cluster.yml.
> I've double-checked the installation documentation and my Vagrant
> config...all looks correct.
>
> Any ideas/advice?
> regards
> Marc
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: origin3.9 deployments

2018-08-20 Thread Daniel Comnea
Alan, Todd,

I think I clarified a few things in my response [1].

In the future it would be good if, for the same issue, we stick to the same
email thread so we don't lose any info - bouncing between threads is
getting a bit unmanageable, in my opinion.

Dani

[1] https://github.com/openshift/openshift-ansible/issues/9675#issuecomment-414488090


On Mon, Aug 20, 2018 at 4:53 PM, Alan Christie <
achris...@informaticsmatters.com> wrote:

> Thanks Todd.
>
> It does appear to be part of the documented
> “openshift-ansible/playbooks/prerequisites.yml”
> playbook; it’s the one installing the repos. Everything
> was fine a week ago but broke for me last week.
>
> Alan Christie
> achris...@informaticsmatters.com
>
>
>
> > On 20 Aug 2018, at 16:50, Walters, Todd 
> wrote:
> >
> > Ok, good luck. We set our own repo versions. Never had luck and didn’t
> realize the playbooks installed repos. Thought that was a prereq. At least
> in enterprise it was.  Hope it works out for you.
> >
> > On 8/20/18, 10:47 AM, "Alan Christie" 
> wrote:
> >
> >Thanks,
> >
> >I’ve tried all sorts of things now and need a rest - I’ve been trying
> to understand this behaviour for the last 7 hours and the working day’s
> approaching its end for me!
> >
>In the meantime I’m raising this as an issue as requested, as I
> shouldn’t need to tinker with repos that are being installed by the
> OpenShift playbooks. I use tagged releases and am using
> "openshift-ansible-3.9.40-1”. The rest of the details will go in the issue.
> >
>In the meantime I’m just going to set “package_version” in the
> “openshift_disable_check” list in the inventory.
> >
> >Alan Christie
> >achris...@informaticsmatters.com
> >
> >
> >
> >> On 20 Aug 2018, at 16:33, Walters, Todd 
> wrote:
> >>
> >> I believe having the proper repos enabled is part of the node
> >> prerequisites. So we get around this by running the prereq playbook,
> >> disabling the 'latest' origin release repo (the one with no release
> >> number on the end), and enabling 3.9 only:
> >>
> >> - name: Remove the 'latest' Openshift Origin repo
> >>   yum:
> >>     name: centos-release-openshift-origin
> >>     state: absent
> >>
> >> - name: Install Specific Version of Openshift Origin
> >>   yum:
> >>     name: centos-release-openshift-origin39
> >>     state: present
> >>
> >> Also, the only git branch that’s supposed to work is release-3.9, which
> >> is what we always pull for playbooks.
> >>
> >> Thanks,
> >>
> >> Todd
> >>
> >>   Message: 1
> >>   Date: Mon, 20 Aug 2018 16:11:50 +0100
> >>   From: Alan Christie
> >>   Subject: Re: Ansible/Origin 3.9 deployment now fails because
> >>   "package(s) are available at a version that is higher than requested"
> >>
> >>   I’m doing pretty much the same thing. Prior to ‘prerequisites’ I run
> >>   the following play:
> >>
> >>   - hosts: nodes
> >>     become: yes
> >>     tasks:
> >>       - name: Install origin39 repo
> >>         yum:
> >>           name: centos-release-openshift-origin39
> >>           state: present
> >>
> >>   The 3.9 repo appears in /etc/yum.repos.d/ but, after the
> >>   prerequisites, so does “CentOS-OpenShift-Origin.repo”, and the main
> >>   “deploy_cluster.yml” fails again. The only way through this for me is
> >>   to add “package_version” to “openshift_disable_check”.
> >>
> >>   Alan Christie
> >>   achris...@informaticsmatters.com
> >>
> >>
> >>
> >>
> >> 
> 

Re: Ansible/Origin 3.9 deployment now fails because "package(s) are available at a version that is higher than requested"

2018-08-20 Thread Daniel Comnea
Just came across this email, and it's still not clear to me why the issue
is still taking place.

Can you please move this issue onto
https://github.com/openshift/openshift-ansible and provide the following
info:


   - the openshift-ansible rpm (if you used that) or the tag used
   - a gist with the full trace error you get


I'll try and see if I can help you out.
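
In the meantime, for anyone hitting the same check, the interim workaround
mentioned further down this thread is to disable the failing check in the
inventory - a minimal sketch:

[OSEv3:vars]
(...)
openshift_disable_check=package_version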



On Mon, Aug 20, 2018 at 3:20 PM, Peter Heitman  wrote:

> I agree with you. I've hit this same error when previous versions were
> released. I'm not sure why defining the version we want to install (and
> then using that version of the openshift ansible git) isn't sufficient. As
> for installing the repo, I do this before I run the prerequisite playbook,
> i.e. ansible all -i  -m yum -a 
> "name=centos-release-openshift-origin39
> state=present"  --become. That seems to resolve the issue.
>
> On Mon, Aug 20, 2018 at 10:10 AM Alan Christie <
> achris...@informaticsmatters.com> wrote:
>
>> Thanks Peter.
>>
>> Interestingly it looks like it’s Origin’s own “prerequisites.yml”
>> playbook that’s adding the repo that’s causing problems. My instances don’t
>> have this repo until I run that playbook.
>>
>> Why do I have to remove something that’s being added by the prerequisite
>> playbook? Especially as my inventory explicitly states
>> "openshift_release=v3.9”?
>>
>> If the answer is “do not run prerequisites.yml” what’s the point of it?
>>
>> I still wonder why this specific issue is actually an error. Shouldn’t it
>> be installing the specific version anyway? Shouldn’t the error occur if
>> there is no 3.9 package, not if there’s a 3.10 package?
>>
>> Incidentally, I’m using the ansible code from
>> "openshift-ansible-3.9.40-1”.
>>
>> Alan Christie
>> achris...@informaticsmatters.com
>>
>>
>>
>> On 18 Aug 2018, at 13:36, Peter Heitman  wrote:
>>
>> See the recent thread "How to avoid upgrading to 3.10". The bottom line
>> is to install the 3.9 specific repo. For CentOS that is
>> centos-release-openshift-origin39
>>
>> On Sat, Aug 18, 2018, 2:44 AM Alan Christie <
>> achris...@informaticsmatters.com> wrote:
>>
>>> Hi,
>>>
>>> I’ve been deploying new clusters of Origin v3.9 using the official
>>> Ansible playbook approach for a few weeks now, using what appear to be
>>> perfectly reasonable base images on OpenStack and AWS. Then, this week,
>>> with no other changes having been made, the deployment fails with this
>>> message: -
>>>
>>> One or more checks failed
>>>  check "package_version":
>>>Some required package(s) are available at a version
>>>that is higher than requested
>>>  origin-3.10.0
>>>  origin-node-3.10.0
>>>  origin-master-3.10.0
>>>This will prevent installing the version you requested.
>>>Please check your enabled repositories or adjust
>>> openshift_release.
>>>
>>> I can avoid the error, and deploy what appears to be a perfectly
>>> functional 3.9, if I add *package_version* to *openshift_disable_check*
>>> in the inventory for the deployment. But this is not the right way to
>>> deal with this sort of error.
>>>
>>> Q1) How does one correctly address this error?
>>>
>>> Q2) Out of interest … why is this specific issue an error? I’ve
>>> instructed the playbook to install v3.9. I don't care if there is a 3.10
>>> release available - I do care if there is not a 3.9. Shouldn’t the error
>>> occur if there is no 3.9 package, not if there’s a 3.10 package?
>>>
>>> Alan Christie
>>> Informatics Matters Ltd.
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>
>>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users