Re: setting jenkins permissions

2018-10-01 Thread Ben Parees
On Mon, Oct 1, 2018 at 6:31 PM, Seth Kenlon  wrote:

> I'm new to OpenShift use, and still getting my head around roles and
> permissions. I've got a test instance, and I'm trying to add a role to
> provide a Jenkins user access to Read, Job Build, and Job Cancel. I don't
> want the Jenkins user to have access to any more than that.
>
> Is that a possible combination of permissions to create through the GUI?
>

It's probably possible if you use Jenkins to manage your authorization
(Jenkins offers pretty fine-grained user permission control). If you use the
OpenShift integration (in which your Jenkins permissions are determined by
your permissions within OpenShift), it's not going to be possible, because we
basically match a few pretty chunky roles (view, edit, admin) to Jenkins
permissions (i.e. if you can edit a project in OpenShift, you can do
edit-like things in the Jenkins instance running in that project):
https://github.com/openshift/jenkins-openshift-login-plugin#openshift-role-to-jenkins-permission-mapping

You can find some more details about this here:

https://docs.okd.io/latest/using_images/other_images/jenkins.html#jenkins-authentication
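
For reference, a rough sketch of what the OpenShift side of that mapping
looks like in practice (the user name "jenkins" and project "myproject"
below are placeholders, not anything from your setup):

  # "view" maps to read-only access in that project's Jenkins instance
  oc adm policy add-role-to-user view jenkins -n myproject

  # "edit" is the coarsest role that unlocks build/cancel-type actions in
  # Jenkins, but it also grants edit-level access to everything else in
  # the project
  oc adm policy add-role-to-user edit jenkins -n myproject

If you need exactly Read + Job Build + Job Cancel and nothing more, that
granularity has to come from Jenkins' own authorization (e.g. matrix-based
security), not from the role mapping above.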





>
>
> --
> Seth Kenlon
> Senior Technical Editor
> Red Hat
> sken...@redhat.com T: +61-735-147125 M: +64-2040-619719 IM:
> skenlon
> F97393A5
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Ben Parees | OpenShift
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OKD 3.10 keeps switching between the certificates

2018-10-01 Thread Gaurav Ojha
Hi,

Sorry about the delayed update. I just reverted to a clean snapshot of my VMs 
and ran a fresh cluster deployment, and the issue isn’t present anymore. Seems 
it was related to a failure I had faced quite early on in the deployment phase.

Regards


> On Oct 1, 2018, at 17:51, Daniel Comnea  wrote:
> 
> I suggest you open a github issue too.
> 
> On Mon, Oct 1, 2018 at 10:05 AM Gaurav Ojha  wrote:
> Basically facing two different issues.
>
>    1. OpenShift Origin 3.10 keeps switching between the custom named
>    certificate deployed and the internal certificate being used. The web
>    console randomly reports Server Connection Interrupted, and then switches
>    to the internal certificate, but a fresh loading of the page serves the
>    custom certificate.
>    2. Even though the publicMasterURL is configured, the browser still
>    redirects to the masterURL
>
> oc v3.10.0+0c4577e-1
> kubernetes v1.10.0+b81c8f8
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://lb.okd.cloud.rnoc.gatech.edu:8443
> openshift v3.10.0+fd501dd-48
> kubernetes v1.10.0+b81c8f8
>
> Steps To Reproduce
>
>    1. Configure a publicMasterURL and a masterURL. In my case they are
>    publicMasterURL=okd-cluster.cloud.mydomain.com and
>    masterURL=lb.cloud.mydomain.com. Note that here lb refers to the load
>    balancer of my multi-master cluster.
>    2. Deploy the certificates generated when installing through ansible.
>    This works fine, I can see in my master-config.yml the correct values.
>    The value for publicMasterURL points to okd-cluster.cloud.mydomain.com:8443
>    and masterURL to lb.cloud.mydomain.com:8443. In the servingInfo, the
>    correct certificates are pointed to. The generated certificate has a
>    common name of lb.cloud.mydomain.com and an alternative name of
>    okd-cluster.cloud.mydomain.com.
>    3. Access the web console. The certificate served is valid.
>
> Here, okd-cluster.cloud.mydomain.com is a CNAME to lb.cloud.mydomain.com
>
> Current Result
>
>    1. Even though I enter okd-cluster.cloud.mydomain.com:8443, the browser
>    redirects to lb.cloud.mydomain.com:8443. I have checked and nowhere does
>    the publicMasterURL point to lb.cloud.mydomain.com
>    2. When logged in, the console randomly throws an error saying Server
>    Connection Interrupted, and at times, refreshes and now reverts to the
>    internal certificate and serves it. This goes away if I close the browser
>    and reload the page. The correct certificate is again served, but again
>    randomly reverts to the internal certificate.
>
> My expectation is that once deployed, accessing
> okd-cluster.cloud.mydomain.com should always use that address, and the
> certificate should be served correctly always.
>
> Is it because the common name is the same as the masterURL and the
> alternative name holds the same value as the publicMasterURL? I am not sure
> if this is the case, but it would be great to get some perspective on this
> problem I am seeing.
>
> Regards
> Gaurav
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OKD 3.10 keeps switching between the certificates

2018-10-01 Thread Daniel Comnea
I suggest you open a github issue too.

On Mon, Oct 1, 2018 at 10:05 AM Gaurav Ojha  wrote:

> Basically facing two different issues.
>
>1. OpenShift Origin 3.10 keeps switching between the custom named
>certificate deployed and the internal certificate being used. The web
>console randomly reports Server Connection Interrupted, and then switches
>to the internal certificate, but a fresh loading of the page serves the
>custom certificate.
>2. Even though the publicMasterURL is configured, the browser still
>redirects to the masterURL
>
> oc v3.10.0+0c4577e-1
> kubernetes v1.10.0+b81c8f8
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://lb.okd.cloud.rnoc.gatech.edu:8443
> openshift v3.10.0+fd501dd-48
> kubernetes v1.10.0+b81c8f8
>
> Steps To Reproduce
>
>1. Configure a publicMasterURL and a masterURL. In my case they are
>publicMasterURL=okd-cluster.cloud.mydomain.com and masterURL=
>lb.cloud.mydomain.com. Note that here lb refers to the load balancer
>of my multi-master cluster.
>2. Deploy the certificates generated when installing through ansible.
>This works fine, I can see in my master-config.yml the correct values. The
>value for publicMasterURL points to okd-cluster.cloud.mydomain.com:8443
>and masterURL to lb.cloud.mydomain.com:8443. In the servingInfo, the
>correct certificates are pointed to. The generated certificate has a common
>name of lb.cloud.mydomain.com and an alternative name of
>okd-cluster.cloud.mydomain.com.
>3. Access the web console. The certificate served is valid.
>
> Here, okd-cluster.cloud.mydomain.com is a CNAME to lb.cloud.mydomain.com
> Current Result
>
>1. Even though I enter okd-cluster.cloud.mydomain.com:8443, the
>browser redirects to lb.cloud.mydomain.com:8443. I have checked and
>    nowhere does the publicMasterURL point to lb.cloud.mydomain.com
>2. When logged in, the console randomly throws an error saying Server
>Connection Interrupted, and at times, refreshes and now reverts to the
>internal certificate and serves it. This goes away if I close the browser
>and reload the page. The correct certificate is again served, but again
>randomly reverts to the internal certificate.
>
> My expectation is that once deployed, accessing
> okd-cluster.cloud.mydomain.com should always use that address, and the
> certificate should be served correctly always.
>
> Is it because the common name is the same as the masterURL and the
> alternative name holds the same value as the publicMasterURL? I am not sure
> if this is the case, but it would be great to get some perspective on this
> problem I am seeing.
>
>
> Regards
>
> Gaurav
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


OpenShift Origin on AWS

2018-10-01 Thread Peter Heitman
I've created a CloudFormation Stack for simple lab-test deployments of
OpenShift Origin on AWS. Now I'd like to understand what would be best for
production deployments of OpenShift Origin on AWS. In particular I'd like
to create the corresponding CloudFormation Stack.

I've seen the Install Guide page on Configuring for AWS and I've looked
through the Red Hat QuickStart Guide for OpenShift Enterprise, but am still
missing information. For example, the Red Hat QuickStart Guide creates 3
masters, 3 etcd servers, and some number of compute nodes. Where are the
routers (infra nodes) located? On the masters or on the etcd servers? How
are the ELBs configured to work with those deployed routers? What if some
of the traffic you are routing is not http/https? What is required to
support that?
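
(For concreteness, the kind of layout in question, with dedicated infra nodes
carrying the routers, sketched as an openshift-ansible inventory; hosts,
counts, and values are illustrative placeholders rather than anything tested:)

  # Sketch only: illustrative hosts and values, not a tested inventory.
  cat > inventory.ini <<'EOF'
  [OSEv3:children]
  masters
  etcd
  nodes

  [OSEv3:vars]
  openshift_deployment_type=origin
  # one router pod per infra node
  openshift_hosted_router_replicas=3

  [masters]
  master[1:3].example.com

  [etcd]
  master[1:3].example.com

  [nodes]
  master[1:3].example.com openshift_node_group_name='node-config-master'
  infra[1:3].example.com  openshift_node_group_name='node-config-infra'
  node[1:3].example.com   openshift_node_group_name='node-config-compute'
  EOF

With a layout like that, the application ELB would forward 80/443 to the
infra nodes; as far as I know, non-HTTP(S) traffic generally has to bypass
the router entirely (e.g. NodePort or LoadBalancer services).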

I've seen the simple CloudFormation stack
(https://sysdig.com/blog/deploy-openshift-aws/) but haven't found anything
comparable for something that is closer to production-ready (and likely
takes advantage of the AWS VPC QuickStart,
https://aws.amazon.com/quickstart/architecture/vpc/).

Does anyone have any prior work that they could share or point me to?

Thanks in advance,

Peter Heitman
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Coming back to openshift. Looking for help rolling a 6-node lab running 3.10.

2018-10-01 Thread Wolf Noble
I played with openshift a few revs back, but didn’t have the hardware assembled 
to be able to give it a full test.

Now that I’ve assembled the gear I (think I) need, I’m starting to walk through 
the instructions, and I’m finding some spots that I’m uncertain about.

Most specifically (at the moment):

The overall environment has a /27 of public IPv4 space.
One unroutable /24 is for ‘generic LAN user traffic’.
One unroutable /24 is for ‘DMZ devices’.
I am using the unroutable network 198.18.100.0/24 for ‘openshift physical systems’:
3 masters
3 nodes
I plan on sticking a VIP for the masters on the HA firewall pair (pfSense) for
the entire environment.

I was thinking that I'd have a VIP (maybe more?) configured on the public space
for the masters, permitting access from the outside world to the workloads being
served through the cluster.

I doubt it's prudent to make everything available externally by default.
Does it make sense to have one VIP on the 198.18.100 network for node/master
<-> node/master comms, and one VIP on the public network for workloads?

This WAS my plan, but I saw the previous post from Gaurav today outlining his
difficulties when having differing publicMasterURL and masterURL variables
configured, and thought that it might be wise to pause and ask for
clarification and perhaps a touch of guidance before running down a
trap-laden path.
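
(For what it's worth, the split described above, an internal VIP for
node/master traffic and a public VIP for everything else, maps onto two
openshift-ansible inventory variables. The hostnames below are placeholders
and the variable names are given from memory, so treat this as a sketch:)

  # Internal VIP on the 198.18.100.0/24 network, used for node/master <->
  # master API traffic:
  openshift_master_cluster_hostname=api-int.lab.example.com
  # Public VIP that outside users hit for the console/API:
  openshift_master_cluster_public_hostname=api.lab.example.com

These roughly correspond to the masterURL / publicMasterURL values mentioned
in the earlier thread.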



Thanks in advance for any guidance or help.

W


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openshift origin all in one

2018-10-01 Thread Clayton Coleman
The all-in-one path David is referring to (openshift start) is not used by
minishift (which uses oc cluster up).

There will be a replacement path for the core functionality of running a
single master in a VM; we're still working out the details. The end goal
would be an equivalent, easy-to-use flow on a single machine that is more
aligned with the new installer, but we aren't there yet.

On Oct 1, 2018, at 9:31 AM, Fernando Lozano  wrote:

Without all-in-one, how will minishift work? I assume we still want an easy
to use option for developers.

On Mon, Oct 1, 2018 at 10:12 AM subscription sites <
subscription.si...@gmail.com> wrote:

> Hi David,
>
>
> so there will not be a possibility anymore to install on one host? Also no
> alternative for the use-cases that all-in-one covers today, such as
> experiment with openshift?
> Basically, the "oc cluster up" command disappears?
>
> Also: is this kind of decisions available somewhere online, like a public
> roadmap for the product?
>
> Kr,
>
> Peter
>
> On Mon, Oct 1, 2018 at 2:08 PM David Eads  wrote:
>
>> In the release after 3.11, the all-in-one will no longer be available and
>> because it isn't considered a production installation, we have no plans to
>> provide a clean migration from an all-in-one configuration.
>>
>> On Sun, Sep 30, 2018 at 3:56 PM Aleksandar Kostadinov <
>> akost...@redhat.com> wrote:
>>
>>> Here my personal thoughts and experience. Not some sort of official
>>> advice.
>>>
>>> subscription sites wrote on 09/29/18 18:40:
>>> > Hello,
>>> >
>>> >
>>> > I'm wondering with regard to the all-in-one setup:
>>> > - I know the documentation doesn't say it's considered production, but
>>> > what would the downside be of using this on a VPS to host production
>>> > apps? Except for the lack of redundancy obviously, the host goes down
>>> > and it's all down, but my alternative would be to not use openshift
>>> and
>>> > use plain docker on one host, so availability isn't my premium
>>> concern.
>>> > Is it not recommended from a security perspective, considering how
>>> it's
>>> > setup using "oc cluster up", or are there other concerns for not using
>>> > it in production?
>>>
>>> Except for missing on HA and running some non-app resources (console,
>>> node, controllers, etcd, router, etc.), then I see no other drawbacks.
>>>
>>> > - When setting up an all-in-one on an internet-exposed host, how can
>>> you
>>> > best protect the web console? Isn't it a bit "light" security wise to
>>> > just depend on username/password for protection? Is there a
>>> possibility
>>> > to use multifactor or certificate based authentication? I also tried
>>>
>>> Depends on how you choose and manage your password. For more options you
>>> can try to use keycloak auth provider. This should allow you to setup
>>> 2-factor auth IIRC.
>>>
>>> > blocking the port with iptables and using ssh with port forwarding,
>>> but
>>> > this doesn't seem to work, both if I set the public-master option to
>>> the
>>> > public ip or localhost?
>>>
>>> How does it fail when you set to localhost?
>>>
>>> I assume using some sort of VPN can also help but I don't see why `ssh`
>>> shouldn't work. An alternative would be to use `ssh -D` to proxy your
>>> traffic through the remote host and setup your browser to use that socks
>>> server when accessing console. But still think normal port forwarding
>>> should do the job.
>>>
>>> >
>>> >
>>> > Thanks for any help you can provide!
>>> >
>>> >
>>> > Regards,
>>> >
>>> >
>>> >
>>> > Peter
>>> >
>>> >
>>> > ___
>>> > users mailing list
>>> > users@lists.openshift.redhat.com
>>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>> >
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openshift origin all in one

2018-10-01 Thread Fernando Lozano
Without all-in-one, how will minishift work? I assume we still want an easy
to use option for developers.

On Mon, Oct 1, 2018 at 10:12 AM subscription sites <
subscription.si...@gmail.com> wrote:

> Hi David,
>
>
> so there will not be a possibility anymore to install on one host? Also no
> alternative for the use-cases that all-in-one covers today, such as
> experiment with openshift?
> Basically, the "oc cluster up" command disappears?
>
> Also: is this kind of decisions available somewhere online, like a public
> roadmap for the product?
>
> Kr,
>
> Peter
>
> On Mon, Oct 1, 2018 at 2:08 PM David Eads  wrote:
>
>> In the release after 3.11, the all-in-one will no longer be available and
>> because it isn't considered a production installation, we have no plans to
>> provide a clean migration from an all-in-one configuration.
>>
>> On Sun, Sep 30, 2018 at 3:56 PM Aleksandar Kostadinov <
>> akost...@redhat.com> wrote:
>>
>>> Here my personal thoughts and experience. Not some sort of official
>>> advice.
>>>
>>> subscription sites wrote on 09/29/18 18:40:
>>> > Hello,
>>> >
>>> >
>>> > I'm wondering with regard to the all-in-one setup:
>>> > - I know the documentation doesn't say it's considered production, but
>>> > what would the downside be of using this on a VPS to host production
>>> > apps? Except for the lack of redundancy obviously, the host goes down
>>> > and it's all down, but my alternative would be to not use openshift
>>> and
>>> > use plain docker on one host, so availability isn't my premium
>>> concern.
>>> > Is it not recommended from a security perspective, considering how
>>> it's
>>> > setup using "oc cluster up", or are there other concerns for not using
>>> > it in production?
>>>
>>> Except for missing on HA and running some non-app resources (console,
>>> node, controllers, etcd, router, etc.), then I see no other drawbacks.
>>>
>>> > - When setting up an all-in-one on an internet-exposed host, how can
>>> you
>>> > best protect the web console? Isn't it a bit "light" security wise to
>>> > just depend on username/password for protection? Is there a
>>> possibility
>>> > to use multifactor or certificate based authentication? I also tried
>>>
>>> Depends on how you choose and manage your password. For more options you
>>> can try to use keycloak auth provider. This should allow you to setup
>>> 2-factor auth IIRC.
>>>
>>> > blocking the port with iptables and using ssh with port forwarding,
>>> but
>>> > this doesn't seem to work, both if I set the public-master option to
>>> the
>>> > public ip or localhost?
>>>
>>> How does it fail when you set to localhost?
>>>
>>> I assume using some sort of VPN can also help but I don't see why `ssh`
>>> shouldn't work. An alternative would be to use `ssh -D` to proxy your
>>> traffic through the remote host and setup your browser to use that socks
>>> server when accessing console. But still think normal port forwarding
>>> should do the job.
>>>
>>> >
>>> >
>>> > Thanks for any help you can provide!
>>> >
>>> >
>>> > Regards,
>>> >
>>> >
>>> >
>>> > Peter
>>> >
>>> >
>>> > ___
>>> > users mailing list
>>> > users@lists.openshift.redhat.com
>>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>> >
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openshift origin all in one

2018-10-01 Thread subscription sites
Hi David,


so there will not be a possibility anymore to install on one host? Also no
alternative for the use-cases that all-in-one covers today, such as
experiment with openshift?
Basically, the "oc cluster up" command disappears?

Also: is this kind of decisions available somewhere online, like a public
roadmap for the product?

Kr,

Peter

On Mon, Oct 1, 2018 at 2:08 PM David Eads  wrote:

> In the release after 3.11, the all-in-one will no longer be available and
> because it isn't considered a production installation, we have no plans to
> provide a clean migration from an all-in-one configuration.
>
> On Sun, Sep 30, 2018 at 3:56 PM Aleksandar Kostadinov 
> wrote:
>
>> Here my personal thoughts and experience. Not some sort of official
>> advice.
>>
>> subscription sites wrote on 09/29/18 18:40:
>> > Hello,
>> >
>> >
>> > I'm wondering with regard to the all-in-one setup:
>> > - I know the documentation doesn't say it's considered production, but
>> > what would the downside be of using this on a VPS to host production
>> > apps? Except for the lack of redundancy obviously, the host goes down
>> > and it's all down, but my alternative would be to not use openshift and
>> > use plain docker on one host, so availability isn't my premium concern.
>> > Is it not recommended from a security perspective, considering how it's
>> > setup using "oc cluster up", or are there other concerns for not using
>> > it in production?
>>
>> Except for missing on HA and running some non-app resources (console,
>> node, controllers, etcd, router, etc.), then I see no other drawbacks.
>>
>> > - When setting up an all-in-one on an internet-exposed host, how can
>> you
>> > best protect the web console? Isn't it a bit "light" security wise to
>> > just depend on username/password for protection? Is there a possibility
>> > to use multifactor or certificate based authentication? I also tried
>>
>> Depends on how you choose and manage your password. For more options you
>> can try to use keycloak auth provider. This should allow you to setup
>> 2-factor auth IIRC.
>>
>> > blocking the port with iptables and using ssh with port forwarding, but
>> > this doesn't seem to work, both if I set the public-master option to
>> the
>> > public ip or localhost?
>>
>> How does it fail when you set to localhost?
>>
>> I assume using some sort of VPN can also help but I don't see why `ssh`
>> shouldn't work. An alternative would be to use `ssh -D` to proxy your
>> traffic through the remote host and setup your browser to use that socks
>> server when accessing console. But still think normal port forwarding
>> should do the job.
>>
>> >
>> >
>> > Thanks for any help you can provide!
>> >
>> >
>> > Regards,
>> >
>> >
>> >
>> > Peter
>> >
>> >
>> > ___
>> > users mailing list
>> > users@lists.openshift.redhat.com
>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openshift origin all in one

2018-10-01 Thread David Eads
In the release after 3.11, the all-in-one will no longer be available, and
because it isn't considered a production installation, we have no plans to
provide a clean migration from an all-in-one configuration.

On Sun, Sep 30, 2018 at 3:56 PM Aleksandar Kostadinov 
wrote:

> Here are my personal thoughts and experience, not some sort of official advice.
>
> subscription sites wrote on 09/29/18 18:40:
> > Hello,
> >
> >
> > I'm wondering with regard to the all-in-one setup:
> > - I know the documentation doesn't say it's considered production, but
> > what would the downside be of using this on a VPS to host production
> > apps? Except for the lack of redundancy obviously, the host goes down
> > and it's all down, but my alternative would be to not use openshift and
> > use plain docker on one host, so availability isn't my premium concern.
> > Is it not recommended from a security perspective, considering how it's
> > setup using "oc cluster up", or are there other concerns for not using
> > it in production?
>
> Except for missing HA and running some non-app resources (console,
> node, controllers, etcd, router, etc.), I see no other drawbacks.
>
> > - When setting up an all-in-one on an internet-exposed host, how can you
> > best protect the web console? Isn't it a bit "light" security wise to
> > just depend on username/password for protection? Is there a possibility
> > to use multifactor or certificate based authentication? I also tried
>
> Depends on how you choose and manage your password. For more options you
> can try to use keycloak auth provider. This should allow you to setup
> 2-factor auth IIRC.
>
> > blocking the port with iptables and using ssh with port forwarding, but
> > this doesn't seem to work, both if I set the public-master option to the
> > public ip or localhost?
>
> How does it fail when you set to localhost?
>
> I assume using some sort of VPN can also help but I don't see why `ssh`
> shouldn't work. An alternative would be to use `ssh -D` to proxy your
> traffic through the remote host and setup your browser to use that socks
> server when accessing console. But still think normal port forwarding
> should do the job.
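
(A minimal illustration of the two forwarding variants being discussed; the
host name and local port are placeholders, and the console/API is assumed to
listen on 8443:)

  # plain port forward, then browse to https://localhost:8443
  ssh -L 8443:localhost:8443 user@allinone.example.com

  # or a SOCKS proxy: point the browser's SOCKS5 settings at localhost:1080
  ssh -D 1080 user@allinone.example.com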
>
> >
> >
> > Thanks for any help you can provide!
> >
> >
> > Regards,
> >
> >
> >
> > Peter
> >
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Where to mount the NFS volume

2018-10-01 Thread Gaurav Ojha
Hi,

Just a quick question. I have a multi-master cluster, with 2 masters, 2
compute nodes and 2 infrastructure nodes, and I want to use NFS for
persistence. But I can't seem to answer a basic question: where do I
mount the volume? Do I mount it inside each compute node, or on the master or
the infra node?

My guess is that it cannot be the master node, and should be in both of the
compute nodes?
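
(For reference, the usual pattern: the NFS export is not pre-mounted on any
particular node. You define a PersistentVolume that points at it, and the
kubelet on whichever compute node ends up running the pod performs the mount;
the nodes just need the NFS client packages installed. A minimal sketch, with
a made-up server and path:)

  # Sketch only: server, path and size are placeholders.
  cat <<'EOF' | oc create -f -
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs-pv0001
  spec:
    capacity:
      storage: 5Gi
    accessModes:
      - ReadWriteMany
    nfs:
      server: nfs.example.com
      path: /exports/pv0001
    persistentVolumeReclaimPolicy: Retain
  EOF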

Regards
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


systemctl start openshift-node fails with: Unable to register node "node001" with API server:

2018-10-01 Thread Marc Ledent

Hi all,

While attempting to upgrade from 3.9 to 3.10, the startup of the node 
fails with the following error:


Unable to register node "node001" with API server: nodes "node001" is 
forbidden: node "node001.example.com" cannot modify node "node001"


This is while upgrading the master node, which is node001. I suspect
that it is related to the difference between the simple host name and the FQDN.
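
(A rough way to confirm that suspicion; the commands are generic, and the
node-config path/field reflect how 3.9-era nodes were configured, so they may
not match a 3.10 bootstrap node exactly:)

  hostname        # short name, e.g. node001
  hostname -f     # FQDN, e.g. node001.example.com
  oc get nodes    # names the API server already has registered
  # if the file exists, the locally configured node name:
  grep nodeName /etc/origin/node/node-config.yaml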


Can you help me on this?

Thanks in advance,
Marc




___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


OKD 3.10 keeps switching between the certificates

2018-10-01 Thread Gaurav Ojha
Basically facing two different issues.

   1. OpenShift Origin 3.10 keeps switching between the custom named
   certificate deployed and the internal certificate being used. The web
   console randomly reports Server Connection Interrupted, and then switches
   to the internal certificate, but a fresh loading of the page serves the
   custom certificate.
   2. Even though the publicMasterURL is configured, the browser still
   redirects to the masterURL

oc v3.10.0+0c4577e-1
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://lb.okd.cloud.rnoc.gatech.edu:8443
openshift v3.10.0+fd501dd-48
kubernetes v1.10.0+b81c8f8

Steps To Reproduce

   1. Configure a publicMasterURL and a masterURL. In my case they are
   publicMasterURL=okd-cluster.cloud.mydomain.com and masterURL=
   lb.cloud.mydomain.com. Note that here lb refers to the load balancer of
   my multi-master cluster.
   2. Deploy the certificates generated when installing through ansible.
   This works fine, I can see in my master-config.yml the correct values. The
   value for publicMasterURL points to okd-cluster.cloud.mydomain.com:8443
   and masterURL to lb.cloud.mydomain.com:8443. In the servingInfo, the
   correct certificates are pointed to. The generated certificate has a common
   name of lb.cloud.mydomain.com and an alternative name of
   okd-cluster.cloud.mydomain.com.
   3. Access the web console. The certificate served is valid.

Here, okd-cluster.cloud.mydomain.com is a CNAME to lb.cloud.mydomain.com
Current Result

   1. Even though I enter okd-cluster.cloud.mydomain.com:8443, the browser
   redirects to lb.cloud.mydomain.com:8443. I have checked and nowhere does
   the publicMasterURL point to lb.cloud.mydomain.com
   2. When logged in, the console randomly throws an error saying Server
   Connection Interrupted, and at times, refreshes and now reverts to the
   internal certificate and serves it. This goes away if I close the browser
   and reload the page. The correct certificate is again served, but again
   randomly reverts to the internal certificate.

My expectation is that once deployed, accessing
okd-cluster.cloud.mydomain.com should always use that address, and the
certificate should be served correctly always.

Is it because the common name is the same as the masterURL and the alternative
name holds the same value as the publicMasterURL? I am not sure if this is the
case, but it would be great to get some perspective on this problem I am
seeing.
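
(For reference, the knobs this question is about, sketched from a typical 3.x
master-config.yaml. The certificate file names are placeholders; only the
public name is expected to be covered by the named certificate, while the
default serving certificate keeps covering the internal masterURL:)

  # excerpt of /etc/origin/master/master-config.yaml (illustrative only)
  masterPublicURL: https://okd-cluster.cloud.mydomain.com:8443
  oauthConfig:
    masterPublicURL: https://okd-cluster.cloud.mydomain.com:8443
  servingInfo:
    namedCertificates:
    - certFile: named_certificates/custom.crt
      keyFile: named_certificates/custom.key
      names:
      - okd-cluster.cloud.mydomain.com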


Regards

Gaurav
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users