Re: Deployment Strategy: lifecycle hooks how to inject configuration

2018-02-21 Thread Graham Dumpleton
Another example of where this can be useful is when the primary process in the 
container doesn't do what is required of process ID 1, that is, reap zombie 
processes. If that becomes an issue, you can use a run-script wrapper like:

#!/bin/sh

# Forward termination signals to the application process.
trap 'kill -TERM $PID' TERM INT

# Start the real application in the background.
/usr/libexec/s2i/run &

PID=$!
# Wait for the application; this returns early if a trapped signal arrives.
wait $PID
# Restore default signal handling, then wait again to pick up the
# application's real exit status.
trap - TERM INT
wait $PID
STATUS=$?
exit $STATUS

This simple alternative to a minimal init process manager such as tini will work 
fine in many cases.

Replace /usr/libexec/s2i/run with the actual program to run.
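For illustration, here is a self-contained version of the same wrapper pattern, with `sleep` standing in for the real application (only the /usr/libexec/s2i/run path above is s2i-specific):

```shell
#!/bin/sh
# Demo of the signal-forwarding wrapper; `sleep` stands in for the real
# application so the script can run anywhere.

trap 'kill -TERM $CHILD' TERM INT   # forward termination signals to the child

sleep 1 &                           # start the "application" in the background
CHILD=$!

wait $CHILD                         # returns early if a trapped signal arrives
trap - TERM INT                     # restore default signal handling
wait $CHILD                         # collect the child's real exit status
STATUS=$?

echo "child exited with status $STATUS"
```

Run normally it reports status 0; if you send the script TERM while it runs, the signal is forwarded and the status reflects the killed child instead.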

Graham

> On 22 Feb 2018, at 9:33 am, Graham Dumpleton wrote:
> 
> Badly worded perhaps.
> 
> In some cases you don't have the ability to modify an existing image with the 
> application in it, plus you may not want to create a new custom image as a 
> layer on top. In those cases, if all you need to do is some minor tweaks to 
> config prior to the application starting in the container you can use the 
> configmap trick as described. It will work so long as the config files you 
> need to change can be modified as the user the container is run as.
> 
> So you can do:
> 
> oc create configmap blog-run-script --from-file=run
> 
> oc set volume dc/blog --add --type=configmap \
> --configmap-name=blog-run-script \
> --mount-path=/opt/app-root/scripts
> 
> oc patch dc/blog --type=json --patch \
> '[{"op":"add",
>"path":"/spec/template/spec/containers/0/command",
>"value":["bash","/opt/app-root/scripts/run"]}]'
> 
> So the 'run' script makes the changes and then executes the original command to 
> start the application in the container.
> 
> Graham
> 
>> On 22 Feb 2018, at 9:22 am, Fernando Lozano wrote:
>> 
>> Hi Graham,
>> 
>> This doesn't make sense to me:
>> 
>> >  3. If you don't want to create a new custom image.
>> 
>> If you want to run your application in a container you have to create a custom 
>> image with the application. There's no way around it, because container images 
>> are immutable. You can only choose how you will build your custom image. 
>> This is the way containers are supposed to work, with or without OpenShift.
>> 
>> 
>> []s, Fernando Lozano
>> 
>> 
>> On Wed, Feb 21, 2018 at 6:15 PM, Graham Dumpleton wrote:
>> 
>> 
>>> On 22 Feb 2018, at 3:21 am, Fernando Lozano wrote:
>>> 
>>> Hi Dan,
>>> 
>>> As you learned, lifecycle hooks were not made to change anything inside a 
>>> container image. Remember that container images are, by design, immutable. 
>>> It looks like you want to build a custom container image that includes your 
>>> customizations to the wildfly configs plus your application. There are two 
>>> ways to accomplish that with OpenShift:
>>> 
>>> 1. Create a Dockerfile that uses the standard wildfly container image as 
>>> the parent, and adds your customization.
>>> 
>>> 2. Use the OpenShift source-to-image (s2i) process to add configurations 
>>> and your application. See the OpenShift docs about the wildfly s2i builder 
>>> image for details; this is easier than using a Dockerfile. The standard s2i 
>>> process builds the application from sources, but it also supports feeding 
>>> an application war/ear.
>> 
>> 3. If you don't want to create a new custom image, but want to perform additional 
>> actions before the application starts in the container, mount a shell script 
>> into the container from a config map. Override the command for the pod to 
>> run the mounted script. Do your work in the script, with 
>> your script then doing an exec on the original command for the application.
>> 
>> Graham
>> 
>>> []s, Fernando Lozano
>>> 
>>> 
>>> On Wed, Feb 21, 2018 at 9:43 AM, Dan Pungă wrote:
>>> Hello all!
>>> 
>>> Trying to build an OShift configuration for running a Java app with a 
>>> Wildfly server.
>>> I've set this up with ChainBuilds, where the app's artifacts are combined 
>>> with a runtime image of Wildfly.
>>> 
>>> For this particular app, however, I need to do some configuration on the 
>>> Wildfly environment, so that the app is properly deployed and works.
>>> - update a server module (grabbing the contents from the web and copying 
>>> them in the right location inside Wildfly)
>>> - add system properties and some other configuration to Wildfly's 
>>> standalone.xml configuration file
>>> - create some directory structure
>>> 
>>> I've tried to run all this with the Recreate deployment strategy and as a 
>>> mid-hook procedure (so the previous deployment pod is scaled down), but all 
>>> these changes aren't reflected in the actual (new) deployment pod.
>>> 
>>> Taking a closer look at the docs, I've found this line "Pod-based lifecycle 
>>> hooks execute hook code in a new pod derived from the template in a 
>>> deployment configuration."

Re: Deployment Strategy: lifecycle hooks how to inject configuration

2018-02-21 Thread Graham Dumpleton
Badly worded perhaps.

In some cases you don't have the ability to modify an existing image with the 
application in it, plus you may not want to create a new custom image as a 
layer on top. In those cases, if all you need to do is some minor tweaks to 
config prior to the application starting in the container you can use the 
configmap trick as described. It will work so long as the config files you need 
to change can be modified as the user the container is run as.

So you can do:

oc create configmap blog-run-script --from-file=run

oc set volume dc/blog --add --type=configmap \
--configmap-name=blog-run-script \
--mount-path=/opt/app-root/scripts

oc patch dc/blog --type=json --patch \
'[{"op":"add",
   "path":"/spec/template/spec/containers/0/command",
   "value":["bash","/opt/app-root/scripts/run"]}]'

So the 'run' script makes the changes and then executes the original command to 
start the application in the container.
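To make that concrete, the mounted 'run' script might look something like this sketch (the config file path and the sed edit are made-up examples, and /usr/libexec/s2i/run stands for whatever command the image originally ran):

```shell
#!/bin/bash
# Hypothetical wrapper mounted from the configmap at /opt/app-root/scripts/run.
set -e

# Minor config tweak before startup; the file and the edit are illustrative only.
sed -i 's/^port=8080$/port=8181/' /opt/app-root/etc/app.conf

# Replace this shell with the image's original start command so the
# application receives signals directly.
exec /usr/libexec/s2i/run
```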

Graham

> On 22 Feb 2018, at 9:22 am, Fernando Lozano wrote:
> 
> Hi Graham,
> 
> This doesn't make sense to me:
> 
> >  3. If you don't want to create a new custom image.
> 
> If you want to run your application in a container you have to create a custom 
> image with the application. There's no way around it, because container images 
> are immutable. You can only choose how you will build your custom image. This 
> is the way containers are supposed to work, with or without OpenShift.
> 
> 
> []s, Fernando Lozano
> 
> 
> On Wed, Feb 21, 2018 at 6:15 PM, Graham Dumpleton wrote:
> 
> 
>> On 22 Feb 2018, at 3:21 am, Fernando Lozano wrote:
>> 
>> Hi Dan,
>> 
>> As you learned, lifecycle hooks were not made to change anything inside a 
>> container image. Remember that container images are, by design, immutable. 
>> It looks like you want to build a custom container image that includes your 
>> customizations to the wildfly configs plus your application. There are two 
>> ways to accomplish that with OpenShift:
>> 
>> 1. Create a Dockerfile that uses the standard wildfly container image as the 
>> parent, and adds your customization.
>> 
>> 2. Use the OpenShift source-to-image (s2i) process to add configurations and 
>> your application. See the OpenShift docs about the wildfly s2i builder image 
>> for details; this is easier than using a Dockerfile. The standard s2i 
>> process builds the application from sources, but it also supports feeding 
>> an application war/ear.
> 
> 3. If you don't want to create a new custom image, but want to perform additional 
> actions before the application starts in the container, mount a shell script 
> into the container from a config map. Override the command for the pod to run 
> the mounted script. Do your work in the script, with your 
> script then doing an exec on the original command for the application.
> 
> Graham
> 
>> []s, Fernando Lozano
>> 
>> 
>> On Wed, Feb 21, 2018 at 9:43 AM, Dan Pungă wrote:
>> Hello all!
>> 
>> Trying to build an OShift configuration for running a Java app with a 
>> Wildfly server.
>> I've set this up with ChainBuilds, where the app's artifacts are combined with 
>> a runtime image of Wildfly.
>> 
>> For this particular app, however, I need to do some configuration on the 
>> Wildfly environment, so that the app is properly deployed and works.
>> - update a server module (grabbing the contents from the web and copying 
>> them in the right location inside Wildfly)
>> - add system properties and some other configuration to Wildfly's 
>> standalone.xml configuration file
>> - create some directory structure
>> 
>> I've tried to run all this with the Recreate deployment strategy and as a 
>> mid-hook procedure (so the previous deployment pod is scaled down), but all 
>> these changes aren't reflected in the actual (new) deployment pod.
>> 
>> Taking a closer look at the docs, I've found this line "Pod-based lifecycle 
>> hooks execute hook code in a new pod derived from the template in a 
>> deployment configuration."
>> So whatever I'm doing in my hook, is actually done in a different pod, the 
>> hook pod, and not in the actual deployment pod. Did I understand this 
>> correctly?
>> If so, how does the injection work here? Does it have to do with the fact 
>> that the deployment has to have persistent volumes? So the hooks actually do 
>> changes inside a volume that will be mounted with the deployment pod too...
>> 
>> Thank you!
>> 
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com 
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users 
>> 
>> 
>> 

Re: Deployment Strategy: lifecycle hooks how to inject configuration

2018-02-21 Thread Fernando Lozano
Hi Graham,

This doesn't make sense to me:

>  3. If you don't want to create a new custom image.

If you want to run your application in a container you have to create a
custom image with the application. There's no way around it, because container
images are immutable. You can only choose how you will build your custom
image. This is the way containers are supposed to work, with or without
OpenShift.


[]s, Fernando Lozano


On Wed, Feb 21, 2018 at 6:15 PM, Graham Dumpleton wrote:

>
>
> On 22 Feb 2018, at 3:21 am, Fernando Lozano wrote:
>
> Hi Dan,
>
> As you learned, lifecycle hooks were not made to change anything inside a
> container image. Remember that container images are, by design, immutable.
> It looks like you want to build a custom container image that includes your
> customizations to the wildfly configs plus your application. There are two
> ways to accomplish that with OpenShift:
>
> 1. Create a Dockerfile that uses the standard wildfly container image as
> the parent, and adds your customization.
>
> 2. Use the OpenShift source-to-image (s2i) process to add configurations
> and your application. See the OpenShift docs about the wildfly s2i builder
> image for details; this is easier than using a Dockerfile. The standard s2i
> process builds the application from sources, but it also supports feeding
> an application war/ear.
>
>
> 3. If you don't want to create a new custom image, but want to perform additional
> actions before the application starts in the container, mount a shell script
> into the container from a config map. Override the command for the pod to
> run the mounted script. Do your work in the script, with
> your script then doing an exec on the original command for the application.
>
> Graham
>
> []s, Fernando Lozano
>
>
> On Wed, Feb 21, 2018 at 9:43 AM, Dan Pungă wrote:
>
>> Hello all!
>>
>> Trying to build an OShift configuration for running a Java app with a
>> Wildfly server.
>> I've set this up with ChainBuilds, where the app's artifacts are combined
>> with a runtime image of Wildfly.
>>
>> For this particular app, however, I need to do some configuration on the
>> Wildfly environment, so that the app is properly deployed and works.
>> - update a server module (grabbing the contents from the web and copying
>> them in the right location inside Wildfly)
>> - add system properties and some other configuration to Wildfly's
>> standalone.xml configuration file
>> - create some directory structure
>>
>> I've tried to run all this with the Recreate deployment strategy and as
>> a mid-hook procedure (so the previous deployment pod is scaled down), but
>> all these changes aren't reflected in the actual (new) deployment pod.
>>
>> Taking a closer look at the docs, I've found this line "Pod-based
>> lifecycle hooks execute hook code in a new pod derived from the template in
>> a deployment configuration."
>> So whatever I'm doing in my hook, is actually done in a different pod,
>> the hook pod, and not in the actual deployment pod. Did I understand this
>> correctly?
>> If so, how does the injection work here? Does it have to do with the fact
>> that the deployment *has to have* persistent volumes? So the hooks
>> actually do changes inside a volume that will be mounted with the
>> deployment pod too...
>>
>> Thank you!
>>
>>
>>
>>
>
>
>


Re: Deployment Strategy: lifecycle hooks how to inject configuration

2018-02-21 Thread Graham Dumpleton


> On 22 Feb 2018, at 3:21 am, Fernando Lozano wrote:
> 
> Hi Dan,
> 
> As you learned, lifecycle hooks were not made to change anything inside a 
> container image. Remember that container images are, by design, immutable. It 
> looks like you want to build a custom container image that includes your 
> customizations to the wildfly configs plus your application. There are two 
> ways to accomplish that with OpenShift:
> 
> 1. Create a Dockerfile that uses the standard wildfly container image as the 
> parent, and adds your customization.
> 
> 2. Use the OpenShift source-to-image (s2i) process to add configurations and 
> your application. See the OpenShift docs about the wildfly s2i builder image 
> for details; this is easier than using a Dockerfile. The standard s2i 
> process builds the application from sources, but it also supports feeding 
> an application war/ear.

3. If you don't want to create a new custom image, but want to perform additional 
actions before the application starts in the container, mount a shell script into 
the container from a config map. Override the command for the pod to run the 
mounted script. Do your work in the script, with your script 
then doing an exec on the original command for the application.

Graham

> []s, Fernando Lozano
> 
> 
> On Wed, Feb 21, 2018 at 9:43 AM, Dan Pungă wrote:
> Hello all!
> 
> Trying to build an OShift configuration for running a Java app with a Wildfly 
> server.
> I've set this up with ChainBuilds, where the app's artifacts are combined with 
> a runtime image of Wildfly.
> 
> For this particular app, however, I need to do some configuration on the 
> Wildfly environment, so that the app is properly deployed and works.
> - update a server module (grabbing the contents from the web and copying them 
> in the right location inside Wildfly)
> - add system properties and some other configuration to Wildfly's 
> standalone.xml configuration file
> - create some directory structure
> 
> I've tried to run all this with the Recreate deployment strategy and as a 
> mid-hook procedure (so the previous deployment pod is scaled down), but all 
> these changes aren't reflected in the actual (new) deployment pod.
> 
> Taking a closer look at the docs, I've found this line "Pod-based lifecycle 
> hooks execute hook code in a new pod derived from the template in a 
> deployment configuration."
> So whatever I'm doing in my hook, is actually done in a different pod, the 
> hook pod, and not in the actual deployment pod. Did I understand this 
> correctly?
> If so, how does the injection work here? Does it have to do with the fact 
> that the deployment has to have persistent volumes? So the hooks actually do 
> changes inside a volume that will be mounted with the deployment pod too...
> 
> Thank you!
> 
> 
> 
> 
> 



Re: How to use DNS hostname of OpenShift on AWS

2018-02-21 Thread Joel Pearson
Michael, are you running OpenShift on AWS?

https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/aws-ansible
is the AWS reference architecture and it does use openshift-ansible once
the infrastructure is built, but it uses a dynamic inventory.

Not using the AWS reference architecture to install OpenShift isn't an
option for us; it would be rather painful, as we're relying heavily on
CloudFormation and the dynamic inventory.

While ansible is running the hostnames are correct, so I suspect that
either OpenShift itself is detecting the cloud provider and overriding the
hostname, or the ansible playbook is doing something similar. Inside
the ansible openshift_facts python library I saw some custom hostname
handling for Google Cloud but not for AWS, which made me suspect the
override might be hiding somewhere else.
On Wed, 21 Feb 2018 at 11:38 pm, Feld, Michael (IMS) wrote:

> Deploying with https://github.com/openshift/openshift-ansible you can
> define the hostnames in your inventory file. There is a sample inventory
> file at
> https://docs.openshift.org/latest/install_config/install/advanced_install.html
> that shows how to define the master/etcd/nodes, and those names should be
> used as the hostnames in the cluster.
>
>
>
> *From:* users-boun...@lists.openshift.redhat.com *On Behalf Of *Joel Pearson
> *Sent:* Wednesday, February 21, 2018 7:14 AM
> *To:* users 
> *Subject:* How to use DNS hostname of OpenShift on AWS
>
>
>
> Hi,
>
>
>
> I'm trying to figure out how to use the DNS hostname when deploying
> OpenShift on AWS using
> https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/aws-ansible
>  Currently
> it uses private dns name, eg, ip-10-2-7-121.ap-southeast-2.compute.internal
> but that isn't too useful a name for me. I've managed to set the hostname
> on the ec2 instance properly by disabling the relevant cloud-init setting,
> but it still grabs the private dns name somehow.
>
>
>
> I tried adding "openshift_hostname" to be the same as "name" on this line:
> https://github.com/openshift/openshift-ansible-contrib/blob/master/reference-architecture/aws-ansible/playbooks/roles/instance-groups/tasks/main.yaml#L11
>
>
>
> Which did set the hostname in the node-config.yaml, but then when running
> "oc get nodes" it still returned the private dns name somehow, and
> installation failed waiting for the nodes to start properly; I guess there's
> a mismatch between node names somewhere.
>
>
>
> I found an old github issue, but it's all referring to files in ansible
> that exist no longer:
>
> https://github.com/openshift/openshift-ansible/issues/1170
>
>
>
> Even on OpenShift Online Starter, they're using the default ec2 names,
> eg: ip-172-31-28-11.ca-central-1.compute.internal, which isn't a good sign
> I guess.
>
>
>
> Has anyone successfully used a DNS name for OpenShift on AWS?
>
>
>
> Thanks,
>
>
>
> Joel
>
> --
>
> Information in this e-mail may be confidential. It is intended only for
> the addressee(s) identified above. If you are not the addressee(s), or an
> employee or agent of the addressee(s), please note that any dissemination,
> distribution, or copying of this communication is strictly prohibited. If
> you have received this e-mail in error, please notify the sender of the
> error.
>


Re: Deployment Strategy: lifecycle hooks how to inject configuration

2018-02-21 Thread Fernando Lozano
Hi Dan,

As you learned, lifecycle hooks were not made to change anything inside a
container image. Remember that container images are, by design, immutable.
It looks like you want to build a custom container image that includes your
customizations to the wildfly configs plus your application. There are two
ways to accomplish that with OpenShift:

1. Create a Dockerfile that uses the standard wildfly container image as
the parent, and adds your customization.

2. Use the OpenShift source-to-image (s2i) process to add configurations
and your application. See the OpenShift docs about the wildfly s2i builder
image for details; this is easier than using a Dockerfile. The standard s2i
process builds the application from sources, but it also supports feeding
an application war/ear.


[]s, Fernando Lozano


On Wed, Feb 21, 2018 at 9:43 AM, Dan Pungă wrote:

> Hello all!
>
> Trying to build an OShift configuration for running a Java app with a
> Wildfly server.
> I've set this up with ChainBuilds, where the app's artifacts are combined
> with a runtime image of Wildfly.
>
> For this particular app, however, I need to do some configuration on the
> Wildfly environment, so that the app is properly deployed and works.
> - update a server module (grabbing the contents from the web and copying
> them in the right location inside Wildfly)
> - add system properties and some other configuration to Wildfly's
> standalone.xml configuration file
> - create some directory structure
>
> I've tried to run all this with the Recreate deployment strategy and as a
> mid-hook procedure (so the previous deployment pod is scaled down), but all
> these changes aren't reflected in the actual (new) deployment pod.
>
> Taking a closer look at the docs, I've found this line "Pod-based
> lifecycle hooks execute hook code in a new pod derived from the template in
> a deployment configuration."
> So whatever I'm doing in my hook, is actually done in a different pod, the
> hook pod, and not in the actual deployment pod. Did I understand this
> correctly?
> If so, how does the injection work here? Does it have to do with the fact
> that the deployment *has to have* persistent volumes? So the hooks
> actually do changes inside a volume that will be mounted with the
> deployment pod too...
>
> Thank you!
>
>
>
>


upgrading from Origin 3.6 to 3.6.1

2018-02-21 Thread Thomas Hikade
Hi all,

what's the recommended way to upgrade from Origin 3.6 to 3.6.1 (rpm-based
install)? There seems to be no upgrade playbook.

I set up a cluster when 3.6 was out, and adding new nodes *now* (using
openshift-ansible/playbooks/byo/openshift-node/scaleup.yml) will install
the 3.6.1 version, even though the inventory variable still points to 3.6!

 - snip -
openshift_release=v3.6
#openshift_image_tag=v3.6.0
#openshift_pkg_version=-3.6.0
#openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
#openshift_hosted_metrics_deployer_version=3.6.0
#openshift_hosted_logging_deployer_version=3.6.0
 - snip -
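Presumably the commented-out variables above are the relevant pins; an untested sketch of holding everything at 3.6.0 would be:

```ini
openshift_release=v3.6
# pin both the container image tag and the rpm version so scaleup
# installs 3.6.0 rather than the latest 3.6.x
openshift_image_tag=v3.6.0
openshift_pkg_version=-3.6.0
```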


# rpm -qa | grep origin
origin-3.6.1-1.0.008f2d5.x86_64
origin-sdn-ovs-3.6.1-1.0.008f2d5.x86_64
origin-docker-excluder-3.6.1-1.0.008f2d5.noarch
origin-clients-3.6.1-1.0.008f2d5.x86_64
tuned-profiles-origin-node-3.6.1-1.0.008f2d5.x86_64
origin-excluder-3.6.1-1.0.008f2d5.noarch
origin-node-3.6.1-1.0.008f2d5.x86_64

Thank you!


Re: Deployment Strategy: lifecycle hooks how to inject configuration

2018-02-21 Thread Tomas Nozicka
On Wed, 2018-02-21 at 14:43 +0200, Dan Pungă wrote:
> Hello all!
> 
> Trying to build an OShift configuration for running a Java app with a
> Wildfly server.
> I've set this up with ChainBuilds, where the app's artifacts are
> combined with a runtime image of Wildfly.
> 
> For this particular app, however, I need to do some configuration on
> the Wildfly environment, so that the app is properly deployed and
> works.
> - update a server module (grabbing the contents from the web and
> copying them in the right location inside Wildfly)
> - add system properties and some other configuration to Wildfly's
> standalone.xml configuration file
> - create some directory structure
> 
> I've tried to run all this with the Recreate deployment strategy and
> as a mid-hook procedure (so the previous deployment pod is scaled
> down), but all these changes aren't reflected in the actual (new)
> deployment pod.
> 
> Taking a closer look at the docs, I've found this line "Pod-based
> lifecycle hooks execute hook code in a new pod derived from the
> template in a deployment configuration."
> So whatever I'm doing in my hook, is actually done in a different
> pod, the hook pod, and not in the actual deployment pod. Did I
> understand this correctly?
Yes, the hook runs in a different pod. If you want to share its results with
the other (regular app) pods you need a persistent volume; not that I would
recommend this approach.

Likely there is a better solution, like having the directory structure and
other static layout baked into the image, and sharing configuration by
using e.g. a configmap mounted into the pods, or env vars.

possibly check
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
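A minimal sketch of that init-container approach (the names, image, and paths are all hypothetical):

```yaml
spec:
  template:
    spec:
      initContainers:
      - name: setup                    # hypothetical: prepares shared config
        image: myapp-runtime           # hypothetical image name
        command: ["sh", "-c", "cp -r /defaults/. /config/"]
        volumeMounts:
        - name: shared-config
          mountPath: /config
      containers:
      - name: app
        volumeMounts:
        - name: shared-config
          mountPath: /opt/app-root/config
      volumes:
      - name: shared-config
        emptyDir: {}                   # shared scratch space, not persistent
```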


> If so, how does the injection work here? Does it have to do with the
> fact that the deployment has to have persistent volumes? So the hooks
> actually do changes inside a volume that will be mounted with the
> deployment pod too...

> 
> Thank you!
> 



RE: How to use DNS hostname of OpenShift on AWS

2018-02-21 Thread Feld, Michael (IMS)
Deploying with https://github.com/openshift/openshift-ansible you can define 
the hostnames in your inventory file. There is a sample inventory file at 
https://docs.openshift.org/latest/install_config/install/advanced_install.html 
that shows how to define the master/etcd/nodes, and those names should be used 
as the hostnames in the cluster.
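For example, a minimal inventory fragment along those lines (hostnames and labels are placeholders):

```ini
[masters]
master1.example.com

[etcd]
master1.example.com

[nodes]
master1.example.com openshift_schedulable=false
node1.example.com openshift_node_labels="{'region': 'primary'}"
node2.example.com
```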

From: users-boun...@lists.openshift.redhat.com On Behalf Of Joel Pearson
Sent: Wednesday, February 21, 2018 7:14 AM
To: users 
Subject: How to use DNS hostname of OpenShift on AWS

Hi,

I'm trying to figure out how to use the DNS hostname when deploying OpenShift 
on AWS using 
https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/aws-ansible
 Currently it uses private dns name, eg, 
ip-10-2-7-121.ap-southeast-2.compute.internal but that isn't too useful a name 
for me.  I've managed to set the hostname on the ec2 instance properly but 
disabling the relevant cloud-init setting, but it still grabs the private dns 
name somehow.

I tried adding "openshift_hostname" to be the same as "name" on this line: 
https://github.com/openshift/openshift-ansible-contrib/blob/master/reference-architecture/aws-ansible/playbooks/roles/instance-groups/tasks/main.yaml#L11

Which did set the hostname in the node-config.yaml, but then when running "oc 
get nodes" it still returned the private dns name somehow, and installation 
failed waiting for the nodes to start properly; I guess there's a mismatch 
between node names somewhere.

I found an old github issue, but it's all referring to files in ansible that 
exist no longer:
https://github.com/openshift/openshift-ansible/issues/1170

Even on OpenShift Online Starter, they're using the default ec2 names, eg: 
ip-172-31-28-11.ca-central-1.compute.internal, which isn't a good sign I guess.

Has anyone successfully used a DNS name for OpenShift on AWS?

Thanks,

Joel





How to use DNS hostname of OpenShift on AWS

2018-02-21 Thread Joel Pearson
Hi,

I'm trying to figure out how to use the DNS hostname when deploying
OpenShift on AWS using
https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/aws-ansible
Currently
it uses private dns name, eg, ip-10-2-7-121.ap-southeast-2.compute.internal
but that isn't too useful a name for me. I've managed to set the hostname
on the ec2 instance properly by disabling the relevant cloud-init setting,
but it still grabs the private dns name somehow.

I tried adding "openshift_hostname" to be the same as "name" on this line:
https://github.com/openshift/openshift-ansible-contrib/blob/master/reference-architecture/aws-ansible/playbooks/roles/instance-groups/tasks/main.yaml#L11

Which did set the hostname in the node-config.yaml, but then when running
"oc get nodes" it still returned the private dns name somehow, and
installation failed waiting for the nodes to start properly; I guess there's
a mismatch between node names somewhere.

I found an old github issue, but it's all referring to files in ansible
that exist no longer:
https://github.com/openshift/openshift-ansible/issues/1170

Even on OpenShift Online Starter, they're using the default ec2 names,
eg: ip-172-31-28-11.ca-central-1.compute.internal, which isn't a good sign
I guess.

Has anyone successfully used a DNS name for OpenShift on AWS?

Thanks,

Joel