Re: Using Environment Variables within context.xml - Tomcat 8 Source 2 Image

2018-01-17 Thread Louis Santillan
David,

Try adding `env.` to your variables (e.g. `${env.MAPPING_JNDI}`) [0].  You
can also verify that the vars are set the way you expect using `oc rsh ...`
or `oc debug ...` (in the case of a failed pod).

[0] https://access.redhat.com/solutions/3190862
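
For illustration, a context.xml Resource entry using the env-prefixed form
might look roughly like this (a minimal sketch: the resource name, type, and
driver attributes below are my assumptions, since the originals were stripped
from your mail, and it assumes the image wires Tomcat's property replacement
to environment variables as described in [0]):

    <Context>
        <Resource name="jdbc/mapping"
                  auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="org.postgresql.Driver"
                  url="${env.MAPPING_URL}"
                  username="${env.MAPPING_USERNAME}"
                  password="${env.MAPPING_PASSWORD}"/>
    </Context>

To double-check the values inside a running pod:

    oc rsh <pod-name> env | grep MAPPING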

___

LOUIS P. SANTILLAN

Architect, OPENSHIFT, MIDDLEWARE & DEVOPS

Red Hat Consulting,  Container and PaaS Practice

lsant...@redhat.com   M: 3236334854

TRIED. TESTED. TRUSTED. 



On Wed, Jan 17, 2018 at 2:23 AM, David Gibson wrote:

> Hello,
>
> I was wondering if it is possible to achieve the following:
>
> We have created a GeoServer web app using the Tomcat 8 source-to-image
> build; however, we require this app to connect to 3 external databases to
> retrieve the spatial data.
>
> To build our application we are using the Jenkins S2I and have created a
> build pipeline that will build, deploy and promote the application through
> various stages, e.g. dev, test, prod.
>
> Using the Tomcat Source 2 Image, the app has been created and the war file
> gets deployed along with the context.xml file specific to the application.
> If we hardcode all the values in the context.xml file, this works for an
> individual environment.
>
> I have read that in OpenShift version 2 it was possible to substitute the
> values in the context.xml file with environment variables; however, this
> does not seem to work.
>
> What we have is:
>
> context.xml:
>
>   url = "${MAPPING_URL}"
>
> etc.
>
>   url = "${OSMAP_URL}"
>
> In the deploy template we have these values configured as environment
> variables, like so:
>
> - name: "MAPPING_JNDI"
>   value: ${MAPPING_JNDI}
>
> where the values are read in from a properties file.
>
> If I use the terminal to inspect the pod I can see that the environment
> variables are all set correctly; however, the JNDI lookup fails as the
> values have not been substituted. Is it possible to do this?
>
> Thanks,
>
> David
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenStack cloud provider problems

2018-01-17 Thread Tim Dudgeon
No, not yet, but first I think I need to understand what OpenShift is 
trying to do at this point.


Any Red Hatters out there who understand this?


On 17/01/18 10:56, Joel Pearson wrote:
Have you tried an OpenStack users list? It sounds like you need 
someone with in-depth OpenStack knowledge.
On Wed, 17 Jan 2018 at 9:55 pm, Tim Dudgeon wrote:


So what does "complete an install" entail?
Presumably OpenShift/Kubernetes is trying to do something in
OpenStack but this is failing.

But what is it trying to do?


On 17/01/18 10:49, Joel Pearson wrote:

Complete stab in the dark, but maybe your OpenStack account
doesn’t have enough privileges to be able to complete an install?
On Wed, 17 Jan 2018 at 9:46 pm, Tim Dudgeon wrote:

I'm still having problems getting the OpenStack cloud
provider running.

I have a minimal OpenShift Origin 3.7 Ansible install that
runs OK. But
when I add the definition for the OpenStack cloud provider
(just the
cloud provider definition, nothing yet that uses it) the
installation
fails like this:

TASK [nickhammond.logrotate : nickhammond.logrotate | Setup
logrotate.d
scripts]

***

RUNNING HANDLER [openshift_node : restart node]


FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
fatal: [orndev-node-000]: FAILED! => {"attempts": 3,
"changed": false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited
with error
code. See \"systemctl status origin-node.service\" and
\"journalctl
-xe\" for details.\n"}
fatal: [orndev-node-001]: FAILED! => {"attempts": 3,
"changed": false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited
with error
code. See \"systemctl status origin-node.service\" and
\"journalctl
-xe\" for details.\n"}
fatal: [orndev-master-000]: FAILED! => {"attempts": 3,
"changed": false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited
with error
code. See \"systemctl status origin-node.service\" and
\"journalctl
-xe\" for details.\n"}
fatal: [orndev-node-002]: FAILED! => {"attempts": 3,
"changed": false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited
with error
code. See \"systemctl status origin-node.service\" and
\"journalctl
-xe\" for details.\n"}
fatal: [orndev-infra-000]: FAILED! => {"attempts": 3,
"changed": false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited
with error
code. See \"systemctl status origin-node.service\" and
\"journalctl
-xe\" for details.\n"}

RUNNING HANDLER [openshift_node : reload systemd units]


 to retry, use: --limit
@/home/centos/openshift-ansible/playbooks/byo/config.retry


Looking on one of the nodes I see this error in the
origin-node.service
logs:

Jan 17 09:40:49 orndev-master-000 origin-node[2419]: E0117
09:40:49.746806    2419 kubelet_node_status.go:106] Unable to
register
node "orndev-master-000" with API server: nodes

Re: OpenStack cloud provider problems

2018-01-17 Thread Joel Pearson
Have you tried an OpenStack users list? It sounds like you need someone
with in-depth OpenStack knowledge.
On Wed, 17 Jan 2018 at 9:55 pm, Tim Dudgeon wrote:

> So what does "complete an install" entail?
> Presumably  OpenShift/Kubernetes is trying to do something in OpenStack
> but this is failing.
>
> But what is it trying to do?
>
> On 17/01/18 10:49, Joel Pearson wrote:
>
> Complete stab in the dark, but maybe your OpenStack account doesn’t have
> enough privileges to be able to complete an install?
> On Wed, 17 Jan 2018 at 9:46 pm, Tim Dudgeon wrote:
>
>> I'm still having problems getting the OpenStack cloud provider running.
>>
>> I have a minimal OpenShift Origin 3.7 Ansible install that runs OK. But
>> when I add the definition for the OpenStack cloud provider (just the
>> cloud provider definition, nothing yet that uses it) the installation
>> fails like this:
>>
>> TASK [nickhammond.logrotate : nickhammond.logrotate | Setup logrotate.d
>> scripts]
>>
>> ***
>>
>> RUNNING HANDLER [openshift_node : restart node]
>>
>> 
>> FAILED - RETRYING: restart node (3 retries left).
>> FAILED - RETRYING: restart node (3 retries left).
>> FAILED - RETRYING: restart node (3 retries left).
>> FAILED - RETRYING: restart node (3 retries left).
>> FAILED - RETRYING: restart node (3 retries left).
>> FAILED - RETRYING: restart node (2 retries left).
>> FAILED - RETRYING: restart node (2 retries left).
>> FAILED - RETRYING: restart node (2 retries left).
>> FAILED - RETRYING: restart node (2 retries left).
>> FAILED - RETRYING: restart node (2 retries left).
>> FAILED - RETRYING: restart node (1 retries left).
>> FAILED - RETRYING: restart node (1 retries left).
>> FAILED - RETRYING: restart node (1 retries left).
>> FAILED - RETRYING: restart node (1 retries left).
>> FAILED - RETRYING: restart node (1 retries left).
>> fatal: [orndev-node-000]: FAILED! => {"attempts": 3, "changed": false,
>> "msg": "Unable to restart service origin-node: Job for
>> origin-node.service failed because the control process exited with error
>> code. See \"systemctl status origin-node.service\" and \"journalctl
>> -xe\" for details.\n"}
>> fatal: [orndev-node-001]: FAILED! => {"attempts": 3, "changed": false,
>> "msg": "Unable to restart service origin-node: Job for
>> origin-node.service failed because the control process exited with error
>> code. See \"systemctl status origin-node.service\" and \"journalctl
>> -xe\" for details.\n"}
>> fatal: [orndev-master-000]: FAILED! => {"attempts": 3, "changed": false,
>> "msg": "Unable to restart service origin-node: Job for
>> origin-node.service failed because the control process exited with error
>> code. See \"systemctl status origin-node.service\" and \"journalctl
>> -xe\" for details.\n"}
>> fatal: [orndev-node-002]: FAILED! => {"attempts": 3, "changed": false,
>> "msg": "Unable to restart service origin-node: Job for
>> origin-node.service failed because the control process exited with error
>> code. See \"systemctl status origin-node.service\" and \"journalctl
>> -xe\" for details.\n"}
>> fatal: [orndev-infra-000]: FAILED! => {"attempts": 3, "changed": false,
>> "msg": "Unable to restart service origin-node: Job for
>> origin-node.service failed because the control process exited with error
>> code. See \"systemctl status origin-node.service\" and \"journalctl
>> -xe\" for details.\n"}
>>
>> RUNNING HANDLER [openshift_node : reload systemd units]
>>
>> 
>>  to retry, use: --limit
>> @/home/centos/openshift-ansible/playbooks/byo/config.retry
>>
>>
>> Looking on one of the nodes I see this error in the origin-node.service
>> logs:
>>
>> Jan 17 09:40:49 orndev-master-000 origin-node[2419]: E0117
>> 09:40:49.746806    2419 kubelet_node_status.go:106] Unable to register
>> node "orndev-master-000" with API server: nodes "orndev-master-000" is
>> forbidden: node 10.0.0.6 cannot modify node orndev-master-000
>>
>> The /etc/origin/cloudprovider/openstack.conf file has been created OK,
>> and looks to be what is expected.
>> But I can't be sure it's specified correctly and will work. In fact if I
>> deliberately change the configuration to use an invalid OpenStack
>> username the install fails at the same place, but the error message on
>> the node is different:
>>
>> Jan 17 10:08:58 orndev-master-000 origin-node[24066]: F0117
>> 10:08:58.474152   24066 start_node.go:159] could not init cloud provider
>> "openstack": Authentication failed
>>
>> When set back to the right username the node service again fails because
>> of:
>> Unable to register node 

Re: OpenStack cloud provider problems

2018-01-17 Thread Tim Dudgeon

So what does "complete an install" entail?
Presumably OpenShift/Kubernetes is trying to do something in OpenStack 
but this is failing.


But what is it trying to do?


On 17/01/18 10:49, Joel Pearson wrote:
Complete stab in the dark, but maybe your OpenStack account doesn’t 
have enough privileges to be able to complete an install?
On Wed, 17 Jan 2018 at 9:46 pm, Tim Dudgeon wrote:


I'm still having problems getting the OpenStack cloud provider
running.

I have a minimal OpenShift Origin 3.7 Ansible install that runs
OK. But
when I add the definition for the OpenStack cloud provider (just the
cloud provider definition, nothing yet that uses it) the installation
fails like this:

TASK [nickhammond.logrotate : nickhammond.logrotate | Setup
logrotate.d
scripts]

***

RUNNING HANDLER [openshift_node : restart node]


FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (3 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (2 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
FAILED - RETRYING: restart node (1 retries left).
fatal: [orndev-node-000]: FAILED! => {"attempts": 3, "changed": false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited with
error
code. See \"systemctl status origin-node.service\" and \"journalctl
-xe\" for details.\n"}
fatal: [orndev-node-001]: FAILED! => {"attempts": 3, "changed": false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited with
error
code. See \"systemctl status origin-node.service\" and \"journalctl
-xe\" for details.\n"}
fatal: [orndev-master-000]: FAILED! => {"attempts": 3, "changed":
false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited with
error
code. See \"systemctl status origin-node.service\" and \"journalctl
-xe\" for details.\n"}
fatal: [orndev-node-002]: FAILED! => {"attempts": 3, "changed": false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited with
error
code. See \"systemctl status origin-node.service\" and \"journalctl
-xe\" for details.\n"}
fatal: [orndev-infra-000]: FAILED! => {"attempts": 3, "changed":
false,
"msg": "Unable to restart service origin-node: Job for
origin-node.service failed because the control process exited with
error
code. See \"systemctl status origin-node.service\" and \"journalctl
-xe\" for details.\n"}

RUNNING HANDLER [openshift_node : reload systemd units]


 to retry, use: --limit
@/home/centos/openshift-ansible/playbooks/byo/config.retry


Looking on one of the nodes I see this error in the
origin-node.service
logs:

Jan 17 09:40:49 orndev-master-000 origin-node[2419]: E0117
09:40:49.746806    2419 kubelet_node_status.go:106] Unable to register
node "orndev-master-000" with API server: nodes "orndev-master-000" is
forbidden: node 10.0.0.6 cannot modify node orndev-master-000

The /etc/origin/cloudprovider/openstack.conf file has been created OK,
and looks to be what is expected.
But I can't be sure it's specified correctly and will work. In fact
if I
deliberately change the configuration to use an invalid OpenStack
username the install fails at the same place, but the error message on
the node is different:

Jan 17 10:08:58 orndev-master-000 origin-node[24066]: F0117
10:08:58.474152   24066 start_node.go:159] could not init cloud
provider
"openstack": Authentication failed

When set back to the right username the node service again fails
because of:
Unable to register node "orndev-master-000" with API server: nodes
"orndev-master-000" is 

Re: OpenStack cloud provider problems

2018-01-17 Thread Joel Pearson
Complete stab in the dark, but maybe your OpenStack account doesn’t have
enough privileges to be able to complete an install?
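
One quick way to rule that out is to try the same credentials that are in
/etc/origin/cloudprovider/openstack.conf directly against the OpenStack API.
A rough sketch (it assumes the openstack CLI is available and the
placeholders are filled in from your config; Keystone v3 may also need
--os-user-domain-name and --os-project-domain-name):

    openstack --os-auth-url <auth-url> \
              --os-username <username> \
              --os-password <password> \
              --os-project-name <project> \
              token issue

and then, with the same flags, "openstack server list" to exercise the
compute API. If token issue fails it's the credentials; if server list
fails it's more likely privileges.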
On Wed, 17 Jan 2018 at 9:46 pm, Tim Dudgeon wrote:

> I'm still having problems getting the OpenStack cloud provider running.
>
> I have a minimal OpenShift Origin 3.7 Ansible install that runs OK. But
> when I add the definition for the OpenStack cloud provider (just the
> cloud provider definition, nothing yet that uses it) the installation
> fails like this:
>
> TASK [nickhammond.logrotate : nickhammond.logrotate | Setup logrotate.d
> scripts]
>
> ***
>
> RUNNING HANDLER [openshift_node : restart node]
>
> 
> FAILED - RETRYING: restart node (3 retries left).
> FAILED - RETRYING: restart node (3 retries left).
> FAILED - RETRYING: restart node (3 retries left).
> FAILED - RETRYING: restart node (3 retries left).
> FAILED - RETRYING: restart node (3 retries left).
> FAILED - RETRYING: restart node (2 retries left).
> FAILED - RETRYING: restart node (2 retries left).
> FAILED - RETRYING: restart node (2 retries left).
> FAILED - RETRYING: restart node (2 retries left).
> FAILED - RETRYING: restart node (2 retries left).
> FAILED - RETRYING: restart node (1 retries left).
> FAILED - RETRYING: restart node (1 retries left).
> FAILED - RETRYING: restart node (1 retries left).
> FAILED - RETRYING: restart node (1 retries left).
> FAILED - RETRYING: restart node (1 retries left).
> fatal: [orndev-node-000]: FAILED! => {"attempts": 3, "changed": false,
> "msg": "Unable to restart service origin-node: Job for
> origin-node.service failed because the control process exited with error
> code. See \"systemctl status origin-node.service\" and \"journalctl
> -xe\" for details.\n"}
> fatal: [orndev-node-001]: FAILED! => {"attempts": 3, "changed": false,
> "msg": "Unable to restart service origin-node: Job for
> origin-node.service failed because the control process exited with error
> code. See \"systemctl status origin-node.service\" and \"journalctl
> -xe\" for details.\n"}
> fatal: [orndev-master-000]: FAILED! => {"attempts": 3, "changed": false,
> "msg": "Unable to restart service origin-node: Job for
> origin-node.service failed because the control process exited with error
> code. See \"systemctl status origin-node.service\" and \"journalctl
> -xe\" for details.\n"}
> fatal: [orndev-node-002]: FAILED! => {"attempts": 3, "changed": false,
> "msg": "Unable to restart service origin-node: Job for
> origin-node.service failed because the control process exited with error
> code. See \"systemctl status origin-node.service\" and \"journalctl
> -xe\" for details.\n"}
> fatal: [orndev-infra-000]: FAILED! => {"attempts": 3, "changed": false,
> "msg": "Unable to restart service origin-node: Job for
> origin-node.service failed because the control process exited with error
> code. See \"systemctl status origin-node.service\" and \"journalctl
> -xe\" for details.\n"}
>
> RUNNING HANDLER [openshift_node : reload systemd units]
>
> 
>  to retry, use: --limit
> @/home/centos/openshift-ansible/playbooks/byo/config.retry
>
>
> Looking on one of the nodes I see this error in the origin-node.service
> logs:
>
> Jan 17 09:40:49 orndev-master-000 origin-node[2419]: E0117
> 09:40:49.746806    2419 kubelet_node_status.go:106] Unable to register
> node "orndev-master-000" with API server: nodes "orndev-master-000" is
> forbidden: node 10.0.0.6 cannot modify node orndev-master-000
>
> The /etc/origin/cloudprovider/openstack.conf file has been created OK,
> and looks to be what is expected.
> But I can't be sure it's specified correctly and will work. In fact if I
> deliberately change the configuration to use an invalid OpenStack
> username the install fails at the same place, but the error message on
> the node is different:
>
> Jan 17 10:08:58 orndev-master-000 origin-node[24066]: F0117
> 10:08:58.474152   24066 start_node.go:159] could not init cloud provider
> "openstack": Authentication failed
>
> When set back to the right username the node service again fails because
> of:
> Unable to register node "orndev-master-000" with API server: nodes
> "orndev-master-000" is forbidden: node 10.0.0.6 cannot modify node
> orndev-master-000
>
> How can this be tested on a node to ensure that the cloud provider is
> configured correctly?
> Any idea what the "node 10.0.0.6 cannot modify node orndev-master-000"
> error is about?
>
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

Using Environment Variables within context.xml - Tomcat 8 Source 2 Image

2018-01-17 Thread David Gibson
Hello,
I was wondering if it is possible to achieve the following:
We have created a GeoServer web app using the Tomcat 8 source-to-image build;
however, we require this app to connect to 3 external databases to retrieve
the spatial data.
To build our application we are using the Jenkins S2I and have created a build
pipeline that will build, deploy and promote the application through various
stages, e.g. dev, test, prod.
Using the Tomcat Source 2 Image, the app has been created and the war file
gets deployed along with the context.xml file specific to the application. If
we hardcode all the values in the context.xml file, this works for an
individual environment.
I have read that in OpenShift version 2 it was possible to substitute the
values in the context.xml file with environment variables; however, this does
not seem to work.
What we have is:
context.xml:
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users