OpenShift environment in Prod: Security: pros and cons

2017-11-18 Thread Den Cowboy
I would like to know the pros and cons of OpenShift in a production environment 
from a security standpoint.
I am used to the three-tier architecture with separation via VLANs (presentation, 
application, database). Can you apply the same types of controls in a 
containerized environment, and more specifically in OpenShift? If so, how?
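
As one concrete direction: OpenShift's SDN offers multitenant project isolation, and newer releases support Kubernetes NetworkPolicy, which can express tier separation analogous to VLAN segmentation. A hedged sketch (the labels are illustrative, not from any specific setup):

```yaml
# Sketch: only pods labelled tier=application may reach the database tier.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: db-from-app-only
spec:
  podSelector:
    matchLabels:
      tier: database
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: application
```

Combined with one project per tier, this gives a rough containerized equivalent of the VLAN boundaries described above.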
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Define in playbook: create logging project on node x

2016-12-19 Thread Den Cowboy
I found those very interesting variables:

openshift_hosted_logging_kibana_nodeselector=
openshift_hosted_logging_curator_nodeselector=
openshift_hosted_logging_elasticsearch_ops_nodeselector=


But I really can't find the right syntax to tell them to deploy on nodes with 
the label region=infra. I tried this:

openshift_hosted_logging_kibana_nodeselector="region:infra"
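
For what it's worth, the node selectors in openshift-ansible inventories are usually not given as "key:value" strings; depending on the release they expect either key=value or a JSON-style map. A sketch (check the inventory example shipped with your openshift-ansible checkout for the exact format):

```
openshift_hosted_logging_kibana_nodeselector='{"region":"infra"}'
openshift_hosted_logging_curator_nodeselector='{"region":"infra"}'
openshift_hosted_logging_elasticsearch_nodeselector='{"region":"infra"}'
```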




From: users-boun...@lists.openshift.redhat.com 
<users-boun...@lists.openshift.redhat.com> on behalf of Den Cowboy 
<dencow...@hotmail.com>
Sent: Saturday, 17 December 2016 15:16:31
To: users@lists.openshift.redhat.com
Subject: Define in playbook: create logging project on node x


Hi,


I'm using Ansible 2.2 for OpenShift Origin 1.3.0.

I'm able to deploy my cluster plus the logging project.

Is there a way to tell the Ansible playbook: deploy the logging project on 
my node with the label infra?

Additionally: is it possible to define this per pod? Because if that is 
possible, there will probably be an issue for the fluentd pods, which need 
to run on every node.


Re: what is meaning of openshift_hosted_logging_hostname

2016-12-15 Thread Den Cowboy
Okay, now it works.

In my playbook:

openshift_hosted_logging_deploy=true
openshift_master_logging_public_url=https://logging-kibana-logging.apps.env.place

And after the setup I had to edit the oauthclient:
./oc edit oauthclient/kibana-proxy

and change the redirectURIs to logging-kibana-logging...
I hope I'm able to reproduce this in the playbook without the manual step(s).
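
The manual step above amounts to making the OAuthClient's redirect URI match the actual route host. After the edit, the relevant part of the object should look roughly like this (hostname taken from this thread; substitute your own):

```yaml
apiVersion: v1
kind: OAuthClient
metadata:
  name: kibana-proxy
redirectURIs:
- https://logging-kibana-logging.apps.env.place
```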







Re: what is meaning of openshift_hosted_logging_hostname

2016-12-15 Thread Den Cowboy
Tried this:

$ oc delete oauthclient/kibana-proxy
$ oc process logging-support-template | oc create -f -

But it did not help.


From: Den Cowboy
Sent: Thursday, 15 December 2016 21:46:18
To: Rich Megginson; users@lists.openshift.redhat.com; Jeff Cantrill
Subject: Re: what is meaning of openshift_hosted_logging_hostname



I'm using OpenShift Origin v1.3.0 + Ansible 2.2.

@Jeff Cantrill <jcant...@redhat.com>: what do you mean with "2 per the README"?

> This is the host for the route which should replace what is listed in 2 per 
> the README



From: Rich Megginson <rmegg...@redhat.com>
Sent: Thursday, 15 December 2016 21:40:37
To: Den Cowboy; users@lists.openshift.redhat.com
Subject: Re: what is meaning of openshift_hosted_logging_hostname

On 12/15/2016 02:22 PM, Den Cowboy wrote:
>
> Thanks for your reply. It was a somewhat messy question.
>
> My wildcard is *.apps.env.place and it works (for registry, metrics, etc.)
>
> So first of all I'll show you my playbook:
>
>
> openshift_hosted_logging_deploy=true
>
> openshift_master_logging_public_url=https://kibana-logging.apps.env.place
>
>
> I run my playbook:
>
> # ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml
>
>
> After the install I'm checking as cluster-admin.
>
> Firstly I check master-config.yaml:
>
> --> loggingPublicURL: https://kibana-logging.apps.env.place
> <https://kibana-logging.apps.env.place>
>
>

What version of openshift are you using?
What version of openshift-ansible are you using?


> Then I check my logging project itself:
>
> There are 2 routes inside the project. one for logging-kibana and one
> for logging-kibana-ops (I don't use this).
> So there is one route which needs to be used:
>
> Name service: logging-kibana
> <https://master.dbm.bluepond:8443/console/project/logging/browse/routes/logging-kibana>
>
> Name route: https://logging-kibana-logging.apps.env.place
> Routes to: logging-kibana
>
>
> When I click on the route and accept the certificate:
>
> {"error":"invalid_request","error_description":"The request is missing
> a required parameter, includes an invalid parameter value, includes a
> parameter more than once, or is otherwise malformed."}
>
>
> What is wrong in my configuration? Do I need to set the
> openshift_hosted_logging_hostname?
>

Not sure.  You could try it.

> 
> *From:* users-boun...@lists.openshift.redhat.com
> <users-boun...@lists.openshift.redhat.com> on behalf of Rich Megginson
> <rmegg...@redhat.com>
> *Sent:* Thursday, 15 December 2016 21:06:37
> *To:* users@lists.openshift.redhat.com
> *Subject:* Re: what is meaning of openshift_hosted_logging_hostname
> On 12/15/2016 01:59 PM, Den Cowboy wrote:
> >
> > Hi, I saw this option in the ansible playbook example:
> >
> >
> https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example
> >
> >
> > 1) What is the meaning of this variable:
> > openshift_hosted_logging_hostname?
> >
> >
>
> This is the external hostname with which you will access kibana. This
> hostname should either have a DNS entry for the external IP address of
> the OpenShift master node, or you can hack it with /etc/hosts, or
> possibly xip.io.
>
> >
> > 2) I tried to deploy the logging project with ansible. All the pods
> > seem to deploy fine, but I see things like:
> >
> > logging-kibana has containers without health checks, which ensure your
> > application is running correctly.
> >
>
> Yeah, we need to add health checks, but those errors/warnings are benign.
>
> >
> > + my route to kibana is:
> >
> > https://logging-kibana-logging.apps.xx.xx
> >
>
> What route?
>
> > + gives: invalid request: missed required parameter... (don't know why
> > it's putting logging- before the kibana).
> >
>
> What gives?  What command are you using?
>
> >
> >
> > These are my variables in the playbook.
> >
> > openshift_hosted_logging_deploy=true
> >
> > openshift_master_logging_public_url=https://kibana-logging.apps.xx.xx
> >
> >
> >
> >
> >
> >
> >


Re: what is meaning of openshift_hosted_logging_hostname

2016-12-15 Thread Den Cowboy
Thanks for your reply. It was a somewhat messy question.

My wildcard is *.apps.env.place and it works (for registry, metrics, etc.)

So first of all I'll show you my playbook:


openshift_hosted_logging_deploy=true

openshift_master_logging_public_url=https://kibana-logging.apps.env.place


I run my playbook:

# ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml


After the install I'm checking as cluster-admin.

Firstly I check master-config.yaml:

--> loggingPublicURL: https://kibana-logging.apps.env.place


Then I check my logging project itself:

There are 2 routes inside the project: one for logging-kibana and one for 
logging-kibana-ops (I don't use this).
So there is one route which needs to be used:

Name service: 
logging-kibana<https://master.dbm.bluepond:8443/console/project/logging/browse/routes/logging-kibana>

Name route: https://logging-kibana-logging.apps.env.place
Routes to: logging-kibana


When I click on the route and accept the certificate:

{"error":"invalid_request","error_description":"The request is missing a 
required parameter, includes an invalid parameter value, includes a parameter 
more than once, or is otherwise malformed."}


What is wrong in my configuration? Do I need to set the 
openshift_hosted_logging_hostname?


From: users-boun...@lists.openshift.redhat.com 
<users-boun...@lists.openshift.redhat.com> on behalf of Rich Megginson 
<rmegg...@redhat.com>
Sent: Thursday, 15 December 2016 21:06:37
To: users@lists.openshift.redhat.com
Subject: Re: what is meaning of openshift_hosted_logging_hostname

On 12/15/2016 01:59 PM, Den Cowboy wrote:
>
> Hi, I saw this option in the ansible playbook example:
>
> https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example
>
>
> 1) What is the meaning of this variable:
> openshift_hosted_logging_hostname?
>
>

This is the external hostname with which you will access kibana. This
hostname should either have a DNS entry for the external IP address of
the OpenShift master node, or you can hack it with /etc/hosts, or
possibly xip.io.

>
> 2) I tried to deploy the logging project with ansible. All the pods
> seem to deploy fine, but I see things like:
>
> logging-kibana has containers without health checks, which ensure your
> application is running correctly.
>

Yeah, we need to add health checks, but those errors/warnings are benign.

>
> + my route to kibana is:
>
> https://logging-kibana-logging.apps.xx.xx
>

What route?

> + gives: invalid request: missed required parameter... (don't know why
> it's putting logging- before the kibana).
>

What gives?  What command are you using?

>
>
> These are my variables in the playbook.
>
> openshift_hosted_logging_deploy=true
>
> openshift_master_logging_public_url=https://kibana-logging.apps.xx.xx
>
>
>
>
>
>
>


what is meaning of openshift_hosted_logging_hostname

2016-12-15 Thread Den Cowboy
Hi, I saw this option in the ansible playbook example:

https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example


1) What is the meaning of this variable: openshift_hosted_logging_hostname?



2) I tried to deploy the logging project with Ansible. All the pods seem to 
deploy fine, but I see things like:

logging-kibana has containers without health checks, which ensure your 
application is running correctly.


+ my route to kibana is:

https://logging-kibana-logging.apps.xx.xx

+ it gives: invalid request: missing required parameter... (I don't know why 
it's putting logging- before the kibana).
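
On the "logging-" prefix puzzle: when a route has no explicit host, the router generates one from the template ${name}-${namespace}.${suffix}; here the route is named logging-kibana in the logging namespace, hence logging-kibana-logging.apps.xx.xx. A sketch of pinning the host explicitly instead (hostname is this thread's placeholder):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: logging-kibana
  namespace: logging
spec:
  host: kibana-logging.apps.xx.xx   # explicit host overrides the generated one
  to:
    name: logging-kibana
```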



These are my variables in the playbook.

openshift_hosted_logging_deploy=true

openshift_master_logging_public_url=https://kibana-logging.apps.xx.xx











OpenShift Origin 1.3.0: Is it normal that my "host URL" is translated to its IP every time?

2016-12-14 Thread Den Cowboy
Hi,


We use our own private/internal DNS. We have one CentOS machine from which we 
trigger our install and one Atomic Host on which the install is performed.

A part of my playbook.


# host group for masters
[masters]
master.test.env

We can do an nslookup with our DNS (on the install server and on the Atomic Host):
$ nslookup master.test.env
Server:   192.168.x.2
Address:  192.168.x.2#53

Name:     master.test.env
Address:  192.168.x.3

After our install, our master-config looks like:

masterPublicURL: https://192.168.x.3:8443 (etc.), instead of 
https://master.test.env:8443


Is this the normal approach? On our environment we're able to log in, of 
course, with:

./oc login https://master.test.env:8443

But we would expect to see those hostnames/URLs in our configs too, instead of 
IPs.


Thanks.
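
A side note for readers: openshift-ansible of that era had per-host variables to pin the hostnames used in the generated configs, so the IP does not have to end up in masterPublicURL. A hedged inventory sketch (hostname from this thread):

```
[masters]
master.test.env openshift_hostname=master.test.env openshift_public_hostname=master.test.env
```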



Best way to use oc client

2016-12-13 Thread Den Cowboy
Hi,


I've installed OpenShift 1.3.2 for the first time with Atomic as the OS. It 
went fine.
I used one normal CentOS machine as the installation server (so Ansible was 
installed there and I executed the playbook there).


Now my question: what is the best way to interact with my environment?

I've installed the oc client tools on the CentOS server and I use ./oc login 
https://192.xx.xx.xx:8443 to authenticate.
But when I want to authenticate as system:admin I need the $KUBECONFIG 
(admin.kubeconfig). Is it a normal approach to copy this file from my OpenShift 
master (Atomic) to the CentOS server from which I try to manage everything?

Or do I need to install the client tools on my master itself? What is the most 
common approach?


Thanks
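
Copying admin.kubeconfig off the master is a common pattern. A sketch of keeping it in a known place and selecting it explicitly, rather than overwriting ~/.kube/config (the master path is the usual Origin location, but verify it on your install; the scp line is commented out so the sketch is self-contained):

```shell
# Sketch: manage the cluster from the CentOS install server by copying the
# cluster-admin kubeconfig off the master instead of installing oc there.
mkdir -p "$HOME/.kube"
KCFG="$HOME/.kube/admin.kubeconfig"
# scp master.test.env:/etc/origin/master/admin.kubeconfig "$KCFG"  # run against your master
: > "$KCFG"            # stand-in file so this sketch runs without a cluster
export KUBECONFIG="$KCFG"
# oc whoami            # with a real admin.kubeconfig, reports system:admin
echo "KUBECONFIG=$KUBECONFIG"
```

Keeping the admin credentials in a separate file means an accidental `oc login` as a regular user cannot clobber them.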


Re: Which openshift instances need a public IP

2016-12-08 Thread Den Cowboy
We installed all the prereqs, then took internet access away from our master 
and node and started the playbook. It crashes here:


TASK [openshift_facts : Gather Cluster facts and set is_containerized if 
needed] ***
fatal: [192.168.20.1]: FAILED! => {"changed": false, "failed": true, 
"module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n  
File \"/tmp/ansible_YACFWF/ansible_module_openshift_facts.py\", line 2130, in 
\r\nmain()\r\n  File 
\"/tmp/ansible_YACFWF/ansible_module_openshift_facts.py\", line 2111, in 
main\r\nprotected_facts_to_overwrite)\r\n  File 
\"/tmp/ansible_YACFWF/ansible_module_openshift_facts.py\", line 1589, in 
__init__\r\nprotected_facts_to_overwrite)\r\n  File 
\"/tmp/ansible_YACFWF/ansible_module_openshift_facts.py\", line 1622, in 
generate_facts\r\ndefaults = self.get_defaults(roles, deployment_type)\r\n  
File \"/tmp/ansible_YACFWF/ansible_module_openshift_facts.py\", line 1665, in 
get_defaults\r\nip_addr = 
self.system_facts['default_ipv4']['address']\r\nKeyError: 'address'\r\n", 
"msg": "MODULE FAILURE", "parsed": false}
fatal: [192.168.20.2]: FAILED! => {"changed": false, "failed": true, 
"module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n  
File \"/tmp/ansible_N3OAje/ansible_module_openshift_facts.py\", line 2130, in 
\r\nmain()\r\n  File 
\"/tmp/ansible_N3OAje/ansible_module_openshift_facts.py\", line 2111, in 
main\r\nprotected_facts_to_overwrite)\r\n  File 
\"/tmp/ansible_N3OAje/ansible_module_openshift_facts.py\", line 1589, in 
__init__\r\nprotected_facts_to_overwrite)\r\n  File 
\"/tmp/ansible_N3OAje/ansible_module_openshift_facts.py\", line 1622, in 
generate_facts\r\ndefaults = self.get_defaults(roles, deployment_type)\r\n  
File \"/tmp/ansible_N3OAje/ansible_module_openshift_facts.py\", line 1665, in 
get_defaults\r\nip_addr = 
self.system_facts['default_ipv4']['address']\r\nKeyError: 'address'\r\n", 
"msg": "MODULE FAILURE", "parsed": false}


Can someone tell us what we're doing wrong?
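
The traceback fails on self.system_facts['default_ipv4']['address']: once the internet-facing interface is removed there is no default IPv4 route, so Ansible's gathered facts contain no default_ipv4 address. A hedged workaround from that era is to pin the addresses explicitly in the inventory (IPs from this thread):

```
[masters]
192.168.20.1 openshift_ip=192.168.20.1

[nodes]
192.168.20.1 openshift_ip=192.168.20.1
192.168.20.2 openshift_ip=192.168.20.2
```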


From: Frederic Giloux <fgil...@redhat.com>
Sent: Thursday, 8 December 2016 14:58:43
To: Den Cowboy
CC: users@lists.openshift.redhat.com
Subject: Re: Which openshift instances need a public IP

The short answer is that you don't need public IP addresses. You can have 
everything running with private IPs.

On Thu, Dec 8, 2016 at 3:34 PM, Den Cowboy <dencow...@hotmail.com> wrote:

Thanks for your reply. The main goal we want to achieve is to keep our traffic 
from pod to pod (using routes, router, DNS wildcard) internal, i.e. doing all 
of this on private IPs. Is that possible?

I just checked this blog: 
http://dustymabe.com/2016/12/07/installing-an-openshift-origin-cluster-on-fedora-25-atomic-host-part-1/#comment-42901


He is using public IPs + private IPs. Are the private ones useful in this case?

We're able to use both, and we can set up our own DNS server, but we don't 
want our routes to go outside of our cluster to a public address and then back 
into the cluster.


So the main goal: translation of routes through the router should stay in the 
private network.
Is that possible?


Thanks


From: Frederic Giloux <fgil...@redhat.com>
Sent: Thursday, 8 December 2016 13:35:12
To: Den Cowboy
CC: users@lists.openshift.redhat.com
Subject: Re: Which openshift instances need a public IP

Hi Den,

you may need internet connectivity, but public IPs are not a requirement for 
that (cf. proxy and NAT). Another option is to install OpenShift disconnected. 
See: 
https://docs.openshift.com/container-platform/3.3/install_config/install/disconnected_install.html


Also, editing /etc/hosts is not enough. You will need a proper DNS server 
(dnsmasq, for instance), as the containers don't use the host's /etc/hosts for 
name resolution.

Regards,

Frédéric


On Thu, Dec 8, 2016 at 1:37 PM, Den Cowboy <dencow...@hotmail.com> wrote:

Hi,


We have our own Registry (like dockerhub) from where we can pull images. (the 
registry is in the same private network 192.168.25.x).

Now we're trying to install OpenShift (very basic: 1 master + 1 node)

Re: Which openshift instances need a public IP

2016-12-08 Thread Den Cowboy
Thanks for your reply. Just the main goal we want to obtain is to keep our 
traffic from pod to pod (using routes, router, dns-wildcard) internal. So 
performing al this stuf on a private IP. Is that possible?

I just checked this blog: 
http://dustymabe.com/2016/12/07/installing-an-openshift-origin-cluster-on-fedora-25-atomic-host-part-1/#comment-42901


He is using public ip's + private ip's. Are the privates useful in this case?

We're able to use both and use and we can setup our own dns server but we don't 
want that our routes are going outside of our cluster. In public and than going 
back in the cluster.


So main goal: translations of routes through router should stay in the private 
network.
Is that possible?


Thanks


Van: Frederic Giloux <fgil...@redhat.com>
Verzonden: donderdag 8 december 2016 13:35:12
Aan: Den Cowboy
CC: users@lists.openshift.redhat.com
Onderwerp: Re: Which openshift instances need a public IP

Hi Den,

you may need internet connectivity. Public IPs is not a requirement for that 
(confer proxy and NAT). Another option is to install OpenShift disconnected. 
See: 
https://docs.openshift.com/container-platform/3.3/install_config/install/disconnected_install.html.
Disconnected Installation - Installing a Cluster 
...<https://docs.openshift.com/container-platform/3.3/install_config/install/disconnected_install.html>
docs.openshift.com
An OpenShift Container Platform disconnected installation differs from a 
regular installation in two primary ways:


Also, editing etc/hosts is not enough. You will require a proper DNS server 
(dnsmasq for instance) as the containers don't use /etc/hosts of the host for 
name resolution.

Regards,

Frédéric


On Thu, Dec 8, 2016 at 1:37 PM, Den Cowboy <dencow...@hotmail.com> wrote:

Hi,


We have our own registry (like Docker Hub) from which we can pull images (the 
registry is in the same private network, 192.168.25.x).

Now we're trying to install OpenShift (very basic: 1 master + 1 node) on 
192.168.25.1 and 192.168.25.2.

We have experience with these installs, but then we used public IPs.

We have SSH access from our master to our node.

But the prereqs say you need Ansible on the master, and git and Docker on 
master and node, ...
- So do we initially need public IPs on our servers to install those 
prerequisites?

- Do we need a public IP on every instance when we want to run the playbook? 
(It failed resolving something while checking if yum-utils was installed.)

- Is this a good solution: public and private IPs on master and node, install 
the prereqs and execute the playbook so we have a cluster, and after that 
delete the public network and re-execute the playbook with only private IPs 
(or only a public IP on the master)? Will this work?

So as you can see, we could use some input on this setup.
We want the traffic between our nodes to stay internal, so we probably need 
our own DNS server for hosts, routing and wildcards (initially we try to cover 
this in /etc/hosts).

If someone has experience with a setup of OpenShift where the communication 
over routes (through the router) happens internally (so no public wildcard), 
please share some knowledge :).





--
Frédéric Giloux
Senior Middleware Consultant

Red Hat GmbH
MesseTurm, Friedrich-Ebert-Anlage 49, 60308 Frankfurt am Main

Mobile: +49 (0) 174 1724661
E-Mail: fgil...@redhat.com, http://www.redhat.de/

Delivering value year after year
Red Hat ranks # 1 in value among software vendors
http://www.redhat.com/promo/vendor/

Freedom...Courage...Commitment...Accountability

Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
Handelsregister: Amtsgericht München, HRB 153243
Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael 
O'Neill


Which openshift instances need a public IP

2016-12-08 Thread Den Cowboy
Hi,


We have our own registry (like Docker Hub) from which we can pull images (the 
registry is in the same private network, 192.168.25.x).

Now we're trying to install OpenShift (very basic: 1 master + 1 node) on 
192.168.25.1 and 192.168.25.2.

We have experience with these installs, but then we used public IPs.

We have SSH access from our master to our node.

But the prereqs say you need Ansible on the master, and git and Docker on 
master and node, ...
- So do we initially need public IPs on our servers to install those 
prerequisites?

- Do we need a public IP on every instance when we want to run the playbook? 
(It failed resolving something while checking if yum-utils was installed.)

- Is this a good solution: public and private IPs on master and node, install 
the prereqs and execute the playbook so we have a cluster, and after that 
delete the public network and re-execute the playbook with only private IPs 
(or only a public IP on the master)? Will this work?

So as you can see, we could use some input on this setup.
We want the traffic between our nodes to stay internal, so we probably need 
our own DNS server for hosts, routing and wildcards (initially we try to cover 
this in /etc/hosts).

If someone has experience with a setup of OpenShift where the communication 
over routes (through the router) happens internally (so no public wildcard), 
please share some knowledge :).
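
On the internal-wildcard point raised above: a single dnsmasq rule can resolve the whole router wildcard to the node running the router, which keeps route traffic on the private network (domain and IP are illustrative):

```
# dnsmasq.conf: every *.apps.test.env name resolves to the router node
address=/.apps.test.env/192.168.25.2
```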


Re: OpenShift origin cluster in VLAN

2016-12-08 Thread Den Cowboy
My user has cluster-admin privileges.


Logs of my registry:


10.1.1.1 - - [08/Dec/2016:09:13:25 +] "GET /healthz HTTP/1.1" 200 0 "" "Go 
1.1 package http"
10.1.1.1 - - [08/Dec/2016:09:13:25 +] "GET /healthz HTTP/1.1" 200 0 "" "Go 
1.1 package http"
10.1.1.1 - - [08/Dec/2016:09:13:35 +] "GET /healthz HTTP/1.1" 200 0 "" "Go 
1.1 package http"
10.1.1.1 - - [08/Dec/2016:09:13:35 +] "GET /healthz HTTP/1.1" 200 0 "" "Go 
1.1 package http"
10.1.1.1 - - [08/Dec/2016:09:13:45 +] "GET /healthz HTTP/1.1" 200 0 "" "Go 
1.1 package http"

But these are the logs of my registry at the moment I try to log in:

time="2016-12-08T09:15:42.932147341Z" level=debug msg="authorizing request" 
go.version=go1.6 http.request.host="172.30.250.73:5000" 
http.request.id=ea57e668-5a03-4ef4-bcbe-69b1a4a3771d http.request.method=GET 
http.request.remoteaddr="10.1.1.1:54378" http.request.uri="/v2/" 
http.request.useragent="docker/1.10.3 go/go1.6.3 git-commit/cb079f6-unsupported 
kernel/3.10.0-327.36.3.el7.x86_64 os/linux arch/amd64" 
instance.id=2b1976e5-3ffc-4382-99bc-e6ae332da01d 
time="2016-12-08T09:15:42.932254033Z" level=error msg="error authorizing 
context: authorization header with basic token required" go.version=go1.6 
http.request.host="172.30.250.73:5000" 
http.request.id=ea57e668-5a03-4ef4-bcbe-69b1a4a3771d http.request.method=GET 
http.request.remoteaddr="10.1.1.1:54378" http.request.uri="/v2/" 
http.request.useragent="docker/1.10.3 go/go1.6.3 git-commit/cb079f6-unsupported 
kernel/3.10.0-327.36.3.el7.x86_64 os/linux arch/amd64" 
instance.id=2b1976e5-3ffc-4382-99bc-e6ae332da01d 10.1.1.1 - - 
[08/Dec/2016:09:15:42 +] "GET /v2/ HTTP/1.1" 401 87 "" "docker/1.10.3 
go/go1.6.3 git-commit/cb079f6-unsupported kernel/3.10.0-327.36.3.el7.x86_64 
os/linux arch/amd64"time="2016-12-08T09:15:42.934390662Z" level=debug 
msg="authorizing request" go.version=go1.6 
http.request.host="172.30.250.73:5000" 
http.request.id=0cfc7634-b120-4969-a4a6-49762c09edab http.request.method=GET 
http.request.remoteaddr="10.1.1.1:54380" http.request.uri="/v2/" 
http.request.useragent="docker/1.10.3 go/go1.6.3 git-commit/cb079f6-unsupported 
kernel/3.10.0-327.36.3.el7.x86_64 os/linux arch/amd64" 
instance.id=2b1976e5-3ffc-4382-99bc-e6ae332da01d 10.1.1.1 - - 
[08/Dec/2016:09:15:45 +] "GET /healthz HTTP/1.1" 200 0 "" "Go 1.1 package 
http"10.1.1.1 - - [08/Dec/2016:09:15:45 +] "GET /healthz HTTP/1.1" 200 0 "" 
"Go 1.1 package http"time="2016-12-08T09:15:52.939762277Z" level=error msg="Get 
user failed with error: Get https://master.test.com:8443/oapi/v1/users/~: dial 
tcp: lookup master.test.com on 193.xx.xx.xx:53: read udp 
10.1.1.2:59123->193.xx.xx.xx:53: i/o timeout" go.version=go1.6 
http.request.host="172.30.250.73:5000" 
http.request.id=0cfc7634-b120-4969-a4a6-49762c09edab http.request.method=GET 
http.request.remoteaddr="10.1.1.1:54380" http.request.uri="/v2/" 
http.request.useragent="docker/1.10.3 go/go1.6.3 git-commit/cb079f6-unsupported 
kernel/3.10.0-327.36.3.el7.x86_64 os/linux arch/amd64" 
instance.id=2b1976e5-3ffc-4382-99bc-e6ae332da01d 
time="2016-12-08T09:15:52.939827373Z" level=error msg="error checking 
authorization: Get https://master.test.com:8443/oapi/v1/users/~: dial tcp: 
lookup master.test.com on 193.xx.xx.xx:53: read udp 
10.1.1.2:59123->193.xx.xx.xx:53: i/o timeout" go.version=go1.6 
http.request.host="172.30.250.73:5000" 
http.request.id=0cfc7634-b120-4969-a4a6-49762c09edab http.request.method=GET 
http.request.remoteaddr="10.1.1.1:54380" http.request.uri="/v2/" 
http.request.useragent="docker/1.10.3 go/go1.6.3 git-commit/cb079f6-unsupported 
kernel/3.10.0-327.36.3.el7.x86_64 os/linux arch/amd64" 
instance.id=2b1976e5-3ffc-4382-99bc-e6ae332da01d 
time="2016-12-08T09:15:52.939860796Z" level=error msg="error authorizing 
context: Get https://master.test.com:8443/oapi/v1/users/~: dial tcp: lookup 
master.test.com on 193.xx.xx.xx:53: read udp 10.1.1.2:59123->193.xx.xx.xx:53: 
i/o timeout" go.version=go1.6 http.request.host="172.30.250.73:5000" 
http.request.id=0cfc7634-b120-4969-a4a6-49762c09edab http.request.method=GET 
http.request.remoteaddr="10.1.1.1:54380" http.request.uri="/v2/" 
http.request.useragent="docker/1.10.3 go/go1.6.3 git-commit/cb079f6-unsupported 
kernel/3.10.0-327.36.3.el7.x86_64 os/linux arch/amd64" 
instance.id=2b1976e5-3ffc-4382-99bc-e6ae332da01d 10.1.1.1 - - 
[08/Dec/2016:09:15:42 +] &
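
The log above shows the actual failure: the registry pod tries to resolve master.test.com through 193.xx.xx.xx:53 and times out, so the token can never be verified and docker login fails. Since pods do not inherit the host's /etc/hosts, the node must hand them a DNS server that can resolve the master's hostname; in node-config.yaml of that era this was the dnsIP field (the IP below is illustrative):

```yaml
# node-config.yaml (sketch): DNS server handed to pods; it must be able to
# resolve the master's hostname (master.test.com in this thread)
dnsIP: 192.168.20.1
```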

Re: OpenShift origin cluster in VLAN

2016-12-08 Thread Den Cowboy
I've changed the master IP setting inside my master-config.yaml (it was still 
set to the public IP from the installation). I replaced it with my private IP 
and restarted the cluster.

NAME ENDPOINTS   AGE
kubernetes   192.168.20.1:8053,192.168.20.1:8443,192.168.20.1:8053   19h

I'm able to deploy my router + registry (images are pulled from a private 
registry in the same VLAN).
But I'm not able to authenticate on my registry. I didn't secure it yet.

docker login -u admin -e a...@mail.com -p `oc whoami -t` 172.30.250.73:5000
Error response from daemon: no successful auth challenge for 
http://172.30.250.73:5000/v2/ - errors: [basic auth attempt to 
http://172.30.250.73:5000/v2/ realm "openshift" failed with status: 400 Bad 
Request]






From: Clayton Coleman <ccole...@redhat.com>
Sent: Wednesday, December 7, 2016 14:56:30
To: Den Cowboy
CC: users@lists.openshift.redhat.com
Subject: Re: OpenShift origin cluster in VLAN

Each master still needs an IP registered that then backs the Kubernetes service 
that clients use to talk to the API.  So verify that each master is reporting 
the correct IP that is reachable from all nodes to "oc get endpoints kubernetes 
-n default"

On Dec 7, 2016, at 9:39 AM, Den Cowboy 
<dencow...@hotmail.com<mailto:dencow...@hotmail.com>> wrote:


We've installed OpenShift origin with the advanced playbook. There we used 
public ip's. But after the installation we've deleted the public ip's. The 
master and nodes are in a VLAN. I'm able to create a user, authenticate, visit 
the web console, and restart the node and master services. I'm able to pull images from our 
local registry but I'm not able to do a deployment.


couldn't get deployment default/router-5: Get 
https://172.30.0.1:443/api/v1/namespaces/default/replicationcontrollers/router-5:
 dial tcp 172.30.0.1:443<http://172.30.0.1:443>: getsockopt: network is 
unreachable

I'm even not able to curl the kubernetes service. What did we forgot/did wrong?

In our configs the dnsIP: option is commented out, so we did not specify it. The 
docker, origin-node, origin-master and openvswitch services are all running.

Logs of our origin-node show:
pkg/proxy/config/api.go:60: Failed to watch *api.Endpoints: Get 
https://master.xxx...ction refused
pkg/kubelet/kubelet.go:259: Failed to watch *api.Node: Get 
https://master.xxx:8443/..
pkg/kubelet/config/apiserver.go:43: Failed to watch *api.Pod
pkg/proxy/config/api.go:47: Failed to watch *api.Service: Get 
https://master.xxx refused



___
users mailing list
users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


OpenShift origin cluster in VLAN

2016-12-07 Thread Den Cowboy
We've installed OpenShift origin with the advanced playbook. There we used 
public ip's. But after the installation we've deleted the public ip's. The 
master and nodes are in a VLAN. I'm able to create a user, authenticate, visit 
the web console, and restart the node and master services. I'm able to pull images from our 
local registry but I'm not able to do a deployment.


couldn't get deployment default/router-5: Get 
https://172.30.0.1:443/api/v1/namespaces/default/replicationcontrollers/router-5:
 dial tcp 172.30.0.1:443: getsockopt: network is unreachable

I'm even not able to curl the kubernetes service. What did we forgot/did wrong?

In our configs the dnsIP: option is commented out, so we did not specify it. The 
docker, origin-node, origin-master and openvswitch services are all running.

Logs of our origin-node show:
pkg/proxy/config/api.go:60: Failed to watch *api.Endpoints: Get 
https://master.xxx...ction refused
pkg/kubelet/kubelet.go:259: Failed to watch *api.Node: Get 
https://master.xxx:8443/..
pkg/kubelet/config/apiserver.go:43: Failed to watch *api.Pod
pkg/proxy/config/api.go:47: Failed to watch *api.Service: Get 
https://master.xxx refused




Re: authentication for oadm prune in cron job

2016-12-06 Thread Den Cowboy
We're executing our prune commands with:

oadm prune images --keep-tag-revisions=20 
--certificate-authority=/etc/docker/certs.d/service-ip-registy:5000/ca.crt 
--registry-url=my-registry.dev --confirm


The real problem for our cron jobs is authenticating to OpenShift itself 
(before we can execute oadm). Do we really need to put oc login -u myuser .. in 
the cron job with the password hardcoded?


From: Clayton Coleman <ccole...@redhat.com>
Sent: Monday, December 5, 2016 20:38:49
To: Srinivas Naga Kotaru (skotaru)
CC: Den Cowboy; users@lists.openshift.redhat.com
Subject: Re: authentication for oadm prune in cron job

Prune has to connect to your registry server directly to delete blobs, and the 
registry does not support certificate based auth.  The most consistent path 
would be to use a service account that had the appropriate permissions and get 
its token with "oc serviceaccounts get-token".
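Building on that suggestion, a minimal cron-friendly sketch could look like the following. It is untested against a real cluster; the service-account name "pruner" is an assumption, and the account would first need the pruning permission (e.g. the system:image-pruner cluster role).

```shell
# Sketch only: build the prune invocation around a service-account token,
# so no user password has to be hardcoded in the cron job.
# "pruner" is a hypothetical service account in the default namespace.
build_prune_cmd() {
  # $1 = tag revisions to keep, $2 = bearer token
  echo "oadm prune images --keep-tag-revisions=$1 --token=$2 --confirm"
}

# In a real cron job the token would come from:
#   TOKEN=$(oc serviceaccounts get-token pruner -n default)
TOKEN="PLACEHOLDER"
build_prune_cmd 20 "$TOKEN"
```

The printed line is what the cron entry would execute once a real token is substituted.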

On Mon, Dec 5, 2016 at 3:08 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Am also interested to know the answer.

I'm thinking we don't need a token for the oadm command since it doesn't use 
token- or oauth-based authentication. Since it is installed with root 
privileges, we are using sudo oadm to execute commands.

# sudo oadm prune builds --orphans --confirm
NAMESPACE NAME
java-hello-universe   os-sample-java-web-1
upgrade   upgrade-1
sujchinncae-test  django-1

We're not running an internal registry for builds. I'm not sure we still need 
to run prune operations in this scenario.

--
Srinivas Kotaru

From: 
<users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>>
 on behalf of Den Cowboy <dencow...@hotmail.com<mailto:dencow...@hotmail.com>>
Date: Monday, December 5, 2016 at 12:37 AM
To: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: authentication for oadm prune in cron job


We are able to delete old deployments + old images (also inside the registry) 
with our oadm prune commands.
We want to put this in cronjobs. But to perform oadm commands we need to be 
authenticated. Which is the best way to authenticate in a cron job?

At the moment we have 1 admin account (with cluster-admin permissions) + we 
have the system:admin account.

Do we need a new account (or service account) for our cronjobs and which 
permission would we need?



Thanks



Re: Increase HeapSize on registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat8-openshift:1.2-12

2016-12-05 Thread Den Cowboy
Thanks for your response Per.

I can confirm it's dynamic. We increased our resources and the heap size of 
our tomcat increased.

When we want to adjust the heap size via that environment variable, where do 
we have to set it?
Inside the tomcat container, on our openshift-master, or somewhere else?

Thanks!



From: Per Carlson <pe...@hemmop.com>
Sent: Monday, November 28, 2016 14:25:25
To: Den Cowboy
CC: users@lists.openshift.redhat.com
Subject: Re: Increase HeapSize on 
registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat8-openshift:1.2-12

Hi Den.

The heap allocation is done dynamically. When the image starts it runs 
/opt/webserver/bin/launch.sh. In this file you will find

MAX_HEAP=`get_heap_size`
if [ -n "$MAX_HEAP" ]; then
  CATALINA_OPTS="$CATALINA_OPTS -Xms${MAX_HEAP}m -Xmx${MAX_HEAP}m"
fi

The function "get_heap_size" is sourced from 
/usr/local/dynamic-resources/dynamic_resources.sh. What I *think* it does is 
check whether any resource quotas are set, and if so allocate 50% of the 
available memory as heap. The percentage can be changed by setting the 
environment variable CONTAINER_HEAP_PERCENT to e.g. 0.10 (to get 10%).
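To illustrate where that variable would live, here is a hedged deployment-config fragment (the container name and the values are placeholders, not taken from this thread):

```yaml
spec:
  template:
    spec:
      containers:
      - name: tomcat              # placeholder container name
        env:
        - name: CONTAINER_HEAP_PERCENT
          value: "0.75"           # 75% of the memory limit as heap
        resources:
          limits:
            memory: 1Gi           # launch.sh derives the heap from this limit
```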


On 28 November 2016 at 13:02, Den Cowboy 
<dencow...@hotmail.com<mailto:dencow...@hotmail.com>> wrote:

Hi, we're using the tomcat of this image:

registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat8-openshift:1.2-12<http://registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat8-openshift:1.2-12>

The problem is we're not able to increase the heap size. It's different on our 
servers than on our local machines.
What's the best way to set this parameter? (e.g. in the dockerfile)





--
Pelle

Research is what I'm doing when I don't know what I'm doing.
- Wernher von Braun


authentication for oadm prune in cron job

2016-12-05 Thread Den Cowboy
We are able to delete old deployments + old images (also inside the registry) 
with our oadm prune commands.
We want to put this in cronjobs. But to perform oadm commands we need to be 
authenticated. Which is the best way to authenticate in a cron job?

At the moment we have 1 admin account (with cluster-admin permissions) + we 
have the system:admin account.

Do we need a new account (or service account) for our cronjobs and which 
permission would we need?


Thanks


Increase HeapSize on registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat8-openshift:1.2-12

2016-11-28 Thread Den Cowboy
Hi, we're using the tomcat of this image:

registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat8-openshift:1.2-12

The problem is we're not able to increase the heap size. It's different on our 
servers than on our local machines.
What's the best way to set this parameter? (e.g. in the dockerfile)


Re: Clean logs in ES on Origin 1.2.0

2016-11-23 Thread Den Cowboy
Is there also some solution for the cassandra DB for the metrics?


From: users-boun...@lists.openshift.redhat.com 
<users-boun...@lists.openshift.redhat.com> on behalf of Den Cowboy 
<dencow...@hotmail.com>
Sent: Wednesday, November 23, 2016 10:40:08
To: Jeff Cantrill
CC: users@lists.openshift.redhat.com
Subject: Re: Clean logs in ES on Origin 1.2.0


Thanks, as fastest solution at the moment, I've updated the dc's of the curator.
Will check in a couple of days if it's working.


From: Jeff Cantrill <jcant...@redhat.com>
Sent: Tuesday, November 22, 2016 14:52:57
To: Den Cowboy
CC: users@lists.openshift.redhat.com
Subject: Re: Clean logs in ES on Origin 1.2.0

You can create a secret with a file that has content like:


.defaults:
  delete:
    days: 7
  runhour: 0
  runminute: 0

and add the volume to the deployment config as described here: 
https://github.com/openshift/origin-aggregated-logging/tree/v1.2.0#curator

Alternatively, if you do not provide the secret, you could update the following 
value in the deploymentconfig: 
https://github.com/openshift/origin-aggregated-logging/blob/v1.2.0/deployment/templates/curator.yaml#L90



https://github.com/openshift/origin-aggregated-logging/tree/v1.2.0

On Tue, Nov 22, 2016 at 9:34 AM, Den Cowboy 
<dencow...@hotmail.com<mailto:dencow...@hotmail.com>> wrote:

Hi,


We have an origin 1.2.0 cluster in which we've integrated the logging project. 
It works fine but we just followed the setup tutorial. We don't know much about 
the real setup.


We're sometimes facing issues where our disk is getting too full because our ES 
is keeping too much data.

How can we easily configure the curator to delete every log of every project 
once a week?

We took a look to the documentation:

myapp-dev:
  delete:
    days: 1

myapp-qe:
  delete:
    weeks: 1

.operations:
  delete:
    weeks: 8

.defaults:
  delete:
    days: 30
  runhour: 0
  runminute: 0

But it isn't clear for us. Isn't there just a setting in the deploymentconfig 
or what's the most easy approach for this?


Thanks





--
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
Office: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com<mailto:jcant...@redhat.com>
http://www.redhat.com


Re: Clean logs in ES on Origin 1.2.0

2016-11-23 Thread Den Cowboy
Thanks, as fastest solution at the moment, I've updated the dc's of the curator.
Will check in a couple of days if it's working.


From: Jeff Cantrill <jcant...@redhat.com>
Sent: Tuesday, November 22, 2016 14:52:57
To: Den Cowboy
CC: users@lists.openshift.redhat.com
Subject: Re: Clean logs in ES on Origin 1.2.0

You can create a secret with a file that has content like:


.defaults:
  delete:
    days: 7
  runhour: 0
  runminute: 0

and add the volume to the deployment config as described here: 
https://github.com/openshift/origin-aggregated-logging/tree/v1.2.0#curator

Alternatively, if you do not provide the secret, you could update the following 
value in the deploymentconfig: 
https://github.com/openshift/origin-aggregated-logging/blob/v1.2.0/deployment/templates/curator.yaml#L90



https://github.com/openshift/origin-aggregated-logging/tree/v1.2.0

On Tue, Nov 22, 2016 at 9:34 AM, Den Cowboy 
<dencow...@hotmail.com<mailto:dencow...@hotmail.com>> wrote:

Hi,


We have an origin 1.2.0 cluster in which we've integrated the logging project. 
It works fine but we just followed the setup tutorial. We don't know much about 
the real setup.


We're sometimes facing issues where our disk is getting too full because our ES 
is keeping too much data.

How can we easily configure the curator to delete every log of every project 
once a week?

We took a look to the documentation:

myapp-dev:
  delete:
    days: 1

myapp-qe:
  delete:
    weeks: 1

.operations:
  delete:
    weeks: 8

.defaults:
  delete:
    days: 30
  runhour: 0
  runminute: 0

But it isn't clear for us. Isn't there just a setting in the deploymentconfig 
or what's the most easy approach for this?


Thanks





--
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
Office: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com<mailto:jcant...@redhat.com>
http://www.redhat.com


Re: Ansible installation OpenShift origin 1.2.0 failed

2016-11-22 Thread Den Cowboy
Will try it next time []

I found the 2.1 ansible package in another repo and then everything seemed to 
work.


Thanks


From: Andrew Butcher <abutc...@redhat.com>
Sent: Thursday, November 17, 2016 19:59:07
To: Den Cowboy
CC: Rich Megginson; users@lists.openshift.redhat.com
Subject: Re: Ansible installation OpenShift origin 1.2.0 failed

You're right, there's no ansible-2.1 package in EPEL. The latest ansible will 
work with openshift-ansible's release-1.2 branch where we've fixed these 
templating issues. I'd recommend using that if you can.

Be sure to set the following inventory variables to get the right packages if 
you go the release-1.2 branch route.

openshift_release=1.2
openshift_pkg_version=-1.2.1-1.el7
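In inventory terms that might look like the fragment below (the section name is shown for context; host groups are omitted, and deployment_type=origin is an assumption for an Origin install):

```ini
[OSEv3:vars]
deployment_type=origin
openshift_release=1.2
openshift_pkg_version=-1.2.1-1.el7
```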

On Thu, Nov 17, 2016 at 1:04 PM, Den Cowboy 
<dencow...@hotmail.com<mailto:dencow...@hotmail.com>> wrote:

That doesn't seem to help. If you list the possible packages, it only shows the newest:


yum -y --enablerepo=epel --showduplicates list ansible
shows only ansible.noarch 2.2.0.0-3.el7 epel


From: users-boun...@lists.openshift.redhat.com 
<users-boun...@lists.openshift.redhat.com> on behalf of Rich Megginson 
<rmegg...@redhat.com>
Sent: Thursday, November 17, 2016 17:54:26
To: users@lists.openshift.redhat.com
Subject: Re: Ansible installation OpenShift origin 1.2.0 failed

On 11/17/2016 10:36 AM, Den Cowboy wrote:
>
> Thanks. This could probably be the issue.
>
>
> # yum -y --enablerepo=epel --showduplicates list ansible
> Failed to set locale, defaulting to C
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
>  * base: mirror2.hs-esslingen.de<http://mirror2.hs-esslingen.de>
>  * epel: epel.mirrors.ovh.net<http://epel.mirrors.ovh.net>
>  * extras: it.centos.contactlab.it<http://it.centos.contactlab.it>
>  * updates: mirror.netcologne.de<http://mirror.netcologne.de>
> Available Packages
> ansible.noarch 2.2.0.0-3.el7 epel
>
>
> I always installed 2.2 at the moment. Is there a way to install
> 2.1.0.0-1.el7 using yum?
>

You could try to yum downgrade ansible and see if that gets you an older
version.

>
> I found this website:
> https://www.rpmfind.net/linux/rpm2html/search.php?query=ansible but
> I'm not really familiar with rpm
>
> RPM resource ansible - Rpmfind mirror
> <https://www.rpmfind.net/linux/rpm2html/search.php?query=ansible>
> www.rpmfind.net<http://www.rpmfind.net>
> RPM resource ansible. Ansible is a radically simple model-driven
> configuration management, multi-node deployment, and remote task
> execution system.
>
>
> --------
> *From:* Andrew Butcher <abutc...@redhat.com<mailto:abutc...@redhat.com>>
> *Sent:* Thursday, November 17, 2016 15:49:50
> *To:* Den Cowboy
> *CC:* users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
> *Subject:* Re: Ansible installation OpenShift origin 1.2.0 failed
> Hey,
>
> What version of ansible are you using? There is an untemplated
> with_items for g_all_hosts | default([]) which isn't being
> interpreted. Untemplated with_items would have been okay with previous
> ansible versions but not with the latest 2.2 packages.
>
> Like this line but without the jinja template wrapping "{{ }}".
>
> https://github.com/openshift/openshift-ansible/blob/cd922e0f4a1370118c0e2fd60230a68d74b47095/playbooks/byo/openshift-cluster/config.yml#L16
>
> On Thu, Nov 17, 2016 at 10:30 AM, Den Cowboy 
> <dencow...@hotmail.com<mailto:dencow...@hotmail.com>
> <mailto:dencow...@hotmail.com>> wrote:
>
> Hi,
>
>
> I forked the repo of openshift when it was version 1.2.0.
>
> Now I did all the prerequisitions and I was able to ssh from my
> master to itself and to every node (using the names I specified in
> /etc/hosts).
>
> I created my hosts file and I start the installation but it ends
> pretty quick with this error. I don't understand why. I have some
> experience with installating version 1.2.0 with ansible.
>
>
> TASK [Evaluate oo_nodes_to_config]
> *
> changed: [localhost] => (item=master.xxx.com<http://master.xxx.com> 
> <http://master.xxx.com>)
> changed: [localhost] => (item=node01.xxx.com<http://node01.xxx.com> 
> <http://node01.xxx.com>)
> changed: [localhost] => (item=node02.xxx.com<http://node02.xxx.com> 
> <

Clean logs in ES on Origin 1.2.0

2016-11-22 Thread Den Cowboy
Hi,


We have an origin 1.2.0 cluster in which we've integrated the logging project. 
It works fine but we just followed the setup tutorial. We don't know much about 
the real setup.


We're sometimes facing issues where our disk is getting too full because our ES 
is keeping too much data.

How can we easily configure the curator to delete every log of every project 
once a week?

We took a look to the documentation:

myapp-dev:
  delete:
    days: 1

myapp-qe:
  delete:
    weeks: 1

.operations:
  delete:
    weeks: 8

.defaults:
  delete:
    days: 30
  runhour: 0
  runminute: 0

But it isn't clear for us. Isn't there just a setting in the deploymentconfig 
or what's the most easy approach for this?


Thanks


Re: Ansible installation OpenShift origin 1.2.0 failed

2016-11-17 Thread Den Cowboy
Thanks. This could probably be the issue.


# yum -y --enablerepo=epel --showduplicates list ansible
Failed to set locale, defaulting to C
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror2.hs-esslingen.de
 * epel: epel.mirrors.ovh.net
 * extras: it.centos.contactlab.it
 * updates: mirror.netcologne.de
Available Packages
ansible.noarch                        2.2.0.0-3.el7                        epel


So far I have always installed 2.2. Is there a way to install 2.1.0.0-1.el7 
using yum?


I found this website: 
https://www.rpmfind.net/linux/rpm2html/search.php?query=ansible but I'm not 
really familiar with rpm

RPM resource ansible - Rpmfind 
mirror<https://www.rpmfind.net/linux/rpm2html/search.php?query=ansible>
www.rpmfind.net
RPM resource ansible. Ansible is a radically simple model-driven configuration 
management, multi-node deployment, and remote task execution system.




From: Andrew Butcher <abutc...@redhat.com>
Sent: Thursday, November 17, 2016 15:49:50
To: Den Cowboy
CC: users@lists.openshift.redhat.com
Subject: Re: Ansible installation OpenShift origin 1.2.0 failed

Hey,

What version of ansible are you using? There is an untemplated with_items for 
g_all_hosts | default([]) which isn't being interpreted. Untemplated with_items 
would have been okay with previous ansible versions but not with the latest 2.2 
packages.

Like this line but without the jinja template wrapping "{{ }}".

https://github.com/openshift/openshift-ansible/blob/cd922e0f4a1370118c0e2fd60230a68d74b47095/playbooks/byo/openshift-cluster/config.yml#L16

On Thu, Nov 17, 2016 at 10:30 AM, Den Cowboy 
<dencow...@hotmail.com<mailto:dencow...@hotmail.com>> wrote:

Hi,


I forked the repo of openshift when it was version 1.2.0.

Now I did all the prerequisites and I was able to ssh from my master to 
itself and to every node (using the names I specified in /etc/hosts).

I created my hosts file and I start the installation but it ends pretty quick 
with this error. I don't understand why. I have some experience with 
installing version 1.2.0 with ansible.


TASK [Evaluate oo_nodes_to_config] *
changed: [localhost] => (item=master.xxx.com<http://master.xxx.com>)
changed: [localhost] => (item=node01.xxx.com<http://node01.xxx.com>)
changed: [localhost] => (item=node02.xxx.com<http://node02.xxx.com>)
changed: [localhost] => (item=node03.xxx.com<http://node03.xxx.com>)
changed: [localhost] => (item=node04.xxx.com<http://node04.xxx.com>)

TASK [Evaluate oo_nodes_to_config] *
skipping: [localhost] => (item=master.xxx.com<http://master.xxx.com>)

TASK [Evaluate oo_first_etcd] **
changed: [localhost]

TASK [Evaluate oo_first_master] 
changed: [localhost]

TASK [Evaluate oo_lb_to_config] 

TASK [Evaluate oo_nfs_to_config] ***

PLAY [Initialize host facts] ***

TASK [setup] ***
fatal: [g_all_hosts | default([])]: UNREACHABLE! => {"changed": false, "msg": 
"Failed to connect to the host via ssh: ssh: Could not resolve hostname 
g_all_hosts | default([]): Name or service not known\r\n", "unreachable": true}

NO MORE HOSTS LEFT *
to retry, use: --limit @/root/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP *
g_all_hosts                : ok=1    changed=0    unreachable=0    failed=0
g_all_hosts | default([])  : ok=0    changed=0    unreachable=1    failed=0
localhost                  : ok=9    changed=8    unreachable=0    failed=0






OpenShift origin GC on images

2016-11-16 Thread Den Cowboy
Hi I'm using OpenShift Origin 1.2.

We are using NFS to mount the storage of our Registry. We have 200GB of which 
97GB is used.

I just performed this on my master:

oadm prune images --keep-tag-revisions=10 
--certificate-authority=/etc/docker/certs.d/172.30.xx.xx:5000/ca.crt --confirm


Now I check my Registry and it's just the same (97GB used of 200GB).

The output of the previous command shows a lot of layers which should be 
deleted.
I also saw here: 
https://docs.openshift.org/latest/admin_guide/garbage_collection.html#image-garbage-collection
 that the GC should run every 5 minutes. So I even waited 10 minutes to 
see if something happened but the volume of my registry does not decrease.


Prune images in OpenShift Registry

2016-10-25 Thread Den Cowboy
Hi,


I want to delete all images in our openshift registry which are older than 60 
days. Is this possible?

Another option could be to delete all images except the images which were used 
in the last 20 deploys. Is this possible?


I saw the prune image command but I don't really understand it and also the 
output is pretty unclear. I performed a dry-run.

Is it really deleting from the registry or from the server?


Do I have to edit the registry configuration? (I have a bit of experience with 
this on real Docker registries, not in OpenShift.)
There I have to put REGISTRY_STORAGE_DELETE_ENABLED = True
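For reference, the same switch in the docker/distribution registry config-file form (a hedged fragment; in OpenShift, setting the environment variable above on the docker-registry deployment config has the equivalent effect):

```yaml
storage:
  delete:
    enabled: true
```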


Secure SSL route

2016-10-19 Thread Den Cowboy
Hi,


we have an app which is hosted by nginx inside the same container. We used the 
default.conf of nginx which is listening to port 80.

443 and 80 are exposed but only 80 is 'in use'. We're able to create a secure 
route with the openshift webconsole (or oc expose svc..). But of course there 
was nothing to show.


Now we try to edit our nginx configuration so we can use a secure route on our 
service (above the pod).

In the container we changed the default.conf:

listen 443;


I have a bit of experience with using Apache (not nginx) for SSL, but then I 
need my self-signed certificates inside my container and that kind of stuff. Do 
I need this for OpenShift?

I saw in my browser:

SSL_ERROR_RX_RECORD_TOO_LONG


I edit again to

listen   443 ssl http2;

But now I see in the logs of my pod:

*1 no "ssl_certificate" is defined in server listening on SSL port while SSL 
handshaking, client: 10.x.x.x, server: 0.0.0.0:443
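For what it's worth, a minimal sketch of the missing nginx directives (the certificate paths are placeholders): when nginx itself terminates TLS it needs a certificate and key inside the container, e.g. mounted from a secret; the alternative is to keep nginx on plain port 80 and let the OpenShift router terminate TLS with an edge route.

```nginx
server {
    listen 443 ssl;
    server_name _;

    # Placeholder paths: the cert/key must exist in the container.
    ssl_certificate     /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;

    location / {
        root /usr/share/nginx/html;
    }
}
```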


Route on Service : RPC CALL

2016-09-20 Thread Den Cowboy
Hi,


We have a container which is exposing a port . We have to perform RPC calls 
on it.
I can create a route on the port (https -> 8080) (map it on ) but this does 
not seem to work.


Re: Exposing ports on environment

2016-08-04 Thread Den Cowboy
Edit of our config:

etcd:
• 2379/TCP  -?-> from master
• 2380/TCP  -?-> from etcd host


Master:
• 22/TCP- ssh—> 0.0.0.0/0 (from master minimum)
• 8443/TCP  - OpenShift Console-> 0.0.0.0/0
• 8053/TCP  - SkyDNS-> from all OpenShift Origin hosts


Node where our router is running (infrastructure nodes):
• 80/TCP- Web Apps-> 0.0.0.0/0
• 443/TCP   - Web Apps (https)—> 0.0.0.0/0
• 4789/UDP  - SDN / VXLAN-> from other nodes
• 10250/TCP - For use by the Kubelet-> from master
• 22/TCP- For ansible installer-> from master (where we start ansible 
install)


Every node:
• 4789/UDP  - SDN / VXLAN-> from other nodes
• 10250/TCP - For use by the Kubelet-> from master
• 22/TCP- For ansible installer-> from master (where we start ansible 
install)

Do we need additional ports for pushing to our registry or for being able to 
pull images or something?
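As a sketch only, the per-node list above expressed in iptables-save form (interface names are omitted; the 10.0.0.0/16 node CIDR and the 10.0.0.5 master address are placeholders):

```text
-A INPUT -p udp --dport 4789 -s 10.0.0.0/16 -j ACCEPT  # SDN / VXLAN from other nodes
-A INPUT -p tcp --dport 10250 -s 10.0.0.5/32 -j ACCEPT # kubelet, from master
-A INPUT -p tcp --dport 22 -s 10.0.0.5/32 -j ACCEPT    # ansible ssh from master
```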


Van: users-boun...@lists.openshift.redhat.com 
<users-boun...@lists.openshift.redhat.com> namens Den Cowboy 
<dencow...@hotmail.com>
Verzonden: donderdag 4 augustus 2016 8:57:04
Aan: users@lists.openshift.redhat.com
Onderwerp: Exposing ports on environment


Hi, we have an openshift origin 1.2 cluster in our environment (1 master, 
multiple nodes).
Now we are securing it with a firewall. We need to know which ports need to be 
exposed.

We took already a look on 
https://docs.openshift.org/latest/install_config/install/prerequisites.html#prereq-network-access

But it's still not that clear which ports we need to expose. Is there somewhere 
an overview about this?

Which ports on the master?
Which ports on the node where our router is running?
Which ports on the other nodes?

Which servers need access to the internet?

This is our presetup (can someone confirm if this is fine or what we need to 
add/change)


Master:
• 22/TCP- ssh
• 8443/TCP  - OpenShift Console
• 10250/TCP - kubelet


Node where our router is running:
• 80/TCP- Web Apps
• 443/TCP   - Web Apps (https)
• 4789/UDP  - SDN / VXLAN


Every node:
• 4789/UDP  - SDN / VXLAN



Exposing ports on environment

2016-08-04 Thread Den Cowboy
Hi, we have an openshift origin 1.2 cluster in our environment (1 master, 
multiple nodes).
Now we are securing it with a firewall. We need to know which ports need to be 
exposed.

We took already a look on 
https://docs.openshift.org/latest/install_config/install/prerequisites.html#prereq-network-access

But it's still not that clear which ports we need to expose. Is there somewhere 
an overview about this?

Which ports on the master?
Which ports on the node where our router is running?
Which ports on the other nodes?

Which servers need access to the internet?

This is our presetup (can someone confirm if this is fine or what we need to 
add/change)


Master:
* 22/TCP- ssh
* 8443/TCP  - OpenShift Console
* 10250/TCP - kubelet


Node where our router is running:
* 80/TCP- Web Apps
* 443/TCP   - Web Apps (https)
* 4789/UDP  - SDN / VXLAN


Every node:
* 4789/UDP  - SDN / VXLAN



RE: Persistent Storage MYSQL

2016-07-27 Thread Den Cowboy
All users and all groups coming from that IP range have access.
We're able to mount when using just a /mnt directory on our host, but not 
from inside our container.

From: dencow...@hotmail.com
To: bpar...@redhat.com; users@lists.openshift.redhat.com
Subject: RE: Persistent Storage MYSQL
Date: Wed, 27 Jul 2016 13:11:13 +




Yeah, that's something which is different. On my master I was working with 
exportfs -a etc.
But now it doesn't matter. The permissions are IP based. (We added the range 
in which our cluster is running.)

From: bpar...@redhat.com
Date: Wed, 27 Jul 2016 09:04:12 -0400
Subject: Re: Persistent Storage MYSQL
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

what are the permissions of the NFS exported volume?  and what is in the export 
definition?


On Wed, Jul 27, 2016 at 8:35 AM, Den Cowboy <dencow...@hotmail.com> wrote:



I try to make my MySQL pod persistent.
I always did this on training environments where my NFS server was on my 
master and I never had issues.
Now my NFS is on another server.

My pv looks like this
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
"name": "mysql-data"
  },
  "spec": {
"capacity": {
"storage": "20Gi"
},
"accessModes": [ "ReadWriteMany" ],
"nfs": {
"path": "/path/mysql",
"server": "server-IP"
}
  }
}
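As a hedged aside, a claim matching a volume like this might look as follows (the claim name mirrors the volume and is arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 20Gi
```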

I also created a pvc and edited the dc of my mysql to use it, just like I 
always did on my training environment.
After editing the dc a new deploy is triggered, but my pod is stuck in a 
recreation loop.
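
For reference, the claim mentioned above (not shown in the mail) would look roughly like this — the name and request size are illustrative; binding happens by matching accessModes and capacity:

```json
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": { "name": "mysql-data-claim" },
  "spec": {
    "accessModes": [ "ReadWriteMany" ],
    "resources": { "requests": { "storage": "20Gi" } }
  }
}
```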


2016-07-27 08:17:01 0 [Note] /opt/rh/rh-mysql56/root/usr/libexec/mysqld (mysqld 
5.6.26) starting as process 18 ...
2016-07-27 08:17:01 18 [Warning] Can't create test file 
/var/lib/mysql/data/mysql-2-edp7q.lower-test
2016-07-27 08:17:01 18 [Warning] Can't create test file 
/var/lib/mysql/data/mysql-2-edp7q.lower-test
2016-07-27 08:17:01 7fd2befc3840  InnoDB: Operating system error number 
13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
2016-07-27 08:17:01 18 [ERROR] InnoDB: Creating or opening ./ibdata1 failed!
Does someone know what could be the issue?
When I create a directory /mnt on my node-host of OS I'm able to mount to my 
NFS storage server. So why isn't this working for my mysql container?
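
Error 13 is EACCES: the mysqld process inside the container cannot write to the export. One common workaround, sketched on the NFS server side (paths and the 10.0.0.0/16 client range are placeholders; OpenShift's NFS volume-security docs describe supplemental groups as the cleaner alternative):

```shell
# On the NFS server: hand the export directory to the anonymous user
chown -R nfsnobody:nfsnobody /path/mysql
chmod 770 /path/mysql

# /etc/exports — all_squash maps every remote UID to the anonymous user,
# so the container's arbitrary UID no longer matters:
echo '/path/mysql 10.0.0.0/16(rw,sync,all_squash)' >> /etc/exports
exportfs -ra
```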

  

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


-- 
Ben Parees | OpenShift

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Persistent Storage MYSQL

2016-07-27 Thread Den Cowboy
Yeah, that's something which is different. On my master I was working with 
exportfs -a etc.
But now it doesn't matter. The permissions are IP based. (We added the range 
in which our cluster is running.)

From: bpar...@redhat.com
Date: Wed, 27 Jul 2016 09:04:12 -0400
Subject: Re: Persistent Storage MYSQL
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

what are the permissions of the NFS exported volume?  and what is in the export 
definition?


On Wed, Jul 27, 2016 at 8:35 AM, Den Cowboy <dencow...@hotmail.com> wrote:



I try to make my MySQL pod persistent.
I always did this on training environments where my NFS server was on my 
master and I never had issues.
Now my NFS is on another server.

My pv looks like this
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
"name": "mysql-data"
  },
  "spec": {
"capacity": {
"storage": "20Gi"
},
"accessModes": [ "ReadWriteMany" ],
"nfs": {
"path": "/path/mysql",
"server": "server-IP"
}
  }
}

I also created a pvc and edited the dc of my mysql to use it, just like I 
always did on my training environment.
After editing the dc a new deploy is triggered, but my pod is stuck in a 
recreation loop.


2016-07-27 08:17:01 0 [Note] /opt/rh/rh-mysql56/root/usr/libexec/mysqld (mysqld 
5.6.26) starting as process 18 ...
2016-07-27 08:17:01 18 [Warning] Can't create test file 
/var/lib/mysql/data/mysql-2-edp7q.lower-test
2016-07-27 08:17:01 18 [Warning] Can't create test file 
/var/lib/mysql/data/mysql-2-edp7q.lower-test
2016-07-27 08:17:01 7fd2befc3840  InnoDB: Operating system error number 
13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
2016-07-27 08:17:01 18 [ERROR] InnoDB: Creating or opening ./ibdata1 failed!
Does someone know what could be the issue?
When I create a directory /mnt on my node-host of OS I'm able to mount to my 
NFS storage server. So why isn't this working for my mysql container?

  

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


-- 
Ben Parees | OpenShift

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Every user can authenticate on docker registry on openshift

2016-07-27 Thread Den Cowboy
Is it normal that every user can authenticate on the docker-registry of 
openshift?
I was always using the same user as my cluster-admin in my openshift.
But now I tried something else:

docker login -u userdoesnotexist \
> -p u89cSfZVXBBxw1cYsIlGKcHHYM_ycxxxlI 172.30.xx.xx:5000
Email (a...@mail.com):
WARNING: login credentials saved in /home/centos/.docker/config.json
Login Succeeded
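
As far as I understand it, the registry only validates the token; the username is effectively ignored, which is why a nonexistent user "succeeds". What actually gates access are the OpenShift roles behind the token — a sketch (project name illustrative):

```shell
# Any username works as long as the token itself is valid:
docker login -u anything -p "$(oc whoami -t)" 172.30.xx.xx:5000

# Pull/push rights are controlled per project via roles, for example:
oadm policy add-role-to-user registry-viewer someuser -n myproject   # pull
oadm policy add-role-to-user registry-editor someuser -n myproject   # push
```

So the login prompt succeeding says nothing about what the token can actually pull or push.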
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Persistent Storage MYSQL

2016-07-27 Thread Den Cowboy
I try to make my MySQL pod persistent.
I always did this on training environments where my NFS server was on my 
master and I never had issues.
Now my NFS is on another server.

My pv looks like this
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
"name": "mysql-data"
  },
  "spec": {
"capacity": {
"storage": "20Gi"
},
"accessModes": [ "ReadWriteMany" ],
"nfs": {
"path": "/path/mysql",
"server": "server-IP"
}
  }
}

I also created a pvc and edited the dc of my mysql to use it, just like I 
always did on my training environment.
After editing the dc a new deploy is triggered, but my pod is stuck in a 
recreation loop.

2016-07-27 08:17:01 0 [Note] /opt/rh/rh-mysql56/root/usr/libexec/mysqld (mysqld 
5.6.26) starting as process 18 ...
2016-07-27 08:17:01 18 [Warning] Can't create test file 
/var/lib/mysql/data/mysql-2-edp7q.lower-test
2016-07-27 08:17:01 18 [Warning] Can't create test file 
/var/lib/mysql/data/mysql-2-edp7q.lower-test
2016-07-27 08:17:01 7fd2befc3840  InnoDB: Operating system error number 13 in a 
file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
2016-07-27 08:17:01 18 [ERROR] InnoDB: Creating or opening ./ibdata1 failed!
Does someone know what could be the issue?
When I create a directory /mnt on my node-host of OS I'm able to mount to my 
NFS storage server. So why isn't this working for my mysql container?

  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Kibana: This site can’t be reached: ERR_CONTENT_DECODING_FAILED

2016-07-22 Thread Den Cowboy
FYI

Error in Safari:
Cannot decode raw data

Error in Firefox:
Encoding error: unsupported form of compression.

I read something about gzip and I see in the logs of Kibana: 
accept-encoding":"gzip,

Don't know if it has something to do with it
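
One way to narrow down where the double encoding happens — a guess at a useful check, not a known fix (URL is the one from the original mail):

```shell
# Request an uncompressed response and watch the Content-Encoding header:
curl -k -v -H 'Accept-Encoding: identity' https://kibana.xx-dev.xx/ -o /dev/null

# Request gzip and check whether the body is actually valid gzip data:
curl -k -s -H 'Accept-Encoding: gzip' https://kibana.xx-dev.xx/ | file -
```

If `file` doesn't report gzip data while the response header claims gzip, something in the chain (router or auth proxy) is compressing twice or mislabeling the encoding.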

From: dencow...@hotmail.com
To: users@lists.openshift.redhat.com
Subject: Kibana: This site can’t be reached: ERR_CONTENT_DECODING_FAILED
Date: Fri, 22 Jul 2016 09:53:25 +




Hi,

I'm using openshift origin 1.2.0
I try to set up logging (I did it already a few times so I know the procedure).
I performed the prereqs and started the template with:

oc new-app logging-deployer-template \
>  --param KIBANA_HOSTNAME=kibana.xx-dev.xx \
>  --param ES_CLUSTER_SIZE=1 \
>  --param PUBLIC_MASTER_URL=https://master.xx-xx:8443 \
>  --param IMAGE_VERSION=v1.2.0

Everything is pulled and starting fine.
So after everything is running I try to access Kibana, which redirects me 
to the login page of Kibana (identical to the login page of OpenShift).

After logging in I'm redirected to my Kibana URL but I don't see my logs. I got:
This site can’t be reached
The webpage at https://kibana.xx.xx/ might be temporarily down or it may have 
moved permanently to a new web address.
ERR_CONTENT_DECODING_FAILED
I don't see weird logs in my pods/containers.

Can someone help me? I tried it multiple times and in multiple browsers.
  

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Kibana: This site can’t be reached: ERR_CONTENT_DECODING_FAILED

2016-07-22 Thread Den Cowboy
Hi,

I'm using openshift origin 1.2.0
I try to set up logging (I did it already a few times so I know the procedure).
I performed the prereqs and started the template with:

oc new-app logging-deployer-template \
>  --param KIBANA_HOSTNAME=kibana.xx-dev.xx \
>  --param ES_CLUSTER_SIZE=1 \
>  --param PUBLIC_MASTER_URL=https://master.xx-xx:8443 \
>  --param IMAGE_VERSION=v1.2.0

Everything is pulled and starting fine.
So after everything is running I try to access Kibana, which redirects me 
to the login page of Kibana (identical to the login page of OpenShift).

After logging in I'm redirected to my Kibana URL but I don't see my logs. I got:
This site can’t be reached
The webpage at https://kibana.xx.xx/ might be temporarily down or it may have 
moved permanently to a new web address.
ERR_CONTENT_DECODING_FAILED
I don't see weird logs in my pods/containers.

Can someone help me? I tried it multiple times and in multiple browsers.
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: OpenShift origin: internal routing with services

2016-07-20 Thread Den Cowboy
I read the documentation about it.
It's not entirely clear to me, but it seems you can deploy multiple routers, 
where router A handles the routes of projects A, B and C, and router B handles 
the routes of projects D, E and F, or something like that?

I don't really see how I can create a router which handles routes 
internally (without going out to the outside world).

> Date: Mon, 18 Jul 2016 19:04:22 +0200
> From: al-openshiftus...@none.at
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
> Subject: Re: OpenShift origin: internal routing with services
> 
> Am 14-07-2016 09:27, schrieb Den Cowboy:
> 
> > Hi,
> > 
> > At the moment we have a setup like this:
> > project A
> > project B
> > 
> > project A contains a pod A which needs an API which is running in pod B 
> > in project B.
> > Pod A has an environment variable: "api-route.xxx.dev/api"
> > So when I'm going to that route in my browser I'm able to see the API 
> > and this works fine (okay we're able to configure https route etc)
> > 
> > But we'd like to keep everything internally. So without using routes. 
> > So thanks to the ovs-multitenant-pluging we're able to "join" the 
> > networks of our projects (namespaces). And I'm able to ping to from 
> > inside pod A to the service of my pod B in project B.
> > ping api-service.project-b
> > api-service.project-b.svc.cluster.local (172.30.xx.xx) 56(84) bytes of 
> > data.
> > 
> > So we're able to access the pod from its service without using an 
> > external route.
> > But like I told in the beginning. Our API is on api-route.xxx.dev/api 
> > so I have to go to something like 172.30.xx.xx:8080/api.
> > 
> > Is there a way to obtain this goal? So we try to connect to a 'subpath' 
> > of our service without using routes.
> > Is this possible?
> 
> I think  you can go another way and use a internal router with 
> router-sharding
> 
> https://docs.openshift.org/latest/architecture/core_concepts/routes.html#router-sharding
> 
> and deploy the internal api on the internal router.
> 
> Best regards
> Aleks
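
A sketch of what Aleks suggests, using the ROUTE_LABELS selector from the router-sharding docs (the router name and label are made up; the internal router's service simply never gets exposed outside the cluster):

```shell
# Second router that only serves routes carrying type=internal
oadm router internal-router --replicas=1 --service-account=router
oc env dc/internal-router ROUTE_LABELS='type=internal'

# Label the API route so only the internal router picks it up
oc label route my-api-route type=internal
```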
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


TLS: secure routes on OpenShift

2016-07-20 Thread Den Cowboy
I read about the 3 types of secure routes:
- Edge: encrypts traffic from the outside to the router
- Passthrough: encrypts traffic from the outside all the way to the pod
- Re-encrypt: encrypts from the outside to the router and then re-encrypts from 
the router to the pod (internally)

I'm able to create such a routes using the webconsole (or cli). 
But I don't really know what to do if I have an application which needs to 
connect with these secure routes?
For example:

project test1 (ns)
1 pod which is hosting some API service

project test2 (ns)
1 pod which is hosting some website which needs to connect with the API service.

When you start the pod in project2 you're able to give an ENV VAR which will 
contain the path to your API service:
oc new-app -e URL="http://my-api.dev.all" test1/app1

But when we have only a secure route to our API (so https)
And we will start the pod in project2 with:
oc new-app -e URL="https://my-api.dev.all" test1/app1

What do we need to do to have a full communication? Do we need to add the 
certificate(s) of our app1 somewhere for app2?
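
A hedged sketch for the edge case (file names, secret names and hostnames are illustrative): the route carries the cert, and pod 2 only needs to trust the signing CA — e.g. mounted from a secret — to call the https URL. With passthrough, the cert/key live in pod 1 itself and the router config doesn't change.

```shell
# In project test1: edge-terminated route for the API service
oc create route edge my-api --service=api-service \
  --hostname=my-api.dev.all \
  --cert=api.crt --key=api.key --ca-cert=ca.crt

# In project test2: give app1's pod the CA so it can verify the route cert
oc secrets new api-ca ca.crt
oc volume dc/app1 --add --name=api-ca --type=secret \
  --secret-name=api-ca --mount-path=/etc/secrets/api-ca
oc env dc/app1 URL="https://my-api.dev.all" CA_FILE=/etc/secrets/api-ca/ca.crt
```

How the application consumes CA_FILE is up to its HTTP client; if the route cert is signed by a publicly trusted CA, the mounting step is unnecessary.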
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


OpenShift origin: internal routing with services

2016-07-14 Thread Den Cowboy
Hi,

At the moment we have a setup like this:
project A
project B

project A contains a pod A which needs an API which is running in pod B in 
project B.
Pod A has an environment variable: "api-route.xxx.dev/api"
So when I'm going to that route in my browser I'm able to see the API and this 
works fine (okay we're able to configure https route etc)

But we'd like to keep everything internal, so without using routes. Thanks 
to the ovs-multitenant plugin we're able to "join" the networks of our 
projects (namespaces), and I'm able to ping from inside pod A to the service 
of my pod B in project B.
ping api-service.project-b
api-service.project-b.svc.cluster.local (172.30.xx.xx) 56(84) bytes of data.   

So we're able to access the pod from its service without using an external 
route. 
But like I told in the beginning. Our API is on api-route.xxx.dev/api so I have 
to go to something like 172.30.xx.xx:8080/api.

Is there a way to obtain this goal? So we try to connect to a 'subpath' of our 
service without using routes.
Is this possible?

   
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Adding nodes to existing origin 1.2 cluster

2016-07-12 Thread Den Cowboy
Thanks! That was the solution!

From: alexwa...@exosite.com
Date: Tue, 12 Jul 2016 13:34:18 -0500
Subject: Re: Adding nodes to existing origin 1.2 cluster
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

I see that your [OSEv3:children] section does not contain new_nodes.  Maybe try 
adding that?  Mine contains masters, nodes, and new_nodes (we're using built-in 
etcd right now).
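
Concretely, the scaleup playbook only applies [OSEv3:vars] (deployment_type included) to hosts in groups listed under [OSEv3:children], so the children section needs to become:

```ini
[OSEv3:children]
masters
nodes
etcd
new_nodes
```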
On Tue, Jul 12, 2016 at 1:27 PM, Den Cowboy <dencow...@hotmail.com> wrote:



I try to add nodes to our v1.2 cluster:
I added the new_nodes section to my /etc/ansible/host

[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_pkg_version=-1.2.0-4.el7


# uncomment the following to enable htpasswd authentication; defaults to 
DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 
'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': 
'/etc/origin/master/htpasswd'}]


# host group for masters
[masters]
master


# host group for etcd
[etcd]
master

# host group for nodes, includes region info
[nodes]
node1 openshift_node_labels="{'xx'}"
master openshift_node_labels="{'xx'}"

[new_nodes]
node2 openshift_node_labels="{'xx'}"
node3 openshift_node_labels="{'xx'}"
node4 openshift_node_labels="{'xx'}"


I execute:
ansible-playbook ~/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml

It seems to start fine but pretty fast I get the following error:
TASK [openshift_facts : Gather Cluster facts and set is_containerized if 
needed] ***
fatal: [node2]: FAILED! => {"failed": true, "msg": "{{ deployment_type }}: 
'deployment_type' is undefined"}
fatal: [node3]: FAILED! => {"failed": true, "msg": "{{ deployment_type }}: 
'deployment_type' is undefined"}
fatal: [node4]: FAILED! => {"failed": true, "msg": "{{ deployment_type }}: 
'deployment_type' is undefined"}

But the deployment_type is defined in my /etc/ansible/hosts file? Also, the 
first deployment (some weeks ago) went well.
  

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


-- 
Alex Wauck // DevOps Engineer
E X O S I T E www.exosite.com

Making Machines More Human.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Adding nodes to existing origin 1.2 cluster

2016-07-12 Thread Den Cowboy
I try to add nodes to our v1.2 cluster:
I added the new_nodes section to my /etc/ansible/host

[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_pkg_version=-1.2.0-4.el7


# uncomment the following to enable htpasswd authentication; defaults to 
DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 
'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': 
'/etc/origin/master/htpasswd'}]


# host group for masters
[masters]
master


# host group for etcd
[etcd]
master

# host group for nodes, includes region info
[nodes]
node1 openshift_node_labels="{'xx'}"
master openshift_node_labels="{'xx'}"

[new_nodes]
node2 openshift_node_labels="{'xx'}"
node3 openshift_node_labels="{'xx'}"
node4 openshift_node_labels="{'xx'}"


I execute:
ansible-playbook ~/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml

It seems to start fine but pretty fast I get the following error:
TASK [openshift_facts : Gather Cluster facts and set is_containerized if 
needed] ***
fatal: [node2]: FAILED! => {"failed": true, "msg": "{{ deployment_type }}: 
'deployment_type' is undefined"}
fatal: [node3]: FAILED! => {"failed": true, "msg": "{{ deployment_type }}: 
'deployment_type' is undefined"}
fatal: [node4]: FAILED! => {"failed": true, "msg": "{{ deployment_type }}: 
'deployment_type' is undefined"}

But the deployment_type is defined in my /etc/ansible/hosts file? Also, the 
first deployment (some weeks ago) went well.
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Create selfsigned certs for securing openshift registry

2016-07-08 Thread Den Cowboy
I've created the certificate with my wildcard hostname too, and I've exposed 
it. I created pusher service accounts in some projects because we are working 
with an external Jenkins which builds images. Everything works fine now. Thanks!

Date: Fri, 8 Jul 2016 09:05:14 -0400
Subject: Re: Create selfsigned certs for securing openshift registry
From: jdeti...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com



On Jul 8, 2016 1:52 AM, "Den Cowboy" <dencow...@hotmail.com> wrote:
>
> I try to secure my openshift registry:
>
> $ oadm ca create-server-cert \
> --signer-cert=/etc/origin/master/ca.crt \
> --signer-key=/etc/origin/master/ca.key \
> --signer-serial=/etc/origin/master/ca.serial.txt \
> --hostnames='docker-registry.default.svc.cluster.local,172.30.124.220' \
> --cert=/etc/secrets/registry.crt \
> --key=/etc/secrets/registry.key
>
> Which hostnames do I have to use?
> The service IP of my docker registry of course but what then?:

Currently everything internal should be using just the service IP.

> docker-registry.default.svc.cluster.local

This would cover the created service. We have plans to eventually use the 
registry service name instead of IP.

> OR/AND
> docker-registry.dev.wildcard.com

This would only be needed if you intend to expose the registry using a route 
for access external to the cluster.

> Thanks
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Create selfsigned certs for securing openshift registry

2016-07-07 Thread Den Cowboy
I try to secure my openshift registry:

$ oadm ca create-server-cert \
--signer-cert=/etc/origin/master/ca.crt \
--signer-key=/etc/origin/master/ca.key \
--signer-serial=/etc/origin/master/ca.serial.txt \
--hostnames='docker-registry.default.svc.cluster.local,172.30.124.220' \
--cert=/etc/secrets/registry.crt \
--key=/etc/secrets/registry.key
Which hostnames do I have to use?
The service IP of my docker registry of course but what then?:

docker-registry.default.svc.cluster.local
OR/AND
docker-registry.dev.wildcard.com
Thanks
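
Putting the answer earlier in this thread together with this: the hostname list ends up covering the service DNS name, the service IP, and — only if the registry is exposed via a route — the external name (wildcard domain illustrative):

```shell
oadm ca create-server-cert \
  --signer-cert=/etc/origin/master/ca.crt \
  --signer-key=/etc/origin/master/ca.key \
  --signer-serial=/etc/origin/master/ca.serial.txt \
  --hostnames='docker-registry.default.svc.cluster.local,172.30.124.220,docker-registry.dev.wildcard.com' \
  --cert=/etc/secrets/registry.crt \
  --key=/etc/secrets/registry.key
```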
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Back-off pulling image "/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"

2016-07-07 Thread Den Cowboy
That did not work.
It was able to pull the deployer image and the fluentd image, but not the 
curator, proxy, kibana and es images.

In the registry for auth-proxy:
sha256# ls
b5aa482640b96d2df8d5ec839488b7e144eb8189a4102b3b76ca12638630e833

in the pull logs:
auth-proxy@sha256:179b84eb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65

(for fluentd it isn't showing logs with @sha, that's probably the reason why it 
succeeds.)

From: dencow...@hotmail.com
To: agold...@redhat.com
Subject: RE: Back-off pulling image 
"/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"
Date: Thu, 7 Jul 2016 21:00:02 +
CC: users@lists.openshift.redhat.com




I don't see it.
Would it be a fix if I delete all the images in my registry, repull them 
from docker.io on a machine with Docker version 1.10.3 (after yum install 
docker) and push them to my registry? (So push and pull happen with the same 
Docker version.)

From: dencow...@hotmail.com
To: agold...@redhat.com
Subject: RE: Back-off pulling image 
"/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"
Date: Thu, 7 Jul 2016 20:45:43 +
CC: users@lists.openshift.redhat.com




sorry,
My node:
Version: 1.10.3
 API version: 1.22
 Package version: docker-common-1.10.3-44.el7.centos.x86_64
 Go version:  go1.4.2
 Git commit:  9419b24-unsupported
 Built:   Fri Jun 24 12:09:49 2016
 OS/Arch: linux/amd64


Pushed with docker 1.11 (after a normal install on Ubuntu)

From: agold...@redhat.com
Date: Thu, 7 Jul 2016 16:43:33 -0400
Subject: Re: Back-off pulling image 
"/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

(asking again) Did you push the image using Docker 1.10 and your node is 
running Docker 1.9?

On Thu, Jul 7, 2016 at 4:40 PM, Den Cowboy <dencow...@hotmail.com> wrote:



Thanks. How can I handle this when I'm using my own images?

maybe a clearer explanation:

We are using our own docker registry which is secured with a selfsigned 
certificate. So if we place the cert on our openshift node we're able to 
pull. We pulled the openshift-origin images v1.2.0 from dockerhub and 
pushed them into our docker registry. We are using the registry instead 
of docker.io/openshift/origin-xxx

This works fine for our router, registry, cluster metrics project, etc.


But when we are deploying the logging project: 
https://docs.openshift.org/latest/install_config/aggregate_logging.html it 
doesn't work.

the pull of the registry.com/asco/origin-logging-deployment:v1.2.0 is fine and 
it deploys.

But the problem arises later. The fluentd image is also pulled fine (from our 
registry).

But the rest of the images aren't pulled correctly.



example of error events/logs: (origin-logging-"empty" is probably an issue?)



pulling image 
"registry.com/asco/origin-logging-auth-proxy@sha256:179b84eb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65"


Failed to pull image 
"registry.com/asco/origin-logging-auth-proxy@sha256:179b84xb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65":
 image pull failed for 
registry.com/asco/origin-logging-auth-proxy@sha256:179b84ex803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65,
 this may be because there are no credentials on this request. details: 
(manifest unknown: manifest unknown)


From: agold...@redhat.com
Date: Thu, 7 Jul 2016 16:34:29 -0400
Subject: Re: Back-off pulling image 
"/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com



On Thu, Jul 7, 2016 at 3:48 PM, Den Cowboy <dencow...@hotmail.com> wrote:



Hi,

We are using our own registry which contains some necessary origin-images for 
us.
We already deployed the router, registry and cluster metrics using our regisry:

The images are all in the form of:
myregistry.com/company/origin-

Now I try to deploy a logging project:
After starting the logging deployer template (in which I described our registry 
+ v1.2.0) it starts pulling the origin-logging-deployer image which is fine.

Then everything seems to start, but:
Back-off pulling image 
"myregistry.com/company/origin-logging-elasticsearch@sha256:5ad3b9e964ec6e420ac047be6ae96bf04abe817d94a7d77592af1c119543b37b"
(manifest unknown: manifest unknown)


Did you push the image using Docker 1.10 and your node is running Docker 1.9? 
In the deploymentconfig is also the image with the @sha 
This is happening for each image of our deployment (es, kibana, fluentd, ..)

Why is it adding that @sha after our image?

If you're using ImageChangeTriggers, we translate tags to content-addressable 
IDs for consistent image usage so that a moving tag such as "latest" doesn't 
yield different images and possibly results when you deploy today, tomorrow, 
next week, etc.

RE: Back-off pulling image "/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"

2016-07-07 Thread Den Cowboy
sorry,
My node:
Version: 1.10.3
 API version: 1.22
 Package version: docker-common-1.10.3-44.el7.centos.x86_64
 Go version:  go1.4.2
 Git commit:  9419b24-unsupported
 Built:   Fri Jun 24 12:09:49 2016
 OS/Arch: linux/amd64


Pushed with docker 1.11 (after a normal install on Ubuntu)

From: agold...@redhat.com
Date: Thu, 7 Jul 2016 16:43:33 -0400
Subject: Re: Back-off pulling image 
"/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

(asking again) Did you push the image using Docker 1.10 and your node is 
running Docker 1.9?

On Thu, Jul 7, 2016 at 4:40 PM, Den Cowboy <dencow...@hotmail.com> wrote:



Thanks. How can I handle this when I'm using my own images?

maybe a clearer explanation:

We are using our own docker registry which is secured with a selfsigned 
certificate. So if we place the cert on our openshift node we're able to 
pull. We pulled the openshift-origin images v1.2.0 from dockerhub and 
pushed them into our docker registry. We are using the registry instead 
of docker.io/openshift/origin-xxx

This works fine for our router, registry, cluster metrics project, etc.


But when we are deploying the logging project: 
https://docs.openshift.org/latest/install_config/aggregate_logging.html it 
doesn't work.

the pull of the registry.com/asco/origin-logging-deployment:v1.2.0 is fine and 
it deploys.

But the problem arises later. The fluentd image is also pulled fine (from our 
registry).

But the rest of the images aren't pulled correctly.



example of error events/logs: (origin-logging-"empty" is probably an issue?)



pulling image 
"registry.com/asco/origin-logging-auth-proxy@sha256:179b84eb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65"


Failed to pull image 
"registry.com/asco/origin-logging-auth-proxy@sha256:179b84xb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65":
 image pull failed for 
registry.com/asco/origin-logging-auth-proxy@sha256:179b84ex803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65,
 this may be because there are no credentials on this request. details: 
(manifest unknown: manifest unknown)


From: agold...@redhat.com
Date: Thu, 7 Jul 2016 16:34:29 -0400
Subject: Re: Back-off pulling image 
"/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com



On Thu, Jul 7, 2016 at 3:48 PM, Den Cowboy <dencow...@hotmail.com> wrote:



Hi,

We are using our own registry which contains some necessary origin-images for 
us.
We already deployed the router, registry and cluster metrics using our regisry:

The images are all in the form of:
myregistry.com/company/origin-

Now I try to deploy a logging project:
After starting the logging deployer template (in which I described our registry 
+ v1.2.0) it starts pulling the origin-logging-deployer image which is fine.

Then everything seems to start, but:
Back-off pulling image 
"myregistry.com/company/origin-logging-elasticsearch@sha256:5ad3b9e964ec6e420ac047be6ae96bf04abe817d94a7d77592af1c119543b37b"
(manifest unknown: manifest unknown)


Did you push the image using Docker 1.10 and your node is running Docker 1.9? 
In the deploymentconfig is also the image with the @sha 
This is happening for each image of our deployment (es, kibana, fluentd, ..)

Why is it adding that @sha after our image?

If you're using ImageChangeTriggers, we translate tags to content-addressable 
IDs for consistent image usage so that a moving tag such as "latest" doesn't 
yield different images and possibly results when you deploy today, tomorrow, 
next week, etc.  
  

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Back-off pulling image "/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"

2016-07-07 Thread Den Cowboy
Thanks. How can I handle this when I'm using my own images?

maybe a clearer explanation:

We are using our own docker registry which is secured with a selfsigned 
certificate. So if we place the cert on our openshift node we're able to 
pull. We pulled the openshift-origin images v1.2.0 from dockerhub and 
pushed them into our docker registry. We are using the registry instead 
of docker.io/openshift/origin-xxx

This works fine for our router, registry, cluster metrics project, etc.


But when we are deploying the logging project: 
https://docs.openshift.org/latest/install_config/aggregate_logging.html it 
doesn't work.

the pull of the registry.com/asco/origin-logging-deployment:v1.2.0 is fine and 
it deploys.

But the problem arises later. The fluentd image is also pulled fine (from our 
registry).

But the rest of the images aren't pulled correctly.



example of error events/logs: (origin-logging-"empty" is probably an issue?)



pulling image 
"registry.com/asco/origin-logging-auth-proxy@sha256:179b84eb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65"


Failed to pull image 
"registry.com/asco/origin-logging-auth-proxy@sha256:179b84xb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65":
 image pull failed for 
registry.com/asco/origin-logging-auth-proxy@sha256:179b84ex803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65,
 this may be because there are no credentials on this request. details: 
(manifest unknown: manifest unknown)


From: agold...@redhat.com
Date: Thu, 7 Jul 2016 16:34:29 -0400
Subject: Re: Back-off pulling image 
"/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com



On Thu, Jul 7, 2016 at 3:48 PM, Den Cowboy <dencow...@hotmail.com> wrote:



Hi,

We are using our own registry which contains some necessary origin-images for 
us.
We already deployed the router, registry and cluster metrics using our regisry:

The images are all in the form of:
myregistry.com/company/origin-

Now I try to deploy a logging project:
After starting the logging deployer template (in which I described our registry 
+ v1.2.0) it starts pulling the origin-logging-deployer image which is fine.

Then everything seems to start, but:
Back-off pulling image 
"myregistry.com/company/origin-logging-elasticsearch@sha256:5ad3b9e964ec6e420ac047be6ae96bf04abe817d94a7d77592af1c119543b37b"
(manifest unknown: manifest unknown)


Did you push the image using Docker 1.10 and your node is running Docker 1.9? 
In the deploymentconfig is also the image with the @sha 
This is happening for each image of our deployment (es, kibana, fluentd, ..)

Why is it adding that @sha after our image?

If you're using ImageChangeTriggers, we translate tags to content-addressable 
IDs for consistent image usage so that a moving tag such as "latest" doesn't 
yield different images and possibly results when you deploy today, tomorrow, 
next week, etc.  
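As an aside, the pull spec in those events can be split with plain shell parameter expansion to see exactly which repository and digest the node asks the registry for (the spec is copied from the error above; this is just a sketch, not anything OpenShift runs itself):

```shell
# Split an image pull spec of the form host/namespace/name@sha256:<hash>.
# "manifest unknown" means the registry has no manifest stored under
# exactly this digest for that repository.
spec="registry.com/asco/origin-logging-auth-proxy@sha256:179b84eb803fac116f913182c2fb64a2e7adf01dd04fc58e1336d96ce0ce3d65"
digest="${spec##*@}"   # everything after the '@'
repo="${spec%@*}"      # registry host + repository path
name="${repo##*/}"     # image name only
echo "$repo"
echo "$name"
echo "$digest"
```

If `docker pull <repo>@<digest>` with the digest printed here fails against your registry, the image was most likely pushed by a Docker version that stored a different manifest schema, which matches the Docker 1.9/1.10 question above.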
  

___

users mailing list

users@lists.openshift.redhat.com

http://lists.openshift.redhat.com/openshiftmm/listinfo/users





Back-off pulling image "/origin-logging-curator@sha256:b89cbdfc4e0e7d594f7a49c7581ae3f75b9d0313fce2ed8be83ee5c0426af72d"

2016-07-07 Thread Den Cowboy
Hi,

We are using our own registry which contains some necessary origin-images for 
us.
We already deployed the router, registry and cluster metrics using our registry:

The images are all in the form of:
myregistry.com/company/origin-

Now I try to deploy a logging project:
After starting the logging deployer template (in which I described our registry 
+ v1.2.0) it starts pulling the origin-logging-deployer image which is fine.

Then everything seems to start, but:
Back-off pulling image 
"myregistry.com/company/origin-logging-elasticsearch@sha256:5ad3b9e964ec6e420ac047be6ae96bf04abe817d94a7d77592af1c119543b37b"
(manifest unknown: manifest unknown)


In the deploymentconfig the image is also listed with the @sha.
This is happening for each image of our deployment (es, kibana, fluentd, ..)

Why is it adding that @sha after our image?
 


RE: Unable to connect with service using mysql-ephemeral template

2016-07-06 Thread Den Cowboy
I have an older environment on Amazon (older images + v1.1.6) and there it 
works for mysql:
I perform a lookup of my service IP:

nslookup 172.30.177.4
Server:     172.30.0.1
Address:    172.30.0.1#53

Non-authoritative answer:   
4.177.30.172.in-addr.arpa   name = mysql.dev-activiti.svc.cluster.local.

Authoritative answers can be found from:

But when I perform the same on my environment on OVH (newer version of course):
sh-4.2$ nslookup 172.30.222.94
Server:     213.186.33.xx
Address:    213.186.33.xx#53

** server can't find 94.222.30.172.in-addr.arpa.: NXDOMAIN
So it's pointing to the wrong server (not OpenShift's 172.30...).
I didn't see any issue during the installation, and /var/log/messages tells 
nothing either.

logs:
Jul  6 19:11:17 node01 origin-node: I0706 19:11:17.989899    4926 
manager.go:1024] Using docker native exec to run cmd [/bin/sh -i -c 
MYSQL_PWD="$MYSQL_PASSWORD" mysql -h 127.0.0.1 -u $MYSQL_USER -D 
$MYSQL_DATABASE -e 'SELECT 1'] inside container {docker 
b4958c468b643b7ec7dc239569f73e2ea8568b6c6d7e4151cffd621c58db5778}
Jul  6 19:11:17 node01 journal: time="2016-07-06T19:11:17.990407769+02:00" 
level=info msg="{Action=exec, 
ID=b4958c468b643b7ec7dc239569f73e2ea8568b6c6d7e4151cffd621c58db5778, 
LoginUID=4294967295, PID=4926}"
Jul  6 19:11:17 node01 journal: time="2016-07-06T19:11:17.991021609+02:00" 
level=info msg="{Action=start, LoginUID=4294967295, PID=4926}"
Jul  6 19:11:18 node01 origin-node: I0706 19:11:18.042239    4926 
proxier.go:484] Setting endpoints for "test/mysql:mysql" to [10.1.0.2:3306]

It keeps showing this log every 10 seconds:
Using docker native exec to run cmd [/bin/sh -i -c MYSQL_PWD="$MYSQL_PASSWORD" 
mysql -h 127.0.0.1 -u $MYSQL_USER -D $MYSQL_DATABASE -e 'SELECT 1'] inside 
container ...
From: dencow...@hotmail.com
To: bpar...@redhat.com
Subject: RE: Unable to connect with service using mysql-ephemeral template
Date: Wed, 6 Jul 2016 17:03:56 +
CC: users@lists.openshift.redhat.com




I seem to have the same issue for my postgresdb:
nslookup 172.30.200.135
** server can't find 135.200.30.172.in-addr.arpa.: NXDOMAIN 



From: bpar...@redhat.com
Date: Wed, 6 Jul 2016 12:53:06 -0400
Subject: Re: Unable to connect with service using mysql-ephemeral template
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com



On Wed, Jul 6, 2016 at 12:44 PM, Den Cowboy <dencow...@hotmail.com> wrote:



I don't know the best way to check:

I was wondering if you had other apps deployed in your cluster that were 
accessing this, or other services by service hostname.
I see this error in my events after the deploy:
Readiness probe failed: sh: cannot set terminal process group (-1): 
Inappropriate ioctl for device
sh: no job control in this shell
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)



I also saw this: 
https://github.com/openshift/origin/blob/master/docs/debugging-openshift.md
I also put 8.8.8.8 as a nameserver in my /etc/resolv.conf and rebooted, but it 
didn't work, not even after scaling the pod down and up.

From: bpar...@redhat.com
Date: Wed, 6 Jul 2016 12:31:14 -0400
Subject: Re: Unable to connect with service using mysql-ephemeral template
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

is service hostname resolution otherwise working in your cluster?


On Wed, Jul 6, 2016 at 12:20 PM, Den Cowboy <dencow...@hotmail.com> wrote:



ping mysql: unknown host mysql
nslookup mysql:  
Server: 213.186.33.xx   
Address:213.186.33.xx#53

** server can't find mysql: NXDOMAIN  
dig: answer 0

content of /etc/resolv.conf:

search test.svc.cluster.local svc.cluster.local cluster.local ovh.net   
nameserver 178.32.27.xx
nameserver 213.186.33.xx
options ndots:5   

This 

RE: Unable to connect with service using mysql-ephemeral template

2016-07-06 Thread Den Cowboy
I seem to have the same issue for my postgresdb:
nslookup 172.30.200.135
** server can't find 135.200.30.172.in-addr.arpa.: NXDOMAIN 



From: bpar...@redhat.com
Date: Wed, 6 Jul 2016 12:53:06 -0400
Subject: Re: Unable to connect with service using mysql-ephemeral template
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com



On Wed, Jul 6, 2016 at 12:44 PM, Den Cowboy <dencow...@hotmail.com> wrote:



I don't know the best way to check:

I was wondering if you had other apps deployed in your cluster that were 
accessing this, or other services by service hostname.
I see this error in my events after the deploy:
Readiness probe failed: sh: cannot set terminal process group (-1): 
Inappropriate ioctl for device
sh: no job control in this shell
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)



I also saw this: 
https://github.com/openshift/origin/blob/master/docs/debugging-openshift.md
I also put 8.8.8.8 as a nameserver in my /etc/resolv.conf and rebooted, but it 
didn't work, not even after scaling the pod down and up.

From: bpar...@redhat.com
Date: Wed, 6 Jul 2016 12:31:14 -0400
Subject: Re: Unable to connect with service using mysql-ephemeral template
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

is service hostname resolution otherwise working in your cluster?


On Wed, Jul 6, 2016 at 12:20 PM, Den Cowboy <dencow...@hotmail.com> wrote:



ping mysql: unknown host mysql
nslookup mysql:  
Server: 213.186.33.xx   
Address:213.186.33.xx#53

** server can't find mysql: NXDOMAIN  
dig: answer 0

content of /etc/resolv.conf:

search test.svc.cluster.local svc.cluster.local cluster.local ovh.net   
nameserver 178.32.27.xx
nameserver 213.186.33.xx
options ndots:5   
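As a rough illustration (plain resolver behaviour, nothing OpenShift-specific is assumed): with `options ndots:5`, a short name like `mysql` is tried against each `search` suffix in order, so it only resolves if one of the listed nameservers can answer the `*.cluster.local` forms, which the OVH nameservers above cannot:

```shell
# Expand the short service name the way the resolver's search list does.
name=mysql
candidates=
for suffix in test.svc.cluster.local svc.cluster.local cluster.local ovh.net; do
  candidates="${candidates}${name}.${suffix} "
done
echo "$candidates"
```

Only a DNS server that serves the cluster zone (SkyDNS on the master, reachable by default at the kubernetes service IP) can answer the first three candidates; if /etc/resolv.conf points only at the provider's servers, every candidate returns NXDOMAIN, exactly as shown.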

This works fine (IP = service IP):
mysql -utest -ptest -h172.30.222.94   

ping 172.30.222.94
PING 172.30.222.94 (172.30.222.94) 56(84) bytes of data.
From 10.1.0.1 icmp_seq=1 Destination Host Unreachable
From 10.1.0.1 icmp_seq=2 Destination Host Unreachable
From 10.1.0.1 icmp_seq=3 Destination Host Unreachable
From 10.1.0.1 icmp_seq=4 Destination Host Unreachable

From: bpar...@redhat.com
Date: Wed, 6 Jul 2016 12:02:20 -0400
Subject: Re: Unable to connect with service using mysql-ephemeral template
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Is service DNS resolution otherwise working in your cluster?

if you just enter the container w/o starting the mysql shell are you able to 
dig/nslookup/ping the mysql hostname?

can you check the /etc/resolv.conf settings within the container to ensure the 
cluster DNS server is listed?


On Wed, Jul 6, 2016 at 11:49 AM, Den Cowboy <dencow...@hotmail.com> wrote:



I'm on:
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

I've deployed the mysql-template which went fine:
Now I've a running mysql container. I go to the terminal inside my webconsole 
to authenticate on my mysql container:

mysql -utest -ptest -h127.0.0.1
> mysql

Fine, but when I try my service as my host:
mysql -utest -ptest -hmysql

Error 2005 (HY000): Unknown MySQL server host 'mysql' (0)

My service above my container is called 'mysql'
Can someone explain this issue?
  





-- 
Ben Parees | OpenShift


  


-- 
Ben Parees | OpenShift


  


-- 
Ben Parees | OpenShift




RE: Unable to connect with service using mysql-ephemeral template

2016-07-06 Thread Den Cowboy
I saw the BZ; it seems irrelevant because I'm able to connect on 127.0.0.1:


From: dencow...@hotmail.com
To: bpar...@redhat.com
Subject: RE: Unable to connect with service using mysql-ephemeral template
Date: Wed, 6 Jul 2016 16:44:08 +
CC: users@lists.openshift.redhat.com




I don't know the best way to check:
I see this error in my events after the deploy:
Readiness probe failed: sh: cannot set terminal process group (-1): 
Inappropriate ioctl for device
sh: no job control in this shell
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)



I also saw this: 
https://github.com/openshift/origin/blob/master/docs/debugging-openshift.md
I also put 8.8.8.8 as a nameserver in my /etc/resolv.conf and rebooted, but it 
didn't work, not even after scaling the pod down and up.

From: bpar...@redhat.com
Date: Wed, 6 Jul 2016 12:31:14 -0400
Subject: Re: Unable to connect with service using mysql-ephemeral template
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

is service hostname resolution otherwise working in your cluster?


On Wed, Jul 6, 2016 at 12:20 PM, Den Cowboy <dencow...@hotmail.com> wrote:



ping mysql: unknown host mysql
nslookup mysql:  
Server: 213.186.33.xx   
Address:213.186.33.xx#53

** server can't find mysql: NXDOMAIN  
dig: answer 0

content of /etc/resolv.conf:

search test.svc.cluster.local svc.cluster.local cluster.local ovh.net   
nameserver 178.32.27.xx
nameserver 213.186.33.xx
options ndots:5   

This works fine (IP = service IP):
mysql -utest -ptest -h172.30.222.94   

ping 172.30.222.94
PING 172.30.222.94 (172.30.222.94) 56(84) bytes of data.
From 10.1.0.1 icmp_seq=1 Destination Host Unreachable
From 10.1.0.1 icmp_seq=2 Destination Host Unreachable
From 10.1.0.1 icmp_seq=3 Destination Host Unreachable
From 10.1.0.1 icmp_seq=4 Destination Host Unreachable

From: bpar...@redhat.com
Date: Wed, 6 Jul 2016 12:02:20 -0400
Subject: Re: Unable to connect with service using mysql-ephemeral template
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Is service DNS resolution otherwise working in your cluster?

if you just enter the container w/o starting the mysql shell are you able to 
dig/nslookup/ping the mysql hostname?

can you check the /etc/resolv.conf settings within the container to ensure the 
cluster DNS server is listed?


On Wed, Jul 6, 2016 at 11:49 AM, Den Cowboy <dencow...@hotmail.com> wrote:



I'm on:
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

I've deployed the mysql-template which went fine:
Now I've a running mysql container. I go to the terminal inside my webconsole 
to authenticate on my mysql container:

mysql -utest -ptest -h127.0.0.1
> mysql

Fine, but when I try my service as my host:
mysql -utest -ptest -hmysql

Error 2005 (HY000): Unknown MySQL server host 'mysql' (0)

My service above my container is called 'mysql'
Can someone explain this issue?
  





-- 
Ben Parees | OpenShift


  


-- 
Ben Parees | OpenShift


  



RE: Unable to connect with service using mysql-ephemeral template

2016-07-06 Thread Den Cowboy
I don't know the best way to check:
I see this error in my events after the deploy:
Readiness probe failed: sh: cannot set terminal process group (-1): 
Inappropriate ioctl for device
sh: no job control in this shell
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)



I also saw this: 
https://github.com/openshift/origin/blob/master/docs/debugging-openshift.md
I also put 8.8.8.8 as a nameserver in my /etc/resolv.conf and rebooted, but it 
didn't work, not even after scaling the pod down and up.

From: bpar...@redhat.com
Date: Wed, 6 Jul 2016 12:31:14 -0400
Subject: Re: Unable to connect with service using mysql-ephemeral template
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

is service hostname resolution otherwise working in your cluster?


On Wed, Jul 6, 2016 at 12:20 PM, Den Cowboy <dencow...@hotmail.com> wrote:



ping mysql: unknown host mysql
nslookup mysql:  
Server: 213.186.33.xx   
Address:213.186.33.xx#53

** server can't find mysql: NXDOMAIN  
dig: answer 0

content of /etc/resolv.conf:

search test.svc.cluster.local svc.cluster.local cluster.local ovh.net   
nameserver 178.32.27.xx
nameserver 213.186.33.xx
options ndots:5   

This works fine (IP = service IP):
mysql -utest -ptest -h172.30.222.94   

ping 172.30.222.94
PING 172.30.222.94 (172.30.222.94) 56(84) bytes of data.
From 10.1.0.1 icmp_seq=1 Destination Host Unreachable
From 10.1.0.1 icmp_seq=2 Destination Host Unreachable
From 10.1.0.1 icmp_seq=3 Destination Host Unreachable
From 10.1.0.1 icmp_seq=4 Destination Host Unreachable

From: bpar...@redhat.com
Date: Wed, 6 Jul 2016 12:02:20 -0400
Subject: Re: Unable to connect with service using mysql-ephemeral template
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Is service DNS resolution otherwise working in your cluster?

if you just enter the container w/o starting the mysql shell are you able to 
dig/nslookup/ping the mysql hostname?

can you check the /etc/resolv.conf settings within the container to ensure the 
cluster DNS server is listed?


On Wed, Jul 6, 2016 at 11:49 AM, Den Cowboy <dencow...@hotmail.com> wrote:



I'm on:
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

I've deployed the mysql-template which went fine:
Now I've a running mysql container. I go to the terminal inside my webconsole 
to authenticate on my mysql container:

mysql -utest -ptest -h127.0.0.1
> mysql

Fine, but when I try my service as my host:
mysql -utest -ptest -hmysql

Error 2005 (HY000): Unknown MySQL server host 'mysql' (0)

My service above my container is called 'mysql'
Can someone explain this issue?
  





-- 
Ben Parees | OpenShift


  


-- 
Ben Parees | OpenShift




Unable to connect with service using mysql-ephemeral template

2016-07-06 Thread Den Cowboy
I'm on:
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

I've deployed the mysql-template which went fine:
Now I've a running mysql container. I go to the terminal inside my webconsole 
to authenticate on my mysql container:

mysql -utest -ptest -h127.0.0.1
> mysql

Fine, but when I try my service as my host:
mysql -utest -ptest -hmysql

Error 2005 (HY000): Unknown MySQL server host 'mysql' (0)

My service above my container is called 'mysql'
Can someone explain this issue?


RE: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-23 Thread Den Cowboy
Why are you actually building 1.2.0-4 to make 1.2.0 work, instead of downgrading 
to (or using the older) origin-1.2.0-1.git.10183.7386b49.el7 like alexwauck did? 
Because in Ansible I'm able to use 
openshift_pkg_version=-1.2.0-1.git.10183.7386b49.el7 but not 
openshift_pkg_version=-1.2.0-4.el7

Probably because you said: "This version is still getting signed and pushed 
out.  That takes more time."

Or is this because the version for origin-1.2.0-1.git.10183.7386b49.el7 is:
v1.2.0-1-g7386b49

Which is also a 'bad' version.
So as far as I understand we have to wait till origin-1.2.0-4.el7 is available 
for our ansible install?



From: dencow...@hotmail.com
To: tdaw...@redhat.com
Subject: RE: define openshift origin version (stable 1.2.0) for Ansible install
Date: Thu, 23 Jun 2016 11:17:12 +
CC: users@lists.openshift.redhat.com




Can you maybe explain how to use this?
I performed a yum --enablerepo=centos-openshift-origin-testing install origin\*

oc version gives me 
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

But how do I add nodes (using Ansible) and that kind of stuff? After 
performing the yum install I have just one master and one node on the same host.
Thanks



> From: tdaw...@redhat.com
> Date: Wed, 22 Jun 2016 17:27:17 -0500
> Subject: Re: define openshift origin version (stable 1.2.0) for Ansible 
> install
> To: alexwa...@exosite.com
> CC: dencow...@hotmail.com; users@lists.openshift.redhat.com
> 
> Yep, seems that my new way of creating the rpms for CentOS got the
> version of the rpm right, but wrong for setting the ldflags, which was
> causing the binary to have a different version.
> 
> At some point in the near future we need to re-evaluate git tags and
> versions in the origin.spec file.  (Why it is the rpm spec version
> always 0.0.1 when in reality the version everywhere else is 1.2.0)
> 
> Worked with Scott to figure out a correct way to consistently build
> the rpms.  In the end, neither of our workflows failed in sneaky ways,
> so I just fixed things manually.  Not something we can do
> consistently, but I really needed to get a working 1.2.0 version out.
> 
> What works:  origin-1.2.0-4.el7
> https://cbs.centos.org/koji/buildinfo?buildID=11349
> 
> You should be able to test it within an hour via
> yum --enablerepo=centos-openshift-origin-testing install origin\*
> 
> This version is still getting signed and pushed out.  That takes more time.
> 
> Sorry for all the problems this has caused.
> 
> Troy
> 
> 
> On Wed, Jun 22, 2016 at 2:57 PM, Alex Wauck <alexwa...@exosite.com> wrote:
> > This seems to be caused by the 1.2.0-2.el7 packages containing the wrong
> > version.  I had a conversation on IRC about this earlier (#openshift), and
> > somebody confirmed it.  I suspect a new release will be available soon.  At
> > any rate, downgrading to 1.2.0-1.el7 worked for us.
> >
> > On Wed, Jun 22, 2016 at 8:55 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> >>
> >> I tried:
> >> [OSEv3:vars]
> >> ansible_ssh_user=root
> >> deployment_type=origin
> >> openshift_pkg_version=-1.2.0
> >> openshift_image_tag=-1.2.0
> >>
> >> But it installed a release candidate and not v1.2.0
> >>
> >> oc v1.2.0-rc1-13-g2e62fab
> >> kubernetes v1.2.0-36-g4a3f9c5
> >>
> >> 
> >> From: dencow...@hotmail.com
> >> To: cont...@stephane-klein.info
> >> Subject: RE: define openshift origin version (stable 1.2.0) for Ansible
> >> install
> >> Date: Wed, 22 Jun 2016 12:51:18 +
> >> CC: users@lists.openshift.redhat.com
> >>
> >>
> >> Thanks for your fast reply
> >> This is the beginning of my playbook:
> >>
> >> [OSEv3:vars]
> >> ansible_ssh_user=root
> >> deployment_type=origin
> >> openshift_pkg_version=v1.2.0
> >> openshift_image_tag=v1.2.0
> >>
> >> But I got an error:
> >> TASK [openshift_master_ca : Install the base package for admin tooling]
> >> 
> >> FAILED! => {"changed": false, "failed": true, "msg": "No Package matching
> >> 'originv1.2.0' found available, installed or updated", "rc": 0, "results":
> >> []}
> >>
> >> 
> >> From: cont...@stephane-klein.info
> >> Date: Wed, 22 Jun 2016 13:53:57 +0200
> >> Subject: Re: define openshift origin version (stable 1.2.0) for Ansible
> >> install
> >> To: dencow...@hotmail.com
> >> CC: users@lists.openshift.redhat.com
> >>
> >> Personally I use this options to f

RE: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-23 Thread Den Cowboy
Can you maybe explain how to use this?
I performed a yum --enablerepo=centos-openshift-origin-testing install origin\*

oc version gives me 
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

But how do I add nodes (using Ansible) and that kind of stuff? After 
performing the yum install I have just one master and one node on the same host.
Thanks



> From: tdaw...@redhat.com
> Date: Wed, 22 Jun 2016 17:27:17 -0500
> Subject: Re: define openshift origin version (stable 1.2.0) for Ansible 
> install
> To: alexwa...@exosite.com
> CC: dencow...@hotmail.com; users@lists.openshift.redhat.com
> 
> Yep, seems that my new way of creating the rpms for CentOS got the
> version of the rpm right, but wrong for setting the ldflags, which was
> causing the binary to have a different version.
> 
> At some point in the near future we need to re-evaluate git tags and
> versions in the origin.spec file.  (Why it is the rpm spec version
> always 0.0.1 when in reality the version everywhere else is 1.2.0)
> 
> Worked with Scott to figure out a correct way to consistently build
> the rpms.  In the end, neither of our workflows failed in sneaky ways,
> so I just fixed things manually.  Not something we can do
> consistently, but I really needed to get a working 1.2.0 version out.
> 
> What works:  origin-1.2.0-4.el7
> https://cbs.centos.org/koji/buildinfo?buildID=11349
> 
> You should be able to test it within an hour via
> yum --enablerepo=centos-openshift-origin-testing install origin\*
> 
> This version is still getting signed and pushed out.  That takes more time.
> 
> Sorry for all the problems this has caused.
> 
> Troy
> 
> 
> On Wed, Jun 22, 2016 at 2:57 PM, Alex Wauck <alexwa...@exosite.com> wrote:
> > This seems to be caused by the 1.2.0-2.el7 packages containing the wrong
> > version.  I had a conversation on IRC about this earlier (#openshift), and
> > somebody confirmed it.  I suspect a new release will be available soon.  At
> > any rate, downgrading to 1.2.0-1.el7 worked for us.
> >
> > On Wed, Jun 22, 2016 at 8:55 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> >>
> >> I tried:
> >> [OSEv3:vars]
> >> ansible_ssh_user=root
> >> deployment_type=origin
> >> openshift_pkg_version=-1.2.0
> >> openshift_image_tag=-1.2.0
> >>
> >> But it installed a release candidate and not v1.2.0
> >>
> >> oc v1.2.0-rc1-13-g2e62fab
> >> kubernetes v1.2.0-36-g4a3f9c5
> >>
> >> 
> >> From: dencow...@hotmail.com
> >> To: cont...@stephane-klein.info
> >> Subject: RE: define openshift origin version (stable 1.2.0) for Ansible
> >> install
> >> Date: Wed, 22 Jun 2016 12:51:18 +
> >> CC: users@lists.openshift.redhat.com
> >>
> >>
> >> Thanks for your fast reply
> >> This is the beginning of my playbook:
> >>
> >> [OSEv3:vars]
> >> ansible_ssh_user=root
> >> deployment_type=origin
> >> openshift_pkg_version=v1.2.0
> >> openshift_image_tag=v1.2.0
> >>
> >> But I got an error:
> >> TASK [openshift_master_ca : Install the base package for admin tooling]
> >> 
> >> FAILED! => {"changed": false, "failed": true, "msg": "No Package matching
> >> 'originv1.2.0' found available, installed or updated", "rc": 0, "results":
> >> []}
> >>
> >> 
> >> From: cont...@stephane-klein.info
> >> Date: Wed, 22 Jun 2016 13:53:57 +0200
> >> Subject: Re: define openshift origin version (stable 1.2.0) for Ansible
> >> install
> >> To: dencow...@hotmail.com
> >> CC: users@lists.openshift.redhat.com
> >>
> >> Personally I use this options to fix OpenShift version:
> >>
> >> openshift_pkg_version=v1.2.0
> >> openshift_image_tag=v1.2.0
> >>
> >>
> >> 2016-06-22 13:24 GMT+02:00 Den Cowboy <dencow...@hotmail.com>:
> >>
> >> Is it possible to define an origin version in your Ansible install?
> >> At the moment we have so many issues with our newest install (while we had
> >> 1.1.6 pretty stable for some time)
> >> We want to go to a stable 1.2.0
> >>
> >> Our issues:
> >> version = oc v1.2.0-rc1-13-g2e62fab
> >> So images are pulled with tag oc v1.2.0-rc1-13-g2e62fab which doesn't
> >> exist in openshift. Okay we have a workaround by editing the master and
> >> node configs and using '--images' but we don't

RE: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-22 Thread Den Cowboy
Thanks for your fast reply
This is the beginning of my playbook:

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_pkg_version=v1.2.0
openshift_image_tag=v1.2.0

But I got an error:
TASK [openshift_master_ca : Install the base package for admin tooling] 
FAILED! => {"changed": false, "failed": true, "msg": "No Package matching 
'originv1.2.0' found available, installed or updated", "rc": 0, "results": []}
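A sketch of why the value's format matters (assuming, as the error message suggests, that the playbook appends openshift_pkg_version verbatim to the package name):

```shell
# The yum package spec is built as "origin" + openshift_pkg_version,
# so the value must start with "-" and contain no leading "v".
pkg_version="-1.2.0-4.el7"
good="origin${pkg_version}"
bad="originv1.2.0"   # what openshift_pkg_version=v1.2.0 produces
echo "$good"
echo "$bad"
```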

From: cont...@stephane-klein.info
Date: Wed, 22 Jun 2016 13:53:57 +0200
Subject: Re: define openshift origin version (stable 1.2.0) for Ansible install
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Personally I use this options to fix OpenShift version:

openshift_pkg_version=v1.2.0
openshift_image_tag=v1.2.0


2016-06-22 13:24 GMT+02:00 Den Cowboy <dencow...@hotmail.com>:



Is it possible to define an origin version in your Ansible install?
At the moment we have so many issues with our newest install (while we had 
1.1.6 pretty stable for some time).
We want to go to a stable 1.2.0.

Our issues:
version = oc v1.2.0-rc1-13-g2e62fab 
So images are pulled with the tag v1.2.0-rc1-13-g2e62fab, which doesn't exist in 
openshift. Okay, we have a workaround by editing the master and node configs 
and using '--images', but we don't like this approach.

logs on our nodes:
 level=error msg="Error reading loginuid: open /proc/27182/loginuid: no such 
file or directory"
level=error msg="Error reading loginuid: open /proc/27182/loginuid: no such 
file or directory"

We started a mysql instance. We weren't able to use the service name to connect:
mysql -u test -h mysql -p did NOT work
mysql -u test -h 172.30.x.x (service IP) -p did work.

So we have too many issues on this version of OpenShift. We've deployed it in a 
team several times and are pretty confident with the setup, and it was always 
working fine for us. But these last weird versions seem really bad for us.
  





-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


RE: MySQL: Readiness probe failed

2016-06-21 Thread Den Cowboy
imestamp server option (see 
documentation for more details).
2016-06-21 16:20:02 0 [Note] /opt/rh/rh-mysql56/root/usr/libexec/mysqld (mysqld 
5.6.26) starting as process 1 ...
2016-06-21 16:20:02 1 [Note] Plugin 'FEDERATED' is disabled.
2016-06-21 16:20:02 6c4828a2d840 InnoDB: Warning: Using 
innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in 
future releases, together with the option innodb_use_sys_malloc and with the 
InnoDB's internal memory allocator.
2016-06-21 16:20:02 1 [Note] InnoDB: Using atomics to ref count buffer pool 
pages
2016-06-21 16:20:02 1 [Note] InnoDB: The InnoDB memory heap is disabled
2016-06-21 16:20:02 1 [Note] InnoDB: Mutexes and rw_locks use GCC atomic 
builtins
2016-06-21 16:20:02 1 [Note] InnoDB: Memory barrier is not used
2016-06-21 16:20:02 1 [Note] InnoDB: Compressed tables use zlib 1.2.7
2016-06-21 16:20:02 1 [Note] InnoDB: Using Linux native AIO
2016-06-21 16:20:02 1 [Note] InnoDB: Using CPU crc32 instructions
2016-06-21 16:20:02 1 [Note] InnoDB: Initializing buffer pool, size = 32.0M
2016-06-21 16:20:02 1 [Note] InnoDB: Completed initialization of buffer pool
2016-06-21 16:20:02 1 [Note] InnoDB: Highest supported file format is Barracuda.
2016-06-21 16:20:02 1 [Note] InnoDB: 128 rollback segment(s) are active.
2016-06-21 16:20:02 1 [Note] InnoDB: Waiting for purge to start
2016-06-21 16:20:02 1 [Note] InnoDB: 5.6.26 started; log sequence number 1625997
2016-06-21 16:20:02 1 [Note] RSA private key file not found: 
/var/lib/mysql/data//private_key.pem. Some authentication plugins will not work.
2016-06-21 16:20:02 1 [Note] RSA public key file not found: 
/var/lib/mysql/data//public_key.pem. Some authentication plugins will not work.
2016-06-21 16:20:02 1 [Note] Server hostname (bind-address): '*'; port: 3306
2016-06-21 16:20:02 1 [Note] IPv6 is available.
2016-06-21 16:20:02 1 [Note]   - '::' resolves to '::';
2016-06-21 16:20:02 1 [Note] Server socket created on IP: '::'.
2016-06-21 16:20:02 1 [Warning] 'user' entry 'root@mysql-1-irei5' ignored in 
--skip-name-resolve mode.
2016-06-21 16:20:02 1 [Warning] 'user' entry '@mysql-1-irei5' ignored in 
--skip-name-resolve mode.
2016-06-21 16:20:02 1 [Warning] 'proxies_priv' entry '@ root@mysql-1-irei5' 
ignored in --skip-name-resolve mode.
2016-06-21 16:20:02 1 [Note] Event Scheduler: Loaded 0 events
2016-06-21 16:20:02 1 [Note] /opt/rh/rh-mysql56/root/usr/libexec/mysqld: ready 
for connections.
Version: '5.6.26'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL 
Community Server (GPL)

From: bpar...@redhat.com
Date: Tue, 21 Jun 2016 11:20:22 -0400
Subject: Re: MySQL: Readiness probe failed
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

can you provide the logs from the postgres pod?


On Tue, Jun 21, 2016 at 10:44 AM, Den Cowboy <dencow...@hotmail.com> wrote:



I'm using the MySQL template and started it with the right environment variables.
MySQL is running fine, but I get this error and I'm not able to access MySQL 
via its service name:

mysql -u myuser -h mysql -p
password:xxx


Readiness probe failed: sh: cannot set terminal process group (-1): 
Inappropriate ioctl for device
sh: no job control in this shell
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)

  





-- 
Ben Parees | OpenShift




MySQL: Readiness probe failed

2016-06-21 Thread Den Cowboy
I'm using the MySQL template and started it with the right environment variables.
MySQL is running fine, but I get this error and I'm not able to access MySQL 
via its service name:

mysql -u myuser -h mysql -p
password:xxx


Readiness probe failed: sh: cannot set terminal process group (-1): 
Inappropriate ioctl for device
sh: no job control in this shell
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)



Ansible install: creation of ca.crt to authenticate on master

2016-06-20 Thread Den Cowboy
I've got 2 CentOS instances. They both have a public IP (148.xx.xx.xx) on eth0 
and an internal IP (172.16.xx.xx) on eth1.
I'm able to create my cluster with my private IP inside my /etc/ansible/hosts 
file.

But when I try to login internally:
oc login https://172.xx.xx:8443
Unable to connect to the server: x509: certificate is valid for 149.xx.xx.xx, 
172.30.0.1, not 172.16.
I'm only able to authenticate with my public IP. What am I doing wrong?

This is my /etc/ansible/hosts file
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin


# uncomment the following to enable htpasswd authentication; defaults to 
DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 
'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': 
'/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
172.16.0.xx

# host group for etcd
[etcd]

# host group for nodes, includes region info
[nodes]
172.16.0.ww openshift_node_labels="{'region': 'primary', 'zone': 'east'}"


Thanks
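One approach worth trying (hedged — these per-host variables appear in the openshift-ansible example inventories, and exact names/behaviour may differ between releases): declare the internal and public IPs explicitly per host, so the installer generates certificates whose SANs include the internal address, e.g.:

```ini
# Sketch: per-host overrides in /etc/ansible/hosts so the installer
# (and the certificates it generates) know about both addresses.
[masters]
172.16.0.xx openshift_ip=172.16.0.xx openshift_public_ip=148.xx.xx.xx

[nodes]
172.16.0.ww openshift_ip=172.16.0.ww openshift_public_ip=148.xx.xx.xx openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
```

The IPs above are placeholders matching the masked addresses in the question; after changing them the certificates have to be regenerated for the new SANs to take effect.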
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Use own Dockerhub registry instead of openshift

2016-06-17 Thread Den Cowboy

$ oadm router router --replicas=1 \
--credentials='/etc/origin/master/openshift-router.kubeconfig' \
--service-account=router \
--images=docker.io/my-registry/origin-${component}:${latest}






dc:
image: 'docker.io/my-registry/origin-:'
imagePullPolicy: IfNotPresent

It will work if I use my real image name origin-haproxy and real tag but that 
doesn't seem like a real solution.


> Date: Fri, 17 Jun 2016 09:42:34 -0400
> Subject: Re: Use own Dockerhub registry instead of openshift
> From: sdod...@redhat.com
> To: dencow...@hotmail.com
> CC: ccole...@redhat.com; users@lists.openshift.redhat.com
> 
> You need to add
> `--images=docker.io/my-registry/origin-${component}:${latest}` to the
> oadm invocation.
> 
> On Fri, Jun 17, 2016 at 9:38 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> > Hi Scott,
> >
> > I did the following now:
> > In master-config.yaml:
> > my-registry/origin-${component}:${latest}
> >
> > In node-config.yaml:
> > my-registry/origin-${component}:${latest}
> >
> > sudo service origin-master restart
> > sudo service origin-node restart
> >
> > Created a router:
> >
> > oadm router router --replicas=1 \
> >
> > --credentials='/etc/origin/master/openshift-router.kubeconfig' \
> >
> > --service-account=router
> >
> >
> > This is in my deploymentconfig:
> >
> > image: openshift/origin-haproxy-router:v1.2.0-1-g7386b49
> > imagePullPolicy: IfNotPresent
> >
> >
> >> Date: Fri, 17 Jun 2016 09:23:56 -0400
> >
> >> Subject: Re: Use own Dockerhub registry instead of openshift
> >> From: sdod...@redhat.com
> >> To: dencow...@hotmail.com
> >> CC: ccole...@redhat.com; users@lists.openshift.redhat.com
> >>
> >> So in your deployment config it should just be the fully qualified
> >> image repository, so 'docker.io/registry-name/haproxy-router:v1.2.0'
> >> not the template. If you didn't re-run the installer you'll also
> >> want to set imageConfig format in /etc/origin/node/node-config.yaml
> >> too, and of course after setting these values restart the node
> >> service.
> >>
> >> On Fri, Jun 17, 2016 at 9:14 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> >> > Sorry for the spam but still stuck on this issue.
> >> > Changing the ansible/hosts file changed the master-config.
> >> > But deploymentconfig isn't changed.
> >> >
> >> > The masterconfig contains
> >> > docker.io/my-registry/origin-${component}:${latest}
> >> >
> >> > So the dc config needs to be:
> >> > my-registry/origin-${component}:${latest} but it's
> >> > openshift/origin-${component}:
> >> >
> >> > So it seems this will only work if you have an own registry which is
> >> > called
> >> > something like:
> >> > https://x:5000/openshift/origin-${component}:${latest}
> >> >
> >> > But we are using a registry of docker.io where you can only create
> >> > something
> >> > like docker.io/registry-name/origin-... (registry-name can't be
> >> > openshift
> >> > because that's the name of the registry of red hat).
> >> >
> >> > 
> >> > From: dencow...@hotmail.com
> >> > To: sdod...@redhat.com; ccole...@redhat.com
> >> > Subject: RE: Use own Dockerhub registry instead of openshift
> >> > Date: Thu, 16 Jun 2016 12:47:05 +
> >> > CC: users@lists.openshift.redhat.com
> >> >
> >> >
> >> > I edited my playbook.
> >> > this in my master-config:
> >> >
> >> > docker.io/my-registry/origin-${component}:${latest}
> >> >
> >> >
> >> > But this is in the dc config (when I try to start a router).
> >> > image: openshift/origin-haproxy-router:v1.2.0-1-g7386b49
> >> >
> >> > and it fails.
> >> >
> >> > 
> >> > From: dencow...@hotmail.com
> >> > To: sdod...@redhat.com; ccole...@redhat.com
> >> > Subject: RE: Use own Dockerhub registry instead of openshift
> >> > Date: Wed, 15 Jun 2016 17:33:58 +
> >> > CC: users@lists.openshift.redhat.com
> >> >
> >> > https://github.com/openshift/origin/issues/9315 same issue here for the
> >> > image tag
> >> >
> >> > ___

RE: Use own Dockerhub registry instead of openshift

2016-06-17 Thread Den Cowboy
Hi Scott,

I did the following now:
In master-config.yaml:
my-registry/origin-${component}:${latest}

In node-config.yaml:
my-registry/origin-${component}:${latest}

sudo service origin-master restart
sudo service origin-node restart

Created a router:


oadm router router --replicas=1 \
--credentials='/etc/origin/master/openshift-router.kubeconfig' \
--service-account=router






This is in my deploymentconfig: 

image: openshift/origin-haproxy-router:v1.2.0-1-g7386b49
imagePullPolicy: IfNotPresent


> Date: Fri, 17 Jun 2016 09:23:56 -0400
> Subject: Re: Use own Dockerhub registry instead of openshift
> From: sdod...@redhat.com
> To: dencow...@hotmail.com
> CC: ccole...@redhat.com; users@lists.openshift.redhat.com
> 
> So in your deployment config it should just be the fully qualified
> image repository, so 'docker.io/registry-name/haproxy-router:v1.2.0'
> not the template.   If you didn't re-run the installer you'll also
> want to set imageConfig format in /etc/origin/node/node-config.yaml
> too, and of course after setting these values restart the node
> service.
> 
> On Fri, Jun 17, 2016 at 9:14 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> > Sorry for the spam but still stuck on this issue.
> > Changing the ansible/hosts file changed the master-config.
> > But deploymentconfig isn't changed.
> >
> > The masterconfig contains
> > docker.io/my-registry/origin-${component}:${latest}
> >
> > So the dc config needs to be:
> > my-registry/origin-${component}:${latest} but it's
> > openshift/origin-${component}:
> >
> > So it seems this will only work if you have an own registry which is called
> > something like:
> > https://x:5000/openshift/origin-${component}:${latest}
> >
> > But we are using a registry of docker.io where you can only create something
> > like docker.io/registry-name/origin-... (registry-name can't be openshift
> > because that's the name of the registry of red hat).
> >
> > 
> > From: dencow...@hotmail.com
> > To: sdod...@redhat.com; ccole...@redhat.com
> > Subject: RE: Use own Dockerhub registry instead of openshift
> > Date: Thu, 16 Jun 2016 12:47:05 +
> > CC: users@lists.openshift.redhat.com
> >
> >
> > I edited my playbook.
> > this in my master-config:
> >
> > docker.io/my-registry/origin-${component}:${latest}
> >
> >
> > But this is in the dc config (when I try to start a router).
> >   image: openshift/origin-haproxy-router:v1.2.0-1-g7386b49
> >
> > and it fails.
> >
> > 
> > From: dencow...@hotmail.com
> > To: sdod...@redhat.com; ccole...@redhat.com
> > Subject: RE: Use own Dockerhub registry instead of openshift
> > Date: Wed, 15 Jun 2016 17:33:58 +
> > CC: users@lists.openshift.redhat.com
> >
> > https://github.com/openshift/origin/issues/9315 same issue here for the
> > image tag
> >
> > 
> > From: dencow...@hotmail.com
> > To: sdod...@redhat.com; ccole...@redhat.com
> > Subject: RE: Use own Dockerhub registry instead of openshift
> > Date: Wed, 15 Jun 2016 17:00:37 +
> > CC: users@lists.openshift.redhat.com
> >
> > Hi, thanks Scott.
> > I've edited the playbook and reran it:
> >
> > I've a registry with the origin-pod image and the ha-proxy-router image in
> > my registry.
> > I tried to deploy my router
> >
> > Error syncing pod, skipping: failed to "StartContainer" for "POD" with
> > ErrImagePull: "image pull failed for
> > docker.io/my-repo/origin-pod:v1.2.0-1-g7386b49, this may be because there
> > are no credentials on this request. details: (Tag v1.2.0-1-g7386b49 not
> > found in repository docker.io/my-repo/origin-pod)"
> >
> > 2 issues:
> > no credentials on this request (it's a public registry so probably no
> > issue?)
> > tag v1.2.0-1-g7386b49 not found: I have an image with tag v1.2.0 and tag
> > latest (on the same image). I don't know why it tries to pull an image with
> > this tag because this tag doesn't even exist in the OpenShift repo on
> > Dockerhub:
> >
> > ...
> > v1.3.0-alpha.0
> > 511 KB
> > 2 months ago
> > v1.2.0-rc2
> > 511 KB
> > 2 months ago
> > v1.2.0-rc1
> > 511 KB
> > 2 months ago
> > v1.1.6
> > 511 KB
> > 2 months ago
> > ...
> >
> >
> >
> >> Date: Wed, 15 Jun 2016 11:55:24 -0400
> >> Subject: Re: Use own Dockerhub registry instead of opens

RE: Use own Dockerhub registry instead of openshift

2016-06-17 Thread Den Cowboy
Sorry for the spam but still stuck on this issue.
Changing the ansible/hosts file changed the master-config. 
But deploymentconfig isn't changed.

The masterconfig contains
docker.io/my-registry/origin-${component}:${latest}

So the dc config needs to be:
my-registry/origin-${component}:${latest} but it's 
openshift/origin-${component}:

So it seems this will only work if you have your own registry which is called 
something like:
https://x:5000/openshift/origin-${component}:${latest}

But we are using a registry on docker.io, where you can only create something 
like docker.io/registry-name/origin-... (registry-name can't be openshift 
because that's the name of Red Hat's registry).

From: dencow...@hotmail.com
To: sdod...@redhat.com; ccole...@redhat.com
Subject: RE: Use own Dockerhub registry instead of openshift
Date: Thu, 16 Jun 2016 12:47:05 +
CC: users@lists.openshift.redhat.com




I edited my playbook.
this in my master-config:

docker.io/my-registry/origin-${component}:${latest}


But this is in the dc config (when I try to start a router).
  image: openshift/origin-haproxy-router:v1.2.0-1-g7386b49

and it fails.

From: dencow...@hotmail.com
To: sdod...@redhat.com; ccole...@redhat.com
Subject: RE: Use own Dockerhub registry instead of openshift
Date: Wed, 15 Jun 2016 17:33:58 +
CC: users@lists.openshift.redhat.com




https://github.com/openshift/origin/issues/9315 same issue here for the image 
tag

From: dencow...@hotmail.com
To: sdod...@redhat.com; ccole...@redhat.com
Subject: RE: Use own Dockerhub registry instead of openshift
Date: Wed, 15 Jun 2016 17:00:37 +
CC: users@lists.openshift.redhat.com




Hi, thanks Scott.
I've edited the playbook and reran it:

I've a registry with the origin-pod image and the ha-proxy-router image in my 
registry.
I tried to deploy my router

Error syncing pod, skipping: failed to "StartContainer" for "POD" with 
ErrImagePull: "image pull failed for 
docker.io/my-repo/origin-pod:v1.2.0-1-g7386b49, this may be because 
there are no credentials on this request.  details: (Tag 
v1.2.0-1-g7386b49 not found in repository docker.io/my-repo/origin-pod)"



2 issues: 
no credentials on this request (it's a public registry so probably no issue?)
tag 
v1.2.0-1-g7386b49 not found: I have an image with tag v1.2.0 and tag latest (on 
the same image). I don't know why it tries to pull an image with this tag 
because this tag doesn't even exist in the OpenShift repo on Dockerhub:

...
v1.3.0-alpha.0  511 KB  2 months ago
v1.2.0-rc2      511 KB  2 months ago
v1.2.0-rc1      511 KB  2 months ago
v1.1.6          511 KB  2 months ago
...



> Date: Wed, 15 Jun 2016 11:55:24 -0400
> Subject: Re: Use own Dockerhub registry instead of openshift
> From: sdod...@redhat.com
> To: dencow...@hotmail.com
> CC: ccole...@redhat.com; users@lists.openshift.redhat.com
> 
> Den,
> 
> Here's the ansible variable documented in the example inventories. I'd
> suggest fully qualifying it ie:
> 'hub.docker.io/dencowboy/test-${component}:${version}'
> https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.origin.example#L79-L82
> 
> On Wed, Jun 15, 2016 at 11:32 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> > Thanks for the fast reply. Where can we find it in the ansible repo?
> > https://github.com/openshift/openshift-ansible
> >
> > Do we also need to change our images of or do we have to create a "test"
> > project.
> > For example if we want to push all the images with version 1.1.6 to our repo
> > on dockerhub wich is called test:
> > normally we do this as: test/origin-...:v1.1.6 but than it isn't inserted in
> > the "openshift" project probably?
> > Or do we have to call it: test/openshift/origin-... (if that's possible on
> > docker hub)
> >
> >> Date: Wed, 15 Jun 2016 09:54:00 -0400
> >> Subject: Re: Use own Dockerhub registry instead of openshift
> >> From: ccole...@redhat.com
> >> To: dencow...@hotmail.com
> >> CC: users@lists.openshift.redhat.com
> >>
> >> You can specify a different image pattern in Ansible (or in the CLI
> >> tools oadm registry / oadm router) to tell OpenShift where to pull the
> >> images from. You'll need to match the Origin pattern though
> >> (registry/namespace/openshift-{same_suffixes_as_origin}) and have a
> >> consistent tag for all of them.
> >>
> >> On Wed, Jun 15, 2016 at 9:16 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> >> > We are setting up a POC for OpenShift Origin.
> >> > We try to use all our own images (so an own docker hub account and reuse
> >> > the
> >> > same images of OpenShift).
> >> > Because we had a big issue some time ago in ou

RE: Use own Dockerhub registry instead of openshift

2016-06-15 Thread Den Cowboy
https://github.com/openshift/origin/issues/9315 same issue here for the image 
tag

From: dencow...@hotmail.com
To: sdod...@redhat.com; ccole...@redhat.com
Subject: RE: Use own Dockerhub registry instead of openshift
Date: Wed, 15 Jun 2016 17:00:37 +
CC: users@lists.openshift.redhat.com




Hi, thanks Scott.
I've edited the playbook and reran it:

I've a registry with the origin-pod image and the ha-proxy-router image in my 
registry.
I tried to deploy my router

Error syncing pod, skipping: failed to "StartContainer" for "POD" with 
ErrImagePull: "image pull failed for 
docker.io/my-repo/origin-pod:v1.2.0-1-g7386b49, this may be because 
there are no credentials on this request.  details: (Tag 
v1.2.0-1-g7386b49 not found in repository docker.io/my-repo/origin-pod)"



2 issues: 
no credentials on this request (it's a public registry so probably no issue?)
tag 
v1.2.0-1-g7386b49 not found: I have an image with tag v1.2.0 and tag latest (on 
the same image). I don't know why it tries to pull an image with this tag 
because this tag doesn't even exist in the OpenShift repo on Dockerhub:

...
v1.3.0-alpha.0  511 KB  2 months ago
v1.2.0-rc2      511 KB  2 months ago
v1.2.0-rc1      511 KB  2 months ago
v1.1.6          511 KB  2 months ago
...



> Date: Wed, 15 Jun 2016 11:55:24 -0400
> Subject: Re: Use own Dockerhub registry instead of openshift
> From: sdod...@redhat.com
> To: dencow...@hotmail.com
> CC: ccole...@redhat.com; users@lists.openshift.redhat.com
> 
> Den,
> 
> Here's the ansible variable documented in the example inventories. I'd
> suggest fully qualifying it ie:
> 'hub.docker.io/dencowboy/test-${component}:${version}'
> https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.origin.example#L79-L82
> 
> On Wed, Jun 15, 2016 at 11:32 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> > Thanks for the fast reply. Where can we find it in the ansible repo?
> > https://github.com/openshift/openshift-ansible
> >
> > Do we also need to change our images of or do we have to create a "test"
> > project.
> > For example if we want to push all the images with version 1.1.6 to our repo
> > on dockerhub wich is called test:
> > normally we do this as: test/origin-...:v1.1.6 but than it isn't inserted in
> > the "openshift" project probably?
> > Or do we have to call it: test/openshift/origin-... (if that's possible on
> > docker hub)
> >
> >> Date: Wed, 15 Jun 2016 09:54:00 -0400
> >> Subject: Re: Use own Dockerhub registry instead of openshift
> >> From: ccole...@redhat.com
> >> To: dencow...@hotmail.com
> >> CC: users@lists.openshift.redhat.com
> >>
> >> You can specify a different image pattern in Ansible (or in the CLI
> >> tools oadm registry / oadm router) to tell OpenShift where to pull the
> >> images from. You'll need to match the Origin pattern though
> >> (registry/namespace/openshift-{same_suffixes_as_origin}) and have a
> >> consistent tag for all of them.
> >>
> >> On Wed, Jun 15, 2016 at 9:16 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> >> > We are setting up a POC for OpenShift Origin.
> >> > We try to use all our own images (so an own docker hub account and reuse
> >> > the
> >> > same images of OpenShift).
> >> > Because we had a big issue some time ago in our POC project because
> >> > OpenShift deleted some images which were older than 1.2.0.
> >> >
> >> > Is it possible to configure something inside openshift so we can pull
> >> > our
> >> > images (for metrics, for registry, for router etc.) from our own
> >> > registry
> >> > and not from the openshift/origin docker hub registry?
> >> >
> >> > Thanks
> >> >
> >> > ___
> >> > users mailing list
> >> > users@lists.openshift.redhat.com
> >> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >> >
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
  

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Use own Dockerhub registry instead of openshift

2016-06-15 Thread Den Cowboy
Hi, thanks Scott.
I've edited the playbook and reran it:

I've a registry with the origin-pod image and the ha-proxy-router image in my 
registry.
I tried to deploy my router

Error syncing pod, skipping: failed to "StartContainer" for "POD" with 
ErrImagePull: "image pull failed for 
docker.io/my-repo/origin-pod:v1.2.0-1-g7386b49, this may be because 
there are no credentials on this request.  details: (Tag 
v1.2.0-1-g7386b49 not found in repository docker.io/my-repo/origin-pod)"



2 issues: 
no credentials on this request (it's a public registry so probably no issue?)
tag 
v1.2.0-1-g7386b49 not found: I have an image with tag v1.2.0 and tag latest (on 
the same image). I don't know why it tries to pull an image with this tag 
because this tag doesn't even exist in the OpenShift repo on Dockerhub:

...
v1.3.0-alpha.0  511 KB  2 months ago
v1.2.0-rc2      511 KB  2 months ago
v1.2.0-rc1      511 KB  2 months ago
v1.1.6          511 KB  2 months ago
...



> Date: Wed, 15 Jun 2016 11:55:24 -0400
> Subject: Re: Use own Dockerhub registry instead of openshift
> From: sdod...@redhat.com
> To: dencow...@hotmail.com
> CC: ccole...@redhat.com; users@lists.openshift.redhat.com
> 
> Den,
> 
> Here's the ansible variable documented in the example inventories. I'd
> suggest fully qualifying it ie:
> 'hub.docker.io/dencowboy/test-${component}:${version}'
> https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.origin.example#L79-L82
> 
> On Wed, Jun 15, 2016 at 11:32 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> > Thanks for the fast reply. Where can we find it in the ansible repo?
> > https://github.com/openshift/openshift-ansible
> >
> > Do we also need to change our images of or do we have to create a "test"
> > project.
> > For example if we want to push all the images with version 1.1.6 to our repo
> > on dockerhub wich is called test:
> > normally we do this as: test/origin-...:v1.1.6 but than it isn't inserted in
> > the "openshift" project probably?
> > Or do we have to call it: test/openshift/origin-... (if that's possible on
> > docker hub)
> >
> >> Date: Wed, 15 Jun 2016 09:54:00 -0400
> >> Subject: Re: Use own Dockerhub registry instead of openshift
> >> From: ccole...@redhat.com
> >> To: dencow...@hotmail.com
> >> CC: users@lists.openshift.redhat.com
> >>
> >> You can specify a different image pattern in Ansible (or in the CLI
> >> tools oadm registry / oadm router) to tell OpenShift where to pull the
> >> images from. You'll need to match the Origin pattern though
> >> (registry/namespace/openshift-{same_suffixes_as_origin}) and have a
> >> consistent tag for all of them.
> >>
> >> On Wed, Jun 15, 2016 at 9:16 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> >> > We are setting up a POC for OpenShift Origin.
> >> > We try to use all our own images (so an own docker hub account and reuse
> >> > the
> >> > same images of OpenShift).
> >> > Because we had a big issue some time ago in our POC project because
> >> > OpenShift deleted some images which were older than 1.2.0.
> >> >
> >> > Is it possible to configure something inside openshift so we can pull
> >> > our
> >> > images (for metrics, for registry, for router etc.) from our own
> >> > registry
> >> > and not from the openshift/origin docker hub registry?
> >> >
> >> > Thanks
> >> >
> >> > ___
> >> > users mailing list
> >> > users@lists.openshift.redhat.com
> >> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >> >
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Use own Dockerhub registry instead of openshift

2016-06-15 Thread Den Cowboy
Thanks for the fast reply. Where can we find it in the ansible repo?
https://github.com/openshift/openshift-ansible

Do we also need to change our images, or do we have to create a "test" 
project?
For example, if we want to push all the images with version 1.1.6 to our repo on 
Docker Hub which is called test:
normally we do this as: test/origin-...:v1.1.6, but then it isn't inserted in 
the "openshift" project, probably?
Or do we have to call it: test/openshift/origin-... (if that's possible on 
Docker Hub)?

> Date: Wed, 15 Jun 2016 09:54:00 -0400
> Subject: Re: Use own Dockerhub registry instead of openshift
> From: ccole...@redhat.com
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
> 
> You can specify a different image pattern in Ansible (or in the CLI
> tools oadm registry / oadm router) to tell OpenShift where to pull the
> images from.  You'll need to match the Origin pattern though
> (registry/namespace/openshift-{same_suffixes_as_origin}) and have a
> consistent tag for all of them.
> 
> On Wed, Jun 15, 2016 at 9:16 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> > We are setting up a POC for OpenShift Origin.
> > We try to use all our own images (so an own docker hub account and reuse the
> > same images of OpenShift).
> > Because we had a big issue some time ago in our POC project because
> > OpenShift deleted some images which were older than 1.2.0.
> >
> > Is it possible to configure something inside openshift so we can pull our
> > images (for metrics, for registry, for router etc.) from our own registry
> > and not from the openshift/origin docker hub registry?
> >
> > Thanks
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Use own Dockerhub registry instead of openshift

2016-06-15 Thread Den Cowboy
We are setting up a POC for OpenShift Origin.
We try to use all our own images (so our own Docker Hub account, reusing the 
same images as OpenShift), because some time ago we had a big issue in our POC 
project: OpenShift deleted some images which were older than 1.2.0.

Is it possible to configure something inside openshift so we can pull our 
images (for metrics, for registry, for router etc.) from our own registry and 
not from the openshift/origin docker hub registry?

Thanks
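For reference, the openshift-ansible example inventory (linked later in this thread) exposes this as the oreg_url variable; a sketch of how it would look, where the ${component}/${version} placeholders are substituted by OpenShift itself at pull time and "my-registry" stands in for your Docker Hub namespace:

```ini
# /etc/ansible/hosts — point the installer at your own registry/namespace
[OSEv3:vars]
oreg_url=docker.io/my-registry/origin-${component}:${version}
```

With this set, the master and node configs are rendered with your image format instead of the default openshift/origin-${component}:${version}.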
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


OpenShift Origin: Kibana show indexes with additional string

2016-06-03 Thread Den Cowboy
I've set up aggregated logging on OpenShift Origin.
It works fine but my Kibana show my projects in the following way:

.all
.operations.*
dev-proj1.1dd68e1e-1e8c-11e6-baf9-064081126234.*
dev-proj2.3cdda625-1da8-11e6-baf9-064081126234.*
dev-proj3.5e94ef86-1cf0-11e6-baf9-064081126234.*
dev-proj4.6cfe3919-28c8-11e6-8b8f-064081126234.*
dev-proj5.728abf75-019c-11e6-8b8f-064081126234.*
While it was shown this way (when we were using origin 1.1)

.all
.operations.*
dev-proj1.*
dev-proj2.*
dev-proj3.*
dev-proj4.*
dev-proj5.*
What's the reason for this behaviour?

Thanks in advance.
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Jenkins setup for OpenShift

2016-05-12 Thread Den Cowboy
Thanks for the replies.
We can use some Docker plugins to build our images, but the main problem 
remains logging into our registry from our external Jenkins.

We don't have experience with dockercfg, but it seems an option.
All the Docker plugins give the option to specify the registry and a key, which 
is fine. But for the OpenShift registry we still need that token, which is 
only available after authenticating on OpenShift itself. Is it possible to 
configure this in dockercfg?

We're searching for the best way to push an image from an external jenkins to 
the OpenShift registry:


Date: Wed, 11 May 2016 10:44:07 -0400
Subject: Re: Jenkins setup for OpenShift
From: bpar...@redhat.com
To: john.skar...@ca.com
CC: dencow...@hotmail.com; users@lists.openshift.redhat.com

You can also just supply a dockercfg file that already has the right 
credentials in it, just make that file available to your Jenkins job. 
Ben Parees | OpenShift
On May 11, 2016 9:30 AM, "Skarbek, John" <john.skar...@ca.com> wrote:







On May 11, 2016 at 08:46:18, Den Cowboy (dencow...@hotmail.com) wrote:






We are using a Jenkins server which isn't running on openshift.

The main goal at the moment is:

- Get dockerfile out of our git

- Build image

- Push image to OpenShift Docker Registry



We have the dockerfile on our system. We can use docker commands in our Jenkins.

At the moment we are building our images like this:



cd folder/

docker build -t 172.30.xx.xx:5000/image:latest  .



So we have our image. Now we need to push our image to our OpenShift Registry.

We have 2 big issues:



1) Our first issue/question: Do we need to authenticate on our OpenShift 
environment (to get the necessary token for the next step), and if so, is there a 
more efficient way than this?:

prereq: install oc tools on jenkins

oc login -u user -p password 
https://ec2-xx-xx-xx-xx-xx-1.compute.amazonaws.com:8443 
--certificate-authority='/path/to/ca.crt'



We have the ca.crt of our OpenShift environment stored in a folder on our 
Jenkins server (manually put there).




You’ll need to get the credentials required to log into the docker registry 
somehow.  And there are options for completing this.

In our environment, we configure a service account for this exact process.  And 
when we build the jenkins server, we’ve got a play that’ll pull the docker 
config secret from the service account and push it into jenkins appropriately.  

In your case, it sounds like you are doing this manually, simply grab the 
credentials from your service account.  Look for the associated secret for the 
docker config.
oc get secrets will list the available secrets. And you should see the secret 
associated with the service account labeled something along the lines of 
<serviceaccount>-dockercfg-<suffix>. Run an 
oc describe secret <serviceaccount>-dockercfg-<suffix> and it'll output 
the huge preconfigured password for that service account, which you can use to 
log into the docker registry.

https://docs.openshift.org/latest/dev_guide/service_accounts.html#managing-service-account-credentials

There are some service accounts created per project automatically that you may 
be able to use to get away without creating one

https://docs.openshift.org/latest/admin_guide/service_accounts.html#managed-service-accounts
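The ".dockercfg" payload of such a secret is base64-encoded JSON keyed by registry address. A minimal sketch of decoding it to recover the username/password pair Jenkins would need — the payload here is made up for illustration; a real one would come from `oc get secret <name>-dockercfg-<suffix> -o jsonpath='{.data.\.dockercfg}'`:

```python
import base64
import json

# Made-up stand-in for the base64 value stored under the ".dockercfg"
# key of a service account's dockercfg secret.
encoded = base64.b64encode(json.dumps({
    "172.30.0.1:5000": {
        "username": "serviceaccount",
        "password": "SA-TOKEN-GOES-HERE",
        "email": "serviceaccount@example.org",
    }
}).encode()).decode()

# Decode the secret back into per-registry credentials.
dockercfg = json.loads(base64.b64decode(encoded))
for registry, creds in dockercfg.items():
    print(registry, creds["username"], creds["password"])
```

The password field is the service account's token, which is exactly what `docker login -p ...` against the integrated registry expects.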
 










2) Our second issue is related to the first one. It seems strange behaviour 
to "log in" to your OpenShift from your Jenkins and perform the steps from 
there.



# authenticate for OpenShift Registry

docker login -u user -e a...@mail.com \

-p `oc whoami -t` 172.30.xx.xx:5000



# push image to our registry

docker push 172.30.xx.xx:5000/dev/image:latest












You don’t need to log into openshift in order to push to the registry.  But one 
MUST log into the docker registry before pushing.  Without logging in, the 
docker registry will more than likely deny your request to push.



As a secondary note, to prevent jenkins from sending that huge password in 
clear text to the console, you can do something like this in the jenkins job:



docker build $image .

(set +x; docker login -u nobody -e nob...@nobody.com -p $token $registry)

docker push $registry/$image









___ 

users mailing list 

users@lists.openshift.redhat.com 

http://lists.openshift.redhat.com/openshiftmm/listinfo/users 








___

users mailing list

users@lists.openshift.redhat.com

http://lists.openshift.redhat.com/openshiftmm/listinfo/users


  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Only the previous 1000 log lines and new log messages will be displayed because of the large log size.

2016-04-21 Thread Den Cowboy

My webconsole is showing the following warning when I'm looking for the logs of 
a pod:


Only the previous 1000 log lines and new log messages will be displayed because 
of the large log size.

I'm afraid the log size will be huge.
This is for a tomcat container which isn't using persistent storage (it's 
hosting a web service which shows logs after being triggered). Okay, it's 
ephemeral, so the logs will be gone when I delete the container (but normally 
this container never goes down).
So the amount of saved logs will be huge. Does the container keep the 
(ephemeral) logs for some maximum amount of time?
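A side note on bounding that growth (hedged — option support depends on the Docker version, and the file path is the usual CentOS sysconfig location): container logs are written by Docker's json-file log driver, which can rotate them via daemon options, e.g.:

```ini
# /etc/sysconfig/docker — cap each container log at ~50 MB, keep 3 rotated files
OPTIONS='--selinux-enabled --log-opt max-size=50m --log-opt max-file=3'
```

After changing this the docker service has to be restarted, and existing containers keep their old log settings until they are recreated.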



  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: openshift ansible playbook 1.1.6

2016-04-15 Thread Den Cowboy
Well, it said: docker not found etc.; it wasn't installed.
I tried it again on empty instances. Then it was successful for my master and 2 
of my nodes, but enabling failed for the third one and the process failed. When I 
ssh'd to the third node, docker was running. Very weird.
Tried a third time, and it worked for all. Is it dependent on some timeout which 
isn't long enough?

> Date: Fri, 15 Apr 2016 10:34:24 -0400
> Subject: Re: openshift ansible playbook 1.1.6
> From: sdod...@redhat.com
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
> 
> Can you check the logs from docker service to see why it's failing to start?
> `journalctl -lu docker`
> 
> On Fri, Apr 15, 2016 at 10:01 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> > I try to set up a cluster with the ansible script on Centos7.
> > Previous times it went fine, but now it complains:
> >
> > TASK: [docker | enable and start the docker service]
> > **
> > failed: [script-openshift-master-3280c] => {"failed": true}
> > msg: Job for docker.service failed because start of the service was
> > attempted too often. See "systemctl status docker.service" and "journalctl
> > -xe" for details.
> > To force a start use "systemctl reset-failed docker.service" followed by
> > "systemctl start docker.service" again.
> >
> > So it seems that I have to install + enable docker manually on each instance
> > (like necessary on RHEL I think).
> > But this wasn't necessary on Centos7 till 3 days ago or something.
> >
> > I know there were issues with docker versions (older version for origin
> > 1.1.3 etc.) Don't know this is the issue?
> > Do I have to change a variable or parameter to let the script work without
> > manually install docker on each instance?
> >
> > Thanks
> >


openshift ansible playbook 1.1.6

2016-04-15 Thread Den Cowboy
I try to set up a cluster with the ansible script on Centos7.
Previous times it went fine, but now it complains:

TASK: [docker | enable and start the docker service] **
failed: [script-openshift-master-3280c] => {"failed": true}
msg: Job for docker.service failed because start of the service was attempted 
too often. See "systemctl status docker.service" and "journalctl -xe" for 
details.
To force a start use "systemctl reset-failed docker.service" followed by 
"systemctl start docker.service" again.

So it seems I have to install and enable docker manually on each instance 
(as is necessary on RHEL, I think).
But this wasn't necessary on CentOS 7 until about 3 days ago.

I know there were issues with docker versions (an older version was required 
for origin 1.1.3, etc.); could that be the issue here?
Is there a variable or parameter that lets the script run without manually 
installing docker on each instance?

Thanks
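
The error itself comes from systemd's start-limit protection tripping after 
several rapid start attempts; the workaround the message suggests can be 
wrapped in a retry loop with a backoff. A sketch with a mock start_docker 
function standing in for the real commands (an assumption, so the snippet runs 
anywhere; on a node the body would be 
`systemctl reset-failed docker.service && systemctl start docker.service`):

```shell
#!/bin/sh
# Retry a flaky service start, backing off between attempts.
attempts=0
start_docker() {                 # mock stand-in: fails twice, then succeeds
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}
for i in 1 2 3 4 5; do
  if start_docker; then
    echo "docker started after $attempts attempt(s)"
    break
  fi
  sleep 1                        # back off before resetting and retrying
done
```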


Secure route Origin 1.1.6

2016-04-13 Thread Den Cowboy
I have a docker container which communicates with another server on port 80,
so it's using http over an insecure route.

Now we're going to use https (443). The other server has a certificate (.jks).
How should I set this up? I have to create a secure route, but which type?
- passthrough
- edge
- re-encrypt

Do I have to convert its .jks to .pem and copy it into my route?

I read this about passthrough:
The destination pod is responsible for serving certificates for the
traffic at the endpoint.
So can I just create a passthrough route and that's it? That did not seem to 
work.
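
With passthrough, the router forwards the TLS stream untouched and the pod must 
present the certificate itself, so the .jks stays on the backend (in whatever 
format that server expects) rather than being copied into the route. A route 
sketch for that setup, with hypothetical service and hostname values (origin 
1.1-era v1 API):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: myservice-secure          # hypothetical name
spec:
  host: myservice.example.com     # assumption: your external hostname
  to:
    kind: Service
    name: myservice               # hypothetical backend service
  tls:
    termination: passthrough      # the pod serves its own certificate
```

Edge termination, by contrast, needs a .pem certificate and key embedded in the 
route itself, and re-encrypt needs both that and the destination CA.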


Error syncing pod, skipping: failed to "TeardownNetwork" for...

2016-04-12 Thread Den Cowboy
What is the meaning of this error? I got it during my deployment. After the 
deploy, the container is running and working fine.

Error syncing pod, skipping: failed to "TeardownNetwork" for 
"bluegreen-1-deploy_test" with TeardownNetworkError: "Failed to teardown
 network for pod \"18c89d78-00a3-11e6-95fe-06973cdc26b9\" using network 
plugins \"redhat/openshift-ovs-subnet\": exit status 1" 


RE: accessing secure registry on master isn't possible?

2016-04-08 Thread Den Cowboy
I'm using the ca.crt from /etc/origin/master/ca.crt and /etc/origin/node/ca.crt 

Date: Fri, 8 Apr 2016 11:02:19 +0200
Subject: Re: accessing secure registry on master isn't possible?
From: maszu...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com



On Fri, Apr 8, 2016 at 8:27 AM, Den Cowboy <dencow...@hotmail.com> wrote:



Yes I performed the same steps on my master as on my nodes. This is the error:
sudo docker login -u admin -e m...@mail.com \
> -p token 172.30.xx.xx:5000
Error response from daemon: invalid registry endpoint 
https://172.30.109.95:5000/v0/: unable to ping registry endpoint 
https://172.30.xx.xx:5000/v0/
v2 ping attempt failed with error: Get https://172.30.xx.xx:5000/v2/: dial tcp 
172.30.xx.xx:5000: i/o timeout
 v1 ping attempt failed with error: Get https://172.30.xx.xx:5000/v1/_ping: 
dial tcp 172.30.xx.xx:5000: i/o timeout. If this private registry supports only 
HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 
172.30.xx.xx:5000` to the daemon's arguments. In the case of HTTPS, if you have 
access to the registry's CA certificate, no need for the flag; simply place the 
CA certificate at /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt


Do you have the CA cert in /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt? The 
log you're seeing is the usual one when you're using self-signed certs for the 
registry. Also make sure the above CA is the right one.
 While on all my 3 nodes:

sudo docker login -u admin -e m...@mail.com \
> -p token 172.30.xx.xx:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded

Date: Thu, 7 Apr 2016 22:02:06 +0200
Subject: Re: accessing secure registry on master isn't possible?
From: maszu...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Per 
https://docs.openshift.org/latest/install_config/install/docker_registry.html#securing-the-registry,
 step 11 and 12,
I assume you've copied CA certificate to the Docker certificates directory on 
all nodes and restarted docker service, 
did you also do that on master as well. Without it any docker operation will 
fail with certificate check failure. 
What is the error you're seeing and what is the operation you're trying to do?


On Thu, Apr 7, 2016 at 4:20 PM, Den Cowboy <dencow...@hotmail.com> wrote:



I've created a secure registry on 1.1.6.
For the first time I've created my environment with 1 real master and 3 nodes 
(one of them infra). (The reason is that I'm using the community ansible 
AWS setup: 
https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md)
Normally my master is also an unschedulable node. Now I've secured my registry.
I'm able to log in and push to the registry from my nodes, but not from my 
master. Is this normal, and if so, why is it that way?
I don't think it's a real problem, because images will always be pulled and 
pushed on my nodes (only they can run my containers), but I want to know if 
it's a known thing.

Thanks

  



RE: accessing secure registry on master isn't possible?

2016-04-08 Thread Den Cowboy
Yes I performed the same steps on my master as on my nodes. This is the error:
sudo docker login -u admin -e m...@mail.com \
> -p token 172.30.xx.xx:5000
Error response from daemon: invalid registry endpoint 
https://172.30.109.95:5000/v0/: unable to ping registry endpoint 
https://172.30.xx.xx:5000/v0/
v2 ping attempt failed with error: Get https://172.30.xx.xx:5000/v2/: dial tcp 
172.30.xx.xx:5000: i/o timeout
 v1 ping attempt failed with error: Get https://172.30.xx.xx:5000/v1/_ping: 
dial tcp 172.30.xx.xx:5000: i/o timeout. If this private registry supports only 
HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 
172.30.xx.xx:5000` to the daemon's arguments. In the case of HTTPS, if you have 
access to the registry's CA certificate, no need for the flag; simply place the 
CA certificate at /etc/docker/certs.d/172.30.xx.xx:5000/ca.crt

While on all my 3 nodes:

sudo docker login -u admin -e m...@mail.com \
> -p token 172.30.xx.xx:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded
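
For reference, the layout Docker checks before trusting a registry is one 
directory per host:port under /etc/docker/certs.d, containing the CA as ca.crt. 
A runnable sketch using a temporary root instead of the real /etc/docker path 
(an assumption so it works without privileges; the IP:port is the placeholder 
from this thread, and the CA content is a stand-in for the real cluster CA):

```shell
#!/bin/sh
# Recreate the certs.d layout Docker expects for a private registry.
root="$(mktemp -d)"                     # stand-in for /etc/docker/certs.d
registry="172.30.xx.xx:5000"            # registry service IP:port
mkdir -p "$root/$registry"
echo "placeholder CA" > "$root/$registry/ca.crt"   # real file: cluster ca.crt
ls "$root/$registry"
```

After copying the real CA (e.g. /etc/origin/master/ca.crt) into the real path, 
docker must be restarted on that host for the change to take effect.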

Date: Thu, 7 Apr 2016 22:02:06 +0200
Subject: Re: accessing secure registry on master isn't possible?
From: maszu...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Per 
https://docs.openshift.org/latest/install_config/install/docker_registry.html#securing-the-registry,
 step 11 and 12,
I assume you've copied CA certificate to the Docker certificates directory on 
all nodes and restarted docker service, 
did you also do that on master as well. Without it any docker operation will 
fail with certificate check failure. 
What is the error you're seeing and what is the operation you're trying to do?


On Thu, Apr 7, 2016 at 4:20 PM, Den Cowboy <dencow...@hotmail.com> wrote:



I've created a secure registry on 1.1.6.
For the first time I've created my environment with 1 real master and 3 nodes 
(one of them infra). (The reason is that I'm using the community ansible 
AWS setup: 
https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md)
Normally my master is also an unschedulable node. Now I've secured my registry.
I'm able to log in and push to the registry from my nodes, but not from my 
master. Is this normal, and if so, why is it that way?
I don't think it's a real problem, because images will always be pulled and 
pushed on my nodes (only they can run my containers), but I want to know if 
it's a known thing.

Thanks

  



RE: Can't access hawkular-metrics.dev.test.co

2016-04-04 Thread Den Cowboy
I just tried exactly the same with another hostname in the route and it's 
working fine now. I don't know why it wasn't working (the 
hawkular-metrics.xxx.eu hostname worked in the past).

From: dencow...@hotmail.com
To: mwri...@redhat.com
Subject: RE: Can't access hawkular-metrics.dev.test.co
Date: Mon, 4 Apr 2016 21:32:23 +
CC: users@lists.openshift.redhat.com




Chrome is telling me:

This site can’t be reached
hawkular-metrics.xxx.eu unexpectedly closed the connection.
ERR_CONNECTION_CLOSED

Meanwhile I have a Jenkins (https) and some Tomcat apps (http) which all work, 
and I can visit them in my browser under the same domain name.
From: dencow...@hotmail.com
To: mwri...@redhat.com
Subject: RE: Can't access hawkular-metrics.dev.test.co
Date: Mon, 4 Apr 2016 21:28:01 +
CC: users@lists.openshift.redhat.com




Thanks, I got:

{"MetricsService":"STARTED","Implementation-Version":"0.13.0-SNAPSHOT","Built-From-Git-SHA1":"96e6d3c83bb09f659c0cb6b17eb1a9648df66a6f"

which seems to be fine? Because I want to see something like that in my browser.


> Date: Mon, 4 Apr 2016 17:21:15 -0400
> From: mwri...@redhat.com
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
> Subject: Re: Can't access hawkular-metrics.dev.test.co
> 
> Sorry, I meant the IP address of the Hawkular Metrics pod
> 
> eg
> curl -k -X GET https://`oc get pod $(oc get pods | grep -i hawkular-metrics | 
> awk '{print $1}') -o template 
> --template='{{.status.podIP}}'`:8443/hawkular/metrics/status
> 
> [or port 8444 if running the OSE images instead of origin]
> 
> - Original Message -
> > From: "Den Cowboy" <dencow...@hotmail.com>
> > To: "Matt Wringe" <mwri...@redhat.com>
> > Cc: users@lists.openshift.redhat.com
> > Sent: Monday, April 4, 2016 5:05:33 PM
> > Subject: RE: Can't access hawkular-metrics.dev.test.co
> > 
> > I've created the metrics as admin which has the role cluster-admin in my
> > cluster?
> > 
> > From: dencow...@hotmail.com
> > To: mwri...@redhat.com
> > Subject: RE: Can't access hawkular-metrics.dev.test.co
> > Date: Mon, 4 Apr 2016 20:59:55 +
> > CC: users@lists.openshift.redhat.com
> > 
> > 
> > 
> > 
> > Sorry got it:
> > 
> > {
> >   "kind": "Status",
> >   "apiVersion": "v1",
> >   "metadata": {},
> >   "status": "Failure",
> >   "message": "User \"system:anonymous\" cannot \"get\" on
> >   \"/hawkular/metrics/status\"",
> >   "reason": "Forbidden",
> >   "details": {},
> >   "code": 403
> > }
> > 
> > From: dencow...@hotmail.com
> > To: mwri...@redhat.com
> > CC: users@lists.openshift.redhat.com
> > Subject: RE: Can't access hawkular-metrics.dev.test.co
> > Date: Mon, 4 Apr 2016 20:58:14 +
> > 
> > 
> > 
> > 
> > Do you mean in my browser and with the IP of the node where my router is on?
> > 
> > > Date: Mon, 4 Apr 2016 16:51:55 -0400
> > > From: mwri...@redhat.com
> > > To: dencow...@hotmail.com
> > > CC: users@lists.openshift.redhat.com
> > > Subject: Re: Can't access hawkular-metrics.dev.test.co
> > > 
> > > Please try this:
> > > 
> > > > > Are you able to access the Hawkular Metrics server directly with its 
> > > > > ip
> > > > > address? eg https://$IP_ADDRESS:8443/hawkular/metrics/status (or port
> > > > > 8444
> > > > > if running the OSE metric images)
> > > 
> > > - Original Message -
> > > > From: "Den Cowboy" <dencow...@hotmail.com>
> > > > To: "Matt Wringe" <mwri...@redhat.com>
> > > > Cc: users@lists.openshift.redhat.com
> > > > Sent: Monday, April 4, 2016 4:48:44 PM
> > > > Subject: RE: Can't access hawkular-metrics.dev.test.co
> > > > 
> > > > It's no router/DNS issue because other apps are working fine.
> > > > 
> > > > I had those errors between the startup but I don't know if it's normal:
> > > > 
> > > > Readiness probe failed: Failed to access the status endpoint : HTTP 
> > > > Error
> > > > 404: Not Found.
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 10:44:40 PM
> > > > 
> > > > 
> > > > hawkular-metrics-b6qq2
> > > &

RE: Persistent volume

2016-03-22 Thread Den Cowboy
Is it possible to run oc edit pv and oc edit pvc (inside the project), edit the 
values, and then delete the pod and recreate it?

We tried it with a registry. Everything is still in the registry but the 
webconsole is telling:



Status:              Bound to volume registry-volume
Capacity:            allocated 3 GiB
Requested Capacity:  8 GiB
Access Modes:        RWX (Read-Write-Many)

We changed everything to 8 GiB (the value in the PV and the 2 values in the 
PVC), but the '3' remains there.

From: dencow...@hotmail.com
To: users@lists.openshift.redhat.com
Subject: Persistent volume
Date: Tue, 22 Mar 2016 13:11:07 +




Hi,

I'm using a Jenkins CI which is using persistent storage (NFS):

jenkins-volume   3Gi   RWX   Bound   jenkins/jenkins-claim   13d

But when I perform sudo du -sh jenkins I get:

14G   jenkins/

Is it possible to extend my persistent volume? We haven't lost any data so far, 
but we're afraid we will lose it after shutting down the pod.
  



Persistent volume

2016-03-22 Thread Den Cowboy
Hi,

I'm using a Jenkins CI which is using persistent storage (NFS):

jenkins-volume   3Gi   RWX   Bound   jenkins/jenkins-claim   13d

But when I perform sudo du -sh jenkins I get:

14G   jenkins/

Is it possible to extend my persistent volume? We haven't lost any data so far, 
but we're afraid we will lose it after shutting down the pod.
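
For what it's worth, the capacity lives in two places that have to agree: the 
PV's spec.capacity.storage and the PVC's spec.resources.requests.storage; the 
capacity shown under the claim's status is set by the binder and is not meant 
to be edited by hand. A fragment sketch (names from this thread; whether 
editing these in place takes effect depends on the version):

```yaml
# PersistentVolume fragment
spec:
  capacity:
    storage: 8Gi        # was 3Gi
---
# PersistentVolumeClaim fragment
spec:
  resources:
    requests:
      storage: 8Gi
```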


RE: Set up logging: Kibana

2016-03-21 Thread Den Cowboy
Some logs: the deployer pod fails (after executing oc process 
logging-deployer-template -n openshift ...):

(Re-)Creating deployed objects
No resources found
+ oc process logging-support-pre-template
+ oc create -f -
serviceaccount "aggregated-logging-kibana" created
serviceaccount "aggregated-logging-elasticsearch" created
serviceaccount "aggregated-logging-fluentd" created
serviceaccount "aggregated-logging-curator" created
service "logging-es" created
service "logging-es-cluster" created
service "logging-es-ops" created
service "logging-es-ops-cluster" created
service "logging-kibana" created
service "logging-kibana-ops" created
+ oc delete dc,rc,pod --selector logging-infra=curator
No resources found
+ oc delete dc,rc,pod --selector logging-infra=kibana
No resources found
+ oc delete dc,rc,pod --selector logging-infra=fluentd
No resources found
+ oc delete dc,rc,pod --selector logging-infra=elasticsearch
No resources found
+ (( n=0 ))
+ (( n<1 ))
+ oc process logging-es-template
+ oc create -f -
deploymentconfig "logging-es-6jldefop" created
+ (( n++ ))
+ (( n<1 ))
+ oc process logging-fluentd-template
+ oc create -f -
json: cannot unmarshal object into Go value of type string

From: dencow...@hotmail.com
To: users@lists.openshift.redhat.com
Subject: Set up logging: Kibana
Date: Mon, 21 Mar 2016 12:22:01 +




I try to set up the logging system of Kibana:
https://docs.openshift.org/latest/install_config/aggregate_logging.html

I'm able to perform the steps till 
oc process logging-deployer-template -n openshift \
   -v 
KIBANA_HOSTNAME=kibana.example.com,ES_CLUSTER_SIZE=1,PUBLIC_MASTER_URL=https://localhost:8443
 \
   | oc create -f -

This creates a deploymentpod an it creates some services + 2 pods (logging-es 
and logging-es-cluster).
Then I perform:
oc process logging-support-template | oc create -f -

This creates the following:
oauthclient "kibana-proxy" created
route "kibana" created
route "kibana-ops" created
imagestream "logging-auth-proxy" created
imagestream "logging-elasticsearch" created
imagestream "logging-fluentd" created
imagestream "logging-kibana" created
imagestream "logging-curator" created

But I seem to miss deploymentconfigs? I'm unable to scale my fluentd or Kibana. 

Ps: the documentation is also a bit confusing at: 
$ oc policy add-role-to-user edit \
system:serviceaccount:default:logging-deployer

Because it's using project default instead of project logging (like in the 
other steps).
https://docs.openshift.org/latest/install_config/aggregate_logging.html

  



Set up logging: Kibana

2016-03-21 Thread Den Cowboy
I try to set up the logging system of Kibana:
https://docs.openshift.org/latest/install_config/aggregate_logging.html

I'm able to perform the steps till 
oc process logging-deployer-template -n openshift \
   -v 
KIBANA_HOSTNAME=kibana.example.com,ES_CLUSTER_SIZE=1,PUBLIC_MASTER_URL=https://localhost:8443
 \
   | oc create -f -

This creates a deploymentpod an it creates some services + 2 pods (logging-es 
and logging-es-cluster).
Then I perform:
oc process logging-support-template | oc create -f -

This creates the following:
oauthclient "kibana-proxy" created
route "kibana" created
route "kibana-ops" created
imagestream "logging-auth-proxy" created
imagestream "logging-elasticsearch" created
imagestream "logging-fluentd" created
imagestream "logging-kibana" created
imagestream "logging-curator" created

But I seem to miss deploymentconfigs? I'm unable to scale my fluentd or Kibana. 

PS: the documentation is also a bit confusing at:
$ oc policy add-role-to-user edit \
system:serviceaccount:default:logging-deployer

because it uses project 'default' instead of project 'logging' (as in the 
other steps).
https://docs.openshift.org/latest/install_config/aggregate_logging.html
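
The serviceaccount reference in that command encodes the project name, so when 
the deployer runs in the logging project the middle segment should be logging, 
not default. A small sketch that just assembles and prints the corrected 
command (printing only, so it runs without a cluster):

```shell
#!/bin/sh
# Build the fully qualified serviceaccount name for the deployer's project.
project="logging"                      # the project the deployer runs in
sa="system:serviceaccount:${project}:logging-deployer"
echo "oc policy add-role-to-user edit $sa"
```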



RE: --env-file in OpenShift?

2016-03-21 Thread Den Cowboy
Found our answer: https://github.com/openshift/origin/issues/7585


From: dencow...@hotmail.com
To: users@lists.openshift.redhat.com
Subject: --env-file in OpenShift?
Date: Mon, 21 Mar 2016 10:02:29 +




Origin 1.1.4:

Is it possible to use --env-file when creating a new app (oc new-app)?
-e works, but we have a lot of environment variables. We tried --env-file, but 
it doesn't seem to work.
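
Until an --env-file option exists (issue #7585 tracks it), one workaround is to 
expand a KEY=VALUE file into repeated -e flags. A sketch that only prints the 
command it would run; the file contents here are made up, and values containing 
spaces would need extra quoting:

```shell
#!/bin/sh
# Turn KEY=VALUE lines from an env file into repeated -e flags for oc new-app.
envfile="$(mktemp)"
printf 'DB_HOST=db\nDB_PORT=5432\n' > "$envfile"   # example variables
flags=""
while IFS= read -r line; do
  flags="$flags -e $line"
done < "$envfile"
echo "oc new-app myimage$flags"
```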
  



OpenShift installation on Amazon

2016-03-19 Thread Den Cowboy
Hi,

I saw the OpenShift installer of the community:
https://github.com/openshift/openshift-ansible/tree/master/playbooks/aws/openshift-cluster
I've 2 questions about it:

- I have a (maybe stupid) question about the OpenShift installation on AWS.
It's working fine now, but do I have to assign an Elastic IP at the beginning?
I'm able to create my instances and install OpenShift etc., but when I reboot 
an instance things go completely wrong, because the public IP changes while the 
IP in master-config.yaml is still the one from the initial installation. So I 
want to use an Elastic IP. Is this the right approach? (During the normal 
installation, without the GitHub playbook, I always used an Elastic IP.)

- After the installation my master is running on the public IP; the webconsole 
is on https://public-ip:8443, etc. Meanwhile the OpenShift advanced 
installation 
(https://docs.openshift.org/latest/install_config/install/advanced_install.html#what-s-next)
 talks about using hostnames. So in the normal advanced installation (without 
the GitHub playbook) I defined the public DNS names instead of public IPs, and 
that also worked fine. What's the right approach?

thanks
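
On the first question: the URLs an installed master answers on are pinned in 
/etc/origin/master/master-config.yaml, which is why a changed public IP breaks 
things after a reboot; pointing these at a stable name (an Elastic IP's DNS 
name, or your own hostname) avoids re-editing after every restart. A fragment 
sketch with a placeholder hostname:

```yaml
# /etc/origin/master/master-config.yaml (fragment, placeholder hostname)
masterPublicURL: https://master.example.com:8443
assetConfig:
  masterPublicURL: https://master.example.com:8443
  publicURL: https://master.example.com:8443/console/
```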



RE: oc new-app on image from OpenShift Registry

2016-03-15 Thread Den Cowboy
Okay, the push failed because the namespace did not exist. Now the push is 
fine, but the error when using the image is still there.

From: dencow...@hotmail.com
To: tim.m...@spring.co.nz
Subject: RE: oc new-app on image from OpenShift Registry
Date: Tue, 15 Mar 2016 08:27:11 +
CC: users@lists.openshift.redhat.com




I see the push failed

440807311fed: Pushed 
ed3cf5a3e842: Pushed 
8a652dfcca24: Pushed 
latest: digest: 
sha256:b0b56d49abf65ff0c709390ce60be5d3a4d027f27b3c4349596b35b5e9e0 size: 
6972
Received unexpected HTTP status: 500 Internal Server Error

From: dencow...@hotmail.com
To: tim.m...@spring.co.nz
Subject: RE: oc new-app on image from OpenShift Registry
Date: Tue, 15 Mar 2016 08:22:40 +
CC: users@lists.openshift.redhat.com




Hi Tim, thanks for the fast reply:
docker-registry   registry.dev.com docker-registry:5000-tcp   
passthrough   docker-registry=default

But you made me think. The port is an issue probably.
The 5000 is only an internal port on the container. But How can I push and pull 
an image from the outside?

First I moved my ca.crt to registry.dev.com:443 instead of registry.dev.com:5000

I tried to tag an image on registry.dev.com:443/test3/test and I was able to 
push the image but when I try to start the image:

$ oc new-app --insecure-registry registry.dev.com:443/test3/test
error: can't look up Docker image "registry.dev.dbm.com:443/test3/test": 
Internal error occurred: Get https://registry.dev.dbm.com:443/v2/: dial tcp 
172.30.82.246:443: no route to host

From: tim.m...@spring.co.nz
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com
Subject: Re: oc new-app on image from OpenShift Registry
Date: Tue, 15 Mar 2016 08:03:17 +






Hey Den, 



have you created your external route?



whats the output of:



oc get routes



Also, when using external routes you won’t need the port ‘:5000’ reference. 



Link
- 
https://docs.openshift.org/latest/install_config/install/docker_registry.html#access-pushing-and-pulling-images




On 15/03/2016, at 8:44 PM, Den Cowboy <dencow...@hotmail.com> wrote:



I've my OpenShift registry. It's using selfsigned certificates which are 
created for my service IP (172.30.82.xx) and my hostname (registry.dev.com)



[centos@ip-172-31-18-122 ~]$ oc new-app --insecure-registry 
registry.dev.com:5000/test2/test:7

W0315 07:38:52.206896   37667 pipeline.go:154] Could not find an image stream 
match for "registry.dev.com:5000/test2/test:7". Make sure that a Docker image 
with that tag is available on the node for the deployment
 to succeed.

--> Found Docker image 65262bc (4 hours old) from registry.dev.com:5000 for 
"registry.dev.com:5000/test2/test:7"



* This image will be deployed in deployment config "test"

* Ports 8080/tcp, /tcp will be load balanced by service "test"

  * Other containers can access this service through the hostname "test"

* WARNING: Image "test" runs as the 'root' user which may not be permitted 
by your cluster administrator



--> Creating resources with label app=test ...

deploymentconfig "test" created

service "test" created

--> Success

Run 'oc status' to view your app.



--> ERROR: Failed to pull image "registry.dev.com:5000/test2/test:7": image 
pull failed for registry.dev.com:5000/test2/test:7,
 this may be because there are no credentials on this request. details: (Error: 
image test2/test:7 not found) 





$ oc new-app --insecure-registry 172.30.82.xx:5000/test2/test:7

--> Found Docker image 65262bc (4 hours old) from 172.30.82.xx:5000 for 
"172.30.82.xx:5000/test2/test:7"



* An image stream will be created as "test:7" that will track this image

* This image will be deployed in deployment config "test"

* Ports 8080/tcp, /tcp will be load balanced by service "test"

  * Other containers can access this service through the hostname "test"

* WARNING: Image "test" runs as the 'root' user which may not be permitted 
by your cluster administrator



--> Creating resources with label app=test ...



--> WORKS







INFO: I defined the hostname when I was securing the registry:

oadm ca create-server-cert --signer-cert=ca.crt \
--signer-key=ca.key --signer-serial=ca.serial.txt \
--hostnames='registry.dev.com,172.30.xx.xx' \
--cert=registry.crt --key=registry.key
I'm also able to perform a manual login and push the image.



RE: oc new-app on image from OpenShift Registry

2016-03-15 Thread Den Cowboy
I see the push failed

440807311fed: Pushed 
ed3cf5a3e842: Pushed 
8a652dfcca24: Pushed 
latest: digest: 
sha256:b0b56d49abf65ff0c709390ce60be5d3a4d027f27b3c4349596b35b5e9e0 size: 
6972
Received unexpected HTTP status: 500 Internal Server Error

From: dencow...@hotmail.com
To: tim.m...@spring.co.nz
Subject: RE: oc new-app on image from OpenShift Registry
Date: Tue, 15 Mar 2016 08:22:40 +
CC: users@lists.openshift.redhat.com




Hi Tim, thanks for the fast reply:
docker-registry   registry.dev.com docker-registry:5000-tcp   
passthrough   docker-registry=default

But you made me think. The port is an issue probably.
The 5000 is only an internal port on the container. But How can I push and pull 
an image from the outside?

First I moved my ca.crt to registry.dev.com:443 instead of registry.dev.com:5000

I tried to tag an image on registry.dev.com:443/test3/test and I was able to 
push the image but when I try to start the image:

$ oc new-app --insecure-registry registry.dev.com:443/test3/test
error: can't look up Docker image "registry.dev.dbm.com:443/test3/test": 
Internal error occurred: Get https://registry.dev.dbm.com:443/v2/: dial tcp 
172.30.82.246:443: no route to host

From: tim.m...@spring.co.nz
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com
Subject: Re: oc new-app on image from OpenShift Registry
Date: Tue, 15 Mar 2016 08:03:17 +






Hey Den, 



have you created your external route?



whats the output of:



oc get routes



Also, when using external routes you won’t need the port ‘:5000’ reference. 



Link
- 
https://docs.openshift.org/latest/install_config/install/docker_registry.html#access-pushing-and-pulling-images




On 15/03/2016, at 8:44 PM, Den Cowboy <dencow...@hotmail.com> wrote:



I've my OpenShift registry. It's using selfsigned certificates which are 
created for my service IP (172.30.82.xx) and my hostname (registry.dev.com)



[centos@ip-172-31-18-122 ~]$ oc new-app --insecure-registry 
registry.dev.com:5000/test2/test:7

W0315 07:38:52.206896   37667 pipeline.go:154] Could not find an image stream 
match for "registry.dev.com:5000/test2/test:7". Make sure that a Docker image 
with that tag is available on the node for the deployment
 to succeed.

--> Found Docker image 65262bc (4 hours old) from registry.dev.com:5000 for 
"registry.dev.com:5000/test2/test:7"



* This image will be deployed in deployment config "test"

* Ports 8080/tcp, /tcp will be load balanced by service "test"

  * Other containers can access this service through the hostname "test"

* WARNING: Image "test" runs as the 'root' user which may not be permitted 
by your cluster administrator



--> Creating resources with label app=test ...

deploymentconfig "test" created

service "test" created

--> Success

Run 'oc status' to view your app.



--> ERROR: Failed to pull image "registry.dev.com:5000/test2/test:7": image 
pull failed for registry.dev.com:5000/test2/test:7,
 this may be because there are no credentials on this request. details: (Error: 
image test2/test:7 not found) 





$ oc new-app --insecure-registry 172.30.82.xx:5000/test2/test:7

--> Found Docker image 65262bc (4 hours old) from 172.30.82.xx:5000 for 
"172.30.82.xx:5000/test2/test:7"



* An image stream will be created as "test:7" that will track this image

* This image will be deployed in deployment config "test"

* Ports 8080/tcp, /tcp will be load balanced by service "test"

  * Other containers can access this service through the hostname "test"

* WARNING: Image "test" runs as the 'root' user which may not be permitted 
by your cluster administrator



--> Creating resources with label app=test ...



--> WORKS







INFO: I defined the hostname when I was securing the registry:

oadm ca create-server-cert --signer-cert=ca.crt \
--signer-key=ca.key --signer-serial=ca.serial.txt \
--hostnames='registry.dev.com,172.30.xx.xx' \
--cert=registry.crt --key=registry.key
I'm also able to perform a manual login and push the image.




RE: oc new-app on image from OpenShift Registry

2016-03-15 Thread Den Cowboy
Hi Tim, thanks for the fast reply:

docker-registry   registry.dev.com   docker-registry:5000-tcp   passthrough   docker-registry=default

But you made me think: the port is probably the issue.
5000 is only an internal port on the container. So how can I push and pull 
an image from the outside?

First I moved my ca.crt to registry.dev.com:443 instead of 
registry.dev.com:5000.

I tried tagging an image as registry.dev.com:443/test3/test and I was able to 
push it, but when I try to start the image:

$ oc new-app --insecure-registry registry.dev.com:443/test3/test
error: can't look up Docker image "registry.dev.dbm.com:443/test3/test": 
Internal error occurred: Get https://registry.dev.dbm.com:443/v2/: dial tcp 
172.30.82.246:443: no route to host

From: tim.m...@spring.co.nz
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com
Subject: Re: oc new-app on image from OpenShift Registry
Date: Tue, 15 Mar 2016 08:03:17 +






Hey Den, 



have you created your external route?



whats the output of:



oc get routes



Also, when using external routes you won’t need the port ‘:5000’ reference. 



Link
- 
https://docs.openshift.org/latest/install_config/install/docker_registry.html#access-pushing-and-pulling-images




On 15/03/2016, at 8:44 PM, Den Cowboy <dencow...@hotmail.com> wrote:



I've my OpenShift registry. It's using selfsigned certificates which are 
created for my service IP (172.30.82.xx) and my hostname (registry.dev.com)



[centos@ip-172-31-18-122 ~]$ oc new-app --insecure-registry 
registry.dev.com:5000/test2/test:7

W0315 07:38:52.206896   37667 pipeline.go:154] Could not find an image stream 
match for "registry.dev.com:5000/test2/test:7". Make sure that a Docker image 
with that tag is available on the node for the deployment
 to succeed.

--> Found Docker image 65262bc (4 hours old) from registry.dev.com:5000 for 
"registry.dev.com:5000/test2/test:7"



* This image will be deployed in deployment config "test"

* Ports 8080/tcp, /tcp will be load balanced by service "test"

  * Other containers can access this service through the hostname "test"

* WARNING: Image "test" runs as the 'root' user which may not be permitted 
by your cluster administrator



--> Creating resources with label app=test ...

deploymentconfig "test" created

service "test" created

--> Success

Run 'oc status' to view your app.



--> ERROR: Failed to pull image "registry.dev.com:5000/test2/test:7": image 
pull failed for registry.dev.com:5000/test2/test:7,
 this may be because there are no credentials on this request. details: (Error: 
image test2/test:7 not found) 





$ oc new-app --insecure-registry 172.30.82.xx:5000/test2/test:7

--> Found Docker image 65262bc (4 hours old) from 172.30.82.xx:5000 for 
"172.30.82.xx:5000/test2/test:7"



* An image stream will be created as "test:7" that will track this image

* This image will be deployed in deployment config "test"

* Ports 8080/tcp, /tcp will be load balanced by service "test"

  * Other containers can access this service through the hostname "test"

* WARNING: Image "test" runs as the 'root' user which may not be permitted 
by your cluster administrator



--> Creating resources with label app=test ...



--> WORKS







INFO: I defined the hostname when I was securing the registry:

oadm ca create-server-cert --signer-cert=ca.crt \
--signer-key=ca.key --signer-serial=ca.serial.txt \
--hostnames='registry.dev.com,172.30.xx.xx' \
--cert=registry.crt --key=registry.key
I'm also able to perform a manual login and push the image.








oc new-app on image from OpenShift Registry

2016-03-15 Thread Den Cowboy
I've my OpenShift registry. It's using selfsigned certificates which are 
created for my service IP (172.30.82.xx) and my hostname (registry.dev.com)

[centos@ip-172-31-18-122 ~]$ oc new-app --insecure-registry 
registry.dev.com:5000/test2/test:7
W0315 07:38:52.206896   37667 pipeline.go:154] Could not find an image stream 
match for "registry.dev.com:5000/test2/test:7". Make sure that a Docker image 
with that tag is available on the node for the deployment to succeed.
--> Found Docker image 65262bc (4 hours old) from registry.dev.com:5000 for 
"registry.dev.com:5000/test2/test:7"

* This image will be deployed in deployment config "test"
* Ports 8080/tcp, /tcp will be load balanced by service "test"
  * Other containers can access this service through the hostname "test"
* WARNING: Image "test" runs as the 'root' user which may not be permitted 
by your cluster administrator

--> Creating resources with label app=test ...
deploymentconfig "test" created
service "test" created
--> Success
Run 'oc status' to view your app.

--> ERROR: Failed to pull image "registry.dev.com:5000/test2/test:7": image 
pull failed for registry.dev.com:5000/test2/test:7, this may be 
because there are no credentials on this request.  details: (Error: 
image test2/test:7 not found)



$ oc new-app --insecure-registry 172.30.82.xx:5000/test2/test:7
--> Found Docker image 65262bc (4 hours old) from 172.30.82.xx:5000 for 
"172.30.82.xx:5000/test2/test:7"

* An image stream will be created as "test:7" that will track this image
* This image will be deployed in deployment config "test"
* Ports 8080/tcp, /tcp will be load balanced by service "test"
  * Other containers can access this service through the hostname "test"
* WARNING: Image "test" runs as the 'root' user which may not be permitted 
by your cluster administrator

--> Creating resources with label app=test ...

--> WORKS



INFO: I defined the hostname when I was securing the registry:
oadm ca create-server-cert --signer-cert=ca.crt \
--signer-key=ca.key --signer-serial=ca.serial.txt \
--hostnames='registry.dev.com,172.30.xx.xx' \
--cert=registry.crt --key=registry.key
I'm also able to perform a manual login and push the image.
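As an aside, instead of marking the registry insecure, Docker can be made to trust the self-signed CA per registry by dropping it under /etc/docker/certs.d; a sketch, where the directory name must match the host:port used at login (hostname here is an assumption):

```
# Trust the registry's CA for this host:port only:
sudo mkdir -p /etc/docker/certs.d/registry.dev.com:5000
sudo cp ca.crt /etc/docker/certs.d/registry.dev.com:5000/ca.crt
```

Docker reads certs.d per connection, so no daemon restart should be needed for this.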


RE: OpenShift certs for securing registry

2016-03-14 Thread Den Cowboy
Okay it seems I have to use --insecure-registry. Than it works for 
IP:5000/project/image:latest, not for registry:5000/project/image:latest. I 
don't have a wildcard dns at the moment. I'm specifying it in etc/hosts. 

Do I have to put IP registry in /etc/hosts on my container of the registry?

It's weird because the docker pull registry:5000/project/image:latest works fine


This is a weird error: But it finds the image:

Could not find an image stream match for 
"registry.xx.xx.com:5000/project/image:latest". Make sure that a Docker image 
with that tag is available on the node for the deployment to succeed.
--> Found Docker image cd38c74 (About an hour old) from 
registry.xxx.xxx.com:5000 for "registry.xx.xx.com:5000/project/image:latest"

But a backoff loop:



From: dencow...@hotmail.com
To: users@lists.openshift.redhat.com
Subject: OpenShift certs for securing registry
Date: Mon, 14 Mar 2016 13:42:02 +




I'm trying to push to my OpenShift Registry. I've secured the registry with 
ca.crt of OpenShift itself.
Is this secure or do I have to tag insecure?

I'm able to push an image into the registry and I'm also able to pull it by 
logging in into the registry.
I pushed into my openshift registry so I don't have to create image-streams. 
(oc import-image)
But this doesn't work:


oc new-app registry.xxx.com:5000/project/image:latest
error: can't look up Docker image "": Internal error occurred: Get 
https://:5000/v2/: x509: certificate signed by unknown authority
error: no match for "...:5000/projc/image:latest"

oc new-app 172.30.xx.xx:5000/project/image:latest
error: can't look up Docker image "172.30.xx.xx:5000/project/image:latest": 
Internal error occurred: Get https://172.30.xx.xx:5000/v2/: x509: certificate 
signed by unknown authority
error: no match for ".."

The certs are in /etc/docker/certs.d
I'm able to login manually (docker login -u .. -e ... -p token service-IP:5000 
or hostname:5000)
I'm able to pull and push with the docker commands.





Info:
oc version
oc v1.1.3
kubernetes v1.2.0-origin
  



RE: etcd failure response: HTTP/0.0 0 status code 0

2016-03-14 Thread Den Cowboy
Okay I had to change:
roles/openshift_manage_node/tasks/main.yml

I replaced the bottom task:

- name: Label nodes
  command: >
    {{ openshift.common.client_binary }} label --overwrite node {{ item.openshift.common.hostname | lower }} {{ item.openshift_node_labels | oo_combine_dict }}
  with_items:
    - "{{ openshift_node_vars }}"
  when: "'openshift_node_labels' in item and item.openshift_node_labels != {}"

It's working now. Thanks
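For context, that task boils down to running something like the following once per node; the node name and label here are examples:

```
oc label --overwrite node ip-172-31-xx-xx.ec2.internal region=infra
```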

From: dencow...@hotmail.com
To: jdeti...@redhat.com
Subject: RE: etcd failure response: HTTP/0.0 0 status code 0
Date: Mon, 14 Mar 2016 07:33:11 +
CC: users@lists.openshift.redhat.com




The port is closed, but I didn't specify to use etcd. These were my exports:

export AWS_PROFILE=myprofile
export ec2_vpc_subnet='subnet-xxx'
export ec2_security_groups="['OpenShiftSec']"
export ec2_instance_type='m4.large'
export ec2_image='ami-xxx'
export ec2_region='xx-xx-1'
export ec2_keypair='openshift'
export ec2_assign_public_ip='true'
export os_master_root_vol_size='20'
export os_master_root_vol_type='standard'
export os_node_root_vol_size='15'
export os_node_root_vol_type='standard'

 







Date: Sun, 13 Mar 2016 16:53:02 -0400
Subject: Re: etcd failure response: HTTP/0.0 0 status code 0
From: jdeti...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Did you specify any etcd hosts? Does the security group used permit TCP/2379 
from the masters to the etcd hosts?
On Mar 13, 2016 10:57 AM, "Den Cowboy" <dencow...@hotmail.com> wrote:



I tried to install the Origin Cluster but I got this error when I'm running my 
playbook:
TASK: [openshift_master | Start and enable master api]  
failed: [52.xx.xx.xx => {"failed": true}
msg: Job for origin-master-api.service failed because the control process 
exited with error code. See "systemctl status origin-master-api.service" and 
"journalctl -xe" for details.


origin-master-api.service - Atomic OpenShift Master API
   Loaded: loaded (/usr/lib/systemd/system/origin-master-api.service; enabled; 
vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2016-03-13 14:38:46 UTC; 13min 
ago
 Docs: https://github.com/openshift/origin
  Process: 18236 ExecStart=/usr/bin/openshift start master api 
--config=${CONFIG_FILE} $OPTIONS (code=exited, status=2)
 Main PID: 18236 (code=exited, status=2)

atomic-openshift-master-api[18236]: Content-Length: 0
atomic-openshift-master-api[18236]: E0313 14:38:45.122824   18236 etcd.go:128] 
etcd failure response: HTTP/0.0 0 status code 0
atomic-openshift-master-api[18236]: Content-Length: 0
atomic-openshift-master-api[18236]: E0313 14:38:46.123859   18236 etcd.go:128] 
etcd failure response: HTTP/0.0 0 status code 0
atomic-openshift-master-api[18236]: Content-Length: 0
systemd[1]: origin-master-api.service start operation timed out. Terminating.
systemd[1]: origin-master-api.service: main process exited, code=exited, 
status=2/INVALIDARGUMENT
systemd[1]: Failed to start Atomic OpenShift Master API.
systemd[1]: Unit origin-master-api.service entered failed state.
systemd[1]: origin-master-api.service failed.

What could be the issue? I used 
https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md
  



RE: etcd failure response: HTTP/0.0 0 status code 0

2016-03-14 Thread Den Cowboy
The port is closed, but I didn't specify to use etcd. These were my exports:

export AWS_PROFILE=myprofile
export ec2_vpc_subnet='subnet-xxx'
export ec2_security_groups="['OpenShiftSec']"
export ec2_instance_type='m4.large'
export ec2_image='ami-xxx'
export ec2_region='xx-xx-1'
export ec2_keypair='openshift'
export ec2_assign_public_ip='true'
export os_master_root_vol_size='20'
export os_master_root_vol_type='standard'
export os_node_root_vol_size='15'
export os_node_root_vol_type='standard'

 







Date: Sun, 13 Mar 2016 16:53:02 -0400
Subject: Re: etcd failure response: HTTP/0.0 0 status code 0
From: jdeti...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Did you specify any etcd hosts? Does the security group used permit TCP/2379 
from the masters to the etcd hosts?
On Mar 13, 2016 10:57 AM, "Den Cowboy" <dencow...@hotmail.com> wrote:



I tried to install the Origin Cluster but I got this error when I'm running my 
playbook:
TASK: [openshift_master | Start and enable master api]  
failed: [52.xx.xx.xx => {"failed": true}
msg: Job for origin-master-api.service failed because the control process 
exited with error code. See "systemctl status origin-master-api.service" and 
"journalctl -xe" for details.


origin-master-api.service - Atomic OpenShift Master API
   Loaded: loaded (/usr/lib/systemd/system/origin-master-api.service; enabled; 
vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2016-03-13 14:38:46 UTC; 13min 
ago
 Docs: https://github.com/openshift/origin
  Process: 18236 ExecStart=/usr/bin/openshift start master api 
--config=${CONFIG_FILE} $OPTIONS (code=exited, status=2)
 Main PID: 18236 (code=exited, status=2)

atomic-openshift-master-api[18236]: Content-Length: 0
atomic-openshift-master-api[18236]: E0313 14:38:45.122824   18236 etcd.go:128] 
etcd failure response: HTTP/0.0 0 status code 0
atomic-openshift-master-api[18236]: Content-Length: 0
atomic-openshift-master-api[18236]: E0313 14:38:46.123859   18236 etcd.go:128] 
etcd failure response: HTTP/0.0 0 status code 0
atomic-openshift-master-api[18236]: Content-Length: 0
systemd[1]: origin-master-api.service start operation timed out. Terminating.
systemd[1]: origin-master-api.service: main process exited, code=exited, 
status=2/INVALIDARGUMENT
systemd[1]: Failed to start Atomic OpenShift Master API.
systemd[1]: Unit origin-master-api.service entered failed state.
systemd[1]: origin-master-api.service failed.

What could be the issue? I used 
https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md
  



etcd failure response: HTTP/0.0 0 status code 0

2016-03-13 Thread Den Cowboy
I tried to install the Origin Cluster but I got this error when I'm running my 
playbook:
TASK: [openshift_master | Start and enable master api]  
failed: [52.xx.xx.xx => {"failed": true}
msg: Job for origin-master-api.service failed because the control process 
exited with error code. See "systemctl status origin-master-api.service" and 
"journalctl -xe" for details.


origin-master-api.service - Atomic OpenShift Master API
   Loaded: loaded (/usr/lib/systemd/system/origin-master-api.service; enabled; 
vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2016-03-13 14:38:46 UTC; 13min 
ago
 Docs: https://github.com/openshift/origin
  Process: 18236 ExecStart=/usr/bin/openshift start master api 
--config=${CONFIG_FILE} $OPTIONS (code=exited, status=2)
 Main PID: 18236 (code=exited, status=2)

atomic-openshift-master-api[18236]: Content-Length: 0
atomic-openshift-master-api[18236]: E0313 14:38:45.122824   18236 etcd.go:128] 
etcd failure response: HTTP/0.0 0 status code 0
atomic-openshift-master-api[18236]: Content-Length: 0
atomic-openshift-master-api[18236]: E0313 14:38:46.123859   18236 etcd.go:128] 
etcd failure response: HTTP/0.0 0 status code 0
atomic-openshift-master-api[18236]: Content-Length: 0
systemd[1]: origin-master-api.service start operation timed out. Terminating.
systemd[1]: origin-master-api.service: main process exited, code=exited, 
status=2/INVALIDARGUMENT
systemd[1]: Failed to start Atomic OpenShift Master API.
systemd[1]: Unit origin-master-api.service entered failed state.
systemd[1]: origin-master-api.service failed.

What could be the issue? I used 
https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md
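To narrow down etcd failures like this, it can help to probe the etcd client port from the master directly; this is a sketch, and the port (2379 for external etcd, often 4001 for embedded etcd in this era) and certificate paths are assumptions that vary by install:

```
# Raw reachability of the etcd client port:
curl -k https://localhost:2379/health

# With the client certs an openshift-ansible install generates:
curl --cacert /etc/origin/master/master.etcd-ca.crt \
     --cert /etc/origin/master/master.etcd-client.crt \
     --key /etc/origin/master/master.etcd-client.key \
     https://localhost:2379/health
```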


RE: use service from other project (namespace)

2016-03-03 Thread Den Cowboy
Thanks!

> From: t...@butter.sh
> To: dencow...@hotmail.com; users@lists.openshift.redhat.com
> Subject: Re: use service from other project (namespace)
> Date: Thu, 3 Mar 2016 14:53:45 +0100
> 
> Hi.
> 
> > Is it possible to connect with a service which is in another project 
> > (namespace)?
> 
> Yes. When using openshift-sdn's multi-tenant mode you will need to
> connect the networks though. (If neither of the namespaces is default or
> openshift, that is.) I'll refer you to [1] for details, but the gist is
> the following.
> 
> oadm pod-network join-projects --to project1 project2
> 
> If you have a "global" database that all projects ought to be able to
> access (I would not recommend that if you have a testing environment),
> you can make the one project global.
> 
> [1]: https://docs.openshift.org/latest/admin_guide/pod_network.html
> 
> Cheers,
>  Tobias Florek
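After joining the projects, the effect can be verified by checking that both namespaces now share a NETID (this assumes the multitenant SDN plugin; project names are from the example above):

```
oadm pod-network join-projects --to project1 project2

# Joined projects show the same NETID value:
oc get netnamespaces | grep -E 'project1|project2'
```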


RE: How to distribute an image in OpenShift cluster

2016-03-01 Thread Den Cowboy
I wanted to create an image stream for my image (which is pulled from a private 
registry). So I also need a secret.

oc secrets new-dockercfg mysecret --docker-server=ec2xxx:5000 
--docker-username=myuser --docker-password=mypass --docker-email=a...@mail.com

oc import-image --insecure ec2xxx:5000/proj/image:23 --confirm

The image stream isn't created correctly:
lastTransitionTime: 2016-03-01T09:23:04Z
  message: you may not have access to the Docker image 
"ec2-52-58-3-178.eu-central-1.compute.amazonaws.com:5000/dbm/ponds-ui-nodejs:83"
  reason: Unauthorized
  status: "False"


Probably because I have to link the secret to my image stream? How can I 
perform this? 
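To make OpenShift use a dockercfg secret for imports and pulls, the secret has to be attached to the relevant service accounts rather than to the image stream itself; on releases of this vintage that looks roughly like the following (secret name is the one created above, and the exact subcommand may differ by version):

```
# Allow image pulls with the secret:
oc secrets add serviceaccount/default secrets/mysecret --for=pull

# Allow builds to use it as well:
oc secrets add serviceaccount/builder secrets/mysecret
```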
From: ccole...@redhat.com
Date: Thu, 25 Feb 2016 16:31:13 -0500
Subject: Re: How to distribute an image in OpenShift cluster
To: lorenz.vanthi...@outlook.com
CC: users@lists.openshift.redhat.com



On Feb 24, 2016, at 7:03 AM, Lorenz Vanthillo  
wrote:




I've a big OpenShift Origin 1.1 cluster with many nodes. Now I've pulled an 
image from an insecure registry (self-signed certificates) on one of the nodes. 
I can perform the oc new-app command to start the image and the application. 
The problem appears when I try to scale.
The scaling itself is going fine but the new containers are all on the same 
node from where I've pulled the image.
I thought the problem was the following: I'm creating a new app from an 
existing docker image. So because there is no s2i-build it's not necessary to 
build a new image and more important: it's not necessary to push this image 
into the OpenShift registry.

So other nodes aren't able to pull the image when I try to scale because it's 
not in the registry. It's only locally on one of my nodes.
How is it possible to solve this problem? I tried to push my image into the 
openshift registry manually. This was possible but after scaling the container 
is still recreated on that same node where the image already is.
I was reading about image streams but I don't know if that will solve my 
problem. When I will try to create an image stream on my image, will the image 
stream be known over the whole cluster-environment? 


Deployment configs point to images to run.  If the nodes have access to that 
image already, you won't need to do anything except reference it in the DC pod 
template.  If you want to control how the image is rolled out, you create an 
image stream and then have your DCs point to that image stream, then you can 
tag a new image in to the stream and rollouts will be triggered.



  


RE: Create image-stream for image from insecure private docker registry

2016-02-25 Thread Den Cowboy
I've created the secret for my registry:

oc secrets new-dockercfg ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com 
--docker-server=ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com 
--docker-username= --docker-password= --docker-email=a...@mail.com

2 questions:
- What do I have to fill in for secret? Now I'm fillling in the public dns of 
the server where my registry is on.
oc secrets new-dockercfg SECRET ...

- How can I tell my image-stream to connect with the created secret?

From: dencow...@hotmail.com
To: ccole...@redhat.com
Subject: RE: Create image-stream for image from insecure private docker registry
Date: Thu, 25 Feb 2016 08:42:22 +
CC: users@lists.openshift.redhat.com




Now I tried:

oc import-image --insecure=true 
ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com:5000/test/test-image:14 
--confirm

message: you may not have access to the Docker image 
"ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com:5000/test/test-image:14"
  reason: Unauthorized
  status: "False"

From: dencow...@hotmail.com
To: ccole...@redhat.com
Subject: RE: Create image-stream for image from insecure private docker registry
Date: Thu, 25 Feb 2016 08:04:30 +




I have Docker on all my instances.

oc import-image 
ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com:5000/test/test-image:14 
--confirm
The import completed successfully.

Name:             ponds-ui-nodejs
Created:          19 minutes ago
Labels:
Annotations:      openshift.io/image.dockerRepositoryCheck=2016-02-25T07:41:00Z
Docker Pull Spec: 172.30.xx.xx:5000/test2/test-image

Tag  Spec                                                                      Created         PullSpec / Image
14   ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com:5000/test/test-image   19 minutes ago  import failed: Internal error occurred: Get https://ec2-xx-xx-xx-xx...

When I want to edit it:

  lastTransitionTime: 2016-02-25T07:41:00Z
  message: 'Internal error occurred: Get 
https://ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com:5000/v2/:
x509: certificate signed by unknown authority'
  reason: InternalError
  status: "False"

In /etc/sysconfig/docker:
INSECURE_REGISTRY='--insecure-registry 
ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com'

The insecure registry is using selfsigned certs and basic authentication.
I'm able to login in the registry manually and pull the image manually.
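A quick way to rule out hostname/SAN mismatches behind "x509: certificate signed by unknown authority"-style errors is to inspect which names a certificate actually covers. This sketch generates a throwaway self-signed cert and prints its SAN; it requires OpenSSL 1.1.1+ for -addext, and registry.example.com is a placeholder:

```shell
# Generate a throwaway self-signed cert with a Subject Alternative Name:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/reg.key -out /tmp/reg.crt \
  -subj "/CN=registry.example.com" \
  -addext "subjectAltName=DNS:registry.example.com"

# Inspect which DNS names the cert is valid for -- the registry's
# hostname (and service IP, if used) must appear here:
openssl x509 -in /tmp/reg.crt -noout -text | grep -A1 "Subject Alternative Name"
```

Running the same inspection against a registry's real cert shows immediately whether the hostname you push to is actually listed.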

From: ccole...@redhat.com
Date: Wed, 24 Feb 2016 20:33:51 -0500
Subject: Re: Create image-stream for image from insecure private docker registry
To: dencow...@hotmail.com
CC: maszu...@redhat.com; users@lists.openshift.redhat.com

If you are on 1.1.3 there is a bug with new-app if you are running new-app on a 
machine without Docker, you won't be able to select images from the DockerHub.  
1.1.4 will contain a fix for that.
If you want to import that image,
oc import-image 
ec2-52-58-3-178.eu-central-1.compute.amazonaws.com:5000/test/image-name
Should be all you need.
On Feb 24, 2016, at 7:26 AM, Den Cowboy <dencow...@hotmail.com> wrote:




I've created my secret as following:
oc secrets new-dockercfg ec2-xxx.com --docker-server=ec2-xxx.com:5000 
--docker-username=test --docker-password=test --docker-email=mail@mail.comAfter 
that I tried to create my image-stream (which is not yet connected with the 
secret. How do I have to perform this?)

oc create -f image-stream.json 

content of the .json:
kind: ImageStream
apiVersion: v1
metadata:
  name: image-name
  tags:
  - from:
  kind: DockerImage
  name: 
ec2-52-58-3-178.eu-central-1.compute.amazonaws.com:5000/test/image-name
name: 83
importPolicy:
  insecure: "true"

But after $ oc get is

$ oc get is
NAME  DOCKER REPOTAGS  UPDATED
image-name   172.30.xx.xx:5000/my-project-in-openshift/image-name  

But I don't see the tag.
$ oc new-app --list shows my create image-stream but also no tag:

and when I try to use the image-stream:
$ oc new-app --image-stream=image-name
error: component "image-name" had only a partial match of 
"my-project-in-openshift/image-name" - if this is the value you want to use, 
specify it explicitly

oc new-app --image-stream=my-project-in-openshift/ponds-ui-nodejs
error: component "test3/ponds-ui-nodejs" had only a partial match of 
"test3/ponds-ui-nodejs" - if this is the value you want to use, specify it 
explicitly





> Subject: Re: Create image-stream for image from insecure private docker 
> registry
> To: dencow...@hotmail.com; users@lists.openshift.redhat.com
> From: maszu...@redhat.com
> Date: Tue, 23 Feb 2016 14:25:43 +0100
> 
> 
> 
> On 02/23/2016 11:44 AM, Den Cowboy wrote:
> > I  try to create an image-stream for my image from a docker registry.
> > The registry is insecure (it's using selfsigned certificates) and there is 
> > a login + passwor

RE: horizontal autoscaler does not get cpu utilization

2016-02-25 Thread Den Cowboy
Some logs are showing:

Failed to reconcile test-scaler: failed to compute desired number of replicas 
based on CPU utilization for DeploymentConfig/test/test: failed to get cpu 
utilization: failed to get CPU consumption and request: failed to unmarshall 
heapster response: invalid character 'E' looking for beginning of value
Feb 25 07:48:57 ip-172-31-xx-xx origin-master: E0225 07:48:57.0790282242 
event.go:192] Server rejected event 
'{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:api.ObjectMeta{Name:"test-scaler.14361ecd543d4608", GenerateName:"", 
Namespace:"test", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, 
CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, 
loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil)}, 
InvolvedObject:api.ObjectReference{Kind:"HorizontalPodAutoscaler", 
Namespace:"test", Name:"test-scaler", 
UID:"f7bac384-db00-11e5-ac6e-06b94d3c6589", APIVersion:"extensions", 
ResourceVersion:"13501", FieldPath:""}, Reason:"FailedGetMetrics", 
Message:"failed to get CPU consumption and request: failed to unmarshall 
heapster response: invalid character 'E' looking for beginning of value",

I used this configuration:
$ oc secrets new metrics-deployer nothing=/dev/null
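When the current utilization stays empty like this, a few read-only checks usually localize the failure; names below are placeholders from this thread, and the heapster pod name must be filled in from the pod list:

```
# Events recorded against the autoscaler:
oc describe hpa test-scaler -n test

# Health of the metrics stack that feeds the HPA:
oc get pods -n openshift-infra
oc logs heapster-xxxxx -n openshift-infra
```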


> Date: Wed, 24 Feb 2016 13:29:21 -0500
> From: mwri...@redhat.com
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
> Subject: Re: horizontal autoscaler does not get cpu utilization
> 
> 
> 
> - Original Message -
> > From: "Den Cowboy" <dencow...@hotmail.com>
> > To: users@lists.openshift.redhat.com
> > Sent: Wednesday, February 24, 2016 9:35:34 AM
> > Subject: RE: horizontal autoscaler does not get cpu utilization
> > 
> > I don't know if this is maybe the issue?
> > In my browser https://hawkular-metrics.xx.xx.com/hawkular/metrics/status
> > {"MetricsService":"STARTED","Implementation-Version":"0.13.0-SNAPSHOT","Built-From-Git-SHA1":"7dee24acfcfb3beac356e2c4d83b7b1704ebf82x"}
> > curl on my master or nodes:
> > curl -X GET https://hawkular-metrics.xx.xx.com/hawkular/metrics/status
> > curl: (6) Could not resolve host: hawkular-metrics.xx.xx.com; Name or 
> > service
> > not known
> > 
> > I'm just describing the IP of the node where my router is in my local
> > /etc/hosts
> > like this: xx.xx.xx.xx hawkular-metrics.xx.xx.com
> 
> The router configuration is not used for the HPA and so not being able to 
> resolve the hostname from within the node or container should not be an issue.
> 
> What the HPA does use is the API proxy.
> 
> You can check if Heapster is accessible via the API proxy through the 
> following command:
> 
> curl -H "Authorization: Bearer X" \
>-X GET 
> https://${KUBERNETES_MASTER}/api/v1/proxy/namespaces/openshift-infra/services/https:heapster:/api/v1/model/
> 
> Are there any other errors in the OpenShift logs [not the container logs for 
> Hawkular-Metrics, Cassandra or Heapster, those appear to be working since you 
> can see metrics in the browser]
> 
> > 
> > 
> > From: dencow...@hotmail.com
> > To: users@lists.openshift.redhat.com
> > Subject: horizontal autoscaler does not get cpu utilization
> > Date: Wed, 24 Feb 2016 13:56:24 +
> > 
> > I'm on Origin 1.1.3
> > I've confgured the cluster-metrics (its in the openshift-infra project!). 
> > I'm
> > able to see all the metrics (memory & cpu) on my metrics-tab.
> > Now I try to create a simple autoscaler:
> > oc autoscale dc/test --min 2 --max 15 --cpu-percent=70
> > 
> > I've edited the dc of my container so now it's using resources requests and
> > limits.
> > In my webconsole I see the 2 cirkles and the MiB and millicores used.
> > 
> > But
> > oc get hpa
> > NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE
> > test DeploymentConfig/test/scale 70%  2 15 21m
> > 
> > Target CPU utilization: 70%
> > Current CPU utilization: 
> > 
> > I read it was normal that the current state was waiting in the beginning. 
> > But
> > it's already on  for 21 minutes.
> > How can I check what's going wrong?
> > 
> > The logs of the pod where I've created the autoscaler doesn't show anything
> > different than normal:
> > AH00558: httpd: Could not reliably determine the server's fully qualified

RE: openshift start => don't generate master-config.yaml, openshift start master --write-config => generate master-config.yaml, it's a bug or a feature ?

2016-02-24 Thread Den Cowboy
Have you checked /etc/origin/master/
That's where the config files are generated in Origin.

With the --write-config option you write your own config files.
I assume you were looking at the documentation of OpenShift 3.0,
but when you're working with Origin I would recommend the Origin documentation:
https://docs.openshift.org/latest/welcome/index.html


Date: Wed, 24 Feb 2016 10:02:42 +0100
Subject: openshift start => don't generate master-config.yaml, openshift start  
master --write-config => generate master-config.yaml,   it's a bug or a feature 
?
From: cont...@stephane-klein.info
To: users@lists.openshift.redhat.com

Hi,

when I execute :

```
# openshift start
# ls openshift.local.config/master/master-config.yaml
ls: cannot access openshift.local.config/master/master-config.yaml: No such 
file or directory
...

The "master-config.yaml" config file isn't generated.

Same result with :

```
# openshift start master
# ls openshift.local.config/master/master-config.yaml
ls: cannot access openshift.local.config/master/master-config.yaml: No such 
file or directory
```

But if I execute :

```
# openshift start master --write-config=openshift.local.config/master/
# ls openshift.local.config/master/master-config.yaml
openshift.local.config/master/master-config.yaml
```

The config file is present.

It's a bug or a feature ? If it's a feature, I don't understand why.

This is the version :

```
# openshift version
openshift v1.1.3
kubernetes v1.2.0-origin
etcd 2.2.2+git
```

Best regards,
Stéphane
-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane




RE: Create app with image from own docker registry on OpenShift 3.1

2016-02-23 Thread Den Cowboy
I've added it + restarted docker:
INSECURE_REGISTRY='--insecure-registry 
ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com'

I'm able to perform a docker login and pull the image manually but
oc new-app ec2-xxx:5000/test/image:1 (or /test/image)
error: can't look up Docker image "ec2-xxx:5000/dbm/ponds-ui-nodejs:83": Internal error occurred: Get https://ec2-xxx:5000/v2/: x509: certificate signed by unknown authority
error: no match for "ec2-xxx:5000/test/image:1"
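One thing worth checking: the --insecure-registry value should match the host and port exactly as they appear in the image reference. With the registry on port 5000 that would be (hostname is a placeholder; the daemon needs a restart after the change):

```
# /etc/sysconfig/docker
INSECURE_REGISTRY='--insecure-registry ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com:5000'
```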

From: bpar...@redhat.com
Date: Thu, 18 Feb 2016 09:48:32 -0500
Subject: Re: Create app with image from own docker registry on OpenShift 3.1
To: dencow...@hotmail.com; users@lists.openshift.redhat.com

INSECURE_REGISTRY is needed because your registry is using a self-signed cert, 
whether it is secured or not.


On Thu, Feb 18, 2016 at 4:59 AM, Den Cowboy <dencow...@hotmail.com> wrote:



No didn't do that. I'm using a secure registry for OpenShift. So the tag was 
not on insecure. 

From: bpar...@redhat.com
Date: Wed, 17 Feb 2016 10:53:48 -0500
Subject: Re: Create app with image from own docker registry on OpenShift 3.1
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

is ec2-xxx listed as an insecure registry in your docker daemon's configuration?

/etc/sysconfig/docker
INSECURE_REGISTRY='--insecure-registry ec2-'

I believe that is needed for docker to communicate with registries that use 
self-signed certs.

(you'll need to restart the docker daemon after adding that setting)
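An alternative not suggested in this thread: instead of marking the registry insecure, Docker can be told to trust a self-signed certificate by placing it under /etc/docker/certs.d. The registry host/port and cert path below are assumptions based on the earlier messages:

```shell
# Assumption: the registry answers on ec2-xxx:5000 and certs/domain.crt is its cert.
sudo mkdir -p /etc/docker/certs.d/ec2-xxx:5000
sudo cp certs/domain.crt /etc/docker/certs.d/ec2-xxx:5000/ca.crt
# certs.d is read per request, so no --insecure-registry flag should be needed.
```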



On Wed, Feb 17, 2016 at 8:15 AM, Den Cowboy <dencow...@hotmail.com> wrote:





I have my own docker registry secured with a selfsigned certificate.
On other servers, I'm able to login on the registry and pull/push images from 
it. So that seems to work fine.


But when I want to create an app from the image using OpenShift, it does not 
seem to work:


oc new-app ec2-xxx:5000/test/image1
error: can't look up Docker image "ec2-xx/test/image1": Internal error 
occurred: Get https://ec2-xxx:5000/v2/: x509: certificate signed by unknown 
authority
error: no match for "ec2-xxx:5000/test/image1"


What could be the issue?
I'm able to login in the registry and pull the image manually.

  





-- 
Ben Parees | OpenShift


  


-- 
Ben Parees | OpenShift




Run extra docker registry on OpenShift Origin 1.1

2016-02-22 Thread Den Cowboy



I have the OpenShift registry, which will contain all the images I've created 
inside my OpenShift cluster. But I also want to run an external registry on 
OpenShift. At the moment it's just running with Docker; I performed these steps:

Create self-signed certificates (SSL):

$ mkdir -p certs && openssl req \
    -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
    -x509 -days 365 -out certs/domain.crt

Create the user + password file:

$ docker run --entrypoint htpasswd registry:2 -Bbn testuser testpassword > auth/htpasswd

Create a volume container for storing data (not running):

$ docker create -v /var/lib/registry --name registry-dv registry:2

Start the registry server:

$ docker run -d -p 5000:5000 --restart=always \
    --name ec2-52-29-xx-xx.xx-central-1.compute.amazonaws.com \
    --volumes-from registry-dv \
    -v `pwd`/auth:/auth \
    -e "REGISTRY_AUTH=htpasswd" \
    -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
    -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
    -v `pwd`/certs:/certs \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
    -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
    registry:2

Now my question is how I can start this registry in OpenShift. Is it possible 
to use Docker volume containers in OpenShift, or do I have to use NFS or 
something? And is it possible to use the -v and -e flags inside the oc new-app 
command? Since `-e, --env=[]: Specify key value pairs of environment variables 
to set into each container` should work, can I run:

oc new-app registry:2 --name registry -e ...?

But the biggest problem seems to me to be mounting the created certs and auth 
folders into the volume of my registry on OpenShift.
I read this: https://docs.openshift.com/enterprise/3.0/dev_guide/volumes.html
Is there maybe another example of the process of mounting folders into volumes 
for use in OpenShift?
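One possible sketch, not a verified recipe: with the OpenShift 1.1-era CLI, the certs and auth folders could be mounted as secrets instead of host folders. All resource names below are hypothetical, and the persistent volume claim for the data directory would have to exist already:

```shell
# Hypothetical names; secret and volume commands as in the Origin 1.1 CLI.
oc secrets new registry-tls certs/domain.crt certs/domain.key
oc secrets new registry-htpasswd auth/htpasswd
oc new-app registry:2 --name=ext-registry \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key
# Mount the secrets where the env variables above expect them.
oc volume dc/ext-registry --add --name=tls --type=secret \
  --secret-name=registry-tls --mount-path=/certs
oc volume dc/ext-registry --add --name=htpasswd --type=secret \
  --secret-name=registry-htpasswd --mount-path=/auth
# Data directory backed by a (pre-created) PVC instead of a volume container.
oc volume dc/ext-registry --add --name=data \
  --type=persistentVolumeClaim --claim-name=registry-data \
  --mount-path=/var/lib/registry
```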



Create app with image from own docker registry on OpenShift 3.1

2016-02-17 Thread Den Cowboy


I have my own docker registry secured with a selfsigned certificate.
On other servers, I'm able to login on the registry and pull/push images from 
it. So that seems to work fine.


But when I want to create an app from the image using OpenShift, it does not 
seem to work:


oc new-app ec2-xxx:5000/test/image1
error: can't look up Docker image "ec2-xx/test/image1": Internal error 
occurred: Get https://ec2-xxx:5000/v2/: x509: certificate signed by unknown 
authority
error: no match for "ec2-xxx:5000/test/image1"


What could be the issue?
I'm able to login in the registry and pull the image manually.



RE: oc v1.1.2-dirty

2016-02-17 Thread Den Cowboy
Thanks. Did a new install:
oc v1.1.2-1-gbe558b1
kubernetes v1.2.0-alpha.4-851-g4a65fa1

It's fine now.

> From: blean...@redhat.com
> Date: Tue, 16 Feb 2016 07:23:08 -0500
> Subject: Re: oc v1.1.2-dirty
> To: dencow...@hotmail.com
> CC: sdod...@redhat.com; users@lists.openshift.redhat.com
> 
> On Tue, Feb 16, 2016 at 3:40 AM, Den Cowboy <dencow...@hotmail.com> wrote:
> > So I have to wait until the right files are committed in Git?
> 
> I think what Scott is saying is that those files are harmless in this
> case.  I did hear he was planning to do another 1.1.2 build sometime
> soon which should resolve this versioning problem.
> 
> >
> >> Date: Mon, 15 Feb 2016 09:12:20 -0500
> >> Subject: Re: oc v1.1.2-dirty
> >> From: sdod...@redhat.com
> >> To: blean...@redhat.com
> >> CC: dencow...@hotmail.com; users@lists.openshift.redhat.com
> >
> >>
> >> This is actually in the copr RPM build I had done. At the time that I
> >> built the SRPM I had a new uncommitted file in my git checkout. I've
> >> verified that the contents of the RPM are unaffected. We'll address
> >> this release process deficiency as we get automated builds ready for
> >> Centos.
> >>
> >> On Mon, Feb 15, 2016 at 8:45 AM, Brenton Leanhardt <blean...@redhat.com>
> >> wrote:
> >> > On Mon, Feb 15, 2016 at 8:07 AM, Den Cowboy <dencow...@hotmail.com>
> >> > wrote:
> >> >> Why is this the name of this version?
> >> >> oc v1.1.2-dirty
> >> >>
> >> >> Is it because of the new layout or is it deprecated/broken?
> >> >
> >> > I think you'll see this when you perform a build with files in your
> >> > repository that aren't committed.
> >> >
> >> >
> >> >>
> >> >> ___
> >> >> users mailing list
> >> >> users@lists.openshift.redhat.com
> >> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >> >>
> >> >
> >> > ___
> >> > users mailing list
> >> > users@lists.openshift.redhat.com
> >> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
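Background not stated in the thread: the "-dirty" suffix is what `git describe --dirty` appends when the checkout has uncommitted changes, which matches Scott's description of the build. A minimal reproduction in a throwaway repo (tag name chosen to mirror the release):

```shell
# Throwaway repo demonstrating git describe's -dirty suffix.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "initial"
git tag v1.1.2
git describe --tags --dirty   # → v1.1.2 (clean checkout)
# Stage a new, uncommitted file — the build tree is now "dirty".
touch uncommitted-file && git add uncommitted-file
git describe --tags --dirty   # → v1.1.2-dirty
```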


Start container which needs env's

2016-02-17 Thread Den Cowboy
I have an image. When I want to start the image, I have to define some 
environment variables. How do I do this in OpenShift? Can I just add --env 
after the oc command?

$ docker run --restart=always --name "nodejs" \
    --env MOCK="x" \
    --env BRANDS="x" \
    --env PORT="" \
    -d my-image:73
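For what it's worth, the docker flags above map onto `oc new-app` roughly like this (a sketch with the same image and values; `-e` is oc's equivalent of `--env`, and restart behavior is handled by the deployment rather than a flag):

```shell
oc new-app my-image:73 --name=nodejs \
  -e MOCK="x" \
  -e BRANDS="x" \
  -e PORT=""
```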








RE: Use /etc/origin/master/files without sudo

2016-02-15 Thread Den Cowboy
Or is it permitted to perform these commands as a sudo user in production?

From: dencow...@hotmail.com
To: jligg...@redhat.com
CC: users@lists.openshift.redhat.com
Subject: RE: Use /etc/origin/master/files without sudo
Date: Mon, 15 Feb 2016 09:21:42 +




I understand, but then I'm unable to perform a command like this:
oadm ca create-server-cert --signer-cert=ca.crt \
--signer-key=ca.key --signer-serial=ca.serial.txt \
--hostnames="docker-registry.default.svc.cluster.local,${RESULT}" \
--cert=registry.crt --key=registry.key

Because it's not permitted to read/use the ca.crt etc.

From: jligg...@redhat.com
Date: Tue, 9 Feb 2016 11:45:37 -0500
Subject: Re: Use /etc/origin/master/files without sudo
To: dencow...@hotmail.com

Depends on what you're using these files for... for dev, 755 is fine. For 
production, you should be guarding the keys closely, and probably requiring 
sudo access to read/write/sign certs.
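One way to guard the keys while still letting a dedicated unprivileged user sign certs, sketched here with a hypothetical group name (not from the thread; tighter than a blanket chmod -R 755):

```shell
# Hypothetical group-based setup for /etc/origin/master.
sudo groupadd cert-admins
sudo usermod -aG cert-admins "$USER"
sudo chgrp -R cert-admins /etc/origin/master
# Group gets read (and traverse on dirs); everyone else gets nothing.
sudo chmod -R o-rwx,g+rX /etc/origin/master
# ca.serial.txt must stay writable: signing a cert records the new serial in it.
sudo chmod g+w /etc/origin/master/ca.serial.txt
```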

On Tue, Feb 9, 2016 at 10:18 AM, Den Cowboy <dencow...@hotmail.com> wrote:



Thanks. Is there a recommended chmod command to perform on the files in 
/master? Because chmod -R 755 worked, but it is unsafe I think.

From: jligg...@redhat.com
Date: Tue, 9 Feb 2016 10:15:19 -0500
Subject: Re: Use /etc/origin/master/files without sudo
To: dencow...@hotmail.com

sure, or write the initial config without using sudo and just run the server 
with sudo

On Tue, Feb 9, 2016 at 10:09 AM, Den Cowboy <dencow...@hotmail.com> wrote:



Thanks. And is it the right approach to set permissions on the files in 
/master (when you don't use your own certs)?

From: jligg...@redhat.com
Date: Tue, 9 Feb 2016 09:57:15 -0500
Subject: Re: Use /etc/origin/master/files without sudo
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

Generating a certificate requires write permissions on the ca.serial.txt file 
to record the fact that another certificate was signed using the CA.

On Tue, Feb 9, 2016 at 9:54 AM, Den Cowboy <dencow...@hotmail.com> wrote:



What's the best way to use these files without using sudo?
I performed a chmod +r on them.

But when I try the following without sudo:
$ oadm ca create-server-cert --signer-cert=ca.crt \
> --signer-key=ca.key --signer-serial=ca.serial.txt \
> --hostnames='docker-registry.default.svc.cluster.local,172.30.21.34' \
> --cert=registry.crt --key=registry.key
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0xcf747c]

goroutine 1 [running]:
github.com/openshift/origin/pkg/cmd/server/crypto.encodeCertificates(0xc2084a84c0,
 0x2, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0)

/builddir/build/BUILD/origin-git-0.ce0e67f/_build/src/github.com/openshift/origin/pkg/cmd/server/crypto/crypto.go:467
 +0x2bc
github.com/openshift/origin/pkg/cmd/server/crypto.writeCertificates(0x7fff9db9d68e,
 0xc, 0xc2084a84c0, 0x2, 0x2, 0x0, 0x0)

/builddir/build/BUILD/origin-git-0.ce0e67f/_build/src/github.com/openshift/origin/pkg/cmd/server/crypto/crypto.go:501
 +0xdf
github.com/openshift/origin/pkg/cmd/server/crypto.(*TLSCertificateConfig).writeCertConfig(0xc2083c0690,
 0x7fff9db9d68e, 0xc, 0x7fff9db9d6a1, 0xc, 0x0, 0x0)

/builddir/build/BUILD/origin-git-0.ce0e67f/_build/src/github.com/openshift/origin/pkg/cmd/server/crypto/crypto.go:71
 +0x67
github.com/openshift/origin/pkg/cmd/server/crypto.(*CA).MakeServerCert(0xc2083c0750,
 0x7fff9db9d68e, 0xc, 0x7fff9db9d6a1, 0xc, 0xc2083c0780, 0x1, 0x0, 0x0)

/builddir/build/BUILD/origin-git-0.ce0e67f/_build/src/github.com/openshift/origin/pkg/cmd/server/crypto/crypto.go:258
 +0x5b2
github.com/openshift/origin/pkg/cmd/server/admin.CreateServerCertOptions.CreateServerCert(0xc20847fcc0,
 0x7fff9db9d68e, 0xc, 0x7fff9db9d6a1, 0xc, 0xc2084e6060, 0x2, 0x2, 0x1, 
0x7f6276ae9530, ...)

/builddir/build/BUILD/origin-git-0.ce0e67f/_build/src/github.com/openshift/origin/pkg/cmd/server/admin/create_servercert.go:116
 +0x224
github.com/openshift/origin/pkg/cmd/server/admin.func·015(0xc2084c7e00, 
0xc2081d3c20, 0x0, 0x6)

/builddir/build/BUILD/origin-git-0.ce0e67f/_build/src/github.com/openshift/origin/pkg/cmd/server/admin/create_servercert.go:59
 +0x139
github.com/spf13/cobra.(*Command).execute(0xc2084c7e00, 0xc2081d3b60, 0x6, 0x6, 
0x0, 0x0)

/builddir/build/BUILD/origin-git-0.ce0e67f/_thirdpartyhacks/src/github.com/spf13/cobra/command.go:572
 +0x82f
github.com/spf13/cobra.(*Command).ExecuteC(0xc2084a2200, 0xc2084c7e00, 0x0, 0x0)

/builddir/build/BUILD/origin-git-0.ce0e67f/_thirdpartyhacks/src/github.com/spf13/cobra/command.go:662
 +0x4db
github.com/spf13/cobra.(*Command).Execute(0xc2084a2200, 0x0, 0x0)

/builddir/build/BUILD/origin-git-0.ce0e67f/_thirdpartyhacks/src/github.com/spf13/cobra/command.go:618
 +0x3a
main.main()

/builddir/build/BUILD/origin-git-0.ce0e67f/_build/src/github.com/openshift/origin/cmd/openshift/openshift.go:22
 +0x175

goroutine 5 [syscall]:
os/signal.loop()
/usr/lib/golang/src/os/signal/signal_unix.go

RE: Securing registry failed: error bad certificate

2016-02-10 Thread Den Cowboy
Is it impossible to use an environment variable in this oadm command?
I tried everything, but it always fails. When I fill it in manually, it works:

RESULT=$(oc get svc/docker-registry | awk '!/CLUSTER_IP/{print $2}')

# create certificates
cd /etc/origin/master/

echo $RESULT 
--> shows IP

echo "oadm ca create-server-cert --signer-cert=ca.crt \
--signer-key=ca.key --signer-serial=ca.serial.txt \
--hostnames='docker-registry.default.svc.cluster.local,$RESULT' \
--cert=registry.crt --key=registry.key"

--> shows right command

oadm ca create-server-cert --signer-cert=ca.crt \
--signer-key=ca.key --signer-serial=ca.serial.txt \
--hostnames='docker-registry.default.svc.cluster.local,$RESULT' \
--cert=registry.crt --key=registry.key

--> Seems to fill in the IP --> error: bad certificate.

oadm ca create-server-cert --signer-cert=ca.crt \
--signer-key=ca.key --signer-serial=ca.serial.txt \
--hostnames='docker-registry.default.svc.cluster.local,172.30.x.x' \
--cert=registry.crt --key=registry.key

--> works well
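An observation not drawn out in the thread: the failing command wraps --hostnames in single quotes, which suppress shell expansion of $RESULT, so the cert is generated for the literal string "$RESULT" instead of the IP. A minimal reproduction with a hypothetical value:

```shell
RESULT=172.30.21.34
# Single quotes: the literal text "$RESULT" is passed through unexpanded.
echo 'docker-registry.default.svc.cluster.local,$RESULT'
# → docker-registry.default.svc.cluster.local,$RESULT
# Double quotes: the variable is expanded as intended.
echo "docker-registry.default.svc.cluster.local,$RESULT"
# → docker-registry.default.svc.cluster.local,172.30.21.34
```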


From: dencow...@hotmail.com
To: agold...@redhat.com
Subject: RE: Securing registry failed: error bad certificate
Date: Tue, 9 Feb 2016 13:26:33 +
CC: users@lists.openshift.redhat.com




I think I found the answer. It's probably not an OpenShift issue (so I 
apologize).
I think it's because I'm executing the 'oadm' command as sudo (because 
otherwise I don't have permissions), but when I execute it with sudo, the 
environment variable isn't known.

But another question, related on this:
What's the best way to execute those commands in OpenShift? The documentation 
always uses '$' (so no root privileges), but then I have no permission on some 
keys under /etc/origin/master.
Do you execute a chmod on those files, or how do you solve this?

Thanks.
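An aside, assuming sudo's default env_reset is in effect: sudo starts the command with a scrubbed environment, which is why $RESULT disappears. `env -i` reproduces the effect, and passing the variable on the command line survives the reset:

```shell
RESULT=172.30.21.34
# env -i mimics sudo's env_reset: the variable is gone in the child shell.
env -i sh -c 'echo "RESULT=${RESULT:-unset}"'
# → RESULT=unset
# Passing it explicitly on the command line survives the reset
# (with sudo this would be: sudo RESULT="$RESULT" oadm ...).
env -i RESULT="$RESULT" sh -c 'echo "RESULT=$RESULT"'
# → RESULT=172.30.21.34
```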

From: dencow...@hotmail.com
To: agold...@redhat.com
Subject: RE: Securing registry failed: error bad certificate
Date: Tue, 9 Feb 2016 13:01:58 +
CC: users@lists.openshift.redhat.com




This is so weird. I really don't understand it:
What I did now:
1) Run the first part of the script
2) execute the oadm ca create-server-cert command manually
3) Run the second part of the script

This worked. I'm able to login in my secure registry.

Can the sudo'd oadm ca create-server-cert not handle environment variables, or 
what else is wrong with it?

From: dencow...@hotmail.com
To: agold...@redhat.com
Subject: RE: Securing registry failed: error bad certificate
Date: Tue, 9 Feb 2016 12:22:57 +
CC: users@lists.openshift.redhat.com




Thanks for the fast response.
Well, I performed this already manually, and then the security was working. But 
now I wanted to script it.
So I used:

# get Cluster-IP
RESULT=$(oc get svc/docker-registry | awk '!/CLUSTER_IP/{print $2}')
--> echo $RESULT gave me the IP of the service


sudo oadm ca create-server-cert --signer-cert=ca.crt \
--signer-key=ca.key --signer-serial=ca.serial.txt \
--hostnames='docker-registry.default.svc.cluster.local,$RESULT' \
--cert=registry.crt --key=registry.key

When I echo the command I really get the IP on the place of $RESULT.



Date: Tue, 9 Feb 2016 07:13:45 -0500
Subject: Re: Securing registry failed: error bad certificate
From: agold...@redhat.com
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

It's saying the cert doesn't have the IP address of the registry listed as a 
subjectAltName. What command did you run to generate your cert?
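A quick way to verify which names and IPs a cert actually covers, using plain openssl (requires OpenSSL 1.1.1+ for -addext and -ext; the hostname and IP below are illustrative, generating a throwaway cert just to have something to inspect):

```shell
# Generate a throwaway cert with DNS and IP subjectAltNames, then inspect them.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/demo.key" -out "$dir/demo.crt" \
  -subj "/CN=docker-registry.default.svc.cluster.local" \
  -addext "subjectAltName=DNS:docker-registry.default.svc.cluster.local,IP:172.30.21.34"
# Prints the SAN extension, e.g. "DNS:..., IP Address:172.30.21.34".
openssl x509 -in "$dir/demo.crt" -noout -ext subjectAltName
```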

On Tuesday, February 9, 2016, Den Cowboy <dencow...@hotmail.com> wrote:



I try to secure my registry, but it fails.
These are the logs after a push. I've checked the certificate: the ca.crt has 
the same content as the second part of my generated secret, so I don't know 
why this certificate is bad.

I0209 11:54:53.887517   1 sti.go:315] Successfully built 
172.30.221.132:5000/test2/test2:latest
I0209 11:54:53.917560   1 cleanup.go:23] Removing temporary directory 
/tmp/s2i-build586685329
I0209 11:54:53.917581   1 fs.go:117] Removing directory 
'/tmp/s2i-build586685329'
I0209 11:54:53.919251   1 sti.go:214] Using provided push secret for 
pushing 172.30.221.132:5000/test2/test2:latest image
I0209 11:54:53.919274   1 sti.go:218] Pushing 
172.30.221.132:5000/test2/test2:latest image ...
E0209 11:54:53.929640   1 dockerutil.go:78] push for image 
172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds ...
E0209 11:54:58.939648   1 dockerutil.go:78] push for image 
172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds ...
E0209 11:55:03.960704   1 dockerutil.go:78] push for image 
172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds ...
E0209 11:55:08.967635   1 dockerutil.go:78] push for image 
172.30.221.132:5000/test2/test2:latest failed, will retry in 5s seconds ...
E0209 11:55:13.976535   1 dockerutil.go:78] push for image 
172.30.221.132:5000/test2/test2:latest failed, wil
