Re: openshift-ansible release-3.10 - Install fails with control plane pods

2018-10-09 Thread Marc Schlegel
Hello everyone

I was finally able to resolve the issue with the control plane.

The problem was caused by the master pod not being able to connect to the 
etcd pod, because the master's hostname always resolved to 127.0.0.1 instead of 
its cluster IP. This was due to the Vagrant box I used, and could be resolved by 
making sure that /etc/hosts only contains the 127.0.0.1 localhost entry (and not 
the machine's own hostname).
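
For reference, this is roughly what the cleaned-up /etc/hosts looks like on my 
master afterwards; the 192.168.50.10 address below is just a placeholder for the 
node's actual cluster IP, the important part is that the machine's own FQDN is 
no longer mapped to 127.0.0.1:

    # /etc/hosts - the node's FQDN must not resolve to 127.0.0.1
    127.0.0.1      localhost localhost.localdomain
    192.168.50.10  master.vnet.de master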

Now the installer gets past the control-plane-check.

Unfortunately the next issue arises when the installer waits for the "catalog 
api server". The command "curl -k 
https://apiserver.kube-service-catalog.svc/healthz" cannot connect because the 
installer only adds "cluster.local" to resolv.conf.
Either the installer should make sure that any service name ending in ".svc" gets 
resolved as well (my current workaround: adding server=/svc/172.30.0.1 to 
/etc/dnsmasq.d/origin-upstream-dns.conf), or all services should get hostnames 
ending in "cluster.local".
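
In case it helps anyone, this is the workaround spelled out as I applied it on 
the master (a sketch of what I did; 172.30.0.1 is the cluster's kube service IP, 
and the dig/curl lines are just how I verified it, adjust to your environment):

    # /etc/dnsmasq.d/origin-upstream-dns.conf
    # forward *.svc lookups to the cluster DNS as well
    server=/svc/172.30.0.1

    # reload dnsmasq and check that the service catalog endpoint now resolves
    systemctl restart dnsmasq
    dig +short apiserver.kube-service-catalog.svc
    curl -k https://apiserver.kube-service-catalog.svc/healthz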


On Friday, 31 August 2018 at 21:15:12 CEST, you wrote:
> The dependency chain for the control plane is node, then etcd, then api, then
> controllers. From your previous post it looks like there's no apiserver
> running. I'd look into what's wrong there.
> 
> Check `master-logs api api`. If that doesn't provide any hints, then check
> the logs for the node service, but I can't think of anything that would fail
> there yet still result in the controller pods starting successfully.
> The apiserver and controller pods use the same image. Each pod will have
> two containers; the k8s_POD containers are rarely interesting.
> 
> On Thu, Aug 30, 2018 at 2:37 PM Marc Schlegel  wrote:
> 
> > Thanks for the link. It looks like the api-pod is not coming up at all!
> >
> > Log from k8s_controllers_master-controllers-*
> >
> > [vagrant@master ~]$ sudo docker logs
> > k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_1
> > E0830 18:28:05.787358   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594:
> > Failed to list *v1.Pod: Get
> > https://master.vnet.de:8443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.788589   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.ReplicationController: Get
> > https://master.vnet.de:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.804239   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.Node: Get
> > https://master.vnet.de:8443/api/v1/nodes?limit=500&resourceVersion=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.806879   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.StatefulSet: Get
> > https://master.vnet.de:8443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:05.808195   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.PodDisruptionBudget: Get
> > https://master.vnet.de:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.673507   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.PersistentVolume: Get
> > https://master.vnet.de:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.770141   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1beta1.ReplicaSet: Get
> > https://master.vnet.de:8443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.773878   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.Service: Get
> > https://master.vnet.de:8443/api/v1/services?limit=500&resourceVersion=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.778204   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.StorageClass: Get
> > https://master.vnet.de:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0:
> > dial tcp 127.0.0.1:8443: getsockopt: connection refused
> > E0830 18:28:06.784874   1 reflector.go:205]
> > github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87:
> > Failed to list *v1.PersistentVolumeClaim: Get
> > 

Re: Node names as IPs not hostnames

2018-10-09 Thread Dan Pungă

Hi Rich!

What I understand from the description of your GitHub issue is that 
you're trying to have the node names be the nodes' IP addresses when 
specifically integrating with OpenStack.


My problem is that I don't want to integrate with OpenStack, but the 
openshift_facts.py script from the 3.9 release still discovers it 
as a provider and disregards the host-level/VM configuration when it 
comes to node naming. So I think my problem is different from your 
GitHub issue.


As Scott Dodson pointed out, this is addressed in the release-3.10 
version of the script, which takes the provider configuration into 
account only if it is explicitly marked in the inventory file. I haven't 
tested it, but I guess the fix has to do with the check around here: 
https://github.com/openshift/openshift-ansible/blob/release-3.10/roles/openshift_facts/library/openshift_facts.py#L1033-L1036
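
If I read that right, the practical upshot is that the cloud-provider facts 
should only kick in when the provider is declared in the inventory, roughly as 
in the sketch below (openshift_cloudprovider_kind is the variable mentioned 
further down the thread; the commented lines are only an illustration, check 
the example inventory in openshift-ansible for the exact variable names):

    [OSEv3:vars]
    # Leave these unset to keep node naming based on the hosts' own configuration;
    # only set them if you actually want the OpenStack integration.
    # openshift_cloudprovider_kind=openstack
    # ...plus the rest of the openstack-specific openshift_cloudprovider_* variables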




On 09.10.2018 20:45, Rich Megginson wrote:
Are you hitting 
https://github.com/openshift/openshift-ansible/pull/9598 ?


On 10/9/18 11:25 AM, Dan Pungă wrote:

Thanks for the reply Scott!

I've used the release branches for both 3.9 and 3.10 of the 
openshift-ansible project, yes.
I initially checked the openshift_facts.py script flow in the 3.9 
branch; now looking at the 3.10 version, I do see the change that 
you're pointing to.


On 09.10.2018 05:40, Scott Dodson wrote:
Dan, are you using the latest from release-3.10 branch? I believe 
we've disabled the IaaS interrogation when you've not configured a 
cloud provider via openshift-ansible in the latest on that branch.


On Mon, Oct 8, 2018, 7:38 PM Dan Pungă wrote:


    I've done a bit of digging and apparently my problem is 
precisely connected to the fact that I'm running the cluster on the 
OpenStack provider.


    Briefly put, the openshift_facts playbook relies on the 
openshift-ansible/roles/openshift_facts/library/openshift_facts.py 
script. This script uses the ansible.module_utils tools to
    discover the underlying system, including any existing IaaS 
provider with its details. In my case it discovers the OpenStack 
provider and when setting the hostnames, the provider
    configuration takes precedence over whatever I've configured at 
the VM level.


    In my case, I haven't properly set up the FQDNs/hostnames at the 
OpenStack level. Instead, after I've created and launched the 
instances, I disabled at the VM level the ability of the
    cloud provider to reset my hostname definition/configuration and 
I thought this would be enough.


    I guess I'll try a reinstall on a lab environment with the 
openshift_facts.py script modified so that it passes over the 
Openstack check and hope it does what I'd expect, which is to be

    agnostic to the type of hosts on which I install.
    I actually thought that the only way the OpenShift/OKD installer 
would try to integrate with a provider was if I'd specifically set 
the openshift_cloudprovider_kind variable in the

    inventory file along with the rest of the specific variables.

    Regards,
    Dan Pungă

    On 08.10.2018 18:44, Dan Pungă wrote:

    Hello all,

    I'm trying to upgrade a working cluster from Openshift Origin 
3.9 to OKD 3.10 and the control plane update fails at one point 
with host not found.
    I've looked a bit over the problem and found this issue on 
GitHub: https://github.com/openshift/openshift-ansible/issues/9935 
where michaelgugino points out that "when upgrading from
    3.9, your hostnames match the node names in 'oc get nodes' 
otherwise, we won't be able to find the CSRs for your nodes."


    In fact my issue is precisely this: the node names are in fact 
their IPs and not the hostnames of the specific machines. It was 
something that I saw upon installation, but as the 3.9

    cluster was functioning all right, I let it be.
    The idea is that I (think) I have the DNS resolution set up 
properly, with all machines being able to resolve each other by 
FQDNs, however the 3.9 installer configured the node names
    with their respective IP addresses and I don't know how to 
address this.
    I should mention that the cluster is deployed inside an 
Openstack project, but the install config doesn't use 
OpenShift-Openstack configuration. However when running the
    ~/openshift-ansible/playbooks/byo/openshift_facts.yml I get 
references to the underlying OpenStack (somehow the installer 
"figures out" the underlying OpenStack and treats it as a 
    provider, the way I see it). I've pasted the output for one of 
the nodes below.


    Has any of you come across this node name config problem and 
were you able to solve it?
    Is there any procedure to change node names of a working 
cluster? I should say that the masters are also 
nodes (infrastructure), so I'm guessing the procedure, if there is 
one, would
    have to do with deprecating one master at a time, while for the 
nodes it would be a delete/change config/re-add procedure.


    Thank you!

    Output 

Re: Node names as IPs not hostnames

2018-10-09 Thread Rich Megginson

Are you hitting https://github.com/openshift/openshift-ansible/pull/9598 ?

On 10/9/18 11:25 AM, Dan Pungă wrote:

Thanks for the reply Scott!

I've used the release branches for both 3.9 and 3.10 of the openshift-ansible 
project, yes.
I initially checked the openshift_facts.py script flow in the 3.9 branch; 
now looking at the 3.10 version, I do see the change that you're pointing to.

On 09.10.2018 05:40, Scott Dodson wrote:
Dan, are you using the latest from release-3.10 branch? I believe we've disabled the IaaS interrogation when you've not configured a cloud provider via openshift-ansible in the latest on 
that branch.


On Mon, Oct 8, 2018, 7:38 PM Dan Pungă <dan.pu...@gmail.com> wrote:

I've done a bit of digging and apparently my problem is precisely connected 
to the fact that I'm running the cluster on the OpenStack provider.

Briefly put, the openshift_facts playbook relies on the 
openshift-ansible/roles/openshift_facts/library/openshift_facts.py script. This 
script uses the ansible.module_utils tools to
discover the underlying system, including any existing IaaS provider with 
its details. In my case it discovers the OpenStack provider and when setting 
the hostnames, the provider
configuration takes precedence over whatever I've configured at the VM 
level.

In my case, I haven't properly set up the FQDNs/hostnames at the OpenStack 
level. Instead, after I've created and launched the instances, I disabled at 
the VM level the ability of the
cloud provider to reset my hostname definition/configuration and I thought 
this would be enough.

I guess I'll try a reinstall on a lab environment with the 
openshift_facts.py script modified so that it passes over the Openstack check 
and hope it does what I'd expect, which is to be
agnostic to the type of hosts on which I install.
I actually thought that the only way the OpenShift/OKD installer would try 
to integrate with a provider was if I'd specifically set the 
openshift_cloudprovider_kind variable in the
inventory file along with the rest of the specific variables.

Regards,
Dan Pungă

On 08.10.2018 18:44, Dan Pungă wrote:

Hello all,

I'm trying to upgrade a working cluster from Openshift Origin 3.9 to OKD 
3.10 and the control plane update fails at one point with host not found.
I've looked a bit over the problem and found this issue on GitHub: 
https://github.com/openshift/openshift-ansible/issues/9935 where michaelgugino 
points out that "when upgrading from
3.9, your hostnames match the node names in 'oc get nodes' otherwise, we won't 
be able to find the CSRs for your nodes."

In fact my issue is precisely this: the node names are in fact their IPs 
and not the hostnames of the specific machines. It was something that I saw 
upon installation, but as the 3.9
cluster was functioning all right, I let it be.
The idea is that I (think) I have the DNS resolution set up properly, with 
all machines being able to resolve each other by FQDNs, however the 3.9 
installer configured the node names
with their respective IP addresses and I don't know how to address this.
I should mention that the cluster is deployed inside an Openstack project, 
but the install config doesn't use OpenShift-Openstack configuration. However 
when running the
~/openshift-ansible/playbooks/byo/openshift_facts.yml I get references to the 
underlying OpenStack (somehow the installer "figures out" the underlying 
OpenStack and treats it as a
provider, the way I see it). I've pasted the output for one of the nodes 
below.

Has any of you come across this node name config problem and were you able 
to solve it?
Is there any procedure to change node names of a working cluster? I should 
say that the masters are also nodes (infrastructure), so I'm guessing the 
procedure, if there is one, would
have to do with deprecating one master at a time, while for the nodes it 
would be a delete/change config/re-add procedure.

Thank you!

Output from openshift_facts playbook:

ok: [node1.oshift-pinfold.intra] => {
    "result": {
    "ansible_facts": {
    "openshift": {
    "common": {
    "all_hostnames": [
"node1.oshift-pinfold.intra",
    "192.168.150.22"
    ],
    "config_base": "/etc/origin",
    "deployment_subtype": "basic",
    "deployment_type": "origin",
    "dns_domain": "cluster.local",
    "examples_content_version": "v3.9",
    "generate_no_proxy_hosts": true,
    "hostname": "192.168.150.22",
    "internal_hostnames": [
    "192.168.150.22"
    ],
    "ip": "192.168.150.22",
    "kube_svc_ip": "172.30.0.1",

Re: Node names as IPs not hostnames

2018-10-09 Thread Dan Pungă

Thanks for the reply Scott!

I've used the release branches for both 3.9 and 3.10 of the 
openshift-ansible project, yes.
I initially checked the openshift_facts.py script flow in the 3.9 
branch; now looking at the 3.10 version, I do see the change that you're 
pointing to.


On 09.10.2018 05:40, Scott Dodson wrote:
Dan, are you using the latest from release-3.10 branch? I believe 
we've disabled the IaaS interrogation when you've not configured a 
cloud provider via openshift-ansible in the latest on that branch.


On Mon, Oct 8, 2018, 7:38 PM Dan Pungă wrote:


I've done a bit of digging and apparently my problem is precisely
connected to the fact that I'm running the cluster on the
OpenStack provider.

Briefly put, the openshift_facts playbook relies on the
openshift-ansible/roles/openshift_facts/library/openshift_facts.py
script. This script uses the ansible.module_utils tools to
discover the underlying system, including any existing IaaS
provider with its details. In my case it discovers the OpenStack
provider and when setting the hostnames, the provider
configuration takes precedence over whatever I've configured at
the VM level.

In my case, I haven't properly set up the FQDNs/hostnames at the
OpenStack level. Instead, after I've created and launched the
instances, I disabled at the VM level the ability of the cloud
provider to reset my hostname definition/configuration and I
thought this would be enough.

I guess I'll try a reinstall on a lab environment with the
openshift_facts.py script modified so that it passes over the
Openstack check and hope it does what I'd expect, which is to be
agnostic to the type of hosts on which I install.
I actually thought that the only way the OpenShift/OKD installer
would try to integrate with a provider was if I'd specifically set
the openshift_cloudprovider_kind variable in the inventory file
along with the rest of the specific variables.

Regards,
Dan Pungă

On 08.10.2018 18:44, Dan Pungă wrote:

Hello all,

I'm trying to upgrade a working cluster from Openshift Origin 3.9
to OKD 3.10 and the control plane update fails at one point with
host not found.
I've looked a bit over the problem and found this issue on GitHub:
https://github.com/openshift/openshift-ansible/issues/9935 where
michaelgugino points out that "when upgrading from 3.9, your
hostnames match the node names in 'oc get nodes' otherwise, we
won't be able to find the CSRs for your nodes."

In fact my issue is precisely this: the node names are in fact
their IPs and not the hostnames of the specific machines. It was
something that I saw upon installation, but as the 3.9 cluster
was functioning all right, I let it be.
The idea is that I (think) I have the DNS resolution set up
properly, with all machines being able to resolve each other by
FQDNs, however the 3.9 installer configured the node names with
their respective IP addresses and I don't know how to address this.
I should mention that the cluster is deployed inside an Openstack
project, but the install config doesn't use OpenShift-Openstack
configuration. However when running the
~/openshift-ansible/playbooks/byo/openshift_facts.yml I get
references to the underlying OpenStack (somehow the installer
"figures out" the underlying OpenStack and treats it as a
provider, the way I see it). I've pasted the output for one of
the nodes below.

Has any of you come across this node name config problem and were
you able to solve it?
Is there any procedure to change node names of a working cluster?
I should say that the masters are also nodes (infrastructure), so
I'm guessing the procedure, if there is one, would have to do
with deprecating one master at a time, while for the nodes it would
be a delete/change config/re-add procedure.

Thank you!

Output from openshift_facts playbook:

ok: [node1.oshift-pinfold.intra] => {
    "result": {
    "ansible_facts": {
    "openshift": {
    "common": {
    "all_hostnames": [
    "node1.oshift-pinfold.intra",
    "192.168.150.22"
    ],
    "config_base": "/etc/origin",
    "deployment_subtype": "basic",
    "deployment_type": "origin",
    "dns_domain": "cluster.local",
    "examples_content_version": "v3.9",
    "generate_no_proxy_hosts": true,
    "hostname": "192.168.150.22",
    "internal_hostnames": [
    "192.168.150.22"
    ],
    "ip": "192.168.150.22",
    "kube_svc_ip": 

Re: OpenShift Origin on AWS

2018-10-09 Thread David Conde
We have upgraded from the 3.6 reference architecture to the 3.9 aws
playbooks in openshift-ansible. There was quite a bit of work in getting
nodes ported into the scaling groups. We have upgraded our masters to 3.9
with the BYO playbooks but have not ported them to use scaling groups yet.

We'll be sticking with the aws openshift-ansible playbooks in the future
over the reference architecture so that we can upgrade easily.

On Tue, Oct 9, 2018 at 1:29 PM Joel Pearson 
wrote:

> There are CloudFormation templates as part of the 3.6 reference
> architecture, but that is now deprecated. I’m using that template at a
> client site and it worked fine (I’ve adapted it to work with 3.9 by using a
> static inventory as we didn’t want to revisit our architecture from
> scratch). We did customise it a fair bit though.
>
>
> https://github.com/openshift/openshift-ansible-contrib/blob/master/reference-architecture/aws-ansible/README.md
>
> Here is an example of a Jinja template that outputs a CloudFormation
> template.
>
> However, you can’t use the playbook as is for 3.9/3.10 because
> openshift-ansible has breaking changes to the playbooks.
>
> For some reason the new playbooks for 3.9/3.10 don’t use CloudFormation,
> but rather use the Amazon Ansible plugins and directly interact
> with AWS resources:
>
>
> https://github.com/openshift/openshift-ansible/blob/master/playbooks/aws/README.md
>
> That new approach is pretty interesting though as it uses prebuilt AMIs
> and auto-scaling groups, which make it very quick to add nodes.
>
> Hopefully some of that is useful to you.
>
> On Tue, 9 Oct 2018 at 9:42 pm, Peter Heitman  wrote:
>
>> Thank you for the reminder and the pointer. I know of that document but
>> was too focused on searching for a CloudFormation template. I'll go back to
>> the reference architecture which I'm sure will answer at least some of my
>> questions.
>>
>> On Sun, Oct 7, 2018 at 4:24 PM Joel Pearson <
>> japear...@agiledigital.com.au> wrote:
>>
>>> Have you seen the AWS reference architecture?
>>> https://access.redhat.com/documentation/en-us/reference_architectures/2018/html/deploying_and_managing_openshift_3.9_on_amazon_web_services/index#
>>> On Tue, 2 Oct 2018 at 3:11 am, Peter Heitman  wrote:
>>>
 I've created a CloudFormation Stack for simple lab-test deployments of
 OpenShift Origin on AWS. Now I'd like to understand what would be best for
 production deployments of OpenShift Origin on AWS. In particular I'd like
 to create the corresponding CloudFormation Stack.

 I've seen the Install Guide page on Configuring for AWS and I've looked
 through the RedHat QuickStart Guide for OpenShift Enterprise but am still
 missing information. For example, the RedHat QuickStart Guide creates 3
 masters, 3 etcd servers and some number of compute nodes. Where are the
 routers (infra nodes) located? On the masters or on the etcd servers? How
 are the ELBs configured to work with those deployed routers? What if some
 of the traffic you are routing is not http/https? What is required to
 support that?

 I've seen the simple CloudFormation stack (
 https://sysdig.com/blog/deploy-openshift-aws/) but haven't found
 anything comparable that is closer to production ready (and that
 likely takes advantage of the AWS VPC QuickStart:
 https://aws.amazon.com/quickstart/architecture/vpc/).

 Does anyone have any prior work that they could share or point me to?

 Thanks in advance,

 Peter Heitman



Re: OpenShift Origin on AWS

2018-10-09 Thread Joel Pearson
There are CloudFormation templates as part of the 3.6 reference
architecture, but that is now deprecated. I’m using that template at a
client site and it worked fine (I’ve adapted it to work with 3.9 by using a
static inventory as we didn’t want to revisit our architecture from
scratch). We did customise it a fair bit though.

https://github.com/openshift/openshift-ansible-contrib/blob/master/reference-architecture/aws-ansible/README.md

Here is an example of a Jinja template that outputs a CloudFormation
template.

However, you can’t use the playbook as is for 3.9/3.10 because
openshift-ansible has breaking changes to the playbooks.

For some reason the new playbooks for 3.9/3.10 don’t use CloudFormation,
but rather use the Amazon Ansible plugins and directly interact
with AWS resources:

https://github.com/openshift/openshift-ansible/blob/master/playbooks/aws/README.md

That new approach is pretty interesting though as it uses prebuilt AMIs and
auto-scaling groups, which make it very quick to add nodes.
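
For what it's worth, the overall flow with those playbooks is roughly the one
below; I'm paraphrasing the README linked above, so double-check the exact
playbook names and the provisioning_vars.yml settings against your
openshift-ansible checkout:

    # sketch only - names may differ between openshift-ansible releases
    cp playbooks/aws/provisioning_vars.yml.example provisioning_vars.yml
    # edit provisioning_vars.yml: cluster id, region, base AMI, ssh key, etc.

    # build the node AMI, then provision the control plane, install, and add nodes
    ansible-playbook -e @provisioning_vars.yml playbooks/aws/openshift-cluster/build_ami.yml
    ansible-playbook -e @provisioning_vars.yml playbooks/aws/openshift-cluster/provision_install.yml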

Hopefully some of that is useful to you.

On Tue, 9 Oct 2018 at 9:42 pm, Peter Heitman  wrote:

> Thank you for the reminder and the pointer. I know of that document but
> was too focused on searching for a CloudFormation template. I'll go back to
> the reference architecture which I'm sure will answer at least some of my
> questions.
>
> On Sun, Oct 7, 2018 at 4:24 PM Joel Pearson 
> wrote:
>
>> Have you seen the AWS reference architecture?
>> https://access.redhat.com/documentation/en-us/reference_architectures/2018/html/deploying_and_managing_openshift_3.9_on_amazon_web_services/index#
>> On Tue, 2 Oct 2018 at 3:11 am, Peter Heitman  wrote:
>>
>>> I've created a CloudFormation Stack for simple lab-test deployments of
>>> OpenShift Origin on AWS. Now I'd like to understand what would be best for
>>> production deployments of OpenShift Origin on AWS. In particular I'd like
>>> to create the corresponding CloudFormation Stack.
>>>
>>> I've seen the Install Guide page on Configuring for AWS and I've looked
>>> through the RedHat QuickStart Guide for OpenShift Enterprise but am still
>>> missing information. For example, the RedHat QuickStart Guide creates 3
>>> masters, 3 etcd servers and some number of compute nodes. Where are the
>>> routers (infra nodes) located? On the masters or on the etcd servers? How
>>> are the ELBs configured to work with those deployed routers? What if some
>>> of the traffic you are routing is not http/https? What is required to
>>> support that?
>>>
>>> I've seen the simple CloudFormation stack (
>>> https://sysdig.com/blog/deploy-openshift-aws/) but haven't found
>>> anything comparable that is closer to production ready (and that
>>> likely takes advantage of the AWS VPC QuickStart:
>>> https://aws.amazon.com/quickstart/architecture/vpc/).
>>>
>>> Does anyone have any prior work that they could share or point me to?
>>>
>>> Thanks in advance,
>>>
>>> Peter Heitman
>>>