Re: Ansible-openshift - Libvirt cluster create fails

2016-05-31 Thread Jason DeTiberus
On Tue, May 31, 2016 at 10:35 PM, Daniel Dumitriu <dan...@dumdan.com> wrote:

>
> Not sure this is the right forum for my question, but I could not find
> a more appropriate one...
> I am trying to work with "openshift-ansible".
>
> Most examples I found are busy talking about the "established" cloud
> providers - so, not much help, there...
> However, I find the most convenient "provider", by far, to be
> "libvirt". Especially for testing and development - since it comes as a
> default package group in most distributions.
>
> So, I have been trying - for a few days, now - to create a libvirt
> cluster but all my attempts have been unsuccessful !
>
> In the debugging process, I found some hard-coded variables in the
> playbooks (would those qualify as errors?), but I cannot find a way to
> go past one annoying error:
>
> In the "task-book" "roles/openshift_repos/tasks/main.yaml":
> -
> fatal: [danield-master-4206c]: FAILED! => {"failed": true, "msg": "The
> conditional check 'not openshift.common.is_containerized | bool' failed
>
> The error was: error while evaluating conditional (not
> openshift.common.is_containerized | bool):
> 'openshift' is undefined
>
> The error appears to have been in '/home/daniel/ansible-ws/openshift
> -ansible/roles/openshift_repos/tasks/main.yaml' at line 10  (assert)
> -
>
> I understand that the variable "openshift" is set by the
> "openshift_facts" module, defined in "roles/openshift_facts/library".
> But I, also, see the task that sets the "openshift" variables being
> SKIPPED, and do not understand why !
>
> (By the way, the VMs are being built and started just fine)
>
> Could anyone help me? I would, really, appreciate it !
>

This should be fixed in the current master branch. We reverted a change
yesterday that was causing issues similar to this.

--
Jason DeTiberus


Re: Ansible-openshift - Libvirt cluster create fails

2016-06-01 Thread Jason DeTiberus
I created a PR with a fix that worked in my environment, could you see if
it fixes your issue as well?

https://github.com/openshift/openshift-ansible/pull/1969

Thanks,
--
Jason DeTiberus

On Wed, Jun 1, 2016 at 9:50 AM, Jason DeTiberus <jdeti...@redhat.com> wrote:

>
>
> On Wed, Jun 1, 2016 at 1:21 AM, Daniel Dumitriu <dan...@dumdan.com> wrote:
>
>> Sorry, but it's not fixed...
>>
>> (I can come up with more details)
>>
>> In particular: this seems to happen because some tasks are skipped:
>> ...
>> TASK [openshift_facts : set_fact]
>> **
>> task path: /home/daniel/ansible-ws/openshift
>> -ansible/roles/openshift_facts/tasks/main.yml:15
>> skipping: [danield-master-e2398] => {"changed": false, "skip_reason":
>> "Conditional check failed", "skipped": true}
>> ...
>>
>> And the final (fatal) error:
>>
>> ..
>> TASK [openshift_repos : assert]
>> 
>> task path: /home/daniel/ansible-ws/openshift
>> -ansible/roles/openshift_repos/tasks/main.yaml:10
>> fatal: [danield-master-e2398]: FAILED! => {"failed": true, "msg": "The
>> conditional check 'not openshift.common.is_containerized | bool'
>> failed. The error was: error while evaluating conditional (not
>> openshift.common.is_containerized | bool): 'openshift' is
>> undefined\n\nThe error appears to have been in '/home/daniel/ansible
>> -ws/openshift-ansible/roles/openshift_repos/tasks/main.yaml': line 10,
>> column 3, but may\nbe elsewhere in the file depending on the exact
>> syntax problem.\n\nThe offending line appears to be:\n\n\n- assert:\n
>>  ^ here\n"}
>>
>
> Thanks for the additional info. I'll attempt to replicate it today to see
> if I can track down the issue.
>
>
>
>> __
>>
>> Daniel Dumitriu
>>
>>
>> On Tue, 2016-05-31 at 23:04 -0400, Jason DeTiberus wrote:
>> >
>> >
>> > On Tue, May 31, 2016 at 10:35 PM, Daniel Dumitriu <dan...@dumdan.com>
>> > wrote:
>> > > Not sure this is the right forum for my question, but I could not
>> > > find
>> > > a more appropriate one...
>> > > I an trying to work with "openshift-ansible".
>> > >
>> > > Most examples I found are busy talking about the "established"
>> > > cloud
>> > > providers - so, not much help, there...
>> > > However, I find the most convenient "provider", by far, to be
>> > > "libvirt". Especially for testing and development - since it comes
>> > > as a
>> > > default package group in most distributions.
>> > >
>> > > So, I have been trying - for a few days, now - to create a libvirt
>> > > cluster but all my attempts have been unsuccessful !
>> > >
>> > > In the debugging process, I found some hard-coded variables in the
>> > > playbooks (would those qualify as errors?), but I cannot find a way
>> > > to
>> > > go past one annoying error:
>> > >
>> > > In the "task-book" "roles/openshift_repos/tasks/main.yaml":
>> > > -
>> > > fatal: [danield-master-4206c]: FAILED! => {"failed": true, "msg":
>> > > "The
>> > > conditional check 'not openshift.common.is_containerized | bool'
>> > > failed
>> > >
>> > > The error was: error while evaluating conditional (not
>> > > openshift.common.is_containerized | bool):
>> > > 'openshift' is undefined
>> > >
>> > > The error appears to have been in '/home/daniel/ansible
>> > > -ws/openshift
>> > > -ansible/roles/openshift_repos/tasks/main.yaml' at line 10
>> > > (assert)
>> > > -
>> > >
>> > > I understand that the variable "openshift" is set by the
>> > > "openshift_facts" module, defined in
>> > > "roles/openshift_facts/library".
>> > > But I, also, see the task that sets the "openshift" variables being
>> > > SKIPPED, and do not understand why !
>> > >
>> > > (By the way, the VMs are being built and started just fine)
>> > >
>> > > Could anyone help me? I would, really, appreciate it !
>> > This should be fixed in the current master branch. We reverted a
>> > change yesterday that was causing issues similar to this.
>> >
>> > --
>> > Jason DeTiberus
>> >
>>
>
>
>
> --
> Jason DeTiberus
>



-- 
Jason DeTiberus


Re: weird issue with etcd

2016-06-21 Thread Jason DeTiberus
Did you verify connectivity over the peering port as well (2380)?
On Jun 21, 2016 7:17 AM, "Julio Saura"  wrote:

> hello
>
> same problem
>
> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
> F0621 13:11:03.155246   59618 auth.go:141] error #0: dial tcp :2379:
> connection refused ( the one i rebooted )
> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
> error #1: client: etcd member https://:2379 has no leader
>
> i rebooted the etcd server and my master is not able to use other one
>
> still able to connect from both masters using telnet to the etcd port ..
>
> any clue? this is weird.
>
>
> > El 14 jun 2016, a las 9:28, Julio Saura  escribió:
> >
> > hello
> >
> > yes is correct .. it was the first thing i checked ..
> >
> > first master
> >
> > etcdClientInfo:
> > ca: master.etcd-ca.crt
> > certFile: master.etcd-client.crt
> > keyFile: master.etcd-client.key
> > urls:
> >   - https://openshift-balancer01:2379
> >   - https://openshift-balancer02:2379
> >
> >
> > second master
> >
> > etcdClientInfo:
> > ca: master.etcd-ca.crt
> > certFile: master.etcd-client.crt
> > keyFile: master.etcd-client.key
> > urls:
> >   - https://openshift-balancer01:2379
> >   - https://openshift-balancer02:2379
> >
> > dns names resolve in both masters
> >
> > Best regards and thanks!
> >
> >
> >> El 13 jun 2016, a las 18:45, Scott Dodson 
> escribió:
> >>
> >> Can you verify the connection information etcdClientInfo section in
> >> /etc/origin/master/master-config.yaml is correct?
> >>
> >> On Mon, Jun 13, 2016 at 11:56 AM, Julio Saura 
> wrote:
> >>> hello
> >>>
> >>> yes.. i have a external balancer in front of my masters for HA as doc
> says.
> >>>
> >>> i don’t have any balancer in front of my etcd servers for masters
> connection, it’s not necessary right? masters will try all etcd availables
> it one is down right?
> >>>
> >>> i don’t know why but none of my masters were able to connect to the
> second etcd instance, but using telnet from their shell worked .. so it was
> not a net o fw issue..
> >>>
> >>>
> >>> best regards.
> >>>
>  El 13 jun 2016, a las 17:53, Clayton Coleman 
> escribió:
> 
>  I have not seen that particular issue.  Do you have a load balancer in
>  between your masters and etcd?
> 
>  On Fri, Jun 10, 2016 at 5:55 AM, Julio Saura 
> wrote:
> > hello
> >
> > i have an origin 3.1 installation working cool so far
> >
> > today one of my etcd nodes ( 1 of 2 ) crashed and i started having
> problems..
> >
> > i noticed on one of my master nodes that it was not able to connect
> to second etcd server and that the etcd server was not able to promote as
> leader..
> >
> >
> > un 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4 is
> starting a new election at term 10048
> > jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4
> became candidate at term 10049
> > jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4
> received vote from 12c8a31c8fcae0d4 at term 10049
> > jun 10 11:09:55 openshift-balancer02 etcd[47218]: 12c8a31c8fcae0d4
> [logterm: 8, index: 4600461] sent vote request to bf80ee3a26e8772c at term
> 10049
> > jun 10 11:09:56 openshift-balancer02 etcd[47218]: got unexpected
> response error (etcdserver: request timed out)
> >
> > my masters logged that they were not able to connect to the etcd
> >
> > er.go:218] unexpected ListAndWatch error: pkg/storage/cacher.go:161:
> Failed to list *extensions.Job: error #0: dial tcp X.X.X.X:2379: connection
> refused
> >
> > so i tried a simple test, just telnet from masters to the etcd node
> port ..
> >
> > [root@openshift-master01 log]# telnet X.X.X.X 2379
> > Trying X.X.X.X...
> > Connected to X.X.X.X.
> > Escape character is '^]’
> >
> > so i was able to connect from masters.
> >
> > i was not able to recover my oc masters until the first etcd node
> rebooted .. so it seems my etcd “cluster” is not working without the first
> node ..
> >
> > any clue?
> >
> > thanks
> >
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >>>
> >>>
> >>> ___
> >>> users mailing list
> >>> users@lists.openshift.redhat.com
> >>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> 

Re: weird issue with etcd

2016-06-21 Thread Jason DeTiberus
On Tue, Jun 21, 2016 at 7:28 AM, Julio Saura <jsa...@hiberus.com> wrote:
> yes
>
> working
>
> [root@openshift-master01 ~]# telnet X 2380
> Trying ...
> Connected to .
> Escape character is '^]'.
> ^CConnection closed by foreign host.


Have you verified that time is syncd between the hosts? I'd also check
the peer certs between the hosts... Can you connect to the hosts using
etcdctl? There should be a status command that will give you more
information.
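
For example, something along these lines, run from one of the masters (the
endpoints and cert paths are assumptions based on your etcdClientInfo --
adjust to your environment; with etcdctl 2.x the closest to a "status"
command are 'cluster-health' and 'member list'):

  etcdctl --ca-file /etc/origin/master/master.etcd-ca.crt \
    --cert-file /etc/origin/master/master.etcd-client.crt \
    --key-file /etc/origin/master/master.etcd-client.key \
    --endpoints https://openshift-balancer01:2379,https://openshift-balancer02:2379 \
    cluster-health
  # repeat with 'member list' in place of 'cluster-health' to see the members
  # and which one is the current leader

  # for the time check, something like 'chronyc tracking' or 'ntpstat' on each host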

>
>
> El 21 jun 2016, a las 13:21, Jason DeTiberus <jdeti...@redhat.com> escribió:
>
> Did you verify connectivity over the peering port as well (2380)?
>
> On Jun 21, 2016 7:17 AM, "Julio Saura" <jsa...@hiberus.com> wrote:
>>
>> hello
>>
>> same problem
>>
>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>> F0621 13:11:03.155246   59618 auth.go:141] error #0: dial tcp :2379:
>> connection refused ( the one i rebooted )
>> jun 21 13:11:03 openshift-master01 atomic-openshift-master-api[59618]:
>> error #1: client: etcd member https://:2379 has no leader
>>
>> i rebooted the etcd server and my master is not able to use other one
>>
>> still able to connect from both masters using telnet to the etcd port ..
>>
>> any clue? this is weird.
>>
>>
>> > El 14 jun 2016, a las 9:28, Julio Saura <jsa...@hiberus.com> escribió:
>> >
>> > hello
>> >
>> > yes is correct .. it was the first thing i checked ..
>> >
>> > first master
>> >
>> > etcdClientInfo:
>> > ca: master.etcd-ca.crt
>> > certFile: master.etcd-client.crt
>> > keyFile: master.etcd-client.key
>> > urls:
>> >   - https://openshift-balancer01:2379
>> >   - https://openshift-balancer02:2379
>> >
>> >
>> > second master
>> >
>> > etcdClientInfo:
>> > ca: master.etcd-ca.crt
>> > certFile: master.etcd-client.crt
>> > keyFile: master.etcd-client.key
>> > urls:
>> >   - https://openshift-balancer01:2379
>> >   - https://openshift-balancer02:2379
>> >
>> > dns names resolve in both masters
>> >
>> > Best regards and thanks!
>> >
>> >
>> >> El 13 jun 2016, a las 18:45, Scott Dodson <sdod...@redhat.com>
>> >> escribió:
>> >>
>> >> Can you verify the connection information etcdClientInfo section in
>> >> /etc/origin/master/master-config.yaml is correct?
>> >>
>> >> On Mon, Jun 13, 2016 at 11:56 AM, Julio Saura <jsa...@hiberus.com>
>> >> wrote:
>> >>> hello
>> >>>
>> >>> yes.. i have a external balancer in front of my masters for HA as doc
>> >>> says.
>> >>>
>> >>> i don’t have any balancer in front of my etcd servers for masters
>> >>> connection, it’s not necessary right? masters will try all etcd 
>> >>> availables
>> >>> it one is down right?
>> >>>
>> >>> i don’t know why but none of my masters were able to connect to the
>> >>> second etcd instance, but using telnet from their shell worked .. so it 
>> >>> was
>> >>> not a net o fw issue..
>> >>>
>> >>>
>> >>> best regards.
>> >>>
>> >>>> El 13 jun 2016, a las 17:53, Clayton Coleman <ccole...@redhat.com>
>> >>>> escribió:
>> >>>> I have not seen that particular issue.  Do you have a load balancer
>> >>>> in
>> >>>> between your masters and etcd?
>> >>>>
>> >>>> On Fri, Jun 10, 2016 at 5:55 AM, Julio Saura <jsa...@hiberus.com>
>> >>>> wrote:
>> >>>>> hello
>> >>>>>
>> >>>>> i have an origin 3.1 installation working cool so far
>> >>>>>
>> >>>>> today one of my etcd nodes ( 1 of 2 ) crashed and i started having
>> >>>>> problems..
>> >>>>>
>> >>>>> i noticed on one of my master nodes that it was not able to connect
>> >>>>> to second etcd server and that the etcd server was not able to promote 
>> >>>>> as
>> >>>>> leader..
>> >>>>>
>> >>>>>
>> >>>>> un 10 11:09:55 openshift-balancer02 etcd[472

Re: Web Console default passwor

2016-06-23 Thread Jason DeTiberus
You can also set htpasswd users with the variables here:
https://github.com/openshift/openshift-ansible/blob/9193a58d129716601091b2f3ceb7ca3960a694cb/inventory/byo/hosts.origin.example#L91
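
For example, roughly like this in the [OSEv3:vars] section (the user names are
placeholders and the password values must be pre-hashed htpasswd entries --
see the linked example file for the exact format):

  openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
  openshift_master_htpasswd_users={'admin': '<hashed password>', 'developer': '<hashed password>'}

  # a suitable hash can be generated with:
  htpasswd -nb admin changeme
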
On Jun 23, 2016 10:44 AM, "Olaf Radicke"  wrote:

> Yes, thank you Den. All is fine now. A restart of the master is not needed.
>
> Olaf
>
> On 06/23/2016 11:19 AM, Den Cowboy wrote:
>
>> You have to go inside your folder and create a user:
>> htpasswd htpasswd admin
>> prompt for password: 
>>
>> User is created (don't really know if you have to restart your master).
>> To make your user cluster-admin
>>
>> $ oc login -u system:admin (authenticates with admin.kubeconfig)
>> $ oadm policy add-cluster-role-to-user cluster-admin admin (if admin is
>> your user)
>>
>>
>> To: users@lists.openshift.redhat.com
>>> From: o.radi...@meteocontrol.de
>>> Subject: Web Console default passwor
>>> Date: Thu, 23 Jun 2016 10:06:40 +0200
>>>
>>> Hi,
>>>
>>> i've a second basic question: I can't find a default password in online
>>> documentation, for the first log in on the Web Console.
>>>
>>> I enter this in my playbook:
>>>
>>>
>>>  snip 
>>> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
>>> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
>>> 'filename': '/etc/origin/master/htpasswd'}]
>>>  snap -
>>>
>>> But the /etc/origin/master/htpasswd file is empty. Do I have to create
>>> the first entry myself? With...
>>>
>>>  snip 
>>> htpasswd /etc/origin/master/htpasswd admin
>>>  snap 
>>>
>>> ..Is this right?
>>>
>>> Thank you,
>>>
>>> Olaf Radicke
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: OpenShift Ansible install - multiple masters w/ "native" HAproxy -- location of "/etc/origin/htpasswd" ? "etcd" alternative ?

2016-02-10 Thread Jason DeTiberus
On Wed, Feb 10, 2016 at 4:12 PM, Florian Daniel Otel <florian.o...@gmail.com
> wrote:

> Hi all,
>
> Have a rather dummy question (or at least it feels like that :))
>
> In case of the OSE setup with multiple masters using "native" HA (i.e.
> HAproxy) (as detailed here
> <https://docs.openshift.com/enterprise/3.1/install_config/install/advanced_install.html#multiple-masters>)
> when "openshift_master_identity_providers"  is set to "htpasswd_auth" , do
> I need to manually keep in sync the "/etc/origin/htpasswd" between master
> nodes whenever users are added / removed ?
>

Yes.
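
For example, one way to push the file out after each change (a sketch that
assumes a [masters] group in your Ansible inventory and uses the path from
your question):

  ansible masters -i /etc/ansible/hosts -m copy \
    -a "src=/etc/origin/htpasswd dest=/etc/origin/htpasswd owner=root group=root mode=0600"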


> Is there any alternative that uses the backend "etcd" for that ?  (can't
> find that option after a quick browse).
>

No, all of the other solutions would require an external source of
authentication (LDAP, OpenID Connect, GitHub, Google). One could configure
an etcd-based auth service and use Basic Auth or Remote Header Auth, though.


>
> Thanks,
>
> /Florian
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus


Re: Adding a node to the cluster without ansible

2016-02-04 Thread Jason DeTiberus
>
> I would like to add an additional node to the cluster without using
> ansible.
> (We have modified our cluster in many ways and don't dare running ansible
> because it might break our cluster.)


The scale-up playbooks take this into account.

They will query the master, generate and distribute the new certificates
for the new nodes, and then run the config playbooks on the new nodes only.

To take advantage of this,  you will need to add a group to your inventory
called [new_nodes] and configure the hosts as you would for a new install
under the [nodes] group.
Then you would run the playbooks/byo/openshift-cluster/scaleup.yml playbook.
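
For example (the hostname and labels below are placeholders):

  # inventory additions; new_nodes also needs to be listed under
  # [OSEv3:children] alongside masters and nodes
  [new_nodes]
  newnode.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

  # then run the scaleup playbook against the same inventory
  ansible-playbook -i /etc/ansible/hosts playbooks/byo/openshift-cluster/scaleup.yml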


On Thu, Feb 4, 2016 at 9:55 AM, v  wrote:

> All right, looks like it works. These are the commands for the master with
> 3.1:
>
> oadm create-api-client-config \
>   --certificate-authority=/etc/origin/master/ca.crt \
>   --client-dir=/root/xyz4 \
>   --master=https://xyz1.eu:8443  \
>   --signer-cert=/etc/origin/master/ca.crt \
>   --signer-key=/etc/origin/master/ca.key \
>   --signer-serial=/etc/origin/master/ca.serial.txt \
> --groups=system:nodes \
> --user=system:node:xyz4.eu
>
> oadm create-node-config \
> --node-dir=/root/xyz4 \
> --node=xyz.eu \
> --hostnames=xyz4.eu,123.456.0.5 \
> --certificate-authority /etc/origin/master/ca.crt \
> --signer-cert /etc/origin/master/ca.crt \
> --signer-key /etc/origin/master/ca.key \
> --signer-serial /etc/origin/master/ca.serial.txt \
> --master=https://xyz1.eu:8443  \
> --node-client-certificate-authority /etc/origin/master/ca.crt
>
>
> Then I copied all the created files to /etc/origin/node on the new node.
> Took node-config.yaml from an old, working node, edited the hostnames and
> used it as node-config.yaml on the new node.
>
> It seems to work. The only thing that bugs me is that I'm being spammed
> with the following error on the new node:
> manager.go:313] NetworkPlugin redhat/openshift-ovs-subnet failed on the
> status hook for pod 'xy-router-2-imubn' - exit status 1
> manager.go:313] NetworkPlugin redhat/openshift-ovs-subnet failed on the
> status hook for pod 'ipf-default-1-dp4vc' - exit status 1
>

Do you mind submitting a PR or an issue to the openshift-docs repo for
these steps? https://github.com/openshift/openshift-docs


>
> Can anyone tell me if this is something important or whether there are
> additional steps needed that I have missed?
>

It sounds like you are missing the -sdn-ovs package on the new node host.
If you are running Origin, then it would be origin-sdn-ovs, otherwise it is
atomic-enterprise-sdn-ovs.
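
For example, on the new node for an Origin install (package/service names are
the Origin ones; substitute the atomic-enterprise equivalents otherwise):

  yum install -y origin-sdn-ovs
  systemctl restart origin-node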


>
> Regards,
> v
>


Re: Adding a node to the cluster without ansible

2016-02-04 Thread Jason DeTiberus
On Thu, Feb 4, 2016 at 1:31 PM, Srinivas Naga Kotaru (skotaru) <
skot...@cisco.com> wrote:

> Thanks Jason for explanation
>
> It answer few my questions. Today only am aware that we have to run the
> scaleup.yaml and create a new node group.
>
> I heard some complaints in the community as well as my internal team about
> overwriting config changes when ran the ansible playbook. Am sure we might
> be using existing node group while adding new nodes.
>

This was the case a while back (3.0 timeframe), and I believe we do not
officially support the scaleup playbook for 3.0 deployments either.

I do see that we are missing the documentation for using the playbooks to
add a node for 3.1, so I'll go ahead and work on getting those added.


>
> Please take a look at existing documentation and modify with this new data
> if not already there.
>
> --
> *Srinivas Kotaru*
>
> From: Jason DeTiberus <jdeti...@redhat.com>
> Date: Thursday, February 4, 2016 at 10:24 AM
> To: skotaru <skot...@cisco.com>
> Cc: v <vekt...@gmx.net>, "users@lists.openshift.redhat.com" <
> users@lists.openshift.redhat.com>
> Subject: Re: Adding a node to the cluster without ansible
>
>
>
> On Thu, Feb 4, 2016 at 1:04 PM, Srinivas Naga Kotaru (skotaru) <
> skot...@cisco.com> wrote:
>
>> Will ansible will touch existing configuration  and by any chance it will
>>  overwrite custom config put into ?
>>
>
> If running the full configuration playbooks, yes Ansible will overwrite
> custom configuration of the following files (at least):
> - /etc/origin/master/master-config.yaml
> - /etc/origin/master/scheduler.json
> - /etc/origin/node/node-config.yaml
> - /etc/sysconfig/{origin,atomic-enterprise}-*
> - systemd unit files for ha master services
> - /etc/sysconfig/docker
> - ...
>
> The goals of the configuration playbooks are to be able to continually
> manage a system in addition to installation.
>
> If running the upgrade playbooks, we limit the changes made to the
> configuration files to the limit subset of configuration that we need to
> update. We use a custom ansible module to read in the YAML files, process
> the limited changes and write the file back out.
>
>
> If you are running the scaleup.yml playbook, the only tasks done on the
> master(s) (other than gather facts from them), is to generate the new
> certificates/kubeconfigs for the new nodes. The playbook then goes on to
> configure the new nodes only (leaving the existing nodes untouched). This
> does require defining the new nodes in a [new_nodes] group instead of just
> adding them onto the [nodes] group.
>
>
>> Just adding a new node, steps required looks scare me ( both ansible and
>> manual). Can we do better job here by automating this task and guaranteed
>> no disruption to existing cluster health?
>>
>
> The scaleup.yml playbook already does this. If your environment was
> installed with the openshift-ansible-installer, then you can also use that
> tool for configuring the new nodes as well. Eventually the installer tool
> will be able to work against a previously installed cluster, but we still
> have a bit of work to make that happen.
>
>
>>
>> My worry about real prod environments and always uptime guaranteed with
>> SLA’s.
>>
>> --
>> *Srinivas Kotaru*
>>
>>
>>
>
>
> --
> Jason DeTiberus
>



-- 
Jason DeTiberus


Re: Issues with the built-in registry

2016-01-29 Thread Jason DeTiberus
On Jan 29, 2016 8:05 AM, "Florian Daniel Otel" <florian.o...@gmail.com>
wrote:
>
> I should have mentioned that in my original email, but that's exactly the
steps I followed.

My apologies, I missed the auth parts you mentioned on the first read-through.

Just to verify, did you grant reguser admin rights on the openshift
project?
oadm policy add-role-to-user admin reguser -n openshift

As for not seeing any subdirectories under /registry, I believe that is to
be expected until a Docker push has been done (either by a builder pod or
by a manual push).

>
> IOW:  In addition to the stuff below (and prior to all that) I have done,
as "system:admin" , for user "reguser"
>
> oadm policy add-role-to-user system:registry reguser
> oadm policy add-role-to-user  system:image-builder reguser
>
> Again, following the instructions in the docs all works fine, until I try
a "docker push"
>
> The only thing that doesn't seem quite right is that listing the content
of the Docker registry only lists the top directory "/registry", but
nothing underneath it:
>
> [root@osev31-node1 src]# docker ps
> CONTAINER ID   IMAGE                                                                 COMMAND                  CREATED       STATUS       PORTS   NAMES
> ea83db288da1   registry.access.redhat.com/openshift3/ose-docker-registry:v3.1.1.6   "/bin/sh -c 'DOCKER_R"   2 hours ago   Up 2 hours           k8s_registry.f0018725_docker-registry-1-1sfvt_default_691370c8-c673-11e5-bc1c-4201ac10fe14_dd13c8d0
> f383ae8db39f   openshift3/ose-pod:latest                                             "/pod"                   2 hours ago   Up 2 hours           k8s_POD.f419fdd1_docker-registry-1-1sfvt_default_691370c8-c673-11e5-bc1c-4201ac10fe14_d21e1b8c
>
>
>
>  () Nothing listed under "/registry" ??
>
>
> [root@osev31-node1 src]# docker exec -it ea83db288da1 find /registry
> /registry
> [root@osev31-node1 src]#
>
>
>
> On Fri, Jan 29, 2016 at 1:03 PM, Jason DeTiberus <jdeti...@redhat.com>
wrote:
>>
>>
>> On Jan 29, 2016 6:07 AM, "Florian Daniel Otel" <florian.o...@gmail.com>
wrote:
>> >
>> > Hello all,
>> >
>> > I'm pretty sure it's mostly related to my ignorance, but for some
reason I'm not able to push to the built-in docker registry after deploying
it.
>> >
>> >
> Deployment:
>> >
>> > oadm registry --service-account=registry
--config=/etc/origin/master/admin.kubeconfig
--credentials=/etc/origin/master/openshift-registry.kubeconfig
--images='registry.access.redhat.com/openshift3/ose-${component}:${version}'
--mount-host=/opt/ose-registr
>> >
>> > ### Everything looks ok
>> >
>> > oc describe service docker-registry
>> > Name:   docker-registry
>> > Namespace:  default
>> > Labels: docker-registry=default
>> > Selector:   docker-registry=default
>> > Type:   ClusterIP
>> > IP: 172.30.38.99
>> > Port:   5000-tcp5000/TCP
>> > Endpoints:  10.1.0.138:5000
>> > Session Affinity:   ClientIP
>> > No events.
>> >
>> >
>> >  Adding the right roles to "reguser"
>> >
>> > oadm policy add-role-to-user system:registry reguser
>> >
>> >  Logging in as "reguser" into the registry:
>> >
>> > [root@osev31-node1 src]# oc whoami
>> > reguser
>> >
>> > [root@osev31-node1 src]# oc whoami -t
>> > GY_q37YZqjor7rIVPkm4ReBhEX0yV4XQqyWIOzf6ANs
>> >
>> > [root@osev31-node1 src]#  docker login -u reguser -e n...@nospam.org
-p GY_q37YZqjor7rIVPkm4ReBhEX0yV4XQqyWIOzf6ANs 172.30.38.99:5000
>> > WARNING: login credentials saved in /root/.docker/config.json
>> > Login Succeeded
>> >
>> >  Pulling "busybox" & tagging it:
>> >
>> > [root@osev31-node1 src]# docker pull docker.io/busyb

Re: Issues with the built-in registry

2016-01-29 Thread Jason DeTiberus
On Jan 29, 2016 8:43 AM, "Florian Daniel Otel" <florian.o...@gmail.com>
wrote:
>
>
> No worries ;) -- in part since it's my turn to apologise, since I missed
adding the "admin" role to the "openshift" project.
>
> Done that now, and now I get a HTTP 500:
>
> [root@osev31-node1 src]#  docker push  172.30.38.99:5000/openshift/busybox
> The push refers to a repository [172.30.38.99:5000/openshift/busybox]
(len: 1)
> 964092b7f3e5: Preparing
> Received unexpected HTTP status: 500 Internal Server Error
> [root@osev31-node1 src]#
>
> Attached are the "oc logs" for the docker registry pods.
>
> The weird thing there (at least to me) is:
>
> level=error msg="response completed with error" err.code=UNKNOWN
err.detail="filesystem: mkdir /registry/docker: permission denied"
>
> Can this have something to do with the way I deployed the registry (with
the "--mount-host=/opt/ose-registry") -- see below? That directory exists,
but is empty

It sounds like a permissions issue on /opt/ose-registry. Unfortunately I do
not know what the permissions and/or the SELinux context should be.
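
For what it's worth, a common starting point for host-mounted docker volumes
looks like the following -- the UID (registry images typically run as a
non-root user such as 1001) and the SELinux type are assumptions to verify
against your image and policy:

  chown -R 1001:root /opt/ose-registry
  chcon -Rt svirt_sandbox_file_t /opt/ose-registry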

>
> Thanks,
>
> Florian
>
> On Fri, Jan 29, 2016 at 2:30 PM, Jason DeTiberus <jdeti...@redhat.com>
wrote:
>>
>>
>> On Jan 29, 2016 8:05 AM, "Florian Daniel Otel" <florian.o...@gmail.com>
wrote:
>> >
>> > I should have mentioned that in my original email, but that's exactly
the steps I followed.
>>
>> My apologies, missed the auth parts mentioned the first read through.
>>
>> Just to verify, did you grant reguser admin rights on the openshift
project?
>> oadm policy add-role-to-user admin  -n openshift
>>
>> As for not seeing any subdirectories under /registry, I believe that is
to be expected until a Docker push has been done (either by a builder pod
or by a manual push).
>>
>> >
>> > IOW:  In addition to the stuff below (and prior to all that) I have
done, as "system:admin" , for user "reguser"
>> >
>> > oadm policy add-role-to-user system:registry reguser
>> > oadm policy add-role-to-user  system:image-builder reguser
>> >
>> > Again, following the instructions in the docs all works fine, until I
try a "docker push"
>> >
>> > The only thing that doesn't seem quite right is that listing the
content of the Docker registry only lists the top directory "/registry",
but nothing underneath it:
>> >
>> > [root@osev31-node1 src]# docker ps
>> > CONTAINER ID   IMAGE                                                                 COMMAND                  CREATED       STATUS       PORTS   NAMES
>> > ea83db288da1   registry.access.redhat.com/openshift3/ose-docker-registry:v3.1.1.6   "/bin/sh -c 'DOCKER_R"   2 hours ago   Up 2 hours           k8s_registry.f0018725_docker-registry-1-1sfvt_default_691370c8-c673-11e5-bc1c-4201ac10fe14_dd13c8d0
>> > f383ae8db39f   openshift3/ose-pod:latest                                             "/pod"                   2 hours ago   Up 2 hours           k8s_POD.f419fdd1_docker-registry-1-1sfvt_default_691370c8-c673-11e5-bc1c-4201ac10fe14_d21e1b8c
>> >
>> >
>> >
>> >  () Nothing listed under "/registry" ??
>> >
>> >
>> > [root@osev31-node1 src]# docker exec -it ea83db288da1 find /registry
>> > /registry
>> > [root@osev31-node1 src]#
>> >
>> >
>> >
>> > On Fri, Jan 29, 2016 at 1:03 PM, Jason DeTiberus <jdeti...@redhat.com>
wrote:
>> >>
>> >>
>> >> On Jan 29, 2016 6:07 AM, "Florian Daniel Otel" <florian.o...@gmail.com>
wrote:
>> >> >
>> >> > Hello all,
>> >> >
>> >> > I'm pretty sure it's mostly related to my ignorance, but for some
reason I'm not able to push to the built-in docker registry after deploying
it.
>> >> &

Re: Issues with the built-in registry

2016-01-29 Thread Jason DeTiberus
On Jan 29, 2016 6:07 AM, "Florian Daniel Otel" 
wrote:
>
> Hello all,
>
> I'm pretty sure it's mostly related to my ignorance, but for some reason
I'm not able to push to the built-in docker registry after deploying it.
>
>
> Deployment:
>
> oadm registry --service-account=registry
--config=/etc/origin/master/admin.kubeconfig
--credentials=/etc/origin/master/openshift-registry.kubeconfig
--images='registry.access.redhat.com/openshift3/ose-${component}:${version}'
--mount-host=/opt/ose-registr
>
> ### Everything looks ok
>
> oc describe service docker-registry
> Name:   docker-registry
> Namespace:  default
> Labels: docker-registry=default
> Selector:   docker-registry=default
> Type:   ClusterIP
> IP: 172.30.38.99
> Port:   5000-tcp5000/TCP
> Endpoints:  10.1.0.138:5000
> Session Affinity:   ClientIP
> No events.
>
>
>  Adding the right roles to "reguser"
>
> oadm policy add-role-to-user system:registry reguser
>
>  Logging in as "reguser" into the registry:
>
> [root@osev31-node1 src]# oc whoami
> reguser
>
> [root@osev31-node1 src]# oc whoami -t
> GY_q37YZqjor7rIVPkm4ReBhEX0yV4XQqyWIOzf6ANs
>
> [root@osev31-node1 src]#  docker login -u reguser -e n...@nospam.org -p
GY_q37YZqjor7rIVPkm4ReBhEX0yV4XQqyWIOzf6ANs 172.30.38.99:5000
> WARNING: login credentials saved in /root/.docker/config.json
> Login Succeeded
>
>  Pulling "busybox" & tagging it:
>
> [root@osev31-node1 src]# docker pull docker.io/busybox
> Using default tag: latest
> Trying to pull repository docker.io/library/busybox ... latest: Pulling
from library/busybox
> 9e77fef7a1c9: Pull complete
> 964092b7f3e5: Pull complete
> library/busybox:latest: The image you are pulling has been verified.
Important: image verification is a tech preview feature and should not be
relied on to provide security.
> Digest:
sha256:c1bc9b4bffe665bf014a305cc6cf3bca0e6effeb69d681d7a208ce741dad58e0
> Status: Downloaded newer image for docker.io/busybox:latest
>
> [root@osev31-node1 src]#  docker tag docker.io/busybox
172.30.38.99:5000/openshift/busybox
>
>
>  Pushing fails due to "authentication required"
>
> [root@osev31-node1 src]#  docker push  172.30.38.99:5000/openshift/busybox
> The push refers to a repository [172.30.38.99:5000/openshift/busybox]
(len: 1)
> 964092b7f3e5: Preparing
> unauthorized: authentication required
>
>
> Any advice on what I'm missing ?

This should be what you are looking for:
https://docs.openshift.com/enterprise/latest/install_config/install/docker_registry.html#access


Re: OpenShift and AWS

2016-02-02 Thread Jason DeTiberus
On Tue, Feb 2, 2016 at 10:32 AM, Lorenz Vanthillo <
lorenz.vanthi...@outlook.com> wrote:

> I was taking a look at
> https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md
>
> I've 2 questions about it.
> - For having ETCD you really need a new EC2-instance? (Does it need
> configuration or is it enough to describe it in the inventory file?
>

The README_AWS.md file is specifically for using the bin/cluster script
(un-official and community supported) for provisioning ec2 instances. To
use external etcd (and subsequently the multi-master capabilities that come
with it), etcd would indeed need its own host(s).

The best supported method of deploying currently is to use the "byo"
playbooks and a user-defined inventory file as documented here:
https://docs.openshift.org/latest/install_config/install/advanced_install.html

- I saw:
>
> <https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md#infra-node-instances>Infra
> node instances:
>
>- export ec2_infra_instance_type='m4.large'
>
>
> <https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md#non-infra-node-instances>Non-infra
> node instances:
>
>- export ec2_node_instance_type='m4.large'
>
>
>
>
> What is the difference between infra nodes and non-infra nodes?
>

Infra nodes are meant to be used for deploying OpenShift cluster
infrastructure services (currently the router and registry; this will also
include the log aggregation and metrics components in the future).

The non-infra nodes are meant for hosting general applications.


>
>
> Thanks
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus


Re: Location of master's logs

2016-02-25 Thread Jason DeTiberus
On Thu, Feb 25, 2016 at 3:41 PM, Dean Peterson <peterson.d...@gmail.com>
wrote:

> If I install openshift origin using the ansible installer, where can I
> find the logs for the running master?
>

If you are running a single master, then you can get the logs with
'journalctl -u origin-master -l'. For HA masters, you would use
'origin-master-api' and 'origin-master-controllers' for the API server and
controllers services, respectively.
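
Concretely (add -f to any of these to follow the log live):

  # single master
  journalctl -u origin-master -l
  # HA masters
  journalctl -u origin-master-api -l
  journalctl -u origin-master-controllers -l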



>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus


Re: Help debug "oc login" returning "401" / certificate issues

2016-02-25 Thread Jason DeTiberus
On Thu, Feb 25, 2016 at 5:03 PM, Florian Daniel Otel <florian.o...@gmail.com
> wrote:

> Hi Jason,
>
> Kindest thanks for trying to help.
>
> In order
>
> 1) Indeed, the "lb" host is configured (via dnsmasq) as a DNS forwarder,
> has the correct "/etc/hosts" (which is propagated to all the other hosts in
> the cluster), and all hosts have an entry pointing to it in the
> "/etc/resolv.conf"
>
> 2) A bit puzzled wrt "system:node" vs "system:anonymous"
>
> I've just test the corresponding curl call on another system where
> everything work as expected (at least so far...)  and the response I get
> back from a GET to " /api/v1/namespaces" still refers to "system:anonymous"
> , and not "system:node"
>
> Also, to make things even more weird, if I copy the node "kubeconfig" in
> the ".kube/config" I am identified accordingly (i.e. as "system:node") when
> doing an "oc whoami"
>

I'm probably missing something about the way the node identifies itself
when using client certificate authentication; I'm seeing the same behavior
on a system I have that is functioning as expected.


>
>
> 3) Thanks for pointing out that specifying "HTTP_PROXY" / "HTTPS_PROXY"
> and resp "NO_PROXY" is not yet possible via the Ansible installer.
>
> My  remaining question is: Is there any way to debug the authentication
> process / why the "oc login" with "httpasswd" back end doesn't work ?
>

You will most likely need to increase the logging level to see
authentication logs for the api service. In
/etc/sysconfig/atomic-openshift-master-api, increasing the loglevel to 4
should provide output around the authentication failure.
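
Something along these lines (a sketch -- keep any other flags already present
in OPTIONS):

  # /etc/sysconfig/atomic-openshift-master-api
  OPTIONS=--loglevel=4

  # then restart the api service and watch the journal while retrying the login
  systemctl restart atomic-openshift-master-api
  journalctl -u atomic-openshift-master-api -f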



>
>
> Thanks again,
>
> /Florian
>
>
>
> On Thu, Feb 25, 2016 at 10:30 PM, Jason DeTiberus <jdeti...@redhat.com>
> wrote:
>
>>
>>
>> On Thu, Feb 25, 2016 at 10:54 AM, Florian Daniel Otel <
>> florian.o...@gmail.com> wrote:
>>
>>>
>>> Hello all,
>>>
>>> I have the following problems:
>>>
>>> I have a multimaster OSE setup consisting of the following:
>>> - A LB with "native" HA
>>> - Three masters (doubling as "etcd" nodes)
>>> - Two nodes
>>>
>>>
>>> All the hosts are themselves OpenStack instances (hence the ".novalocal"
>>> suffix). DNS is via an "/etc/hosts" propagated across, with the "lb" host
>>> doubling as DNS forwarder (via dnsmasq). All Internet access is via an http
>>> / https proxy.
>>>
>>
>> So, if I'm understanding this correctly, then the lb host is correctly
>> resolving the dns for all of the *.novalocal addresses that are in use by
>> the cluster and all of the hosts are pre-configured to use the lb host as
>> the dns resolver prior to running the installation? If not, then there will
>> definitely be issues, since /etc/hosts is not used by deployed containers.
>>
>>
>>>
>>> After many attempts we finally get a setup that is somewhat working (see
>>> P.S. for why "somehow"). Attached is the "/etc/ansible/hosts" file.
>>> Installation is from the main "openshift-ansible" repo (
>>> https://github.com/openshift/openshift-ansible)
>>>
>>> My problem:
>>>
>>> After installation, on one master I created two users in
>>> "/etc/origin/htpasswd". After creation I have propagated the file to all
>>> the other masters. UNIX permissions to the file on all masters are "0600"
>>>
>>> However, doing an "oc login" returns a "401 Unauthorized", and I cannot
>>> find what the issue is, or how to debug it (no trace for it in the
>>> "atomic-openshift-master-api" or "atomic-openshift-master-controllers"
>>> logs).
>>>
>>
>>>
>>> [root@az1node01 ~]# oc login
>>> Authentication required for https://az1lb01.mydomain.novalocal:8443
>>> (openshift)
>>> Username: reguser
>>> Password:
>>> Login failed (401 Unauthorized)
>>> Unauthorized
>>>
>>>
>>> The puzzling thing is that using the "system:node" certificates and keys
>>> work (in the sense I am identified as "system:anonymous"):
>>>
>>
>> Something is definitely not right here, the user for the system:node
>> certs should be identified as the system:node user and not anonymous. I
>>

Re: Docker Registry Versioning

2016-03-15 Thread Jason DeTiberus
On Mar 15, 2016 6:11 PM, "Clayton Coleman"  wrote:
>
> You're trying to pull the OpenShift v3 OSE images, but using the Origin
version numbers.  They are not the same - you'll need to use the OSE tag
values.
>
> On Tue, Mar 15, 2016 at 6:00 PM, Tim Moor  wrote:
>>
>> Hi list,
>>
>> We’re trying to find a way to phase the roll out of the OpenShift
updates through our various environments.
>>
>> Given that there seems to be conflicts when running mixed container
versions, we’d like to pin an environment to, for example, v1.1.3.
>>
>> However, when updating the deployment config for the docker-registry as
follows:
>>
>> - oc describe pod docker-registry-7-p2ue7 | grep image
>> - image: registry.access.redhat.com/openshift3/ose-docker-registry:v1.1.3

If you are running OpenShift Enterprise, this shouldn't be needed; it
should automatically pull the image matching the installed OpenShift
version. The upgrade playbooks will also handle updating the registry and
router images for you.

If you are running Origin, you will probably want to run the router and
registry commands passing in the --images flag to install the version that
coincides with the origin release you are running.
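
For example, for Origin (the image specs and extra flags below are
illustrative -- match the tag to your installed origin version and keep
whatever other options you normally pass):

  oadm registry --service-account=registry \
    --config=/etc/origin/master/admin.kubeconfig \
    --images='openshift/origin-docker-registry:v1.1.3'

  oadm router --service-account=router \
    --config=/etc/origin/master/admin.kubeconfig \
    --images='openshift/origin-haproxy-router:v1.1.3'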

>>
>> We’re getting the following errors:
>>
>> - Back-off pulling image "
registry.access.redhat.com/openshift3/ose-docker-registry:v1.1.3”
>> - Error syncing pod, skipping: failed to "StartContainer" for "registry"
with ImagePullBackOff: "Back-off pulling image \"
registry.access.redhat.com/openshift3/ose-docker-registry:v1.1.3\""
>>
>> What have others done with regards to image versions and rolling out
updates?
>>
>> Thanks
>>
>> Tim Moor
>> m +64 22 100 4707
>> e tim.m...@spring.co.nz
>> w www.spring.co.nz
>> integrated business accelerator
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: etcd failure response: HTTP/0.0 0 status code 0

2016-03-13 Thread Jason DeTiberus
Did you specify any etcd hosts? Does the security group used permit
TCP/2379 from the masters to the etcd hosts?
On Mar 13, 2016 10:57 AM, "Den Cowboy"  wrote:

> I tried to install the Origin Cluster but I got this error when I'm
> running my playbook:
> TASK: [openshift_master | Start and enable master api]
> 
> failed: [52.xx.xx.xx => {"failed": true}
> msg: Job for origin-master-api.service failed because the control process
> exited with error code. See "systemctl status origin-master-api.service"
> and "journalctl -xe" for details.
>
>
> origin-master-api.service - Atomic OpenShift Master API
>Loaded: loaded (/usr/lib/systemd/system/origin-master-api.service;
> enabled; vendor preset: disabled)
>Active: failed (Result: exit-code) since Sun 2016-03-13 14:38:46 UTC;
> 13min ago
>  Docs: https://github.com/openshift/origin
>   Process: 18236 ExecStart=/usr/bin/openshift start master api
> --config=${CONFIG_FILE} $OPTIONS (code=exited, status=2)
>  Main PID: 18236 (code=exited, status=2)
>
> atomic-openshift-master-api[18236]: Content-Length: 0
> atomic-openshift-master-api[18236]: E0313 14:38:45.122824   18236
> etcd.go:128] etcd failure response: HTTP/0.0 0 status code 0
> atomic-openshift-master-api[18236]: Content-Length: 0
> atomic-openshift-master-api[18236]: E0313 14:38:46.123859   18236
> etcd.go:128] etcd failure response: HTTP/0.0 0 status code 0
> atomic-openshift-master-api[18236]: Content-Length: 0
> systemd[1]: origin-master-api.service start operation timed out.
> Terminating.
> systemd[1]: origin-master-api.service: main process exited, code=exited,
> status=2/INVALIDARGUMENT
> systemd[1]: Failed to start Atomic OpenShift Master API.
> systemd[1]: Unit origin-master-api.service entered failed state.
> systemd[1]: origin-master-api.service failed.
>
> What could be the issue? I used
> https://github.com/openshift/openshift-ansible/blob/master/README_AWS.md
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


Re: Simple yum update to version 1.4 and docker 1.9 destroyed system

2016-03-19 Thread Jason DeTiberus
On Mar 18, 2016 8:40 AM, "David Strejc"  wrote:
>
> I've updated my testing system just with yum update (I don't know if this
is recommended approach - this is what I am asking) and after restarting of
origin-nodes and master and also restarting docker master web UI and
kubernetes seemed to work but old docker images won't start and also image
push failed wit i/o error.
>
> Is this my fault somehow? Should I use different approach to upgrade my
systems? Is this caused by migration to docker 1.9.1 and Open Shift 1.1.4
at the same time?

You'll need to follow the upgrade section of the docs to complete the
upgrade:
https://docs.openshift.org/latest/install_config/upgrading/index.html

>
> Thanks for advices!
> David Strejc
> t: +420734270131
> e: david.str...@gmail.com
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: Simple yum update to version 1.4 and docker 1.9 destroyed system

2016-03-19 Thread Jason DeTiberus
On Mar 18, 2016 9:29 AM, "David Strejc"  wrote:
>
> I've removed docker images from my machines and restarted
openshift-master and node processes
>
> On master (which is also node) where is HA-Proxy located I still got:
>
> openshift/origin-haproxy-router:v1.1.3 after docker cleanup
> openshift/origin-docker-registry:v1.1.3 after docker cleanup
>
> I suppose I should run some command for redeploying or upgrading to 1.1.4
after upgrade of OS?

These can be updated by using 'oc edit dc <name>'
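
For example (the DC names assume the default 'docker-registry' and 'router'
deployments in the default project):

  oc edit dc docker-registry -n default
  oc edit dc router -n default
  # in each, bump the container image tag, e.g.
  # openshift/origin-haproxy-router:v1.1.3 -> openshift/origin-haproxy-router:v1.1.4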

>
> but pods are
>
> openshift/origin-pod:v1.1.4 on master and also on nodes.
>
> Now when I've deleted docker images and docker processes and restarted
everything I got:
>
> Error: build error: timeout while waiting for remote repository "
https://github.com/david-strejc/nginx.git"

It sounds like there may be some network issues present.

I would try the following:
systemctl stop origin-node docker openvswitch

systemctl start origin-node

If that doesn't do the trick, I would suggest the network troubleshooting
guide next.

>
> When I try to build from my dockerfile repo.
>
>
> David Strejc
> t: +420734270131
> e: david.str...@gmail.com
>
> On Fri, Mar 18, 2016 at 2:05 PM, David Strejc 
wrote:
>>
>> Image which won't start was my simplest Nginx from this repo:
>>
>> https://github.com/david-strejc/nginx/blob/master/Dockerfile
>>
>> Just openshift/centos7 with nginx and telnet and one html page. But I
suppose this was because of docker upgrade.
>>
>> When I've rebuilded image Open Shift said that it cannot push image due
to i/o timeout error.
>>
>>
>> David Strejc
>> t: +420734270131
>> e: david.str...@gmail.com
>>
>> On Fri, Mar 18, 2016 at 1:59 PM, Clayton Coleman 
wrote:
>>>
>>> Which old docker images won't start, and what error do they have?  What
errors in the registry logs for the push error?
>>>
>>> On Mar 18, 2016, at 8:40 AM, David Strejc 
wrote:
>>>
 I've updated my testing system just with yum update (I don't know if
this is recommended approach - this is what I am asking) and after
restarting of origin-nodes and master and also restarting docker master web
UI and kubernetes seemed to work but old docker images won't start and also
image push failed wit i/o error.

 Is this my fault somehow? Should I use different approach to upgrade
my systems? Is this caused by migration to docker 1.9.1 and Open Shift
1.1.4 at the same time?

 Thanks for advices!
 David Strejc
 t: +420734270131
 e: david.str...@gmail.com

 ___
 users mailing list
 users@lists.openshift.redhat.com
 http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: Iptables changes?

2016-03-23 Thread Jason DeTiberus
On Wed, Mar 23, 2016 at 10:30 AM, Fernando Montenegro <
fsmontene...@gmail.com> wrote:

> Hi,
>
> (OO 1.1 running on CentOS Atomic)
>
> How would I go about introducing my own iptables changes, it at all
> (think: corporate security policy mandating specific controls)? My
> understanding is that origin-node does all the iptables changes to add pods
> and such.
>

origin-node (and docker) will manage iptables rules related to OpenShift
and container access. But these are not persisted and should not interfere
with external management of the firewall rules as long as you are not
frequently flushing the rules and/or removing the jump rule to the chains
they create.

openshift-ansible also manages some rules to allow the ports needed by the
services it installs, and it does this through maintaining a jump rule to a
specific chain for managing these rules.
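
One approach that tends to coexist well with that is to keep the corporate
rules in their own chain, reached via a jump rule, and persist them with
whatever mechanism your hosts provide (a sketch -- the chain name, port, and
source network are examples):

  iptables -N CORP-POLICY
  iptables -I INPUT 1 -j CORP-POLICY
  iptables -A CORP-POLICY -p tcp --dport 10250 -s 10.0.0.0/8 -j ACCEPT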


>
> Thanks!
>
> Fernando
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus


Re: Deploying across clouds?

2016-03-02 Thread Jason DeTiberus
On Mar 2, 2016 1:08 PM, "Mohamed Lrhazi" 
wrote:
>
> Hello,
>
> Any one deploying Origin across data centers, and across clouds?

I'm not sure about Origin, but we have done this with OpenShift Enterprise.

> Maybe a mixture of on prem nodes, and others in AWS for example? Does
Origin have support for that?

Yes, with some caveats.

First, you cannot use the native cloud provider integration (you'll need to
provide all of those services yourself, including persistent volumes).

Second, unless the data centers have very low latency connections (~10ms),
you will not be able to spread the etcd hosts across the data centers.

You will also need to consider how you want to handle application
deployments across the datacenters/clouds. There may be changes that need
to be made to the scheduler config, project selectors, templates, etc to
ensure that applications are being deployed in the manner you intend.
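
For example, one simple approach is to label nodes per site and use project
node selectors (the label key/values and project name are examples):

  oc label node node1.dc-a.example.com site=dc-a
  oc label node node1.aws.example.com site=aws
  oadm new-project myproject --node-selector='site=dc-a'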

>
> Anyone knows of any blog post or other document that discusses such
deployments?

There have been discussions about this in the past on this list, but I
don't have any links handy.

>
> Thank you very much,
> Mohamed.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: Error Starting Origin Node

2016-04-01 Thread Jason DeTiberus
What does your inventory file look like?

How about the output of the journal logs for origin-master?

Is this a cloud deployment (AWS, GCE, OpenStack)? If so, are you
configuring the cloud provider integration?
On Apr 1, 2016 8:18 AM, "Mfawa Alfred Onen"  wrote:

> I wanted to setup a small lab consisting of 1 Master, 1 Node, 1 NFS
> storage Node for the Registry but got the following error during the
> ansible playbook run. I am using the openshift-ansible installer (for
> advanced installation) from https://github.com/openshift/openshift-ansible
>
> *1. Ansible Playbook Error*
>
> TASK: [openshift_node | Start and enable node]
> 
> failed: [master.maomuffy.lab] => {"failed": true}
> msg: Job for origin-node.service failed because the control process exited
> with error code. See "systemctl status origin-node.service" and "journalctl
> -xe" for details.
>
> failed: [node1.maomuffy.lab] => {"failed": true}
> msg: Job for origin-node.service failed because the control process exited
> with error code. See "systemctl status origin-node.service" and "journalctl
> -xe" for details.
>
>
> FATAL: all hosts have already failed -- aborting
>
> PLAY RECAP
> 
>to retry, use: --limit @/root/config.retry
>
> localhost  : ok=22   changed=0unreachable=0failed=0
> master.maomuffy.lab: ok=295  changed=2unreachable=0failed=1
> node1.maomuffy.lab : ok=72   changed=1unreachable=0failed=1
> registry.maomuffy.lab  : ok=35   changed=0unreachable=0failed=0
>
>
> *2. Result of "systemctl status origin-node.service -l"*
>
> origin-node.service - Origin Node
>Loaded: loaded (/usr/lib/systemd/system/origin-node.service; enabled;
> vendor preset: disabled)
>   Drop-In: /usr/lib/systemd/system/origin-node.service.d
>└─openshift-sdn-ovs.conf
>Active: activating (start) since Fri 2016-04-01 15:08:50 WAT; 28s ago
>  Docs: https://github.com/openshift/origin
>  Main PID: 22983 (openshift)
>CGroup: /system.slice/origin-node.service
>└─22983 /usr/bin/openshift start node
> --config=/etc/origin/node/node-config.yaml --loglevel=2
>
> Apr 01 15:09:14 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:14.509989   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
> Apr 01 15:09:15 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:15.022509   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
> Apr 01 15:09:15 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:15.530037   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
> Apr 01 15:09:16 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:16.038613   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
> Apr 01 15:09:16 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:16.546537   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
> Apr 01 15:09:17 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:17.050915   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
> Apr 01 15:09:17 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:17.554703   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
> Apr 01 15:09:18 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:18.059674   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
> Apr 01 15:09:18 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:18.563857   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
> Apr 01 15:09:19 master.maomuffy.lab origin-node[22983]: W0401
> 15:09:19.070060   22983 subnets.go:150] Could not find an allocated subnet
> for node: master.maomuffy.lab, Waiting...
>
> *3. Result of "journalctl -xe"*
>
> Apr 01 15:10:21 master.maomuffy.lab origin-node[23029]: W0401
> 15:10:21.856119   23029 subnets.go:150] Could not find an allocated su
> Apr 01 15:10:22 master.maomuffy.lab origin-node[23029]: W0401
> 15:10:22.358740   23029 subnets.go:150] Could not find an allocated su
> Apr 01 15:10:22 master.maomuffy.lab origin-node[23029]: W0401
> 15:10:22.864887   23029 subnets.go:150] Could not find an allocated su
> Apr 01 15:10:23 master.maomuffy.lab origin-node[23029]: W0401
> 15:10:23.371731   23029 subnets.go:150] Could not find an allocated su
> Apr 01 15:10:23 master.maomuffy.lab origin-node[23029]: W0401
> 15:10:23.879284   23029 subnets.go:150] Could not find an allocated su
> Apr 01 15:10:24 master.maomuffy.lab origin-node[23029]: W0401
> 15:10:24.385482   23029 subnets.go:150] Could not find an allocated su
> Apr 01 

Re: proper format for openshift_master_identity_providers in Ansible inventory?

2016-04-27 Thread Jason DeTiberus
On Wed, Apr 27, 2016 at 4:19 PM, Robert Wehner <robert.weh...@returnpath.com
> wrote:

> I am using the advanced installation method for Origin using the 3.0.84-1
> release of the openshift-ansible repo. I am trying to set up my identity
> providers so the cluster will accept LDAP- and htpasswd-based access using
> an openshift_master_identity_providers setting in my ansible inventory like
> this:
>
> openshift_master_identity_providers=[{ "name": "ldap_provider", "login" :
> true, "challenge" : true, "kind" : "LDAPPasswordIdentityProvider",
> "ldap_server" : "ldap.example.com", "ldap_bind_dn" : "",
> "ldap_bind_password" : "", "ldap_insecure" : true, "ldap_base_ou" :
> "ou=People,dc=example,dc=com", "ldap_preferred_username" : "uid" },
> {'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind':
> 'HTPasswdPasswordIdentityProvider', 'filename':
> '/etc/origin/master/htpasswd'}]
>

openshift_master_identity_providers=[{ 'name': 'ldap_provider', 'login' :
'true', 'challenge' : 'true', 'kind' : 'LDAPPasswordIdentityProvider',
'url' : 'ldap://ldap.example.com:389/ou=People,dc=example,dc=com?uid',
'bind_dn' : '', 'bind_password' : '', 'ldap_insecure' : 'true',
'attributes': {'preferredUsername' : 'uid'}}, {'name': 'htpasswd_auth',
'login': 'true', 'challenge': 'true', 'kind':
'HTPasswdPasswordIdentityProvider', 'filename':
'/etc/origin/master/htpasswd'}]

Because of the way that ansible serializes content to/from the inventory
file format, it is actually a json encoded string. There are also issues
with using boolean values within those json encoded strings when they are
not quoted. I updated your version using all single quotes (which I believe
doesn't really matter for the Ansible json parser, but it is required as
part of the json spec), and also updated the format of the ldap entry
itself.


> I've posted this expanded out and easier to read at
> http://paste.fedoraproject.org/360411/61788028/
>
> This setting always fails with this error:
>
> TASK: [openshift_master | Install httpd-tools if needed]
> **
> fatal: [master01.kubtst1.tst.lan.returnpath.net] => with_items expects a
> list or a set
> FATAL: all hosts have already failed -- aborting
>
> I've added a debug statement right before this to print the
> "openshift.master.identity_providers" variable that ansible is trying to
> iterate over in this task and it basically looks like a string, not a list:
>
> TASK: [openshift_master | debug var=openshift.master.identity_providers]
> **
> ok: [master01.kubtst1.tst.lan.returnpath.net] => {
> "var": {
> "openshift.master.identity_providers": "[{ \"name\":
> \"ldap_provider\", \"login\" : true, \"challenge\" : true, \"kind\" :
> \"LDAPPasswordIdentityProvider\", \"ldap_server\" : \"ldap.example.com\",
> \"ldap_bind_dn\" : \"\", \"ldap_bind_password\" : \"\", \"ldap_insecure\" :
> true, \"ldap_base_ou\" : \"ou=People,dc=example,dc=com\",
> \"ldap_preferred_username\" : \"uid\" }, {'name': 'htpasswd_auth', 'login':
> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
> 'filename': '/etc/origin/master/htpasswd'}]"
> }
> }
>
> Is this an ansible bug or am I formatting this argument incorrectly? I
> based the format on the example here:
> https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-cluster-variables
>
>
> Thanks for any insight,
>
>
>
> --
> Robert Wehner
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: proper format for openshift_master_identity_providers in Ansible inventory?

2016-04-27 Thread Jason DeTiberus
On Wed, Apr 27, 2016 at 4:40 PM, Jason DeTiberus <jdeti...@redhat.com>
wrote:

>
>
> On Wed, Apr 27, 2016 at 4:19 PM, Robert Wehner <
> robert.weh...@returnpath.com> wrote:
>
>> I am using the advanced installation method for Origin using the 3.0.84-1
>> release of the openshift-ansible repo. I am trying to set up my identity
>> providers so the cluster will accept LDAP- and htpasswd-based access using
>> an openshift_master_identity_providers setting in my ansible inventory like
>> this:
>>
>> openshift_master_identity_providers=[{ "name": "ldap_provider", "login" :
>> true, "challenge" : true, "kind" : "LDAPPasswordIdentityProvider",
>> "ldap_server" : "ldap.example.com", "ldap_bind_dn" : "",
>> "ldap_bind_password" : "", "ldap_insecure" : true, "ldap_base_ou" :
>> "ou=People,dc=example,dc=com", "ldap_preferred_username" : "uid" },
>> {'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind':
>> 'HTPasswdPasswordIdentityProvider', 'filename':
>> '/etc/origin/master/htpasswd'}]
>>
>
> openshift_master_identity_providers=[{ 'name': 'ldap_provider', 'login' :
> 'true', 'challenge' : 'true', 'kind' : 'LDAPPasswordIdentityProvider',
> 'url' : 'ldap://ldap.example.com:389/ou=People,dc=example,dc=com?uid',
> 'bind_dn' : '', 'bind_password' : '', 'ldap_insecure' : 'true',
> 'attributes': {'preferredUsername' : 'uid'}}, {'name': 'htpasswd_auth',
> 'login': 'true', 'challenge': 'true', 'kind':
> 'HTPasswdPasswordIdentityProvider', 'filename':
> '/etc/origin/master/htpasswd'}]
>
> Because of the way that ansible serializes content to/from the inventory
> file format, it is actually a json encoded string. There are also issues
> with using boolean values within those json encoded strings when they are
> not quoted. I updated your version using all single quotes (which I believe
> doesn't really matter for the Ansible json parser, but it is required as
> part of the json spec), and also updated some the format of the ldap entry
> itself.
>

It's been pointed out to me that I had this backwards. Double quotes should
be used rather than single quotes to conform with the json spec.
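For reference, a sketch of the same value written with double quotes (structure
and keys identical to the version above; only the quoting is changed, and the
URL and paths are still just examples):

openshift_master_identity_providers=[{"name": "ldap_provider", "login": "true",
"challenge": "true", "kind": "LDAPPasswordIdentityProvider",
"url": "ldap://ldap.example.com:389/ou=People,dc=example,dc=com?uid",
"bind_dn": "", "bind_password": "", "ldap_insecure": "true",
"attributes": {"preferredUsername": "uid"}}, {"name": "htpasswd_auth",
"login": "true", "challenge": "true", "kind": "HTPasswdPasswordIdentityProvider",
"filename": "/etc/origin/master/htpasswd"}]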


>
>
>> I've posted this expanded out and easier to read at
>> http://paste.fedoraproject.org/360411/61788028/
>>
>> This setting always fails with this error:
>>
>> TASK: [openshift_master | Install httpd-tools if needed]
>> **
>> fatal: [master01.kubtst1.tst.lan.returnpath.net] => with_items expects a
>> list or a set
>> FATAL: all hosts have already failed -- aborting
>>
>> I've added a debug statement right before this to print the
>> "openshift.master.identity_providers" variable that ansible is trying to
>> iterate over in this task and it basically looks like a string, not a list:
>>
>> TASK: [openshift_master | debug var=openshift.master.identity_providers]
>> **
>> ok: [master01.kubtst1.tst.lan.returnpath.net] => {
>> "var": {
>> "openshift.master.identity_providers": "[{ \"name\":
>> \"ldap_provider\", \"login\" : true, \"challenge\" : true, \"kind\" :
>> \"LDAPPasswordIdentityProvider\", \"ldap_server\" : \"ldap.example.com\",
>> \"ldap_bind_dn\" : \"\", \"ldap_bind_password\" : \"\", \"ldap_insecure\" :
>> true, \"ldap_base_ou\" : \"ou=People,dc=example,dc=com\",
>> \"ldap_preferred_username\" : \"uid\" }, {'name': 'htpasswd_auth', 'login':
>> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
>> 'filename': '/etc/origin/master/htpasswd'}]"
>> }
>> }
>>
>> Is this an ansible bug or am I formatting this argument incorrectly? I
>> based the format on the example here:
>> https://docs.openshift.org/latest/install_config/install/advanced_install.html#configuring-cluster-variables
>>
>>
>> Thanks for any insight,
>>
>>
>>
>> --
>> Robert Wehner
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
> Jason DeTiberus
>



-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: CentOS OpsTools (logging, monitoring, etc.) SIG proposal

2016-05-20 Thread Jason DeTiberus
On Fri, May 20, 2016 at 10:40 AM, Rich Megginson <rmegg...@redhat.com>
wrote:

> We are trying to start up a CentOS OpsTools SIG
> <https://wiki.centos.org/SpecialInterestGroup>
> https://wiki.centos.org/SpecialInterestGroup for logging, monitoring, etc.
>
It almost seems to me that this is actually a meta SIG in a sense. I would
almost expect there to be a SIG for each sub topic here.

> The intention is that this would be the upstream for development and
> packaging of tools related to logging (EFK stack, etc.), monitoring, and
> other opstools, as a single place where packages can be consumed by
> OpenShift Origin, RDO, and other upstream projects that use CentOS - pool
> our resources, share the lessons learned, and enable cross project log
> aggregation and correlation (e.g. running OpenShift on top of OpenStack on
> top of Ceph/Gluster - do my OpenShift application errors correlate with
> Nova errors?  file system errors?).
>
I definitely love the concept, I just want to make sure that we don't
duplicate effort being done by the existing SIGs or end up with conflicting
efforts.


> This would also be a place for installers (puppet manifests, ansible
> playbooks), and possibly testing/CI and containers.
>
So, for OpenShift we already have the PaaS SIG that will cover installation
and testing/CI. The Cloud SIG covers this for OpenStack as well.

There is also potentially overlap with the ConfigManagement SIG here as
well.


> It is intended that this will form the basis of
> https://github.com/openshift/origin-aggregated-logging which will be
> built from the packages and base images provided by the SIG.
> If you are interested, please chime in in the email thread:
> https://lists.centos.org/pipermail/centos-devel/2016-May/014777.html
>

-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: CentOS OpsTools (logging, monitoring, etc.) SIG proposal

2016-05-20 Thread Jason DeTiberus
On Fri, May 20, 2016 at 1:22 PM, Rich Megginson <rmegg...@redhat.com> wrote:

> On 05/20/2016 10:54 AM, Jason DeTiberus wrote:
>
>
>
> On Fri, May 20, 2016 at 10:40 AM, Rich Megginson < <rmegg...@redhat.com>
> rmegg...@redhat.com> wrote:
>
>> We are trying to start up a CentOS OpsTools SIG
>> https://wiki.centos.org/SpecialInterestGroup for logging, monitoring,
>> etc.
>>
> It almost seems to me that this is actually a meta SIG in a sense. I would
> almost expect there to be a SIG for each sub topic here.
>
>
> So we should have a logging SIG, a monitoring SIG, etc.?
>

I believe so. I'm not completely against them living in an OpsTools SIG, I
just worry about OpsTools focusing on one type of logging and/or monitoring
framework and then a competing SIG coming along that would address others.


> The intention is that this would be the upstream for development and
>> packaging of tools related to logging (EFK stack, etc.), monitoring, and
>> other opstools, as a single place where packages can be consumed by
>> OpenShift Origin, RDO, and other upstream projects that use CentOS - pool
>> our resources, share the lessons learned, and enable cross project log
>> aggregation and correlation (e.g. running OpenShift on top of OpenStack on
>> top of Ceph/Gluster - do my OpenShift application errors correlate with
>> Nova errors?  file system errors?).
>>
> I definitely love the concept, I just want to make sure that we don't
> duplicate effort being done by the existing SIGs or end up with conflicting
> efforts.
>
>
>> This would also be a place for installers (puppet manifests, ansible
>> playbooks), and possibly testing/CI and containers.
>>
> So, for OpenShift we already have the PaaS SIG that will cover
> installation and testing/CI. The Cloud SIG covers this for OpenStack as
> well.
>
>
> What about testing/CI for running OpenShift with an integrated EFK stack?
> Would that be covered by the PaaS SIG?  Same with Cloud SIG, running
> OpenStack with an EFK stack for logging.
>

I believe we would want to extend that into the PaaS SIG (I can't really
speak to the Cloud SIG), since logging and metrics are an integral part of
the complete OpenShift platform. Obviously we need to do work towards the
automated deployment of those platforms, but I would fully intend that
testing and CI coverage include the deployed logging and metrics
components. Shipping containers would also have to be tied in closely with
the PaaS SIG, since platform versions are tied to versions of the
integrated containers as well.


>
>
>
> There is also potentially overlap with the ConfigManagement SIG here as
> well.
>
>
>> It is intended that this will form the basis of
>> https://github.com/openshift/origin-aggregated-logging which will be
>> built from the packages and base images provided by the SIG.
>> If you are interested, please chime in in the email thread:
>> https://lists.centos.org/pipermail/centos-devel/2016-May/014777.html
>>
>
> --
> Jason DeTiberus
>
>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: ansible masters

2016-04-18 Thread Jason DeTiberus
On Mon, Apr 18, 2016 at 4:20 PM, Candide Kemmler <candide@intrinsic.world>
wrote:

> I'm a bit confused about how to configure masters & nodes in ansible hosts:
>
> I have a master @ paas.example.com and a node at node1.example.com
>
> I want my apps to be accessible from the outside world @ apps.example.com
>
> Is it correct to assume that node1.example.com will never be exposed by
> openshift routes to the outside world?
>

The default subdomain is exposed through the OpenShift router and a
wildcard DNS entry for *.apps.example.com would need to be created to point
to the router instance (or a load balancer containing the router instances).

See https://docs.openshift.org/latest/architecture/core_concepts/routes.html
for more information on how the routing layer works.

Also see
https://docs.openshift.org/latest/install_config/install/deploy_router.html
for how to deploy the router.
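For the DNS side, the wildcard entry is just an ordinary A record; a minimal
BIND-style sketch, where 203.0.113.10 stands in for the address of the router
host or load balancer:

*.apps.example.com.   300   IN   A   203.0.113.10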


>
> So I have the following relevant information in ansible hosts:
>
> [OSEv3:children]
> masters
> nodes
>
> # Set variables common for all OSEv3 hosts
> [OSEv3:vars]
>
> # default subdomain to use for exposed routes
> osm_default_subdomain="apps.example.com"
>
> [...]
>
> # host group for masters
> [masters]
> paas.example.com openshift_hostname=paas.example.com
> openshift_public_hostname=paas.example.com
>
> # host group for nodes
> [nodes]
> node1.example.com openshift_hostname=node1.example.com
> openshift_public_hostname=node1.example.com


Your master should be listed here as well. We currently require that the
master also be a node (to be a member of the SDN network); by default,
openshift-ansible will make the node on the master host unschedulable to
avoid pods being scheduled there.
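In other words, a sketch of your [nodes] group with the master added (reusing
your hostnames; nothing else needs to change):

[nodes]
paas.example.com openshift_hostname=paas.example.com openshift_public_hostname=paas.example.com
node1.example.com openshift_hostname=node1.example.com openshift_public_hostname=node1.example.com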


>
>
> Is it correct?
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>



-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Adding master to 3 node install

2016-08-11 Thread Jason DeTiberus
On Aug 11, 2016 9:15 AM, "Philippe Lafoucrière" <
philippe.lafoucri...@tech-angels.com> wrote:
>
> Just for the records, we added a new node this week using the scaleup.yml
playbook, and it went pretty well.
>
> We also upgraded from 1.2.0 to 1.2.1 along with a Centos Atomic upgrade,
and it didn't go well :(
> All the images created by builders were "missing", and we had to rebuild
everything in every project, leading to a long unavailability (hopefully
during night).

This sounds like your registry was using ephemeral storage rather than
being backed by a PV or object storage.

The docs provide some additional details for this if manually deploying the
registry:
https://docs.openshift.org/latest/install_config/install/docker_registry.html

If using openshift-ansible for deployment, the example inventory file
provides some variables that allow for configuring an NFS volume, an
OpenStack Cinder volume, or a s3 bucket:
https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.origin.example#L290
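As a rough illustration only, an NFS-backed registry might be configured with
something like the following (variable names as I recall them from that example
file; please verify them against the inventory example for your release, and
the host and size values are placeholders):

openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=nfs.example.com
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi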

--
Jason DeTiberus

> So if you have a virtualization system above OS, you should definitely
snapshot before each run...
> ​
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: origin byo via ansible - etcd wrong IP

2016-07-18 Thread Jason DeTiberus
On Mon, Jul 18, 2016 at 11:10 AM, Andrew Butcher <abutc...@redhat.com>
wrote:

>
>
> On Mon, Jul 18, 2016 at 11:02 AM, Jason DeTiberus <jdeti...@redhat.com>
> wrote:
>
>>
>>
>> On Mon, Jul 18, 2016 at 10:51 AM, Miloslav Vlach <
>> miloslav.vl...@rohlik.cz> wrote:
>>
>>> Hi, I tried what you suggest but with no success. Maybe I don’t
>>> correctly write what I want.
>>>
>>> I can connect to master, node1, node2 - the installation process works
>>> good. The problem is that the IP address in master-config.yuml points to
>>> the wrong interface. Wrong IP address in config is 10.0.2.15 (NAT interface
>>> not accessible from other hosts) and the correct IP address is 10.2.2.10.
>>>
>>
>> Ah, my apologies. I misunderstood the issue originally. 'etcd_ip' should
>> be the variable that you want to override to fix the issue.
>>
>
> Setting openshift_ip is the correct way to override it. I think the problem
> may be that openshift_hostname should also be set in order to override the
> etcd url in the master config.
>

Ah, thanks! I thought that was the case, but it's been a while since I
looked through the etcd config code, so I missed the translation.
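To make that concrete, a sketch of the master entry with both variables set
(master.example.local is only a placeholder; use whatever `hostname -f` reports
on that host):

[masters]
master openshift_ip=10.2.2.10 openshift_public_ip=10.2.2.10 openshift_hostname=master.example.local openshift_public_hostname=master.example.local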


>
> On the 10.2.2.10 host, what is the output of `hostname -f`?
>
>
>> I don’t know how to specify the IP 10.2.2.10 for the etcd…
>>>
>>> Thanks Mila
>>>
>>>
>>>
>>> Dne 18. července 2016 v 16:33:41, Jason DeTiberus (jdeti...@redhat.com)
>>> napsal/a:
>>>
>>> If you are using ansible > 2.0, then you would set 'ansible_host' for
>>> each host. If using ansible < 2.0, then the variable is 'ansible_ssh_host'
>>>
>>> --
>>> Jason DeTiberus
>>>
>>> On Mon, Jul 18, 2016 at 10:12 AM, Miloslav Vlach <
>>> miloslav.vl...@rohlik.cz> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I would like to install one master and two nodes to my virtual box. I
>>>> have problem with setting the primary IP address.
>>>>
>>>> All my VM have two interfaces: 10.0.2.15 (master NAT) and 10.2.2.10
>>>> (host only).
>>>>
>>>> When I run the ansible-playbook I got error with etcd, which tries to
>>>> connect to the 10.0.2.15. There is nothing. How can I setup the IP address 
>>>> ?
>>>>
>>>> My inventory looks like this
>>>>
>>>> [masters]
>>>>
>>>> master openshift_ip=10.2.2.10 openshift_public_ip=10.2.2.10
>>>>
>>>>
>>>> [etcd]
>>>>
>>>> master openshift_ip=10.2.2.10 openshift_public_ip=10.2.2.10
>>>>
>>>>
>>>> [nodes]
>>>>
>>>> master openshift_node_labels="{'region': 'infra', 'zone': 'default'}" 
>>>> openshift_ip=10.2.2.10
>>>> openshift_public_ip=10.2.2.10
>>>>
>>>> node1 openshift_node_labels="{'region': 'primary', 'zone': 'default'}" 
>>>> openshift_ip=10.2.2.11
>>>> openshift_public_ip=10.2.2.11
>>>>
>>>> node2 openshift_node_labels="{'region': 'primary', 'zone': 'default'}" 
>>>> openshift_ip=10.2.2.12
>>>> openshift_public_ip=10.2.2.12
>>>>
>>>>
>>>> When I manually change the master-config.yaml, the etcd starts
>>>> working. But I can’t do this by the ansible.
>>>>
>>>> etcdClientInfo:
>>>>
>>>>   ca: master.etcd-ca.crt
>>>>
>>>>   certFile: master.etcd-client.crt
>>>>
>>>>   keyFile: master.etcd-client.key
>>>>
>>>>   urls:
>>>>
>>>> - https://10.2.2.10:2379
>>>>
>>>>
>>>>
>>>> Thanks Mila
>>>>
>>>>
>>>> ___
>>>> users mailing list
>>>> users@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>> --
>> Jason DeTiberus
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: origin byo via ansible - etcd wrong IP

2016-07-18 Thread Jason DeTiberus
On Mon, Jul 18, 2016 at 10:51 AM, Miloslav Vlach <miloslav.vl...@rohlik.cz>
wrote:

> Hi, I tried what you suggest but with no success. Maybe I don’t correctly
> write what I want.
>
> I can connect to master, node1, node2 - the installation process works
> good. The problem is that the IP address in master-config.yaml points to
> the wrong interface. Wrong IP address in config is 10.0.2.15 (NAT interface
> not accessible from other hosts) and the correct IP address is 10.2.2.10.
>

Ah, my apologies. I misunderstood the issue originally. 'etcd_ip' should be
the variable that you want to override to fix the issue.


>
> I don’t know how to specify the IP 10.2.2.10 for the etcd…
>
> Thanks Mila
>
>
>
> Dne 18. července 2016 v 16:33:41, Jason DeTiberus (jdeti...@redhat.com)
> napsal/a:
>
> If you are using ansible > 2.0, then you would set 'ansible_host' for each
> host. If using ansible < 2.0, then the variable is 'ansible_ssh_host'
>
> --
> Jason DeTiberus
>
> On Mon, Jul 18, 2016 at 10:12 AM, Miloslav Vlach <miloslav.vl...@rohlik.cz
> > wrote:
>
>> Hi all,
>>
>> I would like to install one master and two nodes to my virtual box. I
>> have problem with setting the primary IP address.
>>
>> All my VM have two interfaces: 10.0.2.15 (master NAT) and 10.2.2.10 (host
>> only).
>>
>> When I run the ansible-playbook I got error with etcd, which tries to
>> connect to the 10.0.2.15. There is nothing. How can I setup the IP address ?
>>
>> My inventory looks like this
>>
>> [masters]
>>
>> master openshift_ip=10.2.2.10 openshift_public_ip=10.2.2.10
>>
>>
>> [etcd]
>>
>> master openshift_ip=10.2.2.10 openshift_public_ip=10.2.2.10
>>
>>
>> [nodes]
>>
>> master openshift_node_labels="{'region': 'infra', 'zone': 'default'}" 
>> openshift_ip=10.2.2.10
>> openshift_public_ip=10.2.2.10
>>
>> node1 openshift_node_labels="{'region': 'primary', 'zone': 'default'}" 
>> openshift_ip=10.2.2.11
>> openshift_public_ip=10.2.2.11
>>
>> node2 openshift_node_labels="{'region': 'primary', 'zone': 'default'}" 
>> openshift_ip=10.2.2.12
>> openshift_public_ip=10.2.2.12
>>
>>
>> When I manually change the master-config.yaml, the etcd starts working.
>> But I can’t do this by the ansible.
>>
>> etcdClientInfo:
>>
>>   ca: master.etcd-ca.crt
>>
>>   certFile: master.etcd-client.crt
>>
>>   keyFile: master.etcd-client.key
>>
>>   urls:
>>
>> - https://10.2.2.10:2379
>>
>>
>>
>> Thanks Mila
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: origin byo via ansible - etcd wrong IP

2016-07-18 Thread Jason DeTiberus
If you are using ansible > 2.0, then you would set 'ansible_host' for each
host. If using ansible < 2.0, then the variable is 'ansible_ssh_host'

--
Jason DeTiberus

On Mon, Jul 18, 2016 at 10:12 AM, Miloslav Vlach <miloslav.vl...@rohlik.cz>
wrote:

> Hi all,
>
> I would like to install one master and two nodes to my virtual box. I have
> problem with setting the primary IP address.
>
> All my VM have two interfaces: 10.0.2.15 (master NAT) and 10.2.2.10 (host
> only).
>
> When I run the ansible-playbook I got error with etcd, which tries to
> connect to the 10.0.2.15. There is nothing. How can I setup the IP address ?
>
> My inventory looks like this
>
> [masters]
>
> master openshift_ip=10.2.2.10 openshift_public_ip=10.2.2.10
>
>
> [etcd]
>
> master openshift_ip=10.2.2.10 openshift_public_ip=10.2.2.10
>
>
> [nodes]
>
> master openshift_node_labels="{'region': 'infra', 'zone': 'default'}" 
> openshift_ip=10.2.2.10
> openshift_public_ip=10.2.2.10
>
> node1 openshift_node_labels="{'region': 'primary', 'zone': 'default'}" 
> openshift_ip=10.2.2.11
> openshift_public_ip=10.2.2.11
>
> node2 openshift_node_labels="{'region': 'primary', 'zone': 'default'}" 
> openshift_ip=10.2.2.12
> openshift_public_ip=10.2.2.12
>
>
> When I manually change the master-config.yaml, the etcd starts working.
> But I can’t do this by the ansible.
>
> etcdClientInfo:
>
>   ca: master.etcd-ca.crt
>
>   certFile: master.etcd-client.crt
>
>   keyFile: master.etcd-client.key
>
>   urls:
>
> - https://10.2.2.10:2379
>
>
>
> Thanks Mila
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Create selfsigned certs for securing openshift registry

2016-07-08 Thread Jason DeTiberus
On Jul 8, 2016 1:52 AM, "Den Cowboy"  wrote:
>
> I try to secure my openshift registry:
>
> $ oadm ca create-server-cert \
> --signer-cert=/etc/origin/master/ca.crt \
> --signer-key=/etc/origin/master/ca.key \
> --signer-serial=/etc/origin/master/ca.serial.txt \
>
> --hostnames='docker-registry.default.svc.cluster.local,172.30.124.220' \
> --cert=/etc/secrets/registry.crt \
> --key=/etc/secrets/registry.key
>
>
> Which hostnames do I have to use?
> The service IP of my docker registry of course but what then?:

Currently everything internal should be using just the service IP.

>
> docker-registry.default.svc.cluster.local

This would cover the created service. We have plans to eventually use the
registry service name instead of IP.

> OR/AND
> docker-registry.dev.wildcard.com

This would only be needed if you intend to expose the registry using a
route for access external to the cluster.
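If you do intend to expose it via a route, a sketch combining all three names
(this reuses the paths and service IP from your command; the external hostname
is whatever you configure for the route):

oadm ca create-server-cert \
    --signer-cert=/etc/origin/master/ca.crt \
    --signer-key=/etc/origin/master/ca.key \
    --signer-serial=/etc/origin/master/ca.serial.txt \
    --hostnames='docker-registry.default.svc.cluster.local,172.30.124.220,docker-registry.dev.wildcard.com' \
    --cert=/etc/secrets/registry.crt \
    --key=/etc/secrets/registry.key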

>
> Thanks
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Hybrid Cloud Hostname Issues (AWS, Co-Lo)

2016-08-15 Thread Jason DeTiberus
On Mon, Aug 15, 2016 at 4:17 AM, Frank Liauw <fr...@vsee.com> wrote:

> Hi All,
>
> I have a 5 node Openshift cluster split across 2 AZs; our colocation
> center and AWS, with a master in each AZ and the rest being nodes.
>
> We setup our cluster with the Ansible script, and somewhere during the
> setup, the EC2 instance's private hostname were picked up and registered as
> node names of the nodes in AWS, which is a bit annoying as that deviates
> from our hostname conventions and is rather difficult to read, and it's not
> something that can be changed post setup.
>
> It didn't help that parts of the admin operations seem to be using the EC2
> instance's private hostname, so I get errors like this:
>
> # oc logs logging-fluentd-shfnu
> Error from server: Get https://ip-10-20-128-101.us-
> west-1.compute.internal:10250/containerLogs/logging/logging-
> fluentd-shfnu/fluentd-elasticsearch: dial tcp 198.90.20.95:10250: i/o
> timeout
>
> Scheduling system related pods on the AWS instances works (router,
> fluentd), though any build pods that lands up on EC2s never gets built, and
> just eventually times out; my suspicion is that the build process monitors
> depends on the hostname which can't be reached from our colocation center
> master (which we use as a primary), and hence breaks.
>
> I'm unable to find much detail on this behaviour.
>
> 1. Can we manually change the hostname of certain nodes?
>

The nodeName value overrides this; however, if you are relying on cloud
provider integration there are limitations (see below).


>
> 2. How do we avoid registering EC2 nodes with their private hostnames?
>

If you are willing to give up the native cloud provider integration (ability
to leverage EBS volumes as PVs), then you can override this using the
openshift_hostname variable when installing the cluster. At least as of
Kubernetes/Origin 1.2, the nodeName value in the node config needed to
match the private dns name of the host.

--
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Missing OpenShift Nodes - Unable to Join Cluster

2016-09-09 Thread Jason DeTiberus
On Fri, Sep 9, 2016 at 10:18 AM, Isaac Christoffersen <
ichristoffer...@vizuri.com> wrote:

> So the hostnames did not change and after rolling back to just the BYO
> configuration and removing the AWS settings, I was able to get back up and
> running.  This means that the certificates were good as well.
>
> I lost the ability to use EBS volumes doing this, but we in the process of
> using EFS anyway.
>
> I suspect the issue is tied up in the fact that these node names have
> multiple aliases and have a different local hostname then they do in the
> EC2 console.  However, I'm not why this manifested itself after running
> successfully for 4 weeks.
>

That is definitely odd; I would expect that the hostname wouldn't matter.
For the cloud provider integration, the value of the nodeName setting in
/etc/origin/node/node-config.yaml should match the private-dns-name
attribute of the instance.
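For example, something along these lines (the value shown is hypothetical; it
must be the private DNS name reported for your instance):

# /etc/origin/node/node-config.yaml
nodeName: ip-10-0-37-217.us-west-1.compute.internal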



>
> Either way, I'm moving on with just BYO.
>
> thanks,
>
> Isaac
>
> Isaac Christoffersen <https://www.linkedin.com/in/ichristo>, Technical
> Director
> w: 703.318.7800 x8202 | m: 703.980.2836 | @ichristo
> <http://twitter.com/ichristo>
>
> Vizuri, a division of AEM Corporation
> 13880 Dulles Corner Lane # 300
> Herndon, Virginia 20171
> www.vizuri.com | @1Vizuri <http://twitter.com/1Vizuri>
>
>
> On Thu, Sep 8, 2016 at 10:36 PM, Isaac Christoffersen <
> ichristoffer...@vizuri.com> wrote:
>
>> No, the hostnames are the same.  Because I was getting the "external Id
>> from Cloud provider" error, I disabled the AWS configuration settings and
>> left it as solely a BYO.
>>
>> This allowed me to get my nodes back up.  There's definitely something
>> with the AWS cloud provider settings and how instance names for nodes are
>> being found.
>>
>> I only need the AWS config for EBS storage for Persistence Volumes, so I
>> can't fully disable it the AWS settings.
>>
>> How does the external id lookup work?  Can I verify the settings it
>> expects?
>>
>> Isaac Christoffersen <https://www.linkedin.com/in/ichristo>, Technical
>> Director
>> w: 703.318.7800 x8202 | m: 703.980.2836 | @ichristo
>> <http://twitter.com/ichristo>
>>
>> Vizuri, a division of AEM Corporation
>> 13880 Dulles Corner Lane # 300
>> Herndon, Virginia 20171
>> www.vizuri.com | @1Vizuri <http://twitter.com/1Vizuri>
>>
>>
>> On Thu, Sep 8, 2016 at 9:24 PM, Jason DeTiberus <jdeti...@redhat.com>
>> wrote:
>>
>>> On Sep 8, 2016 7:06 PM, "Isaac Christoffersen" <
>>> ichristoffer...@vizuri.com> wrote:
>>> >
>>> > I'm running Origin in AWS and after adding some shared EFS volumes to
>>> the node instances, the nodes seem to be unable to rejoin the cluster.
>>> >
>>> > It's a 3 Master + ETCD setup with 4 application Nodes.  An 'oc get
>>> nodes' returns an empty list and of course, none of the pods will start.
>>> >
>>> >
>>> > Various error messages that I see that are relevant are:
>>> >
>>> > "Unable to construct api.Node object for kubelet: failed to get
>>> external ID from cloud provider: instance not found
>>> > "Could not find an allocated subnet for node: ip-10-0-37-217. ,
>>> Waiting..."
>>> >
>>> > and
>>> >
>>> > ""Error updating node status, will retry: error getting node
>>> "ip-10-0-37-217": nodes "ip-10-0-37-217" not found"
>>> >
>>> >
>>> > Any insights into how to start troubleshooting further.  I'm baffled.
>>>
>>> Did the nodes come back up with a new IP address? If so, the internal
>>> DNS name would have also changed and the node would need to be reconfigured
>>> accordingly.
>>>
>>> Items that would need to be updated:
>>> - node name in the node config
>>> - node serving certificate
>>>
>>> There is an Ansible playbook that can automate the redeployment of
>>> certificates as well
>>> (playbooks/byo/openshift-cluster/redeploy-certificates.yml).
>>>
>>> --
>>> Jason DeTiberus
>>>
>>
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Missing OpenShift Nodes - Unable to Join Cluster

2016-09-08 Thread Jason DeTiberus
On Sep 8, 2016 7:06 PM, "Isaac Christoffersen" <ichristoffer...@vizuri.com>
wrote:
>
> I'm running Origin in AWS and after adding some shared EFS volumes to the
node instances, the nodes seem to be unable to rejoin the cluster.
>
> It's a 3 Master + ETCD setup with 4 application Nodes.  An 'oc get nodes'
returns an empty list and of course, none of the pods will start.
>
>
> Various error messages that I see that are relevant are:
>
> "Unable to construct api.Node object for kubelet: failed to get external
ID from cloud provider: instance not found
> "Could not find an allocated subnet for node: ip-10-0-37-217. ,
Waiting..."
>
> and
>
> ""Error updating node status, will retry: error getting node
"ip-10-0-37-217": nodes "ip-10-0-37-217" not found"
>
>
> Any insights into how to start troubleshooting further.  I'm baffled.

Did the nodes come back up with a new IP address? If so, the internal DNS
name would have also changed and the node would need to be reconfigured
accordingly.

Items that would need to be updated:
- node name in the node config
- node serving certificate

There is an Ansible playbook that can automate the redeployment of
certificates as well
(playbooks/byo/openshift-cluster/redeploy-certificates.yml).
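Roughly, the invocation looks like this (the inventory path is whatever you
normally use for this cluster):

ansible-playbook -i /path/to/inventory \
    playbooks/byo/openshift-cluster/redeploy-certificates.yml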

--
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Modifying existing advanced installation

2016-09-22 Thread Jason DeTiberus
On Tue, Sep 20, 2016 at 4:07 AM, Lionel Orellana <lione...@gmail.com> wrote:

> Hello
>
> I want to configure LDAP authentication on my existing cluster.
>
> Instead of manually modifying the master config file, can I add the new
> settings to my Ansible inventory and rerun the config playbook?
>

Yes, you can update your inventory and re-run Ansible.


> Does it know to only apply the new configuration?
>

It will re-run the entire config playbook. There are some steps that will
not be applied automatically (certificate creation, router, registry,
logging, metrics), and there are some tasks that may report "changed" when
they have not actually modified anything. We are working on improving the
roles for better suitability for ongoing configuration management.
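For example, after adding the identity provider settings to the inventory, the
re-run is just the normal config playbook (the inventory path is a placeholder;
playbooks/byo/config.yml is the usual entry point for a BYO install):

ansible-playbook -i /path/to/inventory playbooks/byo/config.yml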


> Generally speaking, is this the best way to make changes to an existing
> cluster?
>

It is the way that I would recommend, yes.



>
> Thanks
>
> Lionel.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: masters elb configuration

2016-09-30 Thread Jason DeTiberus
On Fri, Sep 30, 2016 at 4:12 AM, Andrew Lau <and...@andrewklau.com> wrote:

> Has anyone had any success running the master (api and console) behind
> ELB? The new ALB supports web sockets, however spdy isn't supported
> (although http/2 is):
>

For ELBs, you need to use tcp rather than http/https.

There is a config option to enable http/2 starting with Origin 1.3/OCP 3.3,
but we have not tested it behind an ALB.
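As a rough sketch of the classic ELB approach (the load balancer name and
subnet are placeholders, and 8443 assumes the default master API port):

aws elb create-load-balancer \
    --load-balancer-name openshift-master-api \
    --listeners "Protocol=TCP,LoadBalancerPort=8443,InstanceProtocol=TCP,InstancePort=8443" \
    --subnets subnet-0123abcd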



>
> Running oc rsh or oc rsync through ELB ends up with clients getting the
> respective responses:
>
> Error from server: Upgrade request required
>
> WARNING: cannot use rsync: rsync not available in container
> WARNING: cannot use tar: tar not available in container
> error: No available strategies to copy.
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: multi cloudprovider

2016-10-26 Thread Jason DeTiberus
On Oct 26, 2016 7:26 AM, "Andrew Lau" <and...@andrewklau.com> wrote:
>
> Thanks
>
> On Wed, 26 Oct 2016 at 22:12 Jason DeTiberus <jdeti...@redhat.com> wrote:
>>
>> On Oct 26, 2016 4:29 AM, "Andrew Lau" <and...@andrewklau.com> wrote:
>> >
>> > Does openshift have support for multi cloudproviders (without
federation)? eg. if we want to spread a cluster across AWS and oVirt.
>>
>> It does not. To deploy a single cluster across multiple cloud providers
or multiple regions in the same cloud provider, the integrated cloud
provider support needs to be disabled.
>
> Would it be possible to have a cloud provider and then nodes not part of
the cloud provider?

It would not; part of the cloud provider integration is to automatically
remove nodes that are no longer present in the cloud provider. The node
controller process would delete those nodes.

>>
>> This is a limitation of Kubernetes and there are currently no plans to
change this outside of using federated clusters.
>
> Federation seems to be a kube only thing atm(?)

I'm not sure if we are planning on exposing it as tech preview for Origin
1.4 or 1.5. Hopefully someone else can chime in with more info.

>>
>> >
>> > My concern around such an implementation is the AWS dynamic volume
provisioning and masters access key requirements.
>>
>> Indeed, you would lose all support for cloud-based volumes, even
pre-provisioned ones.
>>
>> You could use Gluster for dynamic volumes.
>
> Does gluster still have the 100 volume limit? We lose out on a lot of IO
last time we had a large gluster cluster.

I'm not sure. I only mentioned it because it's the only built in dynamic
provisioner that isn't tied to a cloud provider.

>>
>> The bigger consideration will be latency between the hosts and the
design of your scheduler config, node labeling, and node selectors for your
projects/deployments to properly place things across the different
providers.
>>
>> --
>> Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Managing OpenShift Configuration with Puppet/Ansible… what are your best practices?

2016-10-13 Thread Jason DeTiberus
On Thu, Oct 13, 2016 at 2:53 PM, Rich Megginson <rmegg...@redhat.com> wrote:

> On 10/13/2016 07:52 AM, Philippe Lafoucrière wrote:
>
>> Just to clarify our need here:
>>
>> We want the projects config inside a configuration tool. There's
>> currently nothing preventing from modifying the config of a project (let's
>> say, a DC), and no one will be notified of the change.
>>
>
> Do you mean, if someone does 'oc edit dc my-project-dc', you want to be
> able to sync those changes back to some config file, so that if you
> redeploy, it will use the changes you made when you did the 'oc edit'?
>


I believe he is looking to have the external config be the source of truth
in this case, which would be covered by the future Ansible module work (we
aren't looking to provide additional configuration management support
beyond Ansible, as far as I know).


>
> We're looking for something to keep track of changes,
>
>
It is possible to do this part currently using watch, either through the
api or through the command line tooling.


> and make sure the config deployed is the config we have in our git repo.
>>
>
This is the trickier part, which the Ansible modules would help address.

--
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Satellite instead of subscription-manager PLEASE HELP (BLOCKED)

2016-10-13 Thread Jason DeTiberus
On Thu, Oct 13, 2016 at 4:48 PM, Dean Peterson <peterson.d...@gmail.com>
wrote:

> Our machines use rhn classic. If I try to run subscription-manager
> register it says I am already registered with redhat classic. However, this
> does seem to be compatible with Docker and Openshift. Operations wants to
> stick with redhat classic and satellite. Is this possible?
>

I don't think this is currently possible; the entitlement/subscription
mapping is done through a set of plugins that are specific to
subscription-manager. With RHN Classic approaching end of life (
https://access.redhat.com/rhn-to-rhsm) I don't really see that changing,
but you could always reach out to support to file a formal RFE.

--
Jason DeTiberus


>
> On Thu, Oct 13, 2016 at 3:29 PM, Kent Perrier <kperr...@redhat.com> wrote:
>
>> subscription-manager is used to register your host to your local
>> satellite as well. How are you patching your hosts if they are not
>> registered?
>>
>> Kent
>>
>> On Thu, Oct 13, 2016 at 3:05 PM, Dean Peterson <peterson.d...@gmail.com>
>> wrote:
>>
>>> Can anyone please help? We use satellite for access to our software. We
>>> do not use subscription-manager. Unfortunately when running docker builds,
>>> the containers cannot access the hosts registries because they expect to
>>> access auto attached subscription-manager subscriptions
>>> How is openshift supposed to work with satellite instead of
>>> subscription-manager?
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>>
>> --
>> Kent Perrier
>> Technical Account Manager
>>
>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Satellite instead of subscription-manager PLEASE HELP (BLOCKED)

2016-10-14 Thread Jason DeTiberus
On Fri, Oct 14, 2016 at 10:35 AM, Dean Peterson <peterson.d...@gmail.com>
wrote:

> I went to the link: "https://access.redhat.com/rhn-to-rhsm". It says
> satellite users should be unaffected. I'm a little confused. I'm using
> satellite, but when I type subscription-manager register it says i'm
> registered. However, when I run "subscription-manager attach --auto", it
> spins for a while then says I am not registered.
>


The tooling hits separate systems, so register is showing that you are
registered, but only because the RHN tooling is configured and reports it's
registered. Attach tries to attach subscriptions from RHSM and will fail
because the RHSM system does not manage the system.



> We pay a lot of money for Openshift Enterprise and it will not work
> without upgrading our entire satellite system?
>

For the host subscription/entitlement information to be propagated into the
container, it would either require the host be subscribed to Satellite 6 or
the hosted subscription management service.


> Right now we are on version 5.5 of satellite. There is no way to make this
> work with our existing setup?
>

Possible options I can think of off the top of my head:
- Subscribe OpenShift systems directly to Subscription Manager, instead of
Satellite 5.5
- Access packages through a reposync'd mirror:
https://access.redhat.com/solutions/9892, and configure the mirror as part
of the container build.
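For the second option, the container build could point at the mirror with an
ordinary yum repo file, roughly like this (the mirror URL is hypothetical):

# /etc/yum.repos.d/internal-mirror.repo
[rhel-7-server-rpms]
name=RHEL 7 Server (internal reposync mirror)
baseurl=http://mirror.example.com/repos/rhel-7-server-rpms/
enabled=1
gpgcheck=1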

I'd suggest contacting support and/or account manager, since they may know
of other options available and could potentially help advocate for adding
Satellite 5 support.

--
Jason



>
> On Thu, Oct 13, 2016 at 3:58 PM, Jason DeTiberus <jdeti...@redhat.com>
> wrote:
>
>>
>>
>> On Thu, Oct 13, 2016 at 4:48 PM, Dean Peterson <peterson.d...@gmail.com>
>> wrote:
>>
>>> Our machines use rhn classic. If I try to run subscription-manager
>>> register it says I am already registered with redhat classic. However, this
>>> does seem to be compatible with Docker and Openshift. Operations wants to
>>> stick with redhat classic and satellite. Is this possible?
>>>
>>
>> I don't think this is currently possible, the entitlement/subscription
>> mapping is done through a set of plugins that are specific to
>> subscription-manager. With RHN Classic approaching end of life (
>> https://access.redhat.com/rhn-to-rhsm) I don't really see that changing,
>> but you could always reach out to support to file a formal RFE.
>>
>> --
>> Jason DeTiberus
>>
>>
>>>
>>> On Thu, Oct 13, 2016 at 3:29 PM, Kent Perrier <kperr...@redhat.com>
>>> wrote:
>>>
>>>> subscription-manager is used to register your host to your local
>>>> satellite as well. How are you patching your hosts if they are not
>>>> registered?
>>>>
>>>> Kent
>>>>
>>>> On Thu, Oct 13, 2016 at 3:05 PM, Dean Peterson <peterson.d...@gmail.com
>>>> > wrote:
>>>>
>>>>> Can anyone please help? We use satellite for access to our software.
>>>>> We do not use subscription-manager. Unfortunately when running docker
>>>>> builds, the containers cannot access the hosts registries because they
>>>>> expect to access auto attached subscription-manager subscriptions
>>>>> How is openshift supposed to work with satellite instead of
>>>>> subscription-manager?
>>>>>
>>>>> ___
>>>>> users mailing list
>>>>> users@lists.openshift.redhat.com
>>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Kent Perrier
>>>> Technical Account Manager
>>>>
>>>>
>>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Master/ETCD Migration

2016-12-13 Thread Jason DeTiberus
 best or the right way to do since this is a production
>> cluster and i want minimal downtime?
>>
>>
>> ---
>> Diego Castro / The CloudFather
>> GetupCloud.com
>> <https://urldefense.proofpoint.com/v2/url?u=http-3A__GetupCloud.com=DgQFaQ=_hRq4mqlUmqpqlyQ5hkoDXIVh6I6pxfkkNxQuL0p-Z0=8IlWeJZqFtf8Tvx1PDV9NsLfM_M0oNfzEXXNp-tpx74=SXZbgql2jEdZcxZf-F7G1PY7KWstOe44c8cHN7wPNKM=WBpMKzLoWt-i2RcaByenm6qveMOvVLk3hW7-68poML4=>
>> - Eliminamos a Gravidade
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.op
>> enshift.redhat.com_openshiftmm_listinfo_users=DgICAg=_hR
>> q4mqlUmqpqlyQ5hkoDXIVh6I6pxfkkNxQuL0p-Z0=8IlWeJZqFtf8Tvx1P
>> DV9NsLfM_M0oNfzEXXNp-tpx74=SXZbgql2jEdZcxZf-F7G1PY7KWstOe4
>> 4c8cHN7wPNKM=hljug4_Dzfra1fGcjSvwVO2n6CAsCQpr5yyPBcbOc-Y=
>>
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Master/ETCD Migration

2016-12-13 Thread Jason DeTiberus
On Tue, Dec 13, 2016 at 1:49 PM, Diego Castro <diego.cas...@getupcloud.com>
wrote:

> 2016-12-13 15:24 GMT-03:00 Jason DeTiberus <jdeti...@redhat.com>:
>
>>
>>
>> On Tue, Dec 13, 2016 at 12:37 PM, Diego Castro <
>> diego.cas...@getupcloud.com> wrote:
>>
>>> Thanks John, it's very helpful.
>>> Looking over the playbook code, it's seems to replace all certificates
>>> and trigger node evacuation to update all pods CA, i definitely don't want
>>> that!
>>>
>>
>> It should only do that when openshift_certificates_redeploy_ca is set to
>> True, otherwise it should just redeploy certificates on the masters.
>>
> Perfect!
>
>>
>> There is also a PR for splitting out the certificate redeploy playbooks
>> to allow for more flexibility when running:
>> https://github.com/openshift/openshift-ansible/pull/2671
>>
>>
>>> - ETCD wont be a problem since i can replace the certs, migrate the
>>> datadir and restart masters.
>>>
>>
>> We don't currently support automated resizing or migration of etcd, but
>> this approach should work just fine.
>>
>> That said, one *could* do the following:
>> - Add the new etcd hosts to the inventory
>> - Run Ansible against the hosts (I suspect it will fail on service
>> startup)
>> - Add the newly provisioned etcd hosts manually to the cluster using
>> etcdctl
>> - if Ansible failed on the previous step, re-run Ansible again to finish
>> landing the etcd config change
>> - Remove the old etcd hosts from the etcd cluster using etcdctl
>> - Update the inventory to remove the old etcd hosts
>> - Run Ansible to remove the old etcd hosts from the master configs
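For the etcdctl steps in that list, a rough sketch using etcd2-era etcdctl (the
cert paths are the usual ones on an openshift-ansible managed etcd host and the
hostnames are placeholders; verify both locally):

ETCDCTL_OPTS="--ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --endpoints https://existing-etcd.example.com:2379"
etcdctl $ETCDCTL_OPTS member list
etcdctl $ETCDCTL_OPTS member add new-etcd https://new-etcd.example.com:2380
etcdctl $ETCDCTL_OPTS member remove <member-id-from-member-list>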
>>
>> I'll do it!
>
>>
>> - Masters is a big issue, since i had to change public cluster hostname.
>>>
>>
>> Indeed, but there shouldn't be a huge disruption of doing a rolling
>> update of the master services to land the new certificate. The controllers
>> service will migrate (possibly multiple times), but that should be mostly
>> transparent to running apps and users.
>>
>
> What you mean by 'rolling update', is the same process of nodes 'which i
> do by running scaleup playbook'?
>

For masters, this might work:
- If you are using named certificates:
  - update inventory:
- update openshift_master_named_certificates to add the cert for the
new cluster name(s)
- add the additional master hosts to the inventory without updating the
cluster hostname(s)
  - Run Ansible to land the new named_certificate on the existing hosts and
install/configure the new hosts
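A sketch of what the named certificate entry might look like (the paths and
hostname are placeholders; double-check the exact key names against your
openshift-ansible version):

openshift_master_named_certificates=[{"certfile": "/path/to/new-cluster.crt", "keyfile": "/path/to/new-cluster.key", "cafile": "/path/to/new-cluster-ca.crt", "names": ["new-console.example.com"]}]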

At this point, the cluster should be up and functional with all masters and
should respond and serve the api/console using the new cluster hostname,
but nodes will still be configured to use the old cluster hostname

The certificate redeploy PR covers how to update the node kubeconfigs to
point to the new master host, which would need to be done on each host
(along with a node reboot), before the old cluster hostname/load balancer
is removed.


One other thing to keep in mind is that you will want to migrate
/etc/etcd/generated_certs and /etc/origin/generated_configs to the new
"first etcd" and "first master" respectively after removing the old hosts.


>
> Once i get the new nodes up and running, can i just shutdown the old
> servers and update the inventory? Just wondering if something goes wrong
> replacing masters[0].
>
>
>>
>>
>
>>
>>>
>>>
>>> ---
>>> Diego Castro / The CloudFather
>>> GetupCloud.com - Eliminamos a Gravidade
>>>
>>> 2016-12-13 11:17 GMT-03:00 Skarbek, John <john.skar...@ca.com>:
>>>
>>>> Diego,
>>>>
>>>> We’ve done a similar thing in our environment. I’m not sure if the
>>>> openshift-ansible guys have a better way, but this is what we did at that
>>>> time.
>>>>
>>>> We created a custom playbook to run through all the steps as necessary.
>>>> And due to the version of openshift-ansible we were running, we had to be
>>>> careful when we did whichever server was index 0 in the array of hosts. (I
>>>> *think* they resolved that problem now)
>>>>
>>>> First we created a play that copied the necessary certificates too all
>>>> the nodes, such that it didn’t matter which node was in index 0 of the list
>>>> of nodes. So we had the playbook limited to operate one one node at a time
>>>> which dealt with tearing it down. Then we’d run the deploy on the entire
>>>&

Re: OpenShift origin cluster in VLAN

2016-12-07 Thread Jason DeTiberus
On Wed, Dec 7, 2016 at 9:37 AM, Den Cowboy <dencow...@hotmail.com> wrote:

> We've installed OpenShift origin with the advanced playbook. There we used
> public ip's. But after the installation we've deleted the public ip's. The
> master and nodes are in a VLAN. I'm able to create a user, authenticate,
> visite the webconsole. restart node, master configs. I'm able to pull
> images from our local registry but I'm not able to do a deployment.
>
You will need to regenerate the certificates for the deployment:
https://docs.openshift.org/latest/install_config/redeploying_certificates.html



>
> couldn't get deployment default/router-5: Get
> https://172.30.0.1:443/api/v1/namespaces/default/
> replicationcontrollers/router-5: dial tcp 172.30.0.1:443: getsockopt:
> network is unreachable
>
> I'm not even able to curl the kubernetes service. What did we forget/do
> wrong?
>
> In our configs the dnsIP: option is in comment. So we did not specifiy
> it. The docker, origin-node, origin-master and openvswitch services are all
> running.
>
> Logs of our origin-node show:
> pkg/proxy/config/api.go:60: Failed to watch *api.Endpoints: Get
> https://master.xxx...ction refused
> pkg/kubelet/kubelet.go:259: Failed to watch *api.Node: Get
> https://master.xxx:8443/..
> pkg/kubelet/config/apiserver.go:43: Failed to watch *api.Pod
> pkg/proxy/config/api.go:47: Failed to watch *api.Service: Get
> https://master.xxx refused
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 
Jason DeTiberus
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: In OpenShift Ansible, what is the differences between roles/openshift_hosted_metrics and roles/openshift_metrics ?

2017-04-28 Thread Jason DeTiberus
On Apr 28, 2017 1:01 PM, "Mateus Caruccio" <mateus.caruc...@getupcloud.com>
wrote:

I guess openshift_metrics is a refactor of openshift_hosted_metrics. Am I
right?

openshift_metrics is a refactoring of openshift_hosted_metrics and
leverages ansible-based deployment of the metrics stack instead of using a
deployer pod.

Similarly, openshift_hosted_logging has been deprecated in favor of
openshift_logging.

--
Jason DeTiberus




Em 28/04/2017 13:51, "Alex Wauck" <alexwa...@exosite.com> escreveu:

> I think Stéphane meant to link to this: https://github.com/openshift/o
> penshift-ansible/tree/master/roles/openshift_hosted_metrics
>
> What's the difference between that one and openshift_metrics?
>
> On Fri, Apr 28, 2017 at 11:46 AM, Tim Bielawa <tbiel...@redhat.com> wrote:
>
>> I believe that openshift-hosted-logging installs kibana (logging
>> exploration) whereas openshift-metrics will install hawkular (a metric
>> storage engine).
>>
>> On Fri, Apr 28, 2017 at 9:25 AM, Stéphane Klein <
>> cont...@stephane-klein.info> wrote:
>>
>>> Hi,
>>>
>>> what is the differences between :
>>>
>>> * roles/openshift_hosted_metrics (https://github.com/openshift/
>>> openshift-ansible/tree/master/roles/openshift_hosted_logging)
>>> * and roles/openshift_metrics (https://github.com/openshift/
>>> openshift-ansible/tree/master/roles/openshift_metrics)
>>>
>>> ?
>>>
>>> Best regards,
>>> Stéphane
>>> --
>>> Stéphane Klein <cont...@stephane-klein.info>
>>> blog: http://stephane-klein.info
>>> cv : http://cv.stephane-klein.info
>>> Twitter: http://twitter.com/klein_stephane
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>>
>> --
>> Tim Bielawa, Software Engineer [ED-C137]
>> Cell: 919.332.6411 <(919)%20332-6411>  | IRC: tbielawa (#openshift)
>> 1BA0 4FAB 4C13 FBA0 A036  4958 AD05 E75E 0333 AE37
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com <http://www.exosite.com/>*
>
> Making Machines More Human.
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: In OpenShift Ansible, what is the differences between roles/openshift_hosted_metrics and roles/openshift_metrics ?

2017-04-28 Thread Jason DeTiberus
On Apr 28, 2017 1:02 PM, "Aleksandar Lazic" <al...@me2digital.eu> wrote:

Hi.

The difference for me is that the hosted one is able to be configured and
installed at install time, like hosted logging and registry, and the
non-hosted one looks like legacy to me.

https://github.com/openshift/openshift-ansible/tree/master/
roles/openshift_hosted_logging

https://github.com/openshift/openshift-ansible/tree/master/
roles/openshift_hosted

Regards aleks



Alex Wauck <alexwa...@exosite.com> schrieb am 28.04.2017:
>
> I think Stéphane meant to link to this: https://github.com/openshift/
> openshift-ansible/tree/master/roles/openshift_hosted_metrics
>
> What's the difference between that one and openshift_metrics?
>

openshift_hosted_metrics is in the process of being deprecated in favor of
openshift_metrics.

--
Jason DeTiberus


> On Fri, Apr 28, 2017 at 11:46 AM, Tim Bielawa <tbiel...@redhat.com> wrote:
>
>> I believe that openshift-hosted-logging installs kibana (logging
>> exploration) whereas openshift-metrics will install hawkular (a metric
>> storage engine).
>>
>> On Fri, Apr 28, 2017 at 9:25 AM, Stéphane Klein <
>> cont...@stephane-klein.info> wrote:
>>
>>> Hi,
>>>
>>> what is the differences between :
>>>
>>> * roles/openshift_hosted_metrics (https://github.com/openshift/
>>> openshift-ansible/tree/master/roles/openshift_hosted_logging)
>>> * and roles/openshift_metrics (https://github.com/openshift/
>>> openshift-ansible/tree/master/roles/openshift_metrics)
>>>
>>> ?
>>>
>>> Best regards,
>>> Stéphane
>>> --
>>> Stéphane Klein <cont...@stephane-klein.info>
>>> blog: http://stephane-klein.info
>>> cv : http://cv.stephane-klein.info
>>> Twitter: http://twitter.com/klein_stephane
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>>
>> --
>> Tim Bielawa, Software Engineer [ED-C137]
>> Cell: 919.332.6411 <(919)%20332-6411>  | IRC: tbielawa (#openshift)
>> 1BA0 4FAB 4C13 FBA0 A036  4958 AD05 E75E 0333 AE37
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com <http://www.exosite.com/>*
>
> Making Machines More Human.
>
> --
>
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users