RE: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-23 Thread Den Cowboy
Why are you actually building 1.2.0-4 to let 1.2.0 work, instead of downgrading 
to (or staying on) the older origin-1.2.0-1.git.10183.7386b49.el7 like alexwauck? 
I ask because in Ansible I'm able to use 
openshift_pkg_version=-1.2.0-1.git.10183.7386b49.el7 but not 
openshift_pkg_version=-1.2.0-4.el7

Probably because you said: "This version is still getting signed and pushed 
out.  That takes more time."

Or is this because the version for origin-1.2.0-1.git.10183.7386b49.el7 is:
v1.2.0-1-g7386b49

Which is also a 'bad' version.
So, as far as I understand, we have to wait until origin-1.2.0-4.el7 is available 
for our Ansible install?



From: dencow...@hotmail.com
To: tdaw...@redhat.com
Subject: RE: define openshift origin version (stable 1.2.0) for Ansible install
Date: Thu, 23 Jun 2016 11:17:12 +
CC: users@lists.openshift.redhat.com




Can you maybe explain how to use this?
I performed a yum --enablerepo=centos-openshift-origin-testing install origin\*

oc version gives me 
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

But how do I add nodes (using Ansible) and that kind of thing? After running 
yum I have just one master and one node on the same host.
Thanks



> From: tdaw...@redhat.com
> Date: Wed, 22 Jun 2016 17:27:17 -0500
> Subject: Re: define openshift origin version (stable 1.2.0) for Ansible 
> install
> To: alexwa...@exosite.com
> CC: dencow...@hotmail.com; users@lists.openshift.redhat.com
> 
> Yep, seems that my new way of creating the rpms for CentOS got the
> version of the rpm right, but wrong for setting the ldflags, which was
> causing the binary to have a different version.
> 
> At some point in the near future we need to re-evaluate git tags and
> versions in the origin.spec file.  (Why is the rpm spec version
> always 0.0.1 when in reality the version everywhere else is 1.2.0?)
> 
> Worked with Scott to figure out a correct way to consistently build
> the rpms.  In the end, both of our workflows failed in sneaky ways,
> so I just fixed things manually.  Not something we can do
> consistently, but I really needed to get a working 1.2.0 version out.
> 
> What works:  origin-1.2.0-4.el7
> https://cbs.centos.org/koji/buildinfo?buildID=11349
> 
> You should be able to test it within an hour via
> yum --enablerepo=centos-openshift-origin-testing install origin\*
> 
> This version is still getting signed and pushed out.  That takes more time.
> 
> Sorry for all the problems this has caused.
> 
> Troy
> 
> 
RE: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-23 Thread Den Cowboy
Can you maybe explain how to use this?
I performed a yum --enablerepo=centos-openshift-origin-testing install origin\*

oc version gives me 
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

But how do I add nodes (using Ansible) and that kind of thing? After running 
yum I have just one master and one node on the same host.
Thanks
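On the "how do I add nodes" question above: the yum install only puts the packages on one host; openshift-ansible configures whatever hosts are listed in the inventory. In the byo playbooks of this era, extra nodes are typically added by listing them under a new_nodes group and running the scaleup playbook from the openshift-ansible checkout (exact playbook path varies by release). A hedged inventory sketch, with placeholder host names:

```ini
[OSEv3:children]
masters
nodes
new_nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin

[masters]
master1.example.com

[nodes]
master1.example.com
node1.example.com

; hosts to be added by the scaleup playbook
[new_nodes]
node2.example.com
```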



Re: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-22 Thread Alex Wauck
This seems to be caused by the 1.2.0-2.el7 packages containing the wrong
version.  I had a conversation on IRC about this earlier (#openshift), and
somebody confirmed it.  I suspect a new release will be available soon.  At
any rate, downgrading to 1.2.0-1.el7 worked for us.

On Wed, Jun 22, 2016 at 8:55 AM, Den Cowboy <dencow...@hotmail.com> wrote:

> I tried:
> [OSEv3:vars]
> ansible_ssh_user=root
> deployment_type=origin
> openshift_pkg_version=-1.2.0
> openshift_image_tag=-1.2.0
>
> But it installed a release candidate and not v1.2.0
>
> oc v1.2.0-rc1-13-g2e62fab
> kubernetes v1.2.0-36-g4a3f9c5
>


-- 

Alex Wauck // DevOps Engineer
+1 612 790 1558 (USA Mobile)

*E X O S I T E*
275 Market Street, Suite 535
Minneapolis, MN 55405
*www.exosite.com <http://www.exosite.com/>*



RE: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-22 Thread Den Cowboy
Thanks for your fast reply
This is the beginning of my playbook:

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_pkg_version=v1.2.0
openshift_image_tag=v1.2.0

But I got an error:
TASK [openshift_master_ca : Install the base package for admin tooling] 
FAILED! => {"changed": false, "failed": true, "msg": "No Package matching 
'originv1.2.0' found available, installed or updated", "rc": 0, "results": []}
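The error message itself shows what went wrong: the role builds the yum package spec by concatenating the package name and openshift_pkg_version verbatim, so a value of v1.2.0 yields the nonexistent package name "originv1.2.0". With a leading hyphen (and no "v"), the result is a valid name-version spec. A minimal sketch of that concatenation (variable names are mine, not the role's):

```shell
# The yum package spec is plain string concatenation of name + version value,
# which is why openshift_pkg_version must start with a hyphen.
pkg="origin"

bad="v1.2.0"        # what was tried above
good="-1.2.0"       # leading hyphen -> valid yum name-version spec

echo "${pkg}${bad}"     # originv1.2.0  (no such package)
echo "${pkg}${good}"    # origin-1.2.0  (matches the rpm)
```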



Re: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-22 Thread Stéphane Klein
Personally I use these options to pin the OpenShift version:

openshift_pkg_version=v1.2.0
openshift_image_tag=v1.2.0


2016-06-22 13:24 GMT+02:00 Den Cowboy :

> Is it possible to define an origin version in your Ansible install?
> At the moment we have so many issues with our newest install (while we had
> 1.1.6 pretty stable for some time)
> We want to go to a stable 1.2.0
>
> Our issues:
> version = oc v1.2.0-rc1-13-g2e62fab
> So images are pulled with tag v1.2.0-rc1-13-g2e62fab, which doesn't
> exist in the openshift image repositories. OK, we have a workaround by
> editing the master and node configs and using '--images', but we don't
> like this approach
>
> logs on our nodes:
>  level=error msg="Error reading loginuid: open /proc/27182/loginuid: no
> such file or directory"
> level=error msg="Error reading loginuid: open /proc/27182/loginuid: no
> such file or directory"
>
> We started a mysql instance. We weren't able to use the service name to
> connect:
> mysql -u test -h mysql -p did NOT work
> mysql -u test -h 172.30.x.x (service ip) -p did work..
>
> So we have too many issues on this version of OpenShift. We've deployed it
> as a team several times, we're pretty confident with the setup, and it was
> always working fine for us. But these last few weird versions seem really
> bad for us.
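On the MySQL symptom above (service IP works, service name doesn't): in OpenShift/Kubernetes of this vintage, service names resolve through the cluster DNS as <service>.<namespace>.svc.cluster.local. If the short name fails inside a pod, trying the fully qualified form helps narrow the problem to the pod's resolv.conf search path versus the DNS server itself. A sketch of how that name is formed ("myproject" is a placeholder namespace):

```shell
# Build the fully qualified in-cluster DNS name for a service.
# "myproject" stands in for the actual project/namespace.
svc="mysql"
ns="myproject"

fqdn="${svc}.${ns}.svc.cluster.local"
echo "$fqdn"    # mysql.myproject.svc.cluster.local
```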
>
>


-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users