Certificate Updates

2019-01-28 Thread David Conde
We currently have a 3.9 cluster running on AWS. We ran the certificate
update Ansible playbook, but it failed because it does not update the nodes
that are in autoscaling groups.

Can anyone point me at what is required to handle the above?

Thanks,
Dave


Re: OKD v3.11.0 has been tagged and pushed to GitHub

2018-10-12 Thread David Conde
On the 4.0 changes, is the plan to provide the ability to upgrade from 3.11
to 4.0 or would a totally fresh install be required?

On Thu, Oct 11, 2018 at 4:55 PM Clayton Coleman  wrote:

> https://github.com/openshift/origin/releases/tag/v3.11.0 contains the
> release notes and latest binaries.
>
> The v3.11.0 tag on docker.io is up to date and will be a rolling tag (new
> fixes will be delivered there).
>
> Thanks to everyone on their hard work!


Re: OpenShift Origin on AWS

2018-10-09 Thread David Conde
We have upgraded from the 3.6 reference architecture to the 3.9 AWS
playbooks in openshift-ansible. There was quite a bit of work involved in
porting nodes into the scaling groups. We have upgraded our masters to 3.9
with the BYO playbooks but have not ported them to use scaling groups yet.

Going forward we'll be sticking with the openshift-ansible AWS playbooks
over the reference architecture so that we can upgrade easily.
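
For reference, the basic flow with those playbooks is roughly the
following (a sketch based on the openshift-ansible AWS README; the
provisioning_vars.yml file name follows its example, so verify against
your checkout):

  ansible-playbook playbooks/aws/openshift-cluster/build_ami.yml -e @provisioning_vars.yml
  ansible-playbook playbooks/aws/openshift-cluster/provision_install.yml -e @provisioning_vars.yml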

On Tue, Oct 9, 2018 at 1:29 PM Joel Pearson 
wrote:

> There are CloudFormation templates as part of the 3.6 reference
> architecture, but that is now deprecated. I’m using that template at a
> client site and it worked fine (I’ve adapted it to work with 3.9 by using a
> static inventory, as we didn’t want to revisit our architecture from
> scratch). We did customise it a fair bit, though.
>
>
> https://github.com/openshift/openshift-ansible-contrib/blob/master/reference-architecture/aws-ansible/README.md
>
> Here is an example of a Jinja template that outputs a CloudFormation
> template.
>
> However, you can’t use the playbook as is for 3.9/3.10 because
> openshift-ansible has breaking changes to the playbooks.
>
> For some reason the new playbooks for 3.9/3.10 don’t use CloudFormation;
> they use the Amazon Ansible modules instead and interact directly with
> AWS resources:
>
>
> https://github.com/openshift/openshift-ansible/blob/master/playbooks/aws/README.md
>
> That new approach is pretty interesting though, as it uses prebuilt AMIs
> and auto-scaling groups, which makes it very quick to add nodes.
>
> Hopefully some of that is useful to you.
>
> On Tue, 9 Oct 2018 at 9:42 pm, Peter Heitman  wrote:
>
>> Thank you for the reminder and the pointer. I know of that document but
>> was too focused on searching for a CloudFormation template. I'll go back to
>> the reference architecture which I'm sure will answer at least some of my
>> questions.
>>
>> On Sun, Oct 7, 2018 at 4:24 PM Joel Pearson <
>> japear...@agiledigital.com.au> wrote:
>>
>>> Have you seen the AWS reference architecture?
>>> https://access.redhat.com/documentation/en-us/reference_architectures/2018/html/deploying_and_managing_openshift_3.9_on_amazon_web_services/index#
>>> On Tue, 2 Oct 2018 at 3:11 am, Peter Heitman  wrote:
>>>
 I've created a CloudFormation Stack for simple lab-test deployments of
 OpenShift Origin on AWS. Now I'd like to understand what would be best for
 production deployments of OpenShift Origin on AWS. In particular I'd like
 to create the corresponding CloudFormation Stack.

 I've seen the Install Guide page on Configuring for AWS and I've looked
 through the Red Hat QuickStart Guide for OpenShift Enterprise but am still
 missing information. For example, the Red Hat QuickStart Guide creates 3
 masters, 3 etcd servers and some number of compute nodes. Where are the
 routers (infra nodes) located? On the masters or on the etcd servers? How
 are the ELBs configured to work with those deployed routers? What if some
 of the traffic you are routing is not HTTP/HTTPS? What is required to
 support that?

 I've seen the simple CloudFormation stack (
 https://sysdig.com/blog/deploy-openshift-aws/) but haven't found
 anything comparable for something that is closer to production ready (and
 that likely takes advantage of the AWS VPC QuickStart:
 https://aws.amazon.com/quickstart/architecture/vpc/).

 Does anyone have any prior work that they could share or point me to?

 Thanks in advance,

 Peter Heitman



Re: Restricting access to some Routes

2018-08-30 Thread David Conde
Hi Peter,

Hopefully
https://docs.openshift.com/container-platform/3.9/architecture/networking/routes.html#whitelist
will sort you out.
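
A sketch of what that looks like on a route (the annotation is from those
docs; the route name and CIDRs are placeholders):

  oc annotate route my-admin-route haproxy.router.openshift.io/ip_whitelist="192.168.1.0/24 10.0.0.5"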

Dave

On Thu, Aug 30, 2018 at 1:54 PM Peter Heitman  wrote:

> In my deployment there are 5 routes - two of them are from OpenShift
> (docker-registry and registry-console) and three are specific to my
> application. Of the 5, 4 are administrative and shouldn't be accessible
> to just anyone on the Internet. One of my application's routes must be
> accessible to anyone on the Internet.
>
> My question is, what is the best practice to achieve this restriction? Is
> there a way to set IP address or subnet restrictions on a route? Do I need
> to set up separate nodes and separate routers so that I can use a firewall
> to restrict access to the 4 routes and allow access to the Internet
> service? Any suggestions?
>
> Peter
>


Re: Registry Permissions

2018-08-22 Thread David Conde
Perfect, thanks. I'll give that a go :)
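
The commands Ben describes below would look roughly like this (a sketch;
shared-images is a placeholder project name):

  # allow every authenticated user to pull everything:
  oc adm policy add-cluster-role-to-group system:image-puller system:authenticated

  # or scope pull access to a single shared project:
  oc adm policy add-role-to-group system:image-puller system:authenticated -n shared-images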

On Wed, Aug 22, 2018 at 2:59 PM Ben Parees  wrote:

>
>
> On Wed, Aug 22, 2018 at 9:58 AM, Ben Parees  wrote:
>
>>
>>
>> On Wed, Aug 22, 2018 at 9:49 AM, David Conde  wrote:
>>
>>> Thanks, will system:unauthenticated not open up the registry to people
>>> who are not authenticated at all? Also where do these permissions need to
>>> be added?
>>>
>>
>> I think you'd use oc adm policy add-cluster-role-to-group to add the
>> system:image-puller role to the system:authenticated group.
>>
>
>
> Sorry, that would be if you want everyone to be able to pull everything.
>
> If you only want to expose one project, then just use "add-role-to-group"
> and specify the namespace as well.
>
>
>
>>
>>
>>> I have created a new service account that is dedicated to pushing the
>>> images, this has been given the cluster permission of registry-admin. The
>>> goal is to now have the images available to be pulled in to any project.
>>>
>>> Thanks again,
>>> Dave
>>>
>>> On Wed, Aug 22, 2018 at 2:42 PM David Eads  wrote:
>>>
>>>> They are groups.  "system:authenticated" and "system:unauthenticated"
>>>> and you probably want to assign both.
>>>>
>>>> On Wed, Aug 22, 2018 at 9:39 AM Ben Parees  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Aug 22, 2018 at 6:51 AM, David Conde 
>>>>> wrote:
>>>>>
>>>>>> Is it possible to add global pull permissions to a project in the
>>>>>> registry? I'm looking to have a single place for pushing images to that 
>>>>>> all
>>>>>> projects can access, similar to how the Openshift project works for image
>>>>>> and template access.
>>>>>>
>>>>>
>>>>> You should be able to add appropriate permissions to the
>>>>> "system:authenticated" group, which would allow any authenticated user to
>>>>> access it.  CCing David+Jordan who may have a more preferred approach.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Dave
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Ben Parees | OpenShift
>>>>>
>>>>>
>>
>>
>> --
>> Ben Parees | OpenShift
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>


Re: Registry Permissions

2018-08-22 Thread David Conde
NAME                      ROLE                               USERS   GROUPS                 SERVICE ACCOUNTS   SUBJECTS
shared-resource-viewers   openshift/shared-resource-viewer           system:authenticated

^^ Is that the guy?

On Wed, Aug 22, 2018 at 2:54 PM David Eads  wrote:

> My mistake.  We only bind the openshift namespace to system:authenticated.
>
> On Wed, Aug 22, 2018 at 9:49 AM David Conde  wrote:
>
>> Thanks, will system:unauthenticated not open up the registry to people
>> who are not authenticated at all? Also where do these permissions need to
>> be added?
>>
>> I have created a new service account that is dedicated to pushing the
>> images, this has been given the cluster permission of registry-admin. The
>> goal is to now have the images available to be pulled in to any project.
>>
>> Thanks again,
>> Dave
>>
>> On Wed, Aug 22, 2018 at 2:42 PM David Eads  wrote:
>>
>>> They are groups.  "system:authenticated" and "system:unauthenticated"
>>> and you probably want to assign both.
>>>
>>> On Wed, Aug 22, 2018 at 9:39 AM Ben Parees  wrote:
>>>
>>>>
>>>>
>>>> On Wed, Aug 22, 2018 at 6:51 AM, David Conde  wrote:
>>>>
>>>>> Is it possible to add global pull permissions to a project in the
>>>>> registry? I'm looking to have a single place for pushing images to that 
>>>>> all
>>>>> projects can access, similar to how the Openshift project works for image
>>>>> and template access.
>>>>>
>>>>
>>>> You should be able to add appropriate permissions to the
>>>> "system:authenticated" group, which would allow any authenticated user to
>>>> access it.  CCing David+Jordan who may have a more preferred approach.
>>>>
>>>>
>>>>
>>>>
>>>>>
>>>>> Thanks,
>>>>> Dave
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Ben Parees | OpenShift
>>>>
>>>>


Re: Registry Permissions

2018-08-22 Thread David Conde
Thanks, will system:unauthenticated not open up the registry to people who
are not authenticated at all? Also where do these permissions need to be
added?

I have created a new service account dedicated to pushing the images; it
has been given the registry-admin cluster role. The goal is to now have the
images available to be pulled into any project.

Thanks again,
Dave

On Wed, Aug 22, 2018 at 2:42 PM David Eads  wrote:

> They are groups.  "system:authenticated" and "system:unauthenticated" and
> you probably want to assign both.
>
> On Wed, Aug 22, 2018 at 9:39 AM Ben Parees  wrote:
>
>>
>>
>> On Wed, Aug 22, 2018 at 6:51 AM, David Conde  wrote:
>>
>>> Is it possible to add global pull permissions to a project in the
>>> registry? I'm looking to have a single place for pushing images to that all
>>> projects can access, similar to how the Openshift project works for image
>>> and template access.
>>>
>>
>> You should be able to add appropriate permissions to the
>> "system:authenticated" group, which would allow any authenticated user to
>> access it.  CCing David+Jordan who may have a more preferred approach.
>>
>>
>>
>>
>>>
>>> Thanks,
>>> Dave
>>>
>>>
>>>
>>
>>
>> --
>> Ben Parees | OpenShift
>>
>>


Registry Permissions

2018-08-22 Thread David Conde
Is it possible to add global pull permissions to a project in the registry?
I'm looking to have a single place to push images to that all projects can
access, similar to how the OpenShift project works for image and template
access.

Thanks,
Dave


Re: How to avoid upgrading to 3.10?

2018-08-15 Thread David Conde
This caught me out yesterday as well. A fix is on the release-3.9 branch
now, so updating your checkout from there should help. If you are on AWS,
as far as I know you will also have to rebuild your AMI so that the new
yum repo files are picked up.
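
Refreshing an existing checkout to pick up the fix should just be
something like:

  cd openshift-ansible
  git fetch origin
  git checkout release-3.9
  git pull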

On Tue, Aug 14, 2018 at 9:12 PM Peter Heitman  wrote:

> I use ansible to deploy OpenShift. All of my current deployments are 3.9
> and I'd like to stay on 3.9 until we can do enough testing on 3.10 to be
> comfortable upgrading.
>
> Can someone point me to any documentation on how to avoid the forced
> upgrade to 3.10 when I deploy a new instance of OpenShift? I currently
> checkout release-3.9 of the ansible scripts:
>
> git clone https://github.com/openshift/openshift-ansible
> cd openshift-ansible
> git checkout release-3.9
>
> My inventory has the variables
>
> openshift_release=v3.9
> openshift_pkg_version=-3.9.0
>
> and yet I get the error below. How do I stay on 3.9?
>
> Failure summary:
>
>
>   1. Hosts:ph-dev-pshtest-master.pdx.hcl.com, 
> ph-dev-pshtest-minion1.pdx.hcl.com, ph-dev-pshtest-minion2.pdx.hcl.com, 
> ph-dev-pshtest-minion3.pdx.hcl.com
>  Play: OpenShift Health Checks
>  Task: Run health checks (install) - EL
>  Message:  One or more checks failed
>  Details:  check "package_version":
>Some required package(s) are available at a version
>that is higher than requested
>  origin-3.10.0
>  origin-node-3.10.0
>  origin-master-3.10.0
>This will prevent installing the version you requested.
>Please check your enabled repositories or adjust 
> openshift_release.
>


Re: Autoscaling groups

2018-07-27 Thread David Conde
Thanks for the help and have a great weekend :)
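
For anyone else doing this later, creating new node groups with the AWS
playbooks would look roughly like the following (a sketch; the playbook
paths and the provisioning_vars.yml file follow the openshift-ansible AWS
README, so verify against your checkout):

  ansible-playbook playbooks/aws/openshift-cluster/provision_nodes.yml -e @provisioning_vars.yml
  ansible-playbook playbooks/aws/openshift-cluster/accept.yml -e @provisioning_vars.yml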

On Thu, Jul 26, 2018 at 5:24 PM Clayton Coleman  wrote:

> It is not possible to do for masters.  You'd need to create a new cluster.
>
> I don't think the node group addition is in official docs yet because it's
> still tech preview.  So Ansible is the best you'll get for now.
>
> On Tue, Jul 24, 2018 at 10:04 AM David Conde  wrote:
>
>> Thanks,
>>
>> Is it possible to also do that with masters post upgrade? Do you have any
>> info you could point me at to create the new node groups post upgrade?
>>
>>
>>
>> On Tue, Jul 24, 2018 at 3:00 PM Clayton Coleman 
>> wrote:
>>
>>> Upgrading from regular nodes to autoscaling groups is not implemented.
>>> You’d have to add new node groups post upgrade and manage it that way.
>>>
>>> > On Jul 24, 2018, at 7:22 AM, David Conde  wrote:
>>> >
>>> > I'm in the process of upgrading an origin cluster running on AWS from
>>> 3.7 to 3.9 using openshift ansible. I'd like the new instances to be
>>> registered in autoscaling groups.
>>> >
>>> > I have seen that if I create a new origin 3.9 cluster using the AWS
>>> playbooks this happens as part of the install, how would I go about
>>> ensuring this happens as part of the upgrade from 3.7 to 3.9?
>>> >
>>> > Thanks,
>>> > Dave


Re: Autoscaling groups

2018-07-24 Thread David Conde
Thanks,

Is it possible to also do that with masters post upgrade? Do you have any
info you could point me at to create the new node groups post upgrade?



On Tue, Jul 24, 2018 at 3:00 PM Clayton Coleman  wrote:

> Upgrading from regular nodes to autoscaling groups is not implemented.
> You’d have to add new node groups post upgrade and manage it that way.
>
> > On Jul 24, 2018, at 7:22 AM, David Conde  wrote:
> >
> > I'm in the process of upgrading an origin cluster running on AWS from
> 3.7 to 3.9 using openshift ansible. I'd like the new instances to be
> registered in autoscaling groups.
> >
> > I have seen that if I create a new origin 3.9 cluster using the AWS
> playbooks this happens as part of the install, how would I go about
> ensuring this happens as part of the upgrade from 3.7 to 3.9?
> >
> > Thanks,
> > Dave


Fwd: Securing Masters

2017-08-29 Thread David Conde
Just bumping this one; any recommendations on the below?


What is the recommendation around securing access to the masters? The AWS
reference architecture currently has the master ELB fully public on the
admin port. Is it recommended to lock down access to the masters to
specific IPs?
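
One straightforward lever is the master ELB's security group; a sketch
with the AWS CLI (the group ID and office CIDR are placeholders, and port
443 assumes the reference architecture's external master ELB listener):

  aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0
  aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 203.0.113.0/24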


Cluster AutoScaler

2017-08-25 Thread David Conde
Are there any plans to include this in OpenShift?

https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md


Securing Masters

2017-08-24 Thread David Conde
What is the recommendation around securing access to the masters? The AWS
reference architecture currently has the master ELB fully public on the
admin port. Is it recommended to lock down access to the masters to
specific IPs?


Re: DBus Error

2017-08-21 Thread David Conde
For completeness, this was solved here:
https://github.com/openshift/openshift-ansible-contrib/issues/660

On Thu, Aug 17, 2017 at 4:11 PM, David Conde <da...@donedeal.ie> wrote:

> I am trying to add a node to Origin 3.6. I'm using the add-node.py script
> from the openshift-ansible-contrib AWS reference architecture, but I am
> seeing the following exception: 'This module requires dbus python bindings'
>
>
> Has anything changed around dbus requirements?
>
> Thanks,
> Dave
>


DBus Error

2017-08-17 Thread David Conde
I am trying to add a node to Origin 3.6. I'm using the add-node.py script
from the openshift-ansible-contrib AWS reference architecture, but I am
seeing the following exception: 'This module requires dbus python bindings'


Has anything changed around dbus requirements?

Thanks,
Dave


Re: Docker Thin Pool Space

2017-08-15 Thread David Conde
Ah, that's handy to know, thanks.

On that note, I'm trying to upgrade a 3.4 cluster to 3.5 using Ansible, but
I get the following error: "Available origin-docker-excluder version 3.6.0
is higher than the upgrade target version". Has anyone seen this before?

On Tue, Aug 15, 2017 at 2:31 PM, Scott Dodson  wrote:

> I believe prior to 1.5 the garbage collector and docker free space
> requirement are tuned to the same value of 90% utilization which means that
> the garbage collection won't trigger a cleanup before you're unable to
> create new containers. You can override the GC threshold to 85% which would
> ensure that garbage collection happens before you hit the 90% mark where
> docker stops creating new containers.
>
> --
> Scott
>
> On Tue, Aug 15, 2017 at 9:07 AM, Alexey Surikov <
> alexey.suri...@booking.com> wrote:
>
>> On Mon, Aug 14, 2017 at 3:20 PM,
>>  wrote:
>> >
>> >
>> > Thanks Aleksandar,
>> >
>> > Do you just have that cron'd on each node?
>> >
>>
>> Isn't that supposed to be done by kubelet running on the nodes?
>>
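
On 3.x nodes, the threshold override Scott mentions would typically go
through kubeletArguments in node-config.yaml; a minimal sketch (the exact
values are illustrative):

  kubeletArguments:
    image-gc-high-threshold:
    - "85"
    image-gc-low-threshold:
    - "80"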


Re: Docker Thin Pool Space

2017-08-14 Thread David Conde
Thanks

On Mon, Aug 14, 2017 at 2:20 PM, Aleksandar Lazic 
wrote:

> Hi David.
>
> Yes.
>
> on Monday, 14 August 2017 at 15:05 was written:
>
>
> Thanks Aleksandar,
>
> Do you just have that cron'd on each node?
>
> On Mon, Aug 14, 2017 at 1:24 PM, Aleksandar Lazic 
> wrote:
>
> Hi David.
>
> on Monday, 14 August 2017 at 13:46 was written:
>
> > I keep getting the below error on my app nodes. Running Origin v1.4.1
> > on AWS installed using the reference architecture. Am I missing
> > something which should be cleaning up?
>
> You should clean up the local Docker storage on a regular basis.
>
> Something like this.
>
> https://github.com/rhcarvalho/openshift-devtools/blob/master/docker-cleanup
>
> > Error syncing pod, skipping: failed to "StartContainer" for "jnlp"
> > with ErrImagePull: "failed to register layer: devmapper: Thin Pool has
> > 4801 free data blocks which is less than minimum required 4863 free
> > data blocks. Create more free space in thin pool or use
> > dm.min_free_space option to change behavior"
> >
> > Thanks,
> >
> > Dave
>
> --
> Best Regards
> Aleks
>
>
>
>
> --
> Best Regards
> Aleks
>


Re: Docker Thin Pool Space

2017-08-14 Thread David Conde
Thanks Aleksandar,

Do you just have that cron'd on each node?
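
If so, a minimal crontab entry might look like this (the install path and
schedule are assumptions, pointing at the cleanup script linked below):

  0 3 * * * /usr/local/bin/docker-cleanup >> /var/log/docker-cleanup.log 2>&1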

On Mon, Aug 14, 2017 at 1:24 PM, Aleksandar Lazic 
wrote:

> Hi David.
>
> on Monday, 14 August 2017 at 13:46 was written:
>
> > I keep getting the below error on my app nodes. Running Origin v1.4.1
> > on AWS installed using the reference architecture. Am I missing
> > something which should be cleaning up?
>
> You should clean up the local Docker storage on a regular basis.
>
> Something like this.
>
> https://github.com/rhcarvalho/openshift-devtools/blob/master/docker-cleanup
>
> > Error syncing pod, skipping: failed to "StartContainer" for "jnlp"
> > with ErrImagePull: "failed to register layer: devmapper: Thin Pool has
> > 4801 free data blocks which is less than minimum required 4863 free
> > data blocks. Create more free space in thin pool or use
> > dm.min_free_space option to change behavior"
> >
> > Thanks,
> >
> > Dave
>
> --
> Best Regards
> Aleks
>


Docker Thin Pool Space

2017-08-14 Thread David Conde
I keep getting the below error on my app nodes. I'm running Origin v1.4.1
on AWS, installed using the reference architecture. Am I missing something
that should be cleaning this up?

Error syncing pod, skipping: failed to "StartContainer" for "jnlp" with
ErrImagePull: "failed to register layer: devmapper: Thin Pool has 4801 free
data blocks which is less than minimum required 4863 free data blocks.
Create more free space in thin pool or use dm.min_free_space option to
change behavior"


Thanks,

Dave


Re: Kubelet & Cadvisor

2017-05-19 Thread David Conde
From what I could see, most of the issue seems to be
https://github.com/DataDog/dd-agent/blob/master/utils/kubernetes/kubeutil.py
around the cAdvisor access.


On 19 May 2017 at 00:39, Alex Creek <therealcree...@gmail.com> wrote:

> Ah nice.  I’m gonna dig through their agent code for kube and see if
> there’s any low hanging fruit that can be knocked out to get it talking to
> openshift/heapster.  They have a couple repos on github
> https://github.com/DataDog/integrations-core/blob/master/kubernetes/check.py
>
> Alex
>
> From: David Conde <da...@donedeal.ie>
> Date: Thursday, May 18, 2017 at 8:56 AM
> To: Alex Creek <therealcree...@gmail.com>
> Cc: Jay Vyas <jv...@redhat.com>, users <users@lists.openshift.redhat.com>
> Subject: Re: Kubelet & Cadvisor
>
>
>
> Hi Alex,
>
>
>
> I was able to get past that by enabling hostDir for the service account, I
> also had to set privileged: true in the security context so it could access
> the cgroups mounts etc. The next issue I hit was that it tries to access
> cAdvisor via a non-secure port instead of via the kubelet stats API.
>
>
>
> I also had to mount kubelet certs into a volume so that they could be used
> to talk to kubelet over SSL. I contacted their support about the cAdvisor
> issue; they responded that it's currently in their backlog.
>
>
>
>
>
> On 18 May 2017 at 13:16, Alex Creek <therealcree...@gmail.com> wrote:
>
> +1 for protips on getting datadog to monitor openshift
>
>
>
> I wasn’t able to get the datadog agent working with 1.4.  I tried both of
> their out-of-the-box solutions for the k8s integration and no dice.  Have
> yet to deep dive and investigate.
>
>
>
> The initial problem I ran into was their pre-baked monitoring daemonSet
> used hostPath volumes and I didn’t have them enabled.  Once I did enable
> them the entire container filesystem was readonly and the datadog agent
> fell over when it tried to write to disk :\  I tried the other solution
> running the agent via docker cli and the container starts and appears to be
> working but no metrics are collected.
>
> Alex
>
> From: <users-boun...@lists.openshift.redhat.com> on behalf of David
> Conde <da...@donedeal.ie>
> Date: Thursday, May 18, 2017 at 6:24 AM
> To: Jay Vyas <jv...@redhat.com>
> Cc: users <users@lists.openshift.redhat.com>
> Subject: Re: Kubelet & Cadvisor
>
>
>
> Thanks Jay,
>
>
>
> I contacted DataDog and they have said it is a known issue for them which
> they expect to have fixed in a few months.
>
>
>
> Can anyone point me at a guide for a good setup for monitoring and
> alerting when using OpenShift? I have looked at DataDog, but it looks like
> there is an outstanding bug in the k8s integration which prevents it from
> working with OpenShift installs. I also installed Hawkular, but it looks
> like there is no UI to configure alerts in place yet.
>
>
>
> Would anyone be willing to share their experience?
>
> Thanks,
> Dave
>
> On 17 May 2017 at 13:23, Jay Vyas <jv...@redhat.com> wrote:
>
>
>
> On May 17, 2017, at 7:37 AM, David Conde <da...@donedeal.ie> wrote:
>
> Hi,
>
>
>
> I am trying to get the DataDog agent working with k8s integration. I'm
> hitting an issue around cadvisor not being available. I have read that
> cadvisor is available via kubelet as long as I'm using certs to access it.
>
>
>
> Does anyone know what the equivalent of the 2 URLs below are when trying
> to access via kubelet?
>
>
> - /api/v1.3/machine/
> - /api/v1.3/subcontainers/
>
> In general I don't think there is anything wrong with accessing cadvisor
> directly, except that I believe cadvisor isn't exposed from kubelets. So
> I think /stats/summary in the kubelet will externalize some of the
> cadvisor metrics you want.
>
> The thread below describes the idea behind stats as a cadvisor wrapper:
> https://groups.google.com/forum/m/#!topic/kubernetes-sig-node/txBjT8-WvM0
>
>
>
> Thanks,
>
> Dave
>


Re: Kubelet & Cadvisor

2017-05-18 Thread David Conde
Thanks Jay,

I contacted DataDog and they have said it is a known issue for them which
they expect to have fixed in a few months.

Can anyone point me at a guide for a good setup for monitoring and alerting
when using OpenShift? I have looked at DataDog, but it looks like there is
an outstanding bug in the k8s integration which prevents it from working
with OpenShift installs. I also installed Hawkular, but it looks like there
is no UI to configure alerts in place yet.

Would anyone be willing to share their experience?

Thanks,
Dave



On 17 May 2017 at 13:23, Jay Vyas <jv...@redhat.com> wrote:

>
> On May 17, 2017, at 7:37 AM, David Conde <da...@donedeal.ie> wrote:
>
> Hi,
>
> I am trying to get the DataDog agent working with k8s integration. I'm
> hitting an issue around cadvisor not being available. I have read that
> cadvisor is available via kubelet as long as I'm using certs to access it.
>
> Does anyone know what the equivalent of the 2 URLs below are when trying
> to access via kubelet?
>
> - /api/v1.3/machine/
> - /api/v1.3/subcontainers/
>
>
> In general I don't think there is anything wrong with accessing cadvisor
> directly, except that I believe cadvisor isn't exposed from kubelets. So
> I think /stats/summary in the kubelet will externalize some of the
> cadvisor metrics you want.
>
> The thread below describes the idea behind stats as a cadvisor wrapper:
> https://groups.google.com/forum/m/#!topic/kubernetes-sig-node/txBjT8-WvM0
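
A quick way to poke at that endpoint from a master (a sketch; the node
name, kubelet port 10250, and the client cert paths are assumptions for a
typical Origin install):

  curl -sk --cert /etc/origin/master/admin.crt \
       --key /etc/origin/master/admin.key \
       https://node1.example.com:10250/stats/summary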
>
> Thanks,
> Dave
>


Kubelet & Cadvisor

2017-05-17 Thread David Conde
Hi,

I am trying to get the DataDog agent working with k8s integration. I'm
hitting an issue around cadvisor not being available. I have read that
cadvisor is available via kubelet as long as I'm using certs to access it.

Does anyone know what the equivalent of the 2 URLs below are when trying to
access via kubelet?

- /api/v1.3/machine/
- /api/v1.3/subcontainers/

Thanks,
Dave


Re: Routing & External Service

2017-05-03 Thread David Conde
Thanks Aleksandar,

I'll give that a try.



David Conde

On 2 May 2017 at 22:36, Aleksandar Lazic <al...@me2digital.eu> wrote:

> Hi David.
>
> On Wed, 26 Apr 2017 16:09:03 +0100, David Conde <da...@donedeal.ie> wrote:
>
> > I am looking for a bit of advice on the best practice for routing.
> >
> > I have a service which I do not control. It lives behind an ELB and
> > runs over plain HTTP.
> >
> > I would like to add the following to it via an Openshift cluster:
> > 1) HTTPS termination
> > 2) CORS headers
> > 3) Enhance the request to include some API keys via http headers
> >
> > I could deploy a new service that adds 2 + 3 with 1 added via a
> > route. But haproxy in front of haproxy seems overkill.
> >
> > I was looking at the potential of a service with a type of
> > ExternalName but that does not help with adding the headers needed.
> >
> > I'm also not too keen on adding a configmap to all the haproxy config
> > in the router config just to add the few extra headers for a single
> > route.
> >
> > What would be the recommended way to achieve the above?
>
> Well I would use the Passthrough Termination
>
> https://docs.openshift.org/latest/architecture/core_concepts/routes.html#passthrough-termination
>
> with "insecureEdgeTerminationPolicy: Redirect" and terminate on your
> custom haproxy.
>
> In case you don't want to build your own haproxy image, you can use my
> custom haproxy image:
>
> https://hub.docker.com/r/me2digital/haproxy17/
>
> based on
>
> https://gitlab.com/aleks001/haproxy17-centos
>
> HTH
>
>
> > Thanks,
> > David Conde
>
> --
> Best regards
> Aleksandar Lazic - ME2Digital e. U.
> https://me2digital.online/
>
>
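
A passthrough route along those lines might look like this (a sketch; the
route name, service name, and host are placeholders):

  apiVersion: v1
  kind: Route
  metadata:
    name: external-api
  spec:
    host: api.example.com
    to:
      kind: Service
      name: custom-haproxy
    tls:
      termination: passthrough
      insecureEdgeTerminationPolicy: Redirect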


LoadBalancer & Security Groups

2017-04-28 Thread David Conde
I have a cluster running on AWS that was provisioned using the aws
reference architecture scripts. When I try and create a service of
type=LoadBalancer I can see an ELB gets created but I then get the
following error 'Multiple tagged security groups found for instance xxx
ensure only the k8s security group is tagged'

Looking at the security groups attached to the instance listed, it is an
infra node with 2 security groups 1) Infra group 2) Node group

Removing the node security group from the infrastructure nodes allows it to
pass through but I expect that'll cause other issues.

Does anyone have any ideas on how best to proceed?
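
One approach consistent with the error text is to make sure only one of
the groups carries the Kubernetes cluster tag; a sketch with the AWS CLI
(the group ID is a placeholder, and KubernetesCluster as the tag key is an
assumption worth verifying against your provisioned resources):

  aws ec2 delete-tags --resources sg-0123456789abcdef0 --tags Key=KubernetesCluster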


David Conde


Routing & External Service

2017-04-26 Thread David Conde
I am looking for a bit of advice on the best practice for routing.

I have a service which I do not control. It lives behind an ELB and runs
over plain HTTP.

I would like to add the following to it via an Openshift cluster:
1) HTTPS termination
2) CORS headers
3) Enhance the request to include some API keys via http headers

I could deploy a new service that adds 2 + 3 with 1 added via a route. But
haproxy in front of haproxy seems overkill.

I was looking at the potential of a service with a type of ExternalName but
that does not help with adding the headers needed.

I'm also not too keen on adding a configmap to all the haproxy config in
the router config just to add the few extra headers for a single route.

What would be the recommended way to achieve the above?

Thanks,
David Conde


Fwd: Node AutoScaling

2017-01-30 Thread David Conde
Hi Subhendu,

Ideally yes, I would like to keep my masters running on-prem and have some
nodes in AWS. I'm thinking that might be a step too far at the moment,
though, and would settle for two clusters, with the ability to scale nodes
up and down in the second cluster on AWS.



On 30 January 2017 at 15:49, Subhendu Ghosh <sghosh...@gmail.com> wrote:

> Hi David
>
> Are you looking to scale a single cluster from on-prem to AWS? One set of
> masters?
>
> Subhendu
>
> On Jan 30, 2017 15:41, "David Conde" <da...@donedeal.ie> wrote:
>
>> Hi Seth,
>>
>> Thanks for getting back to me, would the goal be to look at reusing what
>> k8s has in beta at the moment? If you could point me at any discussions
>> that have been had on it that would be great, maybe its something I could
>> contribute back if I get it working.
>>
>> We are currently running on-prem but I'd love to have the ability to
>> provision extra capacity on AWS when needed.
>>
>> Thanks,
>> Dave
>>
>>
>> On 30 January 2017 at 15:27, Seth Jennings <sjenn...@redhat.com> wrote:
>>
>>> Upstream kube has this in beta.  Origin doesn't support this right now.
>>> The real trick is joining nodes to the cluster.  Currently,
>>> openshift-ansible has a playbook for joining nodes to an existing cluster
>>> but that pattern doesn't work really well for node autoscaling.
>>>
>>> We are looking at ways to do this, but there is no work done as of yet.
>>>
>>> On Mon, Jan 30, 2017 at 7:23 AM, David Conde <da...@donedeal.ie> wrote:
>>>
>>>> Has any work been done on node autoscaling?
>>>>
>>>> I'd like to install an origin cluster on AWS with the ability to scale
>>>> nodes up and down using something like an autoscaling group.
>>>>
>>>>
>>>> Thanks,
>>>> David Conde
>>>>
>>>>


Re: Node AutoScaling

2017-01-30 Thread David Conde
Hi Seth,

Thanks for getting back to me, would the goal be to look at reusing what
k8s has in beta at the moment? If you could point me at any discussions
that have been had on it that would be great; maybe it's something I could
contribute back if I get it working.

We are currently running on-prem but I'd love to have the ability to
provision extra capacity on AWS when needed.

Thanks,
Dave


On 30 January 2017 at 15:27, Seth Jennings <sjenn...@redhat.com> wrote:

> Upstream kube has this in beta.  Origin doesn't support this right now.
> The real trick is joining nodes to the cluster.  Currently,
> openshift-ansible has a playbook for joining nodes to an existing cluster
> but that pattern doesn't work really well for node autoscaling.
>
> We are looking at ways to do this, but there is no work done as of yet.
>
> On Mon, Jan 30, 2017 at 7:23 AM, David Conde <da...@donedeal.ie> wrote:
>
>> Has any work been done on node autoscaling?
>>
>> I'd like to install an origin cluster on AWS with the ability to scale
>> nodes up and down using something like an autoscaling group.
>>
>>
>> Thanks,
>> David Conde
>>
>>