Re: Reducing toil in resource quota bumping

2018-08-30 Thread Clayton Coleman
Ultimately you need to ask what you are trying to prevent:

1. a user from accidentally blowing up the cluster
2. malicious users
3. an application breaking at runtime because it needs more resources than
it is allotted

The second one is more what we've been discussing here - being draconian up
front.  1 is usually where you'd have two quotas - initial and generous -
and then just swap them out as needed, possibly via some automation.  3 is
what most people are most afraid of (failing to meet your SLA because you
didn't allocate resources).
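
A minimal sketch of that initial/generous pattern (names and values are
illustrative; the swap could be automated with something like
`oc replace -n <project> -f generous-quota.yaml`):

```yaml
# initial-quota.yaml - applied when the project is created
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
---
# generous-quota.yaml - same quota name, larger limits, swapped in on request
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
```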





On Thu, Aug 30, 2018 at 2:17 PM Andrew Feller  wrote:

> Thanks for the feedback Jessica!
>
> Limiting the # of projects users can create is definitely one of the things
> expected; however, the question was mostly focused on reducing toil due to
> changing resource quotas for projects.  The idea with option #1 was
> restricting devs to 1 project with heftier resources allocated, whereas the
> hope with option #2 was that a ClusterResourceQuota per developer might open
> up options for developers to modify project resource quotas without
> waiting on system administrators.
>
> On Thu, Aug 30, 2018 at 10:14 AM Jessica Forrester 
> wrote:
>
>>
>>
>> On Thu, Aug 30, 2018 at 8:18 AM Andrew Feller 
>> wrote:
>>
>>> Has anyone found an effective way to minimize toil between developers
>>> and system administrators regarding project resource quotas *without
>>> resorting to letting people do whatever they want unrestrained*?
>>>
>>> There are only 2 ideas I can see to address this issue:
>>>
>>>1. Removing self-provisioning access, provisioning a single project
>>>per developer, and giving them admin access to it.
>>>
>>>
>> You can limit the number of self-provisioned projects they can have
>>
>> https://docs.openshift.com/container-platform/3.10/admin_guide/managing_projects.html#limit-projects-per-user
>>
>>
>>>
>>>    2. Create ClusterResourceQuota per developer restricting total
>>>resources allowed
>>>
>>> I don't know how ClusterResourceQuota handles a system administrator
>>> increasing a project's quotas for a user who has already met their total.
>>>
>>
>> If either a cluster resource quota or a resource quota has been exceeded,
>> then you've exceeded quota for that resource and can't create more.
>>
>>
>>>
>>> Any feedback is welcomed and appreciated!
>>> --
>>>
>>>
>>> Andy Feller  •  Sr DevOps Engineer
>>>
>>> 900 Main Campus Drive, Suite 500, Raleigh, NC 27606
>>>
>>>
>>> e: afel...@bandwidth.com
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>
>
> --
>
>
> Andy Feller  •  Sr DevOps Engineer
>
> 900 Main Campus Drive, Suite 500, Raleigh, NC 27606
>
>
> e: afel...@bandwidth.com


Re: openshift-ansible release-3.10 - Install fails with control plane pods

2018-08-30 Thread Marc Schlegel
Thanks for the link. It looks like the api pod is not coming up at all!

Log from k8s_controllers_master-controllers-*

[vagrant@master ~]$ sudo docker logs 
k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_1
E0830 18:28:05.787358   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:594: Failed to list *v1.Pod: Get https://master.vnet.de:8443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:05.788589   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: Get https://master.vnet.de:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:05.804239   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: Get https://master.vnet.de:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:05.806879   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: Get https://master.vnet.de:8443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:05.808195   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: Get https://master.vnet.de:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.673507   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: Get https://master.vnet.de:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.770141   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: Get https://master.vnet.de:8443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.773878   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: Get https://master.vnet.de:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.778204   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.StorageClass: Get https://master.vnet.de:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
E0830 18:28:06.784874   1 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: Get https://master.vnet.de:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused

The log is full of those. Since it is all about the API, I tried to get the logs
from k8s_POD_master-api-master.vnet.de_kube-system_*, which are completely empty
:-/

[vagrant@master ~]$ sudo docker logs 
k8s_POD_master-api-master.vnet.de_kube-system_86017803919d833e39cb3d694c249997_1
[vagrant@master ~]$ 

Is there any special prerequisite for the api pod?

regards
Marc


> Marc,
> 
> could you please look over the issue [1] and pull the master pod logs and
> see if you bumped into same issue mentioned by the other folks?
> Also make sure the openshift-ansible release is the latest one.
> 
> Dani
> 
> [1] https://github.com/openshift/openshift-ansible/issues/9575
> 
> On Wed, Aug 29, 2018 at 7:36 PM Marc Schlegel  wrote:
> 
> > Hello everyone
> >
> > I am having trouble getting a working Origin 3.10 installation using the
> > openshift-ansible installer. My install always fails because the control
> > plane pods are not available. I've checked out the release-3.10 branch from
> > openshift-ansible and configured the inventory accordingly
> >
> >
> > TASK [openshift_control_plane : Start and enable self-hosting node]
> > **
> > changed: [master]
> > TASK [openshift_control_plane : Get node logs]
> > ***
> > skipping: [master]
> > TASK [openshift_control_plane : debug]
> > **
> > skipping: [master]
> > TASK [openshift_control_plane : fail]
> > *
> > skipping: [master]
> > TASK [openshift_control_plane : Wait for control plane pods to appear]
> > ***
> >
> > failed: [master] (item=etcd) => {"attempts": 60, "changed": false, "item":
> > "etcd", 

Re: Reducing toil in resource quota bumping

2018-08-30 Thread Andrew Feller
Thanks for the feedback Jessica!

Limiting the # of projects users can create is definitely one of the things
expected; however, the question was mostly focused on reducing toil due to
changing resource quotas for projects.  The idea with option #1 was
restricting devs to 1 project with heftier resources allocated, whereas the
hope with option #2 was that a ClusterResourceQuota per developer might open
up options for developers to modify project resource quotas without
waiting on system administrators.

On Thu, Aug 30, 2018 at 10:14 AM Jessica Forrester 
wrote:

>
>
> On Thu, Aug 30, 2018 at 8:18 AM Andrew Feller 
> wrote:
>
>> Has anyone found an effective way to minimize toil between developers and
>> system administrators regarding project resource quotas *without
>> resorting to letting people do whatever they want unrestrained*?
>>
>> There are only 2 ideas I can see to address this issue:
>>
>>1. Removing self-provisioning access, provisioning a single project
>>per developer, and giving them admin access to it.
>>
>>
> You can limit the number of self-provisioned projects they can have
>
> https://docs.openshift.com/container-platform/3.10/admin_guide/managing_projects.html#limit-projects-per-user
>
>
>>
>>    2. Create ClusterResourceQuota per developer restricting total
>>resources allowed
>>
>> I don't know how ClusterResourceQuota handles a system administrator
>> increasing a project's quotas for a user who has already met their total.
>>
>
> If either a cluster resource quota or a resource quota has been exceeded,
> then you've exceeded quota for that resource and can't create more.
>
>
>>
>> Any feedback is welcomed and appreciated!
>> --
>>
>>
>> Andy Feller  •  Sr DevOps Engineer
>>
>> 900 Main Campus Drive, Suite 500, Raleigh, NC 27606
>>
>>
>> e: afel...@bandwidth.com
>

-- 


Andy Feller  •  Sr DevOps Engineer

900 Main Campus Drive, Suite 500, Raleigh, NC 27606


e: afel...@bandwidth.com


Re: Restricting access to some Routes

2018-08-30 Thread Ahmed Ossama

Hi Peter,

We have the same case in one of our OpenShift deployments. We decided to 
experiment with router sharding.


https://blog.openshift.com/openshift-router-sharding-for-production-and-development-traffic/
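
As a rough sketch of the idea (labels and names here are made up): each router
shard is deployed with a route-label selector, e.g. ROUTE_LABELS="tier=internal"
on the router deployment, and then only admits routes carrying that label:

```yaml
# Served only by the router shard running with ROUTE_LABELS="tier=internal";
# the public shard (different ROUTE_LABELS) never exposes it.
apiVersion: v1
kind: Route
metadata:
  name: registry-console
  labels:
    tier: internal
```

The internal shard can then sit behind a firewall or on restricted nodes while
the public shard handles Internet traffic.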

On 8/30/18 3:07 PM, David Conde wrote:

Hi Peter,

Hopefully 
https://docs.openshift.com/container-platform/3.9/architecture/networking/routes.html#whitelist 
will sort you out.


Dave

On Thu, Aug 30, 2018 at 1:54 PM Peter Heitman wrote:


In my deployment there are 5 routes - two of them are from
OpenShift (docker-registry and registry-console) and three of them
are specific to my application. Of the 5, 4 of them are
administrative and shouldn't be accessed by just anyone on the
Internet. One of my application's routes is required to be accessed
by 'anyone' on the Internet.

My question is, what is the best practice to achieve this
restriction? Is there a way to set IP address or subnet
restrictions on a route? Do I need to set up separate nodes and
separate routers so that I can use a firewall to restrict access
to the 4 routes and allow access to the Internet service? Any
suggestions?

Peter





--
Regards,
Ahmed Ossama



Re: Reducing toil in resource quota bumping

2018-08-30 Thread Jessica Forrester
On Thu, Aug 30, 2018 at 8:18 AM Andrew Feller  wrote:

> Has anyone found an effective way to minimize toil between developers and
> system administrators regarding project resource quotas *without
> resorting to letting people do whatever they want unrestrained*?
>
> There are only 2 ideas I can see to address this issue:
>
>1. Removing self-provisioning access, provisioning a single project
>per developer, and giving them admin access to it.
>
>
You can limit the number of self-provisioned projects they can have
https://docs.openshift.com/container-platform/3.10/admin_guide/managing_projects.html#limit-projects-per-user
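
That limit is configured through the ProjectRequestLimit admission plugin in
master-config.yaml; a minimal sketch (selector labels and counts are
illustrative):

```yaml
admissionConfig:
  pluginConfig:
    ProjectRequestLimit:
      configuration:
        apiVersion: v1
        kind: ProjectRequestLimitConfig
        limits:
        # users labeled level=admin may request up to 10 projects
        - selector:
            level: admin
          maxProjects: 10
        # everyone else gets at most 2
        - maxProjects: 2
```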


>
>    2. Create ClusterResourceQuota per developer restricting total
>resources allowed
>
> I don't know how ClusterResourceQuota handles a system administrator
> increasing a project's quotas for a user who has already met their total.
>

If either a cluster resource quota or a resource quota has been exceeded,
then you've exceeded quota for that resource and can't create more.


>
> Any feedback is welcomed and appreciated!
> --
>
>
> Andy Feller  •  Sr DevOps Engineer
>
> 900 Main Campus Drive, Suite 500, Raleigh, NC 27606
>
>
> e: afel...@bandwidth.com


Re: Restricting access to some Routes

2018-08-30 Thread David Conde
Hi Peter,

Hopefully
https://docs.openshift.com/container-platform/3.9/architecture/networking/routes.html#whitelist
will sort you out.

Dave

On Thu, Aug 30, 2018 at 1:54 PM Peter Heitman  wrote:

> In my deployment there are 5 routes - two of them are from OpenShift
> (docker-registry and registry-console) and three of them are specific to my
> application. Of the 5, 4 of them are administrative and shouldn't be
> accessed by just anyone on the Internet. One of my application's routes is
> required to be accessed by 'anyone' on the Internet.
>
> My question is, what is the best practice to achieve this restriction? Is
> there a way to set IP address or subnet restrictions on a route? Do I need
> to set up separate nodes and separate routers so that I can use a firewall
> to restrict access to the 4 routes and allow access to the Internet
> service? Any suggestions?
>
> Peter
>


RE: Restricting access to some Routes

2018-08-30 Thread François VILLAIN
Hi

From this documentation:
https://docs.openshift.com/container-platform/3.10/architecture/networking/routes.html#route-specific-annotations

You can annotate a route with haproxy.router.openshift.io/ip_whitelist to set
an IP whitelist for the route.
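
In route form that would look something like this (route name and CIDR values
are illustrative):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: registry-console
  annotations:
    # space-separated list of allowed IPs and/or CIDRs;
    # requests from any other source address are dropped by the router
    haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 10.0.0.0/8
```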

Never tried though, let me know if this works 

François


From: users-boun...@lists.openshift.redhat.com
 On behalf of Peter Heitman
Sent: Thursday, 30 August 2018 14:54
To: users@lists.openshift.redhat.com
Subject: Restricting access to some Routes

In my deployment there are 5 routes - two of them are from OpenShift 
(docker-registry and registry-console) and three of them are specific to my 
application. Of the 5, 4 of them are administrative and shouldn't be accessed 
by just anyone on the Internet. One of my application's routes is required to be 
accessed by 'anyone' on the Internet.

My question is, what is the best practice to achieve this restriction? Is there 
a way to set IP address or subnet restrictions on a route? Do I need to set up 
separate nodes and separate routers so that I can use a firewall to restrict 
access to the 4 routes and allow access to the Internet service? Any 
suggestions?

Peter



Restricting access to some Routes

2018-08-30 Thread Peter Heitman
In my deployment there are 5 routes - two of them are from OpenShift
(docker-registry and registry-console) and three of them are specific to my
application. Of the 5, 4 of them are administrative and shouldn't be
accessed by just anyone on the Internet. One of my application's routes is
required to be accessed by 'anyone' on the Internet.

My question is, what is the best practice to achieve this restriction? Is
there a way to set IP address or subnet restrictions on a route? Do I need
to set up separate nodes and separate routers so that I can use a firewall
to restrict access to the 4 routes and allow access to the Internet
service? Any suggestions?

Peter


Reducing toil in resource quota bumping

2018-08-30 Thread Andrew Feller
Has anyone found an effective way to minimize toil between developers and
system administrators regarding project resource quotas *without resorting
to letting people do whatever they want unrestrained*?

There are only 2 ideas I can see to address this issue:

   1. Removing self-provisioning access, provisioning a single project per
   developer, and giving them admin access to it.
   2. Create ClusterResourceQuota per developer restricting total resources
   allowed

I don't know how ClusterResourceQuota handles a system administrator
increasing a project's quotas for a user who has already met their total.
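
For reference, a per-developer quota along the lines of option #2 looks roughly
like this (name, user, and limits are illustrative; the annotation selector
matches all projects requested by that user):

```yaml
apiVersion: v1
kind: ClusterResourceQuota
metadata:
  name: quota-for-alice
spec:
  selector:
    annotations:
      openshift.io/requester: alice
  quota:
    hard:
      pods: "20"
      requests.cpu: "4"
      requests.memory: 8Gi
```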

Any feedback is welcomed and appreciated!
-- 


Andy Feller  •  Sr DevOps Engineer

900 Main Campus Drive, Suite 500, Raleigh, NC 27606


e: afel...@bandwidth.com


Configure custom project roles OCP 3.10

2018-08-30 Thread Marcello Lorenzi
Hi All,
we tried to define some guidelines for project grants for all users on a
newer OCP cluster. In our previous experience we granted the admin role to
the system:authenticated group, but then some users could edit routes and
deployment configs. What is the best way to configure roles so that some
specified users can only restart containers and view logs?
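
Not an authoritative answer, but one common approach is a small custom role
granting only pod deletion (so that a deployment config or replica set
recreates the pod, i.e. a "restart") plus log access; a sketch using the
upstream RBAC API (role name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-restart-and-logs
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "delete"]  # delete triggers recreation by the controller
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
```

Bound per project with a RoleBinding, this avoids handing out edit rights to
routes and deployment configs.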

Thanks,
Marcello


Re: openshift-ansible release-3.10 - Install fails with control plane pods

2018-08-30 Thread Daniel Comnea
Marc,

could you please look over the issue [1] and pull the master pod logs and
see if you bumped into same issue mentioned by the other folks?
Also make sure the openshift-ansible release is the latest one.

Dani

[1] https://github.com/openshift/openshift-ansible/issues/9575

On Wed, Aug 29, 2018 at 7:36 PM Marc Schlegel  wrote:

> Hello everyone
>
> I am having trouble getting a working Origin 3.10 installation using the
> openshift-ansible installer. My install always fails because the control
> plane pods are not available. I've checked out the release-3.10 branch from
> openshift-ansible and configured the inventory accordingly
>
>
> TASK [openshift_control_plane : Start and enable self-hosting node]
> **
> changed: [master]
> TASK [openshift_control_plane : Get node logs]
> ***
> skipping: [master]
> TASK [openshift_control_plane : debug]
> **
> skipping: [master]
> TASK [openshift_control_plane : fail]
> *
> skipping: [master]
> TASK [openshift_control_plane : Wait for control plane pods to appear]
> ***
>
> failed: [master] (item=etcd) => {"attempts": 60, "changed": false, "item":
> "etcd", "msg": {"cmd": "/bin/oc get pod master-etcd-master.vnet.de -o
> json -n kube-system", "results": [{}], "returncode": 1, "stderr": "The
> connection to the server master.vnet.de:8443 was refused - did you
> specify the right host or port?\n", "stdout": ""}}
>
> TASK [openshift_control_plane : Report control plane errors]
> *
> fatal: [master]: FAILED! => {"changed": false, "msg": "Control plane pods
> didn't come up"}
>
>
> I am using Vagrant to setup a local domain (vnet.de) which also includes
> a dnsmasq-node to have full control over the dns. The following VMs are
> > running and DNS and SSH work as expected
>
> > Hostname         IP
> > domain.vnet.de   192.168.60.100
> > master.vnet.de   192.168.60.150  (DNS also works for openshift.vnet.de, which is
> >                  configured as openshift_master_cluster_public_hostname; also runs etcd)
> > infra.vnet.de    192.168.60.151  (openshift_master_default_subdomain wildcard points to this node)
> > app1.vnet.de     192.168.60.152
> > app2.vnet.de     192.168.60.153
>
>
> When connecting to the master-node I can see that several docker-instances
> are up and running
>
> [vagrant@master ~]$ sudo docker ps
> CONTAINER IDIMAGECOMMAND
> CREATED STATUS  PORTS
>  NAMES
>
> 9a0844123909ff5dd2137a4f "/bin/sh -c
> '#!/bi..."   19 minutes ago  Up 19 minutes
>  
> k8s_etcd_master-etcd-master.vnet.de_kube-system_a2c858fccd481c334a9af7413728e203_0
>
> 41d803023b72f216d84cdf54 "/bin/bash -c
> '#!/..."   19 minutes ago  Up 19 minutes
>  
> k8s_controllers_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_0
>
> 044c9d12588cdocker.io/openshift/origin-pod:v3.10.0
>  "/usr/bin/pod"   19 minutes ago  Up 19 minutes
>
>  
> k8s_POD_master-api-master.vnet.de_kube-system_86017803919d833e39cb3d694c249997_0
>
> 10a197e394b3docker.io/openshift/origin-pod:v3.10.0
>  "/usr/bin/pod"   19 minutes ago  Up 19 minutes
>
>  
> k8s_POD_master-controllers-master.vnet.de_kube-system_a3c3ca56f69ed817bad799176cba5ce8_0
>
> 20f4f86bdd07docker.io/openshift/origin-pod:v3.10.0
>  "/usr/bin/pod"   19 minutes ago  Up 19 minutes
>
>  
> k8s_POD_master-etcd-master.vnet.de_kube-system_a2c858fccd481c334a9af7413728e203_0
>
>
> However, there is no port 8443 open on the master-node. No wonder the
> ansible-installer complains.
>
> The machines are using a plain Centos 7.5 and I've run the
> openshift-ansible/playbooks/prerequisites.yml first and then
> openshift-ansible/playbooks/deploy_cluster.yml.
> I've double-checked the installation documentation and my Vagrant
> config...all looks correct.
>
> Any ideas/advice?
> regards
> Marc
>
>