ulimit core size for a specific pod (container) in OpenShift

2018-06-27 Thread Saravanakumar Arumugam
Hi, Is there a way in OpenShift to configure the ulimit core size (as 0) for a specific docker container? In Docker, there is an argument like --ulimit core=0 to "docker run" by which you can set the core size for a specific container. Is there some configuration available in OpenShift
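
For context, a minimal sketch of the Docker-level controls the question refers to (the image name is a placeholder, and the daemon.json setting applies to every container on that host rather than a single pod):

  # per-container, when invoking Docker directly
  docker run --ulimit core=0 <image>

  # host-wide default for all containers, in /etc/docker/daemon.json (restart docker afterwards)
  {
    "default-ulimits": {
      "core": { "Name": "core", "Hard": 0, "Soft": 0 }
    }
  }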

Re: How to make 172.30.0.1 (kubernetes service) health checked?

2018-06-27 Thread Clayton Coleman
In OpenShift 3.9, when a master goes down, the endpoints object should be updated within 15s (the TTL on the record for the master). You can check the output of "oc get endpoints -n default kubernetes" - if you still see the master's IP in that list after 15s, then something else is wrong. On Wed,
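
To see which master IPs currently back the in-cluster service, a quick check (the jsonpath variant just prints the raw endpoint addresses):

  oc get endpoints kubernetes -n default
  oc get endpoints kubernetes -n default -o jsonpath='{.subsets[*].addresses[*].ip}'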

Re: Log tracing on configmaps modifications - or other resources

2018-06-27 Thread Clayton Coleman
If you have API audit logging on (see the docs for master-config) you would see who edited the config map and at what time. On Jun 27, 2018, at 1:59 PM, leo David wrote: Hello everyone, I'm encountering this situation on OS Origin 3.9, in which someone with full access in a particular namespace
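
For reference, basic audit logging in OpenShift 3.x is switched on in /etc/origin/master/master-config.yaml; a minimal sketch (the file path and retention numbers are examples), after which the master API service needs a restart:

  auditConfig:
    enabled: true
    auditFilePath: /var/lib/origin/audit-ocp.log
    maximumFileRetentionDays: 10
    maximumFileSizeMegabytes: 10
    maximumRetainedFiles: 10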

Log tracing on configmaps modifications - or other resources

2018-06-27 Thread leo David
Hello everyone, I'm encountering this situation on OS Origin 3.9, in which someone with full access in a particular namespace modified a ConfigMap and broke a service. Is there a way to trace who edited a resource in OpenShift, and when - as a security concern? Thank you very much! -- Leo
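
Once audit logging is enabled (see the reply above), tracing edits to a particular ConfigMap comes down to searching the audit log; a rough sketch, assuming the log path from the example above and a ConfigMap named my-config (exact field names depend on the audit log format):

  grep 'configmaps/my-config' /var/lib/origin/audit-ocp.log | grep -E 'update|patch|delete'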

List the options for openshift_node_labels

2018-06-27 Thread Rafael Tomelin
Hi, Where can I find the list of configuration options for openshift_node_group_name and openshift_node_labels in OpenShift Origin? -- Best regards, Rafael Tomelin skype: rafael.tomelin E-mail: rafael.tome...@gmail.com RHCE - Red Hat Certified Engineer PPT-205 - Puppet Certified
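
Both are openshift-ansible inventory variables; a minimal sketch of how they usually appear in an inventory (host names and label values are examples, and openshift_node_group_name only applies from 3.10 onwards, where the predefined group names such as node-config-compute are defined in the openshift-ansible repository):

  [nodes]
  # pre-3.10 style: labels set directly per host
  node1.example.com  openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
  # 3.10+ style: each host mapped to a predefined node group
  node2.example.com  openshift_node_group_name='node-config-compute'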

How to make 172.30.0.1 (kubernetes service) health checked?

2018-06-27 Thread Joel Pearson
Hi, I'm running OpenShift 3.9 on AWS with masters in HA mode, using Classic ELBs doing TCP load balancing. If I restart masters, then from outside the cluster the ELB does the right thing and takes a master out of service. However, if something tries to talk to the Kubernetes API inside the cluster,
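
One way to see what a pod actually experiences against the in-cluster address (a sketch; 172.30.0.1 is the default clusterIP of the kubernetes service, and /healthz on the masters normally answers without authentication):

  # from inside any pod
  curl -k https://172.30.0.1/healthz

  # from a machine with a kubeconfig: confirm the clusterIP and the masters behind it
  oc get svc kubernetes -n default
  oc get endpoints kubernetes -n default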