How to specify admission controller correctly?

2018-07-24 Thread Marc Boorshtein
I've got Origin 3.9 running and am trying to set up an admission controller
webhook.  I added the appropriate configurations to master-config.yaml.  I
added the following:

kind: ValidatingWebhookConfiguration
apiVersion: admissionregistration.k8s.io/v1beta1
metadata:
  name: opa-validating-webhook
webhooks:
  - name: validating-webhook.openpolicyagent.org
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["pods"]
    clientConfig:
      #url: https://unison-opa.unison.svc/kubernetes/admission/reveiw
      service:
        namespace: unison
        name: unison-opa
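
For completeness: in admissionregistration.k8s.io/v1beta1, clientConfig.service
also accepts a path, and the API server normally needs a caBundle so it will
trust the webhook's serving certificate.  A sketch of the clientConfig with
those two fields filled in (the path is copied from the commented-out url
above; the caBundle value is only a placeholder):

```yaml
clientConfig:
  service:
    namespace: unison
    name: unison-opa
    # path copied from the commented-out url above; without a path
    # the API server POSTs to "/"
    path: /kubernetes/admission/reveiw
  # base64-encoded CA certificate that signed the webhook's serving
  # cert; required for the API server to trust the TLS connection
  caBundle: <base64-CA-bundle>
```

This matches the log output, where the POST goes to
https://unison-opa.unison.svc:443/?timeout=30s with no path component.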


here's the unison-opa service:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-07-18T01:35:21Z
  labels:
    app: unison
  name: unison-opa
  namespace: unison
  resourceVersion: "13118928"
  selfLink: /api/v1/namespaces/unison/services/unison-opa
  uid: d596be9f-8a2a-11e8-9ee7-525400887c40
spec:
  clusterIP: 172.30.254.250
  ports:
  - name: 443-tcp
    port: 443
    protocol: TCP
    targetPort: 8444
  selector:
    deploymentconfig: unison
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

here's what I see in the master logs:
Jul 24 14:21:26 os atomic-openshift-master-api: W0724 14:21:26.389179
1723 admission.go:252] Failed calling webhook, failing open
validating-webhook.openpolicyagent.org: failed calling admission webhook "
validating-webhook.openpolicyagent.org": Post
https://unison-opa.unison.svc:443/?timeout=30s: net/http: request canceled
while waiting for connection (Client.Timeout exceeded while awaiting
headers)
Jul 24 14:21:26 os atomic-openshift-master-api: E0724 14:21:26.389241
1723 admission.go:253] failed calling admission webhook "
validating-webhook.openpolicyagent.org": Post
https://unison-opa.unison.svc:443/?timeout=30s: net/http: request canceled
while waiting for connection (Client.Timeout exceeded while awaiting
headers)

I've also tried going through the router and connecting directly to port
8444.  Nothing works.  The service is set up correctly; I can connect to
it from inside containers.

Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


CentOS PaaS SIG meeting (2018-07-25)

2018-07-24 Thread Ricardo Martinelli de Oliveira
Hello,
It's time for our weekly PaaS SIG sync-up meeting.

Time: 1700 UTC - Wednesdays (date -d "1700 UTC")
Date: Tomorrow Wednesday, 25 July 2018
Where: IRC- Freenode - #centos-devel

Agenda:
- OpenShift Current Status
-- rpms
-- automation
- Open Floor

Minutes from last meeting:
https://www.centos.org/minutes/2018/July/centos-devel.2018-07-11-17.22.log.html

-- 
Ricardo Martinelli de Oliveira
Senior Software Engineer


Re: Autoscaling groups

2018-07-24 Thread David Conde
Thanks,

Is it possible to also do that with masters post-upgrade? Do you have any
info you could point me to on creating the new node groups post-upgrade?



On Tue, Jul 24, 2018 at 3:00 PM Clayton Coleman  wrote:

> Upgrading from regular nodes to autoscaling groups is not implemented.
> You’d have to add new node groups post upgrade and manage it that way.
>
> > On Jul 24, 2018, at 7:22 AM, David Conde  wrote:
> >
> > I'm in the process of upgrading an origin cluster running on AWS from
> 3.7 to 3.9 using openshift ansible. I'd like the new instances to be
> registered in autoscaling groups.
> >
> > I have seen that if I create a new origin 3.9 cluster using the AWS
> playbooks this happens as part of the install, how would I go about
> ensuring this happens as part of the upgrade from 3.7 to 3.9?
> >
> > Thanks,
> > Dave


Re: Autoscaling groups

2018-07-24 Thread Clayton Coleman
Upgrading from regular nodes to autoscaling groups is not implemented.
You’d have to add new node groups post upgrade and manage it that way.

> On Jul 24, 2018, at 7:22 AM, David Conde  wrote:
>
> I'm in the process of upgrading an origin cluster running on AWS from 3.7 to 
> 3.9 using openshift ansible. I'd like the new instances to be registered in 
> autoscaling groups.
>
> I have seen that if I create a new origin 3.9 cluster using the AWS playbooks 
> this happens as part of the install, how would I go about ensuring this 
> happens as part of the upgrade from 3.7 to 3.9?
>
> Thanks,
> Dave