Could you access the registry web console?

Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux

________________________________
From: [email protected] 
<[email protected]> on behalf of Andreas Mather 
<[email protected]>
Sent: Friday, December 1, 2017 9:01:34 PM
To: [email protected]
Subject: Cannot pull images from internal registry when creating a pod

Hi All!

I'm facing an issue where, even though I can push images from my client into 
the internal registry, creating a pod which uses internal images fails with 
'image not found'. Further debugging indicated an authentication problem.

I've created the following issue, where I describe all the details:
https://github.com/openshift/origin/issues/17523

The issue was closed without any reason given, so I hope someone here can help.

In the meantime, I've tried installing the cluster with the following 
openshift-ansible checkouts/configurations and hit the problem in all setups:

openshift-ansible checkout openshift-ansible-3.7.2-1-8-g56b529e:
installs the cluster without issues

openshift-ansible checkout master:
installs the cluster but then fails at "Reconcile with RBAC file"
(that's the reason I usually used the above checkout)

openshift-ansible checkout master with openshift_repos_enable_testing=true in 
[OSEv3:vars]:
installs the cluster but then fails at "Verify that TSB is running"

So it doesn't seem to be correlated with the openshift-ansible version I check 
out or the openshift/kubernetes version the cluster installs with.

Another notable detail: as my nodes and master communicate via host-to-host 
IPsec, I had to set the MTU to 1350 in /etc/origin/node/node-config.yaml and 
reboot all nodes and the master prior to installing the registry. I had TLS and 
networking issues before, but setting the MTU resolved all of them.
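In case it helps anyone reproducing this, a quick way to confirm the MTU value a node config actually carries is to grep it out of the YAML. This is only a sketch: the sample file is inlined here, whereas on a real node you would point the awk at /etc/origin/node/node-config.yaml.

```shell
# Sketch: extract the mtu value from a node-config.yaml. The sample contents
# are inlined here; on a real node, read /etc/origin/node/node-config.yaml.
cat > /tmp/node-config-sample.yaml <<'EOF'
networkConfig:
  mtu: 1350
  networkPluginName: redhat/openshift-ovs-subnet
EOF
mtu=$(awk '/^[[:space:]]*mtu:/ {print $2}' /tmp/node-config-sample.yaml)
echo "mtu=$mtu"
```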

Maybe I'm missing a configuration step, so here's the complete list of commands 
I issue to set up the registry, push the image, and create the pod:

# create registry
# on master as root (whoami: system:admin):
$ cd /etc/origin/master
$ oadm registry --config=admin.kubeconfig --service-account=registry
$ oc get svc docker-registry # get service IP address
$ oadm ca create-server-cert \
    --signer-cert=/etc/origin/master/ca.crt \
    --signer-key=/etc/origin/master/ca.key \
    --signer-serial=/etc/origin/master/ca.serial.txt \
    --hostnames='registry.mycompany.com,docker-registry.default.svc.cluster.local,172.30.185.69' \
    --cert=/etc/secrets/registry.crt \
    --key=/etc/secrets/registry.key
$ oc rollout pause dc/docker-registry
$ oc secrets new registry-certificates /etc/secrets/registry.crt /etc/secrets/registry.key
$ oc secrets link registry registry-certificates
$ oc secrets link default  registry-certificates
$ oc volume dc/docker-registry --add --type=secret \
    --secret-name=registry-certificates -m /etc/secrets
$ oc set env dc/docker-registry \
    REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
    REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key
$ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{"name":"registry","livenessProbe": {"httpGet": {"scheme":"HTTPS"}}}]}}}}'
$ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{"name":"registry","readinessProbe": {"httpGet": {"scheme":"HTTPS"}}}]}}}}'
$ oc rollout resume dc/docker-registry
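(Side note, in case a quoting slip is hiding in there: those inline patch payloads are easy to break, and a malformed one only surfaces as a server-side error. A quick local sanity check, just a sketch using the livenessProbe patch string from above:)

```shell
# Validate the patch payload locally before handing it to `oc patch`.
patch='{"spec": {"template": {"spec": {"containers":[{"name":"registry","livenessProbe": {"httpGet": {"scheme":"HTTPS"}}}]}}}}'
if echo "$patch" | python3 -m json.tool >/dev/null 2>&1; then
  echo "patch JSON is valid"
else
  echo "patch JSON is invalid"
fi
```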

# deploy registry certs
$ cat deploy_docker_certs.sh
for h in kubmaster1 kubnode1 kubnode2
do
  ssh="ssh -o StrictHostKeyChecking=no $h"

  for dir in docker-registry.default.svc.cluster.local:5000 \
             172.30.185.69:5000 registry.mycompany.com:5000
  do
    $ssh "mkdir /etc/docker/certs.d/${dir}" 2>/dev/null
    scp -o StrictHostKeyChecking=no /etc/origin/master/ca.crt ${h}:/etc/docker/certs.d/${dir}/
  done
  done
  $ssh sudo systemctl daemon-reload
  $ssh sudo systemctl restart docker
done
$ ./deploy_docker_certs.sh
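For anyone checking the result of that script: this is the layout it should leave on each host, sketched here in a temp directory so it can be inspected anywhere. On the real hosts the base is /etc/docker/certs.d and ca.crt is the actual /etc/origin/master/ca.crt.

```shell
# Sketch of the per-host layout produced by the deploy script, built in a
# temp dir; the real base is /etc/docker/certs.d.
base=$(mktemp -d)
for dir in docker-registry.default.svc.cluster.local:5000 \
           172.30.185.69:5000 registry.mycompany.com:5000
do
  mkdir -p "$base/$dir"
  : > "$base/$dir/ca.crt"   # stand-in for the real CA certificate
done
find "$base" -name ca.crt | wc -l   # one ca.crt per registry name
```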

# external route
$ oc create route reencrypt --service=docker-registry \
    --cert=/server/tls/mywildcard.cer --key=/server/tls/mywildcard.key \
    --ca-cert=/server/tls/mywildcard_ca.cer \
    --dest-ca-cert=/etc/origin/master/ca.crt \
    --hostname=registry.mycompany.com

# create user
$ newuser=amather
$ htpasswd htpasswd $newuser # htpasswd auth and file location configured in ansible hosts file
$ oc create user $newuser
$ oc create identity htpasswd_auth:$newuser
$ oc create useridentitymapping htpasswd_auth:$newuser $newuser
$ oadm policy add-role-to-user system:registry $newuser # registry login
$ oadm policy add-role-to-user admin $newuser # project admin
$ oadm policy add-role-to-user system:image-builder $newuser # image pusher

# on my client (os x)
$ oc login
$ oc whoami
amather
$ docker login -u $(oc whoami) -p $(oc whoami -t) registry.mycompany.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
$ docker pull busybox
$ docker tag busybox registry.mycompany.com/default/my-busybox
$ docker push registry.mycompany.com/default/my-busybox

# on master
$ oc get is
NAME         DOCKER REPO                                           TAGS      UPDATED
my-busybox   docker-registry.default.svc:5000/default/my-busybox   latest    28 minutes ago

$ cat testapp.yml
apiVersion: v1
kind: Pod
metadata:
  generateName: testapp-
spec:
  # for testing; so I know where to grab the docker logs from
  nodeSelector:
    openshift-infra: apiserver
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
  - name: test
    image: default/my-busybox
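One thing that might be worth ruling out (an assumption on my part, not a confirmed fix): `image: default/my-busybox` is resolved like any plain Docker reference, so the kubelet may not even be pointing it at the internal registry. Referencing the full repository exactly as reported by `oc get is` makes the pull target explicit:

```yaml
  - name: test
    # full internal-registry reference, as shown in the DOCKER REPO column
    image: docker-registry.default.svc:5000/default/my-busybox
```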

$ oc create -f testapp.yml
$ oc get pod
NAME                      READY     STATUS             RESTARTS   AGE
docker-registry-2-7klmn   1/1       Running            1          33m
router-1-8zdm5            1/1       Running            3          40m
testapp-m7trf             1/2       ImagePullBackOff   0          31m


As described in the issue (where more error logs are shown), there seems to be 
an authentication problem between the default service account and the registry. 
E.g.:

$ oc get secrets
$ oc describe secret default-dockercfg-zbb95
...
dockercfg:
{"172.30.185.69:5000":{"username":"serviceaccount","password":"xxx...","email":"[email protected]","auth":"yyy..."},"docker-registry.default.svc:5000":{"username":"serviceaccount","password":"xxx...","email":"[email protected]","auth":"yyy..."}}
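One way to dig into that (a sketch with a placeholder token, since the real values are redacted above): the "auth" field in a dockercfg entry is just base64 of username:password, so decoding the real "yyy..." string shows exactly which credentials the kubelet presents to the registry, and the password half can be compared against the service account's current token secret.

```shell
# Sketch: the "auth" field is base64("username:password"). The token below is
# a placeholder; substitute the real "yyy..." value from the secret.
auth=$(printf 'serviceaccount:placeholder-token' | base64 | tr -d '\n')
decoded=$(printf '%s' "$auth" | base64 -d)
echo "$decoded"
```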
...

$ oc login --token=xxx....
Logged into "https://kubmaster1.mycompany.com:8443" as 
"system:serviceaccount:default:default" using the token provided.
...

$ docker login -u $(oc whoami) -p $(oc whoami -t) docker-registry.default.svc.cluster.local:5000
Error response from daemon: Get 
https://docker-registry.default.svc.cluster.local:5000/v2/: unauthorized: 
authentication required

The last message, "unauthorized: authentication required", also shows up in the 
dockerd logs on the system where the pod is being created.

Any hints on how to debug this further are highly appreciated.

Thanks,
Andreas


_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
