Public bug reported:

1) cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.1 LTS"

2) My setup is:
OpenStack                             Queens
magnum-api                            6.1.0-0ubuntu1
magnum-common                         6.1.0-0ubuntu1
magnum-conductor                      6.1.0-0ubuntu1
python-magnum                         6.1.0-0ubuntu1
python-magnumclient                   2.8.0-0ubuntu1

3) What you expected to happen:
A Kubernetes cluster is created successfully.

4) What happened instead:

I create the following cluster template:
openstack coe cluster template create kubernetes-cluster-template \
                     --image fedora-atomic-27 \
                     --external-network external \
                     --dns-nameserver 8.8.8.8 \
                     --master-flavor m1.medium \
                     --flavor m1.medium \
                     --docker-storage-driver overlay2 \
                     --coe kubernetes \
                     --tls-disabled

where fedora-atomic-27 is Fedora-Atomic-27-20180419.0.x86_64.qcow2
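
For reference, an image like this would typically be registered in Glance along these lines (an illustrative sketch; the os_distro property is what Magnum keys on to select its Fedora Atomic driver):

openstack image create fedora-atomic-27 \
                     --disk-format qcow2 \
                     --container-format bare \
                     --property os_distro=fedora-atomic \
                     --file Fedora-Atomic-27-20180419.0.x86_64.qcow2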

I launch my cluster as follows:
openstack coe cluster create kubernetes-cluster \
                        --cluster-template kubernetes-cluster-template \
                        --master-count 1 \
                        --node-count 1 \
                        --keypair magnum

Checking the status reports the following:
$ openstack coe cluster show kubernetes-cluster
+---------------------+------------------------------------------------------------+
| Field               | Value                                                      |
+---------------------+------------------------------------------------------------+
| status              | CREATE_IN_PROGRESS                                         |
| cluster_template_id | 5cf527fd-10a5-4a64-9da6-db02322afc18                       |
| node_addresses      | []                                                         |
| uuid                | 1a9490c2-b351-4903-9237-a94e9139307b                       |
| stack_id            | 3bd7783f-7469-4ad8-920a-9a38955f8d10                       |
| status_reason       | None                                                       |
| created_at          | 2018-12-20T12:50:07+00:00                                  |
| updated_at          | 2018-12-20T12:50:13+00:00                                  |
| coe_version         | None                                                       |
| labels              | {}                                                         |
| faults              |                                                            |
| keypair             | magnum                                                     |
| api_address         | None                                                       |
| master_addresses    | []                                                         |
| create_timeout      | 60                                                         |
| node_count          | 1                                                          |
| discovery_url       | https://discovery.etcd.io/5e3c06417323e4b2c267e74bbcf0a402 |
| master_count        | 1                                                          |
| container_version   | None                                                       |
| name                | kubernetes-cluster                                         |
| master_flavor_id    | m1.medium                                                  |
| flavor_id           | m1.medium                                                  |
+---------------------+------------------------------------------------------------+

The status remains stuck in CREATE_IN_PROGRESS.
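
Since Magnum drives the deployment through a Heat stack, the stack_id above can be used to dig into what is hanging (commands shown for illustration, run against the stack_id from the table):

$ openstack stack resource list 3bd7783f-7469-4ad8-920a-9a38955f8d10
$ openstack stack event list 3bd7783f-7469-4ad8-920a-9a38955f8d10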

Checking /var/log/cloud-init-output.log inside the newly created
Kubernetes master VM reports the following:

....
+ echo 'Waiting for Kubernetes API...'
Waiting for Kubernetes API...
+ curl --silent http://127.0.0.1:8080/version
+ sleep 5
+ curl --silent http://127.0.0.1:8080/version
+ sleep 5
+ curl --silent http://127.0.0.1:8080/version
+ sleep 5
+ curl --silent http://127.0.0.1:8080/version
+ sleep 5
+ curl --silent http://127.0.0.1:8080/version
+ sleep 5
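
The same endpoint can also be probed by hand on the master to watch the failure directly:

curl -v http://127.0.0.1:8080/version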

Checking the status of failed services inside the VM reports the following:
systemctl list-units | grep failed
● kube-apiserver.service          loaded failed     failed          kubernetes-apiserver
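
The journal for the failing unit can be pulled with:

journalctl -u kube-apiserver --no-pager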

Reviewing the journal for kube-apiserver.service reveals the true issue:
-- Logs begin at Thu 2018-12-20 12:52:47 UTC, end at Thu 2018-12-20 13:06:07 UTC. --
Dec 20 12:58:44 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: Started kubernetes-apiserver.
Dec 20 12:58:46 kubernetes-cluster-lathbpb54t7w-master-0.novalocal runc[1966]: I1220 12:58:45.991182       1 server.go:121] Version: v1.9.3
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal runc[1966]: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: kube-apiserver.service: Unit entered failed state.
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: Stopped kubernetes-apiserver.
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: Started kubernetes-apiserver.
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal runc[2261]: I1220 12:58:47.544421       1 server.go:121] Version: v1.9.3
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal runc[2261]: error creating self-signed certificates: open /var/run/kubernetes/apiserver.crt: permission denied
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=1/FAILURE
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: kube-apiserver.service: Unit entered failed state.
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Dec 20 12:58:47 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: Stopped kubernetes-apiserver.
Dec 20 12:58:48 kubernetes-cluster-lathbpb54t7w-master-0.novalocal systemd[1]: Started kubernetes-apiserver.

So apparently the kube-apiserver process cannot write to /var/run/kubernetes/ to create its self-signed certificate.
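
A quick sanity check, plus a speculative workaround, would look like the following (this assumes the apiserver runs as the kube user inside its container, which I have not confirmed):

# Inspect who owns the directory the apiserver fails to write to
ls -ld /var/run/kubernetes
# Assumption: the apiserver runs as user 'kube'; hand it the directory and retry
sudo chown kube:kube /var/run/kubernetes
sudo systemctl restart kube-apiserver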

Is there any way to fix this?

** Affects: magnum (Ubuntu)
     Importance: Undecided
         Status: New

-- 
https://bugs.launchpad.net/bugs/1809254

Title:
  Cannot create kubernetes cluster with tls_disabled
