Hi Kuryrs!

Yesterday in the meeting we discussed the need to design a way to run
functional tests for the Kuryr-Kubernetes integration. I looked into the
possibilities today and came up with the following proposal (more detailed
explanations follow the proposal).
Prerequisites
=============

Usual services:

* Neutron and its agents (LBaaSv2 included)
* Keystone

Devstack plugin
===============

* Installs Docker just like kuryr-libnetwork's plugin
* Installs Docker Compose
* Pulls gcr.io/google_containers/hyperkube-amd64:v1.3.6
* Pulls quay.io/coreos/etcd:v3.0.7
* Runs in --net=host:

  * coreos/etcd
  * google_containers/hyperkube /setup-files.sh
  * google_containers/hyperkube /hyperkube apiserver
  * google_containers/hyperkube /hyperkube controller-manager
  * google_containers/hyperkube /hyperkube scheduler
  * google_containers/hyperkube /hyperkube kubelet with the Kuryr CNI driver
    mounted as a volume

* Starts the Kuryr Watcher pointing to the apiserver as a devstack service

After the steps above, we can use a Python Kubernetes client to run the tests.

apiserver
---------

This is what the Kuryr watcher connects to. Example parameters for running the
hyperkube container::

    --service-cluster-ip-range=10.0.0.1/24 \
    --insecure-bind-address=0.0.0.0 \
    --insecure-port=8080 \
    --etcd-servers=http://${LOCAL_IP}:2379 \
    --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota \
    --client-ca-file=/srv/kubernetes/ca.crt \
    --basic-auth-file=/srv/kubernetes/basic_auth.csv \
    --min-request-timeout=300 \
    --tls-cert-file=/srv/kubernetes/server.cert \
    --tls-private-key-file=/srv/kubernetes/server.key \
    --token-auth-file=/srv/kubernetes/known_tokens.csv \
    --allow-privileged=true \
    --v=2 \
    --logtostderr=true

controller-manager
------------------

It will be running the plugin for the LoadBalancer service type in the future.
Example parameters for running the hyperkube container::

    --master=127.0.0.1:8080 \
    --service-account-private-key-file=/srv/kubernetes/server.key \
    --root-ca-file=/srv/kubernetes/ca.crt \
    --min-resync-period=3m \
    --v=2 \
    --logtostderr=true

scheduler
---------

It does the hard job of scheduling to the only kubelet.
Example parameters for running the hyperkube container::

    --master=127.0.0.1:8080 \
    --service-account-private-key-file=/srv/kubernetes/server.key \
    --root-ca-file=/srv/kubernetes/ca.crt \
    --min-resync-period=3m \
    --v=2 \
    --logtostderr=true

Kubelet
-------

It needs to run in privileged mode and with the following volumes::

    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --volume=/var/log/kuryr:/var/log/kuryr \
    --net=host \
    --privileged=true \
    --pid=host

It also needs the CNI driver, which should be mounted as a volume from the
current /opt/stack/kuryr-kubernetes source. However, the container is unlikely
to have Python, so I propose to build a CNI binary with python-install and
mount just the binary. The example parameters to run it would be::

    --allow-privileged=true \  # this we can probably omit for tests
    --api-servers="http://127.0.0.1:8080" \
    --v=2 \
    --address='0.0.0.0' \
    --enable-server \
    --containerized \
    --network-plugin=cni

Why Hyperkube and compose instead of minikube?
==============================================

Hyperkube gives us more flexibility to run only the building blocks that we
need for the integration, e.g. not running kube-proxy. It can also run
unmodified on an OpenStack CI Jenkins worker with few resources.

Minikube spawns a virtual machine using Docker Machine. This means it would
need more resources and would complicate its use of Keystone/Neutron. It could
possibly be hacked to use the Docker Machine generic SSH driver and point it
at the same machine, but that seems like too much trouble compared to the
simplicity of the solution above.
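To make the "use a Python Kubernetes client to run the tests" step a bit more
concrete, here is a minimal sketch of what a functional test helper could look
like. Everything in it is an assumption for illustration, not part of the
proposal: it talks to the insecure apiserver port from the example parameters
above (http://127.0.0.1:8080) via plain REST calls with the standard library,
so it does not commit us to any particular client package, and the
``pod_manifest``/``create_pod`` names are made up.

```python
"""Sketch of a functional test helper for the proposed devstack setup.

Assumptions (hypothetical, for illustration only): the hyperkube apiserver
listens on its insecure port at http://127.0.0.1:8080, and we drive it with
plain REST calls instead of a dedicated client library.
"""
import json
import urllib.request

API = "http://127.0.0.1:8080"


def pod_manifest(name, image="nginx"):
    """Build a minimal v1 Pod manifest. When the kubelet runs this pod,
    the Kuryr CNI driver should wire it into a Neutron network, which is
    what the functional tests would assert on."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {"containers": [{"name": name, "image": image}]},
    }


def create_pod(name, namespace="default"):
    """POST the manifest to the standard pod-creation endpoint.
    Only works once the devstack plugin has started all the containers."""
    req = urllib.request.Request(
        "%s/api/v1/namespaces/%s/pods" % (API, namespace),
        data=json.dumps(pod_manifest(name)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A test would then create a pod, wait for it to be running, and check that the
Kuryr Watcher created the corresponding Neutron port; the endpoint used above
is the standard ``POST /api/v1/namespaces/{namespace}/pods`` API path.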
Why Hyperkube and compose instead of just running Kubernetes from source
========================================================================

Building Kubernetes would take a large amount of resources (~8 GiB) and more
time than pulling the hyperkube containers. However, this is a decision we may
want to revisit once we start contributing the Kuryr cloud provider to
Kubernetes (for the LoadBalancer service type).

-----------------------

Please, all Kuryrs, feel welcome to dispute the proposal and claims above and
to propose alternatives. After a bit of discussion we can write up a blueprint
and start implementing.

Regards,

Toni

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev