Hi, tl;dr: chicken-and-egg problem between docker (using cbr0) and kubelet
I have a working 1.4.0 cluster set up manually and I want to automate the process. The master runs on one AWS EC2 instance (apiserver, controller-manager, scheduler, etcd) and the nodes run on multiple EC2 instances (kubelet, kube-proxy, docker). For networking, I'm using AWS routes created automatically by the controller-manager: each node gets assigned its own Pod CIDR, and the AWS routes are created successfully.

The main thing I'm working on is setting up cbr0 and passing it to docker. I'm using --configure-cbr0 on kubelet, and it creates cbr0 successfully. However, docker needs to be running for kubelet to start, but docker can't start if cbr0 doesn't exist.

Manually, I start docker on its default bridge; kubelet then creates cbr0 and restarts docker. How do I automate this? One option I'm trying is editing /etc/init.d/docker to use cbr0 only if it exists, so that when kubelet restarts docker, cbr0 gets picked up.

Notes:
- I have used flannel before to set up networking, but I want to see if I can let kubernetes set up the networking itself.
- --configure-cbr0 is deprecated in 1.4.0.
- I'm also trying network plugins with kubenet, but that has the same chicken-and-egg problem between docker and kubelet. Does kubenet restart docker?

-Christopher Rigor
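The /etc/init.d/docker edit I'm trying looks roughly like this. It's only a sketch: the function name is mine, and the real init script differs per distro, but the idea is to use cbr0 only once kubelet has created it, and fall back to docker's default docker0 bridge until then.

```shell
#!/bin/sh
# Sketch of the bridge-selection logic for /etc/init.d/docker.
# pick_bridge_opts is a hypothetical helper name, not from any distro script.
pick_bridge_opts() {
    # /sys/class/net/<name> exists exactly when the interface exists,
    # so this checks for cbr0 without needing the `ip` tool.
    if [ -d "/sys/class/net/$1" ]; then
        echo "--bridge=$1"
    else
        # Bridge not created yet: emit nothing, docker uses default docker0.
        echo ""
    fi
}

# The daemon start line in the init script would then become something like:
#   exec dockerd $(pick_bridge_opts cbr0)
```

On first boot docker comes up on docker0, kubelet creates cbr0 and restarts docker, and the second start picks up --bridge=cbr0.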
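For reference, the kubenet setup I'm experimenting with looks roughly like the fragment below. The flag names are from the 1.4-era docs as I understand them, and the CIDR values are placeholders, so treat this as a sketch rather than a known-good config. The appeal is that with kubenet, docker shouldn't need cbr0 at all: kubelet creates the bridge and attaches the pod veths itself, so docker can start first with its own networking disabled.

```shell
# Start docker without its own bridge; kubelet/kubenet manages cbr0:
dockerd --bridge=none --iptables=false --ip-masq=false &

# Then kubelet with the kubenet network plugin (CIDR is a placeholder):
kubelet \
  --network-plugin=kubenet \
  --non-masquerade-cidr=10.0.0.0/8 \
  --cloud-provider=aws \
  ...
```

If that's right, docker never has to be restarted for the bridge, which would sidestep the chicken-and-egg problem entirely, but I'd like confirmation of whether kubenet actually works this way.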