Thanks for your response!

I have used it a bit, but now I'm trying to simulate a multi-node cluster,
and I don't have enough machines for that.  I have also done some
experimenting with k8s on GKE, but I wanted to learn more about k8s
networking and setup.

I've had some success: I got a multi-node cluster up and running, but I had
to disable the Weave add-on, so it wasn't a "real" setup -- I was actually
surprised it worked at all, since the nodes weren't really working as a
cluster.  I was just able to start pods on the nodes and access them
directly.

Now I'm at the point where I'm struggling to get Weave going, especially on
the worker node.  I'm using kubeadm v1.6.1 (I've been told there are issues
with 1.6.4).

At the moment, it seems that to get Weave going on the master node, I need
to do the following (rough commands are sketched after the list):

   - Do a kubeadm init
   - remove KUBELET_NETWORK_ARGS from the kubelet config ExecStart command
   - restart kubelet
   - manually copy a Weave conf file into /etc/cni/net.d
   - re-add KUBELET_NETWORK_ARGS to the kubelet ExecStart command in the
   conf file
   - restart kubelet
   - apply the Weave add-on
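
For reference, here is roughly what those steps look like as commands on my
Ubuntu VMs.  I've been commenting out the Environment line that defines
KUBELET_NETWORK_ARGS rather than editing ExecStart itself, which seems to
have the same effect; the drop-in path and the Weave conf file name are just
what I have locally, so they may differ on other setups:

sudo kubeadm init --apiserver-advertise-address=192.168.99.100

# comment out KUBELET_NETWORK_ARGS in the kubeadm drop-in, then restart kubelet
sudo sed -i 's/^Environment="KUBELET_NETWORK_ARGS=/#&/' \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# copy the Weave CNI config into place (file name taken from my machine)
sudo mkdir -p /etc/cni/net.d
sudo cp 10-weave.conf /etc/cni/net.d/

# un-comment KUBELET_NETWORK_ARGS again and restart kubelet
sudo sed -i 's/^#\(Environment="KUBELET_NETWORK_ARGS=\)/\1/' \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# apply the Weave add-on
kubectl apply -f weave-daemonset-k8s-1.6.yaml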

The master node is then in a Ready state, and the pods on the master look
healthy as far as I can tell.  I still can't really get the worker node
going: the weave-net pod on the worker goes into a CrashLoopBackOff state,
and I haven't been able to figure out why.
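
For what it's worth, these are the sorts of commands I've been using to poke
at the failing pod (the -c container name is what appears in the Weave
daemonset YAML I applied, so it may differ in other versions):

kubectl -n kube-system get pods -o wide        # find the weave-net pod on the worker
kubectl -n kube-system describe pod <weave-net-pod-name>
kubectl -n kube-system logs <weave-net-pod-name> -c weave
kubectl -n kube-system logs <weave-net-pod-name> -c weave --previous   # last crashed run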

It seems that the Weave config files and plugin binaries need to be
manually copied over to the worker nodes (roughly the copy sketched below).
Is that to be expected?
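
By "manually copied" I mean roughly the following, run from the master (the
file names are just what I see on my setup, so treat this as a sketch):

# on the worker, make sure the CNI directories exist
sudo mkdir -p /etc/cni/net.d /opt/cni/bin

# from the master, copy the Weave CNI config and plugin binaries over
scp /etc/cni/net.d/10-weave.conf <worker-ip>:/tmp/
scp /opt/cni/bin/weave-* <worker-ip>:/tmp/
# then on the worker:
#   sudo mv /tmp/10-weave.conf /etc/cni/net.d/
#   sudo mv /tmp/weave-* /opt/cni/bin/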

Thanks,
Mike


On Mon, Jun 5, 2017 at 8:21 PM, Brandon Philips <brandon.phil...@coreos.com>
wrote:

> Any reason to not use https://github.com/kubernetes/minikube?
>
> On Wed, May 31, 2017 at 9:02 AM Mike Cico <mikec...@gmail.com> wrote:
>
>> Hi all,
>>
>> I'm experimenting with Kubernetes on my local laptop (RHEL 7), and trying
>> to set up a k8s cluster using VirtualBox.  Here's my configuration so far:
>>
>> - kubeadm/kubectl/kubelet 1.6
>> - Docker 1.12.6
>> - 2 VB nodes (master and worker) running Ubuntu 16.04
>> - Both nodes are configured with NAT and host-only adapters
>>
>> The host-only network is intended as the internal network for the nodes
>> to communicate, and the NAT adapter for external access.  The 2 VMs can
>> ping each other over their host-only IPs fine.  However, when I run
>> "kubectl get nodes" from the master, the worker node shows as "NotReady",
>> so it's not able to accept deployments.
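>>
>> So far I've mostly been checking the node conditions and the kubelet logs
>> on the worker to see why it's NotReady (not sure these are the right
>> places to look):
>>
>> kubectl describe node <worker-node-name>   # check Conditions and recent events
>> sudo systemctl status kubelet              # on the worker
>> sudo journalctl -u kubelet -f              # kubelet logs on the worker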
>>
>> I am able to set up the master node fine, and the worker is able to join
>> the cluster (apparently) fine, at least with no errors:
>>
>> Master node setup:
>>
>> kubeadm init --apiserver-advertise-address=192.168.99.100
>> sudo cp /etc/kubernetes/admin.conf $HOME/
>> sudo chown $(id -u):$(id -g) $HOME/admin.conf
>> export KUBECONFIG=$HOME/admin.conf
>> kubectl apply -f weave-daemonset-k8s-1.6.yaml   # Had to download the
>> YAML separately for some reason
>>
>>
>> Worker node setup:
>>
>> kubeadm join --token 9dd48f.2b3e4e3732b2aa41 192.168.99.100:6443
>>
>>
>> If I run 'kubelet' from the command line, I see the following output from
>> the kubelet service (I'm assuming these are log entries).  I've highlighted
>> what I think are the relevant errors:
>>
>>
>> *W0531 11:56:58.167372   12376 cni.go:157] Unable to update cni config:
>> No networks found in /etc/cni/net.d*
>> I0531 11:56:58.175278   12376 manager.go:143] cAdvisor running in
>> container: "/user.slice"
>> *W0531 11:56:58.182134   12376 manager.go:151] unable to connect to Rkt
>> api service: rkt: cannot tcp Dial rkt api service: dial tcp
>> 127.0.0.1:15441: getsockopt: connection refused*
>> I0531 11:56:58.186323   12376 fs.go:117] Filesystem partitions:
>> map[/dev/sda1:{mountpoint:/var/lib/docker/aufs major:8 minor:1
>> fsType:ext4 blockSize:0}]
>> I0531 11:56:58.192677   12376 manager.go:198] Machine: {NumCores:1
>> CpuFrequency:2593992 MemoryCapacity:2097061888 MachineID:
>> ab4fad20859448f493aa428ffe811564 
>> SystemUUID:4F055E4A-2383-468C-A046-085F0112FE77
>> BootID:74fd9c5d-3b1c-4588-9b04-c7adb5925dc1
>> Filesystems:[{Device:/dev/sda1 Capacity:31571570688 Type:vfs Inodes:1966080
>> HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:34359738368
>> Scheduler:deadline}] NetworkDevices:[{Name:datapath
>> MacAddress:fe:02:7e:59:5c:29 Speed:0 Mtu:1376} {Name:dummy0
>> MacAddress:3a:c5:c5:07:dc:87 Speed:0 Mtu:1500} {Name:enp0s3
>> MacAddress:08:00:27:ba:e9:d0 Speed:1000 Mtu:1500} {Name:enp0s8
>> MacAddress:08:00:27:6f:92:f0 Speed:1000 Mtu:1500} {Name:vxlan-6784
>> MacAddress:7a:24:c6:5e:f1:48 Speed:0 Mtu:65485} {Name:weave
>> MacAddress:ae:e7:0f:ef:10:c2 Speed:0 Mtu:1376}] Topology:[{Id:0
>> Memory:2097061888 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data
>> Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified
>> Level:2}]}] Caches:[{Size:3145728 Type:Unified Level:3}]}]
>> CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
>> I0531 11:56:58.193392   12376 manager.go:204] Version:
>> {KernelVersion:4.8.0-52-generic ContainerOsVersion:Ubuntu 16.04.2 LTS
>> DockerVersion:1.12.6 CadvisorVersion: CadvisorRevision:}
>> *W0531 11:56:58.193963   12376 server.go:350] No api server defined - no
>> events will be sent to API server.*
>> I0531 11:56:58.197668   12376 server.go:509] --cgroups-per-qos enabled,
>> but --cgroup-root was not specified.  defaulting to /
>> *I0531 11:56:58.204579   12376 cadvisor_linux.go:152] Failed to register
>> cAdvisor on port 4194, retrying. Error: listen tcp :4194: bind: address
>> already in use*
>> *W0531 11:56:58.205325   12376 container_manager_linux.go:218] Running
>> with swap on is not supported, please disable swap! This will be a fatal
>> error by default starting in K8s v1.6! In the meantime, you can opt-in to
>> making this a fatal error by enabling --experimental-fail-swap-on.*
>> I0531 11:56:58.205461   12376 container_manager_linux.go:245] container
>> manager verified user specified cgroup-root exists: /
>> I0531 11:56:58.205513   12376 container_manager_linux.go:250] Creating
>> Container Manager object based on Node Config: {RuntimeCgroupsName:
>> SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker
>> CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs
>> ProtectKernelDefaults:false EnableCRI:true 
>> NodeAllocatableConfig:{KubeReservedCgroupName:
>> SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}]
>> KubeReserved:map[] SystemReserved:map[] 
>> HardEvictionThresholds:[{Signal:memory.available
>> Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s
>> MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
>> *W0531 11:56:58.214810   12376 kubelet_network.go:70] Hairpin mode set to
>> "promiscuous-bridge" but kubenet is not enabled, falling back to
>> "hairpin-veth"*
>> I0531 11:56:58.214924   12376 kubelet.go:494] Hairpin mode set to
>> "hairpin-veth"
>> *W0531 11:56:58.247353   12376 cni.go:157] Unable to update cni config:
>> No networks found in /etc/cni/net.d*
>> I0531 11:56:58.271630   12376 docker_service.go:187] Docker cri
>> networking managed by kubernetes.io/no-op
>> I0531 11:56:58.275319   12376 docker_service.go:204] Setting cgroupDriver
>> to cgroupfs
>> I0531 11:56:58.283946   12376 remote_runtime.go:41] Connecting to runtime
>> service /var/run/dockershim.sock
>> I0531 11:56:58.285263   12376 kuberuntime_manager.go:171] Container
>> runtime docker initialized, version: 1.12.6, apiVersion: 1.24.0
>> I0531 11:56:58.286358   12376 server.go:869] Started kubelet v1.6.1
>> *E0531 11:56:58.286486   12376 server.go:586] Starting health server
>> failed: listen tcp 127.0.0.1:10248: bind: address already in use*
>> *E0531 11:56:58.286678   12376 kubelet.go:1165] Image garbage collection
>> failed: unable to find data for container /*
>> W0531 11:56:58.286748   12376 kubelet.go:1242] No api server defined - no
>> node status update will be sent.
>> I0531 11:56:58.286925   12376 kubelet_node_status.go:230] Setting node
>> annotation to enable volume controller attach/detach
>> *I0531 11:56:58.287680   12376 server.go:127] Starting to listen on
>> 0.0.0.0:10250*
>> *F0531 11:56:58.300363   12376 server.go:152] listen tcp 0.0.0.0:10255:
>> bind: address already in use*
>>
>>
>> I'm not sure whether the port-binding errors are true failures or not.
>> Has anyone seen this before?  Are there other things I should look for to
>> try to figure out what's going on?
>>
>> Thanks,
>> Mike
>>
>>
