I've set up a Kubernetes 1.6 cluster with flannel according to the docs, 
using "kubeadm init --pod-network-cidr=10.244.0.0/16", and successfully 
deployed some containers to it that run nginx. The containers are up and 
running on the pod subnet. The problem is that routing is messed up: on each 
node I can only reach containers running on that node. I should be able to 
reach all the containers from any node, right?
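Concretely, the test I'm running is just curl from a node straight to a pod 
IP - a pod on the same node answers, a pod on any other node doesn't:

```
# <local-pod-ip> / <remote-pod-ip> are placeholders; substitute
# addresses from the pod list below
curl http://<local-pod-ip>/      # answers
curl http://<remote-pod-ip>/     # no response
```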

The containers have these IP addresses:

[root@ops-k7s301 ~]# kubectl describe pods | grep IP
IP: 10.244.2.10
IP: 10.244.1.9
IP: 10.244.3.14
IP: 10.244.3.13
IP: 10.244.3.12
IP: 10.244.0.12
IP: 10.244.2.11
IP: 10.244.2.12
IP: 10.244.0.13
IP: 10.244.1.10

I set up flannel with hairpinMode enabled (on the advice of our developers), 
using the following YAML:
cni-conf.json: |
  {
    "name": "cbr0",
    "type": "flannel",
    "delegate": {
      "isDefaultGateway": true,
      "hairpinMode": true
    }
  }
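For what it's worth, flanneld writes its per-node lease to 
/run/flannel/subnet.env (the default path - I haven't changed it), and that 
should line up with the cni0 subnet on each node:

```
# run on each node; the variable names are what flanneld writes,
# the values shown are what I'd expect, roughly:
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=10.244.0.0/16
# FLANNEL_SUBNET=10.244.x.1/24   <- that node's subnet
# FLANNEL_MTU=1450
# FLANNEL_IPMASQ=false
```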

After deploying the pods, the routing I get is: 

ONE NODE: # netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.7.53.1       0.0.0.0         UG        0 0          0 ens160
10.7.53.0       0.0.0.0         255.255.255.0   U         0 0          0 ens160
10.244.0.0      0.0.0.0         255.255.255.0   U         0 0          0 cni0
10.244.0.0      0.0.0.0         255.255.0.0     U         0 0          0 flannel.1
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0

NEXT NODE: # netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.7.53.1       0.0.0.0         UG        0 0          0 ens160
10.7.53.0       0.0.0.0       	255.255.255.0   U         0 0          0 ens160
10.244.0.0      0.0.0.0         255.255.0.0     U         0 0          0 flannel.1
10.244.1.0      0.0.0.0         255.255.255.0   U         0 0          0 cni0
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0

etc.

So according to these routing tables I can only see the containers on my 
local node - I should be able to see all of them, right?

If I delete the local 10.244.x.0/24 route so traffic falls back to the 
10.244.0.0/16 route on flannel.1, I get nothing at all.
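If it helps, I can also dump the VXLAN state on the nodes - my understanding 
(which may be off) is that flannel.1 is a VXLAN device, and the kernel can 
only tunnel to another node if flanneld has fed it the subnet-to-node 
mappings:

```
# forwarding and neighbour entries flanneld should be maintaining
bridge fdb show dev flannel.1
ip neigh show dev flannel.1
# details of the vxlan device itself (VNI, local VTEP address, port)
ip -d link show flannel.1
```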

It also appears that the flanneld container is not making any iptables 
changes, either on the nodes or in the containers.

# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-ISOLATION  all  --  anywhere             anywhere
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-SERVICES (1 references)
target     prot opt source               destination
REJECT     tcp  --  anywhere             10.100.4.230         /* kube-system/kubernetes-dashboard: has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable
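(I realize "iptables --list" only shows the filter table; flannel's 
masquerade rules - if ip-masq is enabled - and kube-proxy's service NAT live 
in the nat table, so they wouldn't show up above. I can post that listing 
too:)

```
# NAT rules live in a separate table from the filter chains above
iptables -t nat -L -n
```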

I set up a cluster using Kubernetes 1.5 and flannel 0.3 (?) a while back and 
it worked; I seem to recall flanneld setting up NAT rules in iptables for me 
back then.
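If it's useful, I can also grab the flanneld logs - assuming the stock 
kube-flannel.yml names here (container "kube-flannel"; the pod name below is 
a placeholder):

```
kubectl -n kube-system get pods -o wide | grep flannel
kubectl -n kube-system logs kube-flannel-ds-XXXXX -c kube-flannel
```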

Any ideas? I'm the flannel holdout here - our build team has already 
switched to Weave.

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.