On Monday, November 14, 2016 at 7:24:54 PM UTC+8, Fede Diaz wrote:
>
> Hi there.
> I've just deployed a Kubernetes cluster on 3 Ubuntu 16.04 virtual machines 
> with kubeadm following this doc: 
> http://kubernetes.io/docs/getting-started-guides/kubeadm/
>
> I'm using Weave as network overlay, so I do not pass any argument to 
> kubeadm init.
>
> By the end of the doc everything looks great:
>
> root@master:~# cat /etc/hosts
>
> 192.168.1.50   master
> 192.168.1.51   minion1
> 192.168.1.52   minion2
>
> root@master:~# kubectl get nodes 
> NAME      STATUS    AGE
> master    Ready     24m
> minion1   Ready     21m
> minion2   Ready     21m
>
> root@master:~# kubectl get pods --all-namespaces --show-all
> NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
> kube-system   dummy-2088944543-d6o3p            1/1       Running   0          23m
> kube-system   etcd-master                       1/1       Running   0          21m
> kube-system   kube-apiserver-master             1/1       Running   0          23m
> kube-system   kube-controller-manager-master    1/1       Running   0          23m
> kube-system   kube-discovery-1150918428-j04dp   1/1       Running   0          23m
> kube-system   kube-dns-654381707-qf0do          3/3       Running   0          22m
> kube-system   kube-proxy-m1f92                  1/1       Running   0          20m
> kube-system   kube-proxy-qjzbt                  1/1       Running   0          20m
> kube-system   kube-proxy-r9i4j                  1/1       Running   0          22m
> kube-system   kube-scheduler-master             1/1       Running   0          22m
> kube-system   weave-net-88kx2                   2/2       Running   0          21m
> kube-system   weave-net-9yeic                   2/2       Running   1          20m
> kube-system   weave-net-vjieq                   2/2       Running   0          20m
>
>
> Then, I want to complete this example:
>
> https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
> to build an nginx load balancer.
>
> So far everything looks great too:
>
> root@master:~# kubectl get pods -o wide
> NAME                     READY     STATUS    RESTARTS   AGE       IP          NODE
> coffee-rc-g6axg          1/1       Running   0          22m       10.47.0.3   minion2
> coffee-rc-m7kjf          1/1       Running   0          22m       10.44.0.3   minion1
> nginx-ingress-rc-xy4jg   1/1       Running   0          23m       10.44.0.1   minion1
> tea-rc-bjg5b             1/1       Running   0          23m       10.47.0.1   minion2
> tea-rc-bz0qq             1/1       Running   0          23m       10.47.0.2   minion2
> tea-rc-umyn8             1/1       Running   0          23m       10.44.0.2   minion1
>
> But when I try to connect, the node answers to ping while the connection is refused:
> root@master:~# ping -c1 minion1
> PING minion1 (192.168.1.51) 56(84) bytes of data.
> 64 bytes from minion1 (192.168.1.51): icmp_seq=1 ttl=64 time=0.310 ms
>
> --- minion1 ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
>
> curl --resolve cafe.example.com:443:192.168.1.51 https://cafe.example.com/coffee --insecure
> curl: (7) Failed to connect to cafe.example.com port 443: connection refused
>
> I tried to build the cluster several times with the same results. I don't 
> know what step I'm skipping.
>
> There isn't any service listening on minion1 on port 443 or 80:
>
> root@master:~# nmap minion1 -p80,443
>
> Starting Nmap 7.01 ( https://nmap.org ) at 2016-11-14 12:20 CET
> Nmap scan report for minion1 (192.168.1.51)
> Host is up (0.00030s latency).
> PORT    STATE  SERVICE
> 80/tcp  closed http
> 443/tcp closed https
> MAC Address: 08:00:27:AB:14:62 (Oracle VirtualBox virtual NIC)
>
> Nmap done: 1 IP address (1 host up) scanned in 0.72 seconds
>
> The ingress-controller container logs don't show anything useful:
>
> root@minion1:~# docker logs 942d08da944f
> I1114 10:41:24.213006       1 main.go:37] Starting NGINX Ingress controller Version 0.5.0
> 2016/11/14 10:42:29 [notice] 20#20: signal process started
>
> Iptables on minion1 shows the following.
> root@minion1:~# iptables-save 
> # Generated by iptables-save v1.6.0 on Mon Nov 14 12:10:10 2016
> *nat
> :PREROUTING ACCEPT [0:0]
> :INPUT ACCEPT [0:0]
> :OUTPUT ACCEPT [0:0]
> :POSTROUTING ACCEPT [0:0]
> :DOCKER - [0:0]
> :KUBE-MARK-DROP - [0:0]
> :KUBE-MARK-MASQ - [0:0]
> :KUBE-NODEPORTS - [0:0]
> :KUBE-POSTROUTING - [0:0]
> :KUBE-SEP-5EVYJB2LHCX2OGR5 - [0:0]
> :KUBE-SEP-5YKGFUMBPOG2O4CX - [0:0]
> :KUBE-SEP-H7KNLTDWROLYJK5M - [0:0]
> :KUBE-SEP-MGVHHE5VCTX2DVNY - [0:0]
> :KUBE-SEP-N736QRXYX3KKDBUU - [0:0]
> :KUBE-SEP-Y3NQPZDG46ECTICS - [0:0]
> :KUBE-SEP-ZGZGAS3RZVPCLREZ - [0:0]
> :KUBE-SEP-ZL5JWKVPNBJ44DP6 - [0:0]
> :KUBE-SERVICES - [0:0]
> :KUBE-SVC-DVHFM6YVY2RW3DPQ - [0:0]
> :KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
> :KUBE-SVC-I277KLBDTTJWT3KA - [0:0]
> :KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
> :KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
> :WEAVE - [0:0]
> -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
> -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
> -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
> -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
> -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
> -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
> -A POSTROUTING -j WEAVE
> -A DOCKER -i docker0 -j RETURN
> -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
> -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
> -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
> -A KUBE-SEP-5EVYJB2LHCX2OGR5 -s 10.47.0.3/32 -m comment --comment "default/coffee-svc:http" -j KUBE-MARK-MASQ
> -A KUBE-SEP-5EVYJB2LHCX2OGR5 -p tcp -m comment --comment "default/coffee-svc:http" -m tcp -j DNAT --to-destination 10.47.0.3:80
> -A KUBE-SEP-5YKGFUMBPOG2O4CX -s 10.44.0.2/32 -m comment --comment "default/tea-svc:http" -j KUBE-MARK-MASQ
> -A KUBE-SEP-5YKGFUMBPOG2O4CX -p tcp -m comment --comment "default/tea-svc:http" -m tcp -j DNAT --to-destination 10.44.0.2:80
> -A KUBE-SEP-H7KNLTDWROLYJK5M -s 10.32.0.1/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
> -A KUBE-SEP-H7KNLTDWROLYJK5M -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.1:53
> -A KUBE-SEP-MGVHHE5VCTX2DVNY -s 10.32.0.1/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
> -A KUBE-SEP-MGVHHE5VCTX2DVNY -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.1:53
> -A KUBE-SEP-N736QRXYX3KKDBUU -s 10.47.0.1/32 -m comment --comment "default/tea-svc:http" -j KUBE-MARK-MASQ
> -A KUBE-SEP-N736QRXYX3KKDBUU -p tcp -m comment --comment "default/tea-svc:http" -m tcp -j DNAT --to-destination 10.47.0.1:80
> -A KUBE-SEP-Y3NQPZDG46ECTICS -s 192.168.1.50/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
> -A KUBE-SEP-Y3NQPZDG46ECTICS -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-Y3NQPZDG46ECTICS --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.1.50:6443
> -A KUBE-SEP-ZGZGAS3RZVPCLREZ -s 10.44.0.3/32 -m comment --comment "default/coffee-svc:http" -j KUBE-MARK-MASQ
> -A KUBE-SEP-ZGZGAS3RZVPCLREZ -p tcp -m comment --comment "default/coffee-svc:http" -m tcp -j DNAT --to-destination 10.44.0.3:80
> -A KUBE-SEP-ZL5JWKVPNBJ44DP6 -s 10.47.0.2/32 -m comment --comment "default/tea-svc:http" -j KUBE-MARK-MASQ
> -A KUBE-SEP-ZL5JWKVPNBJ44DP6 -p tcp -m comment --comment "default/tea-svc:http" -m tcp -j DNAT --to-destination 10.47.0.2:80
> -A KUBE-SERVICES -d 10.110.145.48/32 -p tcp -m comment --comment "default/coffee-svc:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-I277KLBDTTJWT3KA
> -A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
> -A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
> -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
> -A KUBE-SERVICES -d 10.106.143.159/32 -p tcp -m comment --comment "default/tea-svc:http cluster IP" -m tcp --dport 80 -j KUBE-SVC-DVHFM6YVY2RW3DPQ
> -A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
> -A KUBE-SVC-DVHFM6YVY2RW3DPQ -m comment --comment "default/tea-svc:http" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-5YKGFUMBPOG2O4CX
> -A KUBE-SVC-DVHFM6YVY2RW3DPQ -m comment --comment "default/tea-svc:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N736QRXYX3KKDBUU
> -A KUBE-SVC-DVHFM6YVY2RW3DPQ -m comment --comment "default/tea-svc:http" -j KUBE-SEP-ZL5JWKVPNBJ44DP6
> -A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-H7KNLTDWROLYJK5M
> -A KUBE-SVC-I277KLBDTTJWT3KA -m comment --comment "default/coffee-svc:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ZGZGAS3RZVPCLREZ
> -A KUBE-SVC-I277KLBDTTJWT3KA -m comment --comment "default/coffee-svc:http" -j KUBE-SEP-5EVYJB2LHCX2OGR5
> -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-Y3NQPZDG46ECTICS --mask 255.255.255.255 --rsource -j KUBE-SEP-Y3NQPZDG46ECTICS
> -A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-Y3NQPZDG46ECTICS
> -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-MGVHHE5VCTX2DVNY
> -A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
> -A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
> -A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
> COMMIT
> # Completed on Mon Nov 14 12:10:10 2016
> # Generated by iptables-save v1.6.0 on Mon Nov 14 12:10:10 2016
> *filter
> :INPUT ACCEPT [46:2472]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [44:8768]
> :DOCKER - [0:0]
> :DOCKER-ISOLATION - [0:0]
> :KUBE-FIREWALL - [0:0]
> :KUBE-SERVICES - [0:0]
> :WEAVE-NPC - [0:0]
> :WEAVE-NPC-DEFAULT - [0:0]
> :WEAVE-NPC-INGRESS - [0:0]
> -A INPUT -j KUBE-FIREWALL
> -A INPUT -d 172.17.0.1/32 -i docker0 -p tcp -m tcp --dport 6783 -j DROP
> -A INPUT -d 172.17.0.1/32 -i docker0 -p udp -m udp --dport 6783 -j DROP
> -A INPUT -d 172.17.0.1/32 -i docker0 -p udp -m udp --dport 6784 -j DROP
> -A INPUT -i docker0 -p udp -m udp --dport 53 -j ACCEPT
> -A INPUT -i docker0 -p tcp -m tcp --dport 53 -j ACCEPT
> -A FORWARD -i docker0 -o weave -j DROP
> -A FORWARD -j DOCKER-ISOLATION
> -A FORWARD -o docker0 -j DOCKER
> -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
> -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
> -A FORWARD -i docker0 -o docker0 -j ACCEPT
> -A FORWARD -o weave -j WEAVE-NPC
> -A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
> -A FORWARD -o weave -j DROP
> -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
> -A OUTPUT -j KUBE-FIREWALL
> -A DOCKER-ISOLATION -j RETURN
> -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
> -A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
> -A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
> -A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -j ACCEPT
> -A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]<@p dst -j ACCEPT
> COMMIT
> # Completed on Mon Nov 14 12:10:10 2016
>
> Anyway, I can still curl the pod directly via its pod IP:
>
> root@master:~# curl -kIv 10.47.0.3
> * Rebuilt URL to: 10.47.0.3/
> *   Trying 10.47.0.3...
> * Connected to 10.47.0.3 (10.47.0.3) port 80 (#0)
> > HEAD / HTTP/1.1
> > Host: 10.47.0.3
> > User-Agent: curl/7.47.0
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> HTTP/1.1 200 OK
> < Server: nginx/1.11.5
> Server: nginx/1.11.5
> < Date: Mon, 14 Nov 2016 11:13:54 GMT
> Date: Mon, 14 Nov 2016 11:13:54 GMT
> < Content-Type: text/html
> Content-Type: text/html
> < Connection: keep-alive
> Connection: keep-alive
> < Expires: Mon, 14 Nov 2016 11:13:53 GMT
> Expires: Mon, 14 Nov 2016 11:13:53 GMT
> < Cache-Control: no-cache
> Cache-Control: no-cache
>
> < 
> * Connection #0 to host 10.47.0.3 left intact
>
> Or even the Ingress Controller:
>
> root@master:~# curl -kLIv 10.44.0.1/coffee
> *   Trying 10.44.0.1...
> * Connected to 10.44.0.1 (10.44.0.1) port 80 (#0)
> > HEAD /coffee HTTP/1.1
> > Host: 10.44.0.1
> > User-Agent: curl/7.47.0
> > Accept: */*
> > 
> < HTTP/1.1 301 Moved Permanently
> HTTP/1.1 301 Moved Permanently
> < Server: nginx/1.11.5
> Server: nginx/1.11.5
> < Date: Mon, 14 Nov 2016 11:15:01 GMT
> Date: Mon, 14 Nov 2016 11:15:01 GMT
> < Content-Type: text/html
> Content-Type: text/html
> < Content-Length: 185
> Content-Length: 185
> < Connection: keep-alive
> Connection: keep-alive
> < Location: https://10.44.0.1/coffee
> Location: https://10.44.0.1/coffee
>
> < 
> * Connection #0 to host 10.44.0.1 left intact
> * Issue another request to this URL: 'https://10.44.0.1/coffee'
> * Found bundle for host 10.44.0.1: 0x5564b0981330 [can pipeline]
> *   Trying 10.44.0.1...
> * Connected to 10.44.0.1 (10.44.0.1) port 443 (#1)
> * found 173 certificates in /etc/ssl/certs/ca-certificates.crt
> * found 692 certificates in /etc/ssl/certs
> * ALPN, offering http/1.1
> * SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
> *          server certificate verification SKIPPED
> *          server certificate status verification SKIPPED
> *          common name: 192.168.33.43 (does not match '10.44.0.1')
> *          server certificate expiration date OK
> *          server certificate activation date OK
> *          certificate public key: RSA
> *          certificate version: #3
> *          subject: C=US,ST=Some-State,O=Internet Widgits Pty Ltd,CN=192.168.33.43
> *          start date: Tue, 01 Mar 2016 19:17:58 GMT
> *          expire date: Wed, 01 Mar 2017 19:17:58 GMT
> *          issuer: C=US,ST=Some-State,O=Internet Widgits Pty Ltd,CN=192.168.33.43
> *          compression: NULL
> * ALPN, server did not agree to a protocol
> > HEAD /coffee HTTP/1.1
> > Host: 10.44.0.1
> > User-Agent: curl/7.47.0
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> HTTP/1.1 200 OK
> < Server: nginx/1.11.5
> Server: nginx/1.11.5
> < Date: Mon, 14 Nov 2016 11:15:01 GMT
> Date: Mon, 14 Nov 2016 11:15:01 GMT
> < Content-Type: text/html
> Content-Type: text/html
> < Connection: keep-alive
> Connection: keep-alive
> < Expires: Mon, 14 Nov 2016 11:15:00 GMT
> Expires: Mon, 14 Nov 2016 11:15:00 GMT
> < Cache-Control: no-cache
> Cache-Control: no-cache
>
> < 
> * Connection #1 to host 10.44.0.1 left intact
>
> So, any clue about what is going on?
> Thanks so much for the help, and sorry for the endless post...
>


Hi guys, I'm hitting the same issue with this demo. Did anyone manage to fix 
it? If so, could you post the details here? Thanks.
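
For anyone else landing on this thread: the symptoms (pod reachable on its pod 
IP, nothing listening on ports 80/443 on the node) match a known limitation of 
that era, where hostPort, which this example's controller relies on, was not 
honoured when kubeadm was used with a CNI network plugin such as Weave. A 
hedged sketch of how one might check and work around it; the 
`app: nginx-ingress` selector and the service name below are assumptions, so 
verify them against the manifests you actually deployed:

```shell
# Check whether the controller pod really requested hostPorts
# (label selector assumed from the example manifests):
kubectl describe pod -l app=nginx-ingress | grep -i hostport

# Possible workaround: expose the controller via a NodePort Service
# instead of relying on hostPort.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-nodeport
spec:
  type: NodePort
  selector:
    app: nginx-ingress
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
EOF

# Then curl the node on the allocated node ports shown by:
kubectl get svc nginx-ingress-nodeport
```

This is only a sketch against a live cluster; the NodePort range (default 
30000-32767) means the URLs change from port 443 to whatever port the service 
is assigned.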
 

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.
