I followed this link: https://access.redhat.com/documentation/en/openshift-enterprise/3.1/cluster-administration/chapter-18-troubleshooting-openshift-sdn#debugging-the-router
and I'm not getting anything with the command:

[root@osmaster ~]# oc get hostsubnet
[root@osmaster ~]#

//robert

On Sat, May 21, 2016 at 11:15 AM, holo holo <[email protected]> wrote:
> I'm testing it on my local computer with KVM and there are no
> firewalls set up. All VMs are in the same virtual network.
>
> I checked with nmap the port you suggested and it is not open on any
> node. Did I miss something in the configuration?
>
> [holo@latitude ~]$ nmap osmaster
>
> Starting Nmap 6.40 ( http://nmap.org ) at 2016-05-21 11:12 CEST
> Nmap scan report for osmaster (192.168.122.209)
> Host is up (0.00026s latency).
> Not shown: 993 closed ports
> PORT     STATE SERVICE
> 22/tcp   open  ssh
> 53/tcp   open  domain
> 80/tcp   open  http
> 443/tcp  open  https
> 4001/tcp open  newoak
> 7001/tcp open  afs3-callback
> 8443/tcp open  https-alt
>
> Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
> [holo@latitude ~]$ nmap osnode1
>
> Starting Nmap 6.40 ( http://nmap.org ) at 2016-05-21 11:13 CEST
> Nmap scan report for osnode1 (192.168.122.145)
> Host is up (0.00033s latency).
> Not shown: 999 closed ports
> PORT   STATE SERVICE
> 22/tcp open  ssh
>
> Nmap done: 1 IP address (1 host up) scanned in 0.07 seconds
> [holo@latitude ~]$ nmap osnode2
>
> Starting Nmap 6.40 ( http://nmap.org ) at 2016-05-21 11:13 CEST
> Nmap scan report for osnode2 (192.168.122.206)
> Host is up (0.00037s latency).
> Not shown: 999 closed ports
> PORT   STATE SERVICE
> 22/tcp open  ssh
>
> Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds
> [holo@latitude ~]$
>
>
> //robert
>
> On Fri, May 20, 2016 at 8:46 PM, Clayton Coleman <[email protected]> wrote:
>
>> No route to host is usually a configuration issue with openshift SDN -
>> most commonly on things like AWS it's that the firewall doesn't allow
>> traffic on the SDN port (which I think is 4789) to pass between nodes.
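[Editorial note, not part of the original thread: the nmap scans quoted above use nmap's default scan, which only probes TCP ports, while the OpenShift SDN's VXLAN traffic runs over UDP port 4789 — so those scans could not have shown the SDN port either way. A UDP-specific probe, sketched below against the hostnames from the thread, would be the relevant check:]

```shell
# Sketch, assuming the hostnames from the thread and a live cluster.
# nmap's default scan is TCP-only; VXLAN (4789) is UDP, so probe it
# explicitly. UDP scans require root:
nmap -sU -p 4789 osnode1

# Or, on each node itself, check whether anything is bound to the
# VXLAN port among the UDP listeners:
ss -uln | grep 4789
```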
>>
>> On Thu, May 19, 2016 at 11:01 AM, holo holo <[email protected]> wrote:
>> > One more log connected with the same thing:
>> >
>> > E0519 11:00:09.099712 2098 pod_workers.go:138] Error syncing pod
>> > 5d3c48a1-1dd2-11e6-a164-525400c36a07, skipping: failed to "StartContainer"
>> > for "testapp4" with ErrImagePull: "API error (500): Get
>> > http://172.30.236.174:5000/v2/: dial tcp 172.30.236.174:5000: getsockopt: no
>> > route to host\n"
>> >
>> > //robert
>> >
>> > On Thu, May 19, 2016 at 4:55 PM, holo holo <[email protected]> wrote:
>> >>
>> >> Hello all.
>> >>
>> >> I configured OpenShift and everything is working properly on the host where
>> >> the docker-registry is started. When I added a new node and tried to deploy
>> >> containers on it, I got this error in the logs:
>> >>
>> >> E0519 10:51:38.574152 2135 pod_workers.go:138] Error syncing pod
>> >> 083b958e-1dc0-11e6-8ca2-525400c36a07, skipping: failed to "StartContainer"
>> >> for "testapp4" with ImagePullBackOff: "Back-off pulling image
>> >> \"172.30.236.174:5000/test/testapp4@sha256:64c3dc4cb983986a1dd5a7979f03f449b089f4baaf979b67363a92aac43e49cd\""
>> >>
>> >> I'm guessing the problem is that the new node doesn't "see" the docker-registry
>> >> address 172.30.236.174, which is deployed on another node. Should I do
>> >> something more with the new node (I just started openshift with the node
>> >> config)?
>> >>
>> >> Best regards
>> >> Robert
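[Editorial note, not part of the original thread: if Clayton's diagnosis is right and the node host firewall is what's blocking inter-node SDN traffic, a sketch of opening the VXLAN port on each node would look like the following. Which firewall manager is actually in charge (plain iptables vs. firewalld) is an assumption here; use whichever applies.]

```shell
# Sketch, run as root on each node. 4789/udp is the IANA-assigned
# VXLAN port used by the OpenShift SDN. Assuming plain iptables:
iptables -I INPUT -p udp --dport 4789 -j ACCEPT

# firewalld equivalent, if firewalld manages the rules instead:
# firewall-cmd --permanent --add-port=4789/udp
# firewall-cmd --reload
```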
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
