Re: openvswitch?

2018-01-03 Thread Tim Dudgeon
Looks like this problem has fixed itself over the last couple of weeks
(I just updated openshift-ansible on the release-3.7 branch).

That package dependency error is no longer happening.
It now seems possible to deploy a minimal 3.7 distribution using the 
Ansible installer.
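For reference, the invocation is the usual one, roughly (assuming a release-3.7 checkout of openshift-ansible and your own inventory path):

   ansible-playbook -i /path/to/inventory openshift-ansible/playbooks/byo/config.yml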

I have no idea what the source of the problem was or what has changed.


On 22/12/17 10:09, Tim Dudgeon wrote:


I tried disabling the package checks but this just pushes the failure 
down the line:


  1. Hosts:    host-10-0-0-10, host-10-0-0-12, host-10-0-0-13, host-10-0-0-6, host-10-0-0-9
     Play:     Configure nodes
     Task:     Install sdn-ovs package
     Message:  Error: Package: origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64 (centos-openshift-origin37)
               Requires: openvswitch >= 2.6.1

Something seems broken with the package dependencies?

This happens when trying to install v3.7 using openshift-ansible from 
branch release-3.7.

openshift_deployment_type=origin
openshift_release=v3.7
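For completeness, the check-disabling mentioned above is done via the openshift_disable_check inventory variable; a sketch of the relevant [OSEv3:vars] lines (check names taken from the failing checks above) looks like:

   [OSEv3:vars]
   openshift_deployment_type=origin
   openshift_release=v3.7
   # skip the failing health checks -- this only moves the failure, it is not a fix
   openshift_disable_check=package_availability,package_version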


On 21/12/17 16:48, Tim Dudgeon wrote:


Yes, but is this error the result of broken dependencies in the RPMs?
There's no mention of needing to install openvswitch as part of the
prerequisites listed here:
https://docs.openshift.org/latest/install_config/install/host_preparation.html 




On 20/12/17 20:27, Joel Pearson wrote:
It’s in the paas repo 
http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
On Thu, 21 Dec 2017 at 1:09 am, Tim Dudgeon <tdudgeon...@gmail.com> wrote:


I just started hitting this error when using the Ansible installer
(installing v3.7 from openshift-ansible on branch release-3.7).

1. Hosts:    host-10-0-0-10, host-10-0-0-13, host-10-0-0-7, host-10-0-0-8, host-10-0-0-9
   Play:     OpenShift Health Checks
   Task:     Run health checks (install) - EL
   Message:  One or more checks failed
   Details:  check "package_availability":
               Could not perform a yum update.
               Errors from dependency resolution:
                 origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64 requires openvswitch >= 2.6.1
               You should resolve these issues before proceeding with an install.
               You may need to remove or downgrade packages or enable/disable yum repositories.

             check "package_version":
               Not all of the required packages are available at their requested version
                 openvswitch:['2.6', '2.7', '2.8']
               Please check your subscriptions and enabled repositories.

This was not happening before. Where does openvswitch come from?
Can't
find it in the standard rpm repos.

Tim

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openvswitch?

2017-12-22 Thread Tim Dudgeon
I tried disabling the package checks but this just pushes the failure 
down the line:


  1. Hosts:    host-10-0-0-10, host-10-0-0-12, host-10-0-0-13, host-10-0-0-6, host-10-0-0-9
     Play:     Configure nodes
     Task:     Install sdn-ovs package
     Message:  Error: Package: origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64 (centos-openshift-origin37)
               Requires: openvswitch >= 2.6.1

Something seems broken with the package dependencies?

This happens when trying to install v3.7 using openshift-ansible from 
branch release-3.7.

openshift_deployment_type=origin
openshift_release=v3.7


On 21/12/17 16:48, Tim Dudgeon wrote:


Yes, but is this error the result of broken dependencies in the RPMs?
There's no mention of needing to install openvswitch as part of the
prerequisites listed here:
https://docs.openshift.org/latest/install_config/install/host_preparation.html 




On 20/12/17 20:27, Joel Pearson wrote:
It’s in the paas repo 
http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
On Thu, 21 Dec 2017 at 1:09 am, Tim Dudgeon <tdudgeon...@gmail.com> wrote:


I just started hitting this error when using the Ansible installer
(installing v3.7 from openshift-ansible on branch release-3.7).

1. Hosts:    host-10-0-0-10, host-10-0-0-13, host-10-0-0-7, host-10-0-0-8, host-10-0-0-9
   Play:     OpenShift Health Checks
   Task:     Run health checks (install) - EL
   Message:  One or more checks failed
   Details:  check "package_availability":
               Could not perform a yum update.
               Errors from dependency resolution:
                 origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64 requires openvswitch >= 2.6.1
               You should resolve these issues before proceeding with an install.
               You may need to remove or downgrade packages or enable/disable yum repositories.

             check "package_version":
               Not all of the required packages are available at their requested version
                 openvswitch:['2.6', '2.7', '2.8']
               Please check your subscriptions and enabled repositories.

This was not happening before. Where does openvswitch come from?
Can't
find it in the standard rpm repos.

Tim

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users





___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openvswitch?

2017-12-21 Thread Tim Dudgeon

Yes, but is this error the result of broken dependencies in the RPMs?
There's no mention of needing to install openvswitch as part of the
prerequisites listed here:
https://docs.openshift.org/latest/install_config/install/host_preparation.html 




On 20/12/17 20:27, Joel Pearson wrote:
It’s in the paas repo 
http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
On Thu, 21 Dec 2017 at 1:09 am, Tim Dudgeon <tdudgeon...@gmail.com> wrote:


I just started hitting this error when using the Ansible installer
(installing v3.7 from openshift-ansible on branch release-3.7).

1. Hosts:    host-10-0-0-10, host-10-0-0-13, host-10-0-0-7, host-10-0-0-8, host-10-0-0-9
   Play:     OpenShift Health Checks
   Task:     Run health checks (install) - EL
   Message:  One or more checks failed
   Details:  check "package_availability":
               Could not perform a yum update.
               Errors from dependency resolution:
                 origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64 requires openvswitch >= 2.6.1
               You should resolve these issues before proceeding with an install.
               You may need to remove or downgrade packages or enable/disable yum repositories.

             check "package_version":
               Not all of the required packages are available at their requested version
                 openvswitch:['2.6', '2.7', '2.8']
               Please check your subscriptions and enabled repositories.

This was not happening before. Where does openvswitch come from? Can't
find it in the standard rpm repos.

Tim

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: openvswitch?

2017-12-20 Thread Joel Pearson
It’s in the paas repo
http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
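For anyone following along, a minimal sketch of pulling that repo in on CentOS 7 (assuming the centos-release-openshift-origin37 release package is what enables it on your hosts) is something like:

   yum install -y centos-release-openshift-origin37   # enables the CentOS PaaS SIG origin 3.7 repo
   yum info openvswitch                                # should now show a version >= 2.6.1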
On Thu, 21 Dec 2017 at 1:09 am, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> I just started hitting this error when using the Ansible installer
> (installing v3.7 from openshift-ansible on branch release-3.7).
>
> 1. Hosts:    host-10-0-0-10, host-10-0-0-13, host-10-0-0-7, host-10-0-0-8, host-10-0-0-9
>    Play:     OpenShift Health Checks
>    Task:     Run health checks (install) - EL
>    Message:  One or more checks failed
>    Details:  check "package_availability":
>                Could not perform a yum update.
>                Errors from dependency resolution:
>                  origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64 requires openvswitch >= 2.6.1
>                You should resolve these issues before proceeding with an install.
>                You may need to remove or downgrade packages or enable/disable yum repositories.
>
>              check "package_version":
>                Not all of the required packages are available at their requested version
>                  openvswitch:['2.6', '2.7', '2.8']
>                Please check your subscriptions and enabled repositories.
>
> This was not happening before. Where does openvswitch come from? Can't
> find it in the standard rpm repos.
>
> Tim
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


openvswitch?

2017-12-20 Thread Tim Dudgeon
I just started hitting this error when using the Ansible installer
(installing v3.7 from openshift-ansible on branch release-3.7).


1. Hosts:    host-10-0-0-10, host-10-0-0-13, host-10-0-0-7, host-10-0-0-8, host-10-0-0-9
   Play:     OpenShift Health Checks
   Task:     Run health checks (install) - EL
   Message:  One or more checks failed
   Details:  check "package_availability":
               Could not perform a yum update.
               Errors from dependency resolution:
                 origin-sdn-ovs-3.7.0-1.0.7ed6862.x86_64 requires openvswitch >= 2.6.1
               You should resolve these issues before proceeding with an install.
               You may need to remove or downgrade packages or enable/disable yum repositories.

             check "package_version":
               Not all of the required packages are available at their requested version
                 openvswitch:['2.6', '2.7', '2.8']
               Please check your subscriptions and enabled repositories.

This was not happening before. Where does openvswitch come from? Can't 
find it in the standard rpm repos.
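For what it's worth, a quick way to see which enabled repo (if any) provides it on a yum-based host is something like:

   yum repolist enabled
   yum --showduplicates list openvswitch
   yum whatprovides openvswitch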


Tim

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Network issues with openvswitch

2017-10-23 Thread Aleksandar Lazic


Hi Yu Wei.

Ah that's a good point.

Have you seen this doc?
https://access.redhat.com/documentation/en-us/reference_architectures/2017/html/deploying_red_hat_openshift_container_platform_3.4_on_red_hat_openstack_platform_10/

Regards
Aleks

On Monday, 23 October 2017 at 19:09, the following was written:

My environment is set up on VMs provided by OpenStack.
It seems that the non-working nodes were created from a resource pool in which OpenStack has a different version of OVS.
As I have destroyed the environment and want to try again, I can't get more information right now.

Thanks,
Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
From: Aleksandar Lazic <al...@me2digital.eu>
Sent: Tuesday, October 24, 2017 12:18:55 AM
To: Yu Wei; users@lists.openshift.redhat.com
Subject: Re: Network issues with openvswitch 
 
Hi Yu Wei.

Interesting issue.
What's the difference between the nodes where the connection works and the ones where it doesn't?

Could you please share some more information?

I assume this is on AWS. Is UDP port 4789 open from everywhere, as described in the doc?
https://docs.openshift.org/3.6/install_config/install/prerequisites.html#prereq-network-access

And of course the other ports as well.

oc get nodes
oc describe svc -n default docker-registry

Have you rebooted the non-working nodes?
Are there errors in the journald logs?

Best Regards
Aleks

On Monday, 23 October 2017 at 04:38, the following was written:

Hi Aleks,

I set up an OpenShift Origin cluster with 1 LB + 3 masters + 5 nodes.
On some nodes, the pods running on them can't be reached from other nodes or from pods running on other nodes; the error is "no route to host".
[root@host-10-1-130-32 ~]# curl -kv docker-registry.default.svc.cluster.local:5000
* About to connect() to docker-registry.default.svc.cluster.local port 5000 (#0)
*   Trying 172.30.22.28...
* No route to host
* Failed connect to docker-registry.default.svc.cluster.local:5000; No route to host
* Closing connection 0
curl: (7) Failed connect to docker-registry.default.svc.cluster.local:5000; No route to host

Other nodes work fine.
In my previous mail, the host name of the node is host-10-1-130-32.
The output of "ifconfig tun0" is as below:
[root@host-10-1-130-32 ~]# ifconfig tun0
tun0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
       inet 10.130.2.1  netmask 255.255.254.0  broadcast 0.0.0.0
       inet6 fe80::cc50:3dff:fe07:9ea2  prefixlen 64  scopeid 0x20
       ether ce:50:3d:07:9e:a2  txqueuelen 1000  (Ethernet)
       RX packets 97906  bytes 8665783 (8.2 MiB)
       RX errors 0  dropped 0  overruns 0  frame 0
       TX packets 163379  bytes 27405744 (26.1 MiB)
       TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I also tried to capture packets via tcpdump and found the following:
10.1.130.32.58147 > 10.1.236.92.4789: [no cksum] VXLAN, flags [I] (0x08), vni 0
ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.128.1.45 tell 10.130.2.1, length 28
       0x:  04f9 38ae 659b fa16 3e6c dd90 0800 4500  ..8.e...>lE.
       0x0010:  004e 543c 4000 4011 63e4 0a01 8220 0a01  .NT<@.@.c...
       0x0020:  ec5c e323 12b5 003a  0800    .\.#...:
       0x0030:      ce50 3d07 9ea2 0806  .P=.
       0x0040:  0001 0800 0604 0001 ce50 3d07 9ea2 0a82  .P=.
       0x0050:  0201    0a80 012d            ...-
  25  00:22:47.214387 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.1.130.2 tell 10.1.130.45, length 46
       0x:     fa16 3e5a a862 0806 0001  >Z.b
       0x0010:  0800 0604 0001 fa16 3e5a a862 0a01 822d  >Z.b...-
       0x0020:     0a01 8202     
       0x0030:                   
  26  00:22:47.258344 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ffa1:1fbb: [icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has fe80::824:c2ff:fea1:1fbb
       0x:   ffa1 1fbb 0a24 c2a1 1fbb 86dd 6000  33.$..`.
       0x0010:   0018 3aff       :...
       0x0020:     ff02      
       0x0030:  0001 ffa1 1fbb 8700 724a   fe80  rJ..
       0x0040:     0824 c2ff fea1 1fbb       ...$..
  27  00:22:47.282619 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.1.130.2 tell 10.1.130.73, length 46
       0x:     fa16 3ec4 a9be 0806 0001  >...
       0x0010:  0800 0604 0001 fa16 3ec4 a9be 0a01 8249  >..I
       0x0020:     0a01 8202     
       0x0030:                   

I don't understand why the IPs marked in red above are involved.

Thanks,
Jared, (韦

Re: Network issues with openvswitch

2017-10-23 Thread Yu Wei
My environment is set up on VMs provided by OpenStack.

It seems that the non-working nodes were created from a resource pool in which OpenStack has a different version of OVS.

As I have destroyed the environment and want to try again, I can't get more information right now.


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux


From: Aleksandar Lazic <al...@me2digital.eu>
Sent: Tuesday, October 24, 2017 12:18:55 AM
To: Yu Wei; users@lists.openshift.redhat.com
Subject: Re: Network issues with openvswitch

Hi Yu Wei.

Interesting issue.
What's the difference between the nodes where the connection works and the ones where it doesn't?

Could you please share some more information?

I assume this is on AWS. Is UDP port 4789 open from everywhere, as described in the doc?
https://docs.openshift.org/3.6/install_config/install/prerequisites.html#prereq-network-access

And of course the other ports as well.

oc get nodes
oc describe svc -n default docker-registry

Have you rebooted the non-working nodes?
Are there errors in the journald logs?

Best Regards
Aleks

On Monday, 23 October 2017 at 04:38, the following was written:


Hi Aleks,

I set up an OpenShift Origin cluster with 1 LB + 3 masters + 5 nodes.
On some nodes, the pods running on them can't be reached from other nodes or from pods running on other nodes; the error is "no route to host".
[root@host-10-1-130-32 ~]# curl -kv 
docker-registry.default.svc.cluster.local:5000
* About to connect() to docker-registry.default.svc.cluster.local port 5000 (#0)
*   Trying 172.30.22.28...
* No route to host
* Failed connect to docker-registry.default.svc.cluster.local:5000; No route to 
host
* Closing connection 0
curl: (7) Failed connect to docker-registry.default.svc.cluster.local:5000; No 
route to host

Other nodes work fine.
In my previous mail, the host name of the node is host-10-1-130-32.
The output of "ifconfig tun0" is as below:
[root@host-10-1-130-32 ~]# ifconfig tun0
tun0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
   inet 10.130.2.1  netmask 255.255.254.0  broadcast 0.0.0.0
   inet6 fe80::cc50:3dff:fe07:9ea2  prefixlen 64  scopeid 0x20
   ether ce:50:3d:07:9e:a2  txqueuelen 1000  (Ethernet)
   RX packets 97906  bytes 8665783 (8.2 MiB)
   RX errors 0  dropped 0  overruns 0  frame 0
   TX packets 163379  bytes 27405744 (26.1 MiB)
   TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I also tried to capture packets via tcpdump and found the following:
10.1.130.32.58147 > 10.1.236.92.4789: [no cksum] VXLAN, flags [I] (0x08), vni 0
ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.128.1.45 tell 
10.130.2.1, length 28
   0x:  04f9 38ae 659b fa16 3e6c dd90 0800 4500  ..8.e...>lE.
   0x0010:  004e 543c 4000 4011 63e4 0a01 8220 0a01  .NT<@.@.c...
   0x0020:  ec5c e323 12b5 003a  0800    .\.#...:
   0x0030:      ce50 3d07 9ea2 0806  .P=.
   0x0040:  0001 0800 0604 0001 ce50 3d07 9ea2 0a82  .P=.
   0x0050:  0201    0a80 012d...-
  25  00:22:47.214387 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 
10.1.130.2 tell 10.1.130.45, length 46
   0x:     fa16 3e5a a862 0806 0001  >Z.b
   0x0010:  0800 0604 0001 fa16 3e5a a862 0a01 822d  >Z.b...-
   0x0020:     0a01 8202     
   0x0030:       
  26  00:22:47.258344 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 
24) :: > ff02::1:ffa1:1fbb: [icmp6 sum ok] ICMP6, neighbor solicitation, length 
24, who has fe80::824:c2ff:fea1:1fbb
   0x:   ffa1 1fbb 0a24 c2a1 1fbb 86dd 6000  33.$..`.
   0x0010:   0018 3aff       :...
   0x0020:     ff02      
   0x0030:  0001 ffa1 1fbb 8700 724a   fe80  rJ..
   0x0040:     0824 c2ff fea1 1fbb   ...$..
  27  00:22:47.282619 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 
10.1.130.2 tell 10.1.130.73, length 46
   0x:     fa16 3ec4 a9be 0806 0001  >...
   0x0010:  0800 0604 0001 fa16 3ec4 a9be 0a01 8249  >..I
   0x0020:     0a01 8202     
   0x0030:       

I don't understand why the IPs marked in red above are involved.

Thanks,
Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux

From: Aleksandar Lazic <al...@me2digital.eu>
Sent: Monday, October 23, 2017 2:34:13 AM
To: Yu Wei; users@lists.openshift.redhat.com; d...@lists.openshift.redhat.com
Subject: R

Re: Network issues with openvswitch

2017-10-23 Thread Aleksandar Lazic


Hi Yu Wei.

Interesting issue.
What's the difference between the nodes where the connection works and the ones where it doesn't?

Could you please share some more information?

I assume this is on AWS. Is UDP port 4789 open from everywhere, as described in the doc?
https://docs.openshift.org/3.6/install_config/install/prerequisites.html#prereq-network-access

And of course the other ports as well.

oc get nodes
oc describe svc -n default docker-registry

Have you rebooted the non-working nodes?
Are there errors in the journald logs?
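As a concrete example of those checks, something like the following on the broken node would show whether the VXLAN port is allowed and whether the node service is logging errors (assuming firewalld/iptables and the origin-node unit name; adjust to your setup):

   firewall-cmd --list-all                 # or: iptables -S | grep 4789
   journalctl -u origin-node --since "1 hour ago" | grep -i error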

Best Regards
Aleks

On Monday, 23 October 2017 at 04:38, the following was written:

Hi Aleks,

I set up an OpenShift Origin cluster with 1 LB + 3 masters + 5 nodes.
On some nodes, the pods running on them can't be reached from other nodes or from pods running on other nodes; the error is "no route to host".
[root@host-10-1-130-32 ~]# curl -kv docker-registry.default.svc.cluster.local:5000
* About to connect() to docker-registry.default.svc.cluster.local port 5000 (#0)
*   Trying 172.30.22.28...
* No route to host
* Failed connect to docker-registry.default.svc.cluster.local:5000; No route to host
* Closing connection 0
curl: (7) Failed connect to docker-registry.default.svc.cluster.local:5000; No route to host

Other nodes work fine.
In my previous mail, the host name of the node is host-10-1-130-32.
The output of "ifconfig tun0" is as below:
[root@host-10-1-130-32 ~]# ifconfig tun0
tun0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.130.2.1  netmask 255.255.254.0  broadcast 0.0.0.0
        inet6 fe80::cc50:3dff:fe07:9ea2  prefixlen 64  scopeid 0x20
        ether ce:50:3d:07:9e:a2  txqueuelen 1000  (Ethernet)
        RX packets 97906  bytes 8665783 (8.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 163379  bytes 27405744 (26.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I also tried to capture packets via tcpdump and found the following:
10.1.130.32.58147 > 10.1.236.92.4789: [no cksum] VXLAN, flags [I] (0x08), vni 0
ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.128.1.45 tell 10.130.2.1, length 28
        0x:  04f9 38ae 659b fa16 3e6c dd90 0800 4500  ..8.e...>lE.
        0x0010:  004e 543c 4000 4011 63e4 0a01 8220 0a01  .NT<@.@.c...
        0x0020:  ec5c e323 12b5 003a  0800    .\.#...:
        0x0030:      ce50 3d07 9ea2 0806  .P=.
        0x0040:  0001 0800 0604 0001 ce50 3d07 9ea2 0a82  .P=.
        0x0050:  0201    0a80 012d            ...-
   25  00:22:47.214387 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.1.130.2 tell 10.1.130.45, length 46
        0x:     fa16 3e5a a862 0806 0001  >Z.b
        0x0010:  0800 0604 0001 fa16 3e5a a862 0a01 822d  >Z.b...-
        0x0020:     0a01 8202     
        0x0030:                   
   26  00:22:47.258344 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 24) :: > ff02::1:ffa1:1fbb: [icmp6 sum ok] ICMP6, neighbor solicitation, length 24, who has fe80::824:c2ff:fea1:1fbb
        0x:   ffa1 1fbb 0a24 c2a1 1fbb 86dd 6000  33.$..`.
        0x0010:   0018 3aff       :...
        0x0020:     ff02      
        0x0030:  0001 ffa1 1fbb 8700 724a   fe80  rJ..
        0x0040:     0824 c2ff fea1 1fbb       ...$..
   27  00:22:47.282619 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.1.130.2 tell 10.1.130.73, length 46
        0x:     fa16 3ec4 a9be 0806 0001  >...
        0x0010:  0800 0604 0001 fa16 3ec4 a9be 0a01 8249  >..I
        0x0020:     0a01 8202     
        0x0030:                   

I don't understand why the IPs marked in red above are involved.

Thanks,
Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
From: Aleksandar Lazic <al...@me2digital.eu>
Sent: Monday, October 23, 2017 2:34:13 AM
To: Yu Wei; users@lists.openshift.redhat.com; d...@lists.openshift.redhat.com
Subject: Re: Network issues with openvswitch 
 
Hi Yu Wei.

On Sunday, 22 October 2017 at 19:13, the following was written:

> Hi,

> I executed the following command on a worker node of an OpenShift Origin 3.6 cluster.
>
> [root@host-10-1-130-32 ~]# traceroute docker-registry.default.svc
> traceroute to docker-registry.default.svc (172.30.22.28), 30 hops max, 60 byte packets
>  1  bogon (10.130.2.1)  3005.715 ms !H  3005.682 ms !H  3005.664 ms !H
>  It seems the content marked in red should be the hostname of the worker node.
>  How can I debug such an issue? Where should I start?

Re: Network issues with openvswitch

2017-10-22 Thread Yu Wei
Hi Aleks,


I set up an OpenShift Origin cluster with 1 LB + 3 masters + 5 nodes.

On some nodes, the pods running on them can't be reached from other nodes or from pods running on other nodes; the error is "no route to host".

[root@host-10-1-130-32 ~]# curl -kv 
docker-registry.default.svc.cluster.local:5000
* About to connect() to docker-registry.default.svc.cluster.local port 5000 (#0)
*   Trying 172.30.22.28...
* No route to host
* Failed connect to docker-registry.default.svc.cluster.local:5000; No route to 
host
* Closing connection 0
curl: (7) Failed connect to docker-registry.default.svc.cluster.local:5000; No 
route to host


Other nodes work fine.

In my previous mail, the host name of the node is host-10-1-130-32.

The output of "ifconfig tun0" is as below:

[root@host-10-1-130-32 ~]# ifconfig tun0
tun0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
inet 10.130.2.1  netmask 255.255.254.0  broadcast 0.0.0.0
inet6 fe80::cc50:3dff:fe07:9ea2  prefixlen 64  scopeid 0x20
ether ce:50:3d:07:9e:a2  txqueuelen 1000  (Ethernet)
RX packets 97906  bytes 8665783 (8.2 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 163379  bytes 27405744 (26.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I also tried to capture packets via tcpdump and found the following:

10.1.130.32.58147 > 10.1.236.92.4789: [no cksum] VXLAN, flags [I] (0x08), vni 0
ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.128.1.45 tell 
10.130.2.1, length 28
0x:  04f9 38ae 659b fa16 3e6c dd90 0800 4500  ..8.e...>lE.
0x0010:  004e 543c 4000 4011 63e4 0a01 8220 0a01  .NT<@.@.c...
0x0020:  ec5c e323 12b5 003a  0800    .\.#...:
0x0030:      ce50 3d07 9ea2 0806  .P=.
0x0040:  0001 0800 0604 0001 ce50 3d07 9ea2 0a82  .P=.
0x0050:  0201    0a80 012d...-
   25  00:22:47.214387 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 
10.1.130.2 tell 10.1.130.45, length 46
0x:     fa16 3e5a a862 0806 0001  >Z.b
0x0010:  0800 0604 0001 fa16 3e5a a862 0a01 822d  >Z.b...-
0x0020:     0a01 8202     
0x0030:       
   26  00:22:47.258344 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 
24) :: > ff02::1:ffa1:1fbb: [icmp6 sum ok] ICMP6, neighbor solicitation, length 
24, who has fe80::824:c2ff:fea1:1fbb
0x:   ffa1 1fbb 0a24 c2a1 1fbb 86dd 6000  33.$..`.
0x0010:   0018 3aff       :...
0x0020:     ff02      
0x0030:  0001 ffa1 1fbb 8700 724a   fe80  rJ..
0x0040:     0824 c2ff fea1 1fbb   ...$..
   27  00:22:47.282619 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 
10.1.130.2 tell 10.1.130.73, length 46
0x:     fa16 3ec4 a9be 0806 0001  >...
0x0010:  0800 0604 0001 fa16 3ec4 a9be 0a01 8249  >..I
0x0020:     0a01 8202     
0x0030:       

I don't understand why the IPs marked in red above are involved.
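In case it helps narrow this down, the SDN/OVS state on this node could be compared against a healthy one with something like the following (a sketch, assuming the default br0 bridge used by the OpenShift SDN and the openvswitch system service):

   systemctl status openvswitch
   ovs-vsctl show                           # br0 should have vxlan0 and tun0 ports
   ovs-ofctl -O OpenFlow13 dump-flows br0   # flows programmed by the SDN
   ip route                                 # cluster network routes via tun0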


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux


From: Aleksandar Lazic <al...@me2digital.eu>
Sent: Monday, October 23, 2017 2:34:13 AM
To: Yu Wei; users@lists.openshift.redhat.com; d...@lists.openshift.redhat.com
Subject: Re: Network issues with openvswitch

Hi Yu Wei.

On Sunday, 22 October 2017 at 19:13, the following was written:

> Hi,

> I executed the following command on a worker node of an OpenShift Origin 3.6 cluster.
>
> [root@host-10-1-130-32 ~]# traceroute docker-registry.default.svc
> traceroute to docker-registry.default.svc (172.30.22.28), 30 hops max, 60 
> byte packets
>  1  bogon (10.130.2.1)  3005.715 ms !H  3005.682 ms !H  3005.664 ms !H
>  It seems the content marked in red should be the hostname of the worker node.
>  How can I debug such an issue? Where should I start?

What's the hostname of the node?
I'm not sure what you're trying to debug or what problem you're trying to solve.

> Thanks,

> Jared, (韦煜)
>  Software developer
>  Interested in open source software, big data, Linux

--
Best Regards
Aleks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Network issues with openvswitch

2017-10-22 Thread Aleksandar Lazic
Hi Yu Wei.

On Sunday, 22 October 2017 at 19:13, the following was written:

> Hi,

> I executed the following command on a worker node of an OpenShift Origin 3.6 cluster.
>
> [root@host-10-1-130-32 ~]# traceroute docker-registry.default.svc
> traceroute to docker-registry.default.svc (172.30.22.28), 30 hops max, 60 
> byte packets
>  1  bogon (10.130.2.1)  3005.715 ms !H  3005.682 ms !H  3005.664 ms !H
>  It seems the content marked in red should be the hostname of the worker node.
>  How can I debug such an issue? Where should I start?

What's the hostname of the node?
I'm not sure what you're trying to debug or what problem you're trying to solve.

> Thanks,

> Jared, (韦煜)
>  Software developer
>  Interested in open source software, big data, Linux

-- 
Best Regards
Aleks


smime.p7s
Description: S/MIME Cryptographic Signature
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Network issues with openvswitch

2017-10-22 Thread Yu Wei
Hi,

I executed the following command on a worker node of an OpenShift Origin 3.6 cluster.

[root@host-10-1-130-32 ~]# traceroute docker-registry.default.svc
traceroute to docker-registry.default.svc (172.30.22.28), 30 hops max, 60 byte 
packets
 1  bogon (10.130.2.1)  3005.715 ms !H  3005.682 ms !H  3005.664 ms !H
It seems the content marked in red should be the hostname of the worker node.
How can I debug such an issue? Where should I start?
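As a starting point, it may help to confirm what that name resolves to and whether the service has endpoints, e.g. (assuming the default project and registry service name):

   dig +short docker-registry.default.svc.cluster.local
   oc get svc docker-registry -n default
   oc get endpoints docker-registry -n default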



Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Different behavior between installing openshift origin with openvswitch and flannel

2017-08-15 Thread Yu Wei
Hi guys,

I tried to get external traffic into an OpenShift Origin cluster using NodePort/externalIPs.

When I set up the OpenShift cluster with flannel, exposing a service with NodePort/externalIPs did not work.

When I switched to openvswitch, both worked.


Is this expected behavior? Or did I miss anything?
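For reference, the kind of exposure being compared here (names, ports and IPs are placeholders) is roughly:

   # NodePort: create a service of type NodePort for a deployment config
   oc expose dc/my-app --type=NodePort --name=my-app-nodeport --port=8080
   # externalIPs: add an external IP to an existing service
   oc patch svc/my-app -p '{"spec":{"externalIPs":["10.1.130.200"]}}'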


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users